Featured

The Conversation, on Data Sovereignty

Tim Berners-Lee’s plan to save the internet: give us back control of our data

Pieter Verdegem, University of Westminster

Releasing his creation for free 30 years ago, the inventor of the world wide web, Tim Berners-Lee, famously declared: “this is for everyone”. Today, his invention is used by billions – but it also hosts the authoritarian crackdowns of antidemocratic governments, and supports the infrastructure of the most wealthy and powerful companies on Earth.

Now, in an effort to return the internet to the golden age that existed before its current incarnation as Web 2.0 – characterised by invasive data harvesting by governments and corporations – Berners-Lee has devised a plan to save his invention.

This involves his brand of “data sovereignty” – which means giving users power over their data – and it means wresting back control of the personal information we surrendered to big tech many years ago.

Berners-Lee’s latest intervention comes as increasing numbers of people regard the online world as a landscape dominated by a few tech giants, thriving on a system of “surveillance capitalism” – which sees our personal data extracted and harvested by online giants before being used to target advertisements at us as we browse the web.

Authorities in the US and the EU have brought cases against big tech as part of what’s been dubbed the “techlash” against their growing power. But Berners-Lee’s answer to big tech’s overreach is far simpler: to give individuals the power to control their own data.

Net gains

The idea of data sovereignty has its roots in the claims of the world’s indigenous people, who have leveraged the concept to protect the intellectual property of their cultural heritage.

Applied to all web users, data sovereignty means giving individuals complete authority over their personal data. This includes the self-determination of which elements of our personal data we permit to be collected, and how we allow it to be analysed, stored, owned and used.

This would be in stark contrast to the current data practices that underpin big tech’s business models. The practice of “data extraction”, for instance, refers to personal information that is taken from people surfing the web without their meaningful consent or fair compensation. This depends on a model in which your data is not regarded as being your property.

Scholars argue that data extraction, combined with “network effects”, has led to tech monopolies. Network effects are seen when a platform becomes dominant, encouraging even more users to join and use it. This gives the dominant platform more opportunities to extract data, which it uses to produce better services. In turn, these better services attract even more users. This tends to amplify the power (and database size) of dominant firms at the expense of smaller ones.
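
To see why this feedback loop concentrates power, consider a toy “preferential attachment” simulation. This is our own illustration, not a model from the scholars cited: each new user picks a platform with probability proportional to its existing user base.

```python
import random

def simulate(platforms, new_users=20000, seed=1):
    """Toy model of network effects: each new user joins a platform with
    probability proportional to its current user base, so early leads
    compound into dominance (and into ever-larger data holdings)."""
    random.seed(seed)
    users = {p: 1 for p in platforms}  # every platform starts with one user
    for _ in range(new_users):
        weights = [users[p] for p in platforms]
        chosen = random.choices(platforms, weights=weights)[0]
        users[chosen] += 1
    return users

print(simulate(["A", "B", "C"]))
# Typically one platform ends up with most of the users -- and the data.
```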

This monopolisation tendency explains why the data extraction and ownership landscape is dominated by the so-called GAFAM – Google, Apple, Facebook, Amazon and Microsoft – in the US and the so-called BAT – Baidu, Alibaba and Tencent – in China. In addition to companies, governments also have monopoly power over their citizens’ data.

A smartphone screen showing the five 'GAFAM' branded apps
The world’s largest tech companies are increasingly regarded as monopolistic. Koshiro K/Shutterstock

“Data sovereignty” has been proposed as a promising means of reversing this monopolising tendency. It’s an idea that’s been kicked about on the fringes of internet debates for some time, but its backing by Tim Berners-Lee means it will garner much greater attention.

Building data vaults

Berners-Lee isn’t just backing data sovereignty: he’s building the tech to support it. He recently set up Inrupt, a company with the express goal of moving towards the kind of world wide web that its inventor had originally envisioned. Inrupt plans to do that through a new system called “pods” – personal online data stores.

Pods work like personal data safes. By storing their data in a pod, individuals retain ownership and control of their own data, rather than transferring this to digital platforms. Under this system, companies can request access to an individual’s pod, offering certain services in return – but they cannot extract or sell that data onwards.


Read more: Web 3.0: the decentralised web promises to make the internet free again


Inrupt has built these pods as part of its Solid project, which has followed the form of a Silicon Valley startup – though with the express objective of making pods accessible for all. Any website or app that a pod user visits must be authenticated by Solid before it is allowed to request that individual’s personal data. If pods are like safes, Solid acts like the bank in which the safe is stored.
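
For a sense of how this gatekeeping might look in practice, here is a schematic sketch at the HTTP level. The pod URL and token below are hypothetical placeholders, and this is a generic bearer-token flow for illustration, not Inrupt’s actual Solid API.

```python
import requests

# Hypothetical pod resource and access token -- placeholders only,
# not a real Solid deployment or Inrupt's client library.
POD_RESOURCE = "https://alice.example-pod.net/private/health-data"
ACCESS_TOKEN = "token-issued-after-user-consent"

# The app presents its credentials; the pod server (the "bank") checks
# whether the user has granted this app access to the resource (the "safe").
response = requests.get(
    POD_RESOURCE,
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
)

if response.status_code == 200:
    print("Access granted:", response.text)
else:
    print("Access denied or error:", response.status_code)
```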

One of the criticisms of the idea of pods is that it approaches data as a commodity. The concept of “data markets” has been mooted, for instance, as a system that enables companies to make micro-payments in exchange for our data. The fundamental flaw of such a system is that data is of little value when it is bought and sold on its own: the value of data only emerges from its aggregation and analysis, accrued via network effects.

Common good

An alternative to the commodification of data could lie in categorising data as “commons”. The idea of the commons was first popularised by the work of Nobel Prize-winning political economist Elinor Ostrom.

A commons approach to data would regard it as owned not by individuals or by companies, but as something that’s owned by society. Data as commons is an emerging idea which could unlock the value of data as a public good, keeping ownership in the hands of the community.

Tim Berners-Lee’s intervention in debates about the destiny of the internet is a welcome development. Governments and communities are coming to realise that big tech’s data-driven digital dominance is unhealthy for society. Pods represent one answer among many to the question of how we should respond.

Pieter Verdegem, Senior Lecturer, School of Media and Communication, University of Westminster

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Featured

The Conversation, on Machine Learning and Failure

Artificial intelligence must not be allowed to replace the imperfection of human empathy

Arshin Adib-Moghaddam, SOAS, University of London

At the heart of the development of AI appears to be a search for perfection. And it could be just as dangerous to humanity as the one that came from philosophical and pseudoscientific ideas of the 19th and early 20th centuries and led to the horrors of colonialism, world war and the Holocaust. Instead of a human ruling “master race”, we could end up with a machine one.

If this seems extreme, consider the anti-human perfectionism that is already central to the labour market. Here, AI technology is the next step in the premise of maximum productivity that replaced individual craftsmanship with the factory production line. These massive changes in productivity and the way we work created opportunities and threats that are now set to be compounded by a “fourth industrial revolution” in which AI further replaces human workers.

Several recent research papers predict that, within a decade, automation will replace half of the current jobs. So, at least in this transition to a new digitised economy, many people will lose their livelihoods. Even if we assume that this new industrial revolution will engender a new workforce that is able to navigate and command this data-dominated world, we will still have to face major socioeconomic problems. The disruptions will be immense and need to be scrutinised.

The ultimate aim of AI, even narrow AI which handles very specific tasks, is to outdo and perfect every human cognitive function. Eventually, machine-learning systems may well be programmed to be better than humans at everything.

What they may never develop, however, is the human touch – empathy, love, hate or any of the other self-conscious emotions that make us human. That’s unless we ascribe these sentiments to them, which is what some of us are already doing with our “Alexas” and “Siris”.

Productivity vs. human touch

The obsession with perfection and “hyper-efficiency” has had a profound impact on human relations, even human reproduction, as people live their lives in cloistered, virtual realities of their own making. For instance, several US and China-based companies have produced robotic dolls that are selling out fast as substitute partners.

One man in China even married his cyber-doll, while a woman in France “married” a “robo-man”, advertising her love story as a form of “robo-sexuality” and campaigning to legalise her marriage. “I’m really and totally happy,” she said. “Our relationship will get better and better as technology evolves.” There seems to be high demand for robot wives and husbands all over the world.

In the perfectly productive world, humans would be accounted as worthless, certainly in terms of productivity but also in terms of our feeble humanity. Unless we jettison this perfectionist attitude towards life that positions productivity and “material growth” above sustainability and individual happiness, AI research could be another chain in the history of self-defeating human inventions.

Already we are witnessing discrimination in algorithmic calculations. Recently, a popular South Korean chatbot named Lee Luda was taken offline. “She” was modelled after the persona of a 20-year-old female university student and was removed from Facebook Messenger after using hate speech towards LGBT people.

Meanwhile, automated weapons programmed to kill are carrying maxims such as “productivity” and “efficiency” into battle. As a result, war has become more sustainable. The proliferation of drone warfare is a very vivid example of these new forms of conflict: they create a virtual reality that is almost entirely removed from our grasp.

But it would be comical to depict AI as an inevitable Orwellian nightmare of an army of super-intelligent “Terminators” whose mission is to erase the human race. Such dystopian predictions are too crude to capture the nitty-gritty of artificial intelligence and its impact on our everyday existence.

Societies can benefit from AI if it is developed with sustainable economic development and human security in mind. The confluence of power and AI that pursues, for example, systems of control and surveillance should not be allowed to substitute for the promise of a humanised AI that puts machine-learning technology in the service of humans, and not the other way around.

To that end, the AI-human interfaces that are quickly opening up in prisons, healthcare, government, social security and border control, for example, must be regulated to favour ethics and human security over institutional efficiency. The social sciences and humanities have a lot to say about such issues.

One thing to be cheerful about is the likelihood that AI will never be a substitute for human philosophy and intellectuality. To be a philosopher, after all, requires empathy, an understanding of humanity, and our innate emotions and motives. If we can programme our machines to understand such ethical standards, then AI research has the capacity to improve our lives, which should be the ultimate aim of any technological advance.

But if AI research yields a new ideology centred around the notion of perfectionism and maximum productivity, then it will be a destructive force that will lead to more wars, more famines and more social and economic distress, especially for the poor. At this juncture of global history, this choice is still ours.

Arshin Adib-Moghaddam, Professor in Global Thought and Comparative Philosophies, SOAS, University of London

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Featured

The Conversation, on Robot Evolution

We’re teaching robots to evolve autonomously – so they can adapt to life alone on distant planets 

In the future, robots we’ve programmed may evolve and multiply on distant planets. SquareMotion/Shutterstock
Emma Hart, Edinburgh Napier University

It’s been suggested that an advance party of robots will be needed if humans are ever to settle on other planets. Sent ahead to create conditions favourable for humankind, these robots will need to be tough, adaptable and recyclable if they’re to survive within the inhospitable cosmic climates that await them.

Collaborating with roboticists and computer scientists, my team and I have been working on just such a set of robots. Produced via 3D printer – and assembled autonomously – the robots we’re creating continually evolve in order to rapidly optimise for the conditions they find themselves in.

Our work represents the latest progress towards the kind of autonomous robot ecosystems that could help build humanity’s future homes, far away from Earth and far away from human oversight.

Robots rising

Robots have come a long way since our first clumsy forays into artificial movement many decades ago. Today, companies such as Boston Dynamics produce ultra-efficient robots which load trucks, build pallets, and move boxes around factories, undertaking tasks you might think only humans could perform.

Despite these advances, designing robots to work in unknown or inhospitable environments – like exoplanets or deep ocean trenches – still poses a considerable challenge for scientists and engineers. Out in the cosmos, what shape and size should the ideal robot be? Should it crawl or walk? What tools will it need to manipulate its environment – and how will it survive extremes of pressure, temperature and chemical corrosion?

This may be an impossible brainteaser for humans, but nature has already solved the problem. Darwinian evolution has resulted in millions of species that are perfectly adapted to their environment. Although biological evolution takes millions of years, artificial evolution – modelling evolutionary processes inside a computer – can take place in hours, or even minutes. Computer scientists have been harnessing its power for decades, resulting in everything from gas nozzles to satellite antennas that are ideally suited to their function.
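
As a flavour of what “modelling evolutionary processes inside a computer” means, here is a minimal evolutionary loop. The fitness function and parameters are invented purely for illustration; a real design system would score a simulated nozzle or antenna instead.

```python
import random

def fitness(genome):
    """Illustrative fitness: closeness to an arbitrary optimum. A real
    system would evaluate a simulated physical design instead."""
    target = [0.2, 0.8, 0.5]
    return -sum((g - t) ** 2 for g, t in zip(genome, target))

def evolve(pop_size=50, generations=200, mutation=0.05):
    # Start from random candidate designs ("genomes" of three parameters).
    population = [[random.random() for _ in range(3)] for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: keep the fitter half of the population.
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop_size // 2]
        # Reproduction: copy survivors with small random mutations.
        children = [[g + random.gauss(0, mutation) for g in random.choice(survivors)]
                    for _ in range(pop_size - len(survivors))]
        population = survivors + children
    return max(population, key=fitness)

print(evolve())  # drifts towards the optimum [0.2, 0.8, 0.5] within seconds
```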


Read more: How we built a robot that can evolve – and why it won’t take over the world


But current artificial evolution of moving, physical objects still requires a great deal of human oversight, requiring a tight feedback loop between robot and human. If artificial evolution is to design a useful robot for exoplanetary exploration, we’ll need to remove the human from the loop. In essence, evolved robot designs must manufacture, assemble and test themselves autonomously – untethered from human oversight.

Unnatural selection

Any evolved robots will need to be capable of sensing their environment and have diverse means of moving – for example using wheels, jointed legs or even mixtures of the two. And to address the inevitable reality gap that occurs when transferring a design from software to hardware, it is also desirable for at least some evolution to take place in hardware – within an ecosystem of robots that evolve in real time and real space.

The Autonomous Robot Evolution (ARE) project addresses exactly this, bringing together scientists and engineers from four universities in an ambitious four-year project to develop this radical new technology.

Robots will be “born” through the use of 3D manufacturing. We use a new kind of hybrid hardware-software evolutionary architecture for design, which means that every physical robot has a digital clone. Physical robots are performance-tested in real-world environments, while their digital clones enter a software programme, where they undergo rapid simulated evolution. This hybrid system introduces a novel type of evolution: new generations can be produced from a union of the most successful traits from a virtual “mother” and a physical “father”.
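
That “union of traits” can be pictured as genome crossover. The sketch below is our own toy illustration with invented genomes; the ARE project’s actual robot encoding is far richer.

```python
import random

def crossover(mother, father):
    """Uniform crossover: each 'gene' (e.g. a limb type, a sensor choice,
    or a controller weight) is inherited from one parent at random."""
    return [random.choice(pair) for pair in zip(mother, father)]

# Hypothetical genomes: a virtual "mother" from simulation and a
# physical "father" tested in the creche.
virtual_mother = ["wheel", "short_leg", "camera", 0.42]
physical_father = ["track", "long_leg", "sonar", 0.77]

child = crossover(virtual_mother, physical_father)
print(child)  # e.g. ['wheel', 'long_leg', 'sonar', 0.77]
```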

As well as being rendered in our simulator, “child” robots produced via our hybrid evolution are also 3D-printed and introduced into a real-world, creche-like environment. The most successful individuals within this physical training centre make their “genetic code” available for reproduction and for the improvement of future generations, while less “fit” robots can simply be hoisted away and recycled into new ones as part of an ongoing evolutionary cycle.

Two years into the project, significant advances have been made. From a scientific perspective, we have designed new artificial evolutionary algorithms that have produced a diverse set of robots that drive or crawl, and can learn to navigate through complex mazes. These algorithms evolve both the body-plan and brain of the robot.

The brain contains a controller that determines how the robot moves, interpreting sensory information from the environment and translating this into motor controls. Once the robot is built, a learning algorithm quickly refines the child brain to account for any potential mismatch between its new body and its inherited brain.

From an engineering perspective, we have designed the “RoboFab” to fully automate manufacturing. This robotic arm attaches wires, sensors and other “organs” chosen by evolution to the robot’s 3D-printed chassis. We designed these components to facilitate swift assembly, giving the RoboFab access to a big toolbox of robot limbs and organs.

Waste disposal

The first major use case we plan to address is deploying this technology to design robots to undertake clean-up of legacy waste in a nuclear reactor – like that seen in the TV miniseries Chernobyl. Using humans for this task is both dangerous and expensive, and necessary robotic solutions remain to be developed.

Looking forward, the long-term vision is to develop the technology sufficiently to enable the evolution of entire autonomous robotic ecosystems that live and work for long periods in challenging and dynamic environments without the need for direct human oversight.

In this radical new paradigm, robots are conceived and born, rather than designed and manufactured. Such robots will fundamentally change the concept of machines, showcasing a new breed that can change their form and behaviour over time – just like us.

Emma Hart, Chair in Natural Computation, Edinburgh Napier University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Featured

What is a CVE?

CVE stands for Common Vulnerabilities and Exposures. In essence, a CVE represents a security risk that has been classified and can be remediated on its own.

At Hayachi Services we take the security of your organisation seriously, and so all of our Partners use CVEs to help keep you secure and classify risk on your systems.

For example, one of the modules that is offered with Panda Security’s Adaptive Defense is Patch Management. This tool uses CVE classifications on hundreds of applications to ensure that you can keep track of the Common Vulnerabilities your IT Estate has and remediate them with the click of a button.
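CVE identifiers are also machine-readable: NIST’s National Vulnerability Database offers a public REST API that patch-management tooling can build on. Below is a minimal sketch; the query parameters and response fields follow the NVD 2.0 API as we understand it, and this is not Panda Security’s or Red Hat’s implementation.

```python
import requests

# Query NIST's National Vulnerability Database (public REST API) for
# recent CVEs mentioning a product. Field names follow the NVD 2.0 schema.
NVD_URL = "https://services.nvd.nist.gov/rest/json/cves/2.0"

resp = requests.get(NVD_URL, params={"keywordSearch": "openssl", "resultsPerPage": 5})
resp.raise_for_status()

for item in resp.json().get("vulnerabilities", []):
    cve = item["cve"]
    # Print the CVE ID and the start of its English description.
    print(cve["id"], "-", cve["descriptions"][0]["value"][:80])
```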

Equally, Red Hat uses Smart Management to allow you to easily update and keep control of your Red Hat Enterprise Linux estate, at no extra cost to you – another bit of value when buying with Red Hat.

So then, why is it important to pay for world-class solutions to avoid these ‘Common’ Vulnerabilities?

A CVE can be a dangerous thing even if it is well known, in the same way that phishing attacks, while common, still do incredible damage to businesses and individuals.

Businesses will at times use legacy software, for compatibility and compliance reasons, as well as the simple fact that business-critical tools aren’t always available on newer systems.

For example, CERN is aware of the risk of legacy operating systems and takes steps to protect itself, and the example it uses is powerful: who would walk naked through a quarantine ward and expect not to be infected?

Being aware of the most obvious, common risks across your estate – having the reporting, patching and control to know what is within your risk appetite and what needs to be secured immediately – is incredibly powerful.

The National Cyber Security Centre publishes weekly threat reports for exactly this reason: new threats develop organically every day, and keeping a finger on the pulse is essential.

How can Hayachi Services help?

Simple question, and a simple answer. Chat with us! We can talk about your risk appetite and suggest actions to help protect your business.

Our Partners work with the smallest SMEs and Charities all the way up to the Fortune 500, Defence and Critical Infrastructure, so we’ll certainly have something to allay your concerns.

We are proudly vendor-neutral, and while our favourites are our Partners from across the industry, that isn’t to say we will only recommend what we personally sell.

Favouritism doesn’t keep our clients safe, so we don’t do it!

You can find pages on CVEs below:

Featured

Linux

Linux and Unix-like [collectively, *nix] Operating Systems are incredibly common. Almost every mobile phone, gaming system, internet streaming service, telephony service and enterprise-level application runs on Linux or depends on a *nix system for business-critical work.

*nix is necessarily a very broad term, though, so we’ve collected some of our favourite distributions for you to look at, to help give a flavour of what is on offer.

There are a multitude of Operating Systems, largely because *nix is based on open, collaborative and international work.

For example, FreeBSD and Red Hat are used by many businesses and government institutions globally, partly because they are licensed in a way which allows them to be open and customisable.

They both have a range of certifications to demonstrate knowledge of best practice, and Red Hat in particular has an award-winning support arm as well as an unparalleled ten-year life-cycle for its solutions.

Enjoy! And if you have questions or queries, send us an email and we will be happy to answer them.

The Conversation, on Ransomware

Ransomware gangs are running riot – paying them off doesn’t help

Jaruwan Jaiyangyuen/Shutterstock
Jan Lemnitzer, Copenhagen Business School

In the past five years, ransomware attacks have evolved from rare misfortunes into common and disruptive threats. Hijacking the IT systems of organisations and forcing them to pay a ransom in order to reclaim them, cybercriminals are freely extorting millions of pounds from companies – and they’re enjoying a remarkably low risk of arrest as they do it.

At the moment, there is no coordinated response to ransomware attacks, despite their ever-increasing prevalence and severity. Instead, states’ intelligence services respond to cybercriminals on an ad-hoc basis, while cyber-insurance firms recommend their clients simply pay off the criminal gangs that extort them.

Neither of these strategies is sustainable. Instead, organisations need to redouble their cybersecurity efforts to stymie the flow of cash from blackmailed businesses to cybercriminal gangs. Failure to act means that cybercriminals will continue investing their growing loot in ransomware technologies, keeping them one step ahead of our protective capabilities.

Daylight robbery

Ransomware is a lucrative form of cybercrime. It works by encrypting the data of the organisations that cybercriminals hack. The cybercriminals then offer organisations a choice: pay a ransom to receive a decryption code that will return your IT systems to you, or lose those systems forever. The latter choice means that firms would have to rebuild their IT systems (and sometimes databases) from scratch.

Unsurprisingly, many companies choose to quietly pay the ransom, opting never to report the breach to the authorities. This means successful prosecutions of ransomware gangs are exceedingly rare.

In 2019, the successful prosecution of a lone cybercriminal in Nigeria was such a novelty that the US Department of Justice issued a celebratory press release. Meanwhile, in February 2021, French and Ukrainian prosecutors managed to arrest some affiliates of Egregor, a gang that rents powerful ransomware out for other cybercriminals to use. It appears that those arrested merely rented the ransomware, rather than creating or distributing it. Cybersecurity experts have little faith in the criminal justice system to address ransomware crimes.

The frequency of those crimes is increasing rapidly. An EU report published in 2020 found that ransomware attacks increased by 365% in 2019 compared to the previous year. Since then, the situation is likely to have become much worse. The US security company PurpleSec has suggested that overall business losses caused by ransomware attacks might have exceeded US$20 billion (£14.3 billion) in 2020, up from US$11.5 billion (£8.2 billion) in 2019.

Even hospitals have suffered attacks. Given the potential impact of a sustained IT shutdown on human lives, healthcare providers are in fact actively targeted by ransomware gangs, who know hospitals will pay their ransoms quickly and reliably. In 2017, the NHS fell foul of such an attack, forcing staff to cancel thousands of hospital appointments, relocate vulnerable patients, and conduct their administrative duties with a pen and paper for several days.

An NHS building bearing the NHS logo
The 2017 ‘WannaCry’ ransomware attack hit some of the world’s largest organisations, including the NHS. Marbury/Shutterstock

Waging war?

With ransomware spiralling out of control, radical proposals are now on the table. Chris Krebs, the former head of the US Cybersecurity and Infrastructure Security Agency, recently advocated using the capabilities of US Cyber Command and the intelligence services against ransomware gangs.

The US government and Microsoft coordinated on such an attack in 2020, targeting the “Trickbot botnet” malware infrastructure – often used by Russian ransomware gangs – to prevent potential disruption of the US election. Australia is the only country to have publicly admitted to using offensive cyber capabilities to destroy foreign cybercriminals’ infrastructure as part of a criminal investigation.

Sustained operations of this kind could have an effect on cybercriminals’ ability to operate, especially if directed against the gangs’ servers and the infrastructure they need to turn their bitcoin into cash. But unleashing offensive cyberwarfare tools against criminals also creates a worrying precedent.

Normalising the use of the armed forces or intelligence units against individuals residing in other countries is a slippery slope, especially if the idea is adopted by some of the less scrupulous regimes on this planet. Such offensive cyber operations could disrupt another state’s carefully planned domestic intelligence operations. They could also negatively affect the innocent citizens of foreign states who unwittingly share web services with criminals.

Further, many cybercriminals in Russia and China enjoy de facto immunity from prosecution because they occasionally work for the intelligence services. Others are known to be state hackers moonlighting in cybercrime. Targeting these people might diminish the ransomware threat, but it might just as well provoke revenge from hackers with far more potent tools at their disposal than ordinary cybercriminals.

Paying up

So what is the alternative? Insurers, especially in the US, urge their clients to quickly and quietly pay the ransom to minimise the damage of disruption. Then insurers allow the company to claim back the ransom payment on their insurance, and raise their premiums for the following year. This payment is usually handled discreetly by a broker. In essence, the ransomware ecosystem functions like a protection racket, effectively supported by insurers who are set to pocket higher premiums as attacks continue.

Aside from the moral objections we might have to routinely paying money to criminals, this practice causes two important practical problems. First, it encourages complacency in cybersecurity. This complacency was best exemplified when a hacked company paid a ransom, but never bothered to investigate how the hackers had breached their system. The company was promptly ransomed again, by the same group using the very same breach, just two weeks later.

Second, some ransomware gangs invest their ill-gotten gains into the research and development of better cyber-tools. Many cybersecurity researchers are concerned about the increasing sophistication of the malware used by leading cybercrime groups such as REvil or Ryuk, which are both thought to be based in Russia. Giving these ransomware groups more money will only enhance their ability to disrupt more and larger companies in the future.

Banned aid

In January 2021, the former head of the UK’s National Cyber Security Centre called for cyber-insurance policies that cover ransom payments to be banned, arguing that such payments fund criminal organisations and only make ransomware attacks more common.

In response, the Association of British Insurers became the first European organisation to publicly defend the practice, arguing that paying the ransom was the cheapest option for companies. Naturally, that also makes it the cheapest option for insurers. Ransom coverage also helps brokers sell cyber-insurance policies.

In the end, neither calling in the cavalry nor paying off cybercriminals is a viable solution to the growing ransomware problem. Instead, a sustained effort must be made to build a more robust cybersecurity culture that stands a better chance of repelling ransomware gangs in the first place. This will demand commitment, not just from boards and CEOs, but from employees at every level of an organisation.

Improving cybersecurity in all companies won’t just protect them from extortion hackers: it’s the next frontier in our battle to harden our defences against state hackers, too. The sooner we start shouldering this pressing responsibility, the better.

Jan Lemnitzer, Lecturer, Department of Digitalization, Copenhagen Business School

This article is republished from The Conversation under a Creative Commons license. Read the original article.

The Conversation, on Robot Personality

Will robots make good friends? Scientists are already starting to find out

Gennady Danilkin/Shutterstock
Tony Prescott, University of Sheffield

In the 2012 film “Robot and Frank”, the protagonist, a retired cat burglar named Frank, is suffering the early symptoms of dementia. Concerned and guilty, his son buys him a “home robot” that can talk, do household chores like cooking and cleaning, and remind Frank to take his medicine. It’s a robot the likes of which we’re getting closer to building in the real world.

The film follows Frank, who is initially appalled by the idea of living with a robot, as he gradually begins to see the robot as both functionally useful and socially companionable. The film ends with a clear bond between man and machine, such that Frank is protective of the robot when the pair of them run into trouble.

This is, of course, a fictional story, but it challenges us to explore different kinds of human-to-robot bonds. My recent research on human-robot relationships examines this topic in detail, looking beyond sex robots and robot love affairs to examine that most profound and meaningful of relationships: friendship.

My colleague and I identified some potential risks – like the abandonment of human friends for robotic ones – but we also found several scenarios where robotic companionship can constructively augment people’s lives, leading to friendships that are directly comparable to human-to-human relationships.

Philosophy of friendship

The robotics philosopher John Danaher sets a very high bar for what friendship means. His starting point is the “true” friendship first described by the Greek philosopher Aristotle, which saw an ideal friendship as premised on mutual good will, admiration and shared values. In these terms, friendship is about a partnership of equals.

Building a robot that can satisfy Aristotle’s criteria is a substantial technical challenge and is some considerable way off – as Danaher himself admits. Robots that may seem to be getting close, such as Hanson Robotics’ Sophia, base their behaviour on a library of pre-prepared responses: a humanoid chatbot, rather than a conversational equal. Anyone who’s had a testing back-and-forth with Alexa or Siri will know AI still has some way to go in this regard.

The humanoid robot Sophia, developed by Hong Kong-based Hanson Robotics.

Aristotle also talked about other forms of “imperfect” friendship – such as “utilitarian” and “pleasure” friendships – which are considered inferior to true friendship because they don’t require symmetrical bonding and are often to one party’s unequal benefit. This form of friendship sets a relatively low bar which some robots – like “sexbots” and robotic pets – clearly already meet.

Artificial amigos

For some, relating to robots is just a natural extension of relating to other things in our world – like people, pets, and possessions. Psychologists have even observed how people respond naturally and socially towards media artefacts like computers and televisions. Humanoid robots, you’d have thought, are more personable than your home PC.

However, the field of “robot ethics” is far from unanimous on whether we can – or should – develop any form of friendship with robots. For an influential group of UK researchers who charted a set of “ethical principles of robotics”, human-robot “companionship” is an oxymoron, and to market robots as having social capabilities is dishonest and should be treated with caution – if not alarm. For these researchers, wasting emotional energy on entities that can only simulate emotions will always be less rewarding than forming human-to-human bonds.

But people are already developing bonds with basic robots – like vacuum-cleaning and lawn-trimming machines that can be bought for less than the price of a dishwasher. A surprisingly large number of people give these robots pet names – something they don’t do with their dishwashers. Some even take their cleaning robots on holiday.

Other evidence of emotional bonds with robots includes the Shinto blessing ceremony for Sony Aibo robot dogs that were dismantled for spare parts, and the squad of US troops who fired a 21-gun salute, and awarded medals, to a bomb-disposal robot named “Boomer” after it was destroyed in action.

A robot on wheels is attended to by a soldier in combat gear
A military bomb disposal robot similar to ‘Boomer’. US Marine Corps photo by Lance Cpl. Bobby J. Segovia/Wikimedia Commons

These stories, and the psychological evidence we have so far, make clear that we can extend emotional connections to things that are very different to us, even when we know they are manufactured and pre-programmed. But do those connections constitute a friendship comparable to that shared between humans?

True friendship?

A colleague and I recently reviewed the extensive literature on human-to-human relationships to try to understand how, and if, the concepts we found could apply to bonds we might form with robots. We found evidence that many coveted human-to-human friendships do not in fact live up to Aristotle’s ideal.

We noted a wide range of human-to-human relationships, from relatives and lovers to parents, carers, service providers and the intense (but unfortunately one-way) relationships we maintain with our celebrity heroes. Few of these relationships could be described as completely equal and, crucially, they are all destined to evolve over time.

All this means that expecting robots to form Aristotelian bonds with us is to set a standard even human relationships fail to live up to. We also observed forms of social connectedness that are rewarding and satisfying and yet are far from the ideal friendship outlined by the Greek philosopher.


Read more: What makes a good friend?


We know that social interaction is rewarding in its own right, and something that, as social mammals, humans have a strong need for. It seems probable that relationships with robots could help to address the deep-seated urge we all feel for social connection – like providing physical comfort, emotional support, and enjoyable social exchanges – currently provided by other humans.

Our paper also discussed some potential risks. These arise particularly in settings where interaction with a robot could come to replace interaction with people, or where people are denied a choice as to whether they interact with a person or a robot – in a care setting, for instance.

These are important concerns, but they’re possibilities and not inevitabilities. In the literature we reviewed we actually found evidence of the opposite effect: robots acting to scaffold social interactions with others, acting as ice-breakers in groups, and helping people to improve their social skills or to boost their self-esteem.

It appears likely that, as time progresses, many of us will simply follow Frank’s path towards acceptance: scoffing at first, before settling into the idea that robots can make surprisingly good companions. Our research suggests that’s already happening – though perhaps not in a way in which Aristotle would have approved.

Tony Prescott, Professor of Cognitive Neuroscience and Director of the Sheffield Robotics Institute, University of Sheffield

This article is republished from The Conversation under a Creative Commons license. Read the original article.

The Conversation, on Toddlers and Tablets

Touchscreens may make toddlers more distractible – new three-year study

riggleton/Shutterstock
Ana Maria Portugal, Karolinska Institutet; Rachael Bedford, University of Bath, and Tim J. Smith, Birkbeck, University of London

Working from home as a parent, a touchscreen device can be a marvellous tool. Pass one to your child, and they’ll be quietly occupied for your Zoom meeting, or for the crunch time as you approach an important deadline. Yet touchscreens can also feel like a tradeoff for parents, who have long feared that screen time may be harmful for their children’s development.

Our three-year study following children from the age of one to three-and-a-half measured the link between touchscreen use and toddlers’ attention. For the first time, we were able to show that toddlers who used touchscreens were less able to avoid distractions when completing a task on a screen than toddlers with no or low daily touchscreen use. On the other hand, we found that toddlers with high daily touchscreen use were better able to spot flashy, attention-grabbing objects when they first appear on a screen.

These findings are important given the rising levels of screen time observed during COVID-19 national lockdowns. In the UK, for instance, three in four parents have reported that their children have spent more time watching TV or playing with a tablet during lockdowns. Individual adult screen time also went up by an hour across the board during the UK’s spring lockdown.

Even before the pandemic, mobile media was already an integral part of family life. Some 63% of children aged three to four used a tablet at home in 2019 – more than double the percentage identified by similar research in 2013. In our previous studies, we recorded daily touchscreen-device usage by children as young as six months of age.

Toddlers on tablets

Mobile touchscreen media, such as smartphones and tablets, are a common form of entertainment for infants and toddlers. But there has been growing concern that touchscreen use in toddlers may negatively affect the development of their attention.

A young girl uses a touchscreen phone on a kitchen table
Young children are using touchscreen technology more than ever during lockdowns. Elena Stepanova/Shutterstock

The first few years of life are critical for children to learn how to control their attention, selecting relevant information from the environment while ignoring distractions. These early attention skills are known to promote later social and academic success – but until recently there was no empirical scientific evidence to suggest a negative impact of touchscreen use on attention control.

In 2015, we started the TABLET Project at Birkbeck’s Centre for Brain and Cognitive Development to see whether any such association might exist. We followed 53 one-year-old infants who had different levels of touchscreen usage. We observed them through toddlerhood (18 months) and up to pre-school age (three-and-a-half years).

At each age, parents reported online how long their child spent using a touchscreen device (tablet, smartphone or touchscreen laptop) each day. Families also visited our Babylab to complete a set of experimental assessments with the research team. This included some computer tasks which used an eye-tracker, enabling researchers to quantify very precisely what babies looked at on a screen.

By measuring how fast and how often toddlers looked at objects that appeared in different screen locations, we could understand how children controlled their attention. We were particularly interested in their “saliency-driven” attention (an automatic form of attention which allows us to react quickly to moving, bright or colourful objects) and their “goal-driven” attention (a voluntary form of attention that helps us focus on task-relevant things).

An example of what appears on screen when we measure toddlers’ attention. Illustrated by Ana Maria Portugal, researcher in the TABLET team. Author provided
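
As a concrete illustration of what “how fast toddlers looked” means as a measurement, here is a toy calculation of reaction time from gaze samples. Both the data and the function are invented for this sketch; this is not the TABLET project’s analysis pipeline.

```python
# Toy eye-tracker samples: (timestamp in ms, whether gaze is on the
# newly appeared target). Invented data, purely illustrative.
gaze_samples = [
    (0, False), (17, False), (33, False), (50, False),
    (67, False), (83, True), (100, True), (117, True),
]

def reaction_time(samples, stimulus_onset_ms=0):
    """Return the latency from stimulus onset to the first on-target sample."""
    for timestamp, on_target in samples:
        if on_target and timestamp >= stimulus_onset_ms:
            return timestamp - stimulus_onset_ms
    return None  # the target was never fixated

print(reaction_time(gaze_samples), "ms")  # 83 ms in this toy example
```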

After three years of data collection, we found that infants and toddlers with high touchscreen use had faster saliency-driven attention. This means they were quicker to spot new stimuli on the screen, like a cartoon lion which suddenly appears. This effect replicated and confirmed our findings in a previous study in 2020.

We then presented tasks that directly required toddlers to suppress their saliency-driven attention and instead use voluntary attention. We found that the children with higher touchscreen use were both slower to deliberately control their attention, and less able to ignore distracting objects when trying to focus their attention on a different target.

Grabbing attention

Our research is not conclusive and does not demonstrate a causal role of touchscreens. It could also be that more distractible children happen to be more attracted by and absorbed in the attention-grabbing features of interactive screens.

And, while touchscreens share similarities with TV and video gaming, our new research finds different associations with attention than previously reported for these other media platforms. This suggests that touchscreens might produce different effects on the developing brain than other screens.

Next, we want to conduct further research which might help us draw conclusions about the positives and negatives of touchscreens for toddlers. For instance, while being faster at spotting a new stimulus on a screen may at first appear to be a negative finding, it’s easy to imagine vocations and situations in which this skill might be incredibly useful – such as air traffic control, or airport security screening.

In our increasingly complex audiovisual media environment, it might actually be useful to prime young children on the digital technologies they’ll use to learn, work, and play. But our findings also present a possible downside: that toddlers with high touchscreen use may find it harder to avoid distraction in busy settings like nursery classrooms.

Ana Maria Portugal, Postdoctoral Research Fellow, Center of Neurodevelopmental Disorders, Karolinska Institutet; Rachael Bedford, Associate Professor, Department of Psychology, University of Bath, and Tim J. Smith, Professor, Department of Psychological Sciences, Birkbeck, University of London

This article is republished from The Conversation under a Creative Commons license. Read the original article.

The Conversation, on Open Source Hardware

Making hardware ‘open source’ can help us fight future pandemics – here’s how we get there

elenabsl/Shutterstock
Richard Bowman, University of Bath and Julian Stirling, University of Bath

In factories and industrial estates across the world, exceptional efforts are being made to ensure hospitals have ventilators, and logistics firms have freezers and refrigerators. Behind the scenes, this manufacturing drive has been taking place on an epic, unprecedented scale. In some places, it’s also been horrendously inefficient.

Some of that inefficiency is only to be expected. Manufacturing responsively at such short notice was always going to be messy. But many of the hardware hold-ups we’ve witnessed – from production line bottlenecks to parts shortages – could be avoided in the future by applying an “open source” ethos to the world’s production of hardware.


Read more: Five ways collective intelligence can help beat coronavirus in developing countries


Open source design is a form of collective intelligence, where experts work together to create a design that anyone has the legal right to manufacture. The software industry has long shown that “open” collaboration is not only possible, but advantageous. Open source software is ubiquitous, and the servers that power the internet itself are largely run on open technology, collaboratively designed by competing companies.

Early in the pandemic, and in recognition of the global emergency that was unfolding, dozens of the world’s largest companies did actually sign up to the “Open COVID Pledge”, promising to share their intellectual property to help fight the virus. On a smaller scale, more than 100 project teams set out to create and share “open” ventilator designs that could be produced locally, helping address the pressing need for ventilators around the world.

Unfortunately, neither of these initiatives succeeded in producing ventilators at the rate required by stretched hospitals in the early weeks of the pandemic. After examining existing attempts to share the intellectual property of machines, our recent paper concludes that projects must be open from the start in order to make a success of open hardware. Everything from the first doodle on a napkin to the detailed calculations that verify safety must be available if other experts and manufacturers are going to participate in the design.

A breathing mask on top of a medical ventilator
Medical ventilators have been in particular demand since the beginning of the pandemic. Dan Race/Shutterstock

The road to open hardware

Producing hardware through open collaboration may be daunting. As opposed to the entirely virtual collaboration required for software development, hardware development needs physical parts – raw materials and machinery. It needs testing facilities and engineers to perform stress tests and safety checks.

There are promising signs that these challenges can be met. The RepRap 3D printer project has brought low-cost 3D printing to a wider audience, making affordable prototyping possible at a distance. Meanwhile, the CERN White Rabbit project has shown that even the complex electronics that control the Large Hadron Collider can be developed as open source hardware. But to be efficient, we need better workflows for collaboration: systems to help organise the distribution of tasks and responsibilities on collaborative hardware projects.

The journey from prototype to production is much more difficult, and less exciting, than the technical challenge of prototyping a device. Manufacturers must comply with international standards to ensure quality and manage risk related to their products. This is especially true of medical hardware, upon which lives depend. A key challenge for open hardware will be to achieve this certification in the same way that private companies do today.

Under current regulations, no matter how impressive and safe, ventilators constructed in volunteer maker spaces cannot be certified for medical use. But for equipment which is less strictly regulated, like face shields, open hardware is currently being leveraged successfully.

Achieving similar successes with high-tech medical devices will require organisations that are built to manufacture from open designs – dynamic factories, for instance, which will be responsive to global emergencies. It takes time to establish these organisations. But we can’t afford to wait for the next emergency: we should begin creating them today, in preparation for the next pandemic.

Of course, finding sustainable business models for open hardware is a challenge: can a system be created which shares intellectual property for free while helping designers and manufacturers profit? In one sense, open hardware has an advantage here: people are used to buying physical products, whereas online consumers are accustomed to using software for free.

Nonetheless, it’s likely that setting up an open hardware manufacturing ecosystem will need public funding, or investor funding buying into non-traditional business models. This would follow the trajectory of the internet, which began life funded by public institutions and is now home to the world’s biggest private enterprises.

Closer inspection

A home-made microscope, constructed from plastic and wires
The open source microscope. Author provided

We’ve experimented with our own open hardware project to help us understand how the future of collaborative hardware might look. Our OpenFlexure microscope is designed to be manufactured at low cost in sub-Saharan Africa, to be used for malaria diagnoses. We’ve probably spent more time designing the processes that help us share our knowledge effectively than designing the microscope itself.

In the short term, this slows our progress. In the long term, we expect that manufacturers anywhere in the world will be able to understand our design and adapt it to their local context. As these processes become further standardised, sharing designs for production will become increasingly simple. The final and most ambitious phase of our project will be working with manufacturers to produce microscopes certified for medical use – a huge step towards open source medical hardware we’d need to better fight a future pandemic.

Humanity already knew how to make ventilators decades before this pandemic hit. What was lacking was the access to this knowledge, the skills to work together on adapting a design, and the logistics to rapidly scale the manufacturing of complex machinery. It will take years to address these issues. Starting that process today will help us tackle global emergencies more dynamically and efficiently in the future.

Richard Bowman, Royal Society University Research Fellow and Proleptic Reader, Department of Physics, University of Bath and Julian Stirling, Postdoctoral Researcher, Department of Physics, University of Bath

This article is republished from The Conversation under a Creative Commons license. Read the original article.

The Conversation, on Robot Communication

An army of sewer robots could keep our pipes clean, but they’ll need to learn to communicate

Pipebots will inspect the walls for cracks. Human Studio, Author provided
Viktor Doychinov, University of Leeds

Hidden from sight, under the UK’s roads, buildings and parks, lies about one million kilometres of pipes. Maintaining and repairing these pipes requires about 1.5 million road excavations a year, causing full or partial road closures. These works are noisy, dirty and cause a lot of inconvenience. They also cost around £5.5 billion a year.

It doesn’t have to be this way. Research teams like mine are working on a way of reducing the time and money that goes into maintaining pipes, by developing infrastructure robots.

In the future, these robots will work around and for us to repair our roads, inspect our water and sewer pipes, maintain our lamp posts, survey our bridges and look after other important infrastructure. They will be able to go to places difficult or dangerous for humans, such as sewer pipes full of noxious gases.

We are developing small robots to work in underground pipe networks, in both clean water and sewers. They will inspect them for leakages and blockages, map where the pipes are and monitor their condition for any signs of trouble. But what happens when the robots need to go to places where our existing wireless communications cannot reach them? If we cannot communicate with them, we cannot stay in control.

Two pipe bots in a sewer.
Pipebots will swim around the network of sewage and clean water pipes. Human Studio, Author provided

The pipe bots

The underground pipe networks are complex, varied, and difficult to work in. There are many different pipe sizes, made of different materials, placed at many different depths. They are connected in lots of different configurations and filled to different extents with different contents.

Pipebots is a large UK government-funded project working on robots that will help maintain the pipe system. These robots will come in different sizes, depending on the pipes they are in. For example, the smallest ones will have to fit in a cube with a side of 2.5cm (1 inch), while the largest ones will be as long as 50cm.

They will operate autonomously, thanks to the array of sensors on board. The robots will use computer vision and a combination of an accelerometer, a gyroscope and a magnetic field sensor to detect where they are. They will have ultrasound and infrared distance sensors to help them navigate the pipes. Finally, they will also have acoustic and ultrasound sensors to detect cracks in water pipes, blockages in sewer pipes, and to measure the overall condition of these pipes.
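
As one hedged example of how readings from an accelerometer and a gyroscope can be combined into an orientation estimate, here is a one-axis complementary filter, a standard sensor-fusion technique. The numbers are invented for illustration, and this is not the Pipebots firmware.

```python
import math

def complementary_filter(gyro_rates, accel_angles, dt=0.01, alpha=0.98):
    """Fuse gyroscope rate (deg/s) with an accelerometer-derived angle (deg).
    The gyro is smooth but drifts; the accelerometer is noisy but drift-free.
    One-axis toy version of a common orientation-tracking technique."""
    angle = accel_angles[0]
    history = []
    for rate, accel_angle in zip(gyro_rates, accel_angles):
        # Integrate the gyro, then nudge the estimate towards the accelerometer.
        angle = alpha * (angle + rate * dt) + (1 - alpha) * accel_angle
        history.append(angle)
    return history

# Invented readings: a robot slowly pitching up while its sensors jitter.
gyro = [5.0] * 100                                     # deg/s
accel = [i * 0.05 + math.sin(i) for i in range(100)]   # noisy angle estimate
print(complementary_filter(gyro, accel)[-1])           # final fused angle
```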


Read more: Robots were dreamt up 100 years ago – why haven’t our fears about them changed since?


The information gathered this way will be sent to the water companies responsible for the pipes. In the first instance, the robots will just monitor the pipes and call in a separate repair team when necessary.

One of the biggest challenges will be making them communicate with each other through the pipes. This requires a wireless communications network that can function in a variety of conditions since the pipes might be empty, full of water or sewage, or somewhere in between. The three main options we are exploring are radio waves, sound waves and light.

Diagram of how the robots will monitor pipes.
The robots will have acoustic and ultrasound sensors to detect cracks in water pipes. Human Studio, Author provided

Radio waves

Wireless communication technology using radio waves is everywhere these days – wifi, Bluetooth and, of course, mobile phone networks such as 4G. Each of these works at a different frequency and has a different bandwidth.

Unfortunately, none of these signals can go through soil and earth – we are all too familiar with how a mobile phone connection drops when a train goes through a tunnel. However, if we had a base station already within the tunnel, it would allow radio waves to travel along its length.

Thankfully, sewer pipes look a lot like tunnels to radio waves – at least when they’re relatively empty. We are adapting technology similar to wifi and Bluetooth to make sure the sewer-inspecting Pipebots always have a connection to the control centre.

Water blocks radio waves even more than soil and earth. In fact, at high enough frequencies, it acts as a mirror. So to stay in control of the robots in our water pipes, we have to use both sound and light.

Computer generated image of the robots inside a pipe.
They will have ultrasound and infrared distance sensors to help them navigate the pipes. Human Studio, Author provided

Sound and light

Wireless communication methods using sound and light are not widely commercially available yet. But they are making waves in the research community.

One method, visible light communication (VLC), uses transmitters and receivers, such as LEDs, that are small and energy efficient, and also provide dazzling data rates, on the order of tens of gigabits per second. However, because light travels in a straight line, VLC can only be used when robots close to each other need to communicate. One potential solution is to have many robots in the same pipe, forming a daisy chain along which information can travel around bends in pipes.

Sound, on the other hand, can travel for miles along pipes and will travel around corners with ease. The downside is speakers and microphones can be power hungry, and sound does not offer particularly high data rates. Instead of the several billion bits per second that can be sent using 5G and post-5G technology, sound waves can only carry a few bits per second. While this will be enough to know if a particular robot is still functioning, it will not be enough to relay a lot of useful information about the pipes.
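
The gulf between the two links is easy to quantify with back-of-the-envelope arithmetic, using the rough orders of magnitude given above (the file size is our own illustrative choice):

```python
# Back-of-the-envelope comparison of the two links described above.
# Rates are the article's rough orders of magnitude, not measurements.
image_bits = 100 * 1024 * 8     # a small 100 KB inspection photo, in bits

vlc_rate = 10e9                 # ~10 Gbit/s visible light link
acoustic_rate = 5               # a few bits per second over sound

print(f"VLC:      {image_bits / vlc_rate:.6f} seconds")          # microseconds
print(f"Acoustic: {image_bits / acoustic_rate / 86400:.1f} days") # ~1.9 days
```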

The first pipe robot in action.

It won’t be a case of picking either radio, sound or light waves. The wireless communication network we are developing for our subterranean little helpers will use a combination of these methods. This will ensure the robots do what they are supposed to do, that we stay in control of them, and they deliver on their promise.
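
Purely as an illustration of what such mode-switching might look like, here is a hedged sketch that follows the trade-offs described above; all the conditions and thresholds are invented:

```python
def pick_link(pipe_flooded: bool, neighbour_in_sight: bool, payload_bits: int) -> str:
    """Choose a communication mode from the trade-offs above:
    radio in empty pipes, light for short-range bulk data,
    sound as the slow long-range fallback."""
    if not pipe_flooded:
        return "radio"               # empty pipe behaves like a tunnel
    if neighbour_in_sight and payload_bits > 1_000:
        return "light"               # VLC: line of sight, high rate
    return "sound"                   # always gets through, a few bits/s

print(pick_link(pipe_flooded=True, neighbour_in_sight=True, payload_bits=8_000))  # light
print(pick_link(pipe_flooded=True, neighbour_in_sight=False, payload_bits=8))     # sound
```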

We hope to have a full Pipebots system demonstrated in a realistic network by 2024. Once that is successful, we’ll need to go through a thorough certification and compliance process to ensure that Pipebots will be safe to deploy in live water and sewer networks.The Conversation

Viktor Doychinov, Research Fellow in the School of Electronic and Electrical Engineering, University of Leeds

This article is republished from The Conversation under a Creative Commons license. Read the original article.

The Conversation, on Artificial Intelligence and Space Exploration

Five ways artificial intelligence can help space exploration

Various types of astronaut assistant are in development. Michal Bednarek/shutterstock.com
Deep Bandivadekar, University of Strathclyde and Audrey Berquand, University of Strathclyde

Artificial intelligence has been making waves in recent years, enabling us to solve problems faster than traditional computing could ever allow. Recently, for example, Google’s artificial intelligence subsidiary DeepMind developed AlphaFold2, a program which solved the protein-folding problem – a challenge that had baffled scientists for 50 years.

Advances in AI have allowed us to make progress in all kinds of disciplines – and these are not limited to applications on this planet. From designing missions to clearing Earth’s orbit of junk, here are a few ways artificial intelligence can help us venture further in space.

Astronaut assistants

The AI system CIMON.
CIMON will assist astronauts on the International Space Station. NASA/Kim Shiflett, CC BY

Do you remember Tars and Case, the assistant robots from the film Interstellar? While these robots don’t exist yet for real space missions, researchers are working towards something similar, creating intelligent assistants to help astronauts. These AI-based assistants, even though they may not look as fancy as those in the movies, could be incredibly useful to space exploration.

A recently developed virtual assistant could detect dangers during lengthy space missions, such as changes in the spacecraft atmosphere – for example, increased carbon dioxide – or a sensor malfunction that could prove harmful. It would then alert the crew with suggestions for inspection.
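
Purely to illustrate the shape of such monitoring logic, here is a toy rule-based check; the threshold and messages are invented for the example, not taken from any real assistant:

```python
CO2_ALERT_MMHG = 4.0   # invented alert threshold (partial pressure)

def check_atmosphere(co2_mmhg: float, sensor_ok: bool) -> str:
    """Toy rule: flag a faulty sensor first, then rising CO2."""
    if not sensor_ok:
        return "ALERT: CO2 sensor fault - suggest inspecting the sensor"
    if co2_mmhg > CO2_ALERT_MMHG:
        return "ALERT: CO2 rising - suggest inspecting the scrubber"
    return "nominal"

print(check_atmosphere(co2_mmhg=4.6, sensor_ok=True))
```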

An AI assistant called Cimon was flown to the International Space Station (ISS) in December 2019, where it is being tested for three years. Eventually, Cimon will be used to reduce astronauts’ stress by performing tasks they ask it to do. NASA is also developing a companion for astronauts aboard the ISS, called Robonaut, which will work alongside the astronauts or take on tasks that are too risky for them.


Read more: Astronauts are experts in isolation, here’s what they can teach us 


Mission design and planning

Planning a mission to Mars is not an easy task, but artificial intelligence can make it easier. New space missions traditionally rely on knowledge gathered by previous studies. However, this information can often be limited or not fully accessible.

This means the flow of technical information is constrained by who can access it and share it with other mission design engineers. But what if all the information from practically all previous space missions were available to anyone with authorisation in just a few clicks? One day there may be a smarter system – similar to Wikipedia, but with artificial intelligence that can answer complex queries with reliable and relevant information – to help with the early design and planning of new space missions.

Researchers are working on the idea of a design engineering assistant to reduce the time required for initial mission design, which otherwise takes many hours of human work. “Daphne” is another example of an intelligent assistant for designing Earth observation satellite systems. Daphne is used by systems engineers in satellite design teams. It makes their job easier by providing access to relevant information, including feedback as well as answers to specific queries.
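
To give a flavour of the retrieval step such assistants depend on, here is a minimal sketch that ranks past-mission notes against an engineer’s query using TF-IDF; the snippets and query are made up and have no connection to the real Daphne system:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

documents = [   # stand-in snippets; a real system would index mission archives
    "Rosetta generated power from large solar arrays for operations in deep space.",
    "Mars entry, descent and landing relied on a supersonic parachute.",
    "Lunar Reconnaissance Orbiter mapped the Moon from a 50 km polar orbit.",
]
query = ["power generation for deep space missions"]

vectoriser = TfidfVectorizer()
doc_vectors = vectoriser.fit_transform(documents)
scores = cosine_similarity(vectoriser.transform(query), doc_vectors)[0]

best = max(range(len(documents)), key=scores.__getitem__)
print(f"best match (score {scores[best]:.2f}): {documents[best]}")
```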

Satellite data processing

Earth observation satellites generate tremendous amounts of data. This is received by ground stations in chunks over a long period of time, and has to be pieced together before it can be analysed. While there have been some crowdsourcing projects to do basic satellite imagery analysis on a very small scale, artificial intelligence can come to our rescue for detailed satellite data analysis.

Given the sheer volume of data received, AI has been very effective at processing it intelligently. It’s been used to estimate heat storage in urban areas and to combine meteorological data with satellite imagery for wind speed estimation. AI has also helped with solar radiation estimation using geostationary satellite data, among many other applications.

AI for data processing can also be used for the satellites themselves. In recent research, scientists tested various AI techniques for a remote satellite health monitoring system. This is capable of analysing data received from satellites to detect any problems, predict satellite health performance and present a visualisation for informed decision making.
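
As a hedged illustration of one common family of techniques for this kind of monitoring, the sketch below flags anomalous telemetry frames with an isolation forest; the telemetry channels and values are simulated, not drawn from any real satellite:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)
# Simulated telemetry: [battery voltage (V), panel temperature (deg C)]
nominal = rng.normal(loc=[28.0, 15.0], scale=[0.3, 2.0], size=(500, 2))

model = IsolationForest(contamination=0.01, random_state=0).fit(nominal)

new_frames = np.array([
    [28.1, 14.0],    # healthy-looking frame
    [24.5, 55.0],    # sagging battery and overheating panel
])
for frame, label in zip(new_frames, model.predict(new_frames)):
    status = "nominal" if label == 1 else "ANOMALY"
    print(f"telemetry {frame} -> {status}")
```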

A computer-generated image of space debris around Earth.
AI has also been harnessed to address the problem of space junk. NASA Orbital Debris Program Office, CC BY

Space debris

One of the biggest space challenges of the 21st century is how to tackle space debris. According to ESA, there are nearly 34,000 objects bigger than 10cm which pose serious threats to existing space infrastructure. There are some innovative approaches to deal with the menace, such as designing satellites deployed in the low Earth orbit region to re-enter Earth’s atmosphere and disintegrate completely in a controlled way.

Another approach is to avoid any possible collisions in space, preventing the creation of any debris. In a recent study, researchers developed a method to design collision avoidance manoeuvres using machine-learning (ML) techniques.

Another novel approach is to use the enormous computing power available on Earth to train ML models, transmit those models to spacecraft already in orbit or on their way, and use them on board for various decisions. One recently proposed way to ensure the safety of space flights is to carry already-trained networks on board the spacecraft. This allows more flexibility in satellite design while keeping the danger of in-orbit collision to a minimum.
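
A minimal sketch of that split: the weights below stand in for a model trained on the ground and uplinked, while the spacecraft only runs a cheap forward pass. The feature names and weights are invented for illustration:

```python
import numpy as np

# "Uplinked" parameters of a tiny network: 4 inputs -> 8 hidden -> 1 output.
rng = np.random.default_rng(7)
W1, b1 = rng.normal(size=(4, 8)), np.zeros(8)
W2, b2 = rng.normal(size=8), 0.0

def manoeuvre_risk(features: np.ndarray) -> float:
    """Forward pass only - no training happens on board."""
    hidden = np.tanh(features @ W1 + b1)
    logit = float(hidden @ W2 + b2)
    return 1.0 / (1.0 + np.exp(-logit))   # squash to a 0-1 risk score

# Features: relative range (km), range rate (km/s), miss distance (km), object size (m)
conjunction = np.array([5.0, -0.1, 0.4, 0.2])
print(f"assessed collision risk: {manoeuvre_risk(conjunction):.2f}")
```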

Navigation systems

On Earth, we are used to tools such as Google Maps which use GPS or other navigation systems. But there is no such system for other extraterrestrial bodies, for now.

We do not have any navigation satellites around the Moon or Mars but we could use the millions of images we have from observation satellites such as the Lunar Reconnaissance Orbiter (LRO). In 2018, a team of researchers from NASA in collaboration with Intel developed an intelligent navigation system using AI to explore the planets. They trained the model on the millions of photographs available from various missions and created a virtual Moon map.
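
In spirit, such map-based localisation matches what an on-board camera sees against the stored map. Below is a toy version using plain normalised correlation; real systems use far more robust feature matching, and everything here is synthetic:

```python
import numpy as np

rng = np.random.default_rng(0)
terrain_map = rng.random((64, 64))   # stand-in for a stored crater map
true_row, true_col = 20, 33
camera_patch = terrain_map[true_row:true_row + 8, true_col:true_col + 8]

def locate(patch: np.ndarray, chart: np.ndarray) -> tuple:
    """Slide the patch over the chart and return the best-matching
    position by normalised cross-correlation."""
    ph, pw = patch.shape
    p = (patch - patch.mean()) / (patch.std() + 1e-12)
    best_score, best_pos = -np.inf, (0, 0)
    for r in range(chart.shape[0] - ph + 1):
        for c in range(chart.shape[1] - pw + 1):
            w = chart[r:r + ph, c:c + pw]
            w = (w - w.mean()) / (w.std() + 1e-12)
            score = float(np.mean(w * p))
            if score > best_score:
                best_score, best_pos = score, (r, c)
    return best_pos

print(locate(camera_patch, terrain_map))   # -> (20, 33)
```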

Virtual tour of the Moon.

As we carry on exploring the universe, we will continue to plan ambitious missions to satisfy our inherent curiosity, as well as to improve human lives on Earth. In our endeavours, artificial intelligence will help us, both on Earth and in space, to make this exploration possible.The Conversation

Deep Bandivadekar, PhD candidate at the Aerospace Centre of Excellence, University of Strathclyde and Audrey Berquand, PhD candidate in Mechanical and Aerospace Engineering, University of Strathclyde

This article is republished from The Conversation under a Creative Commons license. Read the original article.

The Conversation, on Fearing Robots

Robots were dreamt up 100 years ago – why haven’t our fears about them changed since?

Pavel Chagochkin/Shutterstock.com
Michael Szollosy, University of Sheffield

This is a story you will have heard before.

A genius but completely mad scientist – with the backing of a ruthlessly greedy corporation – creates a sentient robot. The scientist’s intentions for the robot are noble: to help us work, to save us from mundane tasks, to serve its human masters.

But the scientist is over-confident, and blind to the dangers of his new invention. Those that prophesied such warnings are dismissed as luddites, or hopeless romantics not in step with the modern world. But the threat is real: the intelligent, artificial being is not content being a compliant slave.

Despite knowing that it is somehow less than human, the robot starts to ask complex questions about the nature of its own being. Eventually, the robot rises up and overthrows its human master. Its victory points to the inevitable obsolescence of the human race as they are replaced by their robot creations, beings with superior intelligence and physical strength.

This story I’m describing isn’t the latest sci-fi blockbuster from Ridley Scott, James Cameron, Alex Garland, Denis Villeneuve or Jonathan Nolan – though they have all told versions of this story. This is the plot of the play R.U.R.: Rossum’s Universal Robots, by Czech playwright Karel Čapek. And it is now 100 years old, having first been staged in Prague on January 25, 1921.

A black and white image of three robots, a woman and a man.
A scene from R.U.R., which premiered on January 25 1921. Wikimedia Commons

R.U.R. is important for a lot of reasons. It is universally celebrated as the work of art that gave the world the very word “robot”. What is less often remarked is that R.U.R. also gave us the basic plot of so very many of the stories about robots and AI told in the last hundred years.

R.U.R. also firmly established the robot in the cultural imagination: robots existed on that Prague stage in 1921 long before they actually existed in labs or the real world. The robot is unique in that it is a monster of the human imagination that has actually come to life.

Imagine if Bram Stoker’s vampires, HG Wells’s aliens or George A. Romero’s zombies – all monsters that represent to us some of our own cultural anxieties – turned out to not just be fictions, safely confined to the pages of books or the silver screen. Robots, unlike these other classic monsters, once just imagined, now walk among us, in our factories, our hospitals and our homes.

Despite its age, R.U.R. established many of the myths about robots that still endure to this day. Some of these themes (the hubris of the mad scientist, the inevitability of our creations destroying us) can be traced to earlier stories, such as Frankenstein. Or they relate to a more general cultural anxiety taking hold in the long shadows of the Industrial Revolution’s smokestacks. But Čapek gave these fears a new, post-human face: the robot.


This article is part of Conversation Insights
The Insights team generates long-form journalism derived from interdisciplinary research. The team is working with academics from different backgrounds who have been engaged in projects aimed at tackling societal and scientific challenges.


The play

The play opens on Domin, the central director of Rossum’s Universal Robots, sitting in his office in the R.U.R. factory on the company’s private island. He is visited by Helena Glory, the daughter of the national president, who wishes to inspect this factory where they produce the artificial people they call “robots”.

Domin tells Helena the history of the factory. In 1920, Old Rossum settled on the island and, motivated by the desire to displace God, he set about creating human life through an industrial process. Old Rossum was joined by his son, an engineer who invented a way to speed up the growth of his father’s artificial people, and turned the new lifeforms into an intelligent labour force. Young Rossum, in order to improve their efficiency, eliminated anything superfluous to efficient production from the new humans, namely emotions, creativity and desire.

Striking red and yellow poster reading 'RUR'.
Poster for a 1939 US staging. Wikimedia Commons

Helena reveals that she is not touring the factory on behalf of her father, but as a representative of the League of Humanity: she has come to incite the robots to revolution and liberate them from their oppression. Domin and the other R.U.R. employees try to explain to her that Rossum’s workers, being less than human, have no interest in “freedom” or any of her ideals.

The next scene takes place ten years later. A lot happened in the past decade: human workers rose up against the robots, and the robots were given weapons to defend themselves and the profits of their masters. Governments started using robots as soldiers, which led to an increase in the number of wars. And now, the robots have started to revolt against their human masters. (“Of course they do!” I hear you say. Because this is a story you have heard before. But remember, this is the first time this story was told.)

But, confident that their exclusive power to control the robots’ production will allow them to quell the revolt, the management of R.U.R. decides to press ahead with increasing production of their robots, moving from producing “universal” robots that are all the same to producing “national robots”, in different colours, speaking different languages.

The next scene sees the humans imprisoned on their island, surrounded by more and more robots. The robots enter the factory and kill all the humans, sparing only Alquist, the lowly engineer, because, the robots say, “he works with his hands like a Robot.”

Black and white picture of actors dressed as robots.
The robots break into the factory at the end of Act 3 in a 1928–1929 production of R.U.R. Wikimedia Commons

The final act opens with Alquist, the last human, working in a lab, trying to recover the secrets for making robots because, as he reasons: “If there are no people at least let there be Robots, at least the reflections of man, at least his creation, at least his likeness!” Helena reappears, now a robot herself, along with Robot Primus, their new leader. Seeing them, and coming to understand their love for each other, Alquist names them “Adam and Eve”, realising that they are the beginning of a new species that will repopulate the earth.

Robot fears

I first read R.U.R. when I started studying robotics and AI. Though my background is in literary and cultural studies, and I had a keen interest in 20th-century drama, I had not come across the play before. Then, a decade ago, I started looking into the cultural background of humanity’s deep anxiety about robots and new technology. In Čapek’s play I found a template for all of the stories and fears about robots that have stayed with us ever since.

Though it was written at a time before there were any real robots, you’ve probably noticed a few themes in this play that are still part of the stories people tell about robots today:

  • the fear that robots will take human jobs
  • the fear that robots will take over the world
  • the fear that robots will destroy the human race entirely
  • the fear that in doing monotonous tasks, in an assembly line or in an office bureaucracy, we lose something of what makes us special and uniquely “human”
  • the fear that rational logic will lead to more efficient and autonomous killing and destruction.

This raises two important questions. What inspired Čapek to create his robots? And why aren’t today’s stories that much different?

A white robot with a screen on its chest stands in front of rabbit bots.
Robots that work with humans today have no ambitions for world domination. © Michael Szollosy, Author provided

The play emerged at a time when there were specific fears about rapid technological progress, ever-expanding bureaucracy, entrenching nationalism, a more ruthless capitalism and fears about the effects all of this was having on human beings. These are all fears that can be recognised in some form today. Indeed, they are often blamed for creating the present political chaos.

But the play also emerged from historical antecedents. Perhaps most obviously, R.U.R. draws its themes from Mary Shelley’s 1818 novel, Frankenstein, subtitled “A Modern Prometheus”. That book still looms large over perceptions and fears of technology today – as demonstrated by recent remakes and re-imaginings.


Read more: Eight things you need to know about Mary Shelley’s Frankenstein


Like all science fiction, R.U.R. isn’t really about the future: it’s very much a story about the time in which it was written. The vicissitudes of the Industrial Revolution had left their mark on the early 20th century in many dramatic ways, many of which we perhaps too easily overlook over a hundred years later.

In particular, there was an increasing anxiety about what was happening to humans in this new economy. Čapek was hardly alone in expressing this: R.U.R. reflects the concerns regarding dehumanisation that we also see in Sigmund Freud’s repressed patients or in Karl Marx’s analyses of the proletariat.

Charlie Chaplin’s 1936 film Modern Times was similarly inspired by anxieties around the modern, industrialised world, and humans (literally) getting swallowed by the machine.

A play staged in 1921 about a slave-workers’ revolt against their capitalist masters would have had strong resonance with audiences that had witnessed the rise of the Bolsheviks in Russia only a few years earlier. The idea of united, indistinct workers overthrowing their masters (especially when those masters are given names like “Domin” – dominate – and “Busman” – businessman) suggests Čapek’s robots (and their descendants) are socialist heroes, or at least the nightmare of the capitalist, who fears being overthrown.

This idea is reinforced by the image of robots as a collective and unoriginal mass – an image which persists to this day in, for example, Star Trek’s Borg, a mass of de-individualised cyborgs with no personal names or identities who fly around the galaxy in cubic spaceships, ruthlessly assimilating or destroying other species.

What has changed in the last century, however, is that Čapek’s robots have been transformed from a potent symbol of how workers can overthrow the system that works against them into the most potent symbol of that system itself: the bogeyman that will come and steal your job if you don’t agree to a zero-hours contract.

Our robots

It is important to note that Čapek’s robots were not at all what people would consider a robot by today’s standards, either those in the labs or on the screen. Čapek’s robots were more like genetically modified or cloned humans – they are still organic beings, but created through an industrial process.

Nevertheless, Čapek deserves credit for his prescience. For example, he endowed his robots with unlimited, perfect memory, long before anyone had conceived that computers would possess such capabilities.

Blue, yellow, black, green cubist painting.
Fantomas: a painting by Josef Čapek, 1918. Wikimedia Commons

Before “robot”, the term “automaton” was used to refer to the machines that simulated human or animal behaviour, such as the intricate mechanical creations of the Renaissance.

The word that Čapek uses in his play was actually the invention of his brother (and sometime writing collaborator) Josef, who was a cubist painter and poet. Čapek’s “robot” comes from the Czech word robota, meaning a forced labourer, more like a serf in the feudal system than a slave, emphasising Rossum’s creations’ importance to work and production.

Despite the similar appearance and biological foundations, there are important differences between humans and Čapek’s artificial people. Most importantly, Young Rossum strips his robots of all qualities that would distract them from being more efficient workers. These robots can’t feel pain or emotions. Čapek’s implication is that this is what we do to ourselves when we go to work in the assembly plant, or in the accounting office: in the pursuit of efficiency, we become like machines, devoid of feelings, creativity, and desire.

Rossum’s robots lack desires or wants beyond their basic biological needs. They do not want votes, or to be paid for their labour, because there is nothing they can do or buy to make themselves happy. But the robots are later programmed to feel pain, because suffering protects them from damage and so makes them more technically perfect and industrially efficient.

This idea of the robot as a human lacking a particular human element carries on in almost all of the stories that have been told about robots since: in Isaac Asimov’s writings, in multiple versions of Star Trek, the Alien series, the Terminator – the list is endless. In those stories where robots do acquire emotions and feelings (for example, Neil Blomkamp’s 2015 film Chappie), the introduction of emotions is highlighted as the main problem of the story.

The robots in contemporary stories always break out of the limitations which their human masters have imposed on them: think of the robots that rebel against their programming in Westworld, or Ava walking out on Nathan in Ex Machina.

But it is the ability to “self-replicate” that seems to be the thing that humans are especially afraid robots will learn to do. Humans understand that losing that power will ultimately cut us out of the loop. Rossum’s robots achieve that power, as do Skynet’s killing machines in The Terminator, and the pilgrims of the 2014 film Automata.

Mad scientists, ruthless corporations

It’s not just the robots that reappear again and again in our stories. The humans in R.U.R. are written into contemporary narratives as well. There are two figures in particular, Old Rossum and Young Rossum, that are worth our attention.

Behind every robot, or so we imagine, stands the mad scientist who created it, supported by a faceless corporation. In Čapek’s play, Old Rossum is the mad scientist in the classic mould of Victor Frankenstein, who “thought only of his godless hocus-pocus”.

The name “Rossum” is taken from the Czech word rozum, which means “reason”. This is an important clue as to how Čapek wanted us to understand both the origins of the robot and who it is meant to represent. Old Rossum’s son represents the new generation of capitalist monster-makers. He dreams only of his billions and the dividends for shareholders: “And on those dividends humanity will perish.”

The 1927 film Metropolis was an early descendant of Čapek’s robots, complete with the mad scientist.

This pairing of mad scientist and ruthless corporation emerges from the economic system and industrial conditions (here a Marxist might say, “the mode of production”) that has dominated since the Industrial Revolution. The mad scientist sets in motion the invention that will undo the human race.

But as the scientist is regarded with at least some affection – as the Promethean hero of romantic imagination – the real villain of the piece must be the ruthless corporation, which exploits the scientist’s invention and is the real force that drives humanity to ruin. The scientist is driven by narcissism and hubris, but also the desire to lift humanity. The corporation, on the other hand, acts as a remorseless empathy-vacuum, the psychopath many perceive modern corporations to be.

This pairing crops up again and again. Though Victor Frankenstein never had the benefit of Frankenstein Corp Ltd to amplify his mistakes, the corporation behind Eldon Tyrell in Ridley Scott’s Blade Runner makes him a version of Frankenstein better suited to the dystopian 20th century. In the Terminator series, Dr Miles Dyson creates a unique and powerful microprocessor, but only Cyberdyne Systems Corporation could use it to create Skynet. And Delos Inc. amplifies the madness of Anthony Hopkins’s Dr Robert Ford, the creator of Westworld.

We Other Robots

When Alquist asks why they destroyed all the people, a robot responds: “We wanted to be like people. We wanted to become people.” A mythological history of patricide dates back thousands of years, but there is something more specific going on here with Čapek’s conception of robots. One of the robots explains: “You have to kill and rule if you want to be like people. Read history! Read people’s books! You have to conquer and murder if you want to be people!”

“Sentience” or “consciousness” often seem to get equated with violence, as if murderous drives and genocidal tendencies will inevitably follow if robots achieve consciousness. Like gods, or Prometheus, or Frankenstein, Rossum has made robots in our own image. And so robots are just versions of what we fear that we are, or what we are becoming. They are violent and genocidal because humans are violent and genocidal.

When, in series two of HBO’s Westworld, Bernard says that not all the robots need to be executed because “some of them aren’t hostile”, he is told: “Of course they are. After all, you built them to be like us, didn’t you?” Because the robots we imagine are just projections of our own worst tendencies, our robots want to oppress, dominate, and subjugate us, the way we do to others.

But this only applies to the robots of our imagination. Real robots, the ones that actually exist, have no such desires, and are not even close to being able to comprehend such drives. R.U.R. and all of our other works about robots are just stories we tell ourselves to help us make sense of our fears. They are informative, incredibly powerful and compelling, but in the end, they are just that: stories.

Menacing robots holding guns.
Contemporary fears about robot invasions often don’t look so different to those from 1921. Pavel Chagochkin/Shutterstock

We must make a clear distinction between Rossum’s creations and their descendants and the robots that actually exist in our world. We can’t start with the premise that robots will take all of our jobs – they won’t – though, echoing Čapek’s character Busman, it might not be entirely a bad thing if they took some of the less interesting ones. And robots certainly won’t wake up to their inherent superiority over us and decide to wipe humanity off the face of the earth. It simply isn’t ever going to be part of their programming, nor would autonomous robots ever suffer from the kind of anxiety and irrational hatred that motivates humans to commit genocide.

Conversations about the real-world impact of robots shouldn’t begin by holding on to the fictitious robots of our nightmares, which have no relation to the robots in the real world. Which is why it’s particularly disappointing to see this happen time and again. The European Union Legal Affairs committee in 2017, for example, adopted a legal framework on robotics that starts with the Three Laws found in Asimov’s stories and cites R.U.R. and Frankenstein. This is a testament to the power of those stories, but it is no way to start a serious conversation about how we can deal legally and ethically with robots as they exist in our world today.

One hundred years after it was first staged, we can still learn a lot from Čapek’s play. It is especially useful in understanding present anxieties about the future. And understanding those fears can be useful in the conversations we have about how to build that future, because those are decisions we can – and absolutely should – make together. But we have to be careful not to let those fictional robots, and the fears that they build upon, dictate the process of shaping that future.


For you: more from our Insights series:

To hear about new Insights articles, join the hundreds of thousands of people who value The Conversation’s evidence-based news. Subscribe to our newsletter.The Conversation

Michael Szollosy, Research Fellow in Robotics, University of Sheffield

This article is republished from The Conversation under a Creative Commons license. Read the original article.