The Conversation, on the Simulation Hypothesis

Curious Kids: could our entire reality be part of a simulation created by some other beings?

Sam Baron, Australian Catholic University

Is it possible the whole observable universe is just a thing kept in a container, in a room where there are some other extraterrestrial beings much bigger than us? Kanishk, Year 9

Hi Kanishk!

I’m going to interpret your question in a slightly different way.

Let’s assume these extraterrestrial beings have a computer on which our universe is being “simulated”. Simulated worlds are pretend worlds – a bit like the worlds on Minecraft or Fortnite, which are both simulations created by us.

If we think about it like this, it also helps to suppose these “beings” are similar to us. They’d have to at least understand us to be able to simulate us.

By narrowing the question down, we’re now asking: is it possible we’re living in a computer simulation run by beings like us? University of Oxford professor Nick Bostrom has thought a lot about this exact question. And he argues the answer is “yes”.

Not only does Bostrom think it’s possible, he thinks there’s a decent probability it’s true. Bostrom’s theory is known as the Simulation Hypothesis.

A simulated world that feels real

I want you to imagine there are many civilisations like ours dotted all around the universe. Also imagine many of these civilisations have developed advanced technology that lets them create computer simulations of a time in their own past (a time before they developed the technology).

Do you think our whole world could be created by someone using more advanced technology than we have today? Yash Raut/Unsplash

The people in these simulations are just like us. They are conscious (aware) beings who can touch, taste, move, smell and feel happiness and sadness. However, they have no way of proving they’re in a simulation and no way to “break out”.


Read more: Curious Kids: is time travel possible for humans?


Hedge your bets

According to Bostrom, if these simulated people (who are so much like us) don’t realise they’re in a simulation, then it’s possible you and I are too.

Suppose I guess we’re not in a simulation and you guess we are. Who guessed best?

Let’s say there is just one “real” past. But these futuristic beings are also running many simulations of the past — different versions they made up.

They could be running any number of simulations (it doesn’t change the point Bostrom is trying to make) — but let’s go with 200,000. Our guessing-game then is a bit like rolling a die with 200,000 sides.

When I guess we are not simulated, I’m betting the die will be a specific number (let’s make it 2), because there can only be one possible reality in which we’re not simulated.

This means in every other scenario we are simulated, which is what you guessed. That’s like betting the die will roll anything other than 2. So your bet is a far better one.
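If you like maths or coding, you can check this betting intuition for yourself. Below is a minimal Python sketch (my own illustration, not part of Bostrom's argument) that rolls the imaginary 200,000-sided die many times and counts how often each guess would win; the number of rolls is an arbitrary choice.

```python
import random

SIDES = 200_000    # the number of possible worlds in the example above
REAL_WORLD = 2     # the single face standing for "we are not simulated"
ROLLS = 1_000_000  # how many times we roll the die (arbitrary)

not_simulated_wins = sum(
    1 for _ in range(ROLLS) if random.randint(1, SIDES) == REAL_WORLD
)

print(f"'Not simulated' wins: {not_simulated_wins:>7} of {ROLLS} rolls")
print(f"'Simulated' wins:     {ROLLS - not_simulated_wins:>7} of {ROLLS} rolls")
```

Over a million rolls, the "not simulated" guess wins only a handful of times, which is exactly why betting on "simulated" is the better wager, given this set-up.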

Simulated or not simulated, would you bet on it? Ian Gonzalez/Unsplash

Are we simulated?

Does that mean we’re simulated? Not quite.

The odds are only against my guess if we assume these beings exist and are running simulations.

First, how likely is it there are beings so advanced they can run simulations with people who are “conscious” like us in the first place? Suppose this is very unlikely. Then it would also be unlikely our world is simulated.

Second, how likely is it such beings would run simulations even if they could? Maybe they have no interest in doing this. This, too, would mean it’s unlikely we are simulated.

Laying out all our options

Before us, then, are three possibilities:

  1. there are technologically advanced beings who can (and do) run many simulations of people like us (likely including us)

  2. there are technologically advanced beings who can run simulations of people like us, but don’t do this for whatever reason

  3. there are no beings technologically advanced enough to run simulations of people like us.

But are these really the only options available? The answer seems to be “yes”.

You might disagree by bringing up one of several theories suggesting our universe is not a simulation. For example, what if we’re all here because of the Big Bang (as science suggests), rather than by a simulation?

That’s a good point, but it actually fits within the Simulation Hypothesis, under options 2 and 3 – in which we’re not simulated. It doesn’t go against it. This is why the theory leaves us with only three options, one of which then must be true.

So which is it? Sadly, we don’t have enough evidence to help us decide.


Read more: Curious Kids: what started the Big Bang?


The principle of indifference

When we’re faced with a set of options and there is not enough evidence to believe one over the others, we should give an equal “credence” to each option. Generally speaking, credence is how likely you believe something to be true based on the evidence available.

Giving equal credence in cases such as the Simulation Hypothesis is an example of what philosophers call the “principle of indifference”.

Suppose you place a cookie on your desk and leave the room. When you come back, it’s gone. In the room with you were three people, all of whom are strangers to you.

You have to start by piecing together what you know. You know someone in the room took the cookie. If you knew person A had been caught stealing cookies in the past, you could guess it was probably them. But on this occasion, you don’t know anything about these people.

Would it be fair to accuse anyone in particular? No.

Our universe, expanding

And so it is with the simulation argument. We don’t have enough information to help us select between the three options.

What we do know is if option 1 is true, then we’re very likely to be in a simulation. In options 2 and 3, we’re not. Thus, Bostrom’s argument seems to imply our credence of being simulated is roughly 1 in 3.

To put this into perspective, your credence in getting “heads” when you flip a coin should be 1 in 2. And your credence in winning the largest lottery in the world should be around 1 in 300,000,000 (if you believe it isn’t rigged).

If that makes you a little nervous, it’s worth remembering we might make discoveries in the future that could change our credences. What that information might be and how we might discover it, however, remains unknown.

Famous astrophysicist Neil deGrasse Tyson has said it’s “hard to argue against” Bostrom’s Simulation Hypothesis.

Sam Baron, Associate professor, Australian Catholic University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

The Conversation, on Quantum Computation

Major quantum computational breakthrough is shaking up physics and maths

Quantum computers may be more trustworthy. Yurchanka Siarhei/Shutterstock
Ittay Weiss, University of Portsmouth

MIP* = RE is not a typo. It is a groundbreaking discovery and the catchy title of a recent paper in the field of quantum complexity theory. Complexity theory is a zoo of “complexity classes” – collections of computational problems – of which MIP* and RE are but two.

The 165-page paper shows that these two classes are the same. That may seem like an insignificant detail in an abstract theory without any real-world application. But physicists and mathematicians are flocking to visit the zoo, even though they probably don’t understand it all. Because it turns out the discovery has astonishing consequences for their own disciplines.

In 1936, Alan Turing showed that the Halting Problem – algorithmically deciding whether a computer program halts or loops forever – cannot be solved. Modern computer science was born. Its success gave the impression that soon all practical problems would yield to the tremendous power of the computer.

But it soon became apparent that, while some problems can be solved algorithmically, the actual computation would last long after our Sun has engulfed the computer performing it. Figuring out how to solve a problem algorithmically was not enough. It was vital to classify solutions by efficiency. Complexity theory classifies problems according to how hard it is to solve them. The hardness of a problem is measured in terms of how long the computation lasts.

RE stands for recursively enumerable – roughly, all the problems a computer can tackle at all, in the sense that it can confirm a “yes” answer, even if it might run forever otherwise. It is the whole zoo. Let’s have a look at some subclasses.

The class P consists of problems which a known algorithm can solve quickly (technically, in polynomial time). For instance, multiplying two numbers belongs to P since long multiplication is an efficient algorithm to solve the problem. The problem of finding the prime factors of a number is not known to be in P; the problem can certainly be solved by a computer but no known algorithm can do so efficiently. A related problem, deciding if a given number is a prime, was in similar limbo until 2004 when an efficient algorithm showed that this problem is in P.
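To make the contrast concrete, here is a short Python sketch (my own illustration, not from the paper) of the two problems just mentioned: multiplication runs in polynomial time, while naive trial-division factoring may need a number of attempts that grows exponentially with the number of digits of the input.

```python
def multiply(a: int, b: int) -> int:
    # In P: long multiplication needs roughly (number of digits)^2 steps.
    return a * b

def find_factor(n: int) -> int:
    # Not known to be in P: trial division may test up to sqrt(n) candidates,
    # and sqrt(n) grows exponentially with the number of digits of n.
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d
        d += 1
    return n  # no smaller factor found, so n is prime

print(multiply(104729, 104723))      # instant, even for enormous numbers
print(find_factor(104729 * 104723))  # fine here, but slows rapidly as n grows
```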

Another complexity class is NP. Imagine a maze. “Is there a way out of this maze?” is a yes/no question. If the answer is yes, then there is a simple way to convince us: simply give us the directions, we’ll follow them, and we’ll find the exit. If the answer is no, however, we’d have to traverse the entire maze without ever finding a way out to be convinced.

Yes/no problems like this – where a “yes” answer can be demonstrated efficiently – belong to NP. If we can solve a problem quickly ourselves, we need no further convincing, so P is contained in NP. Surprisingly, a million dollar question is whether P=NP. Nobody knows.
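The maze idea can be captured in a few lines of code. The sketch below (my own illustration, not from the paper) checks a proposed escape route through a tiny grid maze, verifying a “yes” certificate in time proportional to its length – exactly the kind of efficient checking that defines NP.

```python
# A tiny grid maze: '.' is open floor, '#' is a wall, 'E' is the exit.
MAZE = [
    "..#E",
    ".#..",
    "....",
]
START = (2, 0)  # (row, column) where we begin
MOVES = {"U": (-1, 0), "D": (1, 0), "L": (0, -1), "R": (0, 1)}

def verify_escape(route: str) -> bool:
    """Check a proposed route in time proportional to its length."""
    r, c = START
    for step in route:
        dr, dc = MOVES[step]
        r, c = r + dr, c + dc
        off_grid = not (0 <= r < len(MAZE) and 0 <= c < len(MAZE[0]))
        if off_grid or MAZE[r][c] == "#":
            return False  # stepped off the grid or into a wall
    return MAZE[r][c] == "E"  # did we end up at the exit?

print(verify_escape("RRRUU"))  # True: these directions lead to the exit
print(verify_escape("UURR"))   # False: this route walks into a wall
```

Finding such a route from scratch may take far longer than checking one; that gap between finding and checking is what the P versus NP question is about.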

Trust in machines

The classes described so far represent problems faced by a normal computer. But computers are fundamentally changing – quantum computers are being developed. If a new type of computer comes along and claims to solve one of our problems, how can we trust the answer is correct?

Complexity science helps explain what problems a computer can solve. Phatcharapon/Shutterstock

Imagine an interaction between two entities, an interrogator and a prover. In a police interrogation, the prover may be a suspect attempting to prove their innocence. The interrogator must decide whether the prover is sufficiently convincing. There is an imbalance; knowledge-wise the interrogator is in an inferior position.

In complexity theory, the interrogator is the person, with limited computational power, trying to solve the problem. The prover is the new computer, which is assumed to have immense computational power. An interactive proof system is a protocol that the interrogator can use in order to determine, at least with high probability, whether the prover should be believed. By analogy, these are crimes that the police may not be able to solve, but at least innocents can convince the police of their innocence. This is the class IP.

If multiple provers can be interrogated, and the provers are not allowed to coordinate their answers (as is typically the case when police interrogate multiple suspects), then we get the class MIP. Such interrogations, by cross-examining the provers’ responses, give the interrogator greater power, so MIP contains IP.

Quantum communication is a new form of communication carried out with qubits. Entanglement – a quantum feature in which separated qubits remain “spookily” linked, however far apart they are – makes quantum communication fundamentally different to ordinary communication. Allowing the provers of MIP to share entangled qubits leads to the class MIP*.

It seems obvious that communication between the provers can only serve to help them coordinate lies rather than assist the interrogator in discovering truth. For that reason, nobody expected that letting the provers share entanglement would make more computational problems reliably verifiable. Surprisingly, we now know that MIP* = RE. This means that quantum communication behaves wildly differently to normal communication.

Far-reaching implications

In the 1970s, Alain Connes formulated what became known as the Connes Embedding Problem. Grossly simplified, this asked whether infinite matrices can be approximated by finite matrices. This new paper has now proved this isn’t possible – an important finding for pure mathematicians.

In 1993, meanwhile, Boris Tsirelson pinpointed a problem in physics now known as Tsirelson’s Problem. This concerned two different mathematical formalisms of a single situation in quantum mechanics – to date an incredibly successful theory that explains the subatomic world. Being two different descriptions of the same phenomenon, the two formalisms were expected to be mathematically equivalent.

But the new paper now shows that they aren’t. Exactly how they can both still yield the same results and both describe the same physical reality is unknown, but it is why physicists are also suddenly taking an interest.

Time will tell what other unanswered scientific questions will yield to the study of complexity. Undoubtedly, MIP* = RE is a great leap forward.

Ittay Weiss, Senior Lecturer, University of Portsmouth

This article is republished from The Conversation under a Creative Commons license. Read the original article.

The Conversation, on AI Chatbots

AI called GPT-3 can write like a human but don’t mistake that for thinking – neuroscientist

Guillaume Thierry, Bangor University

Since it was unveiled earlier this year, the new AI-based language generating software GPT-3 has attracted much attention for its ability to produce passages of writing that are convincingly human-like. Some have even suggested that the program, created by OpenAI (a research company co-founded by Elon Musk), may display, or appear to exhibit, something like artificial general intelligence (AGI) – the ability to understand or perform any task a human can. This breathless coverage reveals a natural yet aberrant conflation in people’s minds between the appearance of language and the capacity to think.

Language and thought, though obviously not the same, are strongly and intimately related. And some people tend to assume that language is the ultimate sign of thought. But language can be easily generated without a living soul. All it takes is the digestion of a database of human-produced language by a computer program, AI-based or not.

Based on the relatively few samples of text available for examination, GPT-3 is capable of producing excellent syntax. It boasts a wide range of vocabulary, owing to an unprecedentedly large knowledge base from which it can generate thematically relevant, highly coherent new statements. Yet, it is profoundly unable to reason or show any sign of “thinking”.

For instance, one passage written by GPT-3 predicts you could suddenly die after drinking cranberry juice with a teaspoon of grape juice in it. This is despite the system having access to information on the web that grape juice is edible.

Another passage suggests that to bring a table through a doorway that is too small you should cut the door in half. A system that could understand what it was writing or had any sense of what the world was like would not generate such aberrant “solutions” to a problem.

If the goal is to create a system that can chat, fair enough. GPT-3 shows AI will certainly lead to better experiences than what has been available until now. And it certainly allows for a good laugh.

But if the goal is to get some thinking into the system, then we are nowhere near. That’s because AI such as GPT-3 works by “digesting” colossal databases of language content to produce “new”, synthesised language content.

The source is language; the product is language. In the middle stands a mysterious black box a thousand times smaller than the human brain in capacity and nothing like it in the way it works.

Reconstructing the thinking that is at the origin of the language content we observe is an impossible task, unless you study the thinking itself. As philosopher John Searle put it, only “machines with internal causal powers equivalent to those of brains” could think.

And for all our advances in cognitive neuroscience, we know surprisingly little about human thinking. So how could we hope to program it into a machine?

What mesmerises me is that people go to the trouble of suggesting what kinds of reasoning an AI like GPT-3 should be able to engage in. This is really strange, and in some ways amusing – if not worrying.

Why would a computer program, based on AI or not, and designed to digest hundreds of gigabytes of text on many different topics, know anything about biology or social reasoning? It has no actual experience of the world. It cannot have any human-like internal representation.

It appears that many of us fall victim to a mind-language causation fallacy. Supposedly there is no smoke without fire, no language without mind. But GPT-3 is a language smoke machine, entirely hollow of any actual human trait or psyche. It is just an algorithm, and there is no reason to expect that it could ever deliver any kind of reasoning. Because it cannot.

Filling in the gaps

Part of the problem is the strong illusion of coherence we get from reading a passage produced by AI such as GPT-3 because of our own abilities. Our brains were created by hundreds of thousands of years of evolution and tens of thousands of hours of biological development to extract meaning from the world and construct a coherent account of any situation.

When we read a GPT-3 output, our brain is doing most of the work. We construct meaning that was never intended, simply because the language looks and feels coherent and thematically sound, and so we connect the dots. We are so used to doing this, in every moment of our lives, that we don’t even realise it is happening.

We relate the points made to one another and we may even be tempted to think that a phrase is cleverly worded simply because the style may be a little odd or surprising. And if the language is particularly clear, direct and well constructed (which is what AI generators are optimised to deliver), we are strongly tempted to infer sentience, where there is no such thing.

When GPT-3’s predecessor GPT-2 wrote, “I am interested in understanding the origins of language,” who was doing the talking? The AI just spat out an ultra-shrunk summary of our ultimate quest as humans, picked up from an ocean of stored human language productions – our endless effort to understand what language is and where we come from. But there is no ghost in the shell, whether we “converse” with GPT-2, GPT-3, or GPT-9000.

Guillaume Thierry, Professor of Cognitive Neuroscience, Bangor University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

The Conversation, on password breakers

A computer can guess more than 100,000,000,000 passwords per second. Still think yours is secure?

Paul Haskell-Dowland, Edith Cowan University and Brianna O’Shea, Edith Cowan University

Passwords have been used for thousands of years as a means of identifying ourselves to others and in more recent times, to computers. It’s a simple concept – a shared piece of information, kept secret between individuals and used to “prove” identity.

Passwords in an IT context emerged in the 1960s with mainframe computers – large centrally operated computers with remote “terminals” for user access. They’re now used for everything from the PIN we enter at an ATM, to logging in to our computers and various websites.

But why do we need to “prove” our identity to the systems we access? And why are passwords so hard to get right?


Read more: The long history, and short future, of the password


What makes a good password?

Until relatively recently, a good password might have been a word or phrase of as little as six to eight characters. But we now have minimum length guidelines. This is because of “entropy”.

When talking about passwords, entropy is the measure of predictability. The maths behind this isn’t complex, but let’s examine it with an even simpler measure: the number of possible passwords, sometimes referred to as the “password space”.

If a one-character password only contains one lowercase letter, there are only 26 possible passwords (“a” to “z”). By including uppercase letters, we increase our password space to 52 potential passwords.

The password space continues to expand as the length is increased and other character types are added.

Making a password longer or more complex greatly increases the potential ‘password space’. More password space means a more secure password.
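To see how quickly the password space grows, here is a small Python sketch (my own illustration; the 100 billion guesses per second figure is the rate quoted later in this article). It counts the possible passwords for a few alphabets and lengths and estimates how long trying them all would take.

```python
import string

GUESSES_PER_SECOND = 100_000_000_000  # the guessing rate cited in this article

ALPHABETS = {
    "lowercase only (26)": string.ascii_lowercase,
    "upper + lower (52)": string.ascii_letters,
    "letters + digits + symbols (94)": string.ascii_letters + string.digits + string.punctuation,
}

for label, alphabet in ALPHABETS.items():
    for length in (6, 8, 12):
        space = len(alphabet) ** length       # total number of possible passwords
        hours = space / GUESSES_PER_SECOND / 3600
        print(f"{label:32} length {length:2}: {space:.2e} passwords, "
              f"~{hours:,.1f} hours to try them all")
```

At that rate an eight-character, lowercase-only password can be exhausted in a couple of seconds, while a 12-character password drawn from all 94 printable characters would take many thousands of years.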

Looking at the above figures, it’s easy to understand why we’re encouraged to use long passwords with upper and lowercase letters, numbers and symbols. The more complex the password, the more attempts needed to guess it.

However, the problem with depending on password complexity is that computers are highly efficient at repeating tasks – including guessing passwords.

Last year, a record was set for a computer trying to generate every conceivable password. It achieved a rate faster than 100,000,000,000 guesses per second.

By leveraging this computing power, cyber criminals can hack into systems by bombarding them with as many password combinations as possible, in a process known as a brute force attack.

And with cloud-based technology, guessing an eight-character password can be achieved in as little as 12 minutes and cost as little as US$25.

Also, because passwords are almost always used to give access to sensitive data or important systems, this motivates cyber criminals to actively seek them out. It also drives a lucrative online market selling passwords, some of which come with email addresses and/or usernames.

You can purchase almost 600 million passwords online for just AU$14!

How are passwords stored on websites?

Website passwords are usually stored in a protected manner using a mathematical algorithm called hashing. A hashed password is unrecognisable and can’t be turned back into the password (an irreversible process).

When you try to log in, the password you enter is hashed using the same process and compared to the version stored on the site. This process is repeated each time you log in.

For example, the password “Pa$$w0rd” is given the value “02726d40f378e716981c4321d60ba3a325ed6a4c” when calculated using the SHA1 hashing algorithm. Try it yourself.
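That calculation is easy to reproduce. The snippet below is a quick sketch using Python's standard hashlib library; SHA-1 appears here only because it matches the article's example, and it is no longer recommended for protecting real passwords.

```python
import hashlib

password = "Pa$$w0rd"
digest = hashlib.sha1(password.encode("utf-8")).hexdigest()

print(digest)
# Expected to print the value quoted above:
# 02726d40f378e716981c4321d60ba3a325ed6a4c
```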

When faced with a file full of hashed passwords, a brute force attack can be used, trying every combination of characters for a range of password lengths. This has become such common practice that there are websites that list common passwords alongside their (calculated) hashed value. You can simply search for the hash to reveal the corresponding password.

This screenshot of a Google search result for the SHA hashed password value ‘02726d40f378e716981c4321d60ba3a325ed6a4c’ reveals the original password: ‘Pa$$w0rd’.

The theft and selling of passwords lists is now so common, a dedicated website — haveibeenpwned.com — is available to help users check if their accounts are “in the wild”. This has grown to include more than 10 billion account details.

If your email address is listed on this site, you should definitely change the password for that account, as well as for any other sites where you use the same credentials.


Read more: Will the hack of 500 million Yahoo accounts get everyone to protect their passwords?


Is more complexity the solution?

You would think with so many password breaches occurring daily, we would have improved our password selection practices. Unfortunately, last year’s annual SplashData password survey has shown little change over five years.

The 2019 annual SplashData password survey revealed the most common passwords from 2015 to 2019.

As computing capabilities increase, the solution would appear to be increased complexity. But as humans, we are not skilled at (nor motivated to) remember highly complex passwords.

We’ve also passed the point where we use only two or three systems needing a password. It’s now common to access numerous sites, with each requiring a password (often of varying length and complexity). A recent survey suggests there are, on average, 70-80 passwords per person.

The good news is there are tools to address these issues. Most computers now support password storage in either the operating system or the web browser, usually with the option to share stored information across multiple devices.

Examples include Apple’s iCloud Keychain and the ability to save passwords in Internet Explorer, Chrome and Firefox (although less reliable).

Password managers such as KeePassXC can help users generate long, complex passwords and store them in a secure location for when they’re needed.

While this location still needs to be protected (usually with a long “master password”), using a password manager lets you have a unique, complex password for every website you visit.

This won’t prevent a password from being stolen from a vulnerable website. But if it is stolen, you won’t have to worry about changing the same password on all your other sites.

There are of course vulnerabilities in these solutions too, but perhaps that’s a story for another day.


Read more: Facebook hack reveals the perils of using a single account to log in to other services


Paul Haskell-Dowland, Associate Dean (Computing and Security), Edith Cowan University and Brianna O’Shea, Lecturer, Ethical Hacking and Defense, Edith Cowan University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

The Conversation, on Apple and its new privacy-centric approach to data collection

Apple is starting a war over privacy with iOS 14 – publishers are naive if they think it will back down

Ten four, let’s go to war! DANIEL CONSTANTE
Ana Isabel Domingos Canhoto, Brunel University London

iPhone users are about to receive access to Apple’s latest mobile operating system, iOS 14. It will come with the usual array of shiny new features, but the real game-changer will be missing – at least until January.

For the first time, iOS 14 will require apps to get permission from users before collecting their data – turning this compromise to their privacy into something users must actively opt in to.

This caused a major backlash from companies that rely on this data to make money, most notably Facebook. So why did Apple decide to jeopardise the business models of major rivals and their advertisers, and will the postponement make any difference?

The backlash

The opt-in is not the only change in iOS 14 that gives users more privacy protection, but it has attracted the most attention. Privacy campaigners will applaud the move, but the reaction from the media business has been mixed. The likes of American online publishing trade body Digital Content Next thought it would potentially benefit members.

But Facebook warned the opt-in could halve publishers’ revenues on its advertising platform, while some publishers are loudly concerned. The owner of UK news site Mail Online, DMG Media, threatened to delete its app from the App Store.

You are the product. Ink Drop

Whether publishers win or lose very much depends on their business model and customer base. Publishers’ model of selling their product to consumers and selling space to advertisers has been badly damaged by the internet. All the free content online drove down physical sales, which in turn eroded advertising income.

A few publications, like The Times in the UK, managed to convert readers into online subscribers, but the majority didn’t. Consequently online advertising revenues have become very important for most publishers – particularly behavioural or targeted advertising, which displays different ads to different viewers of the same page according to factors like their location, browser, and which websites they have visited. The adverts they see are decided by an ad trader, which is often Google.

Despite the importance of behavioural advertising to online publishers, they receive under 30% of what advertisers pay. Most of the remainder goes to Google and Facebook.

These two companies’ power comes from ad-trading and from owning many of the platforms on which publishers’ content is consumed – be it Facebook, Instagram, YouTube or the Google search engine – and selling advertising on the back of the user data they gather. To increase behavioural advertising income on their own sites, publishers are left with either attracting lots of viewers via clickbait or inflammatory content, or attracting difficult-to-reach, valuable customers via niche content.

Clickbait tends to upset customers, especially highly educated ones, while niche content tends to be too small-time for large media publishers. The reason some publishers welcome Apple’s move is that it could give them more control over advertising again, not only through selling more traditional display ads but also by developing other streams such as subscriptions and branded content.

For instance, the New York Times saw its ad revenues increase when it ditched targeted ads in favour of traditional online display in Europe in 2018 to get around the GDPR data protection restrictions. Conversely, DMG Media’s hostile reaction to iOS 14 reflects the fact that it collects and sells customer data on the Mail Online app, and also uses content with shock value to attract visitors and advertisers.

Privacy politics

Another important factor is the growing pushback against highly targeted advertising. With online users becoming increasingly concerned about online privacy, they are likely to engage less with ads, which reduces publishers’ income. They might also stop visiting sites displaying the targeted ads.

This is particularly true of more educated users, so curbing data collection could help publishers who serve these people. Online advertising also attracts more clicks when users control their data, so this could be a selling point to advertisers.

More generally, making traditional display advertising more important will benefit large publishers, since they have bigger audiences to sell to advertisers; but also those with clearly defined niche audiences (the Financial Times, say), since they offer a great way for advertisers to reach these people.

Online advertising represents 99% of Facebook revenues, so its resistance is not surprising. Online advertising is also important to Google revenues, though less so, and Google is also betting on the growing importance of consumer privacy by limiting data collection too – for instance, by third-party websites on the Chrome browser.

Apple’s perspective

Apple has little to gain here in the short term. It may even lose out if the likes of Mail Online leave the platform. But this is not a short-term move.

Apple wants to be known for a few things, such as user-friendly interfaces. It is also known for not aggressively collecting and exploiting user data, and standing up for consumer privacy.

Following the Cambridge Analytica scandal, which exposed Facebook’s lax privacy practices, Apple CEO Tim Cook famously said his company would never monetise customers’ information because privacy was a human right. The iOS 14 unveiling fits this approach. It helps Apple differentiate from competitors. It protects the company from privacy scandals. And it helps develop customer trust.

Privacy pays. John Gress Media Inc

Moreover, Apple doesn’t need to exploit customers’ data. Apple’s revenues derive mostly from hardware products and software licences. This business model requires large upfront investment and constant innovation, but is difficult to copy (due to patents, technical capacity and talent) and creates high barriers to entry.

Therefore Apple’s decision to postpone the opt-in until January is not a sign that it might backtrack on the feature. Privacy is core to Apple, and the company’s share of the app market is such that ultimately it is unlikely to feel threatened by some publishers withdrawing. The delay simply makes Apple look reasonable at a time when it is fighting accusations of monopolistic behaviour and unfair practices. So publishers should get ready for significant changes in the app ecosystem, whether they like it or not.

Ana Isabel Domingos Canhoto, Reader in Marketing, Brunel University London

This article is republished from The Conversation under a Creative Commons license. Read the original article.

The Conversation, on Robots in high-contact medical environments

Robots to be introduced in UK care homes to allay loneliness – that’s inhuman

Fay Bound Alberti, University of York

Some UK care homes are to deploy robots in an attempt to allay loneliness and boost mental health. The wheeled machines will “initiate rudimentary conversations, play residents’ favourite music, teach them languages, and offer practical help, including medicine reminders”. They are being introduced after an international trial found they reduced anxiety and loneliness.

These robots can hold basic conversations and be programmed to people’s interests. This is positive, but they are not a viable alternative to human interaction. It’s a sad state of affairs when robots are presented as solutions to human loneliness. Though intended as a way to fill in for carers in a “stretched social care system” rather than as a long-term solution, the use of robots is a slippery slope in removing the aged and infirm still further from the nerves and fibres of human interaction.

Robot companions have been trialled in the UK and Japan, from dogs that sit to attention to young women greeting isolated businessmen after a long day at work. They certainly serve a function in reminding people what it is to have companionship, helping with crude social interaction and providing cues to what it is to be human.

But robots cannot provide the altruism and compassion that should be at the core of a caring system. And they might even increase loneliness in the long term by reducing the actual contact people have with humans, and by increasing a sense of disconnect.

While there have been studies showing robotic pets can reduce loneliness, such research is generally based on a contrast with no interaction at all, rather than a comparison of human and robotic interaction.

It’s also important to factor in the role of novelty, which is often missing in care home environments. In 2007, a Japanese nursing home introduced Ifbot, a resident robot that provided emotional companionship, sang songs and gave trivia quizzes to elderly residents. The director of the facility reported that residents were engaged for about a month before losing interest, preferring “stuffed animals” to the “communication robot”.

Tactile connection

The preference for stuffed animals is, I think, important, because it also connects to the sensory experience of loneliness. Cuddly toys can be hugged and even temporarily moulded by the shape and temperature of the human body. Robots cannot. There is a limit to the sense of connection and embodied comfort that can come from robotic caregivers, or pets.

This is not only because robots show insufficient cultural awareness, and that their gestures might sometimes seem a little, well, mechanical. It’s because robots do not have flesh and blood, or even the malleability of a stuffed toy.

Consider the controversial experiments conducted by Harry Harlow in the 1950s, which showed that infant rhesus monkeys preferred a soft, cloth-covered surrogate mother over a bare wire one, even when only the wire mother provided milk. Similarly, robots lack the warmth of a human or animal companion. They don’t respond intuitively to the movement of their companions, or regulate the heartbeats of their owners through the simple power of touch.

Loneliness is a physical affliction as well as a mental one. Companionship can improve health and increase wellbeing, but only when it is the right kind.

Stroking a dog can be soothing for the person as well as the animal. Walking a dog also gets people out of the house where that is possible, and encourages social interaction.

As the owner of a young labrador, I am not always a fan of early rising. But I can see the positive emotional impact a pet has had on my young son, in contrast to many hours of technological absorption. An Xbox can’t curl up on your bed in the middle of the night to keep you warm.

And the infamous Labrador stink is like perfume to my son, who claims it makes him feel less lonely. So it’s smell, as well as touch, that is involved in loneliness – along with all the senses.

Robots can’t do this. Chendongshan/Shutterstock.com

Techno-fix

I am not a technophobe. In the Zoom world of COVID-19, technological solutions have a critical role in making people feel included, seen and listened to. In time, it may be that some of the distancing effects of technology, including the glitchy movements, whirring sounds and stilted body language will improve and become more naturalised. Similarly, robot companions may well in time become more lifelike. Who will remember the early, clunky days of Furreal pets?

But care robots are offering a solution that should not be needed. There is no reason for care home residents to be so devoid of human companionship (or animal support) that robot friends are the answer.

There is something dysfunctional about the infrastructure in which care is delivered, if robots are an economically motivated solution. Indeed, the introduction of robots into emotional care de-skills the complex work of caring, while commercialising and privatising responses to elderly loneliness.

It is often presented as “natural” or inevitable that elderly and infirm people live in homes, with other elderly and infirm people, shuttered away from the rest of the world. Care homes are an architectural way of concealing those that are least economically productive. There may be good homes, filled with happy residents, but there are many stories of people being ignored and neglected, especially during a pandemic.

How we care for the elderly and the infirm is a cultural and political choice. Historically, elderly and infirm people were part of the social fabric and extended families. With a globally ageing population, many countries are revisiting how best to restructure care homes in ways that reflect demographic, economic and cultural needs.

Care home schemes in the Netherlands house students with elderly people, and are popular with both groups. With a little imagination, care homes can be radically rethought.

New technologies have a role to play in society, just as they always have had in history. But they shouldn’t be used to paper over the gaps left by a withdrawal of social care and a breakdown in what “community” means in the 21st century. That’s inhuman.

Fay Bound Alberti, Reader in History and UKRI Future Leaders Fellow, University of York

This article is republished from The Conversation under a Creative Commons license. Read the original article.

The Conversation, on the Fourth Industrial Revolution and Agriculture

The fourth agricultural revolution is coming – but who will really benefit?

David Rose, University of Reading and Charlotte-Anne Chivers, University of Gloucestershire

Depending on who you listen to, artificial intelligence may either free us from monotonous labour and unleash huge productivity gains, or create a dystopia of mass unemployment and automated oppression. In the case of farming, some researchers, business people and politicians think the effects of AI and other advanced technologies are so great they are spurring a “fourth agricultural revolution”.

Given the potentially transformative effects of upcoming technology on farming – positive and negative – it’s vital that we pause and reflect before the revolution takes hold. It must work for everyone, whether it be farmers (regardless of their size or enterprise), landowners, farm workers, rural communities or the wider public. Yet, in a recently published study led by the researcher Hannah Barrett, we found that policymakers and the media are framing the fourth agricultural revolution as overwhelmingly positive, without giving much focus to the potential negative consequences.

The first agricultural revolution occurred when humans started farming around 12,000 years ago. The second was the reorganisation of farmland from the 17th century onwards that followed the end of feudalism in Europe. And the third (also known as the green revolution) was the introduction of chemical fertilisers, pesticides and new high-yield crop breeds alongside heavy machinery in the 1950s and 1960s.

The fourth agricultural revolution, much like the fourth industrial revolution, refers to the anticipated changes from new technologies, particularly the use of AI to make smarter planning decisions and power autonomous robots. Such intelligent machines could be used for growing and picking crops, weeding, milking livestock and distributing agrochemicals via drone. Other farming-specific technologies include new types of gene editing to develop higher yielding, disease-resistant crops; vertical farms; and synthetic lab-grown meat.

These technologies are attracting huge amounts of funding and investment in the quest to boost food production while minimising further environmental degradation. This might, in part, be related to positive media coverage. Our research found that UK coverage of new farming technologies tends to be optimistic, portraying them as key to solving farming challenges.

However, many previous agricultural technologies were also greeted with similar enthusiasm before leading to controversy later on, such as with the first genetically modified crops and chemicals such as the now-banned pesticide DDT. Given wider controversies surrounding emergent technologies like nanotechnology and driverless cars, unchecked or blind techno-optimism is unwise.

We mustn’t assume that all of these new farming technologies will be adopted without overcoming certain barriers. Precedent tells us that benefits are unlikely to be spread evenly across society and that some people will lose out. We need to understand who might lose and what we can do about it, and ask wider questions such as whether new technologies will actually deliver as promised.

Robotic milking might be efficient but creates new stresses. Mark Brandon/Shutterstock

Robotic milking of cows provides a good example. In our research, a farmer told us that using robots had improved his work-life balance and allowed a disabled farm worker to avoid dextrous tasks on the farm. But they had also created a “different kind of stress” due to the resulting information overload and the perception that the farmer needed to be monitoring data 24/7.

The National Farmers’ Union (NFU) argues that new technologies could attract younger, more technically skilled entrants to an ageing workforce. Such breakthroughs could enable a wider range of people to engage in farming by eliminating the back-breaking stereotypes through greater use of machinery.

But existing farm workers at risk of being replaced by a machine or whose skills are unsuited to a new style of farming will inevitably be less excited by the prospect of change. And they may not enjoy being forced to spend less time working outside, becoming increasingly reliant on machines instead of their own knowledge.

Power imbalance

There are also potential power inequalities in this new revolution. Our research found that some farmers were optimistic about a high-tech future. But others wondered whether those with less capital, poor broadband availability, limited IT skills and little access to advice on how to use the technology would be able to benefit.

History suggests technology companies and larger farm businesses are often the winners of this kind of change, and benefits don’t always trickle down to smaller family farms. In the context of the fourth agricultural revolution, this could mean farmers not owning or being able to fully access the data gathered on their farms by new technologies. Or reliance on companies to maintain increasingly important and complex equipment.

Advanced machinery can tie farmers to tech firms. Scharfsinn/Shutterstock

The controversy surrounding GM crops (which are created by inserting DNA from other organisms) provides a frank reminder that there is no guarantee that new technologies will be embraced by the public. A similar backlash could occur if the public perceive gene editing (which instead involves making small, controlled changes to a living organism’s DNA) as tantamount to GM. Proponents of wearable technology for livestock claim they improve welfare, but the public might see the use of such devices as treating animals like machines.

Instead of blind optimism, we need to identify where benefits and disadvantages of new agricultural technology will occur and for whom. This process must include a wide range of people to help create society-wide responsible visions for the future of farming.

The NFU has said the fourth agricultural revolution is “exciting – as well as a bit scary … but then the two often go together”. It is time to discuss the scary aspects with the same vigour as the exciting part.

David Rose, Elizabeth Creak Associate Professor of Agricultural Innovation and Extension, University of Reading and Charlotte-Anne Chivers, Research Assistant, Countryside and Community Research Institute, University of Gloucestershire

This article is republished from The Conversation under a Creative Commons license. Read the original article.

The Conversation, on how digital communication is less rich than in-person

Why our screens leave us hungry for more nutritious forms of social interaction

mc schraefel, University of Southampton

COVID-19 has seen all the rules change when it comes to social engagement. Workplaces and schools have closed, gatherings have been banned, and the use of social media and other online tools has risen to bridge the gap.

But as we continue to adapt to the various restrictions, we should remember that social media is the refined sugar of social interaction. In the same way that producing a bowl of white granules means removing minerals and vitamins from the sugarcane plant, social media strips out many valuable and sometimes necessarily challenging parts of “whole” human communication.

Fundamentally, social media dispenses with the nuance of dealing with a person in the flesh and all the signalling complexities of body language, vocal tone and speed of utterance. The immediacy and anonymity of social media also remove the (healthy) challenges of paying attention, properly processing information and responding with civility.

As a result, social media is a fast and easy way to communicate. But while the removal of complexity is certainly convenient, a diet high in connections through social media has been widely shown to have a detrimental effect on our physical and emotional wellbeing.

Increased anxiety and depression are well-known side-effects. There are also consequences for making decisions based on simplistic, “refined” sources of information. We may be less discerning when it comes to evaluating such information, responding with far less reflection. We see a tweet, and we are triggered by it immediately – not unlike a sugar hit from a bar of chocolate.

More complex kinds of communication demand more of us, as we learn to recognise and engage with the complexities of face-to-face interaction – the tempo, closeness and body language that make up the non-verbal cues of communication that are missing in social media.

These cues may even exist because we have evolved to be with others, to work with others. Consider, for example, the hormone oxytocin, which is associated with trust and lower stress levels and is triggered when we are in the physical company of others.

Another indicator of trust and engagement is the fact that group heart rates synchronise when working together. But achieving such rhythm of communication takes effort, skill and practice.

Pause for thought

There’s an interesting element of elite athletic performance known as “quiet eye”. It refers to the brief moment of pause before a tennis player serves or a footballer takes a penalty to focus on the goal. Good communicators, too, seem to take this pause, whether it’s in a presentation or a conversation – a moment lost in social media’s rush for an immediate anonymous response.

Having said all this, I don’t believe social media – or table sugar for that matter – is fundamentally wrong. As with a slice of cake on a special occasion, it can be a delight, a treat and a rush. But problems appear when it is our dominant form of communication. As with only eating cake, it weakens us, leaving us far less able to thrive in more challenging environments.

COVID-19 has meant a greater proportion of many people’s lives are spent online. But even Zoom meetings and gatherings, while more intimate than a tweet or social media post, also have limitations and lead to fatigue.

In physiological terms, part of the reason for these experiences being so challenging is that we are supposed to connect with each other in person. We are wired to deal with every aspect of physically present personal contact – from the uncomfortable conversations to the hugely gratifying exchanges.

We suffer without it. We see this in energy levels, overall health and mental stability. It’s physical as well as emotional in effect. Indeed, researchers have shown for over a decade now that loneliness kills. What research has yet to show is if social media mitigates this.


Read more: Coronavirus: experts in evolution explain why social distancing feels so unnatural


Again, virtual meetings are not intrinsically wrong. But they are not sufficient, in human physiological terms, to sustain what we have come to need after 300,000 years of evolution.

Even in the days before coronavirus, social media had been evolving into a dominant means of communication for many. Fast and easy, but also often mean, judgemental, fleeting – something that does not bring out the best in us.

The hope in offering this analogy is that by contextualising how social media works in terms of our physiology, we can start to understand how we may need to balance social media with other more challenging, but ultimately more satisfying, forms of communication. And also how we may need to design virtual methods of communication that embrace more of the physiology of social contact that we need, and which help us to thrive.

mc schraefel, Professor of Computer Science and Human Performance, University of Southampton

This article is republished from The Conversation under a Creative Commons license. Read the original article.

The Conversation, on the moral compass of Tech billionaires

How tech billionaires’ visions of human nature shape our world

Simon McCarthy-Jones, Trinity College Dublin

In the 20th century, politicians’ views of human nature shaped societies. But now, creators of new technologies increasingly drive societal change. Their view of human nature may shape the 21st century. We must know what technologists see in humanity’s heart.

The economist Thomas Sowell proposed two visions of human nature. The utopian vision sees people as naturally good. The world corrupts us, but the wise can perfect us.

The tragic vision sees us as inherently flawed. Our sickness is selfishness. We cannot be trusted with power over others. There are no perfect solutions, only imperfect trade-offs.

Science supports the tragic vision. So does history. The French, Russian and Chinese revolutions were utopian visions. They paved their paths to paradise with 50 million dead.

The USA’s founding fathers held the tragic vision. They created checks and balances to constrain political leaders’ worst impulses.

Technologists’ visions

Yet when Americans founded online social networks, the tragic vision was forgotten. Founders were trusted to juggle their self-interest and the public interest when designing these networks and gaining vast data troves.

Users, companies and countries were trusted not to abuse their new social-networked power. Mobs were not constrained. This led to abuse and manipulation.

Belatedly, social networks have adopted tragic visions. Facebook now acknowledges regulation is needed to get the best from social media.

Tech billionaire Elon Musk dabbles in both the tragic and utopian visions. He thinks “most people are actually pretty good”. But he supports market rather than government control, wants competition to keep us honest, and sees evil in individuals.

Musk’s tragic vision propels us to Mars in case short-sighted selfishness destroys Earth. Yet his utopian vision assumes people on Mars could be entrusted with the direct democracy that America’s founding fathers feared. His utopian vision also assumes giving us tools to think better won’t simply enhance our Machiavellianism.

Bill Gates leans to the tragic and tries to create a better world within humanity’s constraints. Gates recognises our self-interest and supports market-based rewards to help us behave better. Yet he believes “creative capitalism” can tie self-interest to our inbuilt desire to help others, benefiting all.

Peter Thiel considers the code of human nature. Heisenberg Media/Flickr, CC BY-SA

A different tragic vision lies in the writings of Peter Thiel. This billionaire tech investor was influenced by philosophers Leo Strauss and Carl Schmitt. Both believed evil, in the form of a drive for dominance, is part of our nature.

Thiel dismisses the “Enlightenment view of the natural goodness of humanity”. Instead, he approvingly cites the view that humans are “potentially evil or at least dangerous beings”.

The consequences of seeing evil

The German philosopher Friedrich Nietzsche warned that those who fight monsters must beware of becoming monsters themselves. He was right.

People who believe in evil are more likely to demonise, dehumanise, and punish wrongdoers. They are more likely to support violence before and after another’s transgression. They feel that redemptive violence can eradicate evil and save the world. Americans who believe in evil are more likely to support torture, killing terrorists and America’s possession of nuclear weapons.

Technologists who see evil risk creating coercive solutions. Those who believe in evil are less likely to think deeply about why people act as they do. They are also less likely to see how situations influence people’s actions.

Two years after 9/11, Peter Thiel founded Palantir. This company creates software to analyse big data sets, helping businesses fight fraud and the US government combat crime.

Thiel is a Republican-supporting libertarian. Yet, he appointed a Democrat-supporting neo-Marxist, Alex Karp, as Palantir’s CEO. Beneath their differences lies a shared belief in the inherent dangerousness of humans. Karp’s PhD thesis argued that we have a fundamental aggressive drive towards death and destruction.

Just as believing in evil is associated with supporting pre-emptive aggression, Palantir doesn’t just wait for people to commit crimes. It has patented a “crime risk forecasting system” to predict crimes and has trialled predictive policing. This has raised concerns.

Karp’s tragic vision acknowledges that Palantir needs constraints. He stresses the judiciary must put “checks and balances on the implementation” of Palantir’s technology. He says the use of Palantir’s software should be “decided by society in an open debate”, rather than by Silicon Valley engineers.

Yet, Thiel cites philosopher Leo Strauss’ suggestion that America partly owes her greatness “to her occasional deviation” from principles of freedom and justice. Strauss recommended hiding such deviations under a veil.

Thiel introduces the Straussian argument that only “the secret coordination of the world’s intelligence services” can support a US-led international peace. This recalls Colonel Jessop in the film, A Few Good Men, who felt he should deal with dangerous truths in darkness.

Can we handle the truth?

Seeing evil after 9/11 led technologists and governments to overreach in their surveillance. This included XKEYSCORE, a formerly secret computer system used by the US National Security Agency to collect people’s internet data, which has been linked to Palantir. The American people rejected this approach and democratic processes increased oversight and limited surveillance.

Facing the abyss

Tragic visions pose risks. Freedom may be unnecessarily and coercively limited. External roots of violence, like scarcity and exclusion, may be overlooked. Yet if technology creates economic growth, it can address many of these external causes of conflict.

Utopian visions ignore the dangers within. Technology that only changes the world is insufficient to save us from our selfishness and, as I argue in a forthcoming book, our spite.

Technology must change the world working within the constraints of human nature. Crucially, as Karp notes, democratic institutions, not technologists, must ultimately decide society’s shape. Technology’s outputs must be democracy’s inputs.

This may involve us acknowledging hard truths about our nature. But what if society does not wish to face these? Those who cannot handle truth make others fear to speak it.

Straussian technologists, who believe but dare not speak dangerous truths, may feel compelled to protect society in undemocratic darkness. They overstep, yet are encouraged to do so by those who see more harm in speech than in its suppression.

The ancient Greeks had a name for someone with the courage to tell truths that could put them in danger – the parrhesiast. But the parrhesiast needed a listener who promised not to react with anger. This parrhesiastic contract allowed dangerous truth-telling.

We have shredded this contract. We must renew it. Armed with the truth, the Greeks felt they could take care of themselves and others. Armed with both truth and technology, we can move closer to fulfilling this promise. The Conversation

Simon McCarthy-Jones, Associate Professor in Clinical Psychology and Neuropsychology, Trinity College Dublin

This article is republished from The Conversation under a Creative Commons license. Read the original article.

The Conversation, on nanotechnology and viruses

Coronavirus nanoscience: the tiny technologies tackling a global pandemic

Kateryna Kon/Shutterstock
Josh Davies, Cardiff University

The world-altering coronavirus behind the COVID-19 pandemic is thought to be just 60 nanometres to 120 nanometres in size. This is so mind-bogglingly small that you could fit more than 400 of these virus particles into the width of a single hair on your head. In fact, coronaviruses are so small that we can’t see them with normal microscopes and require much fancier electron microscopes to study them. How can we battle a foe so minuscule that we cannot see it?
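
To get a feel for those numbers, here is a rough back-of-the-envelope check written as a short Python sketch. It assumes a hair is roughly 60 to 100 micrometres wide, which is typical but varies from person to person.

# Rough check: how many 120nm virus particles span the width of a human hair?
# Assumption: a hair is roughly 60-100 micrometres wide (typical, but it varies).
virus_diameter_nm = 120
for hair_width_um in (60, 100):
    hair_width_nm = hair_width_um * 1_000  # 1 micrometre = 1,000 nanometres
    particles_across = hair_width_nm / virus_diameter_nm
    print(f"{hair_width_um} micrometre hair: about {particles_across:.0f} particles side by side")
# Prints about 500 and 833, so "more than 400" holds either way.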

One solution is to fight tiny with tiny. Nanotechnology refers to any technology that is, or contains components that are, between 1nm and 100nm in size. Nanomedicine that takes advantage of such tiny technology is used in everything from plasters containing anti-bacterial nanoparticles of silver to complex diagnostic machines.

Nanotechnology also has an impressive record against viruses and has been used since the late 1800s to separate and identify them. More recently, nanomedicine has been used to develop treatments for flu, Zika and HIV. And now it’s joining the fight against the COVID-19 virus, SARS-CoV-2.

Diagnosis

If you’re suspected of having COVID, swabs from your throat or nose will be taken and tested by reverse transcription polymerase chain reaction (RT-PCR). This method checks if genetic material from the coronavirus is present in the sample.

Despite being highly accurate, the test can take up to three days to produce results, requires high-tech equipment only accessible in a lab, and can only tell if you have an active infection when the test is taken. But antibody tests, which check for the presence of coronavirus antibodies in your blood, can produce results immediately, wherever you’re tested.

Antibodies are formed when your body fights back against a virus. They are tiny proteins that search for and destroy invaders by hunting for the chemical markers of germs, called antigens. This means antibody tests can tell not only if you currently have coronavirus but also if you have previously had it.

Antibody tests use nanoparticles of materials such as gold to capture any antibodies from a blood sample. These then slowly travel along a small piece of paper and stick to an antigen test line that only the coronavirus antibody will bond to. This makes the line visible and indicates that antibodies are present in the sample. These tests are more than 95% accurate and can give results within 15 minutes.

Vaccines and treatment

A major turning point in the battle against coronavirus will be the development of a successful vaccine. Vaccines often contain an inactive form of a virus that acts as an antigen to train your immune system and enable it to develop antibodies. That way, when it meets the real virus, your immune system is ready and able to resist infection.

But there are limitations: typical vaccine material can break down prematurely in the bloodstream and does not always reach its target, reducing the vaccine’s efficiency. One solution is to enclose the vaccine material inside a nanoshell by a process called encapsulation.

These shells are made from fats called lipids and can be as thin as 5nm, around 50,000 times thinner than an eggshell. The nanoshells protect the inner vaccine material from breaking down and can also be decorated with molecules that target specific cells, making them more effective at delivering their cargo.
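
As a rough sanity check on that comparison, here is a short Python sketch. It assumes an eggshell is about 0.25 to 0.4 millimetres thick, which varies from egg to egg.

# Rough check: how many times thinner is a 5nm lipid nanoshell than an eggshell?
# Assumption: an eggshell is roughly 0.25-0.4mm thick (it varies from egg to egg).
nanoshell_thickness_nm = 5
for eggshell_mm in (0.25, 0.4):
    eggshell_nm = eggshell_mm * 1_000_000  # 1 millimetre = 1,000,000 nanometres
    ratio = eggshell_nm / nanoshell_thickness_nm
    print(f"{eggshell_mm}mm eggshell: about {ratio:,.0f} times thicker than the nanoshell")
# Gives roughly 50,000 to 80,000, in line with the figure of around 50,000 above.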

Such encapsulation can improve the immune response of elderly people to the vaccine. And critically, people typically need lower doses of these encapsulated vaccines to develop immunity, meaning you can more quickly produce enough to vaccinate an entire population.

Encapsulation can also improve viral treatments. A major contributor to the deaths of virus patients in intensive care is “acute respiratory distress syndrome”, which occurs when the immune system produces an excessive response. Encapsulation can be used to deliver immunosuppressive drugs directly to targeted organs, helping to regulate the immune system’s response.

Transmission reduction

It’s hard to exaggerate the importance of wearing face masks and washing your hands in reducing the spread of COVID-19. But typical face coverings can struggle to stop the most penetrating respiratory droplets, and many can only be used once.

New fabrics made from nanofibres 100nm thick and coated in titanium oxide can catch droplets smaller than 1,000nm, which can then be destroyed by ultraviolet (UV) radiation from sunlight. Masks, gloves and other personal protective equipment (PPE) made from such fabrics can also be washed and reused, and are more breathable.

Close-up of intricately woven fibres.
New fabrics made from coated nanofibres could produce better PPE. AnnaVel/Shutterstock

Another important nanomaterial is graphene, which is formed from a single honeycomb layer of carbon atoms and is 200 times stronger than steel but lighter than paper. Fabrics laced with graphene can capture viruses and block them from passing through. PPE containing graphene could be more resistant to punctures, flames, UV and microbes while also being lightweight.

Graphene isn’t reserved for fabrics either. Virus-capturing nanoparticles could be placed on surfaces in public places that are particularly likely to facilitate transmission of the virus.

These technologies are just some of the ways nanoscience is contributing to the battle against COVID-19. While there is no one answer to a global pandemic, these tiny technologies certainly have the potential to be an important part of the solution. The Conversation

Josh Davies, PhD Candidate in Chemistry, Cardiff University

This article is republished from The Conversation under a Creative Commons license. Read the original article.