The Conversation, on personality profiling using VR

How we discovered that VR can profile your personality

Stephen Fairclough, Liverpool John Moores University

Virtual reality (VR) has the power to take us out of our surroundings and transport us to far-off lands. From a quick round of golf, to fighting monsters or going for a skydive, all of this can be achieved from the comfort of your home.

But it’s not just gamers who love VR and see its potential. VR is used a lot in psychology research to investigate areas such as social anxiety, moral decision-making and emotional responses. And in our new research we used VR to explore how people respond emotionally to a potential threat.

We knew from earlier work that being high up in VR provokes strong feelings of fear and anxiety. So we asked participants to walk across a grid of ice blocks suspended 200 metres above a snowy alpine valley.

We found that as we increased the precariousness of the ice block path, participants’ behaviour became more cautious and considered – as you would expect. But we also found that how people behave in virtual reality can provide clear evidence of their personality. Indeed, we were able to pinpoint participants with a certain personality trait based on the way they behaved in the VR scenario.

While this may be an interesting finding, it obviously raises concerns about people’s data. Technology companies could profile people’s personality via their VR interactions and then use this information to target advertising, for example. This clearly raises concerns about how data collected through VR platforms can be used.

Virtual fall

As part of our study, we used head-mounted VR displays and handheld controllers, but we also attached sensors to people’s feet. These sensors allowed participants to test out a block before stepping onto it with both feet.

As participants made their way across the ice, some blocks would crack and change colour when participants stepped onto them with one foot or both feet. As the experiment progressed, the number of crack blocks increased.

We also included a few fall blocks. These treacherous blocks were identical to crack blocks until activated with both feet, when they shattered and participants experienced a short but uncomfortable virtual fall.

We found that as we increased the number of crack and fall blocks, participants’ behaviour became more cautious and considered. We saw a lot more testing with one foot to identify and avoid the cracks and more time spent considering the next move.

But this tendency towards risk-averse behaviour was more pronounced for participants with a higher level of a personality trait called neuroticism. People with high neuroticism are more sensitive to negative stimuli and potential threat.

Personality and privacy

We had participants complete a personality scale before performing the study. We specifically looked at neuroticism, as this measures the extent to which each person is likely to experience negative emotions such as anxiety and fear. And we found that participants with higher levels of neuroticism could be identified in our sample based on their behaviour. These people did more testing with one foot and spent longer standing on “safe” solid blocks when the threat was high.

Neuroticism is one of the five major personality traits most commonly used to profile people. These traits are normally assessed by a self-report questionnaire, but can also be assessed based on behaviour – as demonstrated in our experiment.

As technology continues to advance, so too does the power of surveillance. insta_photos/Shutterstock

Our findings show how users of VR could have their personality profiled in a virtual world. This approach, where private traits are predicted based on implicit monitoring of digital behaviour, was demonstrated with a dataset derived from Facebook likes back in 2013. This paved the way for controversial commercial applications and the Cambridge Analytica scandal – when psychological profiles of users were allegedly harvested and sold to political campaigns. And our work demonstrates how the same approach could be applied to users of commercial VR headsets, which raises major concerns for people’s privacy.

Users should know if their data is being tracked, whether historical records are kept, whether data can be traced to individual accounts, along with what the data is used for and who it can be shared with. After all, we wouldn’t settle for anything less if such a comprehensive level of surveillance could be achieved in the real world.

Stephen Fairclough, Professor of Psychophysiology in the School of Psychology, Liverpool John Moores University

This article is republished from The Conversation under a Creative Commons license. Read the original article.


What is a CVE?

CVE stands for Common Vulnerabilities and Exposures. In essence, each CVE entry identifies a single security vulnerability that has been catalogued and can be remediated on its own.

At Hayachi Services we take the security of your organisation seriously, and so all of our Partners use CVEs to help keep you secure and classify risk on your systems.

For example, one of the modules that is offered with Panda Security’s Adaptive Defense is Patch Management. This tool uses CVE classifications on hundreds of applications to ensure that you can keep track of the Common Vulnerabilities your IT Estate has and remediate them with the click of a button.

Equally Red Hat use Smart Management to allow you to easily update and keep control of your Red Hat Enterprise Linux estate, at no extra cost to you – another bit of value when buying with Red Hat.

So then, why is it important to pay for world-class solutions to avoid these ‘Common’ Vulnerabilities?

A CVE can be a dangerous thing even if it is well known, in the same way that phishing attacks, while common, still do incredible damage to businesses and individuals.

Businesses will at times use legacy software, for compatibility and compliance reasons, as well as the simple fact that business-critical tools aren’t always available on a newer system.

CERN, for example, is aware of the risk of legacy operating systems and takes steps to protect itself, and the example it uses is powerful: who would walk naked through a quarantine ward and expect not to be infected?

Being aware of the most obvious, common risks across your estate, and having the reporting, patching and control to decide what is within your risk appetite and what needs to be secured immediately, is incredibly powerful.

The National Cyber Security Centre publishes weekly threat reports for exactly this reason: new threats develop organically every day, and keeping a finger on the pulse is essential.

How can Hayachi Services help?

Simple question, and a simple answer. Chat with us! We can talk about your risk appetite and suggest actions to help protect your business.

Our Partners work with the smallest SMEs and charities all the way up to the Fortune 500, defence and critical infrastructure, so we’ll certainly have something to allay your concerns.

We are proudly vendor-neutral, and while our favourites are our Partners from across the industry, that isn’t to say we will only recommend what we personally sell.

Favouritism doesn’t keep our clients safe, so we don’t do it!



Password Security

We all have passwords. Secret, hidden things that only we and everyone we tell the password to are meant to know.

Sadly, it is often the case that gleaning a little knowledge of a particular person’s life allows an attacker to guess their passwords with a degree of ease (often with the help of software).

Passwords are muscle memory more than anything else: patterns arise and so even if your current password is secure, it is possible that it is not dissimilar to one of your previous passwords.

A solution: Passbolt is a cloud-ready, open-source password manager. We highly recommend it, and use it ourselves. Password managers are important because they mitigate two risks: first, that someone can see you typing your password; and second, ‘keyloggers’, malicious programs that record what you type. Most anti-virus tools do not detect keyloggers, so typing a password repeatedly can become a big risk.

And to keep an eye on data breaches out in the wild, the famed cyber security expert Troy Hunt has a website especially for you. HaveIBeenPwned is an excellent tool to ensure that users of your domain are aware of other firms’ data breaches, which could also jeopardise your own firm’s footing. You simply put in your domain or email address and the tool scours the usual suspects to see if those details have been compromised at, say, an online shop you buy chocolates from that suffered a data breach.
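For passwords specifically, Troy Hunt’s related Pwned Passwords service exposes a “range” API designed so your actual password never leaves your machine: you send only the first five characters of its SHA-1 hash and compare the response locally. A minimal Python sketch of the client side (the HTTP call itself is omitted, and the function names are our own illustrative choices):

```python
import hashlib

def hibp_range_query(password: str) -> tuple[str, str]:
    """Split a password's SHA-1 hash for the Pwned Passwords k-anonymity API.

    Only the 5-character prefix is ever sent to the service
    (GET https://api.pwnedpasswords.com/range/<prefix>); the full
    password and full hash stay local.
    """
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]

def count_breaches(suffix: str, response_text: str) -> int:
    """Scan the API's 'SUFFIX:COUNT' lines for our hash suffix."""
    for line in response_text.splitlines():
        candidate, _, count = line.partition(":")
        if candidate == suffix:
            return int(count)
    return 0
```

Because the server only ever sees a 5-character prefix shared by many different passwords, it cannot tell which one you were checking.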

Alongside stronger alphanumeric passwords (e.g. H@Yach!S3rvic3s), the NCSC recommends the use of two-factor authentication, which means that even when the right password is entered, there is a secret ‘handshake’ that only you and the system can recognise. It proves you are indeed physically, and digitally, you.
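One common form of that handshake is the time-based one-time password (TOTP) used by authenticator apps, standardised in RFC 6238. A minimal sketch using only Python’s standard library (illustrative, not production code):

```python
import hashlib
import hmac
import struct

def hotp(key: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226 HMAC-based one-time password."""
    msg = struct.pack(">Q", counter)            # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                  # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(key: bytes, at_time: float, step: int = 30, digits: int = 6) -> str:
    """RFC 6238 time-based OTP: HOTP over a 30-second counter.

    Pass time.time() as at_time to get the current code.
    """
    return hotp(key, int(at_time) // step, digits)
```

Both you and the server derive the same short-lived code from a shared secret and the current time, so a stolen password alone is not enough to log in.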

Want to go a step further? Drop us a note or give us a call and we will be happy to talk tech and discuss making your firm more cyber-secure.



Linux and Unix-like [collectively, *nix] operating systems are incredibly common. Almost every mobile phone, gaming system, internet streaming service, telephony service, and enterprise-level application runs on Linux or depends on *nix for business-critical work.

*nix is necessarily a very broad term though, so we’ve collected some of our favourite distributions for you to look at, to help give a flavour of what is on offer.

There are a multitude of operating systems, largely because *nix is based on open, collaborative and international work.

For example, FreeBSD and Red Hat are used by many businesses and government institutions globally, partly because they are licensed in a way which allows them to be open and customisable.

They both have a range of certifications to demonstrate knowledge of best practice, and Red Hat in particular has an award-winning support arm as well as an unparalleled ten-year life-cycle for its solutions.

Enjoy! And if you have questions or queries, send us an email and we will be happy to answer them.

The Conversation, on Spreadsheet errors

Excel errors: the UK government has an embarrassingly long history of spreadsheet horror stories

Simon Thorne, Cardiff Metropolitan University

When the UK prime minister, Boris Johnson, first said the country would develop a £12 billion “world-beating” system for testing and tracing cases of COVID-19, few people probably imagined that it would be based around a £120 generic spreadsheet program. Yet the news that the details of 16,000 positive test cases had been lost because of an error made with Microsoft Excel revealed that was exactly the case.

But perhaps we shouldn’t be that surprised. Spreadsheets are ubiquitous. They can be found in critical operations in practically every industry. They are highly adaptable programming environments that can be applied to many different tasks.

They are also very easy to misuse or make mistakes with. As a result, almost everyone has their own spreadsheet horror stories. And the UK government in particular has a long history of embarrassing errors.

In 2012, the Department for Transport was forced into an embarrassing and costly retraction when it realised its spreadsheet-based financial model for awarding the west coast rail franchise contained flawed assumptions and was inconsistently communicated to bidding companies. The Department had to retract and re-tender the £9 billion, ten-year contract, refunding £40 million to the four bidders. The entire episode was thought to have cost the taxpayer up to £300 million.

In 1999, the government-owned company British Nuclear Fuels Limited admitted that some of its staff had falsified three years’ worth of safety data on fuel it was exporting by copying and pasting reams of data in spreadsheets. Customers immediately ceased trading with BNFL and insisted the UK government take the defective shipments of nuclear fuel back to the UK and pay compensation.

While this was a deliberate act rather than a software error, spreadsheets should not have been the tool of choice in this scenario because they are open to manipulation and error. A bespoke system should have existed for this important safety-critical process.

Everyone has a horror story. Andrey Popov/Shutterstock

Perhaps the most significant spreadsheet problem occurred in 2010, when then-chancellor George Osborne based his justification for a decade of public austerity on the conclusions of a research paper, Growth in a Time of Debt. The paper, published by Harvard University economics professors Carmen Reinhart and Kenneth Rogoff, asserted that a country’s economy will shrink when its debt exceeds 90% of GDP.

But this analysis was flawed. The spreadsheet used to calculate the figures omitted some rows of relevant data from the calculation. In fact, the data showed that an excess of 90% debt to GDP still resulted in positive economic growth. And so the argument for austerity was also flawed.

Yet as a result of this argument, the UK suffered significant cuts to welfare and public services, including health and social care. In 2017, a paper in the British Medical Journal attributed 120,000 excess deaths to these cuts.

Not enough testing

The underlying problem with spreadsheets is how easy it is to use them without applying adequate scrutiny, oversight and validation. They can include many complex customised algorithms but are not typically built by software engineers who are trained to create reliable software.

Instead, research shows, they are often built without sufficient planning, any control process or testing. This results in lurking data integrity problems that, given the right circumstances, can suddenly cause catastrophe.

Such a lack of testing appears to have been exactly the problem with the spreadsheet used by the test and trace system, which lost 16,000 entries when a file was imported to an older version of the software that could hold less data. Clearly the importing of data was not tested properly, and a bug went unnoticed which eventually caused a catastrophic failure of the system in operation. If the importing of data had been tested, it would have been noticed and corrected.
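Reporting at the time attributed the loss to a legacy spreadsheet file format whose worksheets hold far fewer rows than the modern one: 65,536 rows for old .xls files versus 1,048,576 for .xlsx. Whatever the precise mechanism, this class of bug is easy to guard against with a single check; a hypothetical sketch:

```python
# Worksheet row limits for Excel's legacy and modern file formats.
MAX_ROWS = {"xls": 65_536, "xlsx": 1_048_576}

def export_would_truncate(n_rows: int, fmt: str) -> bool:
    """Flag an export that would silently drop rows, instead of letting it happen."""
    return n_rows > MAX_ROWS[fmt]
```

A guard like this turns silent data loss into a loud, testable failure before any file is written.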

Unless we wish to keep repeating the same mistakes with spreadsheets, government needs to take this software more seriously. It needs to apply the same engineering controls that it would if developing in a formal language and prevent the inappropriate use of spreadsheets.

This isn’t the first time government spreadsheet misuse has been connected with life or death decisions. And unless things change, it probably won’t be the last.

Simon Thorne, Senior Lecturer in Computing and ​Information Systems, Cardiff Metropolitan University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Coronavirus: The Conversation, on AI and treating Covid-19

Teaching computers to read health records is helping fight COVID-19 – here’s how

James Teo, King’s College London and Richard Dobson, King’s College London

Medical records are a rich source of health data. When combined, the information they contain can help researchers better understand diseases and treat them more effectively. This includes COVID-19. But to unlock this rich resource, researchers first need to read it.

We may have moved on from the days of handwritten medical notes, but the information recorded in modern electronic health records can be just as hard to access and interpret. It’s an old joke that doctors’ handwriting is illegible, but it turns out their typing isn’t much better.

The sheer volume of information contained in health records is staggering. Every day, healthcare staff in a typical NHS hospital generate so much text it would take a human an age just to scroll through it, let alone read it. Using computers to analyse all this data is an obvious solution, but far from simple. What makes perfect sense to a human can be highly difficult for a computer to understand.

Our team is using a form of artificial intelligence to bridge this gap. By teaching computers how to comprehend human doctors’ notes, we’re hoping they’ll uncover insights on how to fight COVID-19 by finding patterns across many thousands of patients’ records.

Why health records are hard going

A significant proportion of a health record is made up of free text, typed in narrative form like an email. This includes the patient’s symptoms, the history of their illness, and notes about pre-existing conditions and medications they’re taking. There may also be relevant information about family members and lifestyle mixed in too. And because this text has been entered by busy doctors, there will also be abbreviations, inaccuracies and typos.

Information doctors write in free-text boxes is rich in detail but poorly arranged for a machine to understand. logoboom/Shutterstock

This kind of information is known as unstructured data. For example, a patient’s record might say:

Mrs Smith is a 65-year-old woman with atrial fibrillation and had a CVA in March. She had a past history of a #NOF and OA. Family history of breast cancer. She has been prescribed apixaban. No history of haemorrhage.

This highly compact paragraph contains a large amount of data about Mrs Smith. Another human reading the notes would know what information is important and be able to extract it in seconds, but a computer would find the task extremely difficult.
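To make the gap concrete, here is a deliberately tiny, rule-based sketch of the kind of extraction involved, applied to the note above. The dictionary and negation pattern are our own illustrative inventions; real clinical NLP systems rely on machine-learned models and full medical terminologies rather than hand-written rules, and would also handle details this toy ignores, such as family history:

```python
import re

# Toy lexicon mapping surface forms to concepts -- purely illustrative.
LEXICON = {
    "atrial fibrillation": "atrial fibrillation",
    "cva": "stroke (cerebrovascular accident)",
    "#nof": "fractured neck of femur",
    "oa": "osteoarthritis",
    "apixaban": "apixaban (anticoagulant)",
    "haemorrhage": "haemorrhage",
}

NEGATION = re.compile(r"\bno (history|evidence) of\b", re.IGNORECASE)

def extract(note: str) -> list[tuple[str, str]]:
    """Return (concept, status) pairs found in each sentence of the note."""
    found = []
    for sentence in re.split(r"[.!?]", note):
        status = "negated" if NEGATION.search(sentence) else "affirmed"
        for term, concept in LEXICON.items():
            # Lookarounds instead of \b so terms like "#nof" still match.
            if re.search(r"(?<!\w)" + re.escape(term) + r"(?!\w)",
                         sentence, re.IGNORECASE):
                found.append((concept, status))
    return found
```

Even this crude version shows the two hard parts: recognising that “CVA” and “#NOF” are medical concepts at all, and noticing that “No history of haemorrhage” asserts the absence of a condition rather than its presence.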

Teaching machines to read

To solve this problem, we’re using something called natural language processing (NLP). Based on machine learning and AI technology, NLP algorithms translate the language used in free text into a standardised, structured set of medical terms that can be analysed by a computer.

These algorithms are extremely complex. They need to understand context, long strings of words and medical concepts, distinguish current events from historic ones, identify family relationships and more. We teach them to do this by feeding them existing written information so they can learn the structure and meaning of language – in this case, publicly available English text from the internet – and then use real medical records for further improvement and testing.

Using NLP algorithms to analyse and extract data from health records has huge potential to change healthcare. Much of what’s captured in narrative text in a patient’s notes is normally never seen again. This could be important information such as the early warning signs of serious diseases like cancer or stroke. Being able to automatically analyse and flag important issues could help deliver better care and avoid delays in diagnosis and treatment.

Finding ways to fight COVID-19

By drawing together health records, we’re now using these techniques to see patterns that are relevant to the pandemic. For example, we recently used our tools to discover whether drugs commonly prescribed to treat high blood pressure, diabetes and other conditions – known as angiotensin-converting enzyme inhibitors (ACEIs) and angiotensin receptor blockers (ARBs) – increase the chances of becoming severely ill with COVID-19.

The virus that causes COVID-19 infects cells by binding to a molecule on the cell surface called ACE2. Both ACEIs and ARBs are thought to increase the amount of ACE2 on the surface of cells, leading to concerns that these drugs could be putting people at increased risk from the virus.

The coronavirus (red) binds with ACE2 proteins (blue) on the cell’s surface (green) to gain entry. Kateryna Kon/Shutterstock

However, the information needed to answer this question – how many severely ill COVID-19 patients are being prescribed these drugs – can be recorded both as structured prescriptions and in free text in their medical records. That free text needs to be in a computer-searchable format for a machine to answer the question.

Using our NLP tools, we were able to analyse the anonymised records of 1,200 COVID-19 patients, comparing clinical outcomes with whether or not patients were taking these drugs. Reassuringly, we found that people prescribed ACEIs or ARBs were no more likely to be severely ill than those not taking the drugs.
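Once the records are structured, the comparison itself is, at heart, simple arithmetic. A sketch with made-up illustrative counts (not the study’s actual figures): if the proportion of severe outcomes is similar in both groups, the risk ratio sits near 1, which is what “no more likely” means:

```python
# Hypothetical counts for illustration only -- not the study's actual data.
taking_drug = {"severe": 30, "total": 200}
not_taking = {"severe": 150, "total": 1000}

def severe_rate(group: dict) -> float:
    """Proportion of patients in the group with a severe outcome."""
    return group["severe"] / group["total"]

# A risk ratio near 1.0 means taking the drug is associated with
# neither higher nor lower risk of severe illness.
risk_ratio = severe_rate(taking_drug) / severe_rate(not_taking)
```

A real analysis would also adjust for age, sex and pre-existing conditions, but the headline quantity is this ratio.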

We’re now expanding how we use these tools to find out more about who is most at risk from COVID-19. For instance, we’ve used them to investigate the links between ethnicity, pre-existing health conditions and COVID-19. This has revealed several striking things: that being black or of mixed ethnicity makes you more likely to be admitted to hospital with the disease, and that Asian patients, when in hospital, are at greater risk of being admitted to intensive care or dying from COVID-19.

We’ve also used these tools to evaluate the early warning scores that predict which patients admitted to hospital are most likely to become severely ill, and to suggest what additional measures could be used to improve these scores. We’re also using the technology to predict upcoming surges of COVID-19 cases, based on patients’ symptoms that doctors have recorded.

James Teo, Neurologist, Clinical Director of Data and AI and Clinical Senior Lecturer, King’s College London and Richard Dobson, Professor in Health Informatics, King’s College London

This article is republished from The Conversation under a Creative Commons license. Read the original article.

The Conversation, on the Simulation Hypothesis

Curious Kids: could our entire reality be part of a simulation created by some other beings?

Sam Baron, Australian Catholic University

Is it possible the whole observable universe is just a thing kept in a container, in a room where there are some other extraterrestrial beings much bigger than us? Kanishk, Year 9

Hi Kanishk!

I’m going to interpret your question in a slightly different way.

Let’s assume these extraterrestrial beings have a computer on which our universe is being “simulated”. Simulated worlds are pretend worlds – a bit like the worlds on Minecraft or Fortnite, which are both simulations created by us.

If we think about it like this, it also helps to suppose these “beings” are similar to us. They’d have to at least understand us to be able to simulate us.

By narrowing the question down, we’re now asking: is it possible we’re living in a computer simulation run by beings like us? University of Oxford professor Nick Bostrom has thought a lot about this exact question. And he argues the answer is “yes”.

Not only does Bostrom think it’s possible, he thinks there’s a decent probability it’s true. Bostrom’s theory is known as the Simulation Hypothesis.

A simulated world that feels real

I want you to imagine there are many civilisations like ours dotted all around the universe. Also imagine many of these civilisations have developed advanced technology that lets them create computer simulations of a time in their own past (a time before they developed the technology).

Do you think our whole world could be created by someone using more advanced technology than we have today? Yash Raut/Unsplash

The people in these simulations are just like us. They are conscious (aware) beings who can touch, taste, move, smell and feel happiness and sadness. However, they have no way of proving they’re in a simulation and no way to “break out”.

Read more: Curious Kids: is time travel possible for humans?

Hedge your bets

According to Bostrom, if these simulated people (who are so much like us) don’t realise they’re in a simulation, then it’s possible you and I are too.

Suppose I guess we’re not in a simulation and you guess we are. Who guessed best?

Let’s say there is just one “real” past. But these futuristic beings are also running many simulations of the past — different versions they made up.

They could be running any number of simulations (it doesn’t change the point Bostrom is trying to make) — but let’s go with 200,000. Our guessing-game then is a bit like rolling a die with 200,000 sides.

When I guess we are not simulated, I’m betting the die will be a specific number (let’s make it 2), because there can only be one possible reality in which we’re not simulated.

This means in every other scenario we are simulated, which is what you guessed. That’s like betting the die will roll anything other than 2. So your bet is a far better one.
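The arithmetic behind the bet is worth making explicit. Using the article’s illustrative figure of a 200,000-sided die, with exactly one face corresponding to the single unsimulated reality:

```python
# One die face per possible reality; only one face means "not simulated".
n_outcomes = 200_000

p_not_simulated = 1 / n_outcomes      # betting on one specific face
p_simulated = 1 - p_not_simulated     # betting on any other face
```

Betting “simulated” wins 199,999 times out of 200,000, which is why, under these assumptions, it is the far better guess.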

Simulated or not simulated, would you bet on it? Ian Gonzalez/Unsplash

Are we simulated?

Does that mean we’re simulated? Not quite.

The odds are only against my guess if we assume these beings exist and are running simulations.

But, how likely is it there are beings so advanced they can run simulations with people who are “conscious” like us in the first place? Suppose this is very unlikely. Then it would also be unlikely our world is simulated.

Second, how likely is it such beings would run simulations even if they could? Maybe they have no interest in doing this. This, too, would mean it’s unlikely we are simulated.

Laying out all our options

Before us, then, are three possibilities:

  1. there are technologically advanced beings who can (and do) run many simulations of people like us (likely including us)

  2. there are technologically advanced beings who can run simulations of people like us, but don’t do this for whatever reason

  3. there are no beings technologically advanced enough to run simulations of people like us.

But are these really the only options available? The answer seems to be “yes”.

You might disagree by bringing up one of several theories suggesting our universe is not a simulation. For example, what if we’re all here because of the Big Bang (as science suggests), rather than by a simulation?

That’s a good point, but it actually fits within the Simulation Hypothesis, under options 2 and 3, in which we’re not simulated. It doesn’t go against it. This is why the theory leaves us with only three options, one of which then must be true.

So which is it? Sadly, we don’t have enough evidence to help us decide.

Read more: Curious Kids: what started the Big Bang?

The principle of indifference

When we’re faced with a set of options and there is not enough evidence to believe one over the others, we should give an equal “credence” to each option. Generally speaking, credence is how likely you believe something to be true based on the evidence available.

Giving equal credence in cases such as the Simulation Hypothesis is an example of what philosophers call the “principle of indifference”.

Suppose you place a cookie on your desk and leave the room. When you come back, it’s gone. In the room with you were three people, all of whom are strangers to you.

You have to start by piecing together what you know. You know someone in the room took the cookie. If you knew person A had been caught stealing cookies in the past, you could guess it was probably them. But on this occasion, you don’t know anything about these people.

Would it be fair to accuse anyone in particular? No.

Our universe, expanding

And so it is with the simulation argument. We don’t have enough information to help us select between the three options.

What we do know is if option 1 is true, then we’re very likely to be in a simulation. In options 2 and 3, we’re not. Thus, Bostrom’s argument seems to imply our credence of being simulated is roughly 1 in 3.

To put this into perspective, your credence in getting “heads” when you flip a coin should be 1 in 2. And your credence in winning the largest lottery in the world should be around 1 in 300,000,000 (if you believe it isn’t rigged).

If that makes you a little nervous, it’s worth remembering we might make discoveries in the future that could change our credences. What that information might be and how we might discover it, however, remains unknown.

Famous astrophysicist Neil deGrasse Tyson has said it’s “hard to argue against” Bostrom’s Simulation Hypothesis.

Sam Baron, Associate professor, Australian Catholic University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

The Conversation, on Quantum Computation

Major quantum computational breakthrough is shaking up physics and maths

Quantum computers may be more trustworthy. Yurchanka Siarhei/Shutterstock
Ittay Weiss, University of Portsmouth

MIP* = RE is not a typo. It is a groundbreaking discovery and the catchy title of a recent paper in the field of quantum complexity theory. Complexity theory is a zoo of “complexity classes” – collections of computational problems – of which MIP* and RE are but two.

The 165-page paper shows that these two classes are the same. That may seem like an insignificant detail in an abstract theory without any real-world application. But physicists and mathematicians are flocking to visit the zoo, even though they probably don’t understand it all. Because it turns out the discovery has astonishing consequences for their own disciplines.

In 1936, Alan Turing showed that the Halting Problem – algorithmically deciding whether a computer program halts or loops forever – cannot be solved. Modern computer science was born. Its success made the impression that soon all practical problems would yield to the tremendous power of the computer.

But it soon became apparent that, while some problems can be solved algorithmically, the actual computation can last until long after our Sun has engulfed the computer performing it. Figuring out how to solve a problem algorithmically was not enough. It was vital to classify solutions by efficiency. Complexity theory classifies problems according to how hard they are to solve, where the hardness of a problem is measured in terms of how long the computation lasts.

RE stands for the class of recursively enumerable problems: those for which a computer can at least verify a “yes” answer, even if it cannot always decide the answer. It is the zoo. Let’s have a look at some subclasses.

The class P consists of problems which a known algorithm can solve quickly (technically, in polynomial time). For instance, multiplying two numbers belongs to P since long multiplication is an efficient algorithm to solve the problem. The problem of finding the prime factors of a number is not known to be in P; the problem can certainly be solved by a computer but no known algorithm can do so efficiently. A related problem, deciding if a given number is a prime, was in similar limbo until 2004 when an efficient algorithm showed that this problem is in P.
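The contrast is easy to see in code. Trial division, sketched below, certainly factors a number, but it may need roughly √n steps, and √n grows exponentially with the number of digits of n, which is why it is not an efficient (polynomial-time) algorithm in the sense of P:

```python
def prime_factors(n: int) -> list[int]:
    """Factor n by trial division: always correct, but slow for large n."""
    factors = []
    d = 2
    while d * d <= n:          # try candidate divisors up to sqrt(n)
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:                  # whatever remains is itself prime
        factors.append(n)
    return factors
```

For a 300-digit number of the kind used in cryptography, this loop would run for an astronomically long time, even though the algorithm is plainly correct.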

Another complexity class is NP. Imagine a maze. “Is there a way out of this maze?” is a yes/no question. If the answer is yes, then there is a simple way to convince us: simply give us the directions, we’ll follow them, and we’ll find the exit. If the answer is no, however, we’d have to traverse the entire maze without ever finding a way out to be convinced.

Such yes/no problems for which, if the answer is yes, we can efficiently demonstrate it, belong to NP. Any solution to a problem serves to convince us of the answer, and so P is contained in NP. Surprisingly, a million-dollar question is whether P = NP. Nobody knows.
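The maze example can be made concrete: checking a proposed escape route takes time proportional to the route’s length, regardless of how hard the route was to find. A hypothetical verifier sketch (the grid encoding is our own):

```python
def verify_escape(maze: list[str], start: tuple[int, int], route: str) -> bool:
    """Follow a proposed route; succeed only if it leaves the maze
    without ever walking into a wall ('#')."""
    moves = {"N": (-1, 0), "S": (1, 0), "E": (0, 1), "W": (0, -1)}
    rows, cols = len(maze), len(maze[0])   # assume a rectangular grid
    r, c = start
    for step in route:
        dr, dc = moves[step]
        r, c = r + dr, c + dc
        if not (0 <= r < rows and 0 <= c < cols):
            return True        # stepped off the grid: we escaped
        if maze[r][c] == "#":
            return False       # walked into a wall
    return False               # route ended while still inside
```

Finding a route may require exploring the whole maze; checking one is a single pass over the directions, which is the essence of membership in NP.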

Trust in machines

The classes described so far represent problems faced by a normal computer. But computers are fundamentally changing – quantum computers are being developed. If a new type of computer comes along and claims to solve one of our problems, how can we trust it is correct?

Complexity science helps explain what problems a computer can solve. Phatcharapon/Shutterstock

Imagine an interaction between two entities, an interrogator and a prover. In a police interrogation, the prover may be a suspect attempting to prove their innocence. The interrogator must decide whether the prover is sufficiently convincing. There is an imbalance; knowledge-wise the interrogator is in an inferior position.

In complexity theory, the interrogator is the person, with limited computational power, trying to solve the problem. The prover is the new computer, which is assumed to have immense computational power. An interactive proof system is a protocol that the interrogator can use in order to determine, at least with high probability, whether the prover should be believed. By analogy, these are crimes that the police may not be able to solve, but at least innocents can convince the police of their innocence. This is the class IP.

If multiple provers can be interrogated, and the provers are not allowed to coordinate their answers (as is typically the case when the police interrogate multiple suspects), then we get to the class MIP. Such interrogations, by cross-examining the provers’ responses, give the interrogator greater power, so MIP contains IP.

Quantum communication is a new form of communication carried out with qubits. Entanglement – a quantum feature in which qubits remain “spookily” linked even when separated – makes quantum communication fundamentally different to ordinary communication. Allowing the provers of MIP to share entangled qubits leads to the class MIP*.

It seems obvious that communication between the provers can only serve to help them coordinate lies rather than assist the interrogator in discovering truth. For that reason, nobody expected that giving the provers this extra resource would make more computational problems reliably verifiable. Surprisingly, we now know that MIP* = RE. This means that quantum communication behaves wildly differently to normal communication.

Far-reaching implications

In the 1970s, Alain Connes formulated what became known as the Connes Embedding Problem. Grossly simplified, this asked whether infinite matrices can be approximated by finite matrices. This new paper has now proved this isn’t possible – an important finding for pure mathematicians.

In 1993, meanwhile, Boris Tsirelson pinpointed a problem in physics now known as Tsirelson’s Problem. This was about two different mathematical formalisms of a single situation in quantum mechanics – to date an incredibly successful theory that explains the subatomic world. Being two different descriptions of the same phenomenon, the two formalisms were expected to be mathematically equivalent.

But the new paper now shows that they aren’t. Exactly how they can both still yield the same results and both describe the same physical reality is unknown, but it is why physicists are also suddenly taking an interest.

Time will tell what other unanswered scientific questions will yield to the study of complexity. Undoubtedly, MIP* = RE is a great leap forward.

Ittay Weiss, Senior Lecturer, University of Portsmouth

This article is republished from The Conversation under a Creative Commons license. Read the original article.

The Conversation, on AI Chatbots

AI called GPT-3 can write like a human but don’t mistake that for thinking – neuroscientist

Bas Nastassia/Shutterstock
Guillaume Thierry, Bangor University

Since it was unveiled earlier this year, the new AI-based language generating software GPT-3 has attracted much attention for its ability to produce passages of writing that are convincingly human-like. Some have even suggested that the program, created by OpenAI (which Elon Musk co-founded), may be considered, or appears to exhibit, something like artificial general intelligence (AGI) – the ability to understand or perform any task a human can. This breathless coverage reveals a natural yet aberrant collusion in people’s minds between the appearance of language and the capacity to think.

Language and thought, though obviously not the same, are strongly and intimately related. And some people tend to assume that language is the ultimate sign of thought. But language can be easily generated without a living soul. All it takes is the digestion of a database of human-produced language by a computer program, AI-based or not.

Based on the relatively few samples of text available for examination, GPT-3 is capable of producing excellent syntax. It boasts a wide range of vocabulary, owing to an unprecedentedly large knowledge base from which it can generate thematically relevant, highly coherent new statements. Yet, it is profoundly unable to reason or show any sign of “thinking”.

For instance, one passage written by GPT-3 predicts you could suddenly die after drinking cranberry juice with a teaspoon of grape juice in it. This is despite the system having access to information on the web that grape juice is edible.

Another passage suggests that to bring a table through a doorway that is too small you should cut the door in half. A system that could understand what it was writing or had any sense of what the world was like would not generate such aberrant “solutions” to a problem.

If the goal is to create a system that can chat, fair enough. GPT-3 shows AI will certainly lead to better experiences than what has been available until now. And it certainly allows for a good laugh.

But if the goal is to get some thinking into the system, then we are nowhere near. That’s because AI such as GPT-3 works by “digesting” colossal databases of language content to produce “new”, synthesised language content.

The source is language; the product is language. In the middle stands a mysterious black box a thousand times smaller than the human brain in capacity and nothing like it in the way it works.

Reconstructing the thinking that is at the origin of the language content we observe is an impossible task, unless you study the thinking itself. As philosopher John Searle put it, only “machines with internal causal powers equivalent to those of brains” could think.

And for all our advances in cognitive neuroscience, we know deceptively little about human thinking. So how could we hope to program it into a machine?

What mesmerises me is that people go to the trouble of suggesting what kind of reasoning AI like GPT-3 should be able to engage in. This is really strange, and in some ways amusing – if not worrying.

Why would a computer program, based on AI or not, and designed to digest hundreds of gigabytes of text on many different topics, know anything about biology or social reasoning? It has no actual experience of the world. It cannot have any human-like internal representation.

It appears that many of us fall victim to a mind-language causation fallacy. Supposedly there is no smoke without fire, no language without mind. But GPT-3 is a language smoke machine, entirely hollow of any actual human trait or psyche. It is just an algorithm, and there is no reason to expect that it could ever deliver any kind of reasoning. Because it cannot.

Filling in the gaps

Part of the problem is the strong illusion of coherence we get from reading a passage produced by AI such as GPT-3 because of our own abilities. Our brains were created by hundreds of thousands of years of evolution and tens of thousands of hours of biological development to extract meaning from the world and construct a coherent account of any situation.

When we read a GPT-3 output, our brain is doing most of the work. We construct meaning that was never intended, simply because the language looks and feels coherent and thematically sound, and so we connect the dots. We are so used to doing this, in every moment of our lives, that we don’t even realise it is happening.

We relate the points made to one another and we may even be tempted to think that a phrase is cleverly worded simply because the style may be a little odd or surprising. And if the language is particularly clear, direct and well constructed (which is what AI generators are optimised to deliver), we are strongly tempted to infer sentience, where there is no such thing.

When GPT-3’s predecessor GPT-2 wrote, “I am interested in understanding the origins of language,” who was doing the talking? The AI just spat out an ultra-shrunk summary of our ultimate quest as humans, picked up from an ocean of stored human language productions – our endless trying to understand what is language and where we come from. But there is no ghost in the shell, whether we “converse” with GPT-2, GPT-3, or GPT-9000.

Guillaume Thierry, Professor of Cognitive Neuroscience, Bangor University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

The Conversation, on password breakers

A computer can guess more than 100,000,000,000 passwords per second. Still think yours is secure?

Paul Haskell-Dowland, Author provided
Paul Haskell-Dowland, Edith Cowan University and Brianna O’Shea, Edith Cowan University

Passwords have been used for thousands of years as a means of identifying ourselves to others and, in more recent times, to computers. It’s a simple concept – a shared piece of information, kept secret between individuals and used to “prove” identity.

Passwords in an IT context emerged in the 1960s with mainframe computers – large centrally operated computers with remote “terminals” for user access. They’re now used for everything from the PIN we enter at an ATM, to logging in to our computers and various websites.

But why do we need to “prove” our identity to the systems we access? And why are passwords so hard to get right?

Read more: The long history, and short future, of the password

What makes a good password?

Until relatively recently, a good password might have been a word or phrase of as little as six to eight characters. But we now have minimum length guidelines. This is because of “entropy”.

When talking about passwords, entropy is a measure of unpredictability. The maths behind this isn’t complex, but let’s examine it with an even simpler measure: the number of possible passwords, sometimes referred to as the “password space”.

If a one-character password only contains one lowercase letter, there are only 26 possible passwords (“a” to “z”). By including uppercase letters, we increase our password space to 52 potential passwords.

The password space continues to expand as the length is increased and other character types are added.

Making a password longer or more complex greatly increases the potential ‘password space’. More password space means a more secure password.

Looking at the above figures, it’s easy to understand why we’re encouraged to use long passwords with upper and lowercase letters, numbers and symbols. The more complex the password, the more attempts needed to guess it.
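A few lines of Python (my illustration, not from the original article) reproduce these figures and show how quickly the space grows:

```python
import string

def password_space(length, alphabet):
    # Each position can hold any character in the alphabet,
    # so the count is the alphabet size raised to the length.
    return len(alphabet) ** length

lower = string.ascii_lowercase  # 26 characters
mixed = string.ascii_letters    # 52 characters: lower plus upper case
full = string.ascii_letters + string.digits + string.punctuation  # 94

print(password_space(1, lower))  # 26 one-character passwords
print(password_space(1, mixed))  # 52 once uppercase is included
print(password_space(8, full))   # quadrillions for 8 mixed characters
```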

However, the problem with depending on password complexity is that computers are highly efficient at repeating tasks – including guessing passwords.

Last year, a record was set for a computer trying to generate every conceivable password. It achieved a rate faster than 100,000,000,000 guesses per second.

By leveraging this computing power, cyber criminals can hack into systems by bombarding them with as many password combinations as possible, a technique known as a brute force attack.

And with cloud-based technology, guessing an eight-character password can be achieved in as little as 12 minutes and cost as little as US$25.
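Dividing the password space by the guess rate quoted above gives a rough worst-case cracking time (a back-of-envelope sketch; real attacks use smarter guess orderings and specialised hardware):

```python
GUESSES_PER_SECOND = 100_000_000_000  # the record rate quoted above

def worst_case_seconds(alphabet_size, length):
    # Time to try every password of this length at that rate.
    return alphabet_size ** length / GUESSES_PER_SECOND

lowercase_8 = worst_case_seconds(26, 8)  # about 2 seconds
full_8 = worst_case_seconds(94, 8)       # roughly 17 hours
```

Eight lowercase letters fall in about two seconds; using all 94 printable characters pushes the worst case to roughly 17 hours at this rate – longer, but still well within an attacker’s patience.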

Also, because passwords are almost always used to give access to sensitive data or important systems, this motivates cyber criminals to actively seek them out. It also drives a lucrative online market selling passwords, some of which come with email addresses and/or usernames.

You can purchase almost 600 million passwords online for just AU$14!

How are passwords stored on websites?

Website passwords are usually stored in a protected manner using a mathematical algorithm called hashing. A hashed password is unrecognisable and can’t be turned back into the password (an irreversible process).

When you try to log in, the password you enter is hashed using the same process and compared to the version stored on the site. This process is repeated each time you log in.

For example, the password “Pa$$w0rd” is given the value “02726d40f378e716981c4321d60ba3a325ed6a4c” when calculated using the SHA1 hashing algorithm. Try it yourself.
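The value above can be reproduced with Python’s standard hashlib (a sketch of the idea only – real sites should also add a per-user “salt” and use a deliberately slow hash rather than plain SHA1):

```python
import hashlib

def hash_password(password):
    # A hash is one-way: quick to compute, infeasible to reverse.
    return hashlib.sha1(password.encode("utf-8")).hexdigest()

def check_login(attempt, stored_hash):
    # At login, the entered password is hashed with the same
    # algorithm and the two hex digests are compared.
    return hash_password(attempt) == stored_hash

stored = hash_password("Pa$$w0rd")
print(stored)  # 02726d40f378e716981c4321d60ba3a325ed6a4c
```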

When faced with a file full of hashed passwords, a brute force attack can be used, trying every combination of characters for a range of password lengths. This has become such common practice that there are websites that list common passwords alongside their (calculated) hashed value. You can simply search for the hash to reveal the corresponding password.

This screenshot of a Google search result for the SHA hashed password value ‘02726d40f378e716981c4321d60ba3a325ed6a4c’ reveals the original password: ‘Pa$$w0rd’.

The theft and selling of password lists is now so common that a dedicated website — haveibeenpwned.com — is available to help users check if their accounts are “in the wild”. This has grown to include more than 10 billion account details.

If your email address is listed on this site, you should definitely change the detected password on that site, as well as on any other sites for which you use the same credentials.

Read more: Will the hack of 500 million Yahoo accounts get everyone to protect their passwords?

Is more complexity the solution?

You would think with so many password breaches occurring daily, we would have improved our password selection practices. Unfortunately, last year’s annual SplashData password survey has shown little change over five years.

The 2019 annual SplashData password survey revealed the most common passwords from 2015 to 2019.

As computing capabilities increase, the solution would appear to be increased complexity. But as humans, we are not skilled at (nor motivated to) remember highly complex passwords.

We’ve also passed the point where we use only two or three systems needing a password. It’s now common to access numerous sites, with each requiring a password (often of varying length and complexity). A recent survey suggests there are, on average, 70-80 passwords per person.

The good news is there are tools to address these issues. Most computers now support password storage in either the operating system or the web browser, usually with the option to share stored information across multiple devices.

Examples include Apple’s iCloud Keychain and the ability to save passwords in Internet Explorer, Chrome and Firefox (although this is less reliable).

Password managers such as KeePassXC can help users generate long, complex passwords and store them in a secure location for when they’re needed.

While this location still needs to be protected (usually with a long “master password”), using a password manager lets you have a unique, complex password for every website you visit.

This won’t prevent a password from being stolen from a vulnerable website. But if it is stolen, you won’t have to worry about changing the same password on all your other sites.

There are of course vulnerabilities in these solutions too, but perhaps that’s a story for another day.

Read more: Facebook hack reveals the perils of using a single account to log in to other services

Paul Haskell-Dowland, Associate Dean (Computing and Security), Edith Cowan University and Brianna O’Shea, Lecturer, Ethical Hacking and Defense, Edith Cowan University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

The Conversation, on Apple and its new privacy-centric approach to data collection

Apple is starting a war over privacy with iOS 14 – publishers are naive if they think it will back down

Ten four, let’s go to war! DANIEL CONSTANTE
Ana Isabel Domingos Canhoto, Brunel University London

iPhone users are about to receive access to Apple’s latest mobile operating system, iOS 14. It will come with the usual array of shiny new features, but the real game-changer will be missing – at least until January.

For the first time, iOS 14 will require apps to get permission from users before collecting their data – making this compromise to their privacy opt-in rather than automatic.

This caused a major backlash from companies that rely on this data to make money, most notably Facebook. So why did Apple decide to jeopardise the business models of major rivals and their advertisers, and will the postponement make any difference?

The backlash

The opt-in is not the only change in iOS 14 that gives users more privacy protection, but it has attracted the most attention. Privacy campaigners will applaud the move, but the reaction from the media business has been mixed. The likes of American online publishing trade body Digital Content Next thought it would potentially benefit members.

But Facebook warned the opt-in could halve publishers’ revenues on its advertising platform, while some publishers are loudly concerned. The owner of UK news site Mail Online, DMG Media, threatened to delete its app from the App Store.

Facebook logo with a silhouette of a padlock
You are the product. Ink Drop

Whether publishers win or lose very much depends on their business model and customer base. Publishers’ model of selling their product to consumers and selling space to advertisers has been badly damaged by the internet. All the free content online drove down physical sales, which in turn eroded advertising income.

A few publications, like The Times in the UK, managed to convert readers into online subscribers, but the majority didn’t. Consequently online advertising revenues have become very important for most publishers – particularly behavioural or targeted advertising, which displays different ads to different viewers of the same page according to factors like their location, browser, and which websites they have visited. The adverts they see are decided by an ad trader, which is often Google.

Despite the importance of behavioural advertising to online publishers, they receive under 30% of what advertisers pay. Most of the remainder goes to Google and Facebook.

These two companies’ power comes from ad trading and from owning many of the platforms on which publishers’ content is consumed – be it Facebook, Instagram, YouTube or the Google search engine – and selling advertising on the back of the user data. To increase behavioural advertising income on their own sites, publishers are left with either attracting lots of viewers via clickbait or inflammatory content, or attracting difficult-to-reach, valuable customers via niche content.

Clickbait tends to upset customers, especially highly educated ones, while niche content tends to be too smalltime for large media publishers. The reason some publishers welcome Apple’s move is that it could give them more control over advertising again, not only through selling more traditional display ads but also by developing other streams such as subscriptions and branded content.

For instance, the New York Times saw its ad revenues increase when it ditched targeted ads in favour of traditional online display in Europe in 2018, in response to the GDPR data protection restrictions. Conversely, DMG Media objects to iOS 14 because it collects and sells customer data on the Mail Online app, and also uses content with shock value to attract visitors and advertisers.

Privacy politics

Another important factor is the growing pushback against highly targeted advertising. With online users becoming increasingly concerned about online privacy, they are likely to engage less with ads, which reduces publishers’ income. They might also stop visiting sites displaying the targeted ads.

This is particularly true of more educated users, so curbing data collection could help publishers who serve these people. Online advertising also attracts more clicks when users control their data, so this could be a selling point to advertisers.

More generally, making traditional display advertising more important will benefit large publishers, since they have bigger audiences to sell to advertisers; but also those with clearly defined niche audiences (the Financial Times, say), since they offer a great way for advertisers to reach these people.

Online advertising represents 99% of Facebook revenues, so its resistance is not surprising. Online advertising is also important to Google revenues, though less so, and Google is also betting on the growing importance of consumer privacy by limiting data collection too – for instance, by third-party websites on the Chrome browser.

Apple’s perspective

Apple has little to gain here in the short term. It may even lose out if the likes of Mail Online leave the platform. But this is not a short-term move.

Apple wants to be known for a few things, such as user-friendly interfaces. It is also known for not aggressively collecting and exploiting user data, and standing up for consumer privacy.

Following the Cambridge Analytica scandal, which exposed Facebook’s lax privacy practices, Apple CEO Tim Cook famously said his company would never monetise customers’ information because privacy was a human right. The iOS 14 unveiling fits this approach. It helps Apple differentiate itself from competitors. It protects the company from privacy scandals. And it helps develop customer trust.

Tim Cook standing below a sign that says 'liberal'
Privacy pays. John Gress Media Inc

Moreover, Apple doesn’t need to exploit customers’ data. Apple’s revenues derive mostly from hardware products and software licences. This business model requires large upfront investment and constant innovation, but is difficult to copy (due to patents, technical capacity and talent) and creates high barriers to entry.

Therefore Apple’s decision to postpone the opt-in until January is not a sign that it might backtrack on the feature. Privacy is core to Apple, and the company’s share of the app market is such that ultimately it is unlikely to feel threatened by some publishers withdrawing. The delay simply makes Apple look reasonable at a time when it is fighting accusations of monopolistic behaviour and unfair practices. So publishers should get ready for significant changes in the app ecosystem, whether they like it or not.

Ana Isabel Domingos Canhoto, Reader in Marketing, Brunel University London

This article is republished from The Conversation under a Creative Commons license. Read the original article.

The Conversation, on Robots in high-contact medical environments

Robots to be introduced in UK care homes to allay loneliness – that’s inhuman

Fay Bound Alberti, University of York

Some UK care homes are to deploy robots in an attempt to allay loneliness and boost mental health. The wheeled machines will “initiate rudimentary conversations, play residents’ favourite music, teach them languages, and offer practical help, including medicine reminders”. They are being introduced after an international trial found they reduced anxiety and loneliness.

These robots can hold basic conversations and be programmed to people’s interests. This is positive, but they are not a viable alternative to human interaction. It’s a sad state of affairs when robots are presented as solutions to human loneliness. Though intended as a way to fill in for carers in a “stretched social care system” rather than as a long-term solution, the use of robots is a slippery slope in removing the aged and infirm still further from the nerves and fibres of human interaction.

Robot companions have been trialled in the UK and Japan, from dogs that sit to attention to young women greeting isolated businessmen after a long day at work. They certainly serve a function in reminding people what it is to have companionship, helping with crude social interaction and providing cues to what it is to be human.

But robots cannot provide the altruism and compassion that should be at the core of a caring system. And they might even increase loneliness in the long term by reducing the actual contact people have with humans, and by increasing a sense of disconnect.

While there have been studies showing robotic pets can reduce loneliness, such research is generally based on a contrast with no interaction at all, rather than a comparison of human and robotic interaction.

It’s also important to factor in the role of novelty, which is often missing in care home environments. In 2007, a Japanese nursing home introduced Ifbot, a resident robot that provided emotional companionship, sang songs and gave trivia quizzes to elderly residents. The director of the facility reported that residents’ interest lasted about a month, after which they preferred “stuffed animals” to the “communication robot”.

Tactile connection

The preference for stuffed animals is, I think, important, because it also connects to the sensory experience of loneliness. Cuddly toys can be hugged and even temporarily moulded by the shape and temperature of the human body. Robots cannot. There is a limit to the sense of connection and embodied comfort that can come from robotic caregivers, or pets.

This is not only because robots show insufficient cultural awareness, and that their gestures might sometimes seem a little, well, mechanical. It’s because robots do not have flesh and blood, or even the malleability of a stuffed toy.

Consider the controversial experiments conducted by Harry Harlow in the 1950s, which showed that infant rhesus monkeys preferred the physical comfort of a cloth surrogate mother to a wire one, even when the wire mother provided milk. Similarly, robots lack the warmth of a human or animal companion. They don’t respond intuitively to the movement of their companions, or regulate the heartbeats of their owners through the simple power of touch.

Loneliness is a physical affliction as well as a mental one. Companionship can improve health and increase wellbeing, but only when it is the right kind.

Stroking a dog can be soothing for the person as well as the animal. Walking a dog also gets people out of the house where that is possible, and encourages social interaction.

As the owner of a young labrador, I am not always a fan of early rising. But I can see the positive emotional impact a pet has had on my young son, in contrast to many hours of technological absorption. An Xbox can’t curl up on your bed in the middle of the night to keep you warm.

And the infamous Labrador stink is like perfume to my son, who claims it makes him feel less lonely. So smell, as well as touch, is involved in loneliness – along with all the senses.

Aerial view of a cat and a dog on a person's lap.
Robots can’t do this. Chendongshan/Shutterstock.com


I am not a technophobe. In the Zoom world of COVID-19, technological solutions have a critical role in making people feel included, seen and listened to. In time, it may be that some of the distancing effects of technology, including the glitchy movements, whirring sounds and stilted body language will improve and become more naturalised. Similarly, robot companions may well in time become more lifelike. Who will remember the early, clunky days of Furreal pets?

But care robots are offering a solution that should not be needed. There is no reason for care home residents to be so devoid of human companionship (or animal support) that robot friends are the answer.

There is something dysfunctional about the infrastructure in which care is delivered, if robots are an economically motivated solution. Indeed, the introduction of robots into emotional care de-skills the complex work of caring, while commercialising and privatising responses to elderly loneliness.

It is often presented as “natural” or inevitable that elderly and infirm people live in homes, with other elderly and infirm people, shuttered away from the rest of the world. Care homes are an architectural way of concealing those that are least economically productive. There may be good homes, filled with happy residents, but there are many stories of people being ignored and neglected, especially during a pandemic.

How we care for the elderly and the infirm is a cultural and political choice. Historically, elderly and infirm people were part of the social fabric and extended families. With a globally ageing population, many countries are revisiting how best to restructure care homes in ways that reflect demographic, economic and cultural needs.

Care home schemes in the Netherlands house students with elderly people and are popular with both. With a little imagination, care homes can be radically rethought.

New technologies have a role to play in society, just as they always have had in history. But they shouldn’t be used to paper over the gaps left by a withdrawal of social care and a breakdown in what “community” means in the 21st century. That’s inhuman.

Fay Bound Alberti, Reader in History and UKRI Future Leaders Fellow, University of York

This article is republished from The Conversation under a Creative Commons license. Read the original article.

The Conversation, on the Fourth Industrial Revolution and Agriculture

The fourth agricultural revolution is coming – but who will really benefit?

David Rose, University of Reading and Charlotte-Anne Chivers, University of Gloucestershire

Depending on who you listen to, artificial intelligence may either free us from monotonous labour and unleash huge productivity gains, or create a dystopia of mass unemployment and automated oppression. In the case of farming, some researchers, business people and politicians think the effects of AI and other advanced technologies are so great they are spurring a “fourth agricultural revolution”.

Given the potentially transformative effects of upcoming technology on farming – positive and negative – it’s vital that we pause and reflect before the revolution takes hold. It must work for everyone, whether it be farmers (regardless of their size or enterprise), landowners, farm workers, rural communities or the wider public. Yet, in a recently published study led by the researcher Hannah Barrett, we found that policymakers and the media are framing the fourth agricultural revolution as overwhelmingly positive, without giving much focus to the potential negative consequences.

The first agricultural revolution occurred when humans started farming around 12,000 years ago. The second was the reorganisation of farmland from the 17th century onwards that followed the end of feudalism in Europe. And the third (also known as the green revolution) was the introduction of chemical fertilisers, pesticides and new high-yield crop breeds alongside heavy machinery in the 1950s and 1960s.

The fourth agricultural revolution, much like the fourth industrial revolution, refers to the anticipated changes from new technologies, particularly the use of AI to make smarter planning decisions and power autonomous robots. Such intelligent machines could be used for growing and picking crops, weeding, milking livestock and distributing agrochemicals via drone. Other farming-specific technologies include new types of gene editing to develop higher yielding, disease-resistant crops; vertical farms; and synthetic lab-grown meat.

These technologies are attracting huge amounts of funding and investment in the quest to boost food production while minimising further environmental degradation. This might, in part, be related to positive media coverage. Our research found that UK coverage of new farming technologies tends to be optimistic, portraying them as key to solving farming challenges.

However, many previous agricultural technologies were also greeted with similar enthusiasm before leading to controversy later on, such as with the first genetically modified crops and chemicals such as the now-banned pesticide DDT. Given wider controversies surrounding emergent technologies like nanotechnology and driverless cars, unchecked or blind techno-optimism is unwise.

We mustn’t assume that all of these new farming technologies will be adopted without overcoming certain barriers. Precedent tells us that benefits are unlikely to be spread evenly across society and that some people will lose out. We need to understand who might lose and what we can do about it, and ask wider questions such as whether new technologies will actually deliver as promised.

Robotic milking might be efficient but creates new stresses. Mark Brandon/Shutterstock

Robotic milking of cows provides a good example. In our research, a farmer told us that using robots had improved his work-life balance and allowed a disabled farm worker to avoid dextrous tasks on the farm. But they had also created a “different kind of stress” due to the resulting information overload and the perception that the farmer needed to be monitoring data 24/7.

The National Farmers’ Union (NFU) argues that new technologies could attract younger, more technically skilled entrants to an ageing workforce. Such breakthroughs could enable a wider range of people to engage in farming, with greater use of machinery dispelling its back-breaking image.

But existing farm workers at risk of being replaced by a machine or whose skills are unsuited to a new style of farming will inevitably be less excited by the prospect of change. And they may not enjoy being forced to spend less time working outside, becoming increasingly reliant on machines instead of their own knowledge.

Power imbalance

There are also potential power inequalities in this new revolution. Our research found that some farmers were optimistic about a high-tech future. But others wondered whether those with less capital, poor broadband availability, limited IT skills and little access to advice on how to use the technology would be able to benefit.

History suggests technology companies and larger farm businesses are often the winners of this kind of change, and benefits don’t always trickle down to smaller family farms. In the context of the fourth agricultural revolution, this could mean farmers not owning or being able to fully access the data gathered on their farms by new technologies, or becoming reliant on companies to maintain increasingly important and complex equipment.

Advanced machinery can tie farmers to tech firms. Scharfsinn/Shutterstock

The controversy surrounding GM crops (which are created by inserting DNA from other organisms) provides a frank reminder that there is no guarantee that new technologies will be embraced by the public. A similar backlash could occur if the public perceive gene editing (which instead involves making small, controlled changes to a living organism’s DNA) as tantamount to GM. Proponents of wearable technology for livestock claim it improves welfare, but the public might see the use of such devices as treating animals like machines.

Instead of blind optimism, we need to identify where benefits and disadvantages of new agricultural technology will occur and for whom. This process must include a wide range of people to help create society-wide responsible visions for the future of farming.

The NFU has said the fourth agricultural revolution is “exciting – as well as a bit scary … but then the two often go together”. It is time to discuss the scary aspects with the same vigour as the exciting part.

David Rose, Elizabeth Creak Associate Professor of Agricultural Innovation and Extension, University of Reading and Charlotte-Anne Chivers, Research Assistant, Countryside and Community Research Institute, University of Gloucestershire

This article is republished from The Conversation under a Creative Commons license. Read the original article.