Featured

Robo-Lawyers taking over? Not Quite

With the advances in technology we have seen over the last two decades, things have changed and will keep changing – one of the instigators of this change is Artificial Intelligence (AI).

They’re still not sentient; we’re still safe.

AI has advanced considerably across many sectors – automation aims to fulfil the same tasks with less human assistance, as with the automation of vehicles.
Car manufacturers such as Tesla are already using Autopilot technologies to assist human drivers with day-to-day tasks and to run diagnostics on and even repair their cars. Others such as Waymo are likewise working towards vehicles that are fully self-driving, with no human interaction.

The field of Law is however an altogether different kettle of fish.

Today, we see assisting AI that are wrongly called Robo-Lawyers – while it’s true that there are some tools such as DoNotPay which aim to serve the public, they are nonetheless rooted in case-law made by humans. Solicitors and Barristers also work with vendors such as iManage or NetDocuments which use ‘AI’ – in fact Machine Learning and Automation. We cannot replace a good old flesh-and-blood Lawyer just yet – perhaps never.

Thus, the term “robo-lawyer” doesn’t fully capture a complete Artificially Intelligent system that can thoroughly act on a matter: it can analyse data and sort files and information, yes, but as with the medical profession, upholding the laws of the land is simply too human a task to be left to a bare-metal server.

Solicitors and Barristers are not out of a job yet, nor do we think they will be in the near or far future.

Justice is an inherently ‘human’ concept

Artificial Intelligence is a very broad term, well explored by a Lords Committee on the subject. The APPG on AI would likely agree with us at Hayachi Services that rules and laws have been present in human societies for a very long time indeed.

So what will AI change? Can machines learn faster than humans, for example? Not really, no. Machines can process some kinds of information faster than humans, but the human mind is still the most powerful information processing system on this planet. Perhaps Bio-Computers may one day match a Solicitor’s years of expertise or a Barrister’s way with words, but because law is as organic as the fabric of our culture it is doubtful that machines will ever ‘lead the field’.

What we consider ‘AI’ today is actually Machine Learning, a very complicated way of saying that we train the computer to run a ‘monkey see, monkey do’ exercise and to detect patterns in line with the variables we humans have set. Indeed a lot of engineering and expertise goes into such an exercise, and it can greatly assist work in the field of Law (such as the LexisNexis Precedent Database) – but it is what it is.
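To make that concrete, here is a deliberately tiny sketch (not any vendor’s product, and with entirely made-up data) of what ‘detecting patterns in line with the variables we humans have set’ can look like: the program simply copies the label of the most similar past case, using only the features a human chose to record.

```python
# A minimal sketch of the 'monkey see, monkey do' idea behind Machine Learning.
# All case data below is invented; the point is that the program only sees the
# variables a human decided to encode.

# Each past "case": (claim_value_in_gbp, documents_supplied, deadline_missed) -> outcome
training_cases = [
    ((500,  True,  False), "likely_to_succeed"),
    ((300,  True,  False), "likely_to_succeed"),
    ((900,  False, True),  "likely_to_fail"),
    ((1200, False, False), "likely_to_fail"),
]

def distance(a, b):
    """Crude similarity between two cases, using only the encoded variables."""
    return sum((float(x) - float(y)) ** 2 for x, y in zip(a, b))

def predict(new_case):
    """1-nearest-neighbour: copy the label of the most similar past case."""
    _, label = min(training_cases, key=lambda item: distance(item[0], new_case))
    return label

print(predict((450, True, False)))  # -> "likely_to_succeed"
```

Everything the program ‘knows’ lives in those hand-picked variables and human-labelled outcomes – which is exactly why it assists, rather than replaces, a lawyer.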


I will always prefer a real lawyer to a robo-lawyer – what about you?

Featured

What is a CVE?

CVE stands for Common Vulnerabilities and Exposures; in essence, each CVE identifies a single security vulnerability that has been classified and can be remediated on its own.

At Hayachi Services we take the security of your organisation seriously, and so all of our Partners use CVEs to help keep you secure and classify risk on your systems.

For example, one of the modules that is offered with Panda Security’s Adaptive Defense is Patch Management. This tool uses CVE classifications on hundreds of applications to ensure that you can keep track of the Common Vulnerabilities your IT Estate has and remediate them with the click of a button.
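As a rough illustration of what ‘keeping track of CVEs’ means in practice, the sketch below queries NIST’s public National Vulnerability Database for a single CVE record. The endpoint and response layout shown are assumptions based on the public NVD CVE API and may differ from what your tooling uses; products like Patch Management do this kind of lookup at scale across your whole estate.

```python
# A rough sketch of checking one known CVE programmatically, independent of any
# vendor tool. The NVD endpoint and response fields used here are assumptions
# and may change; consult NIST's API documentation before relying on them.
import json
import urllib.request

CVE_ID = "CVE-2021-44228"  # example identifier (Log4Shell)
url = f"https://services.nvd.nist.gov/rest/json/cves/2.0?cveId={CVE_ID}"

with urllib.request.urlopen(url, timeout=30) as response:
    data = json.load(response)

for item in data.get("vulnerabilities", []):
    cve = item.get("cve", {})
    descriptions = cve.get("descriptions", [])
    summary = descriptions[0]["value"] if descriptions else "(no description)"
    print(cve.get("id"), "-", summary[:120])
```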

Equally, Red Hat’s Smart Management allows you to easily update and keep control of your Red Hat Enterprise Linux estate, at no extra cost to you – another bit of value when buying with Red Hat.

So then, why is it important to pay for world-class solutions to avoid these ‘Common’ Vulnerabilities?

A CVE can be a dangerous thing even if it is well known, in the same way that Phishing attacks, while common, still do incredible damage to businesses and individuals.

Businesses will at times use Legacy software, for compatibility and compliance reasons, as well as the simple fact that business-critical tools aren’t always available on newer systems.

For example, CERN is aware of the risk of legacy operating systems and takes steps to protect itself, and the example it uses is powerful: who would walk naked through a quarantine ward and expect not to be infected?

Being aware of the most obvious, common risks across your estate – and having the reporting, patching and control to decide what sits within your risk appetite and what needs to be secured immediately – is incredibly powerful.

The National Cyber Security Centre publishes weekly threat reports for exactly this reason: new threats develop organically every day, and keeping a finger on the pulse is essential.

How can Hayachi Services help?

Simple question, and a simple answer. Chat with us! We can talk about your risk appetite and suggest actions to help protect your business.

Our Partners work with the smallest SMEs and Charities all the way up to the Fortune 500, Defence and Critical Infrastructure, so we’ll certainly have something to allay your concerns.

We are proudly vendor-neutral, and while our favourites are our Partners from across the industry, that isn’t to say we will only recommend what we personally sell.

Favouritism doesn’t keep our clients safe, so we don’t do it!

You can find our Partners’ pages on CVEs below:

The Conversation, on AI Chatbots

AI called GPT-3 can write like a human but don’t mistake that for thinking – neuroscientist

Bas Nastassia/Shutterstock
Guillaume Thierry, Bangor University

Since it was unveiled earlier this year, the new AI-based language generating software GPT-3 has attracted much attention for its ability to produce passages of writing that are convincingly human-like. Some have even suggested that the program, created by Elon Musk’s OpenAI, may be considered, or appears to exhibit, something like artificial general intelligence (AGI), the ability to understand or perform any task a human can. This breathless coverage reveals a natural yet aberrant collusion in people’s minds between the appearance of language and the capacity to think.

Language and thought, though obviously not the same, are strongly and intimately related. And some people tend to assume that language is the ultimate sign of thought. But language can be easily generated without a living soul. All it takes is the digestion of a database of human-produced language by a computer program, AI-based or not.

Based on the relatively few samples of text available for examination, GPT-3 is capable of producing excellent syntax. It boasts a wide range of vocabulary, owing to an unprecedentedly large knowledge base from which it can generate thematically relevant, highly coherent new statements. Yet, it is profoundly unable to reason or show any sign of “thinking”.

For instance, one passage written by GPT-3 predicts you could suddenly die after drinking cranberry juice with a teaspoon of grape juice in it. This is despite the system having access to information on the web that grape juice is edible.

Another passage suggests that to bring a table through a doorway that is too small you should cut the door in half. A system that could understand what it was writing or had any sense of what the world was like would not generate such aberrant “solutions” to a problem.

If the goal is to create a system that can chat, fair enough. GPT-3 shows AI will certainly lead to better experiences than what has been available until now. And it certainly allows for a good laugh.

But if the goal is to get some thinking into the system, then we are nowhere near. That’s because AI such as GPT-3 works by “digesting” colossal databases of language content to produce “new”, synthesised language content.

The source is language; the product is language. In the middle stands a mysterious black box a thousand times smaller than the human brain in capacity and nothing like it in the way it works.

Reconstructing the thinking that is at the origin of the language content we observe is an impossible task, unless you study the thinking itself. As philosopher John Searle put it, only “machines with internal causal powers equivalent to those of brains” could think.

And for all our advances in cognitive neuroscience, we know deceptively little about human thinking. So how could we hope to program it into a machine?

What mesmerises me is that people go to the trouble of suggesting what kind of reasoning AI like GPT-3 should be able to engage with. This is really strange, and in some ways amusing – if not worrying.

Why would a computer program, based on AI or not, and designed to digest hundreds of gigabytes of text on many different topics, know anything about biology or social reasoning? It has no actual experience of the world. It cannot have any human-like internal representation.

It appears that many of us fall victim to a mind-language causation fallacy. Supposedly there is no smoke without fire, no language without mind. But GPT-3 is a language smoke machine, entirely hollow of any actual human trait or psyche. It is just an algorithm, and there is no reason to expect that it could ever deliver any kind of reasoning. Because it cannot.

Filling in the gaps

Part of the problem is the strong illusion of coherence we get from reading a passage produced by AI such as GPT-3 because of our own abilities. Our brains were created by hundreds of thousands of years of evolution and tens of thousands of hours of biological development to extract meaning from the world and construct a coherent account of any situation.

When we read a GPT-3 output, our brain is doing most of the work. We make sense that was never intended, simply because the language looks and feels coherent and thematically sound, and so we connect the dots. We are so used to doing this, in every moment of our lives, that we don’t even realise it is happening.

We relate the points made to one another and we may even be tempted to think that a phrase is cleverly worded simply because the style may be a little odd or surprising. And if the language is particularly clear, direct and well constructed (which is what AI generators are optimised to deliver), we are strongly tempted to infer sentience, where there is no such thing.

When GPT-3’s predecessor GPT-2 wrote, “I am interested in understanding the origins of language,” who was doing the talking? The AI just spat out an ultra-shrunk summary of our ultimate quest as humans, picked up from an ocean of stored human language productions – our endless trying to understand what is language and where we come from. But there is no ghost in the shell, whether we “converse” with GPT-2, GPT-3, or GPT-9000.The Conversation

Guillaume Thierry, Professor of Cognitive Neuroscience, Bangor University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

The Conversation, on password breakers

A computer can guess more than 100,000,000,000 passwords per second. Still think yours is secure?

Paul Haskell-Dowland, Author provided
Paul Haskell-Dowland, Edith Cowan University and Brianna O’Shea, Edith Cowan University

Passwords have been used for thousands of years as a means of identifying ourselves to others and in more recent times, to computers. It’s a simple concept – a shared piece of information, kept secret between individuals and used to “prove” identity.

Passwords in an IT context emerged in the 1960s with mainframe computers – large centrally operated computers with remote “terminals” for user access. They’re now used for everything from the PIN we enter at an ATM, to logging in to our computers and various websites.

But why do we need to “prove” our identity to the systems we access? And why are passwords so hard to get right?


Read more: The long history, and short future, of the password


What makes a good password?

Until relatively recently, a good password might have been a word or phrase of as little as six to eight characters. But we now have minimum length guidelines. This is because of “entropy”.

When talking about passwords, entropy is the measure of predictability. The maths behind this isn’t complex, but let’s examine it with an even simpler measure: the number of possible passwords, sometimes referred to as the “password space”.

If a one-character password only contains one lowercase letter, there are only 26 possible passwords (“a” to “z”). By including uppercase letters, we increase our password space to 52 potential passwords.

The password space continues to expand as the length is increased and other character types are added.

Making a password longer or more complex greatly increases the potential ‘password space’. More password space means a more secure password.
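For the curious, these figures can be reproduced with a few lines of code. This is a rough, single-machine illustration (assuming the 100 billion guesses per second rate mentioned below); real attackers run many machines in parallel.

```python
# A small sketch of the 'password space' and entropy calculation described above:
# the number of possible passwords is (alphabet size) ** (length), and entropy
# in bits is length * log2(alphabet size).
import math

alphabets = {
    "lowercase only (26)": 26,
    "upper + lowercase (52)": 52,
    "upper + lower + digits (62)": 62,
    "all printable ASCII (95)": 95,
}

GUESSES_PER_SECOND = 100_000_000_000  # the rate quoted later in this article

for label, size in alphabets.items():
    for length in (1, 8, 12):
        space = size ** length
        entropy_bits = length * math.log2(size)
        seconds = space / GUESSES_PER_SECOND
        print(f"{label}, length {length}: {space:.3g} passwords "
              f"(~{entropy_bits:.0f} bits), ~{seconds:.3g} s to try them all")
```

Even at that guess rate, each extra character multiplies the attacker’s work by the size of the alphabet.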

Looking at the above figures, it’s easy to understand why we’re encouraged to use long passwords with upper and lowercase letters, numbers and symbols. The more complex the password, the more attempts needed to guess it.

However, the problem with depending on password complexity is that computers are highly efficient at repeating tasks – including guessing passwords.

Last year, a record was set for a computer trying to generate every conceivable password. It achieved a rate faster than 100,000,000,000 guesses per second.

By leveraging this computing power, cyber criminals can hack into systems by bombarding them with as many password combinations as possible, in a process called brute force attacks.

And with cloud-based technology, guessing an eight-character password can be achieved in as little as 12 minutes and cost as little as US$25.

Also, because passwords are almost always used to give access to sensitive data or important systems, this motivates cyber criminals to actively seek them out. It also drives a lucrative online market selling passwords, some of which come with email addresses and/or usernames.

You can purchase almost 600 million passwords online for just AU$14!

How are passwords stored on websites?

Website passwords are usually stored in a protected manner using a mathematical algorithm called hashing. A hashed password is unrecognisable and can’t be turned back into the password (an irreversible process).

When you try to log in, the password you enter is hashed using the same process and compared to the version stored on the site. This process is repeated each time you log in.

For example, the password “Pa$$w0rd” is given the value “02726d40f378e716981c4321d60ba3a325ed6a4c” when calculated using the SHA1 hashing algorithm. Try it yourself.
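If you’d rather not trust an online hash calculator, a couple of lines of Python reproduce the example above and show how a site compares hashes at login. This is a simplified sketch – real sites should also salt passwords and use slower hash functions than SHA1.

```python
import hashlib

# Hashing the example password from the article
stored_hash = hashlib.sha1("Pa$$w0rd".encode("utf-8")).hexdigest()
print(stored_hash)  # should print 02726d40f378e716981c4321d60ba3a325ed6a4c

def login_attempt(entered: str) -> bool:
    # The site never needs the plain password - it only compares hashes
    return hashlib.sha1(entered.encode("utf-8")).hexdigest() == stored_hash

print(login_attempt("Pa$$w0rd"))   # True
print(login_attempt("Password1"))  # False
```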

When faced with a file full of hashed passwords, a brute force attack can be used, trying every combination of characters for a range of password lengths. This has become such common practice that there are websites that list common passwords alongside their (calculated) hashed value. You can simply search for the hash to reveal the corresponding password.
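A toy version of such a brute force attack looks like the sketch below: it tries every lowercase combination up to a given length and compares hashes. Real attacks use GPUs, much larger character sets and precomputed lists, so treat this purely as an illustration.

```python
# Toy brute-force sketch: try every short lowercase string until the hash matches.
import hashlib
from itertools import product
from string import ascii_lowercase

def crack(target_hash, max_length=4):
    for length in range(1, max_length + 1):
        for combo in product(ascii_lowercase, repeat=length):
            guess = "".join(combo)
            if hashlib.sha1(guess.encode()).hexdigest() == target_hash:
                return guess
    return None

# hash of the (very weak) password "cat"
target = hashlib.sha1(b"cat").hexdigest()
print(crack(target))  # -> "cat"
```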

This screenshot of a Google search result for the SHA hashed password value ‘02726d40f378e716981c4321d60ba3a325ed6a4c’ reveals the original password: ‘Pa$$w0rd’.

The theft and selling of password lists is now so common, a dedicated website — haveibeenpwned.com — is available to help users check if their accounts are “in the wild”. This has grown to include more than 10 billion account details.

If your email address is listed on this site you should definitely change the detected password, as well as on any other sites for which you use the same credentials.


Read more: Will the hack of 500 million Yahoo accounts get everyone to protect their passwords?


Is more complexity the solution?

You would think with so many password breaches occurring daily, we would have improved our password selection practices. Unfortunately, last year’s annual SplashData password survey has shown little change over five years.

The 2019 annual SplashData password survey revealed the most common passwords from 2015 to 2019.

As computing capabilities increase, the solution would appear to be increased complexity. But as humans, we are not skilled at (nor motivated to) remember highly complex passwords.

We’ve also passed the point where we use only two or three systems needing a password. It’s now common to access numerous sites, with each requiring a password (often of varying length and complexity). A recent survey suggests there are, on average, 70-80 passwords per person.

The good news is there are tools to address these issues. Most computers now support password storage in either the operating system or the web browser, usually with the option to share stored information across multiple devices.

Examples include Apple’s iCloud Keychain and the ability to save passwords in Internet Explorer, Chrome and Firefox (although less reliable).

Password managers such as KeePassXC can help users generate long, complex passwords and store them in a secure location for when they’re needed.

While this location still needs to be protected (usually with a long “master password”), using a password manager lets you have a unique, complex password for every website you visit.
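Generating such a password doesn’t require a particular product – the sketch below (not KeePassXC’s own code) uses Python’s cryptographically secure secrets module to build one.

```python
# Minimal sketch of generating a long, random password with a cryptographically
# secure source of randomness.
import secrets
import string

ALPHABET = string.ascii_letters + string.digits + string.punctuation  # 94 symbols

def generate_password(length=20):
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

print(generate_password())  # different every run
```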

This won’t prevent a password from being stolen from a vulnerable website. But if it is stolen, you won’t have to worry about changing the same password on all your other sites.

There are of course vulnerabilities in these solutions too, but perhaps that’s a story for another day.


Read more: Facebook hack reveals the perils of using a single account to log in to other services The Conversation


Paul Haskell-Dowland, Associate Dean (Computing and Security), Edith Cowan University and Brianna O’Shea, Lecturer, Ethical Hacking and Defense, Edith Cowan University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

The Conversation, on Apple and its new privacy-centric approach to data collection

Apple is starting a war over privacy with iOS 14 – publishers are naive if they think it will back down

Ten four, let’s go to war! DANIEL CONSTANTE
Ana Isabel Domingos Canhoto, Brunel University London

iPhone users are about to receive access to Apple’s latest mobile operating system, iOS 14. It will come with the usual array of shiny new features, but the real game-changer will be missing – at least until January.

For the first time, iOS 14 is to require apps to get permission from users before collecting their data – giving users an opt-in to this compromise to their privacy.

This caused a major backlash from companies that rely on this data to make money, most notably Facebook. So why did Apple decide to jeopardise the business models of major rivals and their advertisers, and will the postponement make any difference?

The backlash

The opt-in is not the only change in iOS 14 that gives users more privacy protection, but it has attracted the most attention. Privacy campaigners will applaud the move, but the reaction from the media business has been mixed. The likes of American online publishing trade body Digital Content Next thought it would potentially benefit members.

But Facebook warned the opt-in could halve publishers’ revenues on its advertising platform, while some publishers are loudly concerned. The owner of UK news site Mail Online, DMG Media, threatened to delete its app from the App Store.

Facebook logo with a silhouette of a padlock
You are the product. Ink Drop

Whether publishers win or lose very much depends on their business model and customer base. Publishers’ model of selling their product to consumers and selling space to advertisers has been badly damaged by the internet. All the free content online drove down physical sales, which in turn eroded advertising income.

A few publications, like The Times in the UK, managed to convert readers into online subscribers, but the majority didn’t. Consequently online advertising revenues have become very important for most publishers – particularly behavioural or targeted advertising, which displays different ads to different viewers of the same page according to factors like their location, browser, and which websites they have visited. The adverts they see are decided by an ad trader, which is often Google.

Despite the importance of behavioural advertising to online publishers, they receive under 30% of what advertisers pay. Most of the remainder goes to Google and Facebook.

These two companies’ power comes from ad-trading, and because they own many platforms on which publishers’ content is consumed – be it Facebook, Instagram, YouTube or the Google search engine – and sell advertising on the back of the user data. To increase behavioural advertising income on their own sites, publishers are left with either attracting lots of viewers via clickbait or inflammatory content, or attracting difficult to reach, valuable customers via niche content.

Clickbait tends to upset customers, especially highly educated ones, while niche content tends to be too small-time for large media publishers. The reason some publishers welcome Apple’s move is that it could give them more control over advertising again, not only through selling more traditional display ads but also by developing other streams such as subscriptions and branded content.

For instance, the New York Times saw its ad revenues increase when it ditched targeted ads in favour of traditional online display in Europe in 2018 to get around the GDPR data protection restrictions. Conversely, DMG Media’s reaction to iOS 14 is because it collects and sells customer data on the Mail Online app, and also uses content with shock value to attract visitors and advertisers.

Privacy politics

Another important factor is the growing signs of a pushback against highly targeted advertising. With online users becoming increasingly concerned about online privacy, they are likely to engage less with ads, which reduces publishers’ income. They might also stop visiting sites displaying the targeted ads.

This is particularly true of more educated users, so curbing data collection could help publishers who serve these people. Online advertising also attracts more clicks when users control their data, so this could be a selling point to advertisers.

More generally, making traditional display advertising more important will benefit large publishers, since they have bigger audiences to sell to advertisers; but also those with clearly defined niche audiences (the Financial Times, say), since they offer a great way for advertisers to reach these people.

Online advertising represents 99% of Facebook revenues, so its resistance is not surprising. Online advertising is also important to Google revenues, though less so, and Google is also betting on the growing importance of consumer privacy by limiting data collection too – for instance, by third-party websites on the Chrome browser.

Apple’s perspective

Apple has little to gain here in the short term. It may even lose out if the likes of Mail Online leave the platform. But this is not a short-term move.

Apple wants to be known for a few things, such as user-friendly interfaces. It is also known for not aggressively collecting and exploiting user data, and standing up for consumer privacy.

Following the Cambridge Analytica scandal, which exposed Facebook’s lax privacy practices, Apple CEO Tim Cook famously said his company would never monetise customers’ information because privacy was a human right. The iOS 14 unveiling fits this approach. It helps Apple differentiate itself from competitors. It protects the company from privacy scandals. And it helps develop customer trust.

Tim Cook standing below a sign that says 'liberal'
Privacy pays. John Gress Media Inc

Moreover, Apple doesn’t need to exploit customers’ data. Apple’s revenues derive mostly from hardware products and software licences. This business model requires large upfront investment and constant innovation, but is difficult to copy (due to patents, technical capacity and talent) and creates high barriers to entry.

Therefore Apple’s decision to postpone the opt-in until January is not a sign that it might backtrack on the feature. Privacy is core to Apple, and the company’s share of the app market is such that ultimately it is unlikely to feel threatened by some publishers withdrawing. The delay simply makes Apple look reasonable at a time when it is fighting accusations of monopolistic behaviour and unfair practices. So publishers should get ready for significant changes in the app ecosystem, whether they like it or not.The Conversation

Ana Isabel Domingos Canhoto, Reader in Marketing, Brunel University London

This article is republished from The Conversation under a Creative Commons license. Read the original article.

The Conversation, on Robots in high-contact medical environments

Robots to be introduced in UK care homes to allay loneliness – that’s inhuman

Fay Bound Alberti, University of York

Some UK care homes are to deploy robots in an attempt to allay loneliness and boost mental health. The wheeled machines will “initiate rudimentary conversations, play residents’ favourite music, teach them languages, and offer practical help, including medicine reminders”. They are being introduced after an international trial found they reduced anxiety and loneliness.

These robots can hold basic conversations and be programmed to people’s interests. This is positive, but they are not a viable alternative to human interaction. It’s a sad state of affairs when robots are presented as solutions to human loneliness. Though intended as a way to fill in for carers in a “stretched social care system” rather than as a long-term solution, the use of robots is a slippery slope in removing the aged and infirm still further from the nerves and fibres of human interaction.

Robot companions have been trialled in the UK and Japan, from dogs that sit to attention to young women greeting isolated businessmen after a long day at work. They certainly serve a function in reminding people what it is to have companionship, helping with crude social interaction and providing cues to what it is to be human.

But robots cannot provide the altruism and compassion that should be at the core of a caring system. And they might even increase loneliness in the long term by reducing the actual contact people have with humans, and by increasing a sense of disconnect.

While there have been studies showing robotic pets can reduce loneliness, such research is generally based on a contrast with no interaction at all, rather than a comparison of human and robotic interaction.

It’s also important to factor in the role of novelty, which is often missing in care home environments. In 2007, a Japanese nursing home introduced Ifbot, a resident robot that provided emotional companionship, sang songs and gave trivia quizzes to elderly residents. The director of the facility reported that residents were engaged for about a month before they lost interest, preferring “stuffed animals” to the “communication robot”.

Tactile connection

The preference for stuffed animals is, I think, important, because it also connects to the sensory experience of loneliness. Cuddly toys can be hugged and even temporarily moulded by the shape and temperature of the human body. Robots cannot. There is a limit to the sense of connection and embodied comfort that can come from robotic caregivers, or pets.

This is not only because robots show insufficient cultural awareness, and that their gestures might sometimes seem a little, well, mechanical. It’s because robots do not have flesh and blood, or even the malleability of a stuffed toy.

Consider the controversial experiments conducted by Harry Harlow in the 1950s that showed rhesus monkeys always preferred physical comfort to a mechanical caregiver, even if the latter had milk. Similarly, robots lack the warmth of a human or animal companion. They don’t respond intuitively to the movement of their companions, or regulate the heartbeats of their owners through the simple power of touch.

Loneliness is a physical affliction as well as a mental one. Companionship can improve health and increase wellbeing, but only when it is the right kind.

Stroking a dog can be soothing for the person as well as the animal. Walking a dog also gets people out of the house where that is possible, and encourages social interaction.

As the owner of a young labrador, I am not always a fan of early rising. But I can see the positive emotional impact a pet has had on my young son, in contrast to many hours of technological absorption. An Xbox can’t curl up on your bed in the middle of the night to keep you warm.

And the infamous Labrador stink is like perfume to my son, who claims it makes him feel less lonely. So it’s smell, as well as touch, that is involved in loneliness – along with all the senses.

Aerial view of a cat and a dog on a person's lap.
Robots can’t do this. Chendongshan/Shutterstock.com

Techno-fix

I am not a technophobe. In the Zoom world of COVID-19, technological solutions have a critical role in making people feel included, seen and listened to. In time, it may be that some of the distancing effects of technology, including the glitchy movements, whirring sounds and stilted body language will improve and become more naturalised. Similarly, robot companions may well in time become more lifelike. Who will remember the early, clunky days of Furreal pets?

But care robots are offering a solution that should not be needed. There is no reason for care home residents to be so devoid of human companionship (or animal support) that robot friends are the answer.

There is something dysfunctional about the infrastructure in which care is delivered, if robots are an economically motivated solution. Indeed, the introduction of robots into emotional care de-skills the complex work of caring, while commercialising and privatising responses to elderly loneliness.

It is often presented as “natural” or inevitable that elderly and infirm people live in homes, with other elderly and infirm people, shuttered away from the rest of the world. Care homes are an architectural way of concealing those that are least economically productive. There may be good homes, filled with happy residents, but there are many stories of people being ignored and neglected, especially during a pandemic.

How we care for the elderly and the infirm is a cultural and political choice. Historically, elderly and infirm people were part of the social fabric and extended families. With a globally ageing population, many countries are revisiting how best to restructure care homes in ways that reflect demographic, economic and cultural needs.

Care home schemes in the Netherlands house students with elderly people, and are popular with both. With a little imagination, care homes can be radically rethought.

New technologies have a role to play in society, just as they always have had in history. But they shouldn’t be used to paper over the gaps left by a withdrawal of social care and a breakdown in what “community” means in the 21st century. That’s inhuman.The Conversation

Fay Bound Alberti, Reader in History and UKRI Future Leaders Fellow, University of York

This article is republished from The Conversation under a Creative Commons license. Read the original article.

The Conversation, on the Fourth Industrial Revolution and Agriculture

The fourth agricultural revolution is coming – but who will really benefit?

kung_tom/shutterstock
David Rose, University of Reading and Charlotte-Anne Chivers, University of Gloucestershire

Depending on who you listen to, artificial intelligence may either free us from monotonous labour and unleash huge productivity gains, or create a dystopia of mass unemployment and automated oppression. In the case of farming, some researchers, business people and politicians think the effects of AI and other advanced technologies are so great they are spurring a “fourth agricultural revolution”.

Given the potentially transformative effects of upcoming technology on farming – positive and negative – it’s vital that we pause and reflect before the revolution takes hold. It must work for everyone, whether it be farmers (regardless of their size or enterprise), landowners, farm workers, rural communities or the wider public. Yet, in a recently published study led by the researcher Hannah Barrett, we found that policymakers and the media are framing the fourth agricultural revolution as overwhelmingly positive, without giving much focus to the potential negative consequences.

The first agricultural revolution occurred when humans started farming around 12,000 years ago. The second was the reorganisation of farmland from the 17th century onwards that followed the end of feudalism in Europe. And the third (also known as the green revolution) was the introduction of chemical fertilisers, pesticides and new high-yield crop breeds alongside heavy machinery in the 1950s and 1960s.

The fourth agricultural revolution, much like the fourth industrial revolution, refers to the anticipated changes from new technologies, particularly the use of AI to make smarter planning decisions and power autonomous robots. Such intelligent machines could be used for growing and picking crops, weeding, milking livestock and distributing agrochemicals via drone. Other farming-specific technologies include new types of gene editing to develop higher yielding, disease-resistant crops; vertical farms; and synthetic lab-grown meat.

These technologies are attracting huge amounts of funding and investment in the quest to boost food production while minimising further environmental degradation. This might, in part, be related to positive media coverage. Our research found that UK coverage of new farming technologies tends to be optimistic, portraying them as key to solving farming challenges.

However, many previous agricultural technologies were also greeted with similar enthusiasm before leading to controversy later on, such as with the first genetically modified crops and chemicals such as the now-banned pesticide DDT. Given wider controversies surrounding emergent technologies like nanotechnology and driverless cars, unchecked or blind techno-optimism is unwise.

We mustn’t assume that all of these new farming technologies will be adopted without overcoming certain barriers. Precedent tells us that benefits are unlikely to be spread evenly across society and that some people will lose out. We need to understand who might lose and what we can do about it, and ask wider questions such as whether new technologies will actually deliver as promised.

Cows in a large circular robotic milking machine
Robotic milking might be efficient but creates new stresses. Mark Brandon/Shutterstock

Robotic milking of cows provides a good example. In our research, a farmer told us that using robots had improved his work-life balance and allowed a disabled farm worker to avoid dextrous tasks on the farm. But they had also created a “different kind of stress” due to the resulting information overload and the perception that the farmer needed to be monitoring data 24/7.

The National Farmers’ Union (NFU) argues that new technologies could attract younger, more technically skilled entrants to an ageing workforce. Such breakthroughs could enable a wider range of people to engage in farming by eliminating the back-breaking stereotypes through greater use of machinery.

But existing farm workers at risk of being replaced by a machine or whose skills are unsuited to a new style of farming will inevitably be less excited by the prospect of change. And they may not enjoy being forced to spend less time working outside, becoming increasingly reliant on machines instead of their own knowledge.

Power imbalance

There are also potential power inequalities in this new revolution. Our research found that some farmers were optimistic about a high-tech future. But others wondered whether those with less capital, poor broadband availability, limited IT skills, or little access to advice on how to use the technology would be able to benefit.

History suggests technology companies and larger farm businesses are often the winners of this kind of change, and benefits don’t always trickle down to smaller family farms. In the context of the fourth agricultural revolution, this could mean farmers not owning or being able to fully access the data gathered on their farms by new technologies. Or reliance on companies to maintain increasingly important and complex equipment.

Driverless tractor and drone pass over a field.
Advanced machinery can tie farmers to tech firms. Scharfsinn/Shutterstock

The controversy surrounding GM crops (which are created by inserting DNA from other organisms) provides a frank reminder that there is no guarantee that new technologies will be embraced by the public. A similar backlash could occur if the public perceive gene editing (which instead involves making small, controlled changes to a living organism’s DNA) as tantamount to GM. Proponents of wearable technology for livestock claim they improve welfare, but the public might see the use of such devices as treating animals like machines.

Instead of blind optimism, we need to identify where benefits and disadvantages of new agricultural technology will occur and for whom. This process must include a wide range of people to help create society-wide responsible visions for the future of farming.

The NFU has said the fourth agricultural revolution is “exciting – as well as a bit scary … but then the two often go together”. It is time to discuss the scary aspects with the same vigour as the exciting part.The Conversation

David Rose, Elizabeth Creak Associate Professor of Agricultural Innovation and Extension, University of Reading and Charlotte-Anne Chivers, Research Assistant, Countryside and Community Research Institute, University of Gloucestershire

This article is republished from The Conversation under a Creative Commons license. Read the original article.

The Conversation, on how digital communication is less rich than in-person

Why our screens leave us hungry for more nutritious forms of social interaction

Shutterstock/LukyToky
mc schraefel, University of Southampton

COVID-19 has seen all the rules change when it comes to social engagement. Workplaces and schools have closed, gatherings have been banned, and the use of social media and other online tools has risen to bridge the gap.

But as we continue to adapt to the various restrictions, we should remember that social media is the refined sugar of social interaction. In the same way that producing a bowl of white granules means removing minerals and vitamins from the sugarcane plant, social media strips out many valuable and sometimes necessarily challenging parts of “whole” human communication.

Fundamentally, social media dispenses with the nuance of dealing with a person in the flesh and all the signalling complexities of body language, vocal tone and speed of utterance. The immediacy and anonymity of social media also remove the (healthy) challenges of paying attention, properly processing information and responding with civility.

As a result, social media is a fast and easy way to communicate. But while the removal of complexity is certainly convenient, a diet high in connections through social media has been widely shown to have a detrimental effect on our physical and emotional wellbeing.

Increased anxiety and depression are well-known side-effects. There are also consequences for making decisions based on simplistic, “refined” sources of information. We may be less discerning when it comes to evaluating such information, responding with far less reflection. We see a tweet, and we are triggered by it immediately – not unlike a sugar hit from a bar of chocolate.

More complex kinds of communication demand more of us, as we learn to recognise and engage with the complexities of face-to-face interaction – the tempo, closeness and body language that make up the non-verbal cues of communication that are missing in social media.

These cues may even exist because we have evolved to be with others, to work with others. Consider, for example, the hormone oxytocin, which is associated with trust and lower stress levels and triggered when we are in the physical company of others.

Another indicator of trust and engagement is the fact that group heart rates synchronise when working together. But achieving such rhythm of communication takes effort, skill and practice.

Pause for thought

There’s an interesting element of elite athletic performance known as “quiet eye”. It refers to the brief moment of pause before a tennis player serves or a footballer takes a penalty to focus on the goal. Good communicators, too, seem to take this pause, whether it’s in a presentation or a conversation – a moment lost in social media’s rush for an immediate anonymous response.

Having said all this, I don’t believe social media – or table sugar for that matter – is fundamentally wrong. As with a slice of cake on a special occasion, it can be a delight, a treat and a rush. But problems appear when it is our dominant form of communication. As with only eating cake, it weakens us, leaving us far less able to thrive in more challenging environments.

COVID-19 has meant a greater proportion of many people’s lives are spent online. But even Zoom meetings and gatherings, while more intimate than a tweet or social media post, also have limitations and lead to fatigue.

In physiological terms, part of the reason for these experiences being so challenging is that we are supposed to connect with each other in person. We are wired to deal with every aspect of physically present personal contact – from the uncomfortable conversations to the hugely gratifying exchanges.

We suffer without it. We see this in energy levels, overall health and mental stability. It’s physical as well as emotional in effect. Indeed, researchers have shown for over a decade now that loneliness kills. What research has yet to show is if social media mitigates this.


Read more: Coronavirus: experts in evolution explain why social distancing feels so unnatural


Again, virtual meetings are not intrinsically wrong. But they are not sufficient, in human physiological terms, to sustain what we have come to need after 300,000 years of evolution.

Even in the days before coronavirus, social media had been evolving into a dominant means of communication for many. Fast and easy, but also often mean, judgemental, fleeting – something that does not bring out the best in us.

The hope in offering this analogy is that by contextualising how social media works in terms of our physiology, we can start to understand how we may need to balance social media with other more challenging, but ultimately more satisfying forms of communication. And also how we may need to design virtual methods of communication that embrace more of the physiology of social contact that we need, and which helps us to thrive.The Conversation

mc schraefel, Professor of Computer Science and Human Performance, University of Southampton

This article is republished from The Conversation under a Creative Commons license. Read the original article.

The Conversation, on the moral compass of Tech billionaires

How tech billionaires’ visions of human nature shape our world

Simon McCarthy-Jones, Trinity College Dublin

In the 20th century, politicians’ views of human nature shaped societies. But now, creators of new technologies increasingly drive societal change. Their view of human nature may shape the 21st century. We must know what technologists see in humanity’s heart.

The economist Thomas Sowell proposed two visions of human nature. The utopian vision sees people as naturally good. The world corrupts us, but the wise can perfect us.

The tragic vision sees us as inherently flawed. Our sickness is selfishness. We cannot be trusted with power over others. There are no perfect solutions, only imperfect trade-offs.

Science supports the tragic vision. So does history. The French, Russian and Chinese revolutions were utopian visions. They paved their paths to paradise with 50 million dead.

The USA’s founding fathers held the tragic vision. They created checks and balances to constrain political leaders’ worst impulses.

Technologists’ visions

Yet when Americans founded online social networks, the tragic vision was forgotten. Founders were trusted to juggle their self-interest and the public interest when designing these networks and gaining vast data troves.

Users, companies and countries were trusted not to abuse their new social-networked power. Mobs were not constrained. This led to abuse and manipulation.

Belatedly, social networks have adopted tragic visions. Facebook now acknowledges regulation is needed to get the best from social media.

Tech billionaire Elon Musk dabbles in both the tragic and utopian visions. He thinks “most people are actually pretty good”. But he supports market rather than government control, wants competition to keep us honest, and sees evil in individuals.

Musk’s tragic vision propels us to Mars in case short-sighted selfishness destroys Earth. Yet his utopian vision assumes people on Mars could be entrusted with the direct democracy that America’s founding fathers feared. His utopian vision also assumes giving us tools to think better won’t simply enhance our Machiavellianism.

Bill Gates leans to the tragic and tries to create a better world within humanity’s constraints. Gates recognises our self-interest and supports market-based rewards to help us behave better. Yet he believes “creative capitalism” can tie self-interest to our inbuilt desire to help others, benefiting all.

Peter Thiel stood in front of a screen displaying computer code.
Peter Thiel considers the code of human nature. Heisenberg Media/Flickr, CC BY-SA

A different tragic vision lies in the writings of Peter Thiel. This billionaire tech investor was influenced by philosophers Leo Strauss and Carl Schmitt. Both believed evil, in the form of a drive for dominance, is part of our nature.

Thiel dismisses the “Enlightenment view of the natural goodness of humanity”. Instead, he approvingly cites the view that humans are “potentially evil or at least dangerous beings”.

The consequences of seeing evil

The German philosopher Friedrich Nietzsche warned that those who fight monsters must beware of becoming monsters themselves. He was right.

People who believe in evil are more likely to demonise, dehumanise, and punish wrongdoers. They are more likely to support violence before and after another’s transgression. They feel that redemptive violence can eradicate evil and save the world. Americans who believe in evil are more likely to support torture, killing terrorists and America’s possession of nuclear weapons.

Technologists who see evil risk creating coercive solutions. Those who believe in evil are less likely to think deeply about why people act as they do. They are also less likely to see how situations influence people’s actions.

Two years after 9/11, Peter Thiel founded Palantir. This company creates software to analyse big data sets, helping businesses fight fraud and the US government combat crime.

Thiel is a Republican-supporting libertarian. Yet, he appointed a Democrat-supporting neo-Marxist, Alex Karp, as Palantir’s CEO. Beneath their differences lies a shared belief in the inherent dangerousness of humans. Karp’s PhD thesis argued that we have a fundamental aggressive drive towards death and destruction.

Just as believing in evil is associated with supporting pre-emptive aggression, Palantir doesn’t just wait for people to commit crimes. It has patented a “crime risk forecasting system” to predict crimes and has trialled predictive policing. This has raised concerns.

Karp’s tragic vision acknowledges that Palantir needs constraints. He stresses the judiciary must put “checks and balances on the implementation” of Palantir’s technology. He says the use of Palantir’s software should be “decided by society in an open debate”, rather than by Silicon Valley engineers.

Yet, Thiel cites philosopher Leo Strauss’ suggestion that America partly owes her greatness “to her occasional deviation” from principles of freedom and justice. Strauss recommended hiding such deviations under a veil.

Thiel introduces the Straussian argument that only “the secret coordination of the world’s intelligence services” can support a US-led international peace. This recalls Colonel Jessop in the film, A Few Good Men, who felt he should deal with dangerous truths in darkness.

Can we handle the truth?

Seeing evil after 9/11 led technologists and governments to overreach in their surveillance. This included using the formerly secret XKEYSCORE computer system used by the US National Security Agency to collect people’s internet data, which is linked to Palantir. The American people rejected this approach and democratic processes increased oversight and limited surveillance.

Facing the abyss

Tragic visions pose risks. Freedom may be unnecessarily and coercively limited. External roots of violence, like scarcity and exclusion, may be overlooked. Yet if technology creates economic growth it will address many external causes of conflict.

Utopian visions ignore the dangers within. Technology that only changes the world is insufficient to save us from our selfishness and, as I argue in a forthcoming book, our spite.

Technology must change the world working within the constraints of human nature. Crucially, as Karp notes, democratic institutions, not technologists, must ultimately decide society’s shape. Technology’s outputs must be democracy’s inputs.

This may involve us acknowledging hard truths about our nature. But what if society does not wish to face these? Those who cannot handle truth make others fear to speak it.

Straussian technologists, who believe but dare not speak dangerous truths, may feel compelled to protect society in undemocratic darkness. They overstep, yet are encouraged to by those who see more harm in speech than its suppression.

The ancient Greeks had a name for someone with the courage to tell truths that could put them in danger – the parrhesiast. But the parrhesiast needed a listener who promised not to react with anger. This parrhesiastic contract allowed dangerous truth-telling.

We have shredded this contract. We must renew it. Armed with the truth, the Greeks felt they could take care of themselves and others. Armed with both truth and technology we can move closer to fulfilling this promise.The Conversation

Simon McCarthy-Jones, Associate Professor in Clinical Psychology and Neuropsychology, Trinity College Dublin

This article is republished from The Conversation under a Creative Commons license. Read the original article.

The Conversation, on nanotechnology and viruses

Coronavirus nanoscience: the tiny technologies tackling a global pandemic

Kateryna Kon/Shutterstock
Josh Davies, Cardiff University

The world-altering coronavirus behind the COVID-19 pandemic is thought to be just 60 nanometres to 120 nanometres in size. This is so mind bogglingly small that you could fit more than 400 of these virus particles into the width of a single hair on your head. In fact, coronaviruses are so small that we can’t see them with normal microscopes and require much fancier electron microscopes to study them. How can we battle a foe so minuscule that we cannot see it?

One solution is to fight tiny with tiny. Nanotechnology relates to any technology that is or contains components that are between 1nm and 100nm in size. Nanomedicine that takes advantage of such tiny technology is used in everything from plasters that contain anti-bacterial nanoparticles of silver to complex diagnostic machines.

Nanotechnology also has an impressive record against viruses and has been used since the late 1880s to separate and identify them. More recently, nanomedicine has been used to develop treatments for flu, Zika and HIV. And now it’s joining the fight against the COVID-19 virus, SARS-CoV-2.

Diagnosis

If you’re suspected of having COVID, swabs from your throat or nose will be taken and tested by reverse transcription polymerase chain reaction (RT-PCR). This method checks if genetic material from the coronavirus is present in the sample.

Despite being highly accurate, the test can take up to three days to produce results, requires high-tech equipment only accessible in a lab, and can only tell if you have an active infection when the test is taken. But antibody tests, which check for the presence of coronavirus antibodies in your blood, can produce results immediately, wherever you’re tested.

Antibodies are formed when your body fights back against a virus. They are tiny proteins that search for and destroy invaders by hunting for the chemical markers of germs, called antigens. This means antibody tests can not only tell if you have coronavirus but if you have previously had it.

Antibody tests use nanoparticles of materials such as gold to capture any antibodies from a blood sample. These then slowly travel along a small piece of paper and stick to an antigen test line that only the coronavirus antibody will bond to. This makes the line visible and indicates that antibodies are present in the sample. These tests are more than 95% accurate and can give results within 15 minutes.

Vaccines and treatment

A major turning point in the battle against coronavirus will be the development of a successful vaccine. Vaccines often contain an inactive form of a virus that acts as an antigen to train your immune system and enable it develop antibodies. That way, when it meets the real virus, your immune system is ready and able to resist infection.

But there are some limitations in that typical vaccine material can prematurely break down in the bloodstream and does not always reach the target location, reducing the efficiency of a vaccine. One solution is to enclose the vaccine material inside a nanoshell by a process called encapsulation.

These shells are made from fats called lipids and can be as thin as 5nm in diameter, which is 50,000 times thinner than an egg shell. The nanoshells protect the inner vaccine from breaking down and can also be decorated with molecules that target specific cells to make them more effective at delivering their cargo.

This can improve the immune response of elderly people to the vaccine. And critically, people typically need lower doses of these encapsulated vaccines to develop immunity, meaning you can more quickly produce enough to vaccinate an entire population.

Encapsulation can also improve viral treatments. A major contribution to the deaths of virus patients in intensive care is “acute respiratory distress syndrome”, which occurs when the immune system produces an excessive response. Encapsulated vaccines can target specific areas of the body to deliver immunosuppressive drugs directly to targeted organs, helping regulate our immune system response.

Transmission reduction

It’s hard to exaggerate the importance of wearing face masks and washing your hands to reducing the spread of COVID-19. But typical face coverings can have trouble stopping the most penetrating particles of respiratory droplets, and many can only be used once.

New fabrics made from nanofibres 100nm thick and coated in titanium oxide can catch droplets smaller than 1,000nm and so they can be destroyed by ultraviolet (UV) radiation from sunlight. Masks, gloves and other personal protective equipment (PPE) made from such fabrics can also be washed and reused, and are more breathable.

Close-up of intricately woven fibres.
New fabrics made from coated nanofibres could produce better PPE. AnnaVel/Shutterstock

Another important nanomaterial is graphene, which is formed from a single honeycomb layer of carbon atoms and is 200 times stronger than steel but lighter than paper. Fabrics laced with graphene can capture viruses and block them from passing through. PPE containing graphene could be more puncture, flame, UV and microbe resistant while also being light weight.

Graphene isn’t reserved for fabrics either. Nanoparticles could be placed on surfaces in public places that might be particularly likely to facilitate transmission of the virus.

These technologies are just some of the ways nanoscience is contributing to the battle against COVID-19. While there is no one answer to a global pandemic, these tiny technologies certainly have the potential to be an important part of the solution.The Conversation

Josh Davies, PhD Candidate in Chemistry, Cardiff University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

The Conversation, on the use of algorithms in Government

Not just A-levels: unfair algorithms are being used to make all sorts of government decisions

Adam Harkens, University of Birmingham

The recent use of an algorithm to calculate the graduating grades of secondary school students in England provoked so much public anger at its perceived unfairness that it’s widely become known as the “A-levels fiasco”. As a result of the outrage – and the looming threat of legal action – the government was forced into an embarrassing U-turn and awarded grades based on teacher assessment.

Prime Minister Boris Johnson has since blamed the crisis on what he called the “mutant” algorithm. But this wasn’t a malfunctioning piece of technology. In marking down many individual students to prevent high grades increasing overall, the algorithm did exactly what the government wanted it to do. The fact that more disadvantaged pupils were marked down was an inevitable consequence of prioritising historical data from an unequal education system over individual achievement.
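
As a purely illustrative aside: the toy Python sketch below is our own invention, not Ofqual’s actual model (which was considerably more involved). It only shows the core idea the article describes – allocating grades so a cohort matches its school’s historical distribution, using the teacher-provided rank order – and all names and numbers are made up.

    # Toy illustration only - NOT the real Ofqual model. It assumes a school's
    # historical grade distribution is simply imposed on this year's cohort,
    # using the teacher-provided rank order of students.

    def allocate_grades(ranked_students, historical_distribution):
        """Assign grades so the cohort matches the school's past grade shares.

        ranked_students: students ordered best to worst by teacher ranking.
        historical_distribution: grade -> share of past cohorts awarded it.
        """
        n = len(ranked_students)
        grades = {}
        index = 0
        for grade, share in historical_distribution.items():
            quota = round(share * n)      # places "allowed" for this grade
            for student in ranked_students[index:index + quota]:
                grades[student] = grade
            index += quota
        for student in ranked_students[index:]:   # rounding leftovers get the lowest grade
            grades[student] = list(historical_distribution)[-1]
        return grades

    # An unusually strong cohort at a historically low-attaining school is still
    # capped by that history: only one student can receive an A.
    students = ["Amara", "Ben", "Chloe", "Dev", "Ede", "Fatima", "Gus", "Hana", "Ivan", "Jo"]
    history = {"A": 0.10, "B": 0.20, "C": 0.40, "D": 0.30}
    print(allocate_grades(students, history))

However capable the rest of the cohort is, the historical distribution – not individual achievement – decides how many students can share the top grade, which is precisely the pattern described above.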

But more than this, the saga shouldn’t be understood as a failure in the design of a specific algorithm, nor as the result of incompetence on the part of a specific government department. Rather, this is a significant indicator of the data-driven methods that many governments are now turning to and of the political struggles that will probably be fought over them.

Algorithmic systems tend to be promoted for several reasons, including claims that they produce smarter, faster, more consistent and more objective decisions, and make more efficient use of government resources. The A-level fiasco has shown that this is not necessarily the case in practice. Even where an algorithm provides a benefit (fast, complex decision-making for a large amount of data), it may bring new problems (socio-economic discrimination).

Algorithms all over

In the UK alone, several systems are being or have recently been used to make important decisions that determine the choices, opportunities and legal position of certain sections of the public.

At the start of August, the Home Office agreed to scrap its visa “streaming tool”, designed to sort visa applications into risk categories (red, amber, green) indicating how much further scrutiny was needed. This followed a legal challenge from campaign group Foxglove and the Joint Council for the Welfare of Immigrants charity, claiming that the algorithm discriminated on the basis of nationality. Before this case could reach court, Home Secretary Priti Patel pledged to halt the use of the algorithm and committed to a substantive redesign.
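
The workings of the streaming tool were never published, so the sketch below is purely hypothetical – invented feature names, lists and weights – but it shows how a traffic-light rule that takes nationality as an input can route otherwise identical applications differently from the outset.

    # Hypothetical sketch only - the real streaming tool's inputs and weights were
    # never made public. It shows how using nationality as a scoring feature
    # can split otherwise identical applications into different risk streams.

    HIGH_RISK_NATIONALITIES = {"Examplestan"}   # invented placeholder list

    def stream_application(application):
        """Return a red/amber/green rating from a few invented features."""
        score = 0
        if application["nationality"] in HIGH_RISK_NATIONALITIES:
            score += 2                          # nationality alone triggers extra scrutiny
        if application["previous_refusal"]:
            score += 1
        if score >= 2:
            return "red"                        # intensive checks, slower decisions
        if score == 1:
            return "amber"
        return "green"                          # light-touch review

    # Two applications identical in every respect except nationality:
    print(stream_application({"nationality": "Examplestan", "previous_refusal": False}))   # red
    print(stream_application({"nationality": "Othercountry", "previous_refusal": False}))  # green

Once nationality enters the scoring at all, the differential treatment is built into routine processing rather than resting on any individual caseworker’s judgment.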

The Metropolitan Police Service’s “gangs matrix” is a database used to record suspected gang members and undertake automated risk assessments. It informs police interventions including stop and search, and arrest. A number of concerns have been raised regarding its potentially discriminatory impact, its inclusion of potential victims of gang violence, and its failure to comply with data protection law.

Many councils in England use algorithms to check benefit entitlements and detect welfare fraud. Dr Joanna Redden of Cardiff University’s Data Justice Lab has found that a number of authorities have halted such algorithm use after encountering problems with errors and bias. But also, significantly, she told the Guardian there had been “a failure to consult with the public and particularly with those who will be most affected by the use of these automated and predictive systems before implementing them”.

This follows an important warning from Philip Alston, the UN special rapporteur for extreme poverty, that the UK risks “stumbling zombie-like into a digital welfare dystopia”. He argued that too often technology is being used to reduce people’s benefits, set up intrusive surveillance and generate profits for private companies.

The UK government has also proposed a new algorithm for assessing how many new houses English local authority areas should plan to build. The effect of this system remains to be seen, though the model seems to suggest more houses should be built in southern rural areas, instead of the more-expected urban areas, particularly northern cities. This raises serious questions of fair resource distribution.

Why does this matter?

The use of algorithmic systems by public authorities to make decisions that have a significant impact on our lives points to a number of crucial trends in government. As well as increasing the speed and scale at which decisions can be made, algorithmic systems also change the way those decisions are made and the forms of public scrutiny that are possible.

This points to a shift in the government’s perspective on, and expectations for, accountability. Algorithmic systems are opaque and complex “black boxes” that enable powerful political decisions to be made based on mathematical calculations, in ways not always clearly tied to legal requirements.

This summer alone, there have been at least three high-profile legal challenges to the use of algorithmic systems by public authorities, relating to the A-level and visa streaming systems, as well as the government’s COVID-19 test and trace tool. Similarly, South Wales Police’s use of facial recognition software was declared unlawful by the Court of Appeal.

While the purpose and nature of each of these systems are different, they share common features. Each has been implemented without adequate oversight or clarity regarding its lawfulness.

Failure of public authorities to ensure that algorithmic systems are accountable is at worst a deliberate attempt to hinder democratic processes by shielding algorithmic systems from public scrutiny. And at best, it represents a highly negligent attitude towards the responsibility of government to adhere to the rule of law, to provide transparency, and to ensure fairness and the protection of human rights.

With this in mind, it is important that we demand accountability from the government as it increases its use of algorithms, so that we retain democratic control over the direction of our society, and ourselves.

Adam Harkens, Research Associate, Birmingham Law School, University of Birmingham

This article is republished from The Conversation under a Creative Commons license. Read the original article.

The Conversation, on remote working and offices or city centres

Remote working is here to stay – but that doesn’t mean the end of offices or city centres

Most people will return to offices but there’s no rush. Shutterstock
Jane Parry, University of Southampton

When coronavirus lockdowns were introduced, the shift to remote working was sudden and sweeping. Now the British government is hoping the return to the office will be just as swift – to help the economy “get back to normal”. But pushing everyone back to the office full time fails to recognise the many benefits that working from home has brought. It also fails to capitalise on this moment of change.

The mass homeworking experiment in the middle of a pandemic presented some of the most challenging circumstances possible. Yet, coming out the other side of it, there’s likely to be considerable resistance to simply readopting old ways of working. This is already evident at the start of Work After Lockdown, a new research project into the effects of COVID-19 on the workplace that I’m leading at Southampton Business School, with partners the Institute for Employment Studies and work consultancy Half the Sky.

Coronavirus lockdowns accelerated the shift to flexible working in a way that had previously seemed impossible. They also provide hard evidence of how work can be done differently – and successfully. Most crucially, they have provided vivid illustration of this to resistant managers, who were previously the key block to flexible working.

By mid-lockdown in April, the Office for National Statistics estimated that nearly half of people in employment were working from home in some way. These were predominantly white-collar office workers. Considering that, prior to the pandemic, less than 30% of people had ever worked from home, this marks a significant shift.

Some organisations were much better prepared for this switch than others. Those who had already mobilised the necessary remote-working technology adapted more easily, as often did multinational companies already used to managing virtual teams with diverse needs.

But lockdown was nevertheless a shock for most employees. Few were ready to start performing all of their work from home, let alone manage this under far from ideal circumstances – such as children to care for and educate, or shielding relatives to support, not to mention health concerns to manage. Unsurprisingly, this was often a struggle. What has been more unexpected in our research so far was how quickly people adapted, often finding more efficient ways of organising their time.

Mother hunched over laptop, with small child crying.
Working from home during lockdown is very different to remote working that is planned. Shutterstock.com

So far there seems to be little evidence of a drop in productivity. This is very difficult to measure due to the economic effects of the pandemic. The OECD think tank pointed to an initial drop, followed by reports of an upsurge in productivity, and argued strongly that the wellbeing of remote workers is central to sustaining productivity gains. This is a key message for employers – that well-managed working from home that is chosen and not forced upon people will make work more efficient and productive.

Rethinking the office

All this is prompting employers to think about how their work spaces can be used differently and more effectively. Offices could be a space for convening and group thinking, while homes become the site of undisturbed, productive work.

In fact, there are already creative discussions going on in organisations about how they can ensure that they benefit from the disruption caused by the pandemic. As one manager at a large legal firm said: “We have a completely blank sheet of paper.”

Banks including JP Morgan and technology companies such as Google are just some of the organisations that have welcomed working from home as part of their business models. Three-quarters of the 43 large companies surveyed by The Times spoke of moving towards flexible working more permanently.

Alongside the radical thinking that employers are doing is a shift in how employees feel about their work. Recent analysis of attitudes around homeworking at Cardiff and Southampton universities reveals that 88% of those who worked at home during lockdown want to continue doing so in some respect.


In our own research, benefits are emerging around family wellbeing and better use of time, with knock-on effects as workers become more conscious and proactive about their physical health. Many people we’ve spoken to feel that the adversity of lockdown gave them insight and understanding into the lives of their colleagues, and the length of lockdown gave them the time to work out better ways of organising their work tasks remotely.

Of course, lockdown experiences have been diverse. Employers told us they became more aware of staff who found enforced working from home to be a lonely or more challenging time, including those living alone or in small or cramped living conditions, as well as those with more outside responsibilities, such as caring commitments, whose intensity was heightened by lockdown. This improved awareness of workforce diversity might yet have more positive consequences for future management.

Much of the recent government narrative has been one of calamity about what deserted offices will do to cities and jobs. But only a few companies are suggesting abandoning their offices completely. Quite the reverse, they could become more pleasant spaces in which we still socialise and buy coffee.

As we rethink the office, this provides an opportunity to consider what we want our cities to look like – and how they might become more inclusive, safer and greener spaces. Crucially, we can do this while making them spaces where work is organised more efficiently. This could be a once-in-a-generation moment to make these positive changes.

Jane Parry, Lecturer in Organisational Studies and HRM, University of Southampton

This article is republished from The Conversation under a Creative Commons license. Read the original article.