Game development and Linux aren’t something you’d typically hear in the same sentence, but the pairing is becoming increasingly common. More games offer Linux support, Steam supports Linux as a platform, and using Linux as an operating system for development is increasingly popular among developers.
The Unity and Unreal engines both have options to recompile your games for Linux compatibility, as well as a Linux editor. The Unity engine is much easier to deploy than the Unreal engine, so we will cover it in this article; Unreal fans can follow Epic’s own documentation to deploy Unreal on a Linux distro.
Unity offers an editor specifically designed for Linux. Because it is distributed using AppImage technology, we can easily set it up on almost any Linux distribution, including Red Hat Enterprise Linux.
As a Red Hat Business Partner we are of course big fans of their portfolio and the potential it brings to developers large and small. Because I spend a lot of time using Ansible to design the most efficient, scalable and secure ways to run our clients’ estates, I get to see a lot of nifty ways to deploy apps at scale, such as with AppImage.
With increasingly disparate workforces, the industry needs to be able to create development environments where the compute is offloaded from the client device to a different system. Unity’s Linux Editor and the Unity Build Server (on-prem or fully cloud-hosted) are excellent tools for creating scalable, remotely accessible infrastructure, and they go hand-in-hand with Red Hat Enterprise Linux’s reputation for stability and its unparalleled 10-year life cycle.
Typically we would set out how to get started with our own documentation. However, as both Unity and Unreal are developing their Linux offerings so quickly, anything we write would become redundant shortly after publishing.
Get in touch and we will be more than happy to run through the steps necessary to set up your studio for game development on Linux, and the long-term stability benefits your studio can enjoy in doing so.
The recent use of an algorithm to calculate the graduating grades of secondary school students in England provoked so much public anger at its perceived unfairness that it’s widely become known as the “A-levels fiasco”. As a result of the outrage – and the looming threat of legal action – the government was forced into an embarrassing U-turn and awarded grades based on teacher assessment.
Prime Minister Boris Johnson has since blamed the crisis on what he called the “mutant” algorithm. But this wasn’t a malfunctioning piece of technology. In marking down many individual students to prevent high grades increasing overall, the algorithm did exactly what the government wanted it to do. The fact that more disadvantaged pupils were marked down was an inevitable consequence of prioritising historical data from an unequal education system over individual achievement.
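The mechanism can be illustrated with a deliberately simplified sketch. This is not Ofqual’s actual model (which was far more elaborate); it is a toy example showing how pinning a cohort’s results to a historical grade distribution, by rank, can override individual teacher assessments. All names and numbers are invented.

```python
# Toy illustration (NOT Ofqual's actual model): re-award grades by rank
# so that a school's cohort matches its historical grade distribution,
# regardless of what teachers assessed individual students at.

def moderate(teacher_grades, historical_distribution):
    """teacher_grades: list of (student, grade), higher grade = better.
    historical_distribution: the grades this school historically produced,
    in any order. Returns the moderated grade for each student."""
    # Rank students by teacher assessment, best first.
    ranked = sorted(teacher_grades, key=lambda sg: sg[1], reverse=True)
    # Hand out the historical grades, best first, down the ranking.
    awarded = sorted(historical_distribution, reverse=True)
    return {student: new for (student, _), new in zip(ranked, awarded)}

# Teachers assess four (hypothetical) students generously...
teacher = [("Ana", 9), ("Ben", 9), ("Cal", 8), ("Dee", 6)]
# ...but this school historically produced grades 8, 7, 6 and 5.
history = [8, 7, 6, 5]

print(moderate(teacher, history))
# → {'Ana': 8, 'Ben': 7, 'Cal': 6, 'Dee': 5}
# Every student is marked down to preserve the historical profile.
```

The point of the sketch is that the historical distribution, not the individual assessment, determines the grade: a high-achieving student at a historically low-performing school is marked down by construction.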
But more than this, the saga shouldn’t be understood as a failure of design of a specific algorithm, nor the result of incompetence on the part of a specific government department. Rather, this is a significant indicator of the data-driven methods that many governments are now turning to and the political struggles that will probably be fought over them.
Algorithmic systems tend to be promoted for several reasons, including claims that they produce smarter, faster, more consistent and more objective decisions, and make more efficient use of government resources. The A-level fiasco has shown that this is not necessarily the case in practice. Even where an algorithm provides a benefit (fast, complex decision-making for a large amount of data), it may bring new problems (socio-economic discrimination).
Algorithms all over
In the UK alone, several systems are being or have recently been used to make important decisions that determine the choices, opportunities and legal position of certain sections of the public.
At the start of August, the Home Office agreed to scrap its visa “streaming tool”, designed to sort visa applications into risk categories (red, amber, green) indicating how much further scrutiny was needed. This followed a legal challenge from campaign group Foxglove and the Joint Council for the Welfare of Immigrants charity, claiming that the algorithm discriminated on the basis of nationality. Before this case could reach court, Home Secretary Priti Patel pledged to halt the use of the algorithm and to commit to a substantive redesign.
Many councils in England use algorithms to check benefit entitlements and detect welfare fraud. Dr Joanna Redden of Cardiff University’s Data Justice Lab has found a number of authorities have halted such algorithm use after encountering problems with errors and bias. But also, significantly, she told the Guardian there had been “a failure to consult with the public and particularly with those who will be most affected by the use of these automated and predictive systems before implementing them”.
This follows an important warning from Philip Alston, the UN special rapporteur for extreme poverty, that the UK risks “stumbling zombie-like into a digital welfare dystopia”. He argued that too often technology is being used to reduce people’s benefits, set up intrusive surveillance and generate profits for private companies.
The UK government has also proposed a new algorithm for assessing how many new houses English local authority areas should plan to build. The effect of this system remains to be seen, though the model seems to suggest more houses should be built in southern rural areas, instead of the more-expected urban areas, particularly northern cities. This raises serious questions of fair resource distribution.
Why does this matter?
The use of algorithmic systems by public authorities to make decisions that have a significant impact on our lives points to a number of crucial trends in government. As well as increasing the speed and scale at which decisions can be made, algorithmic systems also change the way those decisions are made and the forms of public scrutiny that are possible.
This points to a shift in the government’s perspective of, and expectations for, accountability. Algorithmic systems are opaque and complex “black boxes” that enable powerful political decisions to be made based on mathematical calculations, in ways not always clearly tied to legal requirements.
While the purpose and nature of each of these systems are different, they share common features. Each system has been implemented without adequate oversight or clarity regarding its lawfulness.
Failure of public authorities to ensure that algorithmic systems are accountable is at worst a deliberate attempt to hinder democratic processes by shielding algorithmic systems from public scrutiny. And at best, it represents a highly negligent attitude towards the responsibility of government to adhere to the rule of law, to provide transparency, and to ensure fairness and the protection of human rights.
With this in mind, it is important that we demand accountability from the government as it increases its use of algorithms, so that we retain democratic control over the direction of our society, and ourselves.
When coronavirus lockdowns were introduced, the shift to remote working was sudden and sweeping. Now the British government is hoping the return to the office will be just as swift – to help the economy “get back to normal”. But pushing everyone back to the office full time fails to recognise the many benefits that working from home has brought. It also fails to capitalise on this moment of change.
The mass homeworking experiment in the middle of a pandemic presented some of the most challenging circumstances possible. Yet, coming out the other side of it, there’s likely to be considerable resistance to simply readopting old ways of working. This is already evident at the start of a new research project I’m leading at Southampton Business School into the effects of COVID-19 on the workplace, called Work After Lockdown, with partners the Institute for Employment Studies and work consultancy Half the Sky.
Coronavirus lockdowns accelerated the shift to flexible working in a way that had previously seemed impossible. They also provide hard evidence of how work can be done differently – and successfully. Most crucially, they have provided vivid illustration of this to resistant managers, who were previously the key block to flexible working.
By mid-lockdown in April, the Office for National Statistics estimated that nearly half of people in employment were working from home in some way. These were predominantly white-collar office workers. Considering that, prior to the pandemic, less than 30% of people had ever worked from home, this marks a significant shift.
Some organisations were much better prepared for this switch than others. Those who had already mobilised the necessary remote-working technology adapted more easily, as often did multinational companies already used to managing virtual teams with diverse needs.
But lockdown was nevertheless a shock for most employees. Few were ready to start performing all of their work from home, let alone manage this under far from ideal circumstances – such as children to care for and educate, or shielding relatives to support, not to mention health concerns to manage. Unsurprisingly, this was often a struggle. What has been more unexpected in our research so far was how quickly people adapted, often finding more efficient ways of organising their time.
So far there seems to be little evidence of a drop in productivity, though productivity is very difficult to measure given the economic effects of the pandemic.
The OECD think tank pointed to an initial drop, followed by reports of an upsurge in productivity, and argued strongly that the wellbeing of remote workers is central to sustaining productivity gains. This is a key message for employers – that well-managed working from home that is chosen and not forced upon people will make work more efficient and productive.
Rethinking the office
All this is prompting employers to think about how their work spaces can be used differently and more effectively. Offices could be a space for convening and group thinking, while homes become the site of undisturbed, productive work.
In fact, there are already creative discussions going on in organisations about how they can ensure that they benefit from the disruption caused by the pandemic. As one manager at a large legal firm said: “We have a completely blank sheet of paper.”
Banks including JP Morgan and technology companies such as Google are just some of the organisations that have welcomed working from home as part of their business models. Three-quarters of the 43 large companies surveyed by The Times spoke of moving towards flexible working more permanently.
Alongside the radical thinking that employers are doing, is a shift in how employees feel about their work. Recent analysis of attitudes around homeworking at Cardiff and Southampton universities reveals that 88% of those who worked at home during lockdown want to continue doing so in some respect.
In our own research, benefits are emerging around family wellbeing and better use of time, with knock-on effects as workers become more conscious and proactive about their physical health. Many people we’ve spoken to feel that the adversity of lockdown gave them insight and understanding into the lives of their colleagues, and the length of lockdown gave them the time to work out better ways of organising their work tasks remotely.
Of course, lockdown experiences have been diverse. Employers told us they became more aware of staff who found enforced working from home to be a lonely or more challenging time, including those living alone or in small or cramped living conditions, as well as those with more outside responsibilities, such as caring commitments, whose intensity was heightened by lockdown. This improved awareness of workforce diversity might yet have more positive consequences for future management.
Much of the recent government narrative has been one of calamity about what deserted offices will do to cities and jobs. But only a few companies are suggesting abandoning their offices completely. Quite the reverse: offices could become more pleasant spaces in which we still socialise and buy coffee.
As we rethink the office, this provides an opportunity to consider what we want our cities to look like – and how they might become more inclusive, safer and greener spaces. Crucially, we can do this while making them spaces where work is organised more efficiently. This could be a once in a generation moment to make these positive changes.
The advent of mass working from home has made many people more aware of the security risks of sending sensitive information via the internet. The best we can do at the moment is make it difficult to intercept and hack your messages – but we can’t make it impossible.
What we need is a new type of internet: the quantum internet. In this version of the global network, data is secure, connections are private and your worries about information being intercepted are a thing of the past.
My colleagues and I have just made a breakthrough, published in Science Advances, that will make such a quantum internet possible by scaling up the concepts behind it using existing telecommunications infrastructure.
Our current way of protecting online data is to encrypt it using mathematical problems that are easy to solve if you have a digital “key” to unlock the encryption but hard to solve without it. However, hard does not mean impossible and, with enough time and computer power, today’s methods of encryption can be broken.
Quantum communication, on the other hand, creates keys using individual particles of light (photons), which – according to the principles of quantum physics – are impossible to make an exact copy of. Any attempt to copy these keys will unavoidably cause errors that can be detected. This means a hacker, no matter how clever or powerful they are or what kind of supercomputer they possess, cannot replicate a quantum key or read the message it encrypts.
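The detection principle can be sketched with a purely classical toy model (no real quantum mechanics is simulated here). The assumed 25% error rate per intercepted bit is borrowed from textbook BB84-style analyses; the code simply shows why comparing a sample of the key exposes an eavesdropper.

```python
# Classical toy model of quantum key distribution: an eavesdropper who
# measures intercepted "photons" corrupts a fraction of them, so the two
# users detect the intrusion by comparing a sample of their keys.

import random

def make_key(n, eavesdropper=False, seed=0):
    rng = random.Random(seed)
    alice = [rng.randint(0, 1) for _ in range(n)]
    bob = list(alice)  # ideal channel: perfectly correlated outcomes
    if eavesdropper:
        for i in range(n):
            # Assumed toy model: each intercepted bit is corrupted with
            # probability 0.25, as in textbook BB84-style analyses.
            if rng.random() < 0.25:
                bob[i] ^= 1
    return alice, bob

def error_rate(alice, bob, sample=200):
    # Users publicly compare a sample; errors reveal an eavesdropper.
    return sum(a != b for a, b in zip(alice[:sample], bob[:sample])) / sample

a, b = make_key(1000)
print(error_rate(a, b))           # 0.0 -- the channel looks clean
a, b = make_key(1000, eavesdropper=True)
print(error_rate(a, b) > 0)       # True -- the intrusion shows up
```

In real systems the security comes from physics, not statistics alone; the sketch only captures the logic that copying necessarily leaves detectable fingerprints.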
This concept has already been demonstrated in satellites and over fibre-optic cables, and used to send secure messages between different countries. So why are we not already using it in everyday life? The problem is that it requires expensive, specialised technology that means it’s not currently scalable.
Previous quantum communication techniques were like pairs of children’s walkie talkies. You need one pair of handsets for every pair of users that want to securely communicate. So if three children want to talk to each other they will need three pairs of handsets (or six walkie talkies) and each child must have two of them. If eight children want to talk to each other they would need 56 walkie talkies.
Obviously it’s not practical for someone to have a separate device for every person or website they want to communicate with over the internet. So we figured out a way to securely connect every user with just one device each, more similar to phones than walkie talkies.
Each walkie talkie handset acts as both a transmitter and a receiver in order to share the quantum keys that make communication secure. In our model, users only need a receiver because they get the photons to generate their keys from a central transmitter.
This is possible because of another principle of quantum physics called “entanglement”. A photon can’t be exactly copied but it can be entangled with another photon so that they both behave in the same way when measured, no matter how far apart they are – what Albert Einstein called “spooky action at a distance”.
When two users want to communicate, our transmitter sends them an entangled pair of photons – one particle for each user. The users’ devices then perform a series of measurements on these photons to create a shared secret quantum key. They can then encrypt their messages with this key and transfer them securely.
By using multiplexing, a common telecommunications technique of combining or splitting signals, we can effectively send these entangled photon pairs to multiple combinations of people at once.
We can also send many signals to each user in a way that they can all be simultaneously decoded. In this way we’ve effectively replaced pairs of walkie talkies with a system more similar to a video call with multiple participants, in which you can communicate with each user privately and independently as well as all at once.
We’ve so far tested this concept by connecting eight users across a single city. We are now working to improve the speed of our network and interconnect several such networks. Collaborators have already started using our quantum network as a test bed for several exciting applications beyond just quantum communication.
We also hope to develop even better quantum networks based on this technology with commercial partners in the next few years. With innovations like this, I hope to witness the beginning of the quantum internet in the next ten years.
For many of us, work often competes for time with sleep – which is why many of us look forward to the weekend for a chance to “catch up” on sleep. But how much sleep is lost on days when we work? Our latest research shows that we get about 30 minutes less sleep than we would ideally need on each night of the working week.
We followed 100 people aged from 60 to 71 over two years, covering their transition into retirement. We measured their sleep on three separate occasions, with one year in between, and compared the sleep habits while they were working against when – and for how long – they slept after retirement.
After retirement, we found that every day was like a weekend – at least when it came to how long people slept for. Sleep duration increased, but only on weekdays, from 6.5 to seven hours a night on average. This meant retired people got about an equal amount of sleep every night of the week.
The amount of sleep people tended to get on their weekends while still in work seemed to be their preferred sleep duration, rather than “catch-up” sleep. If weekend sleep was prolonged to compensate for the working week’s sleep loss, we would have expected a drop after retirement (when there’s no sleep loss to compensate for) – but this wasn’t the case.
Given that participants’ weekend sleep was their preferred sleep duration, weekend lie-ins will not compensate for sleep lost on weekdays while working. This means that our study participants had chronic partial sleep deprivation when they were working, of about 2.5 hours each week.
While adults are recommended to get at least seven hours per night for optimal health, sleep needs vary both between people and as we age. We need less sleep when we are older than when we are younger.
Different people need different amounts of sleep, which makes it hard to estimate what constitutes “too little” sleep for any given individual. But other experimental studies have found that getting only six to seven hours of sleep negatively affects attention and reaction time compared with getting eight to nine hours of shuteye. This performance drop remained even after a full night’s sleep three days in a row.
Partial sleep deprivation as a result of work can continue for years, which is why the accumulated effects need to be considered. Sleeping less than seven hours on a regular basis is related to increased risk for various health conditions, including diabetes, stroke and depression. It’s also associated with impaired immune system function, as well as increased risk of accidents.
Not only did sleep duration change with retirement, but people also went to bed later and woke later. Getting rid of the alarm clock seemed to be what drove the increase, as retired people went to bed about half an hour later and woke up an hour later on average during weekdays compared to when they were working.
Going to bed in time to get plenty of sleep before getting up for work is not always easy – especially for the majority of the population who have a late “biological clock”. This means they naturally prefer to go to sleep later and wake up later than people with an early biological clock.
Those with a late biological clock also have a tendency to postpone their bed and wake times on weekends more than others, which unfortunately sets their biological clock even later – making it hard to go to bed early on Sunday and even harder to wake up early on Monday morning.
When our biological clock is out of sync with the social clock (which is the timetable imposed on us by society) it can result in “social jetlag”. Social jetlag acts a bit like regular jetlag, and can make us feel down and tired. It’s also associated with higher risk for metabolic disorders and depressive symptoms.
But even though sleep patterns became more stable after retirement, people still went to bed and woke up around half an hour later on weekends compared to weekdays. This hints that other social factors – such as visiting with friends – also affect when and how much we sleep.
We also found that retired participants with a full-time working partner changed their sleep timing to a smaller extent than the rest, highlighting that sleep is social, as opposed to a purely individual phenomenon.
But there are some things you can do yourself to adjust your sleep pattern to your working week and avoid “social jetlag” on Monday morning, including making sure you get plenty of daylight in the mornings. Morning light shifts our biological clock earlier, making it easier to fall asleep at night. The opposite is also true, so bright light should be avoided in the evenings and bedrooms should be dark.
It also helps to prioritise your sleep and keep a more regular sleep schedule, even on weekends. Allow yourself some extra time in bed on weekend mornings if you need it, but try to avoid throwing your weekend sleep schedule off too much in order to stay away from the vicious cycle of sleep loss and social jetlag.
That being said, our study suggests that work generates sleep loss and hinders people from sleeping in line with their natural rhythm. But just as later school start times are an effective way to improve sleep in adolescents, later (or flexible) start times at work could potentially have the same effect for working people – and may mean people won’t have to wait until retirement to get enough sleep.
Young people have probably spent much more of their time than usual playing video games over the last few months thanks to the coronavirus pandemic. One report from telecoms firm Verizon said online gaming use went up 75% in the first week of lockdown in the US.
What impact might this have on young people’s development? One area that people are often concerned about is the effect of video games, particularly violent ones, on moral reasoning. My colleagues and I recently published research that suggested games have no significant effect on the moral development of university-age students but can affect younger adolescents. This supports the use of an age-rating system for video game purchases.
Our sense of morality and the way we make moral decisions – our moral reasoning – develop as we grow up and become more aware of life in wider society. For example, our thoughts about right and wrong are initially based on what we think the punishments and/or rewards could be. This then develops into a greater understanding of the role of social factors and circumstances in moral decisions.
Yet the moral dimension of video games is far more complex than just their representation of violence, as they often require players to make a range of moral choices. For example, players of the game BioShock have to choose whether to kill or rescue a little girl character known as a Little Sister.
A player with more mature moral reasoning may consider the wider social implications and consequences of this choice rather than just the punishment or rewards meted out by the game. For example, they may consider their own conscience and that they could feel bad about choosing to kill the little girl.
We surveyed a group of 166 secondary school students aged 11-18 and a group of 135 university students aged 17-27 to assess their gaming habits and the development of their moral reasoning using what’s known as the sociomoral reflection measure. This involves asking participants 11 questions on topics such as the importance of keeping promises, telling the truth, obeying the law and preserving life. The results suggested a stark difference between the two groups.
Among secondary students, we found evidence that playing video games could have an effect on moral development. Whereas female adolescents usually have more developed moral reasoning, in this case we found that males, who were more likely to play video games for longer, actually had higher levels of reasoning. We also found those who played a greater variety of genres of video games also had more developed reasoning.
This suggests that playing video games could actually support moral development. But other factors, including feeling less engaged with and immersed in a game, playing games with more mature content, and specifically playing Call of Duty and Grand Theft Auto, were linked (albeit weakly) with less developed moral reasoning.
No effect after 18
Overall, the evidence suggested adolescent moral development could be affected in some way by playing video games. However, there was little to no relationship between the university students’ moral reasoning development and video game play. This echoes previous research that found playing violent video games between the ages of 14 and 17 made you more likely to do so in the future, but found no such relationship for 18- to 21-year-olds.
This might be explained by the fact that 18 is the age at which young people in many countries are deemed to have become adult, leading to many changes and new experiences in their lives, such as starting full-time work or higher education. This could help support their moral development such that video games are no longer likely to be influential, or at least that currently available video games are no longer challenging enough to affect people.
The implication is that age rating systems on video games, such as the PEGI and ESRB systems, are important because under-18s appear more susceptible to the moral effects of games. But our research also highlights that it is not just what teenagers play but how they play it that can make a difference. So engaging with games for a wide variety of genres could be as important for encouraging moral development as playing age-appropriate games.
With the advances in technology we’ve seen over these last two decades, things have changed and will keep changing in the future. One of the instigators of this change is Artificial Intelligence (AI).
AI has seen quite an advancement in a lot of sectors. Automation aims to accomplish the same tasks with less human assistance, as with vehicles: car manufacturers such as Tesla are already using Autopilot technologies to assist human drivers with day-to-day tasks and to run diagnostics on and even repair their cars, while others such as Waymo are working towards vehicles that are fully self-driving, with no human interaction.
The field of Law is however an altogether different kettle of fish.
Today, we see assisting AI that are wrongfully called Robo-Lawyers – while it’s true that there are some tools such as DoNotPay which aim to serve the public, they are nonetheless rooted in case-law made by humans. Solicitors and Barristers also work with vendors such as iManage or NetDocuments which use ‘AI’ that is in fact Machine Learning and Automation. We cannot replace a good old flesh-and-blood Lawyer just yet – perhaps never.
Thus, the term “robo-lawyer” doesn’t fully capture a complete Artificially Intelligent system that can thoroughly act on a matter: it can analyse data and sort files and information, yes, but as with the medical profession, upholding the laws of the land is simply too human a task to be left to a bare-metal server.
Solicitors and Barristers are not out of a job yet, nor do we think they will be in the near or far future.
Artificial Intelligence is a very broad term, well explored by a Lords Committee on the subject. The APPG on AI would likely agree with us at Hayachi Services that rules and laws have been present in human societies for a very long time indeed.
So what will AI change? Can machines learn faster than humans, for example? Not really, no. Machines can process some kinds of information faster than humans, but the human mind is still the most powerful information-processing system on this planet. Perhaps Bio-Computers may one day match a Solicitor’s years of expertise or a Barrister’s way with words, but because law is as organic as the fabric of our culture, it is doubtful that machines will ever ‘lead the field’.
What we consider ‘AI’ today is actually Machine Learning: a very complicated way of saying that we train the computer to run a ‘monkey see, monkey do’ exercise and to detect patterns in line with the variables we humans have set. Indeed a lot of engineering and expertise goes into such an exercise, and it can greatly assist work in the field of Law (such as the LexisNexis precedent database) – but it is what it is.
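That ‘monkey see, monkey do’ exercise can be made concrete with a minimal sketch: a perceptron, one of the simplest machine-learning models, shown learning the logical AND pattern from examples. It detects only the pattern in the variables we feed it; nothing in it resembles legal reasoning.

```python
# Minimal machine-learning sketch: a perceptron learns a pattern (here,
# logical AND) from labelled examples, adjusting its weights whenever
# its prediction disagrees with the example it was shown.

def train_perceptron(examples, epochs=20, lr=0.1):
    w = [0.0, 0.0]  # one weight per input variable we chose to include
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in examples:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - pred
            # Nudge the weights towards the examples it got wrong.
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def predict(w, b, x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

# The pattern we want it to imitate: output 1 only when both inputs are 1.
examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(examples)
print([predict(w, b, x1, x2) for (x1, x2), _ in examples])
# → [0, 0, 0, 1] -- it reproduces the pattern it was shown, nothing more.
```

Real legal-tech systems are vastly larger, but the principle is the same: patterns extracted from human-made examples, within variables humans chose.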
I will always prefer a real lawyer to a robo-lawyer – what about you?
I spent much of the late 1990s as a reserve military intelligence officer involved in the hunt for war criminals in Bosnia. We got most of them in the end. A couple of years later, as a civilian human rights officer, I visited dozens of shallow graves and killing sites in Kosovo to document investigations and exhumations.
Many of these killings were carried out by Serbian and Croatian units which called themselves “special forces”. At that time, I was confident that British special forces were different. Now, not so much.
Military emails recently revealed by the Sunday Times have shown yet again that British special forces seem to have been operating Balkan-style death squads in Afghanistan, killing unarmed men. Last year the BBC raised the alarm over dozens of suspicious deaths during night raids – including of children.
The Ministry of Defence said all these cases and many, many others have been “independently investigated” by the British Army’s own military police. The Ministry does not explain how an investigation of the Army by the Army is “independent”. A subsequent article, written in response to the revelations by well-known historian Andrew Roberts, reveals that the government and others apparently think that some of the alleged activities constitute acceptable behaviour. Roberts’ article was given the headline “Tie its hands and the SAS will lose”. It is worth pointing out that the SAS, along with the rest of the British Army, did lose in Afghanistan. This despite the fact that it is luminously clear that the only hands being “tied” were those of their prisoners. It is also worth saying that their actions have left a poisonous legacy in the country.
Arguments by those who advocate giving “leeway”, as Roberts puts it, for those inclined to war crimes, are important now. They are part of a move by some senior military figures and acolytes in parliament to remove the constraints applied by law on British forces through a proposed piece of legislation called the “Overseas Operations Bill”. This went through its first reading in parliament in March and, if implemented, would make it much harder to prosecute war crimes by the British military.
The proposed law provides a five-year time limit, after which prosecutions for war crimes have to pass through a so-called “triple-lock” procedure before they can proceed. This includes a “presumption against prosecution” once that five-year limit has passed. Cases brought after that time would be “exceptional” and would require the consent of the Attorney General. All this amounts to what is called in legal terms “qualified immunity” for those who have committed war crimes. War criminals simply need to run down the clock.
The basis for this law is the idea of “tank-chasing lawyers” who persecute soldiers with supposedly dishonest claims concerning their activities abroad. However, this is seriously misleading. Criminal prosecutions for war crimes are brought by military prosecutors, not unscrupulous “tank-chasers”. Cases supposedly involving immoral lawyers were brought on behalf of Iraqi and Afghan alleged victims for compensation. These were not war crimes prosecutions. The disgraceful implication of this is that the activities of a single law firm should form the basis for undermining decades of consistent international leadership in this field. Britain has been at the forefront of dealing with war crimes since the Nuremberg tribunals just after the second world war.
This squalid piece of legislation is not going to have an easy ride. Sensible and experienced military voices are now being raised. These officers take seriously their commission from the Queen to carry out their duties “according to the rules and disciplines of war”. David Benest, a leading expert on the history of British war crimes, wrote before his recent death that the bill risks reinforcing an already well-entrenched “cover-up” culture. Equally powerfully, one of the most senior retired officers, Charles Guthrie, a former commander of the SAS, has written that the provisions of the bill “appear to have been dreamt up by those who have seen too little of the world to understand that the rules of war matter”.
There is disquiet elsewhere, and for very good reasons. The chairman of the House of Commons Defence Committee has asked whether discussions have taken place with the International Criminal Court about the bill. Such discussion, had it taken place, might have reminded the relevant politicians that were the bill ever to be made law there is every possibility that the ICC would take over war crimes prosecutions itself. If that were to happen, the death-squad trigger-pullers won’t be the ones in the ICC’s sights. The court will be investigating the generals who ignored, allowed or condoned the soldiers’ actions. It will be looking at the ministers who signed off on the operations, and who failed to ensure adequate oversight of what was happening. Unlike in Bosnia, we won’t need search teams to find them.
Individualistic western societies are built on the idea that no one knows our thoughts, desires or joys better than we do. And so we put ourselves, rather than the government, in charge of our lives. We tend to agree with the philosopher Immanuel Kant’s claim that no one has the right to force their idea of the good life on us.
Artificial intelligence (AI) will change this. It will know us better than we know ourselves. A government armed with AI could claim to know what its people truly want and what will really make them happy. At best, it will use this to justify paternalism; at worst, totalitarianism.
Every hell starts with a promise of heaven. AI-led totalitarianism will be no different. Freedom will become obedience to the state. Only the irrational, spiteful or subversive could wish to choose their own path.
To prevent such a dystopia, we must not allow others to know more about ourselves than we do. We cannot allow a self-knowledge gap.
The All-Seeing AI
In 2019, the billionaire investor Peter Thiel claimed that AI was “literally communist”. He pointed out that AI allows a centralising power to monitor citizens and know more about them than they know about themselves. China, Thiel noted, has eagerly embraced AI.
We already know AI’s potential to support totalitarianism by providing an Orwellian system of surveillance and control. But AI also gives totalitarians a philosophical weapon. As long as we knew ourselves better than the government did, liberalism could keep aspiring totalitarians at bay.
But AI has changed the game. Big tech companies collect vast amounts of data on our behaviour. Machine-learning algorithms use this data to calculate not just what we will do, but who we are.
The accuracy of AI’s predictions will only improve. In the not-too-distant future, as the writer Yuval Noah Harari has suggested, AI may tell us who we are before we ourselves know.
These developments have seismic political implications. If governments can know us better than we can, a new justification opens up for intervening in our lives. They will tyrannise us in the name of our own good.
The philosopher Isaiah Berlin famously distinguished two kinds of freedom. Negative freedom is “freedom from”. It is freedom from the interference of other people or government in your affairs. Negative freedom is no one else being able to restrain you, as long as you aren’t violating anyone else’s rights.
In contrast, positive freedom is “freedom to”. It is the freedom to be master of yourself, freedom to fulfil your true desires, freedom to live a rational life. Who wouldn’t want this?
But what if someone else says you aren’t acting in your “true interest”, and claims to know how you could? If you won’t listen, they may force you to be free – coercing you for your “own good”. This is one of the most dangerous ideas ever conceived. It killed tens of millions of people in Stalin’s Soviet Union and Mao’s China.
The Russian Communist leader, Lenin, is reported to have said that the capitalists would sell him the rope he would hang them with. Peter Thiel has argued that, in AI, capitalist tech firms of Silicon Valley have sold communism a tool that threatens to undermine democratic capitalist society. AI is Lenin’s rope.
Fighting for ourselves
We can only prevent such a dystopia if no one is allowed to know us better than we know ourselves. We must never sentimentalise those who seek such power over us as well-intentioned. Historically, this has only ever ended in calamity.
The problem is not AI improving our self-knowledge. The problem is a power disparity in what is known about us. Knowledge about us exclusively in someone else’s hands is power over us. But knowledge about us in our own hands is power for us.
Many of us have been spending more time at home than ever before, and chances are unless you live by yourself in the middle of nowhere, at some point unwanted noise will have infiltrated your lockdown.
Whether it’s cars passing nearby, a neighbour’s blaring music or the constant drone of a lawnmower, the trouble with sound is that – unlike light – it can be hard to block out completely. This is because it’s a pressure wave in air that readily diffracts around objects and easily passes through porous obstacles such as trees and shrubs.
The wind and temperature gradient in the atmosphere also affect the transmission of noise. This is why we may hear the noise from a distant motorway if the wind is blowing from that direction – or think the motorway has moved to the bottom of the garden on a cold, still morning when there is a temperature inversion (when warmer layers of air sit above colder ones).
Another issue with sound is that people living in a quiet area may be more seriously disturbed by the odd passing vehicle than people living in an area where traffic noise is more constant.
Reducing noise at source is usually the best course of action. Ideally, many of us would like to reduce the number of noisy vehicles passing our homes and gardens but unfortunately, we can’t control this. In the case of road traffic, reducing the speed limit would help – as would a smoother road surface or, better still, a surface that absorbs sound such as porous asphalt. These are all jobs for the highway authority – but they may have more pressing claims on their budgets.
There are, however, things you can do around your house and garden to make things a little more peaceful. A barrier such as a close-boarded fence, earth mound or wall close to the road should help – but it will have to be long enough and high enough to have much effect.
Much depends on where the house is in relation to the road. The aim would be to position any barrier so that the road is not in view from any exposed window or part of the garden.
If noise can’t be controlled over the whole garden then consider making a tranquil zone in part of the garden where you can relax. This might involve building a wall or fence around part of the area to block the major sources of noise while not forgetting that the house itself can act as an effective barrier.
A water feature may also help to mask residual noise. The more natural sounding this is the better – but make sure it’s not too noisy, as this may be disturbing to you or your neighbours.
A study involving brain scans has shown that we process auditory information differently depending on the scene in view. The noise of a sandy beach and of a motorway at a distance are quite similar, but research has shown that when volunteers in an MRI scanner heard the same sound recording while viewing a beach scene rather than a motorway scene, the resulting brain patterns differed significantly. The rated tranquillity also differed significantly.
In fact, research on tranquillity has shown that the rated tranquillity of a place depends on both the percentage of natural features – such as greenery, rock, sand and water – in view and the level of man-made noise.
This means there is a trade-off in the sense that if you cannot control the noise, the perceived tranquillity improves if the amount of greenery or water in view increases. This is worth bearing in mind when creating a tranquil garden space.
Finding tranquillity indoors
Inside the home, some of the same principles apply. Reduce incoming noise by fitting double glazing to windows and doors, and add a thicker layer of insulation in the loft to dampen aircraft noise.
If it proves difficult to control noise in the bedroom, then think about changing rooms so that you sleep on the non-traffic side of the house. Another thought is to include pictures of nature as wall art – the bigger the better – as research has shown that installing pictures of nature scenes on the walls, as well as playing relaxing sea sounds in the background, can significantly improve people’s sense of tranquillity and reduce anxiety in a doctor’s waiting room.
Many of us have enjoyed listening to the birds more often with the reduced traffic levels of lockdown. It would be nice to think the “new normal” would include some of these gains. Hopefully people will realise that many of the journeys they make by car are not strictly necessary. And it’s important not to forget that nature is around us all the time – if we just take a moment to stop and listen.