The Conversation, on Electricity use and Sustainability

Britain’s electricity use is at its lowest for decades – but will never be this low again

Lukasz Pajor / shutterstock
Grant Wilson, University of Birmingham; Joseph Day, University of Birmingham, and Noah Godfrey, University of Birmingham

In 2020, Britain’s electricity use was the lowest it had been since 1983. This wasn’t entirely due to COVID – demand for electricity had been falling for more than a decade anyway, thanks to savings from energy-efficient appliances, moving industry offshore and consumers becoming more careful as costs increased.

But demand will bounce back after COVID. And the electrification of transport and heat, both critical to achieving net-zero emissions, will require lots more electricity in future.

We have looked at the data for electricity use in Great Britain (Northern Ireland is part of a single market on the island of Ireland) over the past year and we believe that there will never again be a year when so little electricity is used.

Britain’s daily electricity demand, 2019 versus 2020

COVID meant less electricity use

Pandemic measures reduced the overall amount of electricity used by 6% in 2020, to the lowest level since 1983. When you look at usage per person the fall in recent years is even more extreme. To find a similar level of electricity use per capita you would have to go back more than 50 years to a time when black and white TVs were still the norm.

Overall, 2020 was not a particularly windy year, but wind still managed to generate more than a quarter of Britain’s electrical energy. Broadly speaking, generation from other renewables and from coal was similar to 2019. Reductions in generation came mostly from gas, while nuclear output also dropped to its lowest level since 1982. Net imports were also down on recent years.

From a climate perspective, major power production was coal-free for more than 5,000 hours in 2020 – more than half the year. This meant the electricity that was generated was on average Britain’s cleanest ever.

The chart above shows that over the past decade, Britain has switched its electricity generation from coal to gas and renewables. The challenge is to continue substituting the remaining fossil fuels while at the same time increasing the total amount generated.

How to power millions of electric cars?

Britain will need to generate more electricity because low-carbon transport and heating rely on it. To get a sense of the scale of the electricity needed for transport, let’s imagine what would happen if all cars and taxis suddenly went electric.

Cars and taxis currently travel nearly 280 billion miles a year in Great Britain. Multiply that by the 24-25 kilowatt hours per 100 miles that the best current electric vehicle technology can achieve, and you get a total of roughly 70 terawatt hours of electricity needed each year (interestingly, a similar value to the total amount of wind generation in 2020).
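
This back-of-the-envelope calculation can be sketched in a few lines of Python (a rough estimate only; real consumption would also depend on the vehicles, driving conditions and charging losses):

# Rough estimate of the extra electricity needed if all GB cars and taxis went electric.
# Figures are those quoted above; real-world efficiency varies by vehicle and conditions.
annual_miles = 280e9            # miles driven by cars and taxis per year in Great Britain
kwh_per_100_miles = 24.5        # roughly the 24-25 kWh per 100 miles of the best current EVs

annual_kwh = annual_miles / 100 * kwh_per_100_miles
annual_twh = annual_kwh / 1e9   # 1 terawatt hour = one billion kilowatt hours

print(f"Estimated extra demand: {annual_twh:.0f} TWh per year")  # prints roughly 69 TWh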

Generating enough electricity to cover these cars and taxis – even ignoring other forms of transport – would take Britain’s annual demand back up to its peak year in 2005.

From gas to electricity

In contrast to the trend towards much cleaner power generation, more than 80% of the energy used to provide warmth in Britain still comes from burning fossil fuels, most commonly in a gas boiler. As with transport, decarbonisation will mean shifting a significant portion of this energy demand from fossil fuels to electricity.

A white box on a wall with copper pipes coming out of it.
Gas boilers are still used to heat most homes in Britain. lovemydesigns / shutterstock

Specifically, this will mean replacing gas boilers with a variety of heat pumps. These devices use electricity to extract ambient thermal energy from the surroundings – the air or the ground – and to “pump” this heat into a building. Around 28,000 heat pumps were installed in 2019, though the government’s target is to fit 600,000 a year by 2028. Clearly a massive and sustained increase in deployment will be required.

Just as electric vehicles require less energy than petrol cars, heat pumps require less input energy than their fossil fuel counterparts. Despite this efficiency benefit, the decarbonisation of heat will probably still require Britain to generate hundreds of terawatt hours more electricity every year. The exact amount ultimately depends on the mix of different low-carbon heating technologies and reduction in heat demand from climate change and building improvements.

All this extra electricity will have to be carefully managed to avoid overloading the network at peak times. Demand for heat is currently seen as less flexible, and it will always be highly seasonal – people want warm houses in colder weather, during the daytime. This differs markedly from transport, which shows a much more consistent pattern of demand throughout the year (notwithstanding the temporary impacts of COVID).

Managing that extra electricity isn’t impossible: users can be provided with incentives to shift their behaviour (why not charge your car overnight, or on particularly windy days when electricity is clean and cheap?), and longer-term energy storage options are being developed. Innovations such as thermal energy storage and active buildings also aim to provide more flexibility to heating.

For those of us who study energy systems, it’s an exciting time. As demand from transport and heat increases, Great Britain will never again use as little electricity as it did in 2020 – and because this means burning less fossil fuel, it’s something to celebrate. The Conversation

Grant Wilson, Lecturer, Energy Informatics Group, Chemical Engineering, University of Birmingham; Joseph Day, Postdoctoral Research Assistant in Energy Informatics, University of Birmingham, and Noah Godfrey, Energy Data Analyst – PhD in Modelling Flexibility in Future UK Energy Systems, University of Birmingham

This article is republished from The Conversation under a Creative Commons license. Read the original article.

The Conversation, on Digital Currencies using Blockchain

Ethereum: what is it and why has the price gone parabolic?

Welcome to web 3.0. Inked Pixels
Paul J Ennis, University College Dublin and Donncha Kavanagh, University College Dublin

The price of the world’s second largest cryptocurrency, ether, hit a new all-time high of US$1,440 (£1,050) on January 19. This breached a previous high set three years ago and gave ether a total value (market capitalisation) of US$160 billion, although it has since fallen back to around US$140 billion.

Ether, which runs on a technology known as the ethereum blockchain, is worth over ten times the price it was when it bottomed during the COVID market panic of March 2020. And the cryptocurrency is still only five years old. In part, this remarkable rise in value is due to excess money flowing into all the leading cryptocurrencies, which are now seen as relatively safe store-of-value assets and a good speculative investment.

Ether/US$ price chart. TradingView

But ether’s price rise has even outstripped that of the number one cryptocurrency, bitcoin, which has “only” seen a seven-fold increase since March. Ether has outperformed partly because of several improvements and new features due to be rolled out over the next few months. So what are ether and ethereum, and why is this cryptocurrency now worth more than corporate giants such as Starbucks and AstraZeneca?

Ether and bitcoin

Blockchains are online ledgers that keep permanent tamper-proof records of information. These records are continually verified by a network of computer nodes similar to servers, which are not centrally controlled by anyone. Ether is just one of over 8,000 cryptocurrencies that use some form of this technology, which was invented by the anonymous “Satoshi Nakamoto” when he released bitcoin over a decade ago.

The ethereum blockchain was first outlined in 2013 by Vitalik Buterin, a 19-year-old prodigy who was born in Russia but mostly grew up in Canada. After crowdfunding and development in 2014, the platform was launched in July 2015.

As with the bitcoin blockchain, each ethereum transaction is confirmed when the nodes on the network reach a consensus that it took place – these verifiers are rewarded in ether for their work, in a process known as mining.

But the bitcoin blockchain is confined to enabling digital, decentralised money – meaning money that is not issued by any central institution, unlike, say, dollars. Ethereum’s blockchain is categorically different in that it can host both other digital tokens or coins and decentralised applications.

Decentralised applications or “dapps” are open-source programs developed by communities of coders not attached to any company. Any changes to the software are voted on by the community using a consensus mechanism.

Perhaps the best known applications running on the ethereum blockchain are “smart contracts”, which are programs that automatically execute all or parts of an agreement when certain conditions are met. For instance, a smart contract could automatically reimburse a customer if, say, a flight was delayed more than a prescribed amount of time.
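
To make the idea concrete, here is a minimal sketch, in ordinary Python rather than a blockchain language, of the conditional logic such a contract might encode. The threshold and payout values are hypothetical; a real ethereum smart contract would be written in a language such as Solidity and executed on-chain.

# Hypothetical flight-delay reimbursement rule, sketched in plain Python.
# A real smart contract would express the same condition in Solidity and
# execute automatically on the ethereum blockchain once the delay is reported.
DELAY_THRESHOLD_MINUTES = 120   # assumed delay beyond which compensation is due
PAYOUT_ETHER = 0.05             # assumed payout per affected passenger

def settle_claim(reported_delay_minutes: int) -> float:
    """Return the payout owed for a reported flight delay."""
    if reported_delay_minutes >= DELAY_THRESHOLD_MINUTES:
        return PAYOUT_ETHER     # condition met: reimburse the customer automatically
    return 0.0                  # otherwise nothing is paid out

print(settle_claim(150))  # 0.05 -> a long delay triggers the payout
print(settle_claim(30))   # 0.0  -> a short delay does not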

Many of the dapp communities also operate what are known as decentralised autonomous organisations, or DAOs. These are essentially alternatives to companies and are seen by many as the building blocks of the next phase of the internet, or “web 3.0”. A good example is the burgeoning trading exchange Sushiswap.

Ethereum has evolved and developed since its launch six years ago. In 2016, a set of smart contracts known as “The DAO” raised a record US$150 million in a crowdsale, but was quickly exploited by a hacker who siphoned off one-third of the funds. However, since then, the ethereum ecosystem has matured considerably. While hacks and scams remain common, the overall level of professionalism appears to have improved dramatically.

Why the price explosion

Financial interest in ether tends to follow in the wake of bitcoin rallies because it is the second-largest cryptocurrency and, as such, quickly draws the attention of the novice investor. All the same, there are other factors behind its recent rally.

The first is the pace of innovation on the platform. Most activity in the cryptocurrency space happens on ethereum. In 2020, we saw the emergence of decentralised finance (DeFi). DeFi is analogous to the mainstream financial world, but with the middleman banks cut out.

Users can borrow, trade, lend and invest through autonomous smart contracts via protocols like Compound, Aave and Yearn Finance. It sounds like science fiction, but this is no hypothetical market – approximately US$24 billion is locked into various DeFi projects right now. Importantly, DeFi allows users to generate income on their cryptocurrency holdings, especially their ether tokens.

The second factor behind the ether surge is the launch of ethereum 2.0. This upgrade addresses major concerns impacting the current version of ethereum. In particular, it will reduce transaction fees – especially useful in DeFi trading, where each transaction can end up costing the equivalent of tens of US dollars.

Ethereum 2.0 will also eliminate the environmentally wasteful mining currently required to make the ethereum blockchain function (the same is true of many other cryptocurrencies, including bitcoin). Within the year, ethereum should be able to drop the need for vast industrial mining warehouses that consume huge amounts of energy.

Instead, transactions will be validated using a different system known as “proof-of-stake”. The sense that ethereum addresses problems like these quickly, rather than letting them sit, could prove a major differentiator from the sometimes sluggish and conservative pace of bitcoin’s development culture.

A final factor is the launch of ethereum futures trading on February 8. This means that traders will be able to speculate on what ether will be worth at a given date in the future for the first time – a hallmark of any mature financial asset. Some analysts have said the recent bitcoin rally has been fuelled by traditional investment firms, and the launch of ethereum futures is often touted as opening the doors for the same price action.

However, as every seasoned cryptocurrency user knows, both currencies are extremely volatile and are as liable to crash by extremes as rise by them. Bitcoin’s price fell 85% in the year after the last bull market in 2017, while ether was down by 95% at one stage from its previous high of US$1,428.

Whatever the valuation, the future of ethereum as a platform looks bright. Its challenge is ultimately external: projects such as Cardano and Polkadot, created by individuals who helped launch ethereum itself, are attempting to steal ethereum’s crown.

But as bitcoin has shown, first-mover advantage matters in cryptocurrency, and despite bitcoin’s relative lack of features it is unlikely to be moved from its dominant position for some time. The same is most likely true for the foreseeable future with ethereum. The Conversation

Paul J Ennis, Lecturer/Assistant Professor in Management Information Systems, University College Dublin and Donncha Kavanagh, Professor of Information & Organisation, University College Dublin

This article is republished from The Conversation under a Creative Commons license. Read the original article.

The Conversation, on Managing Screen Time

Five ways to manage your screen time in a lockdown, according to tech experts

shutterstock
John McAlaney, Bournemouth University; Deniz Cemiloglu, Bournemouth University, and Raian Ali, Hamad Bin Khalifa University

The average daily time spent online by adults increased by nearly an hour during the UK’s spring lockdown when compared to the previous year, according to communications regulator Ofcom. With numerous countries back under severe pandemic restrictions, many of us once again find ourselves questioning whether our heavy reliance on technology is impacting our wellbeing.

It’s true that digital devices have provided new means of work, education, connection, and entertainment during lockdown. But the perceived pressure to be online, the tendency to procrastinate to avoid undertaking tasks, and the use of digital platforms as a way to escape distress all have the potential to turn healthy behaviours into habits. This repetitive use can develop into addictive patterns, which can in turn affect a user’s wellbeing.

In our recent research, we explored how to empower people to have healthier and more productive relationships with digital technology. Our findings can be applied to those suffering from digital addiction as well as those who may feel their digital diet has ballooned unhealthily in the solitude and eventlessness of lockdown.

Screen time and addiction

Digital addiction refers to the compulsive and excessive use of digital devices. The design of digital platforms themselves contributes to this addictive use. Notifications, news feeds, likes and comments have all been shown to contribute towards a battle for your attention, which leads users to increase the time they spend looking at screens.

Screen time is an obvious measure of digital addiction, although researchers have noted that there is no simple way to determine how much screen time one can experience before it becomes problematic. As such, there is a continued lack of consensus on how we should think about and measure digital addiction.

Woman video conferences with others on a screen
Many of us have turned to video conferencing to keep in touch with friends and family. shutterstock

During a global pandemic, when there often feels like no alternative to firing up Netflix, or video conferencing with friends and family, screen time as an indicator of digital addiction is clearly ineffective. Nonetheless, research conducted on digital addiction intervention and prevention does provide insights on how we can all engage with our digital technologies in a healthier way during a lockdown.

1. Setting limits

During the course of our research, we found that effective limit setting can motivate users to better control their digital usage. When setting limits, whatever goal you’re deciding to work towards should be aligned with the five “SMART” criteria. That means the goal needs to be specific, measurable, attainable, relevant and time-bound.

For example, instead of framing your goal as “I will cut down my digital media use”, framing it as “I will spend no more than one hour watching Netflix on weekdays” will enable you to plan effectively and measure your success objectively.

2. Online support groups

It might seem a little paradoxical, but you can actually use technology to help promote greater control over your screen time and digital overuse. One study has found that online peer support groups — where people can discuss their experiences with harmful technology use and share information on how to overcome these problems — can help people adjust their digital diet in favour of their personal wellbeing. Even an open chat with your friends can help you understand when your tech use is harmful.

3. Self-reflection

Meanwhile, increasing your sense of self-awareness about addictive usage patterns can also help you manage your digital usage. You can do this by identifying the applications you use repetitively and recognising the triggers that prompt this excessive consumption.

Self-awareness can also be attained by reflecting on emotional and cognitive processing. This involves recognising feelings and psychological needs behind excessive digital usage. “If I don’t instantly reply to a group conversation, I will lose my popularity” is a problematic thought that leads to increased screen time. Reflecting on the veracity of such thoughts can help release people from addictive patterns of digital usage.

4. Know your triggers

Acquiring self-awareness of addictive usage patterns can also help us identify the unsatisfied needs that trigger digital overuse. When we do this, we pave the way to defining alternative behaviours and interests that satisfy those needs in different ways.

Mindfulness meditation, for instance, could be an alternative way of relieving stress, fears, or anxiety that currently leads users to digital overuse. If you feel your digital overuse might simply be due to boredom, then physical activity, cooking, or adopting offline hobbies can all provide alternative forms of entertainment. Again, technology can actually help enable this, for example by letting you create online groups for simultaneous exercising, producing a hybrid solution to unhealthy digital habits.

Father and daughter have fun cooking in kitchen
Cooking is one alternative to unhealthy digital habits. shutterstock

5. Prioritise the social

We must also remember that our relationship with digital media reflects our inner drives. Humans are innately social creatures, and socialising with others is important to our mental wellbeing. Social media can enhance our opportunities for social contact, and support several positive aspects of mental wellbeing, such as peer support and the enhancement of self-esteem. The engagement with media to purposefully socialise during a lockdown can support our mental health, rather than being detrimental to our wellbeing.

Ultimately, technology companies also have a responsibility to both understand and be transparent about how the design of their platforms may cause harm. These companies should empower users with explanations and tools to help them make informed decisions about their digital media use.

While we may consider this a legitimate user requirement, technology companies seem to be at the very early stages of delivering it. In the meantime, reflecting on when and why we turn to our screens is a good basis upon which to form positive digital habits during new lockdowns imposed this year. The Conversation

John McAlaney, Associate Professor in Psychology, Bournemouth University; Deniz Cemiloglu, Researcher, Bournemouth University, and Raian Ali, Professor, College of Science and Engineering, Hamad Bin Khalifa University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

The Conversation, on Digital Hoarders

Digital hoarders: we’ve identified four types – which are you?

rawf8/Shutterstock
Nick Neave, Northumbria University, Newcastle

How many emails are in your inbox? If the answer is thousands, or if you often struggle to find a file on your computer among its cluttered hard drive, then you might be classed as a digital hoarder.

In the physical world, hoarding disorder has been recognised as a distinct psychiatric condition among people who accumulate excessive amounts of objects to the point that it prevents them living a normal life. Now, research has begun to recognise that hoarding can be a problem in the digital world, too.

A case study published in the British Medical Journal in 2015 described a 47-year-old man who, as well as hoarding physical objects, took around 1,000 digital photographs every day. He would then spend many hours editing, categorising, and copying the pictures onto various external hard drives. He was autistic, and may have been a collector rather than a hoarder — but his digital OCD tendencies caused him much distress and anxiety.

The authors of this research paper defined digital hoarding as “the accumulation of digital files to the point of loss of perspective which eventually results in stress and disorganisation”. By surveying hundreds of people, my colleagues and I found that digital hoarding is common in the workplace. In a follow-up study, in which we interviewed employees in two large organisations who exhibited lots of digital hoarding behaviours, we identified four types of digital hoarder.

“Collectors” are organised, systematic and in control of their data. “Accidental hoarders” are disorganised, don’t know what they have, and don’t have control over it. The “hoarder by instruction” keeps data on behalf of their company (even when they could delete much of it). Finally, “anxious hoarders” have strong emotional ties to their data — and are worried about deleting it.

Working life

Although digital hoarding doesn’t interfere with personal living space, it can clearly have a negative impact upon daily life. Research also suggests digital hoarding poses a serious problem to businesses and other organisations, and even has a negative impact on the environment.

To assess the extent of digital hoarding, we initially surveyed more than 400 people, many of whom admitted to hoarding behaviour. Some people reported that they kept many thousands of emails in inboxes and archived folders and never deleted their messages. This was especially true of work emails, which were seen as potentially useful as evidence of work undertaken, a reminder of outstanding tasks, or were simply kept “just in case”.

Man at computer confronted by many email notifications
Saving work emails is a common form of digital hoarding. Rawpixel.com/Shutterstock

Interestingly, when asked to consider the potentially damaging consequences of not deleting digital information – such as the cybersecurity threat to confidential business information – people were clearly aware of the risks. Yet the respondents still showed a great reluctance to hit the delete button.

At first glance, digital hoarding may not appear much of a problem — especially if digital hoarders work for large organisations. Storage is cheap and effectively limitless thanks to internet “cloud” storage systems. But digital hoarding may still lead to negative consequences.

First, storing thousands of files or emails is inefficient. Wasting large amounts of time looking for the right file can reduce productivity. Second, the more data is kept, the greater the risk that a cyberattack could lead to the loss or theft of information covered by data protection legislation. In the EU, new GDPR rules mean companies that lose customer data to hacking could be hit with hefty fines.

The final consequence of digital hoarding — in the home or at work — is an environmental one. Hoarded data has to be stored somewhere. The reluctance to have a digital clear-out can contribute to the development of increasingly large servers that use considerable amounts of energy to cool and maintain them.

A long corridor of servers
Data stored online is saved on servers, which have a large carbon footprint. sedcoret/Shutterstock

How to tackle digital hoarding

Research has shown that physical hoarders can develop strategies to reduce their accumulation behaviours. While people can be helped to stop accumulating, they are more resistant when it comes to actually getting rid of their cherished possessions — perhaps because they “anthropomorphise” them, treating inanimate objects as if they had thoughts and feelings.

We don’t yet know enough about digital hoarding to see whether similar difficulties apply, or whether existing coping strategies will work in the digital world, too. But we have found that asking people how many files they think they have often surprises and alarms them, forcing them to reflect on their digital accumulation and storing behaviours.

As hoarding is often associated with anxiety and insecurity, addressing the source of these negative emotions may alleviate hoarding behaviours. Workplaces can do more here, by reducing non-essential email traffic, making it very clear what information should be retained or discarded, and by delivering training on workplace data responsibilities.

In doing so, companies can reduce the anxiety and insecurity related to getting rid of obsolete or unnecessary information, helping workers to avoid the compulsion to obsessively save and store the bulk of their digital data. The Conversation

Nick Neave, Associate Professor in Psychology, and Director of the Hoarding Research Group, Northumbria University, Newcastle

This article is republished from The Conversation under a Creative Commons license. Read the original article.

The Conversation, on electricity generated from microbes

Four ways microbial fuel cells might revolutionise electricity production in the future

Jackie Niam/Shutterstock
Godfrey Kyazze, University of Westminster

The world population is estimated to reach 9.5 billion by 2050. Given that most of our current energy is generated from fossil fuels, this creates significant challenges when it comes to providing enough sustainable electricity while mitigating climate change.

One idea that has gained traction over recent years is generating electricity using bacteria in devices called microbial fuel cells (MFCs). These fuel cells rely on certain naturally occurring microorganisms that are able to “breathe” metals, exchanging electrons to create electricity. This process can be fuelled using substances called substrates, which include organic materials found in wastewater.

At the moment microbial fuel cells are able to generate electricity to power small devices such as calculators, small fans and LEDs – in our lab we powered the lights on a mini Christmas tree using “simulated wastewater”. But if the technology is scaled up, it holds great promise.

How they work

MFCs use a system of anodes and cathodes – electrodes that pass a current either in or out. Common MFC systems consist of an anode chamber and a cathode chamber separated by a membrane. The bacteria grow on the anode and convert the substrates into carbon dioxide, protons and electrons.

The electrons that are produced are then transferred via an external circuit to the cathode chamber, while the protons pass through the membrane. In the cathode chamber, a reaction between the protons and the electrons uses up oxygen and forms water. And as long as substrates are continually converted, electrons will flow – which is what electricity is.
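
As a concrete illustration, and assuming acetate as a representative substrate (the exact stoichiometry depends on what the bacteria are fed), the two half-reactions can be written as:

At the anode: $\mathrm{CH_3COO^-} + 2\,\mathrm{H_2O} \rightarrow 2\,\mathrm{CO_2} + 7\,\mathrm{H^+} + 8\,\mathrm{e^-}$

At the cathode: $2\,\mathrm{O_2} + 8\,\mathrm{H^+} + 8\,\mathrm{e^-} \rightarrow 4\,\mathrm{H_2O}$

The electrons travel from anode to cathode round the external circuit, and it is this flow that can be put to work as electrical power.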

Generating electricity through MFCs has a number of advantages: systems can be set up anywhere; they create less “sludge” than conventional methods of wastewater treatment such as activated sludge systems; they can be small-scale yet a modular design can be used to build bigger systems; they have a high tolerance to salinity; and they can operate at room temperature.

The availability of a wide range of renewable substrates that can be used to generate electricity in MFCs has the potential to revolutionise electricity production in the future. Such substrates include urine, organic matter in wastewater, substances secreted by living plants into the soil (root exudates), inorganic wastes like sulphides and even gaseous pollutants.

1. Pee power

Biodegradable matter in waste materials such as faeces and urine can be converted into electricity. This was demonstrated in a microbial fuel cell latrine in Ghana, which suggested that toilets could in future be potential power stations. The latrine, which was operated for two years, was able to generate 268 nW/m² of electricity, enough to power an LED light inside the latrine, while removing nitrogen from urine and composting the faeces.

Schematic of an MFC latrine. Cynthia Castro et al. Journal of Water, Sanitation and Hygiene for Development, 2014.

For locations with no grid electricity or for refugee camps, the use of waste in latrines to produce electricity could truly be revolutionary.

2. Plant MFCs

Another renewable and sustainable substrate that MFCs could use to generate electricity is plant root exudates, in what are called plant MFCs. When plants grow they produce carbohydrates such as glucose, some of which are exuded into the root system. The microorganisms near the roots convert the carbohydrates into protons, electrons and carbon dioxide.

In a plant MFC, the protons are transferred through a membrane and recombine with oxygen to complete the circuit of electron transfer. By connecting a load into the circuitry, the electricity being generated can be harnessed.

Plant MFCs could revolutionise electricity production in isolated communities that have no access to the grid. In towns, streets could be lit using trees.

3. Microbial desalination cells

Another variation of microbial fuel cells are microbial desalination cells. These devices use bacteria to generate electricity, for example from wastewater, while simultaneously desalinating water. The water to be desalinated is put in a chamber sandwiched between the anode and cathode chambers of the MFC, separated from them by membranes that let through only negatively charged ions (anions) on one side and only positively charged ions (cations) on the other.

When the bacteria in the anode chamber consume the wastewater, protons are released. These protons cannot pass through the anion membrane, so negative ions move from the salty water into the anode chamber. At the cathode protons are consumed, so positively charged ions move from the salty water to the cathode chamber, desalinating the water in the middle chamber. Ions released in the anode and cathode chambers help to improve the efficiency of electricity generation.

Conventional water desalination is currently very energy intensive and hence costly. A process that achieves desalination on a large scale while producing (not consuming) electricity would be revolutionary.

Desalination plant in Hamburg. Current desalination technology is very energy intensive. Andrea Izzotti/Shutterstock

4. Improving the yield of natural gas

Anaerobic digestion – where microorganisms are used to break down biodegradable or waste matter without needing oxygen – is used to recover energy from wastewater by producing biogas that is mostly methane – the main ingredient of natural gas. But this process is usually inefficient.

Research suggests that the microbial groups used within these digesters share electrons – what has been dubbed interspecies electron transfer – opening up the possibility that supplying electrical energy could influence their metabolism.

By supplying a small voltage to anaerobic digesters – a process called electromethanogenesis – the methane yield (and hence the electricity that could be recovered from combined heat and power plants) can be significantly improved.

While microbial fuel cells are able to generate electricity to power small devices, researchers are investigating ways to scale up the reactors to increase the amount of power they can generate, and to further understand how extracellular electron transfer works. A few start-up companies such as Robial and Plant-e are beginning to commercialise microbial fuel cells. In the future, microbial fuel cells could even be used to generate electricity in regenerative life support systems during long-term human space missions. It’s early days but the technology holds much promise. The Conversation

Godfrey Kyazze, Reader in Bioprocess Technology, University of Westminster

This article is republished from The Conversation under a Creative Commons license. Read the original article.

The Conversation, on manufacturing quieter drones

Make drones sound less annoying by factoring in humans at the design stage

Shutterstock/DmitryKalinovsky
Antonio J Torija Martinez, University of Salford

These days almost everyone has either flown a drone or listened to the nasty whining sound they produce. Although small drones (up to 20kg) are about 40 decibels quieter than conventional civil aircraft, they produce a high pitched noise – which people tend to find very annoying.

One Nasa study found that drone sounds were more annoying than those made by road vehicles. And my own research has found that the noise of drones is less preferable than that of civil aircraft – even at the same volume.

Part of the problem is that drones often fly at relatively low altitudes over populated areas that are not normally exposed to aircraft noise. This is likely to lead to tensions within the exposed communities. Unquestionably, if the noise issues are not tackled appropriately, they could derail the wider adoption and commercialisation of drones and put at risk the significant societal benefits that they could bring.

For example, small to medium size drones are already used for multiple applications, such as medical deliveries and searching for missing persons. Another innovation in commercial aviation is the development of electric vertical takeoff and landing (and possibly autonomous) vehicles to transport people in cities.

Several “urban air mobility” vehicles, or “flying taxis” are currently being developed by different aircraft manufacturers. Both drones and flying taxis will produce sounds significantly different from conventional civil aircraft and will share similar issues regarding noise annoyance.

In 2019, I started a line of research which aimed to answer two big questions: how will communities react to these new vehicles with unconventional noise signatures when they begin to operate at scale? And how can the design of these new vehicles be improved to protect the health and the quality of life of the people living in those communities?

To answer the first question, we investigated how a drone operation could influence the perception of a series of typical sound environments in cities. As drones cannot be flown closer to people than 50m, virtual reality techniques were used to produce highly realistic scenarios with a drone hovering in a selection of urban locations.

This laboratory study found that the noise generated by a small quadcopter hovering significantly affected the perception of the sound environment. For instance, an important increase in noise annoyance was reported with the drone hovering, particularly in locations with low volumes of road traffic. This suggests that the noise produced by road traffic could make drone noise less noticeable, so operating drones along busy roads might mitigate the increase in noise impact on the community.

We are now testing a wide variety of drones, with different operating manoeuvres. We seek to better understand and predict human responses to the drone sounds and to gather meaningful evidence to further develop the regulation of the sounds they produce.

Perception-influenced engineering

By integrating human responses into the design process, the most undesirable noises can be avoided in the earliest stages of vehicle development.

This can either be done directly with subjective testing (human participants assessing and providing feedback for a series of drone noise samples) or through the use of so-called psycho-acoustic metrics which are widely adopted in the automotive industry. These metrics allow an accurate representation of how different sound features (pitch, temporal variations, tones) are perceived. We want to use them to inform the design of drones. For instance, optimising the position of rotors to make drones sound less annoying.

The combination of virtual reality techniques and psycho-acoustic methods to inform the design and operation of drones will avoid costly and inefficient ad-hoc corrections at later stages, going beyond the traditional approach for aircraft noise assessment. But more importantly, if drone manufacturers incorporate these strategies into their designs, they might just build machines that are not only efficient, but also just that little bit less irritating. The Conversation

Antonio J Torija Martinez, Lecturer in Acoustic Engineering, University of Salford

This article is republished from The Conversation under a Creative Commons license. Read the original article.

New Year? New Infrastructure – 2021

Adieu 2020!

The year 2021 is only a few weeks away now, and many organisations large and small are still weighing up further digital transformation plans for next year.

Old technology will still affect your performance as a business with or without Covid-19

How can you consistently and reliably transform across a distributed workforce though? Planning. Not a sexy answer, but incredibly effective.

If you want to, for example, take advantage of Dell’s Financial Services offering to spread the cost of your Dell purchases, you could readily deploy more than 3,000 laptops across the UK within about three months with proper planning.

Digital Transformation for the Cloud

What about the Cloud, we hear you say? Cloud computing isn’t new, and there is every likelihood that your firm already uses a Public and a Private Cloud: ‘Public’ as in run by an expert third party, ‘Private’ as in run in-house.

The key question for 2021 is: you likely have a Cloud already, so what now? How do you manage, re-deploy, configure and migrate the Cloud[s] of your supplier[s]?

One fundamental preparatory step for any cloud transformation (as above) is a VPN. With one you can run fire-and-forget systems to consistently deploy applications, security fixes and updates to remote sites (be they a kitchen, sofa, or library).

Red Hat CloudForms – the answer to a distributed cloud

When it comes to Cloud Computing, the last thing you want is vendor lock-in. Isn’t that why you moved away from an on-premises datacentre in the first place? To be agile; to be cool again?

The first thing you should do prior to any Cloud Digital Transformation in 2021 is acquire an unparalleled Vendor-Management toolbox. Red Hat is the largest Open Source technology provider in the world, and holds 80% of the Linux market share because, frankly, they deserve to – they’ve earned it through constant recognition of what clients want.

Manage your cloud, and then start to look at the most cost-effective vendors with the greatest value-proposition for your firm.

Renovating your Castle

Your Datacentre is like a Castle: it has multiple networked entry points and exits, secret passageways, and high politics when it comes to managing its affairs.

Datacentres age, and they only get older unless we invest in them.

Renovating a Castle is no mean feat and your firm likely would enjoy working with a vendor that is flexible, timely and has done it before.

Welcome to the Dell ecosystem

Dell Technologies are the main supplier to Governments across the world, including the UK Government, because expertise matters. If you lack the internal resources to renovate your Datacentre, simply ask: we and Dell will provide the expertise for you.

Dell Financial Services also allows your firm to spread the cost of best-of-breed technologies over many years, allowing you to promptly extract the value of the latest tech in your Datacentre from the outset.

The Conversation, on personality profiling using VR

How we discovered that VR can profile your personality

Mark Nazh/Shutterstock
Stephen Fairclough, Liverpool John Moores University

Virtual reality (VR) has the power to take us out of our surroundings and transport us to far-off lands. From a quick round of golf, to fighting monsters or going for a skydive, all of this can be achieved from the comfort of your home.

But it’s not just gamers who love VR and see its potential. VR is used a lot in psychology research to investigate areas such as social anxiety, moral decision-making and emotional responses. And in our new research we used VR to explore how people respond emotionally to a potential threat.

We knew from earlier work that being high up in VR provokes strong feelings of fear and anxiety. So we asked participants to walk across a grid of ice blocks suspended 200 metres above a snowy alpine valley.

We found that as we increased the precariousness of the ice block path, participants’ behaviour became more cautious and considered – as you would expect. But we also found that how people behave in virtual reality can provide clear evidence of their personality: we were able to pinpoint participants with a certain personality trait based on the way they behaved in the VR scenario.

While this may be an interesting finding, it raises obvious concerns about people’s data. Technology companies could profile people’s personalities via their VR interactions and then use this information to target advertising, for example. And this clearly raises questions about how data collected through VR platforms can be used.

Virtual fall

As part of our study, we used head-mounted VR displays and handheld controllers, but we also attached sensors to people’s feet. These sensors allowed participants to test out a block before stepping onto it with both feet.

As participants made their way across the ice, some blocks would crack and change colour when participants stepped onto them with one foot or both feet. As the experiment progressed, the number of crack blocks increased.

We also included a few fall blocks. These treacherous blocks were identical to crack blocks until activated with both feet, when they shattered and participants experienced a short but uncomfortable virtual fall.

We found that as we increased the number of crack and fall blocks, participants’ behaviour became more cautious and considered. We saw a lot more testing with one foot to identify and avoid the cracks and more time spent considering the next move.

But this tendency towards risk-averse behaviour was more pronounced for participants with a higher level of a personality trait called neuroticism. People with high neuroticism are more sensitive to negative stimuli and potential threat.

Personality and privacy

We had participants complete a personality scale before performing the study. We specifically looked at neuroticism, as this measures the extent to which each person is likely to experience negative emotions such as anxiety and fear. And we found that participants with higher levels of neuroticism could be identified in our sample based on their behaviour. These people did more testing with one foot and spent longer standing on “safe” solid blocks when the threat was high.

Neuroticism is one of the five major personality traits most commonly used to profile people. These traits are normally assessed by a self-report questionnaire, but can also be assessed based on behaviour – as demonstrated in our experiment.

A teenager playing a virtual reality game while wearing a VR headset.
As advances in technology continue to develop, so too does the power of surveillance. insta_photos/Shutterstock

Our findings show how users of VR could have their personality profiled in a virtual world. This approach, where private traits are predicted based on implicit monitoring of digital behaviour, was demonstrated with a dataset derived from Facebook likes back in 2013. This paved the way for controversial commercial applications and the Cambridge Analytica scandal – when psychological profiles of users were allegedly harvested and sold to political campaigns. And our work demonstrates how the same approach could be applied to users of commercial VR headsets, which raises major concerns for people’s privacy.

Users should know if their data is being tracked, whether historical records are kept, whether data can be traced to individual accounts, along with what the data is used for and who it can be shared with. After all, we wouldn’t settle for anything less if such a comprehensive level of surveillance could be achieved in the real world. The Conversation

Stephen Fairclough, Professor of Psychophysiology in the School of Psychology, Liverpool John Moores University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

The Conversation, on Spreadsheet errors

Excel errors: the UK government has an embarrassingly long history of spreadsheet horror stories

TeodorLazarev/Shutterstock
Simon Thorne, Cardiff Metropolitan University

When the UK prime minister, Boris Johnson, first said the country would develop a £12 billion “world-beating” system for testing and tracing cases of COVID-19, few people probably imagined that it would be based around a £120 generic spreadsheet program. Yet the news that the details of 16,000 positive test cases had been lost because of an error made with Microsoft Excel revealed that was exactly the case.

But perhaps we shouldn’t be that surprised. Spreadsheets are ubiquitous. They can be found in critical operations in practically every industry. They are highly adaptable programming environments that can be applied to many different tasks.

They are also very easy to misuse or make mistakes with. As a result, almost everyone has their own spreadsheet horror stories. And the UK government in particular has a long history of embarrassing errors.

In 2012, the Department for Transport was forced into an embarrassing and costly retraction when it realised its spreadsheet-based financial model for awarding the west coast rail franchise contained flawed assumptions and was inconsistently communicated to bidding companies. The Department had to retract and re-tender the £9 billion, ten-year contract, refunding £40 million to the four bidders. The entire episode was thought to have cost the taxpayer up to £300 million.

In 1999, the government-owned company British Nuclear Fuels Limited admitted that some of its staff had falsified three years’ worth of safety data on fuel it was exporting by copying and pasting reams of data in spreadsheets. Customers immediately ceased trading with BNFL and insisted the UK government take the defective shipments of nuclear fuel back to the UK and pay compensation.

While this was a deliberate act rather than a software error, spreadsheets should not have been the tool of choice in this scenario because they are open to manipulation and error. A bespoke system should have existed for this important safety-critical process.

Woman at computer with spreadsheet on screen and head in hands.
Everyone has a horror story. Andrey Popov/Shutterstock

Perhaps the most significant spreadsheet problem occurred in 2010, when then-chancellor George Osborne based his justification for a decade of public austerity on the conclusions of a research paper, Growth in a Time of Debt. The paper, published by Harvard University economics professors Carmen Reinhart and Kenneth Rogoff, asserted that a country’s economy will shrink when its debt exceeds 90% of GDP.

But this analysis was flawed. The spreadsheet used to calculate the figures omitted some rows of relevant data from the calculation. In fact, the full data showed that debt in excess of 90% of GDP was still, on average, associated with positive economic growth. And so the argument for austerity was also flawed.
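
To see how easily an omission like this can change a conclusion, here is a small, purely illustrative Python sketch. The growth figures are invented for the example; the real analysis averaged growth rates across many countries, and the spreadsheet error excluded several of them from the calculation.

# Illustrative only: how leaving rows out of an average can flip a conclusion.
# The figures below are invented; they are not Reinhart and Rogoff's data.
growth_rates = {
    "Country A": -1.2,
    "Country B": -0.5,
    "Country C": 0.1,
    "Country D": 2.4,
    "Country E": 3.0,
}

full_average = sum(growth_rates.values()) / len(growth_rates)

# A spreadsheet range that stops two rows short silently drops Countries D and E.
truncated = list(growth_rates.values())[:3]
truncated_average = sum(truncated) / len(truncated)

print(f"Average over all rows:      {full_average:.2f}%")       # 0.76%  -> growth
print(f"Average over partial range: {truncated_average:.2f}%")  # -0.53% -> apparent contraction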

Yet as a result of this argument, the UK suffered significant cuts to welfare and public services, including health and social care. In 2017, a paper in the British Medical Journal attributed 120,000 excess deaths to these cuts.

Not enough testing

The underlying problem with spreadsheets is how easy it is to use them without applying adequate scrutiny, oversight and validation. They can include many complex customised algorithms but are not typically built by software engineers who are trained to create reliable software.

Instead, research shows, they are often built without sufficient planning, any control process or testing. This results in lurking data integrity problems that, given the right circumstances, can suddenly cause catastrophe.

Such a lack of testing appears to have been exactly the problem with the spreadsheet used by the test and trace system, which lost 16,000 entries when data was imported into an older file format that could hold less data. Clearly the importing of data was not tested properly, and a bug went unnoticed, eventually causing a catastrophic failure of the system in operation. If the importing of data had been tested, the problem would have been noticed and corrected.
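
Part of the problem is well known: the legacy .xls worksheet format holds at most 65,536 rows, while the newer .xlsx format holds over a million, so rows beyond the old limit are simply dropped. Even a very small automated check, sketched below in Python with illustrative row counts, would have flagged the loss before the data went any further:

# Minimal sanity check: compare row counts before and after an import or conversion.
# Row counts here are illustrative; hook this up to the real source and target files.
XLS_ROW_LIMIT = 65_536  # maximum rows per worksheet in the legacy .xls format

def check_no_truncation(rows_in_source: int, rows_after_import: int) -> None:
    if rows_in_source > XLS_ROW_LIMIT:
        print("Warning: source exceeds the .xls row limit; use .xlsx, CSV or a database.")
    if rows_after_import < rows_in_source:
        raise ValueError(
            f"Import lost {rows_in_source - rows_after_import} rows "
            f"({rows_in_source} in source, {rows_after_import} after import)."
        )

# Example: a source with more rows than the old format can hold.
check_no_truncation(rows_in_source=80_000, rows_after_import=65_536)  # warns, then raises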

Unless we wish to keep repeating the same mistakes with spreadsheets, government needs to take this software more seriously. It needs to apply the same engineering controls that it would if developing in a formal language and prevent the inappropriate use of spreadsheets.

This isn’t the first time government spreadsheet misuse has been connected with life or death decisions. And unless things change, it probably won’t be the last. The Conversation

Simon Thorne, Senior Lecturer in Computing and Information Systems, Cardiff Metropolitan University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Coronavirus: The Conversation, on AI and treating Covid-19

Teaching computers to read health records is helping fight COVID-19 – here’s how

James Teo, King’s College London and Richard Dobson, King’s College London

Medical records are a rich source of health data. When combined, the information they contain can help researchers better understand diseases and treat them more effectively. This includes COVID-19. But to unlock this rich resource, researchers first need to read it.

We may have moved on from the days of handwritten medical notes, but the information recorded in modern electronic health records can be just as hard to access and interpret. It’s an old joke that doctors’ handwriting is illegible, but it turns out their typing isn’t much better.

The sheer volume of information contained in health records is staggering. Every day, healthcare staff in a typical NHS hospital generate so much text it would take a human an age just to scroll through it, let alone read it. Using computers to analyse all this data is an obvious solution, but far from simple. What makes perfect sense to a human can be highly difficult for a computer to understand.

Our team is using a form of artificial intelligence to bridge this gap. By teaching computers how to comprehend human doctors’ notes, we’re hoping they’ll uncover insights on how to fight COVID-19 by finding patterns across many thousands of patients’ records.

Why health records are hard going

A significant proportion of a health record is made up of free text, typed in narrative form like an email. This includes the patient’s symptoms, the history of their illness, and notes about pre-existing conditions and medications they’re taking. There may also be relevant information about family members and lifestyle mixed in too. And because this text has been entered by busy doctors, there will also be abbreviations, inaccuracies and typos.

A doctor using a computer
The information doctors write in free-text boxes is rich in detail but poorly arranged for a machine to understand. logoboom/Shutterstock

This kind of information is known as unstructured data. For example, a patient’s record might say:

Mrs Smith is a 65-year-old woman with atrial fibrillation and had a CVA in March. She had a past history of a #NOF and OA. Family history of breast cancer. She has been prescribed apixaban. No history of haemorrhage.

This highly compact paragraph contains a large amount of data about Mrs Smith. Another human reading the notes would know what information is important and be able to extract it in seconds, but a computer would find the task extremely difficult.
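
As a toy illustration of what that translation involves, the following Python sketch pulls a handful of concepts out of Mrs Smith’s note using regular expressions and a small lookup table. It is deliberately crude: real clinical NLP systems recognise thousands of standardised medical concepts and handle context, abbreviations, negation and family history far more robustly.

import re

# Toy example of turning free-text clinical notes into structured data.
# Real clinical NLP recognises thousands of concepts and handles negation,
# abbreviations and family history far more robustly than this sketch.
note = (
    "Mrs Smith is a 65-year-old woman with atrial fibrillation and had a CVA in March. "
    "She had a past history of a #NOF and OA. Family history of breast cancer. "
    "She has been prescribed apixaban. No history of haemorrhage."
)

# Map a few surface forms (including abbreviations) to standardised terms.
concepts = {
    "atrial fibrillation": "atrial fibrillation",
    "CVA": "stroke (cerebrovascular accident)",
    "#NOF": "fractured neck of femur",
    "OA": "osteoarthritis",
    "apixaban": "apixaban (anticoagulant)",
    "haemorrhage": "haemorrhage",
}

structured = {}
for surface_form, standard_term in concepts.items():
    if re.search(re.escape(surface_form), note):
        # Crude negation check: is the phrase preceded by "No history of"?
        negated = re.search(r"No history of [^.]*" + re.escape(surface_form), note)
        structured[standard_term] = "absent" if negated else "present"

print(structured)
# {'atrial fibrillation': 'present', 'stroke (cerebrovascular accident)': 'present',
#  'fractured neck of femur': 'present', 'osteoarthritis': 'present',
#  'apixaban (anticoagulant)': 'present', 'haemorrhage': 'absent'}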

Teaching machines to read

To solve this problem, we’re using something called natural language processing (NLP). Based on machine learning and AI technology, NLP algorithms translate the language used in free text into a standardised, structured set of medical terms that can be analysed by a computer.

These algorithms are extremely complex. They need to understand context, long strings of words and medical concepts, distinguish current events from historic ones, identify family relationships and more. We teach them to do this by feeding them existing written information so they can learn the structure and meaning of language – in this case, publicly available English text from the internet – and then use real medical records for further improvement and testing.

Using NLP algorithms to analyse and extract data from health records has huge potential to change healthcare. Much of what’s captured in narrative text in a patient’s notes is normally never seen again. This could be important information such as the early warning signs of serious diseases like cancer or stroke. Being able to automatically analyse and flag important issues could help deliver better care and avoid delays in diagnosis and treatment.

Finding ways to fight COVID-19

By drawing together health records using these tools, we’re now using these techniques to see patterns that are relevant to the pandemic. For example, we recently used our tools to discover whether drugs commonly prescribed to treat high blood pressure, diabetes and other conditions – known as angiotensin-converting enzyme inhibitors (ACEIs) and angiotensin receptor blockers (ARBs) – increase the chances of becoming severely ill with COVID-19.

The virus that causes COVID-19 infects cells by binding to a molecule on the cell surface called ACE2. Both ACEIs and ARBs are thought to increase the amount of ACE2 on the surface of cells, leading to concerns that these drugs could be putting people at increased risk from the virus.

SARS-CoV-2 binding with ACE2 on the surface of a cell
The coronavirus (red) binds with ACE2 proteins (blue) on the cell’s surface (green) to gain entry. Kateryna Kon/Shutterstock

However, the information needed to answer this question – how many severely ill COVID-19 patients are being prescribed these drugs – can be recorded both as structured prescriptions and in free text in their medical records. That free text needs to be in a computer-searchable format for a machine to answer the question.

Using our NLP tools, we were able to analyse the anonymised records of 1,200 COVID-19 patients, comparing clinical outcomes with whether or not patients were taking these drugs. Reassuringly, we found that people prescribed ACEIs or ARBs were no more likely to be severely ill than those not taking the drugs.

We’re now expanding how we use these tools to find out more about who is most at risk from COVID-19. For instance, we’ve used them to investigate the links between ethnicity, pre-existing health conditions and COVID-19. This has revealed several striking things: that being black or of mixed ethnicity makes you more likely to be admitted to hospital with the disease, and that Asian patients, when in hospital, are at greater risk of being admitted to intensive care or dying from COVID-19.

We’ve also used these tools to evaluate the early warning scores that predict which patients admitted to hospital are most likely to become severely ill, and to suggest what additional measures could be used to improve these scores. We’re also using the technology to predict upcoming surges of COVID-19 cases, based on patients’ symptoms that doctors have recorded. The Conversation

James Teo, Neurologist, Clinical Director of Data and AI and Clinical Senior Lecturer, King’s College London and Richard Dobson, Professor in Health Informatics, King’s College London

This article is republished from The Conversation under a Creative Commons license. Read the original article.