The automated vehicle 'trolley problem' shows where self-driving technology can fail. But there could be upsides to coding human values into these machines.

Both humans and robot cars are fallible, but working together they could improve road safety. Image: Pexels / Taras Makarenko (free to use)

While fully self-driving cars are a hypothetical product of the future, some levels of autonomous vehicles (AVs) are already here.

As with other forms of AI, humans must weigh the costs and benefits of incorporating this new technology into their lives.

On the upside, AVs could support sustainable transport by reducing congestion and fossil fuel use, enhance road safety, and provide accessible transport to underserved communities, including people without a driver’s licence.

Despite these benefits, many people remain hesitant to use fully automated AVs.

In one Australian study led by Monash University’s Sjaan Koppel, 42 percent of participants said they would “never” use an automated vehicle to transport their unaccompanied children, while only 7 percent said they would “definitely” use one.

Our distrust in AI seems to stem from a fear that the machine will take over and make errors or decisions misaligned with human values, as depicted in the 1983 film adaptation of Stephen King’s horror novel about the murderous car, Christine. We increasingly fear being kept out of the loop of machines’ actions.

Trust and technology

Six levels of driving automation are commonly described, with level 0 being “no automation” and level 5 offering “full driving automation,” where humans are defined only as ‘passengers’.

Currently, levels 0 to 2 are available to consumers, while level 3 — “conditional automation” — has some limited commercial availability. The second-highest level, level 4 or “high automation”, is now being tested. AVs available to consumers today require drivers to monitor and override the automation as needed.
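For readers who think in code, here is a minimal sketch of that taxonomy in Python. It is illustrative only: the level names follow the widely used SAE-style shorthand, and the monitoring rule simply restates the point above about today’s consumer vehicles.

```python
from enum import IntEnum

class AVLevel(IntEnum):
    """Driving-automation levels as described above (SAE-style shorthand)."""
    NO_AUTOMATION = 0           # human does all the driving
    DRIVER_ASSISTANCE = 1       # e.g. adaptive cruise control or lane keeping
    PARTIAL_AUTOMATION = 2      # combined steering and speed support; driver monitors
    CONDITIONAL_AUTOMATION = 3  # system drives in limited conditions; driver takes over on request
    HIGH_AUTOMATION = 4         # no driver needed within a defined operating domain
    FULL_AUTOMATION = 5         # no driver needed anywhere; humans are passengers only

def driver_must_monitor(level: AVLevel) -> bool:
    """Vehicles sold to consumers today (levels 0-2) still require the human
    to monitor and override the automation."""
    return level <= AVLevel.PARTIAL_AUTOMATION

if __name__ == "__main__":
    for level in AVLevel:
        print(level.value, level.name, "driver monitors:", driver_must_monitor(level))
```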

To ensure AVs don’t become Christine and develop minds of their own, AI programmers use a process called value alignment. This alignment becomes particularly important as increasingly autonomous levels of vehicles are developed and tested.

Value alignment takes place by programming AI — either explicitly, in the case of knowledge-based systems, or implicitly via ‘learning’ for neural networks — to behave in a manner representing human goals.

For AVs, alignment would differ somewhat depending on the vehicle’s intended use and location but would likely consider cultural values alongside local laws and governances (e.g. pulling over for an ambulance).
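As a rough illustration of those two routes, consider the toy sketch below. Every function name, rule and weight in it is hypothetical, and it is nowhere near a real AV planning stack: it simply contrasts values written down as explicit rules with values expressed implicitly through a score a learning system would optimise.

```python
def explicit_policy(ambulance_behind: bool, speed_kmh: float, speed_limit: float) -> str:
    """Knowledge-based alignment: values and local laws are written down as rules."""
    if ambulance_behind:
        return "pull over"             # encodes the local law/value directly
    if speed_kmh > speed_limit:
        return "slow down"
    return "continue"

def reward(outcome: dict) -> float:
    """Implicit alignment: a learning system never sees the rules, only a score.
    Whatever this function rewards is what the behaviour drifts toward, which is
    exactly why choosing it carefully matters."""
    return (
        -100.0 * outcome["collisions"]         # safety dominates
        - 1.0 * outcome["minutes_late"]        # efficiency matters a little
        - 5.0 * outcome["road_rules_broken"]   # legality sits in between
    )

print(explicit_policy(ambulance_behind=True, speed_kmh=50, speed_limit=60))
print(reward({"collisions": 0, "minutes_late": 3, "road_rules_broken": 1}))
```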

The trolley problem

AV alignment is not a simple task, and it gets particularly tricky when vehicles encounter a real-world challenge like the “trolley problem”.

First credited to philosopher Philippa Foot in 1967, the trolley problem has us consider human morals and ethics. Adapted for AVs, the trolley problem can help us consider to what extent AV alignment is possible.

Consider the following scenario: A fully automated AV is heading for a crash and must act. It can swerve right to avoid five people but hit one person, or swerve left to avoid the one person but place the five in danger. What action should the AV take? Which option is most aligned with human values?

Now consider this scenario: what if the vehicle were a level 1 or 2 AV, with the driver retaining control? Which direction would you steer when the AV’s warning sounded?

What if the choice was between five adults and one child?

What if the one person was your mum or dad?

You might be relieved to know that the trolley problem was never meant to have a “correct” answer.

What this problem illustrates is that “aligning” AVs with human values is not straightforward.
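To see why, here is a deliberately naive sketch of a hard-coded “count the people” rule. The scenario fields are hypothetical and nothing here resembles real AV planning code; the point is what such a rule cannot see.

```python
def choose_swerve(left_people: int, right_people: int) -> str:
    """A naive 'utilitarian' rule: swerve toward the side with fewer people."""
    return "left" if left_people < right_people else "right"

# Five people on the left path, one on the right: the rule swerves right.
print(choose_swerve(left_people=5, right_people=1))   # -> "right"

# But the rule is blind to everything the what-if questions above raise:
# it cannot see whether the one person is a child, or your mum or dad,
# and it silently assumes that counting heads is the "correct" value to
# align with -- the very thing the trolley problem says is unsettled.
```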

Consider Google’s mishap with Gemini, in which an attempt at alignment (in this case, programming the large language model to reduce racist and gender stereotypes) resulted in misinformation and absurdity, such as Nazi-era soldiers depicted as people of colour. Alignment isn’t simple to achieve, and even deciding whose values and goals to align with remains challenging.

But there are upsides to the opportunity to ensure AVs align with human values.

Aligned AVs could make driving safer since, in reality, humans tend to overestimate their own driving ability. The majority of crashes are related to human error such as speeding, distraction or fatigue.

Could AVs instead help us align our own driving to be safer and more reliable? After all, technologies such as lane-keeping assist and adaptive cruise control are already supporting us to be safer drivers in level 1 AVs.

Human alignment … for humans or AI?

As these vehicles’ presence on our roads grows, what’s clear is that enhancing humans’ responsible driving of AVs is increasingly important.

Our ability to make effective decisions and drive safely in collaboration with AV technology is paramount.

Concerningly, research shows humans have a tendency to over-rely on automated systems, such as AVs, and this automation bias is a hard habit to break. We tend to perceive technology as infallible.

“Death by GPS” is now a widely used expression because of our inclination to blindly follow navigation systems — even when there is incontrovertible evidence that the technology is wrong. (You may recall the case of the tourists who drove into a bay in Queensland after trying to “drive” to North Stradbroke Island.)

What the AV trolley problem reveals is that the technology can be just as fallible as humans (maybe more so, due to its disembodied awareness of the world), but possibly for different reasons.

The dystopian scenario where AI “takes over” may not be as dramatic as we are led to believe. A greater threat to AV safety could be a quiet but very real readiness of humans to simply hand over control to the AI.

Our uncritical engagement with AI is impacting the way we think, and dulling our senses, including our sense of direction. What all this means is our driving skills are likely to suffer as we become increasingly complacent in the face of technology.

While the future may include Level 5 AVs, the present still relies on human decision-making, and our very human capability of scepticism.

Drivers’ exposure to AV failures can counter automation bias, and when combined with demands for greater transparency in AI systems’ decision-making, AVs may have the power to augment, even enhance, human-led road safety.

Michelle D. Lazarus, SFHEA, PhD, is the Director of the Centre for Human Anatomy Education and Deputy Director of the Centre for Scholarship in Health Education at Monash University in Australia. She is an award-winning educator, having received the Australian Universities ‘Teaching Excellence’ award amongst others, and is the author of The Uncertainty Effect: How to Survive and Thrive through the Unexpected.
