The concept of autonomous vehicles (AVs) has long sparked debate about the ethics of transferring decision-making autonomy on our roads. The Society of Automotive Engineers (SAE) defines six levels of driving automation, used officially across the industry to differentiate AV capability[1].
Levels 0-2 AVs already exist in commercial markets. Level 3 is the first significant jump in capability: it describes vehicles that can self-drive for short periods but require a human driver to be ready to intervene if the system requests it. Levels 4-5 go beyond environmental detection, encompassing cutting-edge technologies that remove the need for human override altogether[2]. A Level 4 AV can complete an entire journey without human intervention under specific conditions; a Level 5 AV can do so under any circumstances. Level 5 would be associated with vehicles that do not even need steering wheels or pedals, for example.
The moral and ethical dilemmas surrounding these two higher levels of autonomy arise from the loss of almost all direct decision-making power. The correct functioning of core technologies, the ability to value human life and principles, trade-offs, and accountability all then become issues under both ethical and legal frameworks.
We will explore these, starting with the infamous Trolley Problem.
The Trolley Problem is a thought experiment in moral philosophy that examines how foreseeable consequences compare to intended consequences on a moral level. The main variation, devised by British philosopher Philippa Foot (1967)[3], is as follows:
A trolley is running along a set of tracks, out of control and unable to brake. Five people are tied to these tracks, and the trolley is fast approaching them. You are standing off the tracks next to a lever which, if pulled, would divert the trolley onto a different set of tracks. This alternative track has only one person tied to it, so the trolley will currently kill five people, but this could be reduced to just one if you act. Do you pull the lever?
The Trolley Problem can be viewed under many ethical frameworks.
Consequentialists would argue it’s better to reduce overall harm in the outcome by any means necessary.
Deontologists would argue that the act of pulling the lever and actively killing one person is more morally wrong than letting the trolley continue its due course.
Utilitarians would argue that the most ethical choice creates the greatest amount of good for the greatest number of people.
Rawlsians would argue that all lives are equal, and to achieve justice and act most fairly one must prevent the greater harm.
Rights-based ethics would argue that the right to life is absolute and should not be violated or sacrificed for any trade-off.
Whichever ideology we adopt, our duty to minimize harm to others directly conflicts with our duty to choose the morally correct action. It is the ability to weigh decisions and trade-offs like these that many question in autonomous vehicles[4]. For example, if an AV were about to crash, should the vehicle's passengers be prioritized over pedestrians and other vehicles?
It is not just the ability to make tough decisions that must be considered in the ethics of autonomous vehicles, though. When we humans cannot agree on which ethical framework best answers the Trolley Problem, how are we to program self-driving cars to weigh up trade-offs like these under one ideology?
What basic values and principles should we be programming into AI?
Should we want it to prioritise positive duties (maximising the number of lives saved) or negative duties (minimising the active harm done)?
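The tension between these two framings can be made concrete in code. The sketch below is purely illustrative: the Outcome structure, the cost functions, and the weight placed on actively caused harm are all hypothetical assumptions chosen to show how the two duties diverge on the Trolley Problem, not a description of any real AV decision system.

```python
# Illustrative sketch only: comparing a positive-duty (utilitarian) and a
# negative-duty (deontological) cost function on the trolley choice.
# All names and weights here are hypothetical assumptions.
from dataclasses import dataclass


@dataclass(frozen=True)
class Outcome:
    label: str
    lives_lost: int       # foreseeable deaths if this action is taken
    actively_caused: int  # deaths the agent would directly cause by acting


def utilitarian_cost(o: Outcome) -> float:
    # Positive duty: only the total harm in the outcome matters.
    return o.lives_lost


def deontological_cost(o: Outcome) -> float:
    # Negative duty: harm the agent actively causes is weighted far more
    # heavily than harm it merely allows to happen (weight is illustrative).
    ACTIVE_HARM_PENALTY = 10.0
    return o.lives_lost + ACTIVE_HARM_PENALTY * o.actively_caused


stay = Outcome("stay the course", lives_lost=5, actively_caused=0)
divert = Outcome("pull the lever", lives_lost=1, actively_caused=1)

print(min([stay, divert], key=utilitarian_cost).label)    # pull the lever
print(min([stay, divert], key=deontological_cost).label)  # stay the course
```

The same two outcomes produce opposite recommendations purely because of how actively caused harm is weighted, which is exactly the programming question posed above.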
In 2018, Uber tested Level 3 AVs in Arizona, resulting in a tragic pedestrian fatality, the first ever caused by an AV[5]. Being Level 3, there was a backup driver present in the vehicle, but it wasn't enough. The environmental detection system struggled to correctly identify the obstacle, in this case a pedestrian with a bike, so the possibility of harm was not recognized by the car's alert systems fast enough. By the time the backup driver was finally alerted to take control, the vehicle was already 0.2 seconds from impact and traveling at 39 mph[6].
This example does not directly concern the trade-off between harm to AV passengers and harm to pedestrians external to the vehicle, as the backup driver was never at risk of harm herself. However, it does raise the questions of whether we can and should rely on AI sensory detection over our own, and whether manual override is a feasible backup in such high-pressure, time-critical scenarios.
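The timing figures reported above allow a simple arithmetic sanity check on the override question. The 1.5-second perception-reaction time below is a typical textbook assumption, not a figure from the incident report; the 39 mph speed and 0.2-second warning come from the account above.

```python
# Back-of-the-envelope arithmetic on the Uber alert timing.
# Reported figures: 39 mph, alert 0.2 s before impact.
# Assumed figure: ~1.5 s typical driver perception-reaction time.
MPH_TO_MS = 0.44704  # metres per second per mile per hour

speed_ms = 39 * MPH_TO_MS   # ~17.4 m/s
warning_s = 0.2             # time between alert and impact
reaction_s = 1.5            # assumed perception-reaction time

distance_at_alert = speed_ms * warning_s   # distance left when alerted
distance_to_react = speed_ms * reaction_s  # distance covered before a
                                           # typical driver even responds

print(f"Distance remaining at alert: {distance_at_alert:.1f} m")        # ~3.5 m
print(f"Distance travelled during reaction: {distance_to_react:.1f} m") # ~26.2 m
```

Under these assumptions the car would have travelled roughly 26 metres before a typical driver could even begin to respond, an order of magnitude more than the roughly 3.5 metres remaining when the alert fired, which underlines the doubt about manual override as a backup.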
It also highlights the problem of transferring autonomy, even temporarily, to an AV: the lack of a moral agent culpable for the killing. In this case, Uber withdrew the more than 90 other Level 3 AVs it had been testing in Arizona and settled with the victim's family. The backup driver, on the other hand, was charged with negligent homicide[7]. Was blame correctly placed on her, or should it have fallen on the vehicle, and is the latter even possible?
UNESCO outlines that AI ethical frameworks should prioritise avoiding harm and respecting human rights[8]. Safety and non-discrimination should underpin machine learning principles. Human oversight, control, and accountability should also be considered essential alongside responsible AI.
The additional concepts of fairness and 'for the greater good' suggest that we want AI to use a utilitarian ideology for decision-making. On the other hand, 'respecting human rights' plays into the moral rightness of actions themselves, i.e. deontology.
Transparency will of course also be paramount to understanding how AVs calculate decisions. To evaluate harm caused or prevented in the case of an AV accident, we will need to understand how and why the underlying AI technology reaches a certain conclusion. Public trust in AVs will require understanding accountability and making sure the right frameworks are being adhered to.
The European Parliamentary Research Service recognizes the ethical, legal, and economic concerns which must be addressed in developing and deploying automated decision-making AI[9]. This includes research into how to develop ethical principles in the underlying algorithms, and how to bring global policy and regulations up to speed with the exponential rate of AI innovation.
In terms of human rights, human agency is also being prioritized, with research bodies wanting to protect the ‘right of end users not to be subject to a decision based solely on automated processing’[9]. On the technology side, cybersecurity standards will become more important to ensure secure and reliable systems. Ethical AI requires trustworthy software.
While the general public is not yet using Level 3+ AVs on UK roads, and no such vehicles are available in domestic markets[10], major industry players like BMW, Tesla, and Mercedes aim to launch them by 2025 using technologies such as Traffic Jam Pilot[11].
If AVs get the ethics of decision-making right, there are great benefits to be had: some estimates predict a 90% reduction in traffic-related accidents once they are on the roads[5]. Still, it is clear that we do not yet have quantifiable ethical and legal frameworks outlining how decisions should be made and trade-offs prioritised by the technologies that underpin AVs.
AV players will therefore need to define more precisely what 'minimise harm' means and which ethical ideology should dictate decision-making. As we saw with Uber's 2018 accident[7], accountability and agency will also have to be clarified. How these questions are handled, and the direction in which we progress, will have long-term ethical implications for society.