
Moral Decision-Making in Autonomous Vehicles

Last Updated on April 22, 2024 by Editorial Team

Author(s): Nimit Bhardwaj

Originally published on Towards AI.

Autonomous vehicles (AVs) have long sparked debate around the ethics of handing over decision-making on the road. Within this umbrella term, however, sit several distinct levels of autonomy.

The Society of Automotive Engineers (SAE) defines six levels of autonomy, used officially across the industry to differentiate AV capability [1]. Levels 0–2 already exist in commercial markets. Level 3 marks the first significant jump in capability: it describes vehicles that can drive themselves for short periods but require a human driver ready to intervene if the system requests it. Levels 4–5 go beyond environmental detection, encompassing cutting-edge technologies that obviate human override altogether [2]. A Level 4 AV can complete an entire journey without human intervention under specific conditions; a Level 5 AV can do so under any circumstances. Level 5 would be associated with vehicles that need no steering wheel or pedals at all, for example.
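To make the taxonomy concrete, here is a minimal sketch of the SAE levels as a Python enum. The names and capability notes are paraphrases of the descriptions above; the helper function is illustrative, not part of any official SAE standard or API.

```python
from enum import IntEnum

class SAELevel(IntEnum):
    """SAE J3016 driving-automation levels (paraphrased)."""
    NO_AUTOMATION = 0           # human performs all driving tasks
    DRIVER_ASSISTANCE = 1       # one assist feature, e.g. adaptive cruise control
    PARTIAL_AUTOMATION = 2      # combined steering/speed assist, driver supervises
    CONDITIONAL_AUTOMATION = 3  # self-drives briefly; human must take over on request
    HIGH_AUTOMATION = 4         # no human needed, but only under specific conditions
    FULL_AUTOMATION = 5         # no human needed under any circumstances

def requires_human_fallback(level: SAELevel) -> bool:
    """Levels 0-3 still depend on a driver who is ready to intervene."""
    return level <= SAELevel.CONDITIONAL_AUTOMATION
```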

The moral and ethical dilemmas surrounding these two higher levels of autonomy arise from the loss of almost all direct human decision-making power. The correct functioning of core technologies, the ability to value human life and principles, trade-offs, and accountability all then become issues under both ethical and legal frameworks.

This article will explore these ethical dilemmas, starting with the infamous Trolley Problem to provide context.

The Trolley Problem

The Trolley Problem is a thought experiment created within the branch of philosophy called virtue ethics; it asks how foreseeable consequences compare to intended consequences on a moral level. The best-known variation, devised by British philosopher Philippa Foot (1967) [3], is as follows:

A trolley is running out of control along a set of tracks, unable to brake. Five people are tied to the tracks, and the trolley is fast approaching them. You are standing beside the tracks next to a lever which, if pulled, would divert the trolley onto a different set of tracks. This alternative track has only one person tied to it: as things stand the trolley will kill five people, but if you act the toll could be reduced to just one. Do you pull the lever?

Ethical Frameworks in Decision-Making

The Trolley Problem can be viewed under many ethical frameworks.

  • Consequentialists would argue it’s better to reduce overall harm in the outcome by any means necessary.
  • Deontologists would argue that the act of pulling the lever and actively killing one person is more morally wrong than letting the trolley continue its due course.
  • Utilitarians would argue that the most ethical choice creates the greatest amount of good for the greatest number of people.
  • Rawlsians would argue that all lives are equal, and to achieve justice and act most fairly one must prevent the greater harm.
  • Rights-based ethics would argue that the right to life is absolute and should not be violated or sacrificed for any trade-off.

Whichever ideology we adopt, our duty to minimise harm to others conflicts directly with our duty to choose the morally correct action. It is this ability to weigh decisions and trade-offs like these that many question in autonomous vehicles [4]. For example, if an AV were about to crash, should the vehicle's passengers be prioritised over pedestrians and other vehicles?

It isn't just the ability to make tough decisions that must be considered in the ethics of autonomous vehicles, though. When we humans cannot agree among ourselves on which ethical framework best answers the Trolley Problem, how are we meant to program self-driving cars to weigh up trade-offs like these under one ideology?

What basic values and principles should we be programming into AI?

Should we want it to prioritise positive duties (the number of lives saved) or negative duties (minimising active harm done)?
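One way to see why the choice of ideology matters is to encode the alternatives directly. The toy sketch below scores the two trolley options under a utilitarian rule (count lives lost) and a crude deontological rule (penalise actively caused harm). Every name and weight here is an illustrative assumption, not a real AV planning system:

```python
# Toy model: each option is (lives_lost, harm_is_actively_caused).
OPTIONS = {
    "do_nothing": (5, False),   # trolley continues, five die
    "pull_lever": (1, True),    # diverted, one dies by our action
}

def utilitarian_cost(lives_lost: int, active: bool) -> float:
    # Only the outcome matters: fewer deaths is strictly better.
    return lives_lost

def deontological_cost(lives_lost: int, active: bool, penalty: float = 10.0) -> float:
    # Actively causing a death carries a large extra moral penalty
    # (the weight is arbitrary, which is exactly the problem).
    return lives_lost + (penalty if active else 0.0)

for rule in (utilitarian_cost, deontological_cost):
    best = min(OPTIONS, key=lambda o: rule(*OPTIONS[o]))
    print(f"{rule.__name__}: choose {best}")
# utilitarian_cost: choose pull_lever
# deontological_cost: choose do_nothing
```

The two rules disagree precisely because the deontological penalty for acting is an arbitrary number; no choice of weight settles which duty should win.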

Case Study

In 2018, Uber was testing a Level 3 AV in Arizona when it struck and killed a pedestrian, the first fatality ever caused by an AV [5]. Being Level 3, there was a backup driver present in the vehicle, but it wasn't enough. The environmental detection system struggled to correctly identify the obstacle (here, a pedestrian with a bike), so the car's alert systems did not recognise the possibility of harm fast enough. By the time the backup driver was finally alerted to take control, the vehicle was already 0.2 seconds from impact and traveling at 39 mph [6].

This example does not directly involve the trade-off between harm to AV passengers and harm to pedestrians outside the vehicle, as the backup driver was never at risk of harm herself. However, it does bring to light whether we can and should rely on AI sensory detection over our own, and whether manual override is a feasible backup in such high-pressure, time-critical scenarios.
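A back-of-the-envelope calculation shows why a 0.2-second handover window makes manual override infeasible. The speed and warning window come from the reporting above [6]; the reaction time and braking deceleration are common textbook assumptions, not figures from the crash investigation:

```python
MPH_TO_MS = 0.44704

speed = 39 * MPH_TO_MS        # ~17.4 m/s
warning_window = 0.2          # seconds between alert and impact [6]
reaction_time = 1.5           # typical human perception-reaction time (assumed)
deceleration = 7.0            # hard braking on dry asphalt, m/s^2 (assumed)

distance_available = speed * warning_window       # ~3.5 m left when alerted
distance_reacting = speed * reaction_time         # ~26 m before braking even starts
distance_braking = speed**2 / (2 * deceleration)  # ~21.7 m to stop once braking

print(f"distance to impact when alerted: {distance_available:.1f} m")
print(f"distance covered during reaction alone: {distance_reacting:.1f} m")
print(f"braking distance after reaction: {distance_braking:.1f} m")
```

Even before braking begins, a typical driver would travel roughly seven times farther than the distance that remained when the alert fired.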

It also highlights the issue of transferring autonomy to an AV, even temporarily: there is no moral agent culpable for the killing. In this case, Uber withdrew the more than 90 other Level 3 AVs it had been testing in Arizona and settled with the victim's family. The backup driver, on the other hand, was charged with negligent homicide [7]. Was blame correctly placed on her, or should it have been placed on the vehicle, and is the latter even possible?

Ethical AI

UNESCO outlines that AI ethical frameworks should prioritize avoiding harm and respecting human rights [8]. Safety and non-discrimination should underpin machine learning principles. Human oversight, control, and accountability should also be considered essential alongside responsible AI.

Additional concepts of fairness and 'the greater good' suggest that we want AI to use a utilitarian ideology for decision-making. On the other hand, 'respecting human rights' speaks to the moral rightness of actions themselves, i.e. deontology.

Transparency will also be paramount in understanding how AVs arrive at their decisions. To evaluate the harm caused or prevented in an AV accident, we will need to understand how and why the underlying AI technology reached a certain conclusion. Public trust in AVs will require clear accountability and assurance that the right frameworks are being adhered to.
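In practice, transparency of this kind implies that every safety-relevant decision should leave an inspectable trace. Below is a minimal sketch of what such an audit record might contain; all field names are hypothetical, chosen for illustration rather than taken from any real AV stack:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    """Illustrative audit entry for one AV planning decision."""
    timestamp: datetime
    detected_objects: list[str]    # what the perception stack saw
    candidate_actions: list[str]   # options the planner considered
    chosen_action: str
    rationale: str                 # human-readable "why", for later review
    model_version: str             # which software version made the call

record = DecisionRecord(
    timestamp=datetime.now(timezone.utc),
    detected_objects=["pedestrian_with_bicycle"],
    candidate_actions=["emergency_brake", "swerve_left", "continue"],
    chosen_action="emergency_brake",
    rationale="collision predicted in <1s; braking minimises expected harm",
    model_version="planner-2.3.1",
)
```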

Ethical Automated Decision-Making

The European Parliamentary Research Service recognises the ethical, legal, and economic concerns which must be addressed in developing and deploying automated decision-making AI [9]. This includes research into how to develop ethical principles in the underlying algorithms, and how to bring global policy and regulations up to speed with the exponential rate of AI innovation.

In terms of human rights, human agency is also being prioritised, with research bodies seeking to protect the 'right of end users not to be subject to a decision based solely on automated processing' [9]. On the technology side, cybersecurity standards will become more important in ensuring secure and reliable systems: ethical AI requires trustworthy software.
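A crude way to honour that right in software is to gate high-stakes automated decisions behind human review. The sketch below is a generic pattern for automated decision-making, not something feasible in a split-second driving scenario; the threshold and function names are assumptions for illustration:

```python
def request_human_confirmation(proposal: str) -> str:
    # Placeholder: in a real system this would escalate to a human operator.
    print(f"Operator review required for: {proposal}")
    return proposal

def decide(risk_score: float, automated_choice: str,
           risk_threshold: float = 0.7) -> str:
    """Defer to a human whenever the estimated risk is high.

    risk_score: system's estimate of harm probability in [0, 1] (assumed given).
    """
    if risk_score >= risk_threshold:
        # Above the threshold, the system must not act on automated
        # processing alone: a human must confirm the decision.
        return request_human_confirmation(automated_choice)
    return automated_choice

print(decide(risk_score=0.9, automated_choice="proceed_with_manoeuvre"))
```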

Conclusion

While the general public is not yet using Level 3+ AVs on roads in the UK, nor are any such vehicles available in domestic markets [10], major industry players like BMW, Tesla, and Mercedes aim to launch them by 2025, using technologies such as Traffic Jam Pilot to do so [11].

If AVs get the ethics of decision-making right, there are great benefits to be had: some estimates predict a 90% reduction in traffic-related accidents once they are on the roads [5]. Still, it is clear that we do not yet have quantifiable ethical and legal frameworks outlining how decisions should be made, and how trade-offs should be prioritised, when it comes to the technologies that underpin AVs.

AV players will therefore need to further define what 'minimise harm' means, and which ethical ideology should dictate decision-making. As we saw with Uber's accident in 2018 [7], accountability and agency will also have to be clarified. How all of these questions are handled, and in which direction we progress, will have long-term ethical implications for society.

References

[1] H. Joy, Navigating Ethical Dilemmas in Autonomous Vehicles: A Case Study (2023), Medium

[2] L. Day, The Current State of Play in Autonomous Cars (2021), Hackaday

[3] S. Bizarro, The Trolley Problem: Origins (2020), Medium

[4] D. Benyon, Uber AV road fatality poses questions (2018), Insurance Times

[5] K. Evans, N. de Moura, and S. Chauvier, Ethical Decision Making in Autonomous Vehicles: The AV Ethics Project (2020), Springer Link

[6] L. Smiley, β€˜I’m the Operator’: The Aftermath of a Self-Driving Tragedy (2022), Wired

[7] B. Houck, Navigating Autonomous Vehicles Levels: The Vasquez Case and the Debate Between Level 3 and Level 4 Autonomy (2023), Arizona State Law Journal

[8] M. Pisani, AI Ethical Framework (2024), Rootstrap

[9] T. Madiega, EU guidelines on ethics in artificial intelligence: Context and implementation (2019), European Parliament

[10] K. Stricker, T. Wendt, W. Stark, M. Gottfredson, R. Tsang, and M. Schallehn, Electric and Autonomous Vehicles: The Future Is Now (2020), Bain & Company

[11] J. Deichmann, E. Ebel, K. Heineke, R. Heuss, M. Kellner, and F. Steiner, The future of autonomous vehicles (AV) (2023), McKinsey & Company
