Mandatory Ethics Settings for the Greater Good: Approaching Autonomous Vehicles Through Scientific and Humanistic Lenses

By Jacob Liu | Position Paper

News outlet NBC reported that in 2019, a self-driving Tesla Model S ran a red light and crashed into a Honda Civic, killing both of its passengers. The driver of the Tesla had been traveling at high speed using the Autopilot feature, which allows for semi-autonomous driving (Burke). While the term “semi-autonomous” suggests that the driver still should have been paying attention to the road, the vehicle’s software failed to respond to an obvious traffic signal. Nevertheless, the driver was charged with two counts of vehicular manslaughter. Meanwhile, Tesla, the company that created the programming that failed to detect the red light, was not held responsible (Burke). This tragedy, one of 736 Tesla Autopilot crashes that have occurred since 2019, raised concerns about the ethical programming of self-driving cars and who gets to decide how they are governed (Blanco).

Properly implementing ethical programming into autonomous vehicles could help establish guidelines for determining accountability, because such regulations specify how the car should act in any given situation. Accountability can then be decided by comparing the car’s decision to the “correct” decision under the ethical guidelines: if the car’s decision did not match and resulted in a collision, the manufacturer would be held accountable. Omitting ethical guidelines, on the other hand, creates ambiguities that make it difficult to blame any accident on anyone but the driver. Perhaps the most convincing argument for programming ethics into autonomous vehicles is the promise of improved safety. Advancements in autonomous vehicles have the potential to drastically reduce the risk of collisions. Even the limited autonomous technology already implemented in vehicles has made a difference: “...forward-collision warning and autonomous braking system prevented about 14% of crash fatalities in 2016 in the US” (Takaguchi et al.). Evidently, artificial intelligence shows promise for preventing collisions and saving many lives, but what happens when a collision cannot be prevented? The dilemma here can be illustrated by the widely referenced trolley problem. In this problem, as Bonnefon et al. explain in “The Trolley, The Bull Bar, and Why Engineers Should Care About The Ethics of Autonomous Cars,” a trolley is about to run over five people standing on its track, but you have the ability to divert the trolley to another track where just one person stands. You can either do nothing and let the five people die, or you can kill the one person who would have been completely safe had you not intervened. For many people this is not an easy decision to make. Autonomous vehicles, meanwhile, need to be prepared for countless variations of this problem and to choose the safer option each time.

In cases where a self-driving car must decide between two inevitable collisions, much debate surrounds how these decisions should be made and who gets to make them. One option is personal ethics settings, which allow the owner of the car to set preferences for how they want the car to act. Another option is mandatory ethics settings, which involve preprogramming all autonomous vehicles with a standard set of ethical guidelines. This paper will detail the ethical preferences of the public through multiple studies, explain why human nature is incompatible with personal ethics settings, and outline how the superior choice of mandatory ethics settings should be implemented. While personal ethics settings may provide people with a better sense of safety and trust in their autonomous vehicles, mandatory ethics settings are required to maintain the safest public experience and minimize lives lost.

Mandatory ethics settings are not corrupted by the paradoxical tendency for people to prioritize others while also prioritizing themselves. In “The Social Dilemma of Autonomous Vehicles,” six studies were conducted to understand the current public view on autonomous vehicles and how the public wants them to act. “In study one, (n = 182 participants), 76% of participants thought that it would be more moral for AVs to sacrifice one passenger rather than kill 10 pedestrians…” (Bonnefon et al.). A different study asked participants to indicate how likely they would be to buy an autonomous vehicle that would prioritize the passengers, as well as how likely they would be to buy one that would sacrifice the passengers to save a greater number of lives. “Although the reported likelihood of buying an AV was low even for the self-protective option (median 50), respondents indicated a significantly lower likelihood (P < 0.001) of buying the AV when they imagined the situation in which they and their family member would be sacrificed for the greater good (median = 19)” (Bonnefon et al.). These studies did not include enough participants to draw conclusions about society as a whole, but they do support the idea that even if the public overall believes minimizing casualties is the morally optimal goal for programming autonomous vehicles, an individual may still choose to prioritize their own safety or the safety of their family if given the option. Here, another benefit of mandatory ethics settings becomes clear. If personal ethics settings are implemented, there is a chance that many people will program their cars in a way that is the opposite of what would be least harmful to the greatest number of people. With mandatory ethics settings, this possibility is eliminated because drivers do not get to make this decision. The settings would instead be set to minimize casualties, which, based on a far-reaching MIT study, appears to be the most widely agreed-upon standard for moral and just programming of self-driving cars.
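
To make the casualty-minimizing rule concrete, it can be reduced, at its core, to a comparison of the predicted outcomes of each available maneuver. The following Python sketch is purely illustrative and assumes hypothetical casualty estimates supplied by the vehicle’s perception system; it is not any manufacturer’s actual implementation.

# Illustrative sketch of a mandatory casualty-minimizing rule (assumed, simplified inputs).
# Each candidate maneuver maps to an estimated number of casualties if it is taken.
def choose_maneuver(candidate_outcomes):
    """Return the maneuver whose predicted outcome harms the fewest people."""
    return min(candidate_outcomes, key=lambda maneuver: candidate_outcomes[maneuver])

# Hypothetical unavoidable-collision scenario: braking in the current lane
# endangers five pedestrians, while swerving endangers one.
outcomes = {"stay_and_brake": 5, "swerve_left": 1}
print(choose_maneuver(outcomes))  # -> swerve_left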

"Another study, MIT's Moral Machine Experiment, upholds just such an approach. This experiment, which analyzed 39.61 million decisions from 233 countries, dependencies, or territories, provided each participant with a series of decisions to make for autonomous vehicles faced with unavoidable collisions" (Awad et. al.).This experiment provided each participant with a series of decisions to make for autonomous vehicles faced with unavoidable collisions. The results indicated that “...the strongest preferences are observed for sparing humans over animals, sparing more lives, and sparing young lives” (Awad et al.). According to the graphic provided by MIT, approximately 65% of participants preferred sparing more lives over fewer lives overall. The MIT study provides much stronger evidence for the majority of humans being partial to minimizing casualties. The broad diversity of countries and territories from which decisions were based suggests that the data collected is unlikely to be biased towards any specific culture. These cultural differences are important to consider when implementing mandatory ethics in vehicles that will have a global reach.

How would mandatory ethics settings take into consideration cultural differences from one country to the next? Those administering the Moral Machine Experiment accounted for such differences by organizing their data into regional, country-level clusters. The data shows quite different results among Western, Eastern, and Southern countries. Awad and others write, “For example, the preference to spare younger characters rather than older characters is much less pronounced for countries in the Eastern cluster, and much higher in countries in the Southern cluster. The same is true for the preference for sparing higher status characters” (Awad et al.). This could be problematic if personal ethics settings were used, because people from different cultures may view each other’s choice of ethics settings as insensitive or inappropriate. By contrast, mandatory ethics settings would privilege the greatest social good, or the least harm, in every instance. This would encourage social cohesion and reduce the risk of conflict between cultures.

The guidelines for mandatory ethics settings should be set using trial and error, with the goal of minimizing casualties. Patrick Lin and others advocate just such an approach in their article “Autonomous Military Robotics: Risk, Ethics, and Design.” While there are obvious differences between military robots and self-driving cars, both technologies face similar dilemmas in their deployment. In analyzing how much risk a robot should be allowed to take on in any given situation, the authors offer three possible methods for setting a standard level of risk. Under the good faith subjective standard, “It is up to each individual as to whether an unacceptable risk exists” (Lin et al.). Their analysis did not find this to be a plausible solution, however, because of varying levels of human risk aversion. Some people are much less sensitive to risk and may allow robots to act in ways that many others would deem morally reprehensible. This solution is similar in principle to personal ethics settings in autonomous vehicles: regardless of what an individual believes is best for themselves and the rest of society, their willingness to take on risk will affect how they set their personal ethics settings. Suppose, for example, that a car programmed to prioritize the safety of others tends to drive more carefully. The driver might want to keep others safe, but they might also want their car to drive faster. If they are not particularly risk averse, they may choose a riskier setting because the chance of crashing seems low enough to dismiss.

Another possible method for setting risk standards is the reasonable person standard. Under this standard, “An unacceptable risk is simply what a fair, informed member of a relevant community believes to be an unacceptable risk” (Lin et al.). The authors found that this does not work either, because it still relies on the robot itself to accurately assess risk levels and act accordingly without set guidelines. This standard also resembles personal ethics settings, which by nature rely on the moral values of the average individual to govern autonomous vehicles. However, because this is only the first generation of self-driving cars, the lack of data and experience makes it difficult to determine what level of risk a given situation involves. The standard ethical guidelines created by mandatory ethics settings would make it easier for cars to estimate risk, because a car could gauge the severity of each calculated outcome against the guidelines and use that severity as a unit of measurement for the risk involved.
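
One way to picture how standardized guidelines could function as a unit of measurement for risk is to weight the probability of each possible outcome by a severity score drawn from the guidelines. The Python sketch below uses hypothetical severity scores and probabilities purely for illustration; actual scores would have to come from the agreed-upon guidelines themselves.

# Hypothetical severity scores assigned by a standardized guideline (higher = worse).
GUIDELINE_SEVERITY = {"no_harm": 0, "property_damage": 1, "injury": 10, "fatality": 100}

def expected_risk(outcome_probabilities):
    """Return the probability-weighted severity (expected harm) of a maneuver."""
    return sum(prob * GUIDELINE_SEVERITY[outcome]
               for outcome, prob in outcome_probabilities.items())

# A maneuver with a 2% chance of injury and a 0.1% chance of a fatality:
print(expected_risk({"no_harm": 0.979, "injury": 0.02, "fatality": 0.001}))  # -> about 0.3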

Lastly, the article proposes the objective standard, in which “An unacceptable risk requires evidence and/or expert testimony as to the reality of (and unacceptability of) the risk” (Lin et al.). Although this method also suffers from the aforementioned lack of data, the authors propose that the issue can be solved through a rigorous testing program in human-free environments. These tests would proceed in stages and would continue at least until researchers are fairly certain that the mistakes made by these robots would be no worse than those made by normal human soldiers (Lin et al.). This idea can be extrapolated to autonomous vehicles in support of mandatory ethics settings. Self-driving cars should be tested thoroughly in many stages before ever interacting with a populated environment. The mandatory ethics settings would be continuously revised based on testing results until there is a clear indication that the vehicles make a morally acceptable decision more frequently than the average driver. In this case, what is “morally acceptable” would be less subjective and based instead on whether the vehicle minimized lives lost or lives harmed.
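
The staged-testing criterion described above amounts to a simple acceptance test: keep revising the settings until the vehicle’s rate of morally acceptable decisions across simulated scenarios clearly exceeds a human-driver baseline. The sketch below uses made-up numbers and a hypothetical scenario log only to show the shape of that comparison.

# Hypothetical acceptance check for one testing stage (illustrative numbers only).
HUMAN_BASELINE = 0.90  # assumed share of scenarios in which average drivers act acceptably

def passes_stage(test_results):
    """test_results: list of booleans, True where the vehicle's decision minimized harm."""
    acceptable_rate = sum(test_results) / len(test_results)
    return acceptable_rate > HUMAN_BASELINE

# Example: 463 morally acceptable decisions out of 500 simulated collision scenarios.
print(passes_stage([True] * 463 + [False] * 37))  # 0.926 > 0.90 -> True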

Although the freedom of personal ethics settings appeals to many drivers on an individual basis, closer analysis shows that mandatory ethics settings are necessary to prevent problems that are likely to arise from human nature. A study by the University of Chicago used brain scans to determine that early decision-making in humans is dominated by self-interest. As participants played a three-party ultimatum game meant to simulate real-life decision-making, machine learning analysis could predict self-interested decisions within 200 milliseconds but needed 600 milliseconds to predict decisions that were fair to all parties (Yoder et al.). The study also found “…more cortical networks of the brain are dedicated to processing decisions that benefit the self and that these activations don't overlap with networks which process fairness for others” (Yoder et al.). Admittedly, the study had few participants, and their cultural backgrounds are not specified, so it is difficult to determine whether these findings apply universally, especially since each culture provides its own environment for moral development. Still, if early, instinctive decision-making favors the self, drivers given personal ethics settings can be expected to program their vehicles accordingly, and it is simply not feasible for drivers in self-programmed vehicles to share the road without conflict. Even though the set of ethical guidelines eventually adopted under a mandatory ethics settings approach may not be universally agreed upon for the same reason, that course of action should be taken because the guidelines will be determined through objective evidence. There is still a long road ahead for autonomous vehicles, but the sooner these mandatory ethics guidelines start being developed, the sooner societies around the globe can safely enter a new age of transportation.

Works Cited

Awad, Edmond, et al. “The Moral Machine Experiment.” Nature News, Nature Publishing Group, 24 Oct. 2018, https://www.nature.com/articles/s41586-018-0637-6. 

Bentahila, Lina, et al. “Universality and Cultural Diversity in Moral Reasoning and Judgement.” Frontiers in Psychology, Frontiers, 10 Nov. 2021, www.frontiersin.org/journals/psychology/articles/10.3389/fpsyg.2021.764360/full#ref95.

Blanco, Sebastian. “Report: Tesla Autopilot Involved in 736 Crashes since 2019.” Car and Driver, Hearst Autos, Inc., 13 June 2023, www.caranddriver.com/news/a44185487/report-tesla-autopilot-crashes-since-2019/.

Bonnefon, Jean-François, et al. “The Social Dilemma of Autonomous Vehicles.” Science, American Association for the Advancement of Science, 24 June 2016, https://www.science.org/doi/10.1126/science.aaf2654.

Bonnefon, Jean-François, et al. “The Trolley, The Bull Bar, and Why Engineers Should Care About The Ethics of Autonomous Cars [Point of View].” Proceedings of the IEEE, vol. 107, no. 3, 2019, pp. 502–504, https://doi.org/10.1109/JPROC.2019.2897447.

Contissa, Giuseppe, et al. “The Ethical Knob: Ethically-Customisable Automated Vehicles and the Law.” SpringerLink, Springer Netherlands, 11 Sept. 2017, https://link.springer.com/article/10.1007/s10506-017-9211-z. 

Gogoll, Jan, and Julian F. Müller. “Autonomous Cars: In Favor of a Mandatory Ethics Setting.” SpringerLink, Springer Netherlands, 14 July 2016, https://link.springer.com/article/10.1007/s11948-016-9806-x#citeas.

Kuhn, Steven. “Prisoner's Dilemma.” Stanford Encyclopedia of Philosophy, Stanford University, 2 Apr. 2019, https://plato.stanford.edu/entries/prisoner-dilemma/. 

Lin, Patrick, et al. “Autonomous Military Robotics: Risk, Ethics, and Design.” California Polytechnic State University, California Polytechnic State University, 20 Dec. 2008, http://ethics.calpoly.edu/ONR_report.pdf?from=article_link. 

Lin, Patrick. “Here's a Terrible Idea: Robot Cars with Adjustable Ethics Settings.” Wired, Conde Nast, 18 Aug. 2014, https://www.wired.com/2014/08/heres-a-terrible-idea-robot-cars-with-adjustable-ethics-settings/. 

Takaguchi, Kazuya, et al. “Personal Ethical Settings for Driverless Cars and the Utility Paradox: An Ethical Analysis of Public Attitudes in UK and Japan.” PLOS ONE, Public Library of Science, journals.plos.org/plosone/article?id=10.1371%2Fjournal.pone.0275812. Accessed 28 July 2024.

Wang, Jack. “How Do We Balance Self-Interest with Fairness for Others?” University of Chicago News, University of Chicago, 25 Aug. 2020, news.uchicago.edu/story/how-do-we-balance-self-interest-fairness-others.