Who is Responsible for the Actions of a Self-Driving Car?

Algorithmic Accountability:

A Critical Analysis of the Ethics Behind Self-Driving Cars

Figure 1: Self-driving car


        The internet is a scary place. From a young age, the idea that the internet can be harmful is instilled in us by parents, guardians, and teachers. Despite these warnings, youngsters rush to establish their social presence through accounts on platforms such as Facebook, Instagram, or Snapchat. To use these services, we are required to share integral parts of our identity, such as our full name, date of birth, and location. We went from making six-second videos on Vine to dance trends on TikTok, which shows that as the world progresses, so does technology, and this evolution has led to an increase in the consumption of data by companies. In addition, marketers use internet cookies to track surfing habits and interests, which allows the browser to remember passwords, show relevant advertisements, or refill shopping carts upon revisiting a website. Due to such programs, our computers contain databases of emails, videos, audio, images, click streams, logs, posts, search queries, health records, and more (Sagiroglu and Sinanc 42). The availability of this immense amount of personal data evokes a frightening vision of technology learning the ways of humans and taking over our planet, sometimes called an artificial intelligence (AI) takeover. A part of this process has already started with Google’s self-driving car project, Waymo. The robotic car’s mission is to “improve the world's access to mobility while saving thousands of lives [which are currently] lost to traffic crashes” (Waymo Homepage). But would a car with special sensors be able to make moral decisions like a human? For instance, would a car be able to understand that a person who is jaywalking is committing an illegal act but should not be killed for it? The ethics of giving technology control leads to the question of whether robots should be allowed to make judgments about human life-and-death matters, and who is responsible when technology harms people. When humans are involved in accidents, the court does not question their instinctual behavior, but when a robot is doing the driving, the question of “why?” emerges, as technology is often expected to work without flaws. The cars themselves cannot answer questions about human ethical standards, so who should be held responsible? By focusing on the question of accountability in relation to algorithms, this paper will show that the creators of complex technology like self-driving cars are partially responsible when these vehicles cause harm, and that increasing algorithmic transparency and reducing algorithmic bias help provide a rational justification for a vehicle’s actions that considers their ethical implications.

        The automotive industry is currently undergoing a potentially revolutionary change that could not only affect how vehicles are built but also reshape the design of roads and cities, as well as the interaction between humans and machines (Silberg et al. 132). A self-driving car is a computer-controlled car that drives itself and does not require a person behind the wheel to operate it safely (PC Magazine). Discussions of this technology often use the words ‘autonomous’ and ‘automated’ as synonyms, but there is a difference between the terms. An ‘autonomous vehicle’ has the freedom to control itself and make decisions independently, while an ‘automated vehicle’ operates under machine control. Since the passengers decide on a destination or preferred routes, the car is not truly autonomous, as it does not have the freedom to make that decision. Thus, this paper will use the following terms to refer to the technology: “an ‘automated vehicle’ means a motor vehicle designed and constructed to move autonomously for certain periods of time without continuous driver supervision but in respect of which driver intervention is still expected or required, and a ‘fully automated vehicle’ means a motor vehicle that has been designed and constructed to move autonomously without any driver supervision” (Regulation (EU) 2019/2144).

        These definitions consider the classification system implemented by the National Highway Traffic Safety Administration (NHTSA), which places higher priority on the amount of intervention and attentiveness required than on the vehicle's capabilities. A self-driving car can be categorized using five levels, from 0 through 4:

Level 0 (no automation) means the driver is in complete control of the vehicle at all times. Level 1 (function-specific automation) means individual vehicle controls are automated, such as electronic stability control or automatic braking. Level 2 (combined-function automation) means at least two controls can be automated in unison, such as adaptive cruise control in combination with lane keeping. Level 3 (limited self-driving automation) means the driver can fully [surrender] control of all safety-critical functions in certain conditions. The car senses when conditions require the driver to retake control and provides a “sufficiently comfortable transition time” for the driver to do so. And lastly, Level 4 (full self-driving automation) means the vehicle performs all safety-critical functions for the entire trip, with the driver not expected to control the vehicle at any time. As this vehicle would control all functions from start to stop, including all parking functions, it could include unoccupied cars. (Zhao et al. 2–3)

Since levels 1 and 2 involve many possible combinations of human and machine control, it is difficult to differentiate human error from robotic error. Thus, this paper will focus on levels 3 and 4.
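To make this classification concrete, the levels can be expressed as a simple lookup that answers the question this paper cares about: is driver intervention still expected? The Python sketch below is purely illustrative; the enum names and helper function are shorthand of our own, not part of the NHTSA standard.

```python
# A minimal sketch (invented shorthand, not the NHTSA standard) of the
# automation levels as a lookup that asks: is a driver still in the loop?
from enum import IntEnum

class AutomationLevel(IntEnum):
    NO_AUTOMATION = 0          # driver controls everything at all times
    FUNCTION_SPECIFIC = 1      # one control automated, e.g. automatic braking
    COMBINED_FUNCTION = 2      # two or more controls automated in unison
    LIMITED_SELF_DRIVING = 3   # driver must retake control when the car asks
    FULL_SELF_DRIVING = 4      # vehicle handles the entire trip, start to stop

def driver_intervention_expected(level: AutomationLevel) -> bool:
    """Levels 0-3 still expect a (possibly standby) driver; only level 4
    removes the driver from the loop entirely."""
    return level < AutomationLevel.FULL_SELF_DRIVING

print(driver_intervention_expected(AutomationLevel.LIMITED_SELF_DRIVING))  # True
print(driver_intervention_expected(AutomationLevel.FULL_SELF_DRIVING))     # False
```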

        The concept of an ‘algorithm’ is open to interpretation, but this paper will focus on the following definition: an algorithm is a procedure of instructions that performs a variety of tasks, including information retrieval, image recognition, filtering, outlier detection, and recommendation (Kemper & Kolkman 2082). When one considers the sheer extent of everyday practices that are in some way modulated by algorithms - from the trading market to the realm of dating, from following the news to impression management - it might indeed not be so strange to speak of an ‘algorithmic life’ (Mazzotti). One of the techniques used to develop highly accurate predictive models is deep learning, an AI function that imitates the internal workings of the human brain in processing data and creating patterns for use in decision making (Marr). Deep learning algorithms are currently being used to combat global issues such as the spread of COVID-19, climate change, and mass migration. Companies and individuals have a right to protect their intellectual property (IP) and profit from it, which incentivizes development and pushes for breakthroughs in the technological world. At the same time, algorithms are man-made, which means they can be “biased based on who builds them, how they’re developed, and how they’re ultimately used” (Heilweil). For example, a recruitment algorithm will screen new applicants based on previous hiring data, and if past decisions made by humans involved gender, age, or racial discrimination, the algorithm will pick up on those trends and continue to enforce them in the future. This raises concerns such as who should be held responsible for the damage caused by algorithms, and how companies can protect their IP while fulfilling their obligation to ensure that its use does not cause harm.
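To see how past discrimination propagates, consider a minimal, invented example: a “model” that simply learns historical hire rates per group will score equally qualified applicants differently purely because of group membership. The data and logic below are hypothetical, a sketch of the mechanism rather than any real recruitment system.

```python
# A toy illustration (hypothetical data, not from any cited study) of how a
# model trained on discriminatory hiring decisions reproduces that bias.
from collections import defaultdict

# Historical decisions: (applicant_group, qualified, hired). Group "B"
# applicants were hired less often even when equally qualified.
history = [
    ("A", True, True), ("A", True, True), ("A", False, False),
    ("B", True, False), ("B", True, True), ("B", False, False),
    ("A", True, True), ("B", True, False),
]

# "Training" here just estimates the hire rate for each (group, qualified) pair.
counts = defaultdict(lambda: [0, 0])  # maps (group, qualified) -> [hires, total]
for group, qualified, hired in history:
    counts[(group, qualified)][0] += hired
    counts[(group, qualified)][1] += 1

def predicted_hire_rate(group: str, qualified: bool) -> float:
    hires, total = counts[(group, qualified)]
    return hires / total if total else 0.0

# Equally qualified applicants receive different scores purely by group:
print(predicted_hire_rate("A", True))  # 1.0
print(predicted_hire_rate("B", True))  # ~0.33 -- the past bias, now automated
```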

        To expand, algorithmic accountability explores who should be held responsible for the results of these algorithms once they are put out into the world. The first dimension of algorithmic accountability is that companies and individuals who profit from algorithms have a right to protect their intellectual property under the law. This means the creator has full control over their property and can decide to share it with others or exclude them. This straightforward principle pushes against the second dimension of algorithmic accountability, which states that the community must be able to ensure the use of intellectual property does not harm the public. Unfortunately, to evaluate and assess potential harm, one needs access to a company’s intellectual property, and this violates the creator’s right to exclusivity (the first dimension). Because of the right to exclude, the community is prevented from anticipating problems, as it is not given access to the intellectual property. In general, it is difficult to foresee adverse consequences, let alone prevent them, since the technological world is a new and constantly evolving area of study. When negative consequences do occur, however, a backward-looking approach is adopted, and this gives the community more power in determining who should be penalized and held liable. Backward-looking approaches to intellectual property rest on consequentialist considerations: the consequences of one’s conduct are the ultimate basis for any judgment about the rightness or wrongness of that conduct. Forward-looking approaches such as prevention are also implemented, although with new technology it is very difficult to plan for every harmful scenario, as some consequences will not be obvious and some defects will go unmanaged.

        Programmers, management, and those who come up with the ideas are all partly responsible when their creation causes harm, and companies like Waymo, Tesla, and Uber have an obligation to ensure that the use of their intellectual property (IP) does not result in damage. In this case, the blend of hard-coded algorithms covering legal and illegal scenarios on the road acts as the company’s IP. They must also ensure the high quality of that IP, as the “recent surge in the use of big data and the increasing intricacy of algorithms has dramatically changed the complexity of such quality” (Kemper & Kolkman 2083). There is a limit to the transparency of algorithmic models: as inventions become more complex, individuals and companies increasingly exercise their right to exclude others from their IP. The challenge of transparency is not new, and mishaps continue to keep interest in it alive among policymakers and the public. While not all consequences can be predicted accurately, since any group of people can anticipate only so many possible outcomes, creators have an obligation to critically examine their algorithms and explore scenarios. For instance, if a fully automated vehicle “crashes into a pedestrian because it did not have enough time to brake, and a reconstruction of the accident from the data gathered by the vehicle’s own sensors show that it could have swerved around the pedestrian, crossing the double yellow line and collided with the empty driverless vehicle in the next lane” (Goodall 28), who is responsible for the pedestrian’s death? While the algorithm did not intend to cause harm, its inability to make a different decision in that scenario cost someone their life. One measure that could have reduced the probability of this scenario is a critical audience for matters of algorithmic transparency. A critical audience would consist of intelligent and experienced individuals who analyze and critique arguments, helping to surface sides of an argument that may not have been considered before. This means that, despite good intentions, guidelines or principles for fostering responsibility, [new technology] will fail if [companies] do not consider the context in which algorithms are used, and by enlisting and maintaining critical and informed audiences, the chance of harmful situations decreases significantly (Kemper & Kolkman 2083-2084). However, this conflicts with the first dimension of algorithmic accountability, as it compromises an individual’s or company’s intellectual property. To avoid conflicts between groups, it is necessary to enter contractual agreements that make ownership explicit and specify who owns what in the resulting products. Since “intellectual property is non-rivalrous, that is, there is no physical constraint on the object’s use” (Stein, “Intellectual Property, core concepts” Slide 7), critical audiences could replicate, spread, or manipulate designs they have already seen, but contracting builds a bridge of trust between the audience and individuals or companies. While this approach does not fully allow the general public to share responsibility in the case of adverse consequences, the inclusion of a critical audience does help decrease resistance toward such radical technology, as many people currently show a clear rejection of automated driving (König & Neumayr 42).
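Goodall's dilemma can be made concrete with a deliberately simplified sketch. Suppose a planner treats traffic law as an inviolable constraint and only then minimizes harm: it will never even consider the swerve that spares the pedestrian. The maneuver list and harm scores below are invented for illustration, and no production planner is this simple, but the sketch shows how the ordering of constraints silently encodes an ethical choice that a critical audience might catch.

```python
# A hypothetical sketch of Goodall's scenario. All names and numbers are
# invented; the point is how constraint ordering encodes an ethical choice.

candidate_maneuvers = [
    # (name, breaks_traffic_law, expected_human_harm)
    ("brake in lane",             False, 1.0),  # hits the pedestrian
    ("swerve across double line", True,  0.0),  # hits an empty driverless car
]

def legality_first(maneuvers):
    """Pick the least harmful maneuver among the *legal* options only."""
    legal = [m for m in maneuvers if not m[1]]
    return min(legal, key=lambda m: m[2])

def harm_first(maneuvers):
    """Pick the least harmful maneuver, legal or not."""
    return min(maneuvers, key=lambda m: m[2])

print(legality_first(candidate_maneuvers)[0])  # "brake in lane": pedestrian hit
print(harm_first(candidate_maneuvers)[0])      # "swerve across double line"
```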

        Algorithmic bias is a major issue for this technology, as ethical decisions are hard-coded into a vehicle using the opinions of a small group of company employees who might have “differences in technical ability which can lead to differences in design principles” (Stein, “Remix Culture” Slide 5). By involving the public in a more direct manner and allowing them to make decisions, companies can reduce algorithmic bias in their inventions. Sharing algorithmic responsibility between the creators and the public allows for an understanding of the extensive ethical issues and lets the creators improve their technology. This does not mean sharing their IP with people all over the world, but rather involving people in the process of making ethical decisions and providing more diversity in the dataset. This can be accomplished by introducing the Moral Machine at a global level and allowing people to express their “preferences to the companies that will design moral algorithms, and to the policymakers that will regulate them” (Awad et al. 63). The Moral Machine is an online experimental platform that lets users respond to moral dilemmas faced by automated vehicles and is used to combat the challenge of algorithmic bias. So far it has gathered “40 million decisions in ten languages from millions of people in 233 countries and territories” (Awad et al. 59). The three fundamental preferences shared globally are: save humans over animals, groups over individuals, and children over elders. Some preferences, such as those based on gender or social status, vary considerably across countries and appear to reflect underlying societal-level preferences for egalitarianism (Awad et al. 63). By implementing such ethical preferences, the public and the designers reach a level of compromise and understanding that allows for a significant reduction in algorithmic bias. If a fully automated vehicle then gets into an accident despite these preventative measures being taken, both the public and the designers are responsible. This balance is important because the public has a say in what technology it wishes to interact with on the road, and the designers enable drastic improvements by including public opinion. The agreement should be bound by a contract between the companies and the government (on behalf of the public) to avoid disputes. If designers conduct the experiment but do not implement the public’s opinions, the designers should be held fully liable for damage. Legally, they should be held responsible for breaching the agreement and should thus have to pay a fine to the government. Ethically, they are at fault, as the problem could have been eliminated had the act not taken place, which can be shown via “but-for” causation: a consequence would not have occurred but for the initial act, and in this case ignoring public opinion despite a contractual agreement creates a chain of events that leads to mistrust between automated-vehicle companies and the public. Overall, it is a risky move, as the public can push for the removal of these vehicles and would most likely succeed, since the companies failed to uphold their end of the contract.
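As a rough illustration of how such preferences could be distilled from responses, the toy aggregation below computes the share of respondents who spared one side of a forced dilemma. The data is invented and the method is far simpler than the conjoint analysis Awad et al. actually used, but it shows the kind of signal designers would receive from the public.

```python
# A toy aggregation of Moral Machine-style responses (invented data; not the
# conjoint analysis used by Awad et al.). Each record stores the two sides of
# a forced dilemma and which side the respondent chose to spare.

responses = [
    ("humans", "animals", "humans"),
    ("humans", "animals", "humans"),
    ("children", "elders", "children"),
    ("children", "elders", "elders"),
    ("group", "individual", "group"),
    ("group", "individual", "group"),
]

def spare_rate(option_a: str, option_b: str) -> float:
    """Fraction of respondents who spared option_a in a-vs-b dilemmas."""
    relevant = [spared for a, b, spared in responses if (a, b) == (option_a, option_b)]
    return sum(spared == option_a for spared in relevant) / len(relevant)

print(spare_rate("humans", "animals"))    # 1.0
print(spare_rate("children", "elders"))   # 0.5
print(spare_rate("group", "individual"))  # 1.0
```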

        In conclusion, the discussion of algorithmic accountability is fraught because of the tension between protecting intellectual property and the obligation to ensure that property does not cause harm. Self-driving cars will soon enter our reality, but “never in the history of humanity have we allowed a machine to autonomously decide who should live and who should die, in a fraction of a second, without real-time supervision” (Awad et al. 63). By increasing algorithmic transparency while protecting intellectual property rights, we can allow for more global collaboration on complex projects such as the self-driving car and achieve success at a faster pace with smarter results. In addition, the Moral Machine may be implemented more widely in the future to understand people’s priorities at a deeper level. Awad and his colleagues acknowledge that:

Even with a sample size as large as ours, we could not do justice to all of the complexity of autonomous vehicle dilemmas. For example, we did not introduce uncertainty about the fates of the characters, and we did not introduce any uncertainty about the classification of these characters. In our scenarios, characters were recognized as adults, children, and so on with 100% certainty, and life-and-death outcomes were predicted with 100% certainty. These assumptions are technologically unrealistic, but they were necessary to keep the project tractable. (63)

While we have a long way to go with regard to deploying robotic cars onto highways, understanding the ethical issues behind such technology and making the public aware of its use will allow a smoother integration into society. By reducing algorithmic bias through the Moral Machine and through critical audiences, we can appreciate the work put into creating these complex machines and distribute the responsibility as a community. It is impossible to know what the world of automation will look like ten years from now, but it is important to remember that the solution does not need to be perfect; it should be thoughtful and defensible (Goodall 58).


Works Cited
  1. “Definition of Self-Driving Car.” PCMAG, www.pcmag.com/encyclopedia/term/self-driving-car.
  2. “Home.” Waymo, waymo.com/.
  3. “Regulation (EU) 2019/2144.” Official Journal of the European Union, vol. 62, 27 Nov. 2019, eur-lex.europa.eu/eli/reg/2019/2144/oj.
  4. “U.S. Department of Transportation Releases Policy on Automated Vehicle Development.” NHTSA: National Highway Traffic Safety Administration, 30 May 2013, www.nhtsa.gov/press-releases.
  5. Awad, Edmond, et al. “The Moral Machine Experiment.” Nature, vol. 563, no. 7729, 2018, pp. 59–64.
  6. Goodall, Noah J. “Can You Program Ethics into a Self-Driving Car?” IEEE Spectrum, vol. 53, no. 6, 2016, pp. 28–58.
  7. Heilweil, Rebecca. “Why Algorithms Can Be Racist and Sexist.” Vox, Vox, 18 Feb. 2020, www.vox.com/recode/2020/2/18/21121286/algorithms-bias-discrimination-facial-recognition-transparency.
  8. Kemper, Jakko, and Daan Kolkman. “Transparent to Whom? No Algorithmic Accountability without a Critical Audience.” Information, Communication & Society, vol. 22, no. 14, 2019, pp. 2081–2096.
  9. König, M, and L Neumayr. “Users’ Resistance towards Radical Innovations: The Case of the Self-Driving Car.” Transportation Research Part F: Psychology and Behaviour, vol. 44, 2017, pp. 42–52.
  10. Marr, Bernard. “What Is Deep Learning AI? A Simple Guide With 8 Practical Examples.” Forbes, Forbes Magazine, 12 Dec. 2018, www.forbes.com/sites/bernardmarr/2018/10/01/what-is-deep-learning-ai-a-simple-guide-with-8-practical-examples/.
  11. Mazzotti, Massimo. “Algorithmic Life.” Los Angeles Review of Books, 17 Jan. 2017, lareviewofbooks.org/article/algorithmic-life/.
  12. Sagiroglu, Seref, and Duygu Sinanc. “Big Data: A Review.” 2013 International Conference on Collaboration Technologies and Systems (CTS), 2013, pp. 42–47.
  13. Silberg, Gary, et al. “Self-Driving Cars: The Next Revolution.” White Paper, KPMG LLP & Center for Automotive Research, 2012, pp. 132–146.
  14. Stein, Joshua. Intellectual Property, core concepts. June 2020. PowerPoint Presentation.
  15. Stein, Joshua. Remix Culture. June 2020. PowerPoint Presentation.
  16. Zhao, Jianfeng, et al. “The Key Technology toward the Self-Driving Car.” International Journal of Intelligent Unmanned Systems, vol. 6, no. 1, 2018, pp. 2–20.