To Kill or Be Killed?


Check out the referenced source here.

The self-driving car industry is growing rapidly, and within the next hundred years you can expect the vast majority of human transportation to be fully autonomous. As you would expect, self-driving vehicles must be effective and extremely safe before we are willing to make that transition; but what happens when they're not? Or rather, when they cannot be relied on? Self-driving vehicles will still face situations where harming someone cannot be avoided, especially in society's transitional period, when both AI and human drivers can and will collide on the streets. How will, and how should, we make self-driving cars react then? We are caught in an ethical dilemma, and this paper presents a potential answer: an adapted consequentialist perspective on self-driving cars.

As of now, AI is considered a tool rather than an equal to humans, so in this case, like any other tool, we expect AI to be as effective as possible when put into practice. Yet, unlike ordinary hand tools that rely on human application, self-driving cars must act independently of human input. This opens up a whole new set of problems AI must solve. Whereas human drivers may be excused for unethical behavior when they find themselves in ethical dilemmas, autonomous vehicles are expected to behave in strict accordance with ethical requirements. Consequently, they have to meet society's high expectations and will be judged by its moral standards. Yet the question still stands: what moral standard should be used?

Consider the scenario where a self-driving car has two harmful options to choose from: crashing into five pedestrians in order to save the driver; or crashing into a wall to save the five pedestrians, but killing the driver in the process. 

Should AI always aim for the greatest utilitarian good (i.e., in the worst possible cases, decide on the plan of action that leads to the least bad outcome)? What if the least bad outcome is killing the driver? On the one hand, the utilitarian option has been chosen successfully, doing the least amount of harm in terms of total casualties. On the other hand, the car would not sell, as people would not willingly buy a car that would kill them when the situation demanded it. But is it ethical to prioritize the driver over a pedestrian just because the driver is in the vehicle, which would mean that, from a certain perspective, a person's value can be bought? Is this fair, or does it even have to be?
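
To make the dilemma concrete, here is a minimal sketch of what a naive utilitarian "fewest casualties" rule looks like in code. It is purely illustrative and not taken from the referenced paper; the option labels and casualty counts are assumptions drawn from the scenario above.

```python
# Illustrative only: a purely utilitarian chooser that picks whichever option
# causes the fewest total casualties, with no regard for who those casualties are.

def choose_utilitarian(options):
    """options: list of (label, casualties) pairs; returns the label with the fewest casualties."""
    return min(options, key=lambda option: option[1])[0]

# The scenario above: swerve into the wall (1 death, the driver) vs.
# continue into the pedestrians (5 deaths).
options = [("hit_wall_killing_driver", 1), ("hit_five_pedestrians", 5)]
print(choose_utilitarian(options))  # -> "hit_wall_killing_driver"
```

A rule this simple always sacrifices the driver whenever doing so lowers the body count, which is exactly the sales problem described above.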

In real-life applications, these scenarios become more complicated for AI to calculate. But surprisingly, maybe that makes finding our solution easier. 

This paper by Vanessa Schaffner enriches the classical utilitarian perspective on autonomous driving ethics with an alternative view that is morally intuitive. To be as realistic as possible, Schaffner's solution takes into account the uncertainty of a dilemma's expected outcomes by focusing on probabilities and risks instead of actual consequences. Consequently, it is oriented towards the potential effects of crash-optimization algorithms on all parties involved. Using the theoretical lenses of negative utilitarianism and prioritarianism, it gives special emphasis to worst-case scenarios and calls for a strategy of proactive prevention rather than passive mitigation of damage.

In simpler terms, Schaffner's solution intuitively combines aspects of utilitarianism and statistical probability in order to isolate worst-case scenarios and actively work to predict and avoid them, rather than figuring out what to do once we meet them: an adaptive solution to the pressing issue of self-driving cars and their ethical dilemmas.
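
To see how this differs from the naive rule above, here is another minimal sketch. The function names, probabilities, and the exponent used to over-weight severe harm are all illustrative assumptions, not Schaffner's actual algorithm; the point is only that the decision is made over probabilities and risks, with worst cases counting extra, rather than over known outcomes.

```python
# Illustrative only: score each maneuver by its expected harm, but weight
# severe outcomes disproportionately so that worst cases dominate the decision,
# echoing the negative-utilitarian and prioritarian ideas described above.

def risk_score(outcomes, severity_exponent=2.0):
    """outcomes: list of (probability, harm) pairs for one maneuver.

    Raising harm to an exponent > 1 makes rare-but-catastrophic outcomes count
    for more than their plain expected value, so the planner prefers maneuvers
    that avoid worst cases rather than merely averaging them away.
    """
    return sum(p * (harm ** severity_exponent) for p, harm in outcomes)

def choose_maneuver(maneuvers):
    """maneuvers: dict mapping a maneuver name to its (probability, harm) outcomes."""
    return min(maneuvers, key=lambda name: risk_score(maneuvers[name]))

# Hypothetical numbers: braking early carries a small chance of minor harm,
# while a late swerve carries a small chance of catastrophic harm.
maneuvers = {
    "brake_early": [(0.9, 0.0), (0.1, 1.0)],
    "swerve_late": [(0.95, 0.0), (0.05, 10.0)],
}
print(choose_maneuver(maneuvers))  # -> "brake_early"
```

The exponent here is just one simple way to encode "worst cases count extra"; the paper's treatment of negative utilitarianism and prioritarianism is a philosophical framework, not a single formula like this toy.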
