
The trolley problem: How do we teach ethics to self-driving cars?

Credit: Original: McGeddon Vector: Zapyon, CC BY–SA 4.0

In 1967, the English philosopher Philippa Foot posed what might seem like a simple problem of ethics: a runaway trolley is about to run over five people who are tied to the track. However, the trolley is approaching a fork and, if someone were to pull the lever that controls the switch, the trolley would change direction, running over a single person instead.

What to do? Pull the lever and thus reduce the death toll, or let the train run its course?
The answer may seem obvious, but it is not if we are the ones who have to pull the lever, and even less so as the variables in play change.
In this article, let’s try to understand how the solution to this “simple” ethical problem has taken on enormous importance in modern technology, where we have come to ask ourselves: in the age of artificial intelligence, what ethics should we teach self-driving cars?

The Birth of the Trolley Problem and the Doctrine of Double Effect

Is it better to let someone die or to kill and save lives?

It is from a reflection on this question that the trolley problem was born in 1967, when the English philosopher Philippa Foot, in her article “The Problem of Abortion and the Doctrine of the Double Effect”, criticized the doctrine of double effect formulated by Thomas Aquinas. For the Italian theologian, simplifying, the solution lay in the consequences, that is, the double effect: if an act normally considered immoral, such as killing, brings with it a good greater than the harm it inflicts, then that act can become licit.

Credit: jan Alonola (kulupu pi lipu tenpo), CC BY–SA 4.0

Foot found herself reflecting on the question of abortion, a topic that is still hotly debated today in all parts of the world. To do so, however, she reduced the question to a sort of puzzle that everyone could grasp:

The conductor of a train has fainted, and a carriage – trolley – is running without a guide and is about to run over five people who are unfortunately tied to the tracks. However, there is a chance for the five unfortunates: the trolley is heading towards a fork. It is therefore possible for a passerby who happens to witness the dramatic scene to divert the mad rush of the train by pulling the lever and thus save the five. However, on the secondary track there is a person trapped, who would be killed by such an action.

What to do, then: let five die, or kill one and reduce the number of deaths?
In this case, one might be led to think that pulling the lever does not imply the intention to kill, but only that of saving. What difference would there have been if the problem had been posed differently and – instead of pulling a lever – we had had to kill someone physically?

The variants of the problem and the moral variables

Many of those interviewed about the trolley problem have responded over the years that yes, they would pull the lever to save more lives. A few years later, however, the American philosopher Judith Jarvis Thomson proposed a new version of the problem.
The situation is the same: an out-of-control carriage is about to run over five people tied to the tracks. This time there is no fork to divert the train, but above the tracks there is an overpass on which a man is standing. The only way to save the five people is to push this man: falling from the overpass and being hit by the carriage, he would stop it, thus saving the five.

Diagram of the variant called “The fat man problem”

The problem is formally the same: I save five lives by killing one. But the fact that in this case the killing is direct changes everything. The answer of most interviewees changes radically: “I’m not sure I could kill a person”. The answer becomes even more complex if the person to be pushed is someone dear to us. And yet, even in the original case we would have killed a person by pulling the lever.

Clearly, an additional human variable appears: the physicality of direct action.
But not only that. What if the man had been wicked or detestable for whatever reason? Would we have been more justified in killing him to save five lives?
Clearly, not everyone would make the same choices: differences between individuals and their cultures influence our decisions. In fact, interviewees gave different answers depending on people’s physical appearance, their ethnicity, or their belonging to minority groups, revealing the discrimination that can be present in our society.
Or worse yet, if there had been no man to push, but we had to throw ourselves onto the tracks to save the five people, would we have been able to do it to achieve “the lesser evil”?

There are many variants of the trolley problem, more or less “cruel” or imaginative, which have drawn heavy criticism of the problem over the years for its abstract form and because it reduces philosophy to a puzzle.

Credit: Jonas Kubilius, CC0

In recent years, however, this problem has found an extremely practical application: when we drive a car, we manage dangerous situations according to our instincts and feelings, which is why we would not all act in the same way. And so: what should a self-driving car do if it comes across a trolley problem?

The Ethical Dilemma of Self-Driving Smart Cars

Let’s think about this trolley problem: a girl suddenly crosses the road chasing a ball. We, who are driving the car, must quickly decide what to do: run her over, or swerve and hit an elderly person on the sidewalk.

What would we choose? In terms of responsibility, it was the girl who failed to respect the highway code; yet one might think that swerving is a duty, considering that she is a minor while the person on the sidewalk is elderly. What if there were two, three or five elderly people? How many people must there be for the girl’s life to become less important than theirs?
What if the person crossing was an elderly person and there was a minor on the sidewalk? A simple variation that completely changes the cards in the game.


From ethical problems like this one, a fundamental dilemma arises for the introduction of fully autonomous cars. We are talking about the maximum level of automation, the one in which the vehicle drives itself without any human intervention.

In this case, what should we “teach” cars?

AI makes decisions based on what it knows

Nowadays, fully automated cars – i.e. levels 4 and 5 – are not yet in circulation. The problems they have to face are in fact manifold, from a point of view that is not only technological, but also ethical and perceptual.

Artificial intelligence, in fact, makes decisions based on the data it has and, to put it simply, on how we ask it to interpret that data. It goes without saying that in order to make a decision in a context like the one described above, where one must decide between the life of a little girl who makes a mistake and that of an elderly person who has committed no infraction, it is the programmers’ task to establish which ethics the car should follow.
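To make the idea concrete, the “ethics established by programmers” can be caricatured as a policy function that scores each possible maneuver and picks the one judged least harmful. This is purely an illustrative sketch: the function name, the maneuvers, and the harm scores are all hypothetical, and no real autonomous-driving system reduces ethics to a lookup like this.

```python
# Illustrative toy only: an "ethics policy" hard-coded by programmers.
# All names and harm scores are hypothetical examples.

def choose_maneuver(options):
    """Pick the maneuver with the lowest total predicted harm.

    `options` maps a maneuver name to a list of predicted harm
    scores, one per person affected (higher = worse outcome).
    """
    return min(options, key=lambda m: sum(options[m]))

# Two candidate maneuvers: continue straight (one person, severe harm)
# or swerve (two people, moderate harm).
options = {
    "straight": [0.9],       # one person severely harmed
    "swerve":   [0.3, 0.3],  # two people moderately harmed
}
print(choose_maneuver(options))  # -> swerve (total 0.6 < 0.9)
```

Notice that the “ethics” lives entirely in two programmer choices: what counts as harm, and how scores are aggregated. Summing scores is a utilitarian rule; a different aggregation (say, minimizing the single worst outcome) would encode a different moral stance in the very same code.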


Offhand, one might respond with a rational basic rule: “whoever makes the mistake pays”, or “seek the lesser harm” (such as one person killed instead of five). However, consider the case in which the lesser evil is for our own car to skid and crash into a barrier: we can say with some certainty that the human choice would be to preserve ourselves. It is therefore worth asking whether we would accept being inside a car that could decree that the lesser evil is to harm us in favor of someone else’s safety.
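The self-preservation question above can also be sketched in code: whether the car sacrifices its occupant can hinge on a single weighting parameter that programmers would have to choose. Again, the names, numbers, and the `self_weight` parameter are hypothetical illustrations, not features of any real system.

```python
# Hypothetical sketch: does the occupant's harm count the same as
# everyone else's? One weight flips the car's "choice".

def choose(options, self_weight=1.0):
    """options: maneuver -> (harm to others, harm to occupant)."""
    def cost(maneuver):
        others, occupant = options[maneuver]
        return others + self_weight * occupant
    return min(options, key=cost)

options = {
    "hit_pedestrian": (0.8, 0.0),  # harm to others, none to occupant
    "crash_barrier":  (0.0, 0.6),  # occupant absorbs all the harm
}
print(choose(options, self_weight=1.0))  # impartial -> crash_barrier
print(choose(options, self_weight=2.0))  # self-protective -> hit_pedestrian
```

With an impartial weight of 1.0 the car sacrifices its occupant as the lesser evil; doubling the occupant’s weight makes it protect them instead. The dilemma in the text is exactly the question of who gets to set that number.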

All the considerations made above are clearly still part of a lively and fervent debate within the scientific community. Dangerous situations, in fact, are almost always due to human error, which this type of vehicle would eliminate. However, it is equally certain that in order to avoid human error – such as a risky overtaking maneuver – everyone on the road would need a self-driving car.

It must always be remembered that these ethical problems must then be combined with technological ones. To date, in fact, fully self-driving cars are not able to read the situation in front of them instantly enough to determine the most ethical thing to do.