Nigel Crook

AI, Machine Learning and Moral Machine Blogs

Machines Becoming Moral – Part 2

RobotThespian – Engineered Arts Ltd

Ethical Alignment

In my book ‘Rise of the Moral Machine: Exploring Virtue Through a Robot’s Eyes’, I include a short fictional story about a couple (Mr and Mrs Morales) who are in the process of purchasing their first autonomous vehicle. Having chosen the model, the colour and the trim of the car, the last set of choices they are required to make concern the vehicle’s ‘ethical alignment’: i.e. the alignment of the vehicle’s autonomous decisions on how it should drive with the Morales’ social and ethical preferences. Without giving too much of the story away, the Moraleses are presented with a series of situations, each of which requires the autonomous vehicle to make a moral decision. These decisions are presented in terms of choices of who should be the casualties of an unavoidable collision, such as “should the vehicle run over the pensioner on the pedestrian crossing, or the child on the pavement?” (Figure 1). These situations are inspired by an ethical dilemma commonly described as ‘The Trolley Problem’.

Figure 1 The Autonomous Vehicle version of the Trolley Problem

The Trolley Problem

In the ‘Ethical Alignment’ short story Mr and Mrs Morales were unwittingly subjected to a version of a philosophical thought experiment called the Trolley Problem. The Trolley Problem is an ethical dilemma devised by philosopher Philippa Foot intended to highlight the differences between two classical approaches to moral thought: Deontological and Consequentialist (Utilitarian) ethics.

In its original form, the dilemma describes a trolley on rail tracks that is running out of control (Figure 2). On the track ahead of the trolley are five people who will all be killed if the trolley runs into them (B). Just before them, however, is a set of points to a side track that has only one person on it (C). Next to the points there happens to be a person standing near a lever (D) that, when pulled, would change the points so that the trolley moves onto the side track. The dilemma is framed around the decision of the person next to the lever: should they pull the lever and save the lives of the five people at the expense of the one on the side track, or should they not pull the lever and allow the runaway trolley to kill the five people?

The two options available to the person next to the lever represent two classical ethical positions: should they pull the lever and save more lives (utilitarianism), or should they not pull the lever, because in doing so they would be directly responsible for the death of the person on the siding, and intentionally killing an innocent person is morally wrong (deontological)?


This dilemma has become the focus of much discussion on the ethics surrounding autonomous vehicles, raising questions along the lines of those that were put to Mr and Mrs Morales in the short story. Whilst it is certainly true that autonomous vehicles will face morally significant decisions, it is far from clear that either utilitarian or deontological ethics offers any real practical solution to the dilemmas set before the Moraleses in configuring their new car (I explain why this is the case in Chapter 4 of ‘Rise of the Moral Machine’). What is more, it turns out that framing this in terms of the Trolley Problem is not helpful.

Figure 2 An illustration of the Trolley Problem

The Trolley Problem Doesn’t Help

When it comes to finding a practical solution to the ethical decision making of autonomous vehicles, the Trolley Problem doesn’t help at all. There are three reasons for this, all of which stem from the fact that the dilemma it presents is focussed on the consequences or outcomes of actions.

1. Computational complexity

The first reason concerns the computational complexity of calculating the outcomes of each of the actions that the autonomous vehicle may choose to take. The classic version of the dilemma centres on a runaway trolley that is on tracks. The fact that it is a runaway trolley means that no one on the trolley can act to mitigate the outcome by, for example, applying the brakes. This, together with the fact that the trolley is on tracks, ensures that there are only two outcomes: either the trolley continues on the track it is on, killing five people, or, as a result of a single action by the person standing next to the lever, the trolley is diverted onto the siding, killing one person.

An autonomous vehicle, on the other hand, is not a runaway trolley constrained in its movement by tracks (Figure 3). As a result it has many more actions to choose from. Let’s have a go at quantifying them. Most cars can steer about 30 degrees to the left and 30 degrees to the right; sampling that range at one-degree increments gives 60 steering choices. The car will also have a choice in terms of the amount of acceleration or braking it applies. For the sake of simplicity, let’s assume that the car could choose from 20 different rates of acceleration/braking, where a value of 1 represents maximum braking and 20 represents maximum acceleration. Combining the acceleration/braking choices with the steering angle choices results in 60 × 20 = 1,200 different possible actions that the car needs to consider, compared to the one action (and the possibility of not taking it) in the classic trolley problem.
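
The arithmetic above can be made concrete with a few lines of code. The granularity here (one-degree steering steps, 20 acceleration/braking levels) is the simplifying assumption from the text, not a property of any real vehicle controller:

```python
# Toy enumeration of the action space described above: 60 one-degree
# steering choices combined with 20 acceleration/braking levels.
steering_angles = range(-30, 30)   # degrees: left (-) to right (+), 60 choices
accel_levels = range(1, 21)        # 1 = maximum braking ... 20 = maximum acceleration

actions = [(angle, accel) for angle in steering_angles for accel in accel_levels]
print(len(actions))  # 1200 candidate actions, versus two in the Trolley Problem
```

Even this coarse discretisation produces a decision space three orders of magnitude larger than the trolley dilemma’s, and a finer discretisation grows it further.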

Figure 3 An illustration of the range of choices available to an autonomous vehicle.

Imagine now that the autonomous vehicle finds itself in one of the scenarios presented to the Moraleses in the short story. The vehicle will need to compute the consequences of each of the 1,200 possible actions it could take. Because the vehicle is not on tracks, it will have to apply complex equations of motion that take into account the weight of the vehicle, its current speed and direction of travel, the friction on the road, and many other variables, to work out the path that the vehicle would take for each of those possible actions. These are non-trivial computations that the vehicle would need to perform in order to identify the potential outcomes of its actions in the split second that it has to make a decision.
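
To give a flavour of the per-action prediction this involves, here is a deliberately crude kinematic sketch. A real planner would use far richer vehicle dynamics (mass, tyre friction, load transfer); the wheelbase, timestep and the simple bicycle-model update below are all illustrative assumptions:

```python
import math

def predict_path(x, y, heading, speed, steer_deg, accel,
                 steps=10, dt=0.1, wheelbase=2.7):
    """Roll a simple kinematic bicycle model forward to estimate where
    one (steering, acceleration) choice would take the vehicle."""
    path = []
    steer = math.radians(steer_deg)
    for _ in range(steps):
        x += speed * math.cos(heading) * dt
        y += speed * math.sin(heading) * dt
        heading += (speed / wheelbase) * math.tan(steer) * dt
        speed = max(0.0, speed + accel * dt)   # no reversing in this sketch
        path.append((x, y))
    return path

# One of the 1,200 candidate actions: steer 10 degrees right, brake hard.
path = predict_path(x=0.0, y=0.0, heading=0.0, speed=13.0,
                    steer_deg=10, accel=-8.0)
```

Even this stripped-down model must be evaluated 1,200 times per decision cycle, which is what makes the split-second computation demanding.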

2. Uncertain outcomes

Even if the vehicle were capable of making all those calculations in a split second, there are likely to be inaccuracies in the predictions it makes. This takes us to the second reason why the trolley problem doesn’t help: there is likely to be a degree of uncertainty in the accuracy of the predicted outcomes of each course of action taken by the autonomous vehicle. Each of the motion computations arising from the vehicle’s choice of steering and acceleration/braking actions will inevitably include a margin of error and will therefore introduce a degree of uncertainty about the predicted outcome. In the Trolley Problem, by contrast, there is no uncertainty in the outcomes: either five people will die, or one will.

The uncertainty of the possible outcomes for the autonomous vehicle, however, is not due solely to the error in the calculations of the motion of the vehicle. Uncertainty is also created by the unpredictability of the reactions of the other agents involved in the incident. It could be, for example, that a person who is about to be hit by the vehicle is able to jump out of the way as it swerves towards them. Furthermore, different choices of steering angle and acceleration/braking made by the autonomous vehicle might evoke different reactions from other road users. An oncoming car, for example, might swerve in the opposite direction to the autonomous vehicle to avoid a collision, but in doing so might then cause other casualties. Contrast that with the Trolley Problem, in which there are no other rail vehicles involved and the people on the tracks are not able to react at all to the oncoming trolley. This is often emphasised in drawings of the dilemma by showing the people on the tracks tied down with ropes (Figure 2).
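
One standard way to handle this computationally is to treat each predicted outcome as a probability rather than a certainty, for instance by Monte Carlo sampling. The sketch below is purely illustrative: the 30% “dodge” probability is an invented number standing in for the unpredictable reactions just described:

```python
import random

def sample_collision_prob(predicted_hit: bool, dodge_prob: float,
                          trials: int = 1000) -> float:
    """Estimate the probability that a predicted collision actually
    occurs, given the pedestrian may react (e.g. jump aside)."""
    if not predicted_hit:
        return 0.0
    hits = sum(1 for _ in range(trials) if random.random() > dodge_prob)
    return hits / trials

random.seed(0)  # fixed seed so the sketch is reproducible
p_collision = sample_collision_prob(predicted_hit=True, dodge_prob=0.3)
# In the Trolley Problem the equivalent probabilities are exactly 1.0:
# the people tied to the tracks cannot react at all.
```

The vehicle’s 1,200 candidate actions thus each map not to a definite outcome, as in the trolley dilemma, but to a distribution over outcomes.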

3. Estimating ethical consequences is difficult

Having calculated the likely outcomes of all 1,200 choices of actions, and having taken into account the uncertainty of the predicted outcomes of each one, the autonomous vehicle then has to estimate the relative ethical impact of each of those uncertain outcomes. This is the most difficult step of all in the vehicle’s decision making process and constitutes the third reason why the Trolley Problem is not a particularly helpful framework for ethical decision support in autonomous vehicles. How does the vehicle weigh up the relative harm potentially caused to different pedestrians, or to the occupants of the vehicle or any other road users involved? Furthermore, how can consequentialist moral judgements like these be computed in a way that aligns with the ethical preferences of society as a whole?
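
A purely consequentialist controller would have to collapse each uncertain outcome into a single expected-harm number, something like the sketch below. The harm weights are exactly the contested value judgements the questions above point at: the numbers here are placeholders to show the mechanics, not a proposed weighting:

```python
# Hypothetical expected-harm scoring for candidate actions.
# Each outcome is a (probability, harm weight) pair; the weights encode
# contested moral judgements and are placeholders, not an endorsement.

def expected_harm(outcomes):
    """Sum of probability-weighted harms for one candidate action."""
    return sum(prob * harm for prob, harm in outcomes)

action_a = [(0.7, 10.0),   # likely injures one pedestrian
            (0.3, 0.0)]    # pedestrian dodges, no harm
action_b = [(0.1, 50.0),   # small chance of a multi-casualty collision
            (0.9, 0.0)]

# The "best" action minimises expected harm -- but only relative to the
# chosen weights, which is precisely the unresolved ethical question.
best = min([action_a, action_b], key=expected_harm)
```

The mechanics are trivial; assigning the weights in a way that aligns with society’s ethical preferences is the part no formula settles.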

The Wrong Focus

So I would strongly suggest not focussing on the Trolley Problem as a practical framework for thinking about the ethics of autonomous vehicles. There are other approaches, which I explore in Chapter 4 of ‘Rise of the Moral Machine: Exploring Virtue Through a Robot’s Eyes’. We do need to have sorted this out, though, before our roads are filled with autonomous vehicles whose decisions and actions can clearly have harmful outcomes for other road users.