Trolley Dilemmas Shouldn’t Influence Self-Driving Policies, Experts Argue
February 13, 2019      

One big issue around self-driving vehicles is whether the machine can make the same types of ethical or moral decisions that humans can when faced with a scenario that could cause harm. To study the issue, researchers have used “trolley dilemmas,” hypothetical scenarios in which people must choose whom a trolley, speeding down a track and unable to stop, will kill. Psychologists use these scenarios to draw insights into human thinking: for example, whether someone would kill an old person or a young person, or someone who is wealthy versus someone who is homeless.

An October 2018 study titled “The Moral Machine experiment” highlighted this approach, with 2 million people from around the world participating in an online game that offered such choices. The experiment generated extensive media coverage and sparked continuing discussion over whether AVs can, or should, make such decisions.


Sam Anthony, Perceptive Automata

This approach bothered at least three authors, who last month published a counter-argument, titled “Doubting driverless dilemmas,” arguing that such scenarios for AV ethical decision-making “are too contrived to be of practical use, represent an incorrect model of proper safe decision making, and should not be used to inform policy.”


Julian De Freitas, Harvard University.

The authors were Julian De Freitas and George Alvarez from the Department of Psychology at Harvard University, and Sam Anthony, CTO of Perceptive Automata, which is developing perception software for autonomous vehicles.

Robotics Business Review recently spoke with two of the authors of the piece, De Freitas and Anthony, about the problems of applying trolley dilemmas to autonomous vehicles.

Psychological experiment or policy tool?

Q: Talk about the motivations behind writing the commentary.

De Freitas: As someone who has published research involving trolley dilemmas, I know the whole point is to simplify real-world complexities so that you’re only looking at one or two factors in order to evaluate whether they’re influencing people’s moral intuitions. So the trolley dilemma makes a lot of sense as a psychology tool, but what struck me as strange was that they wanted to reapply that back onto the real world in order to try to inform policy and make a statement about the state of AVs right now.

So I always had this feeling, and one day Sam came to our labs, presented, and expressed a similar sort of frustration. At that point, I thought, ‘I’m not crazy for thinking this.’ I really did doubt myself at first, because their work had been covered in every popular media outlet you could think of.

Anthony: From my perspective, we’re working on perception for autonomous vehicles, and there are big, real questions about what it means if an AV gets into an incident because it sees the world differently from how a human sees the world. There’s real meat there.

There are two problems with the trolley dilemma – first, it’s a distraction from the work that is being done on making AVs safer, and second, it has this built-in assumption that AVs can see the world perfectly. People shouldn’t think that’s true, because there’s a lot of work to do to characterize and understand how perception works for these vehicles, and how it differs from human perception.


Source: Awad et al.

Q: Do you see any value in what the Moral Machine experiment paper accomplished?

Anthony: From my perspective, these kinds of counter-factual experiments get used in psychology because they can illustrate interesting things about people and their decision-making. The Moral Machine got a massive number of subjects, and there are cultural differences, so on some level that could be an interesting way to think about understanding people’s moral decision-making, and that’s work that Julian has also done on the subject. But considering it a policy guide to how autonomous vehicles could or should behave has various problems.

De Freitas: A trolley dilemma can uncover some of the factors that influence people’s moral judgment by forcing them to choose between one or two predetermined options. For example, you can contrast whether you’d prefer to kill a young person versus an old person. And yes, maybe in this hypothetical situation, where you only have one or two choices and can’t do anything else, choosing one of them might say something, and then you could interpret what that might mean.

But one interesting thing about our paper is that we actually just asked people directly, “Do you endorse the idea of programming vehicles like these to make decisions on whom to kill based on these sorts of factors?”, and fewer than 20% of people think that’s a good idea.

So one of the things they say in the Moral Machine experiment is that they’re collecting the public’s opinions to inform policy, but if you just ask the public in a much more direct way – where you’re not forcing them to make a choice – they don’t actually endorse it. That’s an example of something that’s psychologically interesting, but not policy relevant.

That’s not even taking into account that these cartoon dilemmas are very different from the real world, that the people they’re asking to make judgments online are not experts, and that they don’t have any skin in the game. So that’s an example of how it’s useful in one way but not another.

One of the points we make in our paper is that these dilemmas are, logically speaking, rare, because they assume a situation where you have the ability and time to make a considered decision about whom to kill, but you can’t use that time and ability to take some simple action, like swerving or slowing down, to avoid hitting either person. So there’s really no evidence that these sorts of dilemmas occur in the way the trolley dilemma is set up.


The trolley dilemma played for laughs on NBC’s “The Good Place”

Q: Since this research has come out, have either of you seen examples or evidence where policy makers are using this as their baseline?

Anthony: I think policy makers are trailing a bit compared to the AV industry, but on some level they are getting revved up. I think people are still talking about the NHTSA [National Highway Traffic Safety Administration] guidelines, and that’s kind of where things have stayed on the larger safety question, but it wouldn’t be terribly surprising to me if this started to come up in conversations, in part because it’s been covered so widely.

Assuring the public on self-driving safety

Q: How should the AV industry go about reassuring the public that AVs are safe, in light of recent high-profile accidents and fatalities?

Anthony: I think the big thing is there needs to be explainability. It’s not “Here are the ethical fundamentals,” but rather “Here’s what the vehicle saw, here’s why the vehicle made this decision in this circumstance, here’s how that information is better than what a human could have available, here’s how that information is different than what a human would have available, here’s how we set the performance parameters such that there’s always room under normal conditions to do an emergency stop, and how we’re going to address questions if there is an incident about understanding what happened in that instance.”

Transparency and explainability – the effort to characterize how these vehicles’ perception and understanding of the world differ from a human’s – are going to be important to building trust. It’s always going to be the case that an autonomous vehicle is going to know different things about the world – in some cases, with lidar, it can get information that human vision can’t.

In other cases, understanding why somebody is holding their cell phone while they’re crossing the street is going to be more challenging. Building a system that says, ‘These are places where we can look at the world like a human does, these are places where we can look at the world and get more than a human does, and this is how that influences decisions,’ and having a lot of clarity and transparency around that – I think that’s going to be super important in building trust.

De Freitas: Speaking to Sam and many of these companies, it really is the case that they’re focusing on the correct safety goal, which is to avoid harm. Communicating the idea that these companies already have a lot of incentive and stake in making these cars as safe as possible is reassuring to people.

This gets them out of the mindset of, “Oh, you know these cars are about to be released on the roads, and they’re going to be making moral decisions, and we haven’t even figured out what sort of moral decisions they have to make, and someone needs to get on that.”

In these isolated incidents [where accidents happened], I think each of them has to be treated seriously and responsibly, with a clear account of exactly what happened in each case. One thing I got from this whole project was that even if there are a lot of improvements to be made, we’re definitely not going to get any closer to them by considering trolley-type dilemmas.

Q: Do you think the public would be more comfortable with AVs if there were black boxes, like they have in airplanes, that could document all of the processes that took place during an incident?

Anthony: That’s a great question. Certainly that’s something the people in the industry think about: whether safety is going to be more like current road safety – where you have cars that are functionally safe, and certifications of drivers – or whether it’s going to be more like airplanes, where you have a lot more data and more structure. Can you bring a structure like the one the NTSB [National Transportation Safety Board] uses with planes to driverless cars? Whether that’s going to come from industry best practices, safety organizations, or the SAE is an open question, and how you define what a driver is will change dramatically when it comes to driverless vehicles.

You can have a black box, but if the information coming out of the black box isn’t interpretable, then it doesn’t buy you that much. So being able to say, “Well, the vehicle saw this pedestrian, and believed that this pedestrian wanted to cross the street, and if you took a poll of people, then everyone would agree with that assessment,” then you can understand behavior in that context. So having the data, and being able to go back and debrief with that data is super important, but the data itself needs to be comprehensible.
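As a rough illustration of what “comprehensible” black-box data might look like, here is a minimal Python sketch. The record structure, field names, and values are hypothetical, not drawn from Perceptive Automata or any vendor’s actual logging format; the point is simply that each logged event pairs what the vehicle perceived with a plain-language account of what it believed and did.

```python
from dataclasses import dataclass

# Hypothetical structure for one human-readable entry in an AV "black box".
# Field names and values are illustrative only; no real logging format is implied.
@dataclass
class PerceptionEvent:
    timestamp_s: float        # time of the observation, in seconds
    object_id: str            # e.g. "pedestrian_17"
    detected_as: str          # what the perception stack classified the object as
    believed_intent: str      # human-interpretable claim, e.g. "wanted to cross the street"
    intent_confidence: float  # 0.0 - 1.0
    resulting_action: str     # what the planner did in response

def summarize(event: PerceptionEvent) -> str:
    """Turn a logged event into the kind of plain-language debrief described above:
    what the vehicle saw, what it believed, and what it did."""
    return (
        f"At t={event.timestamp_s:.1f}s the vehicle detected {event.object_id} "
        f"as a {event.detected_as}, believed it {event.believed_intent} "
        f"(confidence {event.intent_confidence:.0%}), and responded by "
        f"{event.resulting_action}."
    )

if __name__ == "__main__":
    event = PerceptionEvent(
        timestamp_s=12.4,
        object_id="pedestrian_17",
        detected_as="pedestrian holding a phone",
        believed_intent="wanted to cross the street",
        intent_confidence=0.83,
        resulting_action="slowing and yielding",
    )
    print(summarize(event))
```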

There’s also a really strong assumption built into these dilemmas that there’s perfect knowledge of the world. In my mind, that’s the most troubling, because we shouldn’t be operating from an assumption that these vehicles have perfect knowledge of the world. We need to be realistic, careful, and clear about what these vehicles can perceive, how that’s useful, and the fact that they don’t have perfect information about the world.

Q: Do you think that people expect that AVs should be able to perceive the world better than humans can, or at least at the same level? I’m thinking of the example where you have a 360-degree lidar, and most of these AVs can detect what’s behind the car, where humans either need to use the rear-view mirror or turn their head.

Anthony: I think they do, and I think that they should, but that’s still different from having perfect information. So the assumption that you know exactly the social category of someone that’s 100 meters away from your car, is that even remotely realistic, even in a situation where you have perfect human perception? I’m not sure it is.

There’s a huge benefit for an AV in that it can pay attention everywhere at once, so there’s no question of inattentiveness, daydreaming, etc. In that sense, it’s superhuman, but if you’re talking about a 360-degree lidar and whether it’s going to be able to say, “Well, that person is old and mobility challenged, so I want to be extra careful driving around them,” I don’t think you get that from a point cloud.

Q: Are there any situations where having ‘what-if’ scenarios is helpful in these discussions, such as ‘What if a jogger jumps out in front of the vehicle?’ or ‘What if a meteor strikes the highway?’ Or should we not be talking about things that will likely never happen in the real world, and just get over the initial fear of using an autonomous vehicle?

De Freitas: I definitely think the fear of those events is going to be much more salient than the reality. That’s not to say that those sorts of events won’t sometimes occur, but will they be true trolley dilemmas, where you have an exact 50-50 chance of hitting each person and no choice but to drive head on? That seems unlikely.

There’s probably, at any given point, an action you can take to buy more time, to minimize harm, and to potentially avoid either pedestrian. But even if such situations occurred with some vanishingly low frequency, part of the point of our article is that if you try to train machines on trolley-type dilemmas, you’re not going to have something that can both drive on the road and solve these sorts of dilemma situations.

The reason is that the more general goal, the one that makes sense, is to avoid harm. That means that if most of what you’re doing on the road is avoiding more mundane things, then optimizing for that goal will cover you. Then even when you are in a dilemma situation, you won’t be seeing the world through dilemma-type glasses; you will still just be trying to avoid and minimize harm rather than trying to choose whom to kill.
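As a rough illustration of that single harm-minimizing objective, here is a minimal Python sketch. The candidate maneuvers, collision probabilities, and severity numbers are invented for the example and do not come from the authors’ paper; the point is that the same objective covers both mundane driving and rare dilemma-like situations, without ever consulting anyone’s social category.

```python
# Illustrative sketch: the planner scores candidate maneuvers by expected harm
# (collision probability x severity) and picks the one that minimizes it.
# All maneuver names and numbers are invented for the example.

def expected_harm(maneuver):
    """Sum of collision probability times severity over every road user affected."""
    return sum(p * severity for p, severity in maneuver["risks"])

candidates = [
    # Each maneuver lists (probability of collision, severity if it happens)
    # for every road user it might affect.
    {"name": "brake hard",            "risks": [(0.05, 0.9), (0.01, 0.3)]},
    {"name": "swerve left and brake", "risks": [(0.02, 0.9), (0.03, 0.5)]},
    {"name": "maintain course",       "risks": [(0.60, 1.0)]},
]

# The same rule applies whether the scene is mundane or dilemma-like:
# choose whichever action minimizes expected harm overall.
best = min(candidates, key=expected_harm)
print(best["name"], expected_harm(best))
```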

The last point is that even if you are in such a situation, you wouldn’t want to decide based on someone’s social categories, because, for one, it’s kind of hard to get that from their appearance, and it’s also morally questionable to choose whether to harm someone based on the category to which they belong. So I think there is a lot of moralizing of these events, which makes these scenarios really salient in people’s minds, but even in the very few cases where they occur, we really don’t see the benefit of taking a trolley-dilemma approach to them.