Let’s say there’s a trolley on some tracks. On those tracks are five men. The trolley is barreling down the tracks, and you know with absolute certainty that it will hit those five men. However, there is a side track with one man on it. In front of you is a switch. If you push it, the trolley will change direction and strike and kill the single man. If you do nothing, the trolley will strike and kill all five men. What do you do? Do you play an active role but condemn the man on the side track to death, or do you do nothing but allow five individuals to die? This is a classic ethics thought experiment.
Most people agree that they should divert the trolley and kill the one man: it serves the greater good and minimizes the loss of life. Now consider a different scenario. You’re standing on a bridge over the trolley tracks, and once again a trolley is barreling down toward five individuals on the track. Standing next to you is a man so fat that you know that, if you were to push him onto the tracks, his body would stop the trolley and save the five individuals. Far fewer people say they would push the man in this circumstance than say they would divert the trolley in the first.
They are simply not comfortable taking such an active role in the fat man’s death. These thought experiments are just hypotheticals, and it’s unlikely that anyone will ever have to make those decisions. However, ethicists and developers now have to create algorithms dictating what the autonomous cars of the future should do in similar ethical conundrums. Unlike human drivers, self-driving cars will have the ability to carefully choose their response to an oncoming collision, and they will need a set of pre-designated rules dictating what to do in the event of an unavoidable collision. One thought would be simply to tell the cars to follow the law. The problem is that there aren’t really laws for these situations; many laws are written with the understanding that ethics is a messy and fluid concept.
When judging a situation, one can never really know what motivated an individual to choose a certain action, but with self-driving cars we will be able to inspect the decision-making process. Additionally, following the law could be harmful in some situations. For example, let’s say a vehicle is stopped at a traffic light with a pedestrian in the crosswalk directly in front of it. The car detects that a truck is approaching too fast from behind and will hit the car. The car cannot move forward without hitting the pedestrian. There are two options: the car can stay put, get hit by the truck, and be pushed into the pedestrian in front of it, or the car can move forward, hit the pedestrian, but avoid being hit by the truck. From our perspective it probably seems right for the car to move, but this would mean that it was the car injuring the pedestrian rather than the truck.
This presents tricky legal and ethical questions about whether the car or the truck is now at fault for the pedestrian’s injuries. Given the laws we have now, it’s possible the fault would be laid upon the car. A related question is who is at fault in any accident involving a self-driving car: considering that there isn’t a human driving, is it the fault of the vehicle’s owner? Now suppose the pedestrian was moving fast enough that he would no longer be in front of the car at the moment the truck hit it, but that, in order for the car to move out of the truck’s path before impact, it would need to hit the pedestrian.
Is it now ethical for the car to injure the pedestrian, even though the pedestrian would likely escape injury if the car didn’t move? Does it depend on the number of people in the car? What if the car held a mother of three and the pedestrian was a felon? Would it be different if the car held a felon and the pedestrian was a mother of three? That’s where these ethical questions become even messier.

Another school of thought is that self-driving cars should be programmed to save the most human lives, or to cause the least possible injury; however, there are issues with this solution as well. Consider another situation: two motorcycles are coming toward the self-driving car on a road that is too narrow for the vehicles to pass each other.
The car does not have enough time to slow down, and there is nowhere to veer. One motorcyclist is wearing a helmet and one is not. Should the car hit the motorcyclist with the helmet, because his injuries might be less severe, or should it hit the motorcyclist without a helmet, because he did not properly protect himself? If cars were programmed to hit the helmeted motorcyclist, then in a way it would become safer to ride without a helmet. It’s a tricky situation altogether, and a real ethical conundrum. It is very difficult for humans to explain or justify the rules behind our own ethics, which is why these questions are so hard to answer. For this reason, one proposed approach to programming self-driving cars is “moral modeling,” essentially programming by example.
The computer would be presented with a situation, and a human, or ideally an ethics board, would tell the computer what the “ethical” solution would be. Over time the computer would learn to emulate human ethics and, essentially, make the same decisions a human would make.

These are all situations that are very unlikely to happen, but given how prevalent autonomous cars are likely to become, the algorithms that determine how to crash could decide the fates of dozens of people each year. At the same time, autonomous cars are predicted to save upwards of 30,000 lives per year once they are widespread. These ethics may become a hotly contested issue in the coming years, but if these debates prevent or slow the spread of driverless cars, many more lives could be lost than would ever be decided by the so-called “death algorithms.”
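At its core, the moral-modeling idea described earlier is a form of supervised learning: humans label example dilemmas, and the system imitates those judgments on new cases. A minimal sketch of that pattern, using a nearest-neighbor rule — every feature, case, and label here is invented purely for illustration and does not come from any real system:

```python
# Toy sketch of "moral modeling" (programming by example).
# Hypothetical feature vectors: [people_in_car, pedestrians_at_risk, speed_norm]
import math

# Training cases labeled by humans (ideally an ethics board) -- all invented.
training_cases = [
    ([1, 5, 0.8], "swerve"),          # one occupant, many at risk: swerve away
    ([4, 1, 0.8], "brake_straight"),  # full car, one at risk: brake in lane
    ([1, 0, 0.3], "brake_straight"),  # low speed, no one at risk: just brake
]

def decide(features):
    """Return the human-approved label of the most similar training case."""
    return min(training_cases, key=lambda case: math.dist(features, case[0]))[1]

print(decide([1, 4, 0.7]))  # closest to the first case, so it prints "swerve"
```

A real system would of course use far richer features and models, but the sketch shows both the appeal and the worry: the car’s behavior is only as consistent, and only as defensible, as the example judgments it was trained on.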