Let’s say there’s a trolley on some tracks. On those tracks are five men. The trolley is barreling down the tracks, and you know with absolute certainty that it will hit those five men. However, there is a side track with one man on it. In front of you is the switch. If you pull it, the trolley will change direction and strike and kill the single man. If you do nothing, the trolley will strike and kill all five men. What do you do? Do you play an active role but condemn the man on the side track to death, or do you do nothing and allow five people to die? This is a classic ethics thought experiment. Most people agree that they should divert the trolley and kill the one man; it serves the greater good by minimizing the loss of life.

Now consider a different scenario: you’re standing on a bridge over the trolley tracks, and once again a trolley is barreling down toward five people on the track. Standing next to you is a man so fat that you know that, if you were to push him onto the tracks, he would stop the trolley and save the five people. Far fewer people say they would push the man in this circumstance than say they would divert the trolley in the first one. They are simply not comfortable taking such an active role in the fat man’s death.

These thought experiments are just hypotheticals, and it’s unlikely that anyone will ever have to make those exact decisions. Ethicists and developers, however, are now having to create the algorithms that will dictate what our autonomous cars of the future should do in similar ethical conundrums. Unlike humans, self-driving cars will have the ability to carefully choose their response to an oncoming collision, and they will need a set of pre-designated rules telling them what to do in the event of an unavoidable crash.

One thought would be just to tell the cars to follow the laws. The problem is that there aren’t really laws for these situations. Many laws are written with the understanding that ethics is a messy, fluid concept: when judging a situation after the fact, one can never really know what motivated an individual to choose a certain action. With self-driving cars, by contrast, we will be able to inspect the decision-making process. Additionally, following the laws could be harmful in some situations. For example, let’s say a vehicle is stopped at a traffic light and there is a pedestrian in the crosswalk directly in front of it. The car detects that a truck is coming in too fast from behind and will hit it, and the car cannot move forward without hitting the pedestrian. There are two options: the car can do nothing, stay put, get hit by the truck, and be pushed into the pedestrian in front of it, or the car can move forward, hit the pedestrian, but avoid being hit by the truck. From our perspective it probably seems right for the car to move and hit the pedestrian, but this would mean that it is now the car injuring the pedestrian rather than the truck. This raises tricky legal and ethical questions about whether the car or the truck is at fault for the pedestrian’s injuries. Given the laws we have now, it’s possible that the fault would be laid upon the car. A toy sketch of how a car might weigh these two options appears below.

Another question is who is at fault for an accident with a self-driving car in the first place. Considering that there isn’t a human driving, is it the owner of the vehicle’s fault?
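To make that trade-off concrete, here is a minimal, hypothetical sketch (in Python) of a purely harm-minimizing decision rule applied to the stopped-car scenario above. The Option class, the harm scores, and the choose function are all invented for illustration; they are not taken from any real autonomous-driving system.

```python
from dataclasses import dataclass

@dataclass
class Option:
    name: str
    expected_harm: float  # rough estimate of injury severity, 0.0 (none) to 1.0 (fatal)
    car_is_actor: bool    # does the car actively cause the harm?

def choose(options: list[Option]) -> Option:
    # Pure harm minimization: pick whichever option is expected to injure least,
    # regardless of whether the car or the truck ends up being the one "at fault".
    return min(options, key=lambda o: o.expected_harm)

stay = Option("stay put and be pushed into the pedestrian", expected_harm=0.8, car_is_actor=False)
move = Option("move forward and bump the pedestrian", expected_harm=0.4, car_is_actor=True)
print(choose([stay, move]).name)  # -> "move forward and bump the pedestrian"
```

Note that this rule deliberately ignores who causes the harm, which is exactly the legal and ethical gap described above: the cheaper-injury option is also the one where the car becomes the active party.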
Now let’s say the pedestrian is moving fast enough that he would no longer be in front of the car at the moment the truck hits it, but that, in order for the car to get out of the truck’s path in time, it would have to hit him. Is it now ethical for the car to injure the pedestrian even though the pedestrian would likely escape injury if the car didn’t move? Does it depend on the number of people in the car? What if the car carried a mother of three and the pedestrian was a felon? Would it be different if the car carried a felon and the pedestrian was the mother of three? That’s where these ethical questions become even messier.

Another school of thought is that self-driving cars should be programmed to save the most human lives, or to cause the least possible injury, but there are issues with this solution as well. Consider another situation: two motorcycles are coming toward the self-driving car on a road too narrow for the vehicles to pass each other. The car does not have enough time to slow down, and there is nowhere to veer. One motorcyclist is wearing a helmet and one is not. Should the car hit the motorcyclist with the helmet because his injuries might be less severe, or should it hit the motorcyclist without the helmet because he did not properly protect himself? If cars were programmed to hit the helmeted rider, it would, in a way, become safer to ride without a helmet. It’s a tricky situation altogether, and a real ethical conundrum.

It’s very difficult for humans to explain or justify the rules behind our own ethics, which is why these questions are so hard to answer. For this reason, one proposed approach to programming self-driving cars is “moral modeling,” essentially programming by example. The computer would be presented with a situation, and a human, or ideally an ethics board, would tell it what the “ethical” choice is. Over time the computer would learn to emulate the ethics of a human and, essentially, make the same decisions a human would make. A toy sketch of this idea appears at the end of this post.

These are all situations that are very unlikely to happen, but given how prevalent autonomous cars are likely to become, the algorithms that determine how to crash could decide the fate of dozens of lives each year. At the same time, autonomous cars are predicted to save upwards of 30,000 lives per year once they are widespread. These ethics may become a hotly contested issue over the next few years, but if the debate prevents or slows the spread of driverless cars, many more lives could be lost than would ever be decided by the so-called “death algorithms.”
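Finally, to make the “moral modeling” idea a bit more concrete, here is a toy sketch of programming by example: an ethics board labels a handful of scenarios, and the car imitates the closest labeled example when it meets a new one. The features, labels, and numbers are all hypothetical, and a simple nearest-neighbor lookup stands in for whatever learning method a real system would actually use.

```python
from math import dist

# Each scenario is a crude feature vector: (people_in_car, people_in_path, closing_speed_mph).
# The attached label is the decision an ethics board says is the "ethical" one.
labeled_examples = [
    ((1, 5, 40), "brake_in_lane"),
    ((4, 1, 40), "swerve"),
    ((1, 1, 15), "brake_in_lane"),
]

def moral_model(scenario):
    # Imitate the decision attached to the most similar labeled scenario
    # (a 1-nearest-neighbor lookup in feature space).
    _, decision = min(labeled_examples, key=lambda ex: dist(ex[0], scenario))
    return decision

print(moral_model((2, 3, 35)))  # the car copies the board's judgment for the closest case
```

The appeal of this approach is that nobody has to write down explicit ethical rules; the drawback is that the car’s ethics are only as consistent, and as defensible, as the examples it was shown.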