We thought it would be automated nuclear weapons systems gone crazy. Or weapons-laden drones prowling the skies, killer satellites in space, or Terminator cyborgs who have decided that, in the interest of their own survival, humans should be exterminated.
But no. It’s going to be cute, little self-driving cars.
We already have semi-autonomous killing machines on the battlefield, but (theoretically) they will do everything except make the final decision to pull the trigger. That final, ultimate decision is supposed to be left to a human somewhere who can analyze the situation, decide whether the drone is targeting a friend or an enemy, and then issue the go/no-go death sentence.
But we are rapidly coming to a time when machines (or the algorithms that drive them) will be given the power to make those final decisions about who lives and who dies without human intervention – and it won't be on the battlefield, it will be on the highways.
Self-driving cars have a nearly spotless record so far in the limited tests that have been underway for the past half-dozen years. There have been a few fender-benders that weren't the fault of the cars, but that's about it. Sooner or later, though, a self-driving car is going to be forced into a situation (not necessarily of its own making) where it will have to make the ultimate decision about who lives and who dies, and it will have to do so so fast that even if the human driver wanted to take over, there wouldn't be time.
Two kids try to sprint across a highway. There isn't time to stop (even with the advanced braking systems controlled by the car), so the car can either veer to the left or to the right.
The car is going to have to decide in that split second which child to hit and which one to spare.
A car and a motorcycle collide and spin out right in front of the self-driving car. Does the car run over the motorcyclist lying in the road or plow into the car? Running over the motorcyclist would almost certainly kill him or her (if they aren't already dead), but the car might have multiple people inside. Then again, hitting the car might not kill any of its passengers. A tough decision even for a human.
And what happens when the car has to decide whether to kill other people or the occupants of the car? Let’s say an earthquake in San Francisco causes a section of the Golden Gate Bridge to collapse only a few dozen feet in front of a self-driving car. There is still one lane open but that one lane has a school bus in the way. Does the autonomous car hit the bus and risk killing dozens of kids (and possibly the occupants of the car too) or does it decide to avoid the bus, drive into the gaping hole and plunge hundreds of feet into the waters below?
Or, a more likely scenario: debris from a construction project falls onto a busy street in front of the car. Its choices are to run up onto the sidewalk and probably kill a number of pedestrians, or to pull into oncoming traffic and risk a head-on collision. Would it make a difference if the President and his family were in the car? Or the Pope? Or Stephen Hawking? Or Google executives Sergey Brin, Larry Page or Eric Schmidt? (Or an even more chilling possibility: what if the dealer offered an optional safety feature when you buy the car – pay an extra thousand bucks and the car will always choose to kill someone else first?)
I'm not sure I would want to be the programmer writing the algorithms that are going to make those decisions.
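To make the discomfort concrete, here is a purely hypothetical sketch of what a "pick the least bad outcome" routine might look like. Nothing in it reflects how any real autonomous-vehicle system is actually built; the maneuver names, the harm estimates, and the occupant_weight parameter are all invented for illustration.

```python
# Hypothetical illustration only: a toy "least bad outcome" chooser.
# The maneuvers, casualty estimates, and weighting scheme are invented;
# no real self-driving stack is known to work this way.

from dataclasses import dataclass


@dataclass
class Maneuver:
    name: str                        # e.g. "brake_straight", "swerve_left"
    expected_pedestrian_harm: float  # rough expected casualties outside the car
    expected_occupant_harm: float    # rough expected casualties inside the car


def least_bad_maneuver(options: list[Maneuver],
                       occupant_weight: float = 1.0) -> Maneuver:
    """Return the maneuver with the lowest weighted expected harm.

    occupant_weight = 1.0 treats everyone equally; a value above 1.0
    is the chilling "pay extra and the car protects its buyer first"
    option imagined above.
    """
    def total_harm(m: Maneuver) -> float:
        return m.expected_pedestrian_harm + occupant_weight * m.expected_occupant_harm

    return min(options, key=total_harm)


if __name__ == "__main__":
    options = [
        Maneuver("brake_straight", expected_pedestrian_harm=0.9, expected_occupant_harm=0.1),
        Maneuver("swerve_onto_sidewalk", expected_pedestrian_harm=1.5, expected_occupant_harm=0.0),
        Maneuver("swerve_into_oncoming", expected_pedestrian_harm=0.3, expected_occupant_harm=0.8),
    ]
    print(least_bad_maneuver(options).name)
```

Even in this cartoon version, every number and every weight is a moral judgment someone had to write down in advance – which is exactly the point.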