Why Google’s Driverless Car Is Evil

Google’s autonomous car is a media darling, often portrayed as a techno-marvel that will free the world’s commuters from the burden of operating motor vehicles.

At the 2013 Los Angeles Auto Show last month, Ron Medford, director of safety for self-driving cars at the search giant, described the robotic vehicle program by invoking the company’s “don’t be evil” credo:

We certainly hope that our friends in the auto industry [and others] don’t see us as anything but another way in which we’re trying to do good for the world.

By that, I assume Medford means safer driving and motorists able to reclaim hours lost to road tedium for more productive tasks. That could well be true.

But consider this: American drivers, by conservative estimates, spend an hour a day behind the wheel—just about the only time we don’t have our eyes glued to screens. If even a small portion of America’s 200 million or so drivers are turned into online media consumers for that hour, with the Google car doing the driving and Google ads served up along the way, Google benefits big time.

Some people might have qualms about trusting sensors, radars, cloud-supplied road data, and onboard processing power. That doesn’t concern me so much. The margin of error for computers, I believe, is much lower than what results from human fallibility. What does bother me is how Google, once it has us in its vehicles, will “optimize” our in-car experiences.