IBM’s AI Ship Failure Showcases Critical Shortcoming With Autonomous Vehicles

Working with a host of others, IBM launched an autonomous ship that was to showcase how an AI could navigate to a destination more quickly, safely, and efficiently than a human crew.  However, this effort failed in June, but not because of the AI, software, or anything to do with the solution being tested.  It failed, much the same way we might see an autonomous car fail, because the ship’s powerplant had a problem.

With a human-captained ship, the crew, which could consist of one person, would generally have the capability to diagnose and fix the problem.  Much like with a car: if a problem arises, the driver can attempt to diagnose the problem, fix it, and, if needed, call for remote or in-person help.  

But with this autonomous ship, the AI not only couldn’t initiate a repair but also wasn’t even equipped to analyze the problem so that someone could go to the ship and fix it, let alone have the ship fix itself.  

I think this exposes an oversight in how we are building autonomous vehicles: we need to either provide high-speed remote help or build in the ability for the car to diagnose the problem and then either fix it itself or guide the passenger/driver through the fix.  Let me explain.  

The Autonomous Missing Link

The week I’m writing this, where I live is setting heat records.  That means if you were to break down on the side of the road, you couldn’t sit in your car or you’d bake, and without water or cell service, you’d be in real trouble and in danger of dying from heatstroke.  

There is a running joke on the internet about how drivers back when I was growing up knew how to do essential maintenance, while drivers today need to be warned not to drink gasoline.  I still recall sitting outside drinking coffee 20 years ago and having two kids drive up, tell me not to talk down to them because they knew a lot about cars, and then proceed to ask me what the air cleaner on top of their engine did (they used the technical term “circular thing with the trumpet sticking out of it”).  

At least when you are driving your car, you can often feel that things aren’t going right and stop at a gas station to check whether you have a problem.  But with an autonomous car, you’ll be sleeping, reading, watching a movie, teleconferencing, or doing anything but driving, figuring the car knows how to take care of itself.  

The car will have tons of sensors, will be able to use the in-car network to provide early warning of an automotive failure, and should (but currently doesn’t) have the ability to anticipate a future problem and prioritize going to the closest gas station, dealership, or relevant shop to get the problem fixed.  
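To make the idea concrete, here is a minimal sketch of what that anticipatory-diagnostics loop could look like: compare live sensor readings against warning thresholds and, when a component is trending toward failure, reroute to the nearest service stop.  All sensor names, thresholds, and functions here are illustrative assumptions, not any real vehicle’s API.

```python
# Hypothetical anticipatory-diagnostics sketch; names and thresholds
# are invented for illustration only.
from dataclasses import dataclass


@dataclass
class ServiceStop:
    name: str
    distance_km: float


# Illustrative warning thresholds per sensor (units implied by the key).
THRESHOLDS = {
    "coolant_temp_c": 110.0,      # fails high
    "battery_cell_delta_v": 0.15,  # fails high
    "brake_pad_mm": 3.0,           # fails low
}


def failing_components(readings: dict) -> list:
    """Return the sensors whose readings cross their warning threshold."""
    flagged = []
    for sensor, value in readings.items():
        limit = THRESHOLDS.get(sensor)
        if limit is None:
            continue
        # Brake pads wear down, so they fail low; the others fail high.
        if sensor == "brake_pad_mm":
            if value < limit:
                flagged.append(sensor)
        elif value > limit:
            flagged.append(sensor)
    return flagged


def plan_service(readings: dict, stops: list):
    """If anything is flagged, pick the nearest place to get it fixed."""
    if not failing_components(readings):
        return None  # nothing wrong; stay on route
    return min(stops, key=lambda s: s.distance_km)


stops = [ServiceStop("Downtown dealership", 12.5),
         ServiceStop("Highway garage", 4.2)]
readings = {"coolant_temp_c": 114.0, "brake_pad_mm": 5.0}
stop = plan_service(readings, stops)
print(stop.name if stop else "no service needed")
```

A production system would obviously weigh severity, range remaining, and shop capability rather than raw distance, but the core loop — monitor, flag, reroute before the failure strands the passenger — is the capability the article argues is missing.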

But, ideally, over time, we would build in the capability, either through redundancy or robotics, to take care of the problem automatically.  It helps that autonomous cars are going electric, and autonomous trucks are expected to travel in convoys with one or two people providing in-transit support and security.  Still, one of those people should likely also be a mechanic so that the cargo isn’t put at extended risk due to a fixable mechanical failure.  

Wrapping Up: The Mechanical Missing Link

Collectively, the skills needed to maintain a car are being taught to drivers less and less often.  As we begin to turn cars into horizontal elevators, these skills will continue to decline, but the danger to drivers will only increase.  That’s because as people stop driving, they will be less able to notice a stalled car, let alone know how to help the stalled driver diagnose the problem, get the driver to safety, and get the problem fixed as expeditiously as possible.  

Due to this skill decline, autonomous cars will need a far higher level of redundancy so passengers can rely on this form of transportation and so we don’t start hearing about a lot of drivers stranded in unsafe locations due to an unanticipated mechanical problem. 

By initially building in deeper diagnostics so that problems can be caught and corrected before they become catastrophes, the industry can better assure the entire safety promise of autonomous cars.  If this isn’t done, the increasing number of stories where autonomous cars caused deaths, like the Tesla Autopilot stories, could damage the related brands.  Assuring that doesn’t happen means the industry needs to take a deeper look at solutions that provide not only autonomous driving but also the redundancy and anticipatory diagnostics to assure you don’t wake up in your autonomous car in 110-degree weather, in a dead cell area, with no one around to help.