This week I was part of an SNIA panel on Ethics and AI. While I enjoyed the chat (I pretty much enjoy talking to anyone about anything), I left feeling that the goal of an ethical AI created by humans is impossible. It is impossible because, when it comes to ethics, not only is there no globally accepted set of ethical rules, but even where rules exist, they tend to be overridden the moment they come into conflict with financial goals.
We saw that happen twice as Google’s Ethics team lost its top two leaders; the second one appeared to be fired for complaining about the ethics of the first one’s termination. These terminations are doubly problematic for Google because both leaders were women in a male-dominated industry. Facebook didn’t even appear to try to overcome its ethical problems associated with the mass distribution of false information and instead seemed to have designed an AI algorithm that maximized social damage in order to maximize revenue from that problem.
This problem is only the visible tip of the iceberg. There are undoubtedly AI projects across the ecosystem that will, eventually, do harm at scale, and if we don’t stop kicking this can down the road, the outcome could be dire.
The Problem With Morals And Ethics
The foundation of this problem is the need for an immutable set of rules that define what is right and wrong in terms of AI behavior. The gold standard for such rules has historically been Isaac Asimov’s Three Laws of Robotics, which are:
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey the orders given to it by human beings except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
Now, if you think about it, these rules could be applied broadly if the definition of “harm” were broad enough. The implication would be that an AI could not act against the best interests of any human. That would rule out AIs in weapons systems, AI robocallers, AI-driven scams, and AI-driven malware. The underlying concepts already form the core of how autonomous vehicles are trained, focused on the idea that the vehicle can’t intentionally harm anyone. However, the concept of harm through inaction isn’t yet part of the programming. An example would be a parked car intentionally moving to prevent another car from hitting a pedestrian it determined was at risk. But, while that behavior would seem beneficial, particularly to that theoretical pedestrian, the resulting liability would likely render that part of the law moot.
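To make that gap concrete, here is a minimal sketch, in Python, of what a priority-ordered rule check modeled on Asimov’s laws might look like inside a vehicle’s decision loop. Everything in it (the `Action` fields, the `permitted` function) is hypothetical and illustrative, not how any real autonomous-vehicle stack is written; the point is that the “no intentional harm” check is easy to express, while the “harm through inaction” clause sits commented out, which is roughly where the industry has left it.

```python
from dataclasses import dataclass


@dataclass
class Action:
    """A hypothetical candidate action the vehicle's planner is considering."""
    description: str
    harms_human: bool              # would executing this directly injure someone?
    prevents_harm_to_human: bool   # would it shield a bystander (e.g., move to block a collision)?
    ordered_by_human: bool         # was it explicitly commanded by an operator?
    risks_vehicle: bool            # does it risk damaging or destroying the vehicle itself?


def permitted(action: Action) -> bool:
    """Evaluate a candidate action against a priority-ordered rule set modeled on the Three Laws."""
    # First Law, first clause: never select an action that directly harms a human.
    # This is roughly the part autonomous-vehicle training encodes today.
    if action.harms_human:
        return False

    # First Law, second clause: "through inaction, allow a human being to come to harm."
    # This branch is effectively absent from real systems, largely because of the
    # liability a self-sacrificing maneuver would create.
    # if pedestrian_at_risk() and not action.prevents_harm_to_human:
    #     return False

    # Second Law: follow operator commands. Any command that harmed a human was
    # already rejected above, so obeying is safe at this point in the ordering.
    if action.ordered_by_human:
        return True

    # Third Law: otherwise, don't pick actions that needlessly risk the vehicle.
    return not action.risks_vehicle


# Example: a command to accelerate through an occupied crosswalk is refused,
# even though it was ordered by a human.
print(permitted(Action("accelerate through occupied crosswalk",
                       harms_human=True, prevents_harm_to_human=False,
                       ordered_by_human=True, risks_vehicle=False)))  # -> False
```

Even in this toy form, the commented-out branch is where the economics intrude: wiring it in would make the manufacturer answerable for every incident the car could theoretically have prevented, which is exactly the liability problem described above.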
And there lies the problem: while we inherently know that the right thing for an autonomous car to do would be to sacrifice itself to protect a human, adult or child, we likely wouldn’t allow that behavior because of the economics surrounding the decision.
This corruption of an AI played out in science fiction in the movie RoboCop, where the four laws the cyborg officer had to follow were corrupted by the classified last one. The first three had to do with protecting people; the fourth protected senior executives of the firm that created him (OCP) from being arrested, effectively corrupting the cyborg. From WarGames to HAL in 2001: A Space Odyssey, science fiction writers have been warning us of the potential problems of an AI that is either corrupted or has no rules in the first place, but we, as an industry, have yet to take those lessons seriously.
Wrapping Up: Without Rules, AIs Are Extremely Dangerous
AIs make decisions at machine speeds. If we can’t consistently differentiate right from wrong in our own behavior, we likely also cannot make that determination regarding the behavior of AIs. Asimov attempted to define a simple set of rules back around 1940, then wrote a series of books exploring how robots could get around those rules.
As we move into this new age of autonomous machines, implementing rules that will prevent these machines from doing massive amounts of damage is critical to our future. Yet, as we’ve seen from companies like Google and Facebook, we are ill-suited both to set the rules and to ensure their enforcement, and we’ll continue to see AI efforts that may be profitable but will do the world far more harm than good.
I’ll leave you with this: an analysis of what Ultron was trying to do in Avengers: Age of Ultron. It showcases very well the problem of creating a system that inherits far too many of its creator’s flaws.
In the end, if we can’t reliably tell right from wrong, neither will the AIs we eventually create.