IBM Creates Critical Tool To Detect AI Bias     

Bias could turn future AIs into very dangerous things because it corrupts the process and almost certainly leads to untrustworthy results. This is particularly worrying as AIs advance from Narrow AIs to Broad AIs and eventually emerge as General AIs, because the farther you go down this evolutionary path, the more autonomous the AI becomes, and a powerful AI that can't be trusted could easily be a company killer. Few companies have the focus and breadth of skills to take on a task like this, and IBM fortunately is one of them. Its Cambridge-based MIT-IBM Lab is the tip of the spear when it comes to battling bias and, this week, it announced some impressive advancements.

Let’s talk about that this week.


Bias in an AI grows more dangerous the more the related tool is relied upon, and it can be introduced in three areas that aren't mutually exclusive (all three could introduce bias at once). The first is the quality of the data: samples that are too small, that leave out critical data sets, or that organize the data in a way that causes it to be forgotten or misinterpreted are the most obvious places to look for bias. Second, the AI algorithms themselves could be biased (and given they are created by humans, who tend to be biased, this can be especially problematic). Finally, the individual receiving the output from an AI may themselves be biased and misinterpret the data or the weights placed on the identified results.
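The first source, data quality, is the easiest to make concrete. Here is a minimal sketch (the field names and population shares are hypothetical, not from IBM's tooling) that compares each group's share of a training sample against its known share of the real population; large gaps flag potential sampling bias before a model is ever trained:

```python
from collections import Counter

def representation_gaps(records, group_key, population_shares):
    """Compare each group's share of a data set against its known
    population share; a large gap suggests sampling bias."""
    counts = Counter(r[group_key] for r in records)
    total = sum(counts.values())
    return {
        group: counts.get(group, 0) / total - expected
        for group, expected in population_shares.items()
    }

# Hypothetical training sample: 80% urban records, 20% rural,
# against a population that is 55% urban and 45% rural.
sample = [{"region": "urban"}] * 80 + [{"region": "rural"}] * 20
gaps = representation_gaps(sample, "region", {"urban": 0.55, "rural": 0.45})
print(gaps)  # rural is under-represented by 25 percentage points
```

A real bias audit would, of course, check many attributes and their intersections, but even a check this simple catches the "sample too small or missing critical data sets" failure the paragraph above describes.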

Bias is relatively easy to introduce but very difficult to identify and remove, largely because the people using the system may share the same biases and thus not recognize that the system is performing poorly. Bias could result in friendly-fire accidents in defense deployments, erroneous diagnoses from doctors relying on a biased system, and Smart Cities that are more insane than smart, putting citizens at risk, particularly where autonomous cars are running.

The identification and elimination of bias is therefore critical for any firm deploying an AI system.

IBM Cambridge Labs Steps Up

I spoke with IBM Research's Cambridge Lab Director Lisa Amini, who oversees much of this effort to eliminate bias from these emerging AIs. IBM has committed $240M over 10 years to creating and refining artificial intelligence. IBM's Cambridge Lab works very closely with MIT, and there is 100% participation by both entities in these AI-related projects.

Currently, much of the focus is on this generation of Narrow AIs, which are single-task and single-domain but have superhuman accuracy and speed within their very narrow tasks. What they are doing will have implications for AIs as they evolve first into Broad AIs (multi-task, multi-domain, multi-modal, distributed, explainable AIs) and eventually into fully autonomous and independent General AIs.

Their work to eliminate bias begins with a focus on DNNs (Deep Neural Networks), assessing information flow to determine what information is essential to the question (basically, eliminating noise). They also work on debugging neural networks, identifying and classifying problems with human intervention, to make sure there is no bias built into the AI itself. Part of this work is to both identify hostile attacks and create them, as part of a feedback loop designed to anticipate and mitigate attack surfaces, potentially in real time. To do this they use a process called CROWN, which bounds and defines the attack surface.
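To see why creating attacks is part of the defense, consider the classic fast-gradient-sign style of hostile attack (a generic illustration, not IBM's CROWN technique, which instead computes certified bounds on how far a perturbation can move the output). This sketch uses a hypothetical two-feature logistic classifier and shows how a small, bounded nudge to the inputs flips the prediction:

```python
import math

# Hypothetical tiny classifier: score = sigmoid(w . x + b)
w = [2.0, -3.0]
b = 0.5

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def predict(x):
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)

def adversarial(x, eps):
    """Gradient-sign attack: shift each feature by at most eps in the
    direction that lowers the score, i.e., opposite the sign of its
    weight (the gradient of the score w.r.t. that feature)."""
    return [xi - eps * math.copysign(1.0, wi) for xi, wi in zip(x, w)]

x = [1.0, 0.2]                 # clean input, scored well above 0.5
x_adv = adversarial(x, 0.6)    # each feature moved by only 0.6
print(predict(x), predict(x_adv))  # prediction flips below 0.5
```

Generating attacks like this against your own model, then hardening the model against them, is the feedback loop described above; certification methods such as CROWN go further by proving no perturbation within a given bound can cause such a flip.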

Some of the interesting attacks they have identified include yellow Post-it notes on stop signs, which make the AI think the sign is another type of sign (and might cause an autonomous car to run a stop sign), and big yellow dark glasses that can make facial recognition miss and misclassify someone as a celebrity. This effort is largely executed by a team of world-class Principal Investigators who aggressively ensure IBM's efforts are unbiased.

Wrapping Up:

While General AIs are estimated to be around 30 years in our future, we have Narrow AIs today, and in a few short years these will evolve into Broad AIs with far more autonomy and breadth. Bias could badly damage the effectiveness and accuracy of these AIs and, particularly in medicine, could have deadly results. An effort to remove this bias is critical to the success of this technology, and IBM's joint lab with MIT in Cambridge is the sharp end of that spear.

With a planned investment of $240M and an effort that spans most IBM labs, 17 department centers, and 30+ active related projects, the firm is serious about AI safety, security, and accuracy. That's old-school IBM, and we need old-school IBM for something as potentially revolutionary and dangerous as AI.