Intel: Using Artificial Intelligence to Create A Better World      

There is a lot of concern that the AIs we are creating will, either accidentally or on purpose, decide that the best path to whatever their end goal is involves eliminating humans. This isn’t just the Terminator scenario – it could be something as simple as self-replication without limits, where the damn things propagate so quickly and consume so many natural resources that there is nothing left for us.

Intel has been at the forefront of firms trying to find another, better way forward with AI, and its resident anthropologist, Genevieve Bell, has largely been championing this fight. Intel held an AI Day last week and Bell presented, via video, in the final session. Sadly, it came late in the day and after the cocktail mixer had started, so I think most of those attending missed it. But it was likely the most important session of the entire event. Here are some of my thoughts.

Keeping AIs Focused on Good

A great deal of the concern surrounding AIs arises from the fact that we aren’t particularly good at thinking through risk ourselves. For instance, the economic collapse a few years back, driven by really bad lending and investment practices, could have been anticipated by a high-school student. Yet the folks in charge saw only the opportunity for fast money, not the catastrophic outcome that would, and did, come rather quickly afterwards.

Or take the Large Hadron Collider and the chance that it could turn the earth into a black hole, ending all life. Calculations were done showing the chance was extremely remote, and then, once the collider was operating, the calculations were checked only to find they were off in the wrong direction by several orders of magnitude. We’re still here, but given that we can’t easily move somewhere else, continuing to do things that even remotely might wipe out the entire solar system would seem to be a rather foolish thing to do.

Now, this same human thinking is going into the creation of autonomous and increasingly intelligent manufacturing systems, security systems, financial systems, cars, planes, and, yes, weapons of mass destruction. So you can appreciate why folks like Elon Musk, Bill Gates, and Stephen Hawking (who recently popped up saying we need to find a way to move off the planet) are warning us about deadly AIs.

This underscores the need to keep AIs focused on good.

Intel and The Good AI

In that final session of Intel’s AI Day event, examples were given of how artificial intelligence could help families in need, could help identify and protect children who are abused and used as sexual slaves (something that disgusts me even to have to say), and could protect against bullies.

On this last subject, Recode reporter Ina Fried’s story was the most interesting, both because it was a story about her and because it resonated with me. Fried belongs to the LGBT community, whose members are often targeted by bullying and hate crimes. However, it was the attacks by members of the Apple community that she found most harmful and hurtful. I’ve had similar run-ins with both the Apple and Linux communities, with the latter going so far as to kill one of my pets, so I certainly sympathized. But the fix that Intel CEO Brian Krzanich articulated in his AI Day keynote struck me as both unique and compelling.

What Krzanich described was an AI solution that, rather than just identifying and blocking bullies (which often incenses them and causes escalation), would work as a tool to help bullies realize what they are doing. It would help prevent further incidents either by showcasing the likely repercussions (which is particularly useful for kids) or by alerting parents or employers before escalation occurs.
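Intel didn’t share implementation details, but the idea lends itself to a simple pipeline: score a message for abusive content and respond with escalating interventions rather than a silent block. The sketch below is purely my own illustration of that flow – the toy classifier, the thresholds, and the escalation tiers are all assumptions on my part, not Intel’s design.

```python
# Hypothetical sketch of an "intervene, don't just block" bullying filter.
# The scoring function, thresholds, and escalation tiers are assumptions --
# Intel has not published how its system actually works.

from dataclasses import dataclass


@dataclass
class Assessment:
    score: float   # 0.0 (benign) .. 1.0 (clearly abusive)
    action: str    # what the system does in response


def score_message(text: str) -> float:
    """Stand-in for a real abuse classifier (e.g., a trained text model).

    Here we just count words from a toy blocklist so the sketch runs.
    """
    toy_blocklist = {"idiot", "loser", "worthless"}
    words = text.lower().split()
    hits = sum(1 for w in words if w.strip(".,!?") in toy_blocklist)
    return min(1.0, hits / 3)


def assess(text: str) -> Assessment:
    score = score_message(text)
    if score < 0.3:
        return Assessment(score, "send normally")
    if score < 0.7:
        # Nudge the sender: show likely repercussions before sending.
        return Assessment(score, "warn sender and ask to confirm")
    # Severe or persistent abuse: loop in a parent or employer before
    # things escalate, rather than silently blocking the message.
    return Assessment(score, "hold message and notify guardian/employer")


if __name__ == "__main__":
    for msg in ["See you at lunch!", "You're such an idiot, loser."]:
        print(msg, "->", assess(msg).action)
```

The design point is the middle tier: confronting the sender with the likely consequences before the message goes out addresses the behavior itself, where a silent block would just provoke the escalation Krzanich warned about.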

Telling a parent or an employer after someone has gone off the deep end does little good, because by then the event is obvious and the damage is done. But if this could be stopped before the escalation happens, perhaps we could prevent the next mass shooting instead of just regretting it.

You see, AIs could make us all smarter: not just smarter people, but smarter parents, smarter bosses, and, hopefully, smarter scientists. I’m still convinced the last thing we’ll all hear on this planet is some overzealous scientist saying “oops”.

Wrapping Up

There are actually a number of efforts trying to keep AIs from going Terminator or worse: there is AI Shield by the Lifeboat Foundation, and Elon Musk just donated $10M to the Future of Life Institute with a similar goal. There is even a mysterious “Shield AI” out there, but little is known about it other than that it seems to be designed to create friendly, Terminator-like drones (that’ll end well).

But we also need core technology companies like Intel, and thought leaders like Brian Krzanich and Genevieve Bell, out there focused on keeping us safe. Otherwise those cute robots that marginally clean our floors and windows could evolve into something that will chase us around, and not in any fun way. It is nice to know that at least some folks in power are trying to make sure that the last human word uttered isn’t “oops”.