We are surrounded by fake news, whether it's from someone trying to discredit someone else, trick us into doing something not in our best interests, or convince us that a viable medical treatment is bad or a non-viable one is good. Particularly with video and the emergence of Deep Fakes, we may increasingly see people we trust misrepresented, giving us bad advice or selling us a false narrative. Worse, even if we later learn that a video is false, the impression it leaves may cause us to mistakenly grant or withdraw trust in a person or institution.
Intel has created an AI technology named FakeCatcher that can identify these fake videos with a 96% accuracy rate, providing critical context about the content we are consuming so we are not as easily tricked. However, technology in this class has one big problem, and addressing that problem won't be easy.
Let’s talk about Intel’s FakeCatcher and the problem that surrounds this entire class of AI-based security tools this week.
The big problem
Let's start with the problem. AI tools that catch other AIs doing bad things are critical to our safety. However, offense is always a ton easier than defense. The issue with tools in this class is that they are generally trained on existing criminal work, which limits them to attacks that have already been seen in the wild, while attackers are free to innovate and create new methods to execute their crimes. Increasingly, AIs can even be used to suggest attack methods that these mitigation tools won't initially be able to identify, based on what is known about how the tools work.
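To make that limitation concrete, here is a minimal, hypothetical sketch. The data is invented and the classifier is a generic one; this is not Intel's method or any real detector. It simply shows how a model trained only on the artifact signature of today's fake generators can wave through a novel generator whose signature it has never seen.

```python
# Toy illustration: a detector trained on known fakes misses novel ones.
# All data here is synthetic; feature values are placeholders, not real signals.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Pretend each video is summarized by two artifact-score features.
real        = rng.normal(loc=[0.0, 0.0], scale=0.3, size=(200, 2))
known_fakes = rng.normal(loc=[2.0, 2.0], scale=0.3, size=(200, 2))  # today's generators

X = np.vstack([real, known_fakes])
y = np.array([0] * 200 + [1] * 200)  # 0 = real, 1 = fake

detector = LogisticRegression().fit(X, y)

# A novel generator leaves a different signature the detector never trained on.
novel_fakes = rng.normal(loc=[-2.0, 2.0], scale=0.3, size=(50, 2))
print("fraction flagged as fake:", detector.predict(novel_fakes).mean())
# Prints near 0.0: almost every novel fake slips through as "real".
```

The offense/defense asymmetry is right there: the defender's model is frozen at training time, while the attacker gets to probe it and move somewhere it has never looked.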
Whether we are talking about a physical or digital attack, the attacker always has the advantage because they get unlimited time to study the entity they want to trick, whereas the victim must protect against an overwhelming number of potential exploits and attack vectors. An attacker may spend years of preparation, mostly focused on figuring out how to get past security, while the defense has only microseconds to stop the attack.
The good news is that, for most of us, no one is going to go through that level of preparation, and an attacker most likely wouldn't know you had access to a tool like FakeCatcher until they were caught, unless you were foolish enough to broadcast that you used the tool. (By the way, this is often why security firms have such a huge problem getting reference accounts. Smart CSOs know that part of keeping their organizations safe is making sure few people inside or outside the company know what security protections are in place.)
Tiering security protection
FakeCatcher, should we gain access to it, could help us determine in real time whether someone is attempting to trick us into doing something not in our best interest. However, for high-value targets (top politicians, top executives, anyone responsible for a high-value asset, and certain celebrities), this technology alone may not be enough. With high-value targets, the effort to trick that person will likely be far better prepared, and the attacker may discover that FakeCatcher is in use and craft a way around the tool.
This extra exposure typically requires a far higher level of security training for the potential victim and multiple layers of security to better ensure they aren't compromised or tricked. This isn't to disparage Intel's tool, but to underscore that no defensive security tool can stand up to a well-funded, targeted attack that takes the existence of that tool into account. I should also add that there is a recurring problem with security products: they typically track successful attacks but don't report unsuccessful attempts that would otherwise allow the security team to be more proactive in their defense.
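As a rough illustration of what that layering might look like in practice, here is a hypothetical sketch. The function names, thresholds, and placeholder return values are all assumptions made for illustration; this is not FakeCatcher's actual API. Note that it also logs every screening attempt, blocked or not, which speaks to the reporting gap just mentioned.

```python
# Hypothetical tiered defense: no single check is trusted alone, and every
# attempt is logged so the security team can be proactive, not just reactive.
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("video-verification")

def deepfake_score(video_path: str) -> float:
    """Stand-in for a FakeCatcher-style detector; 0.0 = real, 1.0 = fake."""
    return 0.12  # placeholder value for this sketch

def out_of_band_confirmed(video_path: str) -> bool:
    """Stand-in for a human step: call the purported speaker on a known number."""
    return True  # placeholder

def act_on_video(video_path: str, high_value_target: bool) -> bool:
    score = deepfake_score(video_path)
    log.info("screened %s, fake-score=%.2f", video_path, score)  # log every attempt

    if score > 0.5:
        log.warning("blocked %s as a likely fake", video_path)
        return False

    # For high-value targets, a passing detector score is not enough on its own.
    if high_value_target and not out_of_band_confirmed(video_path):
        log.warning("blocked %s: detector passed but confirmation failed", video_path)
        return False
    return True

print(act_on_video("board_briefing.mp4", high_value_target=True))
```

The design point: for a high-value target, a passing detector score only earns the video a second, human check; it never clears the video outright.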
Wrapping up: FakeCatcher is a critical tool but…
Intel's FakeCatcher is a critical tool in our effort to secure against emerging threats. However, defensive tools remain at a disadvantage because they are generally trained on known attacks, and an attacker can innovate around them. This means that if you are responsible for high-value targets, something like FakeCatcher should be part of your toolset, but you can't rely on it exclusively. You must also constantly remind potential targets of Deep Fake-style attacks to treat every video as potentially fake and to take the time to verify that it is valid before acting on it.
In short, FakeCatcher, coupled with healthy skepticism and training against confirmation bias (our tendency to readily accept things that seem to agree with our existing worldview), should provide far more comprehensive protection against being tricked than any of these measures alone.