This past March, WikiLeaks dumped 8,761 CIA documents collectively known as “Vault 7.” These documents contained information about what was essentially the agency’s hacking arsenal. They included malware, viruses and Trojans used for espionage purposes. More importantly, they had information about zero day vulnerabilities the CIA had been using to hack computers, tablets, smartphones and other devices for intelligence gathering purposes. Frighteningly, all of it was made available to hackers in one fell swoop. Wired called it “a one-stop guide to zero day exploits.”
On the bright side, cyber security researchers have access to the same information, which means they have some time to steel the rest of us against any fallout that could ensue from these previously undisclosed cyber weapons. Nevertheless, these zero day threats are out in the open now, and they can be used against us.
With that in mind, there’s no better time than now to dive into the world of zero day exploits. This post looks at how zero days behave, assesses some of the most infamous examples of them, and perhaps most importantly, provides best practices for how to deal with these elusive threats.
Part 1: The Evolution of Zero Days
A Sinister and Growing Cyber Threat
A zero day threat is a vulnerability for which developers have had zero days to prepare a fix: it becomes known, or is exploited, before a patch exists. In many cases, these threats are first identified by penetration testers and white hats, which gives developers time to issue emergency patches. In other cases, such as the CIA example, they’re leaked, which puts the good guys and the bad guys on even footing. Then, there are occasions in which hackers find the vulnerability first. They can exploit the flaw for nefarious purposes, and in doing so, incidentally tip off researchers to its existence.
In the past few years, the prevalence of these threats has spiked. In 2012, 14 zero day vulnerabilities were discovered. This number jumped to 23 in 2013, and then inched up to 24 in 2014. But in 2015 – the most recent year for which data is available – 24 became 54, a 125 percent increase. Part of the reason for this jump, according to Ars Technica, is that in 2015, a spyware contractor known as The Hacking Team was infiltrated. Among the casualties were six then-undisclosed zero day threats.
Perhaps even more ironic than a firm called “The Hacking Team” getting hacked is that Adobe Flash and other programs known for being sources of zero day threats improved their patching speed. Common sense tells us that this is a good thing, and in a way, it is. However, Adobe still accounted for a large share of the year’s zero days, according to Symantec. The faster these holes get patched, the quicker cyber attackers come back to find new ones. It’s worth noting that Adobe Flash is no longer supported by Google, Mozilla, Facebook and most other prominent internet technology companies.
Building on this irony, the targets of zero day threats aren’t companies with poor security posture. Rather, according to a comprehensive study about zero day threats – researched and written by the RAND Corporation – zero day exploits are primarily used against organizations that promptly patch newly discovered threats. Companies that procrastinate on applying existing patches are the low-hanging fruit, since they can be breached using older, simpler vulnerabilities.
In other words, as companies improve their overall computer management, zero day exploits may actually become more popular. This makes perfect sense upon closer examination: If hackers can’t use known vulnerabilities against organizations, then they’ll have to find new, unknown, and more complex ones.
That’s exactly what they’re doing. These brand new threats are by far some of the most difficult cyber attacks to defend against. Often, we never see them coming, and even when we do, the response frequently comes too late.
Discovering the Vulnerability First Helps, But It’s Not a Panacea
In many cases, hackers will sniff out a vulnerability before the white hats do. The most recent example of this occurred after a cyber security firm announced on April 8, 2017 that hackers had been exploiting a flaw in Microsoft Word since January. The attack starts as a phishing scam involving Word documents sent via email. This type of social engineering is par for the course in today’s cyber-threat landscape. But in this case, all the user needs to do is click on the .rtf attachment. This tactic departs from macro-malware methods, which require the user to enable macros for the malware to unpack its payload.
With this new exploit (CVE-2017-0199), opening the document automatically triggers a remote code execution vulnerability in the Windows Object Linking and Embedding (OLE) API. This allows the hacker to connect to Word via a remote server, and then download a .hta file. Just like that, the hacker has access to the infected endpoint. According to DarkReading, cyber criminals used the exploit to distribute the notorious Dridex banking Trojan.
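To illustrate the defensive side, here is a minimal sketch of an attachment triage check. The marker strings (`objautlink`, `objupdate`, `OLE2Link`) are assumptions drawn from publicly reported exploit samples, not an authoritative signature set:

```python
# Heuristic check for RTF attachments that may abuse auto-updating
# OLE objects, as in the exploit described above. Indicator strings
# are illustrative assumptions, not a vetted signature database.
SUSPICIOUS_MARKERS = [b"objautlink", b"objupdate", b"ole2link"]

def looks_like_ole_autolink_rtf(data: bytes) -> bool:
    """Flag content that declares the RTF format and contains
    markers associated with auto-updating OLE objects."""
    if not data.lstrip().startswith(b"{\\rtf"):
        return False  # not an RTF file at all
    lowered = data.lower()
    return any(marker in lowered for marker in SUSPICIOUS_MARKERS)

# Example: a benign RTF vs. one carrying an auto-updating OLE link.
benign = b"{\\rtf1 Hello, world}"
suspect = b"{\\rtf1 {\\object\\objautlink\\objupdate ...OLE2Link...}}"
print(looks_like_ole_autolink_rtf(benign))   # False
print(looks_like_ole_autolink_rtf(suspect))  # True
```

A real mail gateway would combine such heuristics with sandbox detonation, since string matching alone is easy to evade.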
This is hardly the first time hackers have leveraged vulnerabilities before they made it onto the good guys’ radar. One Internet Explorer bug from 2013 was only discovered after researchers found evidence that cyber criminals had already used it on multiple occasions to break into Windows PCs. According to security researcher Brian Krebs, the actual fix was released as an emergency patch two weeks after the first signs of foul play were detected. During those two weeks, it was difficult to say how many more businesses and users were affected by the vulnerability, or to put a value on the amount of productivity that was lost in remediation efforts.
To be sure, finding the threat first is the best way to keep the wait for an emergency patch to two weeks or less (in the case of CVE-2017-0199, it took three months). Even then, there are no guarantees. There are plenty of examples of zero days that hackers used to their advantage mere hours after they were announced, whether because the patch didn’t come soon enough or because not everyone updated quickly enough to prevent an intrusion.
One of the most notable examples of this occurred in April 2014 with the Heartbleed bug, which was born of a programming mistake. This was a serious flaw in the OpenSSL cryptographic software library that allowed anyone on the web to read the memory of systems protected by the vulnerable versions (1.0.1 and 1.0.2-beta). But it didn’t stop at memory contents; the primary and secondary SSL keys themselves could also be stolen.
“This data could then, in theory, be used as skeleton keys to bypass secure servers without leaving a trace that a site had been hacked,” ZDNet contributor Steven J. Vaughan-Nichols wrote.
The bug was initially announced to the public on April 7. Within 24 hours, hackers used it to breach what the New York Times identified only as “a major corporation.”
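The underlying mistake is easy to illustrate. Heartbleed boiled down to trusting a length field supplied by the attacker. A toy Python simulation of that pattern (not OpenSSL’s actual code, and with made-up memory contents) looks like this:

```python
# Simulated process memory: the heartbeat payload ("PING") sits next
# to unrelated secrets, as it could in a real server process.
MEMORY = bytearray(b"PING" + b"\x00" * 4 + b"secret_key=hunter2")

def heartbeat_vulnerable(claimed_len: int) -> bytes:
    # BUG: trusts the attacker-supplied length instead of the actual
    # payload size (4 bytes), so it can echo back adjacent memory.
    return bytes(MEMORY[:claimed_len])

def heartbeat_fixed(claimed_len: int) -> bytes:
    actual_len = 4  # length of the payload that was really sent
    if claimed_len > actual_len:
        raise ValueError("claimed length exceeds payload")
    return bytes(MEMORY[:claimed_len])

# An honest request echoes the payload; a dishonest one leaks secrets.
print(heartbeat_vulnerable(4))   # b'PING'
print(heartbeat_vulnerable(26))  # leaks b'...secret_key=hunter2'
```

The fix, as in the real patch, is simply to refuse to read past the bytes the client actually sent.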
Zero Day Malware Is Also on the Rise
Code vulnerabilities aren’t the only attack path that organizations struggle to adequately defend. There’s also the issue of brand new or mutated malware that does not have a known signature. The problem with these strains, which can exist in the wild for months before finally being discovered by researchers, is that traditional anti-virus software cannot detect them by signature. This increases the likelihood that the malware will evade firewalls and web filters.
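A toy example shows why signature matching fails against brand new or mutated strains: changing a single byte alters the file’s hash while leaving its behavior intact. The payload bytes here are, of course, placeholders:

```python
import hashlib

# Toy signature database: traditional AV flags known-bad file hashes.
KNOWN_BAD_HASHES = {
    hashlib.sha256(b"EVIL_PAYLOAD_v1").hexdigest(),
}

def signature_scan(sample: bytes) -> bool:
    """Return True if the sample matches a known signature."""
    return hashlib.sha256(sample).hexdigest() in KNOWN_BAD_HASHES

original = b"EVIL_PAYLOAD_v1"
mutated = b"EVIL_PAYLOAD_v1 "  # one appended byte: new hash, same behavior

print(signature_scan(original))  # True  (caught)
print(signature_scan(mutated))   # False (evades the signature)
```

This is why behavior-based detection and application control, discussed below, matter so much for signatureless threats.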
According to a recent study cited by DarkReading, a significant share of the malware detected in the fourth quarter of 2016 either leveraged zero day vulnerabilities or was brand new. On top of that, the report found that many of the pre-existing attacks had been repackaged and distributed in new ways.
A separate report published by McAfee corroborated these findings, noting that new malware development is cyclical: Three quarters of decline for new malware were followed by three quarters of growth. The pendulum is swinging once again, and after a slow first three quarters in 2016, we can expect an uptick in malware innovation and zero day exploits for the remainder of 2017.
One bright spot is that ransomware saw a decline in Q4. Unfortunately, that was almost exclusively attributed to the fall of Locky and Cryptowall. Other strains, such as Cerber, are still at large. With malware innovation on the rise, no one knows what new strains of ransomware will emerge during the time left in 2017.
Part 2: Layered Cyber Security for Prevention
First and Foremost: Streamline Patch Management
At this point, the risk of being unable to swiftly patch a computing environment should be obvious. Within hours of the Heartbleed bug’s disclosure, hackers were already exploiting it. In some cases, patches were released well after hackers ferreted out the vulnerabilities. Either way, time is of the essence with zero day vulnerabilities: The sooner you patch, the better.
Further, it’s worth noting that zero day vulnerabilities never actually go away. According to the RAND Corporation, the average life expectancy of a zero day vulnerability is nearly seven years. A quarter of zero days become obsolete within a year of discovery. However, just as many survive for more than 9.5 years.
In other words, zero days can survive until they become obsolete by nature of evolving IT. Alternatively, they can be eradicated from your computing environment the moment a fix becomes available if you have a reliable methodology in place for streamlined patching.
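As a sketch of what streamlined patching can mean in practice, the following compares a hypothetical software inventory against the minimum patched versions from vendor advisories, flagging anything that lags behind. The package names and version numbers are placeholders; real patch management tools automate exactly this comparison at scale:

```python
# Hypothetical inventory of installed software versions.
installed = {"openssl": (1, 0, 1), "word": (15, 0, 4420)}

# Hypothetical minimum versions that contain the relevant fixes.
min_patched = {"openssl": (1, 0, 2), "word": (15, 0, 4500)}

def unpatched(installed, min_patched):
    """Return the names of packages still below their patched version."""
    return sorted(
        name for name, version in installed.items()
        # Tuples compare element-wise, so (1, 0, 1) < (1, 0, 2).
        if version < min_patched.get(name, version)
    )

print(unpatched(installed, min_patched))  # ['openssl', 'word']
```

The point is not the code but the discipline: an automated, repeatable check beats waiting for an administrator to remember which machines still need attention.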
Leverage Active Protection and Application Whitelisting
When it comes to preventing zero day threats and new, signatureless, or mutated malware from executing, the most effective method is application whitelisting. Consider, for instance, that web browsers are some of the most prolific sources of zero day exploits. A non-suspecting user may visit a rogue website, at which point malicious code on that site can exploit vulnerabilities in a web browser. From here, it’s much easier for malware to execute on a system, seemingly without the user having taken any noticeable action.
This is why active, layered protection with application control is so crucial. In addition to a firewall, which is useful for blocking known threats, a layered approach utilizes real-time scanning on the internet and on individual machines to identify suspicious activity. This builds another key layer of defense, making infiltration significantly more difficult to achieve.
Application control takes this a step further by creating a repository of allowed executables. Rather than blacklisting known malicious software (technically, your firewall should already do this), an application whitelist prevents any executable program (known or unknown) that does not have explicit administrative authorization from launching. All program executions on computers and servers are hereby monitored in real-time and, ideally, in conjunction with an active protection tool that can spot unusual or malicious activity, even in programs that are otherwise trustworthy.
As a result, malware that has previously undiscovered or undocumented signatures cannot run. Likewise, even if a zero day vulnerability or advanced persistent threat somehow enables the injection of malware into the system, it won’t actually be able to launch. The situation is effectively defused.
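The default-deny principle behind whitelisting can be sketched in a few lines. This is an illustration only; real products also match on publisher certificates, paths, and policies rather than raw hashes, as described below:

```python
import hashlib

# Minimal sketch of hash-based application whitelisting: only
# executables whose hashes appear in the approved set may launch.
# The binary contents here are placeholders.
APPROVED = {
    hashlib.sha256(b"approved-editor-binary").hexdigest(),
}

def may_launch(executable_bytes: bytes) -> bool:
    # Default deny: anything not explicitly approved is blocked,
    # including malware with a brand-new, undocumented signature.
    return hashlib.sha256(executable_bytes).hexdigest() in APPROVED

print(may_launch(b"approved-editor-binary"))     # True
print(may_launch(b"never-seen-before-malware"))  # False
```

Note the inversion relative to signature scanning: instead of enumerating the infinite set of bad programs, the whitelist enumerates the finite set of good ones.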
Lastly, IT administrators require a simplified process to make all of this happen and the ability to customize privileges and application access by user. Specifically, they need:
- Granular control: Refine and organize application control through publisher-based approvals, policy-based control, and protection at the local machine level.
- Flexibility: The freedom to create tailored policies for different users and groups for their unique computer usage requirements.
- Centralized management: Deployment and configuration must be possible via a single web-based or on-premises console.
A tangential benefit of these capabilities is that organizations can make sure computers, servers, and bandwidth are used only for their intended purposes and not as vessels for malicious activity. More importantly, sophisticated threats that would otherwise cut through perimeter defenses like a hot knife through butter can be stopped before they ever cause harm to your organization.
The Cyber War Is Only Just Beginning
Relative to the scope of human history, the use of cyber attacks (for good or for evil) is in its infancy. In the coming years, we expect hackers to become smarter and more conniving. Moreover, we foresee their targets becoming more dynamic as endpoints evolve. For that matter, we expect the stakes of an intrusion to spike as critical infrastructure is digitized.
However, we also believe that even in the face of sophisticated advanced persistent threats, never-before-seen malware, and of course, dangerous zero day threats, layered cyber security will always act as the basis for the safeguarding of computers, servers, and the sensitive data within. The sooner you lay this foundation, the sooner you can begin future-proofing your organization’s cyber security.
A self-proclaimed ‘tech geek’, Matt has worked in technology for a decade and divides his time between blogging at Faronics and working in IT. A huge New York Giants fan, when not watching football Matt gets his game on playing Call of Duty with his friends and other tech bloggers.