HP Threat Report: AI Is Becoming More Common

HP released its quarterly threat report, and it includes some increasingly scary elements. Unlike its peers, HP created a separate security unit, Wolf Security, that can focus more sharply on our increasing security exposures. While this unit is somewhat hobbled in that it is subordinate to HP’s PC and printing businesses, that position also lets it focus more deeply on the endpoints HP provides, resulting in a greater level of protection where most breaches continue to occur: at the user or the endpoint.

This is a defense-in-depth approach that is narrower in scope than what a typical security company would or could supply, but deeper, since it covers both the software and hardware elements of HP’s enterprise PCs and printers, arguably making those devices more secure than the competition’s.

Like many independent security firms, and unlike its OEM peers, HP also researches and issues security reports highlighting what it has found, and this quarter’s report is as scary as the last one was.

Users Remain the Biggest Exposure

One thing that isn’t changing quarter-over-quarter is that users remain the biggest exposure conduit, and current user training is proving inadequate. Prior attacks required only one click to cause an infection, but security tools have improved; now it takes three to four clicks on average to become infected. However, social engineering and AI are being used to trick users into making those clicks, and users too often turn off their security protection, unknowingly downloading and installing malware that compromises their credentials and their firms.

I remain convinced that we need far more regular internal security tests, with penalties for failing, so that users take this far more seriously, because even the most rigorous security protections are worthless if users disable them.

AI Use

AI has moved from occasional conceptual tests to far more common production use for malware delivery, though not yet for malware creation. These tools are becoming extremely good at tricking users into doing harmful things. For instance, GitHub repositories have been compromised with Lumma Stealer, which targets gamers looking for cheats to improve their scores. Game cheats are often flagged as malware and blocked, so installing them typically requires disabling your security software. Along with the cheat comes malware designed to give the attacker access to the system and its contents, including IDs and passwords. Another form of malware, XWorm, has become a very popular worm designed to steal cryptocurrency wallets.

Commercial Malware

Even though AI isn’t yet being used to create malware, this doesn’t mean that malware isn’t advancing. Commercial companies continue to build and sell malware offerings, but the recent trend is to do so in building-block form. In other words, attackers use multiple malware downloads over time to fully compromise a system, delivering the components in pieces so that none of them triggers a security warning. The complete tool then assembles itself after all of the components have been installed, often using PowerShell as the download and installation mechanism. The stages (attack, intermediate, and infection) are very hard to detect until the damage is already being done. Right now, you can mount a major malware campaign with virtually no programming skills.

The Danger of Pictures

One of the recent changes is that attackers are attaching malware to downloadable pictures, such as the images found on old, outdated websites. If you download the picture, you are on the path to infection. The website may not be at fault, since these pictures are uploaded to legitimate sites that lack the technology to scan them for embedded malware building blocks.
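To make the mechanism concrete: one common way a payload hides in an image is as extra bytes appended after the point where the image data legitimately ends, since viewers simply ignore them. As a rough defensive illustration (a heuristic sketch, not HP's method or a real scanner), the check below flags a JPEG that carries data after its end-of-image marker:

```python
def trailing_bytes_after_jpeg(data: bytes) -> bytes:
    """Return any bytes appearing after the JPEG end-of-image (EOI) marker.

    A JPEG ends with the two-byte marker 0xFF 0xD9; image viewers ignore
    anything after it, which makes the tail a convenient hiding place for
    a payload. This is only a heuristic: a payload that itself contains
    the marker bytes, or a non-JPEG file, will confuse it.
    """
    eoi = data.rfind(b"\xff\xd9")  # last occurrence of the EOI marker
    if eoi == -1:
        raise ValueError("no JPEG EOI marker found; not a complete JPEG")
    return data[eoi + 2:]  # empty for a clean file


# Hypothetical minimal example: a stub "JPEG" with and without a tail.
clean = b"\xff\xd8" + b"\x00" * 16 + b"\xff\xd9"
tampered = clean + b"hidden payload"

print(len(trailing_bytes_after_jpeg(clean)))     # clean file: no tail
print(trailing_bytes_after_jpeg(tampered))       # appended bytes recovered
```

Real scanners go much further (parsing the marker structure, checking metadata segments), but even this simple tail check illustrates why a site that only renders the image would never notice the extra content.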

With pictures and some other file types, you can measure a campaign’s success by its number of downloads or views; the average is 29,000. That’s a lot of potential victims per effort.

Wrapping Up

Companies like HP and units like Wolf Security are working hard to identify and mitigate the increasing number of AI-enabled malware attacks. However, no matter how effective the defense, it is pretty much worthless if the user bypasses it or turns off the security software, and this is happening far too often.

If we truly want to address this threat, users must become a more active part of the solution. If a malware attack succeeds because the user helped enable it, it could cost that user not only their job but, depending on the damage, their life savings. Yet current levels of user training remain inadequate.

If this isn’t fixed, this exposure will become one of the strongest arguments for replacing many users with AIs that follow proper security protocols and pose less of a threat to the organizations in which they work.