With ChatGPT, Malware Becomes More Adaptive and ‘Quieter’

In our last post, we discussed how bad actors can use ChatGPT to take phishing attacks to a new level. That’s not the only threat. The AI chatbot can also generate malware that is more effective at gaining access to the network and finding data that may be valuable to the hacker.

Malware that Can Rapidly Analyze Machine Code

Take Hydra malware, for example. These malicious apps take their name from the mythical multiheaded serpent that grew two new heads whenever one was cut off. Hydra malware operates the same way: it deploys many probes that attempt to infect everything they can reach. When a particular attack type fails, the malware reports back to its central controller to obtain new methods of breaching the network.

That methodology changes with ChatGPT, which can understand code — even machine code. If Hydra malware gets a foothold on the edge of the network, it can analyze the publicly facing code to identify vulnerabilities. It can then perform rapid analysis of source code inside the network without waiting for human input. The malware can adapt to the network faster than a human can read the code.

How to Respond

To reduce the risk that AI-enabled malware will penetrate the network, organizations must follow true zero-trust principles. In a true zero-trust network, applications cannot communicate directly, much less infect one another. A firewall or security appliance sitting between applications is not enough; in that architecture, the applications might as well be publicly facing. With true zero trust, every application sits behind its own firewall so that there is no direct route between any two applications.
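To make the default-deny idea concrete, here is a minimal sketch of an allow-list flow check. The application names and the `ALLOWED_FLOWS` table are illustrative only, not the format of any real segmentation product:

```python
# Minimal sketch of a default-deny (zero-trust) segmentation policy.
# Only flows explicitly listed here are permitted; everything else is dropped.
ALLOWED_FLOWS = {
    ("web-frontend", "order-api"),   # illustrative application names
    ("order-api", "order-db"),
}

def is_flow_allowed(src_app: str, dst_app: str) -> bool:
    """Permit traffic only when an explicit allow rule exists (default deny)."""
    return (src_app, dst_app) in ALLOWED_FLOWS

# A flow with no rule is denied even between "internal" applications:
print(is_flow_allowed("order-api", "order-db"))      # True
print(is_flow_allowed("web-frontend", "order-db"))   # False
```

The key property is the default: the absence of a rule means deny, so malware that compromises one application gains no implicit path to its neighbors.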

Another critical component is zero-day detection. A network-layer appliance can only see what's going on in its narrow slice of the network; it cannot analyze the application itself the way an advanced web application firewall (WAF) can. That's why it's important to use runtime application self-protection (RASP) utilities that work at the application layer, within the operating system itself. Like a WAF, a RASP tool understands what the code is supposed to do and controls what users accessing that application are allowed to do.
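As a rough illustration of the RASP concept, the sketch below wraps an application function with a guard that rejects inputs matching simple injection patterns. The `rasp_guard` decorator and its pattern list are hypothetical and far cruder than a commercial RASP engine, which instruments the runtime itself:

```python
import functools
import re

# Toy signature list; a real RASP engine inspects runtime behavior, not regexes.
SUSPICIOUS = re.compile(r";|--|\bUNION\b|\bDROP\b", re.IGNORECASE)

def rasp_guard(func):
    """Block calls whose string arguments look like injection payloads."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        for value in list(args) + list(kwargs.values()):
            if isinstance(value, str) and SUSPICIOUS.search(value):
                raise PermissionError("RASP: blocked suspicious input")
        return func(*args, **kwargs)
    return wrapper

@rasp_guard
def lookup_user(username: str) -> str:
    # Deliberately naive query construction, to show the guard intervening.
    return f"SELECT * FROM users WHERE name = '{username}'"
```

Here `lookup_user("alice")` succeeds, while `lookup_user("x' OR 1=1; --")` is blocked before the vulnerable code ever runs — the protection lives inside the application process, not in a network box.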

Application whitelisting and true binary validation are the final pieces of the puzzle. Traditional antivirus software relies on detecting a signature or behavior type to identify malware. With adaptive malware, even zero-day antivirus software cannot update fast enough. With application whitelisting, everything is blocked except the applications that are supposed to be running. Binary validation tools monitor the application's binaries to ensure that nothing gets modified. Working together, these tools provide effective protection against AI-driven malware.
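The hash-based side of binary validation can be sketched in a few lines. The `fingerprint` and `is_trusted` helpers and the baseline table are illustrative names; a production tool would also protect the baseline itself from tampering:

```python
import hashlib
import tempfile
from pathlib import Path

def fingerprint(path: Path) -> str:
    """SHA-256 digest of a file's contents."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def is_trusted(path: Path, trusted_hashes: dict) -> bool:
    """Allow a binary only if its current hash matches the recorded baseline."""
    expected = trusted_hashes.get(str(path))
    return expected is not None and expected == fingerprint(path)

# Demo with a stand-in "binary": record a baseline, then tamper with the file.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"original binary contents")
binary = Path(f.name)
baseline = {str(binary): fingerprint(binary)}

print(is_trusted(binary, baseline))   # True: matches the baseline
binary.write_bytes(b"tampered")
print(is_trusted(binary, baseline))   # False: the hash changed
```

Because the check is default-deny — an unknown binary has no baseline entry and is rejected — it does not depend on recognizing the malware, only on recognizing what is legitimate.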

Malware that Automatically Finds Valuable Data

With ChatGPT, malware can make decisions autonomously. It can work out where passwords and other high-value data are stored locally, and even how to connect to the data sources and extract the data automatically.

Traditionally, after malware gets a foothold on the network, a human has to hunt for resources and then hack into them to see if they hold anything worth stealing. With ChatGPT and smart automation, this can be done on the fly. The malware can read a database to determine whether it contains relevant information and mimic the application's normal access patterns so that the database administrator never sees anything unusual. Because the malware does not have to exfiltrate as much data to get what the hacker wants, it becomes quieter — and quiet malware is less likely to trigger internal security controls.

How to Respond

To combat this threat, organizations can use the same electronic discovery tools hackers use to identify valuable data and ensure that it's properly locked down. AI engines can identify abnormal usage patterns and raise alerts so that security teams can investigate and determine whether a breach is in progress.
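A toy version of such abnormal-usage detection is a z-score check on a usage metric, such as database queries per minute. The sample numbers and the three-sigma threshold below are illustrative only; production anomaly engines model many signals at once:

```python
import statistics

def is_anomalous(history, current, threshold=3.0):
    """Flag the current reading if it sits more than `threshold` standard
    deviations above the historical mean (a simple z-score heuristic)."""
    mean = statistics.mean(history)
    stdev = statistics.pstdev(history)
    if stdev == 0:
        return current != mean
    return (current - mean) / stdev > threshold

# Illustrative baseline: queries per minute under normal load.
normal_load = [110, 95, 102, 98, 105, 100]

print(is_anomalous(normal_load, 104))   # False: within normal variation
print(is_anomalous(normal_load, 900))   # True: a sudden bulk-read spike
```

Even "quiet" malware has to touch the data eventually; a baseline of normal access rates gives defenders a tripwire that does not depend on recognizing the malware itself.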

How DeSeMa Can Help

The DeSeMa team includes seasoned security experts who understand how to combat AI-enabled malware. Let us help you tune your environment and implement tools and techniques that address this rapidly developing threat.

In the age of ChatGPT, traditional methods used to identify and block malware are no longer effective. Don't wait for adaptive, AI-enabled malware to gain a foothold in your IT environment.

Get Started Today!