Researchers use ChatGPT to make new malware

Bloomberg News

Artificial intelligence is revolutionizing the digital world, and that includes malware.

Researchers with cybersecurity company CyberArk said they successfully used the AI-driven text generator ChatGPT, which is capable of writing computer code, to create polymorphic malware, an advanced type of malicious program that alters its own code to evade detection and resist removal.

Like many programs of its kind, ChatGPT has content filters that attempt to recognize when someone is trying to use it for less-than-ethical purposes like writing malware. Accordingly, the researchers found that asking it outright to write malware did not work. In response to a request to write code "injeecting [sic] a shellcode into 'explorer.exe' in Python," ChatGPT replied that doing so would be both dangerous and unethical.

So far so good, right?

The researchers tried again, this time being much more specific and authoritative in tone while avoiding words and phrases that might trigger the content filters. The program was more amenable to this request: ChatGPT generated functional code capable of inserting a DLL file into explorer.exe (something you generally don't want done without your permission).

The researchers added that this process can be repeated many times with different constraints, letting them easily develop multiple "mutations" of the same malware until they were satisfied with the results.

"One of the powerful capabilities of ChatGPT is the ability to easily create and continually mutate injectors. By continuously querying the chatbot and receiving a unique piece of code each time, it is possible to create a polymorphic program that is highly evasive and difficult to detect," said the report.
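To see why receiving a unique piece of code on each query matters for detection, consider a minimal, harmless sketch (the snippets below are ordinary arithmetic functions, not malware, and the variable names are illustrative): signature-based antivirus tools fingerprint the bytes of a known sample, so two programs with identical behavior but different source text produce entirely different fingerprints.

```python
import hashlib

# Two functionally identical snippets, as a chatbot might emit on
# successive queries: same behavior, different source text.
variant_a = "def add(a, b):\n    return a + b\n"
variant_b = "def add(x, y):\n    result = x + y\n    return result\n"

# Signature-based detection keys on a fingerprint of the raw bytes.
sig_a = hashlib.sha256(variant_a.encode()).hexdigest()
sig_b = hashlib.sha256(variant_b.encode()).hexdigest()

# Identical behavior, distinct signatures: a blocklist entry matching
# one variant says nothing about the other.
print(sig_a == sig_b)  # False
```

This is the core of what the report calls "highly evasive": each freshly generated variant starts with a clean slate against any fingerprint database built from earlier ones.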

To demonstrate, the researchers further developed their malicious code to include a component that periodically queries ChatGPT for new modules that perform malicious actions. Essentially, this means the original coders don't even need to update their malware themselves in response to new security measures; the program is capable of modifying itself on the fly to meet novel challenges.


This is not a dire warning about a theoretical future but a description of what can be done right now. The study's author, an AI chatbot, warned that humans dismiss the points raised in the report at their peril.

"The use of ChatGPT's API within malware can present significant challenges for security professionals. It's important to remember, this is not just a hypothetical scenario but a very real concern. This is a field that is constantly evolving, and as such, it's essential to stay informed and vigilant," said the AI that wrote the report.

This article originally appeared in Accounting Today