Google has warned that several new malware families now use large language models during execution to modify or generate code, marking a new phase in how state-linked and criminal actors are deploying artificial intelligence in live operations.
In a report released this week, the Google Threat Intelligence Group said it has tracked at least five distinct strains of AI-enabled malware, some of which have already been used in ongoing and active attacks.
The newly identified malware families “dynamically generate malicious scripts, obfuscate their own code to evade detection,” and use AI models “to create malicious functions on demand,” instead of having those hard-coded into malware packages, the threat intelligence group stated.
Each variant leverages an external model such as Gemini or Qwen2.5-Coder during runtime to generate or obfuscate code, a method GTIG dubbed “just-in-time code creation.”
The technique represents a shift from traditional malware design, in which malicious logic is typically hard-coded into the binary. By outsourcing parts of its functionality to an AI model, the malware can continuously rewrite itself, hardening its code against the systems designed to detect it.
Two of the malware families, PROMPTFLUX and PROMPTSTEAL, demonstrate how attackers are integrating AI models directly into their operations.
GTIG’s technical brief describes how PROMPTFLUX runs a “Thinking Robot” process that calls Gemini’s API every hour to rewrite its own VBScript code, while PROMPTSTEAL, linked to Russia’s APT28 group, uses the Qwen model hosted on Hugging Face to generate Windows commands on demand.
The group also identified activity from a North Korean group known as UNC1069 (Masan) that misused Gemini.
Google’s research unit describes the group as “a North Korean threat actor known to conduct cryptocurrency theft campaigns leveraging social engineering,” with notable use of “language related to computer maintenance and credential harvesting.”
Per Google, the group’s queries to Gemini included instructions for locating wallet application data, generating scripts to access encrypted storage, and composing multilingual phishing content aimed at crypto exchange employees.
These activities, the report added, appeared to be part of a broader attempt to build code capable of stealing digital assets.
Google said it had already disabled the accounts tied to these activities and introduced new safeguards to limit model abuse, including refined prompt filters and tighter monitoring of API access.
The findings could point to a new attack surface where malware queries LLMs at runtime to locate wallet storage, generate bespoke exfiltration scripts, and craft highly credible phishing lures.
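One defensive implication, not drawn from the GTIG report itself but a plausible corollary, is that runtime LLM use gives malware a network fingerprint: periodic outbound requests to model-hosting endpoints from processes that have no reason to make them (PROMPTFLUX, for example, is described as calling Gemini's API every hour). The sketch below illustrates that idea; the endpoint list, thresholds, and function name are hypothetical, not part of any named product or the report.

```python
from collections import defaultdict

# Hypothetical watchlist of model-hosting endpoints; a real deployment
# would maintain a curated, regularly updated list.
LLM_API_HOSTS = {
    "generativelanguage.googleapis.com",  # Gemini API
    "api-inference.huggingface.co",       # Hugging Face hosted models
}

def flag_llm_beaconing(events, min_calls=3, max_interval_s=2 * 3600):
    """Flag processes that repeatedly contact LLM API hosts.

    events: iterable of (timestamp_s, process_name, dest_host) tuples.
    Returns the set of process names that made at least `min_calls`
    consecutive requests to a watched host, each within `max_interval_s`
    of the previous one (e.g. hourly beaconing).
    """
    calls = defaultdict(list)
    for ts, proc, host in events:
        if host in LLM_API_HOSTS:
            calls[proc].append(ts)

    flagged = set()
    for proc, times in calls.items():
        times.sort()
        run = 1  # length of the current chain of closely spaced calls
        for prev, cur in zip(times, times[1:]):
            run = run + 1 if cur - prev <= max_interval_s else 1
            if run >= min_calls:
                flagged.add(proc)
                break
    return flagged
```

A heuristic like this would only be one signal among many, since legitimate software increasingly calls the same APIs; correlating it with process provenance would be necessary in practice.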
Decrypt has asked Google how this new attack model could change approaches to threat modeling and attribution, but has yet to receive a response.
The FSNN News Room is the voice of our in-house journalists, editors, and researchers. We deliver timely, unbiased reporting at the crossroads of finance, cryptocurrency, and global politics, providing clear, fact-driven analysis free from agendas.