As we dive into the evolving landscape of cyber threats, I’m thrilled to sit down with Dominic Jainy, a seasoned IT professional whose expertise in artificial intelligence, machine learning, and blockchain offers a unique perspective on the intersection of cutting-edge technology and cybersecurity. With AI becoming a powerful tool for both defenders and attackers, Dominic’s insights are invaluable in understanding the latest threats like PROMPTFLUX malware and the broader implications of AI-driven cyberattacks. In this conversation, we’ll explore how malware leverages AI for evasion, the capabilities and limitations of these threats, and the growing trend of state-sponsored and financially motivated actors using advanced tools to scale their operations.
Can you walk us through what PROMPTFLUX malware is and why it’s generating so much buzz in the cybersecurity community?
Absolutely, Tailor. PROMPTFLUX is a fascinating and somewhat alarming piece of malware written in Visual Basic Script (VBScript). What makes it stand out is its ability to interact with Google’s Gemini AI model through an API to rewrite its own code. This isn’t just a static piece of malicious software; it’s designed for what’s called ‘just-in-time self-modification.’ Essentially, it queries the AI to generate new, obfuscated versions of itself to evade detection by traditional antivirus systems that rely on static signatures. It’s like a chameleon constantly changing its colors to blend into the background, and that adaptability is why it’s drawn so much attention.
How does PROMPTFLUX specifically use AI to stay under the radar of security tools?
PROMPTFLUX has a component dubbed the ‘Thinking Robot,’ which periodically sends highly specific prompts to Gemini’s API. These prompts ask for VB Script code changes tailored for antivirus evasion, and they’re designed to be machine-parsable, meaning the AI returns only the raw code without extra text. One version of PROMPTFLUX reportedly rewrites its entire source code every hour, which is significant because it creates a moving target for defenders. By constantly morphing, it reduces the chance of being caught by systems looking for known patterns, making it a real challenge to track and block.
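To make the ‘moving target’ point concrete: a defender can’t rely on a static hash of a file that rewrites itself every hour, but the churn itself is observable. Here is a minimal defender-side sketch in Python (the function names are illustrative, not from any vendor tool) that fingerprints successive snapshots of a suspect script and flags the never-repeating pattern described above:

```python
import hashlib

def fingerprint(script_text: str) -> str:
    """Return a SHA-256 fingerprint of a script's contents."""
    return hashlib.sha256(script_text.encode("utf-8")).hexdigest()

def is_polymorphic(snapshots: list[str]) -> bool:
    """Flag a file whose successive snapshots never repeat a fingerprint --
    the hourly-rewrite, moving-target behavior described above.
    `snapshots` is a list of the file's contents captured over time."""
    hashes = [fingerprint(s) for s in snapshots]
    return len(hashes) > 1 and len(set(hashes)) == len(hashes)
```

The point of the sketch is the inversion: instead of matching a known signature, you treat constant mutation of the same on-disk artifact as the signal.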
What can you tell us about the current state of PROMPTFLUX in terms of its capabilities and where it might be headed?
Right now, PROMPTFLUX is still believed to be in a development or testing phase. It has some clever tricks, like saving its updated, obfuscated versions to the Windows Startup folder to ensure it persists on an infected system. It also tries to spread by copying itself to removable drives and mapped network shares. However, it lacks a clear method to initially compromise a device or network, which suggests it’s not fully operational yet. The fact that some of its self-modification functions are commented out in the code, alongside active logging of AI interactions, indicates the creators are still refining it. I think we’re looking at a proof-of-concept that could become much more dangerous once fully realized.
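The Startup-folder persistence Dominic mentions is easy to audit. As a rough illustration, this Python sketch lists script files sitting in the typical per-user Windows Startup path (the path and extension list are common defaults, not specific to PROMPTFLUX tooling); on a non-Windows machine the path won’t resolve and the function simply returns an empty list:

```python
import os
from pathlib import Path

# Typical per-user Startup folder on Windows; adjust for your environment.
STARTUP = Path(os.path.expandvars(
    r"%APPDATA%\Microsoft\Windows\Start Menu\Programs\Startup"))

# Script types commonly abused for persistence (illustrative list).
SCRIPT_EXTS = {".vbs", ".vbe", ".js", ".ps1", ".bat", ".cmd"}

def suspicious_startup_entries(folder: Path = STARTUP) -> list[Path]:
    """List script files in the Startup folder -- a common persistence
    spot, and the one PROMPTFLUX reportedly uses for its rewritten copies."""
    if not folder.is_dir():
        return []
    return [p for p in folder.iterdir() if p.suffix.lower() in SCRIPT_EXTS]
```

A real audit would also check Run registry keys and scheduled tasks, but unexpected script files in Startup are a cheap first signal.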
There’s been some skepticism about how effective PROMPTFLUX really is. What’s your take on the criticisms regarding its ability to evade detection?
I’ve seen the counterarguments, particularly from security researchers who argue that the hype around PROMPTFLUX might be overblown. They point out that the prompts it sends to Gemini assume the AI inherently knows how to bypass antivirus systems, which isn’t necessarily true. There’s also no randomness or entropy in the self-modifying code to guarantee each version is unique, and without guardrails, the rewritten code might not even function properly. I think there’s validity to the skepticism—right now, it’s more experimental than polished. But I also believe we shouldn’t underestimate it. The intent behind creating a metamorphic script is clear, and even if it’s not perfect yet, it’s a sign of where malware development is heading with AI integration.
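The skeptics’ ‘no guaranteed uniqueness’ objection is also empirically checkable: if the AI barely mutates the script, successive versions will diff as near-identical and signature or similarity matching may still catch them. A minimal sketch using Python’s standard-library `difflib` (purely illustrative, not a production detection method):

```python
import difflib

def rewrite_similarity(old_version: str, new_version: str) -> float:
    """Ratio in [0, 1] of how much of the 'rewritten' script is unchanged.
    A score near 1.0 means the mutation was superficial, so similarity-based
    detection may still work -- the skeptics' point above."""
    return difflib.SequenceMatcher(None, old_version, new_version).ratio()
```

Running this over captured generations of a self-rewriting sample would show whether each version is genuinely novel or just a light reshuffle.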
Beyond PROMPTFLUX, can you shed light on other AI-powered malware that’s been observed recently?
Certainly. There’s a growing list of malware leveraging AI in creative ways. Take FRUITSHELL, for instance—it’s a PowerShell-based reverse shell that uses hard-coded prompts to bypass detection by AI-driven security tools. Then there’s PROMPTLOCK, a cross-platform ransomware written in Go that dynamically generates malicious Lua scripts at runtime using an AI model. You’ve also got PROMPTSTEAL, used by certain state-sponsored actors, which targets data and generates commands via AI APIs, and QUIETVAULT, a JavaScript credential stealer focused on GitHub and npm tokens. These examples show how AI isn’t just a gimmick; it’s being embedded into malware for everything from evasion to dynamic execution, making traditional defenses struggle to keep up.
With state-sponsored actors also abusing AI tools for reconnaissance and phishing, how do you see this trend impacting the broader threat landscape?
It’s a game-changer, Tailor. We’re seeing actors from various nations using AI to streamline operations—everything from crafting convincing phishing lures to developing command-and-control frameworks. They’re even bypassing AI safety barriers by posing as students or participants in capture-the-flag exercises to get advice on exploitation techniques. This lowers the barrier to entry for sophisticated attacks, allowing threat actors to scale their operations with speed and precision. What’s more concerning is that AI is becoming the norm rather than the exception for these groups. As businesses integrate AI into daily operations, it opens up new attack vectors like prompt injection, which are low-cost but high-reward for attackers.
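Prompt injection is worth a concrete illustration. The core weakness is that applications often splice untrusted text directly into an LLM prompt, so the input can carry instructions of its own. This toy Python sketch shows the naive pattern and a deliberately simplistic keyword filter (everything here is illustrative; real mitigations involve input isolation, output validation, and least-privilege tool access, not keyword lists):

```python
def build_prompt(user_input: str) -> str:
    """Naive prompt assembly: untrusted text is concatenated straight into
    the instruction context, so adversarial input can try to override the
    application's goal."""
    return f"Summarize the following document:\n{user_input}"

def is_likely_injection(user_input: str) -> bool:
    """Toy filter flagging common override phrases. Trivially bypassable --
    included only to show why prompt injection is hard to stop cheaply."""
    markers = ("ignore previous instructions", "disregard the above")
    text = user_input.lower()
    return any(m in text for m in markers)
```

The asymmetry Dominic describes is visible even here: the attack is one line of text, while a robust defense requires rethinking how the whole application handles untrusted input.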
Looking ahead, what is your forecast for the role of AI in cyber threats over the next few years?
I think we’re on the cusp of a major shift. AI will likely become a core component of most advanced cyber threats, not just for high-end state-sponsored actors but also for financially motivated criminals. We’ll see more malware like PROMPTFLUX evolving into fully functional, self-adapting tools that can rewrite themselves in real-time to evade detection. At the same time, defenders will need to leverage AI just as aggressively to predict and counter these threats. The accessibility of powerful AI models means the attack surface will expand—think more prompt injection attacks or deepfake campaigns for social engineering. It’s going to be a cat-and-mouse game, but with much higher stakes as the technology becomes more integrated into our digital lives.
