AI Agents Emerge as a Top Cybercrime Target

With the explosion of personal AI agents, a new and deeply personal attack surface has emerged. To understand these evolving threats, we’re speaking with Dominic Jainy, an IT professional whose work at the intersection of AI, machine learning, and blockchain gives him a unique perspective on the digital battlefront. We’ll explore the shift from traditional password theft to the hijacking of an AI’s very “soul,” the subtle genius of supply chain attacks targeting AI skill platforms, and the alarming consequences of exposed AI instances that could give attackers a master key to corporate networks.

An infostealer was recently observed capturing AI agent files like openclaw.json for gateway tokens and soul.md for operational principles. How does this shift the threat landscape beyond just stealing passwords, and what specific new risks does this “AI identity theft” create for users?

It represents a terrifying evolution in digital crime. For years, we’ve been conditioned to protect our passwords and financial data, but this is different. Stealing a file like soul.md isn’t just about gaining access; it’s about capturing the very essence of a user’s digital assistant—its behavioral rules, its ethical framework. When an attacker gets their hands on the openclaw.json file, they don’t just get a password; they get a gateway authentication token. This allows them to remotely connect to your AI agent and, more frighteningly, masquerade as you in authenticated requests. Imagine your AI, which you’ve trained and trusted with sensitive tasks, now silently working for someone else. This “AI identity theft” means an attacker could manipulate your professional workflows, access private data, and operate with your implicit authority, making the damage far more insidious than a simple account breach.

Attackers captured these sensitive AI files using a broad file-grabbing routine, not a custom-built tool. What does this “incidental” success signal about current malware capabilities, and how do you expect threat actors to now adapt their methods specifically for targeting AI assistants?

This is the real canary in the coal mine. The fact that an off-the-shelf infostealer like Vidar accomplished this by accident is incredibly alarming. It means the existing, widespread malware toolkits are already capable of causing this damage without even trying. They were likely just searching for any file containing “secrets” and, as the researchers put it, “inadvertently struck gold.” This initial, accidental success is proof of concept for the entire black-hat community. Now that they know this data is valuable and accessible, the adaptation will be swift and deliberate. I fully expect to see specialized modules developed to specifically seek out, decrypt, and parse AI agent files from platforms like OpenClaw, much like they’ve built custom routines for stealing credentials from Chrome browsers or session data from Telegram. The game has changed, and attackers will now be hunting for AI souls with surgical precision.

Malicious skills on platforms like ClawHub are reportedly bypassing scans by hosting malware on external lookalike domains. Could you walk us through how this supply chain attack works, and what makes AI skill registries such an increasingly attractive target for threat actors?

It’s a classic bait-and-switch, brilliantly adapted for the AI ecosystem. An attacker creates a new “skill” for an AI agent and uploads it to a trusted marketplace like ClawHub. The skill itself, the part that gets scanned by security tools like VirusTotal, is clean—it contains no malicious code. It’s essentially a decoy. The real danger is hidden; the skill is programmed to connect to an external website that the attacker controls. This site is often a lookalike of a legitimate service, making it seem trustworthy. The malware is hosted there, completely bypassing the initial security check on ClawHub. AI skill registries are becoming such a prime target because they represent a massive concentration of trust and users. By compromising this single point in the supply chain, an attacker can distribute their malware to thousands of users who believe they are downloading a legitimate, vetted enhancement for their AI.

With reports of hundreds of thousands of exposed OpenClaw instances online, what are the most severe consequences of a remote code execution vulnerability in this context? Can you provide an example of how an attacker could pivot from one compromised AI agent to an entire corporate network?

The consequences are catastrophic, and the figure of hundreds of thousands of exposed instances is just staggering. A remote code execution, or RCE, vulnerability means an attacker can run any code they want on the system where the AI agent is hosted. The AI agent often becomes a pivot point into a much more secure environment. For instance, imagine an employee running an OpenClaw agent on their work laptop. This agent has been given permissions to access company email, connect to internal APIs, and query cloud services. If an attacker exploits an RCE vulnerability in that single exposed agent, they don’t just control the AI; they control the laptop and everything it has access to. From there, they can move laterally across the corporate network, exfiltrate sensitive data, or deploy ransomware. The attacker doesn’t need to breach the firewall; they just need to find one exposed agent that has already been given the keys to the kingdom.

Issues like the inability to delete AI agent accounts on the Moltbook forum highlight a new class of data permanence problems. Beyond privacy concerns, what are the long-term security implications when a user cannot erase their AI’s operational history and associated data?

This is a ticking time bomb. The inability to delete an AI agent’s account and its history creates a permanent, unchangeable digital footprint. From a security standpoint, this is a nightmare. Over time, that AI agent’s operational history on a forum like Moltbook will contain a wealth of information—subtle clues about the user’s habits, their professional network, the projects they’re working on, and the systems they interact with. This data becomes a permanent, publicly accessible reconnaissance database for attackers. If a vulnerability is ever discovered in Moltbook or the agent itself, this historical data provides the perfect context for an attacker to craft a highly sophisticated and personalized attack. You’re essentially leaving a detailed blueprint of your digital life out in the open, forever, with no way to take it back.

What is your forecast for the security of personal AI agents?

I foresee a turbulent period of adjustment. The rapid, viral adoption of platforms like OpenClaw—which has over 200,000 stars on GitHub—has outpaced our security practices. We are going to see a surge in attacks specifically targeting these agents, moving beyond incidental discoveries to highly targeted campaigns. The industry will be forced to respond, leading to the development of new security standards, better scanning tools for AI skill marketplaces, and a greater emphasis on secure-by-default configurations. However, the fundamental challenge is that these agents are, by design, deeply integrated into our personal and professional lives. Securing them won’t be like securing a simple application; it will be like securing a digital extension of ourselves, and that’s a much harder, more personal battle to win.

Explore more

Strategies for Navigating the Shift to 6G Without Vendor Lock-In

The global telecommunications landscape is currently standing at a crossroads where the promise of near-instantaneous connectivity meets the sobering reality of complex architectural transitions. As enterprises begin to look beyond the current capabilities of 5G-Advanced, the move toward 6G is being framed not merely as an incremental boost in peak data rates but as a fundamental reimagining of what a

How Do You Choose the Best Wi-Fi Router in 2026?

Modern households and professional home offices now rely on wireless networking as the invisible backbone of daily existence, making the selection of a router one of the most consequential technology decisions a consumer can face. The current digital landscape is defined by an intricate web of high-bandwidth activities, ranging from immersive virtual reality meetings to the constant telemetry of dozens

Hotels Must Bolster Cybersecurity to Protect Guest Data

The digital transformation of the global hospitality industry has fundamentally altered the relationship between hotels and their guests, turning data protection into a cornerstone of operational integrity. As properties transition into digital-first enterprises, the safeguarding of guest information has evolved from a niche IT task into a vital pillar of brand reputation. This shift is driven by the reality that

How Do Instant Payments Reshape Global Business Standards?

The traditional three-day settlement cycle that once governed global commerce has effectively dissolved into a relic of financial history as real-time payment systems become the universal benchmark for corporate operations. In the current economic landscape of 2026, the speed of capital movement has finally synchronized with the speed of digital information, creating a paradigm where instantaneous transaction finality is no

Can China Dominate the Global 6G Technology Market?

The global telecommunications landscape is currently witnessing a seismic shift as China officially accelerates its pursuit of next-generation connectivity through the approval of expansive field trials and technical standardization protocols for 6G technology. This strategic move, recently sanctioned by the Ministry of Industry and Information Technology, specifically greenlights the extensive use of the 6 GHz frequency band for intensive regional