Dominic Jainy is a seasoned IT professional whose expertise sits at the intersection of artificial intelligence, machine learning, and blockchain technology. With a career dedicated to dissecting how emerging technologies redefine industrial landscapes, he has become a leading voice on the evolving nature of digital warfare and cyber defense. In this conversation, we explore the blurring lines between state-sponsored operations and the criminal underground, specifically examining how intelligence agencies are co-opting illicit markets to achieve strategic geopolitical goals.
Groups often present themselves as hacktivists while acting as fronts for advanced state threats during high-profile wiper attacks. How do these entities maintain their public personas while executing destructive operations, and what technical indicators can help security teams peel back these layers of deception?
The illusion of hacktivism provides a convenient psychological and political buffer for state actors like Iran’s Ministry of Intelligence and Security (MOIS). Entities such as “Handala” rely on highly charged political rhetoric, pro-Palestine messaging for instance, to create a narrative that their destructive wiper attacks—like the one on the Fortune 500 company Stryker on March 11—are the work of passionate individuals rather than government bureaus. To maintain this persona, they leverage social media and public leak sites to mimic the chaotic energy of grassroots activists. However, security teams can peel back these layers by looking for “Void Manticore” signatures, which often involve sophisticated persistence mechanisms that go far beyond typical activist capabilities. We also see technical overlaps in the attack chains, where state-level reconnaissance tools are paired with “noisy” destructive wipers, a combination that suggests a level of coordination and resource backing typical of a national intelligence agency.
State actors are increasingly purchasing commercial infostealers or certificates from malware-as-a-service providers rather than developing proprietary tools. Why is this “buy over build” strategy becoming a preferred choice today, and how does this shift complicate the way defensive teams prioritize their resources?
The “buy over build” strategy is essentially an exercise in economic and operational efficiency, especially for groups like MuddyWater that may not be technically elite but are highly motivated. By spending as little as $500 on a specific loader or a commercial certificate, these actors can bypass a year’s worth of internal R&D costs and labor. This shift is incredibly dangerous because it democratizes high-impact malware, allowing state actors to deploy tools like the Rhadamanthys infostealer, which is frequently updated by its criminal developers. For defensive teams, this creates a “signal-to-noise” problem where they might see a common piece of malware and assume it’s a routine financial threat. It complicates resource prioritization because the presence of a “cheap” tool no longer guarantees a low-level threat; it could very well be the vanguard of a state-sponsored destructive campaign.
When a security operations center identifies activity linked to common cybercrime, there is a tendency to label it as a lower priority. What are the specific dangers of misclassifying a state-sponsored destructive attack as a routine criminal incident, and what protocols can prevent this oversight?
Misclassification can be catastrophic, because the end goal of a criminal is usually a payout, whereas the end goal of a state actor is often total destruction. If a SOC sees an infostealer and treats it as a standard credential theft incident, it might miss the window to prevent a wiper attack that could paralyze the entire enterprise. We saw this risk highlighted by threat intelligence experts who noted that Iranian actors are now deeply embedding themselves in these criminal ecosystems to hide their true intent. To prevent this, organizations must move away from “threat-level” labels based solely on the malware type and instead adopt protocols that trigger deeper investigations when certain high-value targets are hit. Those protocols should include behavioral analysis that looks for lateral movement patterns aligned with state-sponsored tactical objectives rather than the “smash and grab” style of typical criminals.
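To make that escalation logic concrete, here is a minimal sketch of a triage rule a SOC might layer on top of its alert pipeline. Everything here is illustrative: the `Alert` fields, technique names, and tier labels are hypothetical placeholders, not any real SIEM schema. The idea is simply that a commodity infostealer alert gets promoted to a critical investigation when it co-occurs with state-style follow-on behavior on a high-value asset.

```python
from dataclasses import dataclass

@dataclass
class Alert:
    """Simplified SOC alert record (field names are illustrative)."""
    host: str
    technique: str   # e.g. "infostealer", "lateral_movement", "backup_deletion"
    asset_tier: str  # "standard" or "high_value"

# Behaviors that suggest state-directed staging rather than "smash and grab" crime
STATE_INDICATORS = {"lateral_movement", "backup_deletion", "domain_recon"}

def triage_priority(alerts: list[Alert]) -> str:
    """Escalate commodity malware when it is paired with state-style
    follow-on activity on a high-value asset."""
    has_commodity = any(a.technique == "infostealer" for a in alerts)
    state_signals = {a.technique for a in alerts} & STATE_INDICATORS
    on_high_value = any(a.asset_tier == "high_value" for a in alerts)

    if has_commodity and state_signals and on_high_value:
        return "critical"  # possible nation-state actor behind commodity tooling
    if has_commodity:
        return "routine"   # plain credential-theft handling
    return "standard"
```

The point of the rule is that priority is driven by the *combination* of behaviors and the victim profile, not by the malware family name alone.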
The use of Initial Access Brokers allows intelligence agencies to bypass the long-term infiltration phase by simply purchasing credentials. How is this marketplace evolving to serve nation-state clients, and what proactive steps should organizations take to monitor if their access is being sold in underground channels?
The Initial Access Broker (IAB) marketplace is evolving into a professionalized supply chain where credentials for specific government entities or critical infrastructure are being auctioned off like high-end commodities. For state agencies, this is an “easy win” because it eliminates the months of phishing and reconnaissance normally required to breach a hardened target. We are seeing more instances where state actors scan Dark Web forums and Telegram channels for access that aligns specifically with their geopolitical targets in the US, Israel, or the Gulf. Organizations must proactively monitor these underground channels by utilizing threat intelligence services that specifically scrape IAB advertisements for mentions of their domain or IP ranges. Beyond monitoring, the most effective defense is a “Zero Trust” architecture and mandatory multi-factor authentication, which can render stolen credentials useless even if they are bought and sold multiple times on the black market.
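The monitoring step described above can be sketched as a simple matcher run over scraped broker advertisements. This is a toy illustration, not a production scraper: the watchlist domains and CIDR range below are placeholder values an organization would replace with its own assets, and the IP regex is deliberately loose.

```python
import ipaddress
import re

# Hypothetical watchlist; substitute your organization's real domains and ranges.
WATCHED_DOMAINS = {"example.com", "corp.example.net"}
WATCHED_NETWORKS = [ipaddress.ip_network("203.0.113.0/24")]

# Loose IPv4 pattern; invalid candidates are filtered out below.
IP_RE = re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b")

def mentions_our_access(post: str) -> bool:
    """Return True if a scraped IAB advertisement mentions one of our
    domains or an IP address inside our monitored network ranges."""
    text = post.lower()
    if any(domain in text for domain in WATCHED_DOMAINS):
        return True
    for candidate in IP_RE.findall(post):
        try:
            addr = ipaddress.ip_address(candidate)
        except ValueError:
            continue  # regex false positive such as "999.1.1.1"
        if any(addr in net for net in WATCHED_NETWORKS):
            return True
    return False
```

In practice this kind of matching is one small component of a threat intelligence service's pipeline; the defensive payoff comes from acting on a hit before the sold access is exploited.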
There are cases where state-affiliated hackers operate as affiliates for established ransomware-as-a-service groups to target critical infrastructure. What does this deep integration look like during an active breach, and how can investigators distinguish between purely financial motives and state-directed strategic objectives?
During an active breach, this integration looks like a professional ransomware operation—think of the October 2025 attack on an Israeli hospital, which was initially attributed to the Eastern European group Qilin. The attackers used standard RaaS tools and ransom notes, but the true intent was likely disruption rather than money, as indicated by the later attribution to Iranian state-affiliated actors. Distinguishing between motives requires investigators to look at the aftermath and the victim profile: if the ransom demand is secondary to the destruction of backups or the leaking of sensitive geopolitical data, it points toward state direction. Investigators should also look for anomalies in the negotiation phase, such as a lack of engagement from the “criminals” or demands that seem designed to be impossible to meet, suggesting the financial aspect is merely a smokescreen for state-sponsored sabotage.
What is your forecast for the future of state-sponsored cyberattacks utilizing criminal underground services?
I forecast that the boundary between state intelligence and organized cybercrime will become almost entirely invisible over the next few years. As geopolitical tensions rise and resources are strained by physical conflicts, state actors will increasingly outsource the “dirty work” of initial infection and infrastructure hosting to criminal entities to maintain plausible deniability. We will likely see more “hybrid” attacks where a legitimate criminal ransomware strain is used as a delivery vehicle for state-sponsored wipers, making attribution nearly impossible for all but the most well-resourced intelligence agencies. For the average organization, this means the era of dismissing “low-level” malware is over; every blip on the radar must be treated as a potential gateway for a nation-state actor, requiring a shift toward much more aggressive, intelligence-led defense strategies.
