Today we’re joined by Dominic Jainy, an IT professional whose expertise lies at the intersection of artificial intelligence, machine learning, and blockchain. His work provides a crucial lens through which to understand the rapidly evolving landscape of cybercrime. This interview will explore the alarming democratization of cybercrime through low-cost subscription services, the escalating threat posed by AI-powered deception tactics like deepfakes, the intricate international efforts required to dismantle these criminal networks, and the practical steps organizations can take to defend themselves and aid law enforcement.
Cybercriminal platforms like RedVDS offered sophisticated tools for as little as $24 a month, yet enabled fraud causing over $40 million in losses in the U.S. How does such a low-cost service facilitate such massive financial damage, and what does this reveal about cybercrime’s accessibility?
It’s startling when you juxtapose those two numbers: a $24 monthly fee and over $40 million in damages. What this reveals is a fundamental shift in the cybercrime ecosystem. We’re no longer talking about needing to be a master coder to inflict serious harm. Platforms like RedVDS effectively productized cybercrime, offering a complete toolkit with disposable virtual computers and unlicensed software. This “crime-as-a-service” model lowers the barrier to entry so dramatically that anyone with a credit card and malicious intent can launch sophisticated, anonymous attacks. The platform handles the technical overhead, allowing criminals to focus entirely on execution, which is how you see a single, cheap service become the engine for fraud on a global scale.
Attackers are now pairing generative AI with these services to create realistic phishing emails and even deepfake videos. Could you walk us through how criminals use these AI tools to tailor their attacks, and what new challenges this deception creates for an average employee trying to spot a scam?
The integration of generative AI is a genuine game-changer, and it’s terrifyingly effective. An attacker can use AI to quickly scan a company’s public information to identify a high-value target, then generate a phishing email that perfectly mimics the writing style of a trusted colleague or a legitimate business partner. But it goes further. We’ve seen hundreds of instances where they use AI to create deepfake videos or clone voices. Imagine receiving a video call from someone who looks and sounds exactly like your CEO, urgently instructing you to make a wire transfer. The classic advice of looking for typos or awkward phrasing is becoming obsolete. For an employee, this creates immense psychological pressure and erodes the fundamental trust we place in what we see and hear, making it incredibly difficult to distinguish a legitimate request from a highly sophisticated scam.
The disruption of RedVDS involved a complex, coordinated action between a tech company, U.S. and UK legal partners, and international law enforcement. What unique legal and technical hurdles do these cross-border takedowns present, and can you share an example of how victim cooperation becomes critical to success?
These takedowns are a logistical and legal labyrinth. A criminal group might be based in one country, using servers from a platform like RedVDS hosted in another, to attack a victim in a third. Each step requires navigating different legal frameworks, privacy laws, and evidence standards, which is why collaboration between a tech giant like Microsoft, legal teams in both the U.S. and UK, and an organization like Europol is essential. Victim cooperation is the linchpin that holds it all together. When companies like H2-Pharma, which lost over $7.3 million, or the Gatehouse Dock Condominium Association come forward, they provide the crucial evidence—the digital breadcrumbs—that connects the crime to the infrastructure. Their willingness to report transforms an abstract attack into a concrete legal case, giving authorities the leverage needed to act across borders and seize the criminals’ tools.
Business email compromise scams often involve criminals observing communications before impersonating a partner to request wire transfers. What are the subtle, tell-tale signs that a seemingly legitimate email is an attack, and what step-by-step verification process should an employee follow before acting on such a request?
The most dangerous element of these attacks is the patience of the criminals; they lurk inside a compromised account, learning the rhythm and language of the business. The most common tell-tale sign is a sudden change in a routine process, often cloaked in a sense of extreme urgency. For example, a trusted partner suddenly requests payment to a new bank account or a senior executive demands an immediate, unusual wire transfer. The first step for any employee facing this is to slow down and question that urgency. The critical verification step must happen outside of the email chain. Don’t reply to the email to confirm. Instead, pick up the phone and call the person on a previously known, trusted number to verbally verify the request. This out-of-band verification is the single most effective defense against falling for an impersonation.
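The red flags described above—a changed payment destination, urgency language, a reply address that quietly diverges from the visible sender—can be screened for automatically before a message ever reaches a human decision. The sketch below is a minimal illustration using only Python’s standard library; the keyword list and the specific checks are assumptions for demonstration, not a vetted detection product, and none of this replaces the out-of-band phone call.

```python
import email
from email import policy

# Illustrative urgency phrases; a real deployment would tune this list.
URGENCY_KEYWORDS = {"urgent", "immediately", "wire transfer", "confidential"}

def bec_red_flags(raw_message: bytes) -> list[str]:
    """Return human-readable warnings for a raw RFC 5322 message."""
    msg = email.message_from_bytes(raw_message, policy=policy.default)
    flags = []

    # 1. Reply-To pointing somewhere other than the visible sender.
    sender = (msg.get("From") or "").lower()
    reply_to = (msg.get("Reply-To") or "").lower()
    if reply_to and reply_to not in sender:
        flags.append(f"Reply-To ({reply_to}) differs from From ({sender})")

    # 2. Urgency language in the subject or body.
    body = msg.get_body(preferencelist=("plain",))
    text = ((msg.get("Subject") or "") + " " +
            (body.get_content() if body else "")).lower()
    hits = sorted(kw for kw in URGENCY_KEYWORDS if kw in text)
    if hits:
        flags.append(f"Urgency language: {', '.join(hits)}")

    # 3. Mention of new banking details -- always verify out of band,
    #    by phone, on a previously known number.
    if "new account" in text or "updated bank" in text:
        flags.append("Payment-detail change mentioned; verify by phone")

    return flags
```

A tool like this only surfaces candidates for human review; the verification step itself still has to happen outside the email channel.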
Beyond internal defenses like multi-factor authentication, victims are urged to report attacks to help authorities. From a practical standpoint, what does this reporting process involve for a company that has been victimized, and how does that intelligence directly contribute to dismantling a criminal network like RedVDS?
For a victimized company, reporting involves compiling and sharing the raw evidence of the attack. This includes the malicious emails with their full headers, transaction records of fraudulent payments, and any relevant server logs. It can feel like a painful process when you’re already dealing with a loss, but that data is invaluable. When authorities receive reports from many of the nearly 190,000 organizations impacted by RedVDS, they begin to see patterns. They can map the IP addresses, the server domains, and the financial trails, building a comprehensive intelligence picture of the entire criminal network. Each report acts as a puzzle piece, and when enough pieces are in place, it gives them the evidence needed to justify a coordinated, international takedown. Every report truly helps dismantle these networks at scale.
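Much of the evidence described above—the full header chain and the IP addresses it exposes—can be pulled from a saved message with Python’s standard library alone. The sketch below is a minimal illustration of that compilation step; the field names in the returned dictionary are my own choices, not a format any agency prescribes, and real reports would also include payment records and server logs.

```python
import email
import re
from email import policy

# Matches IPv4 addresses embedded in Received: headers.
IP_RE = re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b")

def extract_routing_evidence(raw_message: bytes) -> dict:
    """Collect header fields investigators typically ask for:
    the full Received chain and the IP addresses it contains."""
    msg = email.message_from_bytes(raw_message, policy=policy.default)
    received = [str(h) for h in (msg.get_all("Received") or [])]
    ips = sorted({ip for line in received for ip in IP_RE.findall(line)})
    return {
        "message_id": msg.get("Message-ID"),
        "from": msg.get("From"),
        "received_chain": received,  # newest hop first, oldest last
        "source_ips": ips,           # candidate infrastructure addresses
    }
```

Aggregated across many victims, exactly these fields—source IPs, relay domains, sender identities—are what let investigators correlate separate incidents back to shared infrastructure.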
What is your forecast for the evolution of cybercrime-as-a-service models, especially concerning their integration with advanced AI?
I believe we’re on the cusp of seeing these models evolve into fully autonomous attack platforms. The next generation of “crime-as-a-service” won’t just provide tools; it will offer AI agents that can independently identify targets, conduct hyper-personalized social engineering campaigns using deepfakes in real-time, and even pivot their attack methods based on the victim’s defenses, all with minimal human oversight. The speed, scale, and sophistication will be unlike anything we’ve seen. Consequently, our defensive posture will have to shift dramatically from relying on human vigilance to deploying our own AI-driven security systems that can detect and neutralize these autonomous threats in milliseconds, long before a human could even notice something is wrong.
