In the ever-evolving landscape of cybersecurity and information warfare, few threats are as insidious as state-sponsored disinformation campaigns. Today, we’re sitting down with Dominic Jainy, an IT professional with deep expertise in artificial intelligence, machine learning, and blockchain. With a keen interest in how these technologies are wielded across industries, Dominic offers a unique perspective on the Russian fake-news network CopyCop, a sprawling operation that has expanded its reach with over 200 new websites since March 2025. This interview delves into the network’s sophisticated use of AI, its targeted influence operations across democratic nations, and the technical and strategic challenges it faces in spreading propaganda.
How did CopyCop emerge as a key player in Russia’s disinformation efforts, and what makes it stand out in the broader landscape of information warfare?
CopyCop, also known as Storm-1516, is essentially a linchpin in Russia’s covert influence operations. It’s a network that’s been meticulously built to flood the information space with fabricated narratives through fake media outlets and other deceptive platforms. What sets it apart is its sheer scale and technical sophistication—over 300 websites in 2025 alone, targeting democracies like the US, Canada, and France. Unlike earlier, cruder efforts, CopyCop uses advanced AI to churn out content that looks legitimate at first glance, often impersonating real political or media entities to erode trust in institutions and manipulate public opinion, especially around issues like support for Ukraine.
What does the addition of over 200 new websites since March 2025 tell us about the scale and ambition of CopyCop’s operations?
This expansion is a clear signal of escalation. Adding over 200 sites in such a short time frame shows not just ambition but also a significant investment in resources and infrastructure. It’s about amplifying reach—more websites mean more touchpoints to influence diverse audiences across multiple regions. It also suggests a confidence in their ability to operate at this scale without immediate disruption, leveraging dormant sites that build credibility over time before being activated for specific campaigns. This is a long game, designed to poison the information environment on a global level.
Can you describe the types of fake media outlets CopyCop creates and how they manage to appear credible to unsuspecting readers?
CopyCop’s fake outlets are incredibly varied—they might mimic local news sites, political party pages, or even niche blogs. For instance, they’ve impersonated a French royalist party with a site that looks authentic at first glance. The trick lies in the details: polished designs, consistent content updates, and branding that aligns with real-world entities. They also set up fictional fact-checking organizations to “verify” their own lies, creating a false sense of authority. It’s psychological—people are more likely to trust something that looks familiar or claims to be unbiased, especially when it’s reinforced across multiple platforms.
How does CopyCop tailor its disinformation to specific regions, particularly with new languages like Turkish, Ukrainian, and Swahili?
They’re highly strategic about localization. CopyCop doesn’t just translate content; it adapts narratives to resonate with local cultural and political contexts. For example, they’ve rolled out subdomains like africa.truefact.news for Swahili-speaking audiences, focusing on issues that matter locally. In Ukraine, the content often pushes anti-Western sentiment, while in Turkey it might exploit regional tensions. Publishing in these languages broadens their audience and makes the content feel more authentic, so it slips under the radar of readers who might not expect disinformation in their native tongue.
What are the core narratives CopyCop pushes, especially regarding Russia, Ukraine, and the West?
At its heart, CopyCop’s messaging is about reshaping perceptions to favor Russian interests. They consistently paint Russia as a misunderstood or victimized power, while portraying Ukraine as corrupt or unstable, often fabricating stories to undermine international support. The West, particularly the US and its allies, is framed as aggressive or hypocritical, sowing division within democracies. These narratives aren’t random—they’re crafted to exploit existing doubts or tensions, amplifying polarization and distrust in targeted countries.
Why do you think CopyCop has expanded its focus to new countries like Canada, Armenia, and Moldova?
This expansion reflects a deliberate pivot to exploit new vulnerabilities. Canada, as a close US ally, is a logical target for disrupting North American unity on issues like Ukraine. Armenia and Moldova, on the other hand, are geopolitically sensitive due to their proximity to Russian spheres of influence and internal political challenges. By targeting these regions, CopyCop can test narratives in diverse contexts, stir local unrest, and weaken international coalitions that oppose Russian policies. It’s about finding cracks and prying them open.
How significant is CopyCop’s shift to self-hosted AI models, and what advantages does this give them over using commercial Western AI services?
It’s a game-changer. By moving to self-hosted models based on Meta’s Llama 3 architecture, like dolphin-2.9 or Lexi-Uncensored, CopyCop cuts ties with Western AI providers that might impose restrictions or track usage. This gives them total control over content generation, allowing for uncensored output aligned with state propaganda. It’s also an operational security move—self-hosting reduces the risk of being shut down by external providers and lets them fine-tune models on Kremlin-aligned sources, ensuring the messaging stays on point.
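To see why that control matters in practice, consider a minimal, hypothetical sketch of self-hosted inference: a script posting to an OpenAI-compatible endpoint running on the operator's own machine. The endpoint URL, model name, and prompt below are illustrative assumptions, not details recovered from CopyCop's systems; the structural point is that the request never leaves local infrastructure, so no commercial provider can filter, log, or revoke it.

```python
# Hypothetical sketch of self-hosted inference. The endpoint, model name, and
# prompt are illustrative assumptions, not artifacts from CopyCop's systems.
import requests

ENDPOINT = "http://localhost:8000/v1/chat/completions"  # local server, no third-party provider

payload = {
    "model": "local-llama3-variant",  # placeholder name for a locally hosted model
    "messages": [
        {"role": "system", "content": "Summarize the following text in a neutral tone."},
        {"role": "user", "content": "Source text to be summarized goes here."},
    ],
    "max_tokens": 256,
    "temperature": 0.7,
}

# Because the server runs on hardware the operator controls, there is no external
# usage log, content-policy layer, or account that could be suspended.
response = requests.post(ENDPOINT, json=payload, timeout=60)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```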
What do technical artifacts, like knowledge cutoffs or formatting errors, reveal about the limitations of CopyCop’s AI tools?
These artifacts are like fingerprints—they expose the automated nature of the content. Knowledge cutoffs, such as references to events only up to January 2023, show the model’s training data limitations. Formatting errors, like inconsistent JSON outputs or visible disclaimers about “objective tone,” betray the AI’s struggle to mimic human writing perfectly. These slip-ups highlight a key weakness: while the tech enables scale, it’s not foolproof. Uncensored models often degrade in performance, leading to mistakes that can tip off attentive readers or researchers.
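Those same tells give researchers something concrete to hunt for. The snippet below is a rough illustration, not a tool from the cited research, of how an analyst might flag candidate AI-generated articles by scanning for leftover disclaimers, knowledge-cutoff phrasing, and stray JSON fragments; the patterns are assumed examples, and a real pipeline would rely on a much larger, curated signature set.

```python
# Illustrative artifact scanner: flags text containing common LLM "tells".
# The patterns below are assumed examples for demonstration, not a signature
# set drawn from the CopyCop investigation.
import re

ARTIFACT_PATTERNS = [
    r"as an ai language model",                           # leftover model disclaimer
    r"maintain(?:ing)? an? (?:neutral|objective) tone",   # instruction leakage
    r"knowledge cutoff",                                  # explicit cutoff reference
    r"as of (?:january|early) 2023",                      # stale training-data framing
    r"\{\s*\"(?:title|summary|article)\"\s*:",            # raw JSON spilling into prose
]

def find_artifacts(text: str) -> list[str]:
    """Return the artifact patterns that match the given article text."""
    lowered = text.lower()
    return [p for p in ARTIFACT_PATTERNS if re.search(p, lowered)]

if __name__ == "__main__":
    sample = (
        '{"title": "Example", "summary": "..."} As an AI language model, '
        "I will maintain an objective tone in this article."
    )
    print(find_artifacts(sample))  # prints the patterns that fired on the sample
```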
What are some of the operational mistakes CopyCop has made that undermine its efforts to stay under the radar?
They’ve had some notable blunders. For instance, published articles sometimes include raw model instructions or metadata, like notes about summarizing text, which scream “automated content.” There are also instances where their infrastructure—like Python scripts for restarting AI services—was accidentally exposed in media interviews. These mistakes break the illusion of authenticity and give analysts concrete evidence to trace their operations. It’s a classic case of scaling too fast without ironing out the kinks.
What is your forecast for the future of disinformation campaigns like CopyCop, especially as AI technology continues to evolve?
I think we’re only seeing the tip of the iceberg. As AI tech advances, networks like CopyCop will get better at masking their footprints—think fewer formatting errors and more convincing, hyper-personalized content. We might see deeper integration with social media, where AI not only generates posts but also interacts as fake personas in real time. On the flip side, detection tools will also improve, creating a cat-and-mouse game. My biggest concern is the potential for these campaigns to destabilize smaller, less digitally literate regions, where defenses against disinformation are still catching up. It’s going to be a critical battleground in the coming years.