Dark Web Forums: Limited Interest in Large Language Models (LLMs) as Cybercrime Tools

The emergence of large language models (LLMs) has sparked interest and concern within the cybersecurity community. However, recent research conducted by cybersecurity firm Sophos suggests that threat actors on dark web forums show little interest in utilizing these advanced AI tools, and in some cases, express concerns about the potential risks they pose.

Research Findings: There is minimal discussion on AI in dark web forums

Sophos examined four prominent dark web forums known for hosting discussions related to cybercriminal activities. Surprisingly, the research revealed that discussions of AI, and of LLMs in particular, were scarce: just 100 AI-related posts were found across two of the forums.

Focus on compromised ChatGPT accounts and circumventing LLM protections

Among the limited LLM-related discussions identified, a significant portion revolved around compromised ChatGPT accounts being offered for sale. Additionally, there was an emphasis on finding ways to bypass the built-in protections of LLMs, commonly known as ‘jailbreaks.’ It appears that cybercriminals were more interested in taking advantage of existing LLM resources than exploring their potential for creating new threats.

Concerns about LLM-generated code and implications for cybercrime

Interestingly, many users on these dark web forums expressed specific concerns about code generated by LLMs. These concerns primarily revolved around operational security issues and the potential for detection by antivirus and endpoint detection and response (AV/EDR) systems. This suggests that cybercriminals are cautious about using LLMs for fear of their activities being exposed or compromised.

Sophos Study: LLMs and Fraud on a Massive Scale

Parallel to this research, Sophos conducted a separate study that demonstrated how LLMs could be used to conduct fraud on a massive scale, even with minimal technical skills. Utilizing LLM tools like GPT-4, Sophos researchers built a fully functioning e-commerce website complete with AI-generated images, audio, and product descriptions.

Creating hundreds of similar websites quickly with the click of a button

To illustrate the tremendous potential for mass production of fraudulent websites, Sophos X-Ops revealed that it was able to create hundreds of similar websites in a matter of seconds at the click of a single button. This automation highlights the efficiency and scalability that LLMs can bring to cybercriminal activities.

Purpose of the research: Preparing for AI-based threats before they become widespread

Sophos emphasized that the research was not conducted merely to provide insights into the current state of dark web forums but to proactively prepare for the potential threats that AI-based tools like LLMs might pose in the future. By understanding the current landscape and potential misuse of LLMs, cybersecurity professionals can develop countermeasures and preventive strategies to mitigate emerging risks effectively.

Potential for AI technology to be utilized for automated threats

The research findings indicate that while dark web forums currently show limited interest in LLMs, the potential for their application in automated threats cannot be overlooked. As the capabilities of LLMs continue to advance, cybercriminals may ultimately embrace these technologies to automate and amplify their malicious activities.

Integrating generative AI elements into classic scams

This study aligns with previous observations on the integration of generative AI elements in traditional cyber scams. For instance, scammers have already utilized AI-generated text or photographs to deceive and lure victims into various fraudulent schemes. As AI technology becomes more accessible and sophisticated, threat actors are likely to explore new avenues to exploit unsuspecting targets on a larger scale.

Despite the limited current interest among dark web forums in using LLMs, it is essential for the cybersecurity community to remain vigilant and proactive in addressing AI-based threats. The potential for these powerful AI tools to be harnessed for malicious purposes cannot be ignored. Close collaboration between researchers, industry experts, and law enforcement agencies will be crucial in mitigating the emerging risks and enhancing our collective resilience to future AI-driven cyber threats.
