Dark Web Forums: Limited Interest in Large Language Models (LLMs) as Cybercrime Tools

The emergence of large language models (LLMs) has sparked interest and concern within the cybersecurity community. However, recent research conducted by cybersecurity firm Sophos suggests that threat actors on dark web forums show little interest in utilizing these advanced AI tools, and in some cases, express concerns about the potential risks they pose.

Research findings: minimal discussion of AI on dark web forums

Sophos examined four prominent dark web forums known for hosting discussions related to cybercriminal activity. Surprisingly, the research revealed that discussions of AI, and of LLMs in particular, were scarce: the researchers found just 100 AI-related posts across two of the forums.

Focus on compromised ChatGPT accounts and circumventing LLM protections

Among the limited LLM-related discussions identified, a significant portion revolved around compromised ChatGPT accounts being offered for sale. Additionally, there was an emphasis on finding ways to bypass the built-in protections of LLMs, commonly known as ‘jailbreaks.’ It appears that cybercriminals were more interested in taking advantage of existing LLM resources than exploring their potential for creating new threats.

Concerns about LLM-generated code and implications for cybercrime

Interestingly, many users on these dark web forums expressed specific concerns about code generated by LLMs. These concerns primarily revolved around operational security and the risk of detection by antivirus and endpoint detection and response (AV/EDR) systems. This suggests cybercriminals remain wary of using LLMs for fear of having their activities exposed or compromised.

Sophos Study: LLMs and Fraud on a Massive Scale

Parallel to this research, Sophos conducted a separate study that demonstrated how LLMs could be used to conduct fraud on a massive scale, even with minimal technical skills. Utilizing LLM tools like GPT-4, Sophos researchers built a fully functioning e-commerce website complete with AI-generated images, audio, and product descriptions.

Creating hundreds of similar websites quickly with the click of a button

To illustrate the tremendous potential for mass production of fraudulent websites, Sophos X-Ops revealed that it was able to create hundreds of similar websites in a matter of seconds at the click of a single button. This automation highlights the efficiency and scalability that LLMs can bring to cybercriminal activities.

Purpose of the research: Preparing for AI-based threats before they become widespread

Sophos emphasized that the research was not conducted merely to provide insights into the current state of dark web forums but to proactively prepare for the potential threats that AI-based tools like LLMs might pose in the future. By understanding the current landscape and potential misuse of LLMs, cybersecurity professionals can develop countermeasures and preventive strategies to mitigate emerging risks effectively.

Potential for AI technology to be utilized for automated threats

The research findings indicate that while dark web forums currently show limited interest in LLMs, the potential for their application in automated threats cannot be overlooked. As the capabilities of LLMs continue to advance, cybercriminals may ultimately embrace these technologies to automate and amplify their malicious activities.

Integrating generative AI elements into classic scams

This study aligns with previous observations on the integration of generative AI elements in traditional cyber scams. For instance, scammers have already utilized AI-generated text or photographs to deceive and lure victims into various fraudulent schemes. As AI technology becomes more accessible and sophisticated, threat actors are likely to explore new avenues to exploit unsuspecting targets on a larger scale.

Despite the limited current interest among dark web forums in using LLMs, it is essential for the cybersecurity community to remain vigilant and proactive in addressing AI-based threats. The potential for these powerful AI tools to be harnessed for malicious purposes cannot be ignored. Close collaboration between researchers, industry experts, and law enforcement agencies will be crucial in mitigating the emerging risks and enhancing our collective resilience to future AI-driven cyber threats.
