Dark Web Forums: Limited Interest in Large Language Models (LLMs) as Cybercrime Tools

The emergence of large language models (LLMs) has sparked both interest and concern within the cybersecurity community. However, recent research by cybersecurity firm Sophos suggests that threat actors on dark web forums show little interest in using these advanced AI tools and, in some cases, express concern about the risks the tools pose to their own operations.

Research Findings: Minimal discussion of AI on dark web forums

Sophos examined four prominent dark web forums known for hosting discussions related to cybercriminal activity. The research revealed that discussion of AI, and of LLMs in particular, was surprisingly scarce: just 100 AI-related posts were found across two of the forums.

Focus on compromised ChatGPT accounts and circumventing LLM protections

Among the limited LLM-related discussions identified, a significant portion revolved around compromised ChatGPT accounts being offered for sale. There was also an emphasis on finding ways to bypass the built-in protections of LLMs, a practice commonly known as ‘jailbreaking.’ It appears that cybercriminals were more interested in exploiting existing LLM resources than in exploring the models’ potential for creating new threats.

Concerns about LLM-generated code and implications for cybercrime

Interestingly, many users on these dark web forums expressed specific concerns about code generated by LLMs. These concerns centered primarily on operational security and the risk of detection by antivirus and endpoint detection and response (AV/EDR) systems. Cybercriminals, it appears, are cautious about using LLMs for fear of their activities being exposed or compromised.

Sophos Study: LLMs and Fraud on a Massive Scale

In parallel with this research, Sophos conducted a separate study demonstrating how LLMs could be used to conduct fraud on a massive scale, even with minimal technical skill. Using LLM tools like GPT-4, Sophos researchers built a fully functioning e-commerce website complete with AI-generated images, audio, and product descriptions.
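To make the content-generation step concrete, here is a minimal sketch in Python of how a single templated product page might be populated with LLM-generated text. It assumes the openai client library (v1+) with an API key in the environment; the prompt, product name, and HTML template are illustrative placeholders, not Sophos's actual tooling.

```python
# Minimal sketch: ask an LLM for a product description and drop it
# into a static storefront template. Assumes the `openai` package
# (v1+) and OPENAI_API_KEY set in the environment; all names and
# prompts here are illustrative, not Sophos's actual pipeline.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def generate_description(product: str) -> str:
    """Ask the model for a short marketing blurb for `product`."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{
            "role": "user",
            "content": f"Write a two-sentence product description for {product}.",
        }],
    )
    return response.choices[0].message.content

PAGE_TEMPLATE = """<html><body>
<h1>{product}</h1>
<p>{description}</p>
</body></html>"""

if __name__ == "__main__":
    product = "a stainless steel travel mug"  # placeholder product
    html = PAGE_TEMPLATE.format(
        product=product,
        description=generate_description(product),
    )
    with open("product.html", "w") as f:
        f.write(html)
```

The point Sophos makes is that nothing above requires more skill than writing a prompt; images and audio can be generated the same way with the corresponding model endpoints.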

Creating hundreds of similar websites at the click of a button

To illustrate the tremendous potential for mass-producing fraudulent websites, Sophos X-Ops revealed that it was able to create hundreds of similar websites in a matter of seconds at the press of a single button. This automation highlights the efficiency and scalability that LLMs can bring to cybercriminal activity.
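The scaling step itself is ordinary templating. The toy sketch below, a hypothetical continuation of the previous example (the product list and output layout are invented for illustration), stamps out one static page per product; this is the kind of repetition a one-click pipeline automates.

```python
# Toy sketch of the scaling step: render one templated page per
# product into its own directory. Product names and paths are
# invented for illustration; a real pipeline would also generate
# per-site images, audio, and descriptions.
from pathlib import Path

PAGE_TEMPLATE = """<html><body>
<h1>{product}</h1>
<p>{description}</p>
</body></html>"""

products = [f"placeholder product {i}" for i in range(100)]

for i, product in enumerate(products):
    site_dir = Path("sites") / f"site_{i:03d}"
    site_dir.mkdir(parents=True, exist_ok=True)
    html = PAGE_TEMPLATE.format(
        product=product,
        description="(generated description would go here)",
    )
    (site_dir / "index.html").write_text(html)
```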

Purpose of the research: Preparing for AI-based threats before they become widespread

Sophos emphasized that the research was not conducted merely to provide insights into the current state of dark web forums but to proactively prepare for the potential threats that AI-based tools like LLMs might pose in the future. By understanding the current landscape and potential misuse of LLMs, cybersecurity professionals can develop countermeasures and preventive strategies to mitigate emerging risks effectively.

Potential for AI technology to be utilized for automated threats

The research findings indicate that while dark web forums currently show limited interest in LLMs, the potential for their application in automated threats cannot be overlooked. As the capabilities of LLMs continue to advance, cybercriminals may ultimately embrace these technologies to automate and amplify their malicious activities.

Integrating generative AI elements into classic scams

This study aligns with previous observations on the integration of generative AI elements into traditional cyber scams. For instance, scammers have already used AI-generated text and photographs to lure victims into various fraudulent schemes. As AI technology becomes more accessible and sophisticated, threat actors are likely to explore new avenues for exploiting unsuspecting targets on a larger scale.

Despite the currently limited interest in LLMs on dark web forums, it is essential for the cybersecurity community to remain vigilant and proactive in addressing AI-based threats. The potential for these powerful AI tools to be harnessed for malicious purposes cannot be ignored. Close collaboration between researchers, industry experts, and law enforcement agencies will be crucial to mitigating emerging risks and enhancing our collective resilience against future AI-driven cyber threats.
