As technology advances, so do the tactics of cybercriminals. A recent development is the use of artificial intelligence (AI) in phishing attacks, which are designed to trick people into revealing sensitive information or downloading malware. Researchers have uncovered a new phishing email campaign that employs AI-powered language models, specifically ChatGPT and Google Bard, to create sophisticated email attacks.
The Rise of AI-Based Phishing Attacks
Threat actors have increasingly relied on AI since ChatGPT's public release in November 2022. AI-based phishing is particularly worrisome because malicious actors can use machine learning models to create convincing scams that are difficult to identify. These models can imitate the writing styles of real people, leading targets to believe they are receiving legitimate emails.
New phishing email campaign using ChatGPT and Google Bard
The new phishing campaign that researchers discovered is an example of an AI-based phishing attack that uses language models to generate convincing emails. ChatGPT and Google Bard are popular AI language models for text generation. These models are trained on large bodies of text and generate new content by predicting likely continuations of a given prompt.
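To illustrate the underlying mechanism, here is a minimal sketch of prompt-driven text generation using the small, openly available GPT-2 model via Hugging Face's transformers library, as a stand-in for larger commercial models like ChatGPT and Bard (whose hosted APIs differ); the prompt text is purely illustrative:

```python
# Minimal sketch of prompt-driven text generation. GPT-2 stands in here
# for larger commercial models; the prompt is illustrative only.
from transformers import pipeline

# Load a small, publicly available language model.
generator = pipeline("text-generation", model="gpt2")

# The model predicts likely continuations of whatever prompt it is given.
prompt = "Dear customer, we noticed unusual activity on your account"
result = generator(prompt, max_new_tokens=40, num_return_sequences=1)

print(result[0]["generated_text"])
```

The point is not the output quality of this small model but the workflow: with a capable model, a short prompt is enough to produce fluent, on-brand email copy at scale.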
AI-based security platforms like Trustifi
As AI-based phishing attacks become more prevalent, organizations need to remain vigilant and take proactive steps to protect themselves. AI-based email security platforms such as Trustifi are essential in fighting such attacks. Trustifi combines AI, natural language processing (NLP), and machine learning technologies to identify and block malicious emails before they enter an organization's network.
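The internals of any commercial platform are proprietary, but the general approach can be sketched: extract features from email text and train a classifier on labeled examples. The toy dataset below is invented for illustration; real systems draw on far richer signals (headers, sender reputation, URLs, attachments) and vastly larger corpora:

```python
# Minimal sketch of ML-based email filtering: vectorize email text and
# train a classifier on labeled examples. Illustrative only; the training
# data here is hypothetical and far too small for real use.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny hypothetical training set: 1 = phishing, 0 = legitimate.
emails = [
    "Your account has been suspended, verify your password immediately",
    "Urgent: confirm your billing details to avoid service interruption",
    "Meeting moved to 3pm, agenda attached",
    "Quarterly report draft is ready for your review",
]
labels = [1, 1, 0, 0]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(emails, labels)

# Score an unseen message; in production this would gate delivery.
incoming = ["Please verify your login credentials within 24 hours"]
print(model.predict_proba(incoming))  # columns: [P(legitimate), P(phishing)]
```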
The significance of phishing emails as a threat to organizations
Phishing attacks are an ongoing issue for organizations of all sizes because they often serve as the entry point for more serious security breaches: a single successful phishing email can precede a major data breach. According to the FBI, phishing scams cost victims more than $4.2 billion between 2013 and 2020.
Case Study: Impersonation of Facebook in a Phishing Email
In one of the phishing emails discovered by researchers, the threat actor impersonated Facebook to gain access to a target's login credentials. The email appeared legitimate, using the same logos, fonts, and layout as Facebook's official email communications. Upon analysis, however, the researchers determined that the email's text was AI-generated.
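Polished copy makes visual inspection unreliable, so defenders lean on signals an attacker cannot easily fake. Below is a minimal sketch of one such check, flagging messages whose display name claims a brand while the sending domain does not belong to it; the domain allow-list and example addresses are assumptions for illustration, and real systems would also verify SPF, DKIM, and DMARC results:

```python
# Minimal sketch of a brand-impersonation check: flag messages whose
# display name claims a brand while the sending domain does not match.
from email.utils import parseaddr

# Hypothetical allow-list of domains the brand actually sends from.
FACEBOOK_DOMAINS = {"facebook.com", "facebookmail.com"}

def looks_like_facebook_spoof(from_header: str) -> bool:
    display_name, address = parseaddr(from_header)
    domain = address.rsplit("@", 1)[-1].lower()
    claims_brand = "facebook" in display_name.lower()
    return claims_brand and domain not in FACEBOOK_DOMAINS

print(looks_like_facebook_spoof("Facebook Security <alert@fb-support.xyz>"))  # True
print(looks_like_facebook_spoof("Facebook <security@facebookmail.com>"))      # False
```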
The discovery of AI-generated text in phishing emails raises security concerns
The AI-generated text in phishing emails complicates security efforts because it can be difficult to distinguish AI-generated emails from human-written ones. The use of chatbots or robotic voices for impersonation further complicates matters, since AI-powered voices sound increasingly human-like.
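One heuristic that has been explored for spotting machine-written text is perplexity: scoring how predictable a passage is to a language model, since AI-generated text tends to score as more predictable than human writing. The sketch below computes perplexity with the open GPT-2 model; this signal is noisy and easily defeated, so it is at best one input among many, not a detector on its own:

```python
# Minimal sketch of perplexity scoring as a weak AI-text signal.
# Lower perplexity = more predictable to the model = weak evidence of
# machine generation. Never rely on this alone.
import math
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        # The loss is the average negative log-likelihood per token.
        loss = model(ids, labels=ids).loss
    return math.exp(loss.item())

print(perplexity("We detected an unusual login attempt on your account."))
```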
The use of platforms like ChatGPT in generating convincing phishing emails and malware is concerning
Malicious actors can use language models like ChatGPT and Google Bard to generate convincing phishing emails that are difficult to distinguish from legitimate ones. The same models can also assist other kinds of cyberattacks, such as generating malicious code designed to infiltrate an organization's network.
The emergence of vendor fraud through false invoices
Vendor fraud is an emerging form of cybercrime in which malicious actors craft fraudulent invoices to trick companies into transferring funds or paying for products or services that were never delivered.
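A common procedural defense, sketched below, is to treat any change in a vendor's payment details as a red flag and require out-of-band verification (for example, a phone call to a known contact) before paying. The vendor names and account numbers here are hypothetical:

```python
# Minimal sketch of an invoice-fraud check: compare an invoice's bank
# details against the vendor record on file and flag any mismatch for
# out-of-band verification. All data here is hypothetical.
KNOWN_VENDORS = {
    "Acme Supplies": {"iban": "DE89370400440532013000"},
}

def invoice_requires_review(vendor: str, iban: str) -> bool:
    record = KNOWN_VENDORS.get(vendor)
    if record is None:
        return True  # unknown vendor: always review manually
    return iban != record["iban"]  # changed bank details: classic fraud sign

print(invoice_requires_review("Acme Supplies", "GB29NWBK60161331926819"))  # True
```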
As technology continues to advance, cybercrime is also evolving and becoming more sophisticated. As the newly uncovered phishing email campaign demonstrates, AI-based phishing attacks pose a significant threat to organizations. It's crucial that organizations remain vigilant and take proactive measures to protect themselves. By investing in AI and machine learning-based security solutions, they can stay one step ahead of malicious actors and keep their sensitive data safe.