The Rise of AI-Generated Phishing: Inside the Tactics, Risks, and Countermeasures for ChatGPT- and Bard-Enabled Email Attacks

As technology advances, so do the tactics of cybercriminals. A recent development is the use of artificial intelligence (AI) in phishing attacks, which aim to trick people into revealing sensitive information or downloading malware. Researchers have uncovered a new phishing email campaign that employs AI-powered language models, specifically ChatGPT and Google Bard, to create sophisticated email attacks.

The Rise of AI-Based Phishing Attacks

Threat actors have increasingly been relying on AI since ChatGPT's public release in November 2022. AI-based phishing is particularly worrisome because malicious actors can use machine learning models to craft convincing scams that are difficult to identify. These models can imitate the writing styles of real people, leading targets to believe they are receiving legitimate emails.

New phishing email campaign using ChatGPT and Google Bard

The new phishing campaign that researchers discovered is an example of an AI-based phishing attack that uses language models to generate convincing emails. ChatGPT and Google Bard are popular AI language models for text generation: trained on large bodies of prior text, they predict and generate new content from the prompts they are given.

AI-based security platforms like Trustifi

As AI-based phishing attacks become more prevalent, organizations need to remain vigilant and take proactive steps to protect themselves. AI-based email security platforms such as Trustifi are essential in fighting against such attacks. Trustifi integrates AI, natural language processing (NLP), and machine learning technologies to identify and block malicious emails from entering an organization’s network.
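To make the NLP angle concrete, here is a minimal sketch of the kind of text classification such platforms build on: a TF-IDF representation fed into a logistic regression model. The toy corpus, labels, and `classify` helper are illustrative inventions, not Trustifi's actual implementation, and production systems add many more signals (headers, URLs, sender reputation) and vastly larger training data.

```python
# Minimal sketch of NLP-based phishing detection: TF-IDF features plus
# logistic regression, trained on a tiny made-up corpus. Illustrative
# only; not the implementation of any real email security product.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training data: 1 = phishing, 0 = legitimate.
emails = [
    "Urgent: verify your account password now or it will be suspended",
    "Your invoice is overdue, click here to confirm payment details",
    "Reset your login credentials immediately to avoid account closure",
    "Meeting moved to 3pm tomorrow, agenda attached",
    "Quarterly report draft is ready for your review",
    "Lunch on Friday to celebrate the product launch?",
]
labels = [1, 1, 1, 0, 0, 0]

# Word and bigram features capture phrases like "verify your account".
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(emails, labels)

def classify(text: str) -> str:
    """Return a coarse verdict for a single email body."""
    return "phishing" if model.predict([text])[0] == 1 else "legitimate"
```

In practice the interesting engineering is in the training data and the extra signals, not the classifier itself; the pipeline above is just the smallest runnable version of the idea.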

The significance of phishing emails as a threat to organizations

Phishing attacks are an ongoing issue for organizations of all sizes, often serving as the initial point of attack before a more serious data breach occurs. According to the FBI, phishing scams cost victims over $4.2 billion between 2013 and 2020.

Case Study: Impersonation of Facebook in a Phishing Email

In one of the phishing emails discovered by researchers, the threat actor impersonated Facebook to gain access to a target’s login credentials. The email appeared to be legitimate, using the same logos, font, and layout as Facebook’s official email communications. Upon analysis, however, the researchers discovered that the email consisted of AI-generated text.
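One basic defense against this kind of brand impersonation is to compare the brand claimed in an email's display name against the actual sending domain. The sketch below is illustrative only: the brand-to-domain mapping and the `looks_spoofed` helper are made up for this example, and real detection also leans on SPF/DKIM/DMARC authentication results rather than the From header alone.

```python
# Illustrative sketch: flag emails whose display name claims a brand
# that the sending domain does not belong to. The BRAND_DOMAINS table
# is a hypothetical example, not an authoritative allowlist.
from email.utils import parseaddr

BRAND_DOMAINS = {"facebook": {"facebook.com", "facebookmail.com"}}

def looks_spoofed(from_header: str) -> bool:
    """True if the display name claims a brand the domain doesn't match."""
    name, address = parseaddr(from_header)
    domain = address.rsplit("@", 1)[-1].lower()
    for brand, domains in BRAND_DOMAINS.items():
        if brand in name.lower() and domain not in domains:
            return True
    return False
```

This check is trivially evaded by attackers who omit the brand name, which is why it only makes sense as one signal among many.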

The discovery of AI-generated text in phishing emails raises safety concerns

AI-generated text in phishing emails complicates security efforts because it can be difficult to distinguish from human-written email. Impersonation via chatbots or synthesized voices further complicates matters, since AI-powered voices sound increasingly human-like.

The use of platforms like ChatGPT in generating convincing phishing emails and malware is concerning

Malicious actors can use language models like ChatGPT and Google Bard to generate convincing phishing emails that are difficult to distinguish from legitimate emails. AI-generated text can also be used for other kinds of cyberattacks, such as creating dangerous malware designed to infiltrate an organization’s network.

The emergence of vendor fraud through false invoices

Vendor fraud is another emerging form of cybercrime, built on fraudulent invoices. Malicious actors use these invoices to trick companies into transferring funds or paying for nonexistent products or services.
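A common countermeasure is to validate incoming payment requests against a pre-approved vendor register before any transfer is made. The sketch below is a hypothetical example of that idea: the register, field names, and `flag_invoice` helper are inventions for illustration, not any specific product's workflow.

```python
# Hypothetical sketch: sanity-check an incoming invoice against a
# pre-approved vendor register, flagging unknown sender domains and
# bank details that differ from what is on file. All data is made up.
APPROVED_VENDORS = {
    "acme-supplies.com": {"iban": "DE89370400440532013000"},
}

def flag_invoice(sender_email: str, iban: str) -> list[str]:
    """Return a list of red flags for an incoming invoice."""
    warnings = []
    domain = sender_email.rsplit("@", 1)[-1].lower()
    vendor = APPROVED_VENDORS.get(domain)
    if vendor is None:
        warnings.append(f"unknown vendor domain: {domain}")
    elif vendor["iban"] != iban:
        warnings.append("bank details differ from vendor register")
    return warnings
```

Any non-empty result would route the invoice to a human for manual verification, for instance a phone call to the vendor on a previously known number.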

As technology continues to advance, cybercrime is evolving and becoming more sophisticated. As demonstrated by the new phishing email campaign that researchers uncovered, AI-based phishing attacks pose a significant threat to organizations. It’s crucial that organizations remain vigilant and take proactive measures to protect themselves. By investing in AI and machine learning-based security solutions, they can stay one step ahead of malicious actors and protect their sensitive data from harm.
