AI vs Human Endeavors in Phishing: An Examination of the IBM X-Force Research on Cybersecurity Threats

Phishing, a technique threat actors use to trick individuals into divulging sensitive information, remains the leading infection vector for cybersecurity incidents. IBM X-Force, in a research project led by Chief People Hacker Stephanie “Snow” Carruthers, compared the effectiveness of human-written phishing emails against those generated by AI language models. The findings shed light on why human-written emails still edge out their AI counterparts, the looming threat posed by AI tools, and recommendations for countering the growing influence of generative AI in cybercrime.

The project, conducted under Carruthers’s guidance, set out to measure how human-written and AI-generated phishing emails perform against real recipients, testing whether an AI-based approach could outperform the human touch in deceiving targets.

Click Rates of Human-Written vs. AI-Generated Phishing Emails

Analyzing the data, Carruthers found that human-written phishing emails achieved a click rate 3 percentage points higher than their AI-generated counterparts: 14% for the human-written emails versus 11% for those generated by AI.

Factors Contributing to the Success of Human-Written Emails

The researchers attribute the success of human-written emails to their ability to appeal to human emotional intelligence. These emails were crafted carefully, exploiting psychological triggers and persuading recipients to take action. Furthermore, the selection of a specific program within the organization, instead of employing vague or generic topics, allowed the human-written emails to appear more authentic and relevant.

Threat of AI Tools for Phishing

The emergence of tools such as WormGPT, a ChatGPT-style model marketed in cybercrime forums without ethical guardrails, raises concerns that AI can be used to facilitate sophisticated phishing attacks. These unrestrained AI variants offer attackers a streamlined way to scale their operations, intensifying the threat faced by organizations and individuals.

Phishing as the Common Infection Vector

IBM’s 2023 Threat Intelligence Index substantiates that phishing remains the most prevalent infection vector for cybersecurity incidents. With the continued evolution of AI and its integration in cybercrime, the significance of tackling phishing attacks becomes increasingly crucial.

Potential Use of Generative AI for Attackers

Carruthers highlights the possibility of generative AI augmenting open-source intelligence analysis for attackers. Though not explored in the research project, the growing sophistication of generative AI models may provide cybercriminals with advanced tools to orchestrate more efficient and targeted phishing campaigns.

As the research conducted by IBM X-Force reveals, human-written phishing emails still exhibit a superior success rate compared to their AI-generated counterparts. The ability to target emotional intelligence and adopt a personalized approach remains a fundamental advantage. However, as generative AI continues to advance, the threat landscape evolves, necessitating a holistic and collaborative approach to cybersecurity. Adapting preventive measures, enhancing employee training, and remaining vigilant against the capabilities of generative AI are crucial for organizations to protect themselves from the ever-growing peril of phishing attacks.
