ServiceNow Patches Critical AI Impersonation Flaw

A single email address became the only key an attacker needed to unlock an entire enterprise’s AI infrastructure, bypassing every modern security defense in a newly discovered ServiceNow vulnerability that has now been patched. This high-severity flaw exposed the fragile trust placed in integrated AI systems and highlighted a new frontier of enterprise security risks.

The BodySnatcher Flaw: A Critical Threat to Enterprise AI

Security researchers recently uncovered a critical vulnerability within the ServiceNow AI Platform, now identified as CVE-2025-12420. Codenamed “BodySnatcher” by the security firm AppOmni, the flaw carried a CVSS score of 9.3 out of 10, signaling its extreme severity. The vulnerability’s core threat was its ability to allow an unauthenticated attacker to completely impersonate any user on a target instance.

The exploit targeted the platform’s Virtual Agent integration, a key component of its AI-powered assistance. By manipulating a weakness in the account-linking process, a threat actor could assume the identity of another user, including those with the highest administrative privileges. This effectively gave an outsider the digital keys to an organization’s ServiceNow environment without needing to steal credentials or defeat a single password.

Background: The Growing Risk in AI-Integrated Enterprise Platforms

ServiceNow has become an indispensable tool for countless organizations, serving as the central nervous system for IT operations, customer service, and human resources. The recent push to integrate sophisticated AI assistants like Now Assist into these workflows has unlocked new levels of efficiency but has also introduced novel and complex security challenges that legacy security models were not designed to handle.

A vulnerability that circumvents foundational security controls like multi-factor authentication (MFA) and single sign-on (SSO) is particularly alarming. These technologies are the bedrock of modern enterprise security, and their bypass renders an organization defenseless against this specific attack vector. For companies relying on ServiceNow for sensitive operations, the “BodySnatcher” flaw represented a direct threat to their data integrity, operational continuity, and overall security posture.

Research Methodology, Findings, and Implications

Methodology

The discovery of “BodySnatcher” was the result of a targeted security research initiative by AppOmni. Their investigation focused on the authentication and authorization mechanisms governing the interaction between ServiceNow’s core platform and its integrated AI services. Researchers meticulously analyzed the data flow and logic within the Virtual Agent, which ultimately led them to identify a critical flaw in its account-linking protocol.

Following the confirmation of the vulnerability, AppOmni adhered to a responsible disclosure process, privately reporting their detailed findings to ServiceNow in October 2025. This collaborative approach ensured that ServiceNow had the necessary information to understand the flaw’s root cause, develop a comprehensive fix, and prepare a patch for its customers without alerting malicious actors to the exploit.

Findings

The technical investigation revealed that the vulnerability was caused by a potent combination of two distinct security failures. The primary issue was the presence of a hardcoded, platform-wide secret used within the Virtual Agent API. This static secret, when combined with flawed logic in how user accounts were linked and verified, created the perfect conditions for an impersonation attack.

The most significant finding was the simplicity of the exploit. An attacker did not need insider knowledge, stolen credentials, or sophisticated tools. All that was required to achieve complete user impersonation was the target’s email address. This low barrier to entry meant the flaw was not only severe in its potential impact but also highly accessible, posing a widespread risk to all unpatched ServiceNow instances.
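To see why this combination is so dangerous, consider a minimal sketch of the failure pattern. This is a hypothetical illustration, not ServiceNow's actual code: it assumes an account-linking scheme where a token is derived solely from a platform-wide static secret and the target's email address, with no proof that the caller controls that mailbox.

```python
import hashlib
import hmac

# Hypothetical illustration of the flaw pattern -- NOT ServiceNow's code.
# A single secret shared across every instance means anyone who extracts
# it can mint a valid "linked account" token for ANY user.
PLATFORM_SECRET = b"same-static-secret-on-every-instance"

def mint_link_token(email: str) -> str:
    """Sign an account-link assertion the way a flawed integration might:
    the only user-specific input is the email address."""
    return hmac.new(PLATFORM_SECRET, email.encode(), hashlib.sha256).hexdigest()

def verify_link_token(email: str, token: str) -> bool:
    """Server-side check: accepts any token derived from the shared secret,
    never verifying that the caller actually controls the mailbox."""
    return hmac.compare_digest(mint_link_token(email), token)

# An attacker who recovers the static secret needs only the victim's email:
forged = mint_link_token("admin@victim.example")
print(verify_link_token("admin@victim.example", forged))  # impersonation accepted
```

The structural lesson is that a secret shared across all users (or all instances) cannot serve as proof of any individual identity; once it leaks, every account derived from it is forgeable.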

Implications

The practical consequences of a successful exploit were severe. An attacker impersonating an administrator could execute privileged AI workflows to disable security controls, create backdoor accounts for persistent access, or alter critical system configurations. This level of control would essentially allow a remote attacker to seize an organization’s core AI-driven operational capabilities.

Furthermore, the ability to impersonate any user meant that an attacker could exfiltrate or modify vast amounts of sensitive corporate data, from financial records and intellectual property to employee information. The potential for privilege escalation was immense, as a compromise within the ServiceNow platform could be used as a launchpad for broader attacks across an organization’s entire IT environment.

Reflection and Future Directions

Reflection

The discovery of “BodySnatcher” stands as one of the most significant vulnerabilities in enterprise AI integrations identified to date. It serves as a powerful case study on the emergent risks associated with complex, interconnected systems where AI agents are granted high levels of trust and autonomy. The flaw exposed how a simple logical error in an AI integration could undermine an entire ecosystem of otherwise robust security measures.

At the same time, the incident highlights the effectiveness of the responsible disclosure model. The swift and discreet communication between AppOmni and ServiceNow enabled a rapid response, resulting in a timely patch that protected customers before the vulnerability could be actively exploited. This collaboration proved essential in mitigating a potentially catastrophic security failure across the enterprise landscape.

Future Directions

This discovery should prompt further security research into the authentication and secret management practices of other enterprise-grade AI platforms. The design patterns and integration methods used in ServiceNow are not unique, suggesting that similar vulnerabilities may exist in other ecosystems where third-party or native AI agents are deeply embedded. Moving forward, the security industry must evolve its best practices to address the unique attack vectors introduced by AI. This includes developing more rigorous testing methodologies for AI agent integrations and shifting toward a security paradigm that assumes even trusted internal components can be subverted. The focus must be on validating identity and authorization at every step of an AI-driven workflow, not just at the perimeter.
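The per-step validation principle described above can be sketched in miniature. The names here (`WorkflowContext`, `require_role`) are hypothetical illustrations, not any real ServiceNow API: the idea is simply that each privileged step of an AI-driven workflow re-checks the caller's authorization, rather than trusting a decision made once at the perimeter.

```python
from dataclasses import dataclass

# Hypothetical sketch of per-step authorization in an AI-driven workflow.
@dataclass
class WorkflowContext:
    user: str
    roles: set

class AuthorizationError(Exception):
    pass

def require_role(role: str):
    """Re-check authorization immediately before each privileged step,
    so a subverted agent cannot ride on an earlier perimeter check."""
    def decorator(step):
        def wrapper(ctx: WorkflowContext, *args, **kwargs):
            if role not in ctx.roles:
                raise AuthorizationError(
                    f"{ctx.user} lacks role {role!r} for {step.__name__}")
            return step(ctx, *args, **kwargs)
        return wrapper
    return decorator

@require_role("security_admin")
def disable_security_control(ctx: WorkflowContext, control: str) -> str:
    return f"{control} disabled by {ctx.user}"

# A compromised agent acting as an ordinary user is stopped at the step itself:
ctx = WorkflowContext(user="itil_user", roles={"itil"})
try:
    disable_security_control(ctx, "mfa_enforcement")
except AuthorizationError as err:
    print(err)
```

The design choice worth noting is that the check lives on the action, not the entry point: even if an impersonation flaw lets an attacker into the workflow, each destructive step independently demands a verified identity and role.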

Conclusion: A Call to Action for Immediate Patching

In summary, the “BodySnatcher” flaw represented a profound and direct threat to organizations leveraging ServiceNow’s powerful AI capabilities. Its ability to bypass modern authentication controls using only an email address made it a critical risk that demanded an urgent response.

ServiceNow’s swift development and deployment of a patch in October 2025 neutralized the threat before any exploitation was observed in the wild. The incident nonetheless serves as a crucial lesson in the evolving security landscape, underscoring that constant vigilance and prompt patching remain essential defenses in an era of increasingly integrated artificial intelligence.
