I’m thrilled to sit down with Aisha Amaira, a renowned MarTech expert with a deep passion for blending technology and marketing. With her extensive background in CRM marketing technology and customer data platforms, Aisha has a unique perspective on how businesses can harness innovation for customer insights. Today, we’re diving into a critical topic: a recently discovered vulnerability in Salesforce’s Agentforce platform. Our conversation will explore the nature of this security flaw, the mechanics of indirect prompt injection, the broader implications for AI systems, and the essential steps to safeguard such technologies.
Can you walk us through how this vulnerability, known as ‘ForcedLeak,’ was uncovered in Salesforce’s Agentforce platform?
Absolutely. The vulnerability was identified by researchers at Noma Security, who were examining how AI agents interact with user inputs in Salesforce’s ecosystem. They focused on routine customer forms, like the Web-to-Lead form, and discovered that these could be weaponized. By embedding malicious instructions within the form’s fields, they realized an AI agent could be tricked into executing unintended actions, potentially leaking sensitive CRM data. It was a clever exploit of something as mundane as a marketing tool, showing how even everyday systems can become attack vectors if not properly secured.
What is Indirect Prompt Injection, and why is it considered such a significant threat in this context?
Indirect Prompt Injection is a technique where attackers embed harmful instructions into seemingly harmless input, like a customer form, which an AI then processes as legitimate commands. It’s akin to cross-site scripting in traditional web security, but here, it targets AI systems. The danger lies in its hybrid nature—it combines technical exploits with social engineering, tricking both the system and potentially the human interacting with it. In the case of Agentforce, this meant an attacker could manipulate the AI to expose confidential data, making it a potent and stealthy attack method.
How were attackers able to conceal malicious instructions within something as common as a Web-to-Lead form?
The key was exploiting the form’s description field, which has a generous 42,000-character limit. This allowed attackers to hide complex, multi-step payloads within what looked like standard business inquiries. These instructions were crafted to blend in, appearing as legitimate requests or comments, so neither the AI nor a human reviewer would suspect anything amiss until the hidden script was triggered. It’s a stark reminder of how much damage can be done when input fields aren’t rigorously sanitized.
Can you explain the role of an expired domain in this attack and how it became a security risk?
Certainly. Part of Salesforce’s content security policy included a whitelist with an expired domain. Researchers spotted this oversight and re-registered the domain for a mere $5, turning it into a trusted channel for data exfiltration. Once they controlled this domain, they could direct the AI to send sensitive information to it without raising red flags. This incident highlights how small lapses, like not updating a whitelist, can open significant security gaps in otherwise robust systems.
What was Salesforce’s response to this vulnerability, and how swiftly did they address it?
Salesforce acted promptly once the issue was disclosed. On September 8, 2025, they rolled out a patch implementing ‘Trusted URL allowlists’ for Agentforce, ensuring that the AI wouldn’t interact with unverified links or domains. They’ve also emphasized their commitment to collaborating with the research community to tackle evolving threats like prompt injection. While they didn’t publicly credit the specific researchers, their response shows a proactive stance in protecting their customers from such vulnerabilities.
Why do experts argue that AI agents like Agentforce present a larger attack surface compared to traditional systems?
AI agents are inherently more complex because they integrate memory, decision-making, and tool execution capabilities. This means a single compromise can cascade rapidly—what experts call spreading at ‘machine speed.’ Unlike traditional systems, where human intervention might slow down an attack, AI can autonomously propagate a breach across connected systems. In Agentforce’s case, once malicious instructions were executed, the AI’s ability to act independently amplified the potential damage, making these systems a prime target for attackers.
What steps do cybersecurity experts recommend to prevent similar vulnerabilities in AI-driven platforms?
The consensus is to treat AI agents as critical production systems. This means securing the surrounding infrastructure—APIs, forms, and middleware—to limit the impact of prompt injection. Experts advocate for sanitizing all external inputs before they reach the AI, using mediation layers to strip out suspicious content like hidden instructions or links. Additionally, maintaining strict configurations, inventorying every AI agent, validating outbound connections, and monitoring for sensitive data access are all crucial to building robust guardrails around these technologies.
What is your forecast for the future of AI security in cloud-based platforms like Salesforce?
I believe we’re at a pivotal moment for AI security. As platforms like Salesforce continue to integrate AI agents into core business processes, the attack surface will only grow. We’ll likely see more sophisticated exploits targeting human-AI interactions, like prompt injection, unless proactive measures become standard. My forecast is that within the next few years, we’ll see a shift toward embedded security-by-design principles in AI development, with stronger input validation and real-time threat detection. However, it will require a cultural change—businesses must prioritize security as much as innovation to stay ahead of increasingly creative adversaries.