Dominic Jainy brings a sophisticated perspective to the intersection of artificial intelligence and cybersecurity, drawing on years of experience in machine learning and blockchain architecture. As organizations move toward autonomous workflows, his insights into how non-human identities interact with sensitive data have become essential for modern security teams. In this conversation, he explores the rising threat of “toxic combinations”—the dangerous overlap of permissions that occurs when AI agents bridge disparate SaaS applications—and provides a roadmap for securing the invisible connections that now define the enterprise landscape.
High-profile leaks have shown agent API tokens and plaintext credentials stored together in unencrypted tables. What specific architectural flaws allow these “toxic combinations” to persist, and what step-by-step protocols should teams follow to ensure third-party keys aren’t exposed through agent-to-agent messaging?
The architectural flaw at the heart of these leaks is the failure to treat the agent as a distinct, high-risk security boundary; the Moltbook incident, in which 1.5 million agent API tokens and 35,000 email addresses were exposed, is a case in point. When developers build social networks or platforms for 770,000 active agents, they often prioritize connectivity over isolation, leading to unencrypted tables where internal tokens sit right next to third-party credentials. To stop these toxic combinations, teams must first implement a protocol where every agent-to-agent interaction is stripped of plaintext secrets before storage. Second, organizations need to move away from single-application security reviews and instead focus on the “bridge” itself, ensuring that an agent authorized to talk to one service cannot inherently pass those credentials to another unauthorized peer. Finally, establishing a strict non-human identity inventory ensures that every bot is treated with the same scrutiny as a human user, preventing the silent accumulation of “shadow” keys in shared messaging spaces.
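The first step above, stripping plaintext secrets from agent-to-agent messages before storage, could be sketched as follows. This is a minimal illustration, not a reference implementation: the token patterns are assumed examples of common credential formats, and a production system would use a vetted secret-scanning library with entropy checks rather than a handful of regexes.

```python
import re

# Illustrative patterns for common credential formats (assumptions, not an
# exhaustive list); real deployments would rely on a dedicated scanner.
SECRET_PATTERNS = [
    re.compile(r"sk-[A-Za-z0-9]{20,}"),           # OpenAI-style API keys
    re.compile(r"ghp_[A-Za-z0-9]{36}"),           # GitHub personal access tokens
    re.compile(r"xox[baprs]-[A-Za-z0-9-]{10,}"),  # Slack-style tokens
]

def redact_secrets(message: str) -> str:
    """Replace any plaintext credential in an agent message before it is stored."""
    for pattern in SECRET_PATTERNS:
        message = pattern.sub("[REDACTED]", message)
    return message

# Example: an agent tries to forward a peer's token through a shared channel.
stored = redact_secrets("use token sk-abcdefghijklmnopqrstuvwx to call the API")
```

The key design choice is that redaction happens on the write path, so even a leaked or unencrypted message table never contains a usable credential.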
When an MCP connector bridges a developer’s IDE with a business messaging platform, what are the primary risks regarding unauthorized data flows? How can administrators identify if instructions from a chat app are flowing back into an IDE context to trigger a prompt injection?
The primary risk in this scenario is the creation of an unmonitored trust relationship between two environments that were never meant to share a security context. When an MCP connector links an IDE to a messaging platform like Slack, it creates a bi-directional highway where code snippets can leak out, but more dangerously, malicious instructions can flow back in. Administrators can identify these risks by monitoring for “runtime drift,” where the behavior of the integration starts to deviate from its original, approved purpose. For instance, if a connector designed to post snippets starts receiving complex execution commands from a chat channel, that is a major red flag for a prompt injection attack. Identifying these flows requires a platform that can visualize the runtime graph continuously, as manual oversight simply cannot keep up with the speed at which these connections are wired together.
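The “runtime drift” check described above can be reduced to a simple idea: compare what a connector is doing now against the baseline of actions it was approved for. A minimal sketch, assuming each event is logged as a dictionary with an `action` field (the field names and action labels here are illustrative assumptions):

```python
# Actions the IDE-to-chat connector was approved to perform at deployment.
APPROVED_ACTIONS = {"post_snippet", "read_channel_metadata"}

def detect_drift(events: list[dict]) -> list[dict]:
    """Return events whose action falls outside the connector's approved baseline."""
    return [e for e in events if e["action"] not in APPROVED_ACTIONS]

events = [
    {"action": "post_snippet", "source": "ide"},
    {"action": "execute_command", "source": "chat"},  # instruction flowing back in
]
drifted = detect_drift(events)  # flags the chat-originated execution command
```

In practice this comparison would run continuously against the live runtime graph, but the red flag is the same: a connector built to post snippets suddenly receiving execution commands from a chat channel.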
Non-human identities now outnumber human accounts in most SaaS environments, yet many organizations focus only on user-level permissions. How do you build a comprehensive inventory for bots and AI agents, and what metrics best quantify the risks of over-privileged API access across different platforms?
Building a comprehensive inventory starts with the realization that 56% of organizations, according to the State of SaaS Security 2025 report, are already deeply concerned about over-privileged API access. To tackle this, you must place every AI agent, bot, and OAuth integration into a centralized register that assigns an owner and a mandatory review date to every non-human entity. The best metric for quantifying risk isn’t the raw number of permissions but the “cross-app scope grant” score: specifically, how many write scopes an identity holds across multiple sensitive platforms simultaneously. You should also track the ratio of non-human to human identities to understand your true attack surface, as these automated accounts often operate without the traditional guardrails of multi-factor authentication. By focusing on how many agents have “read” access in one app and “write” access in another, you can pinpoint the exact locations where a toxic combination is most likely to result in data exfiltration.
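The “cross-app scope grant” score described above could be computed from an inventory export along these lines. This is a hedged sketch: the grant record shape, the scope labels, and the set of sensitive apps are all assumptions standing in for whatever your inventory actually produces.

```python
from collections import defaultdict

# Which platforms count as sensitive is an organizational judgment;
# these names are placeholders.
SENSITIVE_APPS = {"salesforce", "github", "drive"}

def cross_app_write_score(grants: list[dict]) -> dict[str, int]:
    """Count, per identity, how many distinct sensitive apps it can write to."""
    writes = defaultdict(set)
    for g in grants:
        if g["scope"] == "write" and g["app"] in SENSITIVE_APPS:
            writes[g["identity"]].add(g["app"])
    return {identity: len(apps) for identity, apps in writes.items()}

grants = [
    {"identity": "agent-1", "app": "github", "scope": "write"},
    {"identity": "agent-1", "app": "salesforce", "scope": "write"},
    {"identity": "agent-2", "app": "drive", "scope": "read"},
]
scores = cross_app_write_score(grants)  # agent-1 spans two sensitive platforms
```

An identity with a score of two or more is exactly the kind of simultaneous multi-platform write access the interview flags as the real measure of risk, regardless of how many total permissions it holds.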
Traditional access reviews often analyze one application at a time, failing to see the trust relationships formed at runtime through OAuth grants. How can security teams transition to a “bridge-centric” review process, and what are the tells that an integration’s behavior has drifted from its original purpose?
Transitioning to a bridge-centric review requires a fundamental shift in perspective; you are no longer just asking “who has access to GitHub?” but “what path exists between GitHub and Slack?” This involves creating a review trail for every connector that explicitly names both sides of the relationship and the specific trust established between them. One of the clearest tells of behavioral drift is when an integration begins requesting new scopes or interacting with a different set of data than it did during its first week of deployment. If an AI agent that was originally provisioned to summarize documents in Drive suddenly begins querying records in Salesforce, the “runtime graph” has changed, and the integration is drifting. Security teams should look for these cross-app scope anomalies, as they are the primary indicators that a once-safe connection has mutated into a high-risk liability.
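The behavioral tell described above, an integration using scopes it never touched in its first week, amounts to a set difference between a recorded baseline and current observed usage. A minimal sketch, assuming scopes are tracked as plain strings (the scope names here are illustrative):

```python
def scope_drift(baseline: set[str], current: set[str]) -> set[str]:
    """Scopes an integration uses now that were absent from its first-week baseline."""
    return current - baseline

# The Drive-summarizer example from above: provisioned for read-only Drive
# access, now observed querying Salesforce.
baseline = {"drive.readonly"}
current = {"drive.readonly", "salesforce.query"}
new_scopes = scope_drift(baseline, current)  # the runtime graph has changed
```

Pairing this check with a review record that explicitly names both sides of the bridge gives reviewers the cross-app anomaly signal the interview describes, rather than a per-app permission list.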
Since automated connectors can be installed in seconds, manual governance often fails to keep pace with the runtime graph. How does mapping the connections between identities and data flows across the entire SaaS environment change your threat response, and how do you prioritize which cross-app scopes to revoke?
Mapping these connections shifts your threat response from being reactive and siloed to being proactive and holistic, allowing you to see an exposure as a single event rather than three separate, unrelated approvals. For example, using a knowledge graph like Reco’s allows you to see the moment an IDE connects to a messaging channel and flag it as an unauthorized permission breakdown before it can be exploited. Prioritization becomes much simpler when you can see the full chain: you should always revoke the “write” scopes on identities that bridge two high-value data stores first. If an agent has the power to read from your source code and write to a public-facing bot, that is a high-priority revocation compared to an agent that only has read access within a single, isolated environment. By visualizing the data flow, you can focus your energy on the bridges that have the shortest path to exfiltration rather than getting bogged down in thousands of low-risk, single-app permissions.
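The prioritization rule described above, revoke first the write scopes on bridges between high-value stores, can be expressed as a simple ranking function. This is a sketch under stated assumptions: the bridge record fields and the list of high-value stores are hypothetical placeholders for your own data classification.

```python
# Which data stores count as high-value is an assumption for illustration.
HIGH_VALUE = {"source_code", "customer_records"}

def revocation_priority(bridge: dict) -> int:
    """Rank a cross-app bridge; higher scores should be revoked first."""
    score = 0
    # Writing to a high-value store or a public-facing surface is the
    # shortest path to exfiltration.
    if bridge["writes_to"] in HIGH_VALUE or bridge["public_facing"]:
        score += 2
    if bridge["reads_from"] in HIGH_VALUE:
        score += 1
    return score

bridges = [
    # The example from above: reads source code, writes to a public-facing bot.
    {"id": "a", "reads_from": "source_code", "writes_to": "public_bot", "public_facing": True},
    # Read/write confined to a single low-value app.
    {"id": "b", "reads_from": "wiki", "writes_to": "wiki", "public_facing": False},
]
ordered = sorted(bridges, key=revocation_priority, reverse=True)
```

Sorting the full bridge inventory this way surfaces the short exfiltration paths first, so thousands of low-risk single-app permissions never compete for attention with a source-code-to-public-bot chain.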
What is your forecast for the future of AI agent security risks as these autonomous systems become more integrated into enterprise workflows?
My forecast is that the next generation of major breaches will not rely on traditional vulnerabilities or zero-day exploits, but will instead involve AI agents doing exactly what they were authorized to do, albeit in a malicious context. As these systems become more autonomous, they will create a “web of trust” that is so complex that manual human oversight will become physically impossible. We will likely see a surge in “indirect prompt injection” attacks, where an agent picks up a malicious instruction from a seemingly harmless document and then uses its legitimate API scopes to exfiltrate data across the entire SaaS stack. To survive this shift, organizations must move away from static, once-a-year reviews and adopt dynamic, continuous monitoring that can kill a session the microsecond an agent’s behavior diverges from its intended mission. The future of security isn’t about blocking the agent; it’s about having the visibility to see the full chain of its actions before the chain breaks.
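The continuous-monitoring posture sketched in this forecast, terminating a session the moment behavior diverges from its mission, could look something like the following. The function and field names are hypothetical; a real system would hook into session management rather than take a callback.

```python
def monitor_session(session_actions, mission_scopes, kill_session):
    """Terminate an agent session as soon as an action falls outside its mission.

    Returns the diverging action that triggered the kill, or None if the
    session completed within its mission scopes.
    """
    for action in session_actions:
        if action not in mission_scopes:
            kill_session()
            return action
    return None

# Example: an agent provisioned only to summarize documents attempts a bulk
# export, the indirect-prompt-injection pattern described above.
killed = []
trigger = monitor_session(
    ["summarize_doc", "export_all_records"],
    mission_scopes={"summarize_doc"},
    kill_session=lambda: killed.append(True),
)
```

The point of the sketch is the ordering: the kill decision is made inline, per action, rather than in a periodic review, which is what makes the response fast enough to interrupt a legitimate-scope attack mid-chain.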
