Aembit Announces Summit on Agentic AI Security

As autonomous AI rapidly integrates into corporate environments, the security landscape is undergoing a seismic shift. We are joined by Dominic Jainy, a veteran IT professional specializing in artificial intelligence and machine learning, to explore the emerging challenges and strategies of this new frontier. Drawing on the latest discourse among industry leaders at companies such as Anthropic, LinkedIn, and Walmart, this conversation delves into the practical realities of securing agentic AI, from re-evaluating risk and identity to adapting the entire software development lifecycle for a world where systems think for themselves.

Security leaders are now establishing guardrails for agentic AI as accountability models remain unsettled. What are the top considerations for risk assessment in these autonomous environments, and what practical enforcement approaches are proving effective?

The primary consideration is the shift from predictable, static risk to dynamic, behavioral risk. We’re no longer just assessing a piece of code for vulnerabilities; we’re assessing an autonomous entity that can learn and make decisions. This means our risk models must account for unpredictable agent behavior and the potential for emergent, unintended consequences. Accountability is the biggest gray area. When an agent acts, who is responsible? The developer? The operator? The company? A key approach we’re seeing is the establishment of stringent, identity-based guardrails. For example, instead of giving an AI agent broad access, you create a policy that says, “This agent can access customer database X, but only during business hours, only for read operations, and it must re-authenticate if its behavior deviates by more than 15% from its established baseline.” This makes enforcement automated and contextual, rather than a manual, after-the-fact cleanup.
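
To make that guardrail concrete, here is a minimal Python sketch of the kind of identity-based policy check described above. The AgentPolicy shape, its field names, and the behavioral-deviation signal are illustrative assumptions, not any particular platform’s API.

```python
# Minimal sketch of an identity-based guardrail; the policy model and
# deviation signal are hypothetical, not a specific vendor's API.
from dataclasses import dataclass
from datetime import datetime, time

@dataclass
class AgentPolicy:
    agent_id: str
    resource: str
    allowed_operations: set[str]  # e.g. {"read"}
    window_start: time            # access permitted only inside this window
    window_end: time
    max_deviation: float          # force re-authentication past this drift

def is_allowed(policy: AgentPolicy, operation: str, deviation: float,
               now: datetime | None = None) -> bool:
    """Evaluate one agent action against its guardrail policy."""
    now = now or datetime.now()
    if deviation > policy.max_deviation:
        return False  # behavior drifted too far from baseline: deny, re-authenticate
    in_window = policy.window_start <= now.time() <= policy.window_end
    return in_window and operation in policy.allowed_operations

# The example policy from above: read-only access to customer database X,
# business hours only, re-authentication past 15% behavioral deviation.
policy = AgentPolicy("support-agent-01", "customer-db-x", {"read"},
                     time(9, 0), time(17, 0), 0.15)
print(is_allowed(policy, "read", deviation=0.05))   # True during business hours
print(is_allowed(policy, "write", deviation=0.05))  # False: writes not allowed
```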

With the emergence of concepts like an OWASP Top 10 for agentic AI, how do these new threats differ from traditional application security risks? What novel techniques can organizations use to control secrets usage and improve investigative visibility for these systems?

The difference is profound. Traditional application security risks, like SQL injection or cross-site scripting, are well-understood vulnerabilities in code logic. The new threats in an agentic AI world are far more nuanced. We’re talking about things like model poisoning, prompt injection that hijacks the agent’s purpose, or an agent autonomously chaining actions together to exfiltrate data in a way no human would have conceived. To combat this, we have to move beyond code scanning. A critical technique is centralizing and automating credential management specifically for these non-human agents. You can’t have API keys hard-coded where an agent can find and misuse them. Instead, platforms are emerging that manage secrets and enforce access based on the agent’s identity and context, providing a clear audit trail. This platform telemetry is essential; without a detailed log of every action the agent takes and every resource it tries to access, investigation becomes impossible.
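
As a rough illustration of that pattern, the sketch below shows a toy credential broker that issues short-lived tokens based on an agent’s identity and writes every decision to an audit log. The broker, the grant table, and the token format are all hypothetical; the point is simply that the agent never holds a long-lived, hard-coded key.

```python
# Toy identity-aware secrets broker; the grant table and token format are
# assumptions for illustration only.
import logging
import secrets
import time

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("agent-audit")

GRANTS = {("inventory-agent-01", "inventory-api")}  # (identity, resource) pairs

def issue_credential(agent_id: str, resource: str, ttl_seconds: int = 300):
    """Issue a short-lived credential only if this identity may reach this resource."""
    allowed = (agent_id, resource) in GRANTS
    audit_log.info("agent=%s resource=%s allowed=%s", agent_id, resource, allowed)
    if not allowed:
        return None
    token = secrets.token_urlsafe(32)
    expires_at = int(time.time()) + ttl_seconds  # credential expires on its own
    return {"token": token, "expires_at": expires_at}

print(issue_credential("inventory-agent-01", "inventory-api"))      # short-lived token
print(issue_credential("inventory-agent-01", "customer-payments"))  # None, and logged
```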

As organizations confront “frontier-scale” AI, concerns are rising about sophisticated, AI-enabled threat activities. What specific implications does this have for security operations and the software development lifecycle?

“Frontier-scale” AI changes the game entirely because we’re now facing adversaries, including nation-states, who are using AI to orchestrate attacks. This means attacks can be faster, more complex, and more adaptive than anything a human-led security operations team has ever faced. For security operations, it’s a massive challenge. Your team can’t manually track the behavior of thousands of autonomous agents. The implication is that you need AI to fight AI; monitoring and response must be automated. For the software development lifecycle, the shift is just as dramatic. Security can no longer be a final checkpoint before deployment. Engineering practices must evolve to include continuous assessment of agent behavior, starting in the development environment. We need to build sandboxes where we can test an agent’s potential actions and establish a baseline of normal behavior before it ever touches a production system.
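
One way to picture that baselining step: profile an agent’s actions in the sandbox, then measure how far its production behavior drifts from that profile. The sketch below uses simple action frequencies and an assumed alert threshold; real telemetry would capture far richer behavioral features.

```python
# Sketch of sandbox baselining and drift detection over action frequencies;
# the logs and the alert threshold are illustrative assumptions.
from collections import Counter

def action_profile(actions: list[str]) -> dict[str, float]:
    """Normalize an action log into a frequency distribution."""
    counts = Counter(actions)
    total = sum(counts.values())
    return {action: n / total for action, n in counts.items()}

def deviation(baseline: dict[str, float], observed: dict[str, float]) -> float:
    """Total variation distance between the sandbox baseline and live behavior."""
    keys = baseline.keys() | observed.keys()
    return 0.5 * sum(abs(baseline.get(k, 0.0) - observed.get(k, 0.0)) for k in keys)

sandbox_log = ["read_inventory"] * 90 + ["update_inventory"] * 10
prod_log = ["read_inventory"] * 60 + ["update_inventory"] * 10 + ["export_data"] * 30

drift = deviation(action_profile(sandbox_log), action_profile(prod_log))
if drift > 0.15:  # assumed threshold, echoing the 15% guardrail example above
    print(f"alert: behavior drifted {drift:.0%} from sandbox baseline")
```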

Agentic systems often behave in ways that older security models didn’t anticipate, challenging long-standing expectations. In what key ways do they break these models, and what foundational shifts are needed for identity and access management to address these new risks?

Older security models are fundamentally built on the assumption that the actor—whether a person or a simple script—is predictable. You grant permissions, and the actor operates within those fixed boundaries. Agentic systems shatter this expectation. They operate autonomously, their actions can be emergent, and they can interact with other systems in novel combinations that were never explicitly programmed. This completely breaks perimeter-based security and static role-based access control. The foundational shift has to be toward a true identity-first security posture. It’s no longer about what network you are on, but who you are. For an AI agent, this means its identity, its context, and its behavior become the new perimeter. Access management must become dynamic, enforcing centrally managed policies in real-time for every single action the agent attempts.
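
As a sketch of what per-action, identity-first enforcement could look like, the decorator below intercepts every call an agent makes and consults a central policy table before the action runs. Both the table and the decorator are hypothetical stand-ins for a real policy decision point.

```python
# Sketch of per-action enforcement via an interceptor; the central policy
# table and decorator are illustrative, not a real product's mechanism.
from functools import wraps

CENTRAL_POLICY = {"inventory-agent-01": {"inventory-api"}}  # identity -> resources

def enforced(agent_id: str, resource: str):
    """Wrap an action so it executes only if central policy allows it right now."""
    def decorator(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            if resource not in CENTRAL_POLICY.get(agent_id, set()):
                raise PermissionError(f"{agent_id} denied on {resource}")
            return fn(*args, **kwargs)
        return wrapper
    return decorator

@enforced("inventory-agent-01", "inventory-api")
def read_stock_level(sku: str) -> int:
    return 42  # stand-in for a real API call

print(read_stock_level("SKU-123"))  # permitted: identity and resource match policy
```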

Practitioners across retail, technology, and media are deploying agentic AI. What are the most pressing operational challenges they face in production, and what are some concise, immediately applicable security techniques they are using successfully?

The most pressing challenge is maintaining control and visibility once these agents are in production. It’s one thing to test an agent in a lab, but it’s another to have it operating on live data, interacting with other production systems. The fear of an agent going “rogue” or causing a massive data breach is very real. A concise technique that’s gaining traction is implementing fine-grained, policy-based access control for every agent. For example, a practitioner at a large retailer like Walmart might do this: First, they define a unique identity for an inventory-management AI agent. Second, they write a clear, human-readable policy stating this agent can only access the inventory API and the shipping database, nothing else. Third, they use an access management platform to enforce that policy, so any attempt by the agent to access, say, the customer payment system is automatically blocked and logged. This is a simple, powerful way to apply a least-privilege principle to autonomous systems.
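
Those three steps might translate into code along the following lines. The agent name, resource names, and deny-by-default check are illustrative, not Walmart’s actual implementation; the essential properties are the explicit allow list and the logged denial.

```python
# Sketch of deny-by-default, least-privilege access for one agent identity;
# all names here are hypothetical.
import logging

logging.basicConfig(level=logging.WARNING)

POLICY = {
    "identity": "inventory-mgmt-agent",
    "allow": ["inventory-api", "shipping-db"],  # everything else is denied
}

def access(agent: str, resource: str) -> bool:
    """Permit only resources explicitly listed for this agent; log the rest."""
    if agent == POLICY["identity"] and resource in POLICY["allow"]:
        return True
    logging.warning("blocked: agent=%s resource=%s", agent, resource)
    return False

print(access("inventory-mgmt-agent", "shipping-db"))        # True
print(access("inventory-mgmt-agent", "customer-payments"))  # False, blocked and logged
```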

What is your forecast for agentic AI security over the next five years?

Over the next five years, I forecast a dramatic maturation from theoretical discussions to the widespread adoption of specialized security tooling for AI. The conversation will move away from “What if?” to “Here’s how.” We will see the CISO’s role expand to become a key stakeholder in AI governance, with dedicated budgets for AI-specific security platforms. The concept of managing AI agents through identity and access management will become standard practice, just as it is for human employees today. Ultimately, the organizations that thrive will be those that treat AI security not as a technical hurdle, but as a core business enabler, allowing them to innovate with confidence while maintaining robust control and visibility over their autonomous workforce.
