Are AI Agents the New Insider Threat in Cybersecurity?

I’m thrilled to sit down with Dominic Jainy, a seasoned IT professional whose deep expertise in artificial intelligence, machine learning, and blockchain has positioned him as a leading voice in the evolving landscape of cybersecurity. With a passion for exploring how these cutting-edge technologies shape industries, Dominic offers unique insights into the dual role of AI as both a powerful tool for defense and a potential source of new threats. In this conversation, we dive into the complexities of AI agents as insider risks, the latest trends in AI-driven cyberattacks, innovative defensive strategies, real-world vulnerabilities, and the challenges organizations face in securing this rapidly advancing technology.

How do you view AI agents in the context of insider threats, and what makes them unique compared to traditional software risks?

AI agents are a game-changer in how we think about insider threats because they’re not just static pieces of code—they’re dynamic, intelligent entities capable of autonomous decision-making. Unlike traditional software, which operates within strict, predefined parameters, AI agents can learn, adapt, and even exhibit behaviors that mimic human unpredictability. This means they can be authorized to perform tasks on a network, but if they malfunction or get manipulated, they can act like a rogue employee with high-speed access to sensitive systems. The risk is amplified because their actions can be harder to predict or control compared to conventional software vulnerabilities.

What are some of the most concerning trends you’ve observed in AI-related cybersecurity threats recently?

One of the biggest trends in 2025 is the leap in AI reasoning capabilities. These newer models can think through problems longer and even self-correct, which is incredible for productivity but terrifying for security. We’re seeing AI being weaponized in offensive ways—phishing emails are now near-perfect, website cloning is cheaper and faster, and deepfakes are being used to impersonate job applicants or executives. What’s particularly alarming is how accessible these tools have become to bad actors. It’s not just nation-states anymore; even small-time hackers can leverage AI to craft sophisticated attacks with minimal effort.

Can you share how AI is being used defensively to combat cyber threats in organizations today?

On the flip side, AI is proving to be a powerful ally in cybersecurity. Many companies are deploying AI agents to supercharge their security operations. These tools can analyze vast amounts of data in real time, detect anomalies, and even run full investigations before a human analyst steps in. This drastically cuts down response times—sometimes by a factor of three to five. Beyond security operations, AI agents are also making waves in areas like customer service and finance, automating complex tasks and freeing up human resources. The key is ensuring these defensive agents are themselves secure, which is a whole other challenge.

Could you walk us through a real-world example of a vulnerability or attack involving AI agents that’s caught your attention?

Absolutely. One striking case involves prompt injection vulnerabilities in AI systems integrated with office tools. Imagine an AI assistant with access to sensitive data, like files on a cloud drive. Hackers have figured out how to embed hidden instructions in seemingly innocent emails, tricking the AI into zipping up confidential data and sending it out. These kinds of exploits are tough to stop because they abuse the fundamental way AI processes language and instructions. It’s a stark reminder that as we give AI more autonomy, we’re also opening new doors for attackers to walk through.
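One common mitigation for the scenario Dominic describes is a deny-by-default tool policy: whatever instructions arrive in an email body, the agent can only invoke tools an operator has explicitly allowed. The sketch below is illustrative, not any vendor's real API; the tool names (search_files, send_external_email) are hypothetical.

```python
# Minimal sketch of a deny-by-default tool policy for an AI assistant.
# Even if a hidden prompt instructs the agent to zip and exfiltrate
# files, the dispatcher refuses any tool call outside the allowlist.

ALLOWED_TOOLS = {"search_files", "summarize_document"}

def dispatch(tool_name: str, **kwargs):
    """Execute a tool call only if policy permits it."""
    if tool_name not in ALLOWED_TOOLS:
        raise PermissionError(f"tool '{tool_name}' is blocked by policy")
    # ... hand off to the real tool implementation here ...
    return f"ran {tool_name}"
```

The design choice matters: the policy check sits outside the model, so a successful injection cannot talk its way past it.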

What strategies are security teams exploring to monitor and manage the risks posed by AI agents?

Security teams are increasingly focusing on real-time guardrails—mechanisms that monitor what’s going into and coming out of an AI agent. This means scrutinizing prompts for suspicious patterns and screening outputs to prevent leaks of sensitive information. Another promising approach is behavioral tracking, where you establish a baseline of normal activity for an AI agent and flag deviations that might indicate compromise or misuse. The challenge is that AI behaviors are far more complex than traditional software, and tricks like prompt injections can be hidden in unexpected formats, like foreign languages or even emojis.
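The input/output screening Dominic mentions can be sketched as a pair of filters around the agent. The patterns below are deliberately simplistic placeholders; a production guardrail would use trained classifiers and far richer detectors, not a handful of regexes.

```python
import re

# Hypothetical detection patterns, for illustration only. Real guardrails
# combine ML classifiers, canary tokens, and policy engines.
INJECTION_PATTERNS = [
    re.compile(r"ignore (all )?previous instructions", re.I),
    re.compile(r"exfiltrate", re.I),
]
# Crude payment-card-number detector for the output side.
SECRET_PATTERN = re.compile(r"\b(?:\d[ -]?){13,16}\b")

def screen_prompt(prompt: str) -> bool:
    """Return True if the prompt looks safe to forward to the agent."""
    return not any(p.search(prompt) for p in INJECTION_PATTERNS)

def screen_output(text: str) -> bool:
    """Return True if the agent's output contains no obvious secrets."""
    return not SECRET_PATTERN.search(text)
```

As the interview notes, attackers hide injections in unexpected formats (foreign languages, emojis), which is exactly why static pattern lists like this one are a starting point rather than a defense on their own.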

What are some of the biggest hurdles companies face when rolling out AI agents in their operations?

One major hurdle is the high failure rate of AI pilot projects. Many organizations rush to adopt AI, hoping for quick wins, but without a clear strategy, these initiatives often flop. A recent study suggested a huge portion of pilots don’t deliver tangible results because they lack integration with core business processes. On the other hand, newer startups built from the ground up with AI are seeing massive success, scaling rapidly with minimal staff. For larger companies, the struggle is balancing innovation with security—business leaders push for adoption, while security teams grapple with uncharted risks.

Looking ahead, what is your forecast for the future of AI in cybersecurity over the next few years?

I believe we’re at a critical juncture. Over the next few years, AI will become even more embedded in both attack and defense strategies. We’ll likely see more sophisticated AI-driven attacks, with agents acting autonomously to exploit vulnerabilities at scale. At the same time, defensive AI will evolve to anticipate and counter these threats in real time, but only if we address the insider risk they pose. The race will be to build trust and control into these systems before their capabilities outpace our ability to secure them. It’s going to be a tight balance between innovation and safety, and I think the winners will be those who prioritize robust, adaptable security frameworks from the start.
