I’m thrilled to sit down with Dominic Jainy, a seasoned IT professional whose expertise in artificial intelligence, machine learning, and blockchain brings a unique perspective to the ever-evolving field of cybersecurity. With a deep understanding of how technology intersects with security challenges, Dominic is the perfect person to help us unpack the complexities of threat detection and identity-based attacks. In this conversation, we’ll explore the overwhelming issue of false positives in cybersecurity alerts, the critical role of context in distinguishing real threats from benign activity, the growing menace of identity-based attacks, and how AI is stepping in to support overworked security teams.
Can you walk us through the recent findings on false positives in cybersecurity alerts and what they reveal about the struggles security teams face?
Absolutely, Dwaine. A recent report highlighted that a staggering 71% of alerts received by organizations over a year were false positives. That’s a huge number, and it shows just how much noise security teams have to sift through daily. This high rate of false alarms creates a real challenge—teams are constantly on edge, trying to figure out what’s a genuine threat and what’s just harmless activity. It leads to wasted time, resources, and, frankly, a lot of frustration, which can dull their ability to respond effectively when a real attack happens.
Why does context play such a pivotal role in identifying whether an alert signals a real cyber threat or not?
Context is everything in cybersecurity. Without it, you’re just looking at raw data points that could mean anything. For instance, a login from an unusual location might look suspicious, but if you know the employee is traveling for work, it’s no big deal. Context helps security teams connect the dots—combining user behavior, threat intelligence, and environmental factors to make sense of an alert. Without that full picture, distinguishing between a hacker and a legitimate user becomes incredibly tough and time-consuming.
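To make that concrete, here's a minimal sketch of how a detection pipeline might fold context into that travel scenario. Everything in it is illustrative: the LoginAlert fields, the APPROVED_TRAVEL lookup, and the verdict strings are assumptions for the example, not any specific product's logic.

```python
from dataclasses import dataclass

@dataclass
class LoginAlert:
    user: str
    country: str        # where the login came from
    home_country: str   # where the user normally works

# Hypothetical context source: HR travel records we can check against.
APPROVED_TRAVEL = {
    ("alice", "JP"),  # alice has an approved trip to Japan
}

def assess_login(alert: LoginAlert) -> str:
    """Combine the raw signal (odd location) with context (travel records)."""
    if alert.country == alert.home_country:
        return "benign: login from the user's usual country"
    if (alert.user, alert.country) in APPROVED_TRAVEL:
        return "benign: unusual location, but matches an approved trip"
    return "escalate: unusual location with no explaining context"

print(assess_login(LoginAlert("alice", "JP", "US")))  # benign: approved trip
print(assess_login(LoginAlert("bob", "JP", "US")))    # escalate
```

The same raw signal, a login from Japan, produces two different verdicts, and the only difference is the context joined to it.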
What are some common behaviors that trigger alerts but often turn out to be harmless, and why do they raise red flags?
Many everyday actions can set off alerts, like logging in from a new location, tweaking firewall rules, or changing email forwarding settings. These raise red flags because they mimic tactics attackers use; hackers often log in from odd places or alter settings to steal data or maintain access. The problem is, legitimate users do these things too, all the time. It's a fine line, and without a baseline of what's normal for each user, businesses can end up in a constant state of alarm over nothing.
How have identity-based attacks shifted the way security teams approach threat response?
Identity-based attacks have really changed the game. Hackers aren’t just breaking through firewalls anymore; they’re stealing credentials and using trusted accounts to blend in. This means compromised credentials are often the first clue something’s wrong. The focus has shifted heavily toward identity management—things like disabling hacked accounts or resetting passwords are now critical steps in stopping an attack early. It’s a stark reminder that protecting user identities is just as important as securing the network itself.
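As a rough illustration of those early identity-focused steps, here's a sketch in code. The IdentityProvider class is a hypothetical stand-in for whatever directory or IdP API an organization actually runs; the point is the order of operations: cut off new logins, kill live sessions, rotate credentials.

```python
# Minimal containment sketch. `IdentityProvider` is a hypothetical stand-in,
# not a real library; a production version would call the org's actual IdP.
class IdentityProvider:
    def __init__(self):
        self.disabled: set[str] = set()
        self.sessions: dict[str, list[str]] = {}

    def disable_account(self, user: str) -> None:
        self.disabled.add(user)

    def revoke_sessions(self, user: str) -> None:
        self.sessions.pop(user, None)

    def force_password_reset(self, user: str) -> None:
        print(f"password reset required for {user}")

def contain_compromised_identity(idp: IdentityProvider, user: str) -> None:
    """Early identity-focused response: lock the account before the
    attacker can keep using it, then force re-authentication everywhere."""
    idp.disable_account(user)       # stop new logins immediately
    idp.revoke_sessions(user)       # kill any live sessions the attacker holds
    idp.force_password_reset(user)  # the credentials are burned; rotate them

idp = IdentityProvider()
contain_compromised_identity(idp, "alice")
```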
Can you explain how alert fatigue ties into the rise of identity-based attacks?
Alert fatigue is a huge issue, and hackers know it. When security teams are bombarded with alerts—most of which are false positives—they get desensitized. They might miss subtle signs of an identity-based attack, like a compromised account being used quietly over time. Hackers exploit this exhaustion, slipping through the cracks while teams are distracted. It’s a vicious cycle, and staying vigilant under that kind of pressure is incredibly tough.
How is artificial intelligence starting to help manage the flood of cybersecurity alerts?
AI is starting to make a real difference in handling alerts. Tackling just 10% of alerts for some organizations might seem like a small impact, but that still translates to hundreds of thousands of alerts that don't need human review. AI can quickly analyze patterns and context, flagging what's likely benign and prioritizing what needs attention. It's not about replacing people; it's about lightening the load so human expertise can be applied where it matters most.
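Here's a toy version of that triage pattern, just to show the shape of it. The scoring rules and the 0.10 auto-close threshold are made up for illustration; a real system would train or tune these, but the flow of score, auto-close, or queue for an analyst is the same.

```python
# Toy illustration of automated triage: a scorer handles the bulk of alerts,
# and only the uncertain or high-risk ones reach a human.
ALERTS = [
    {"id": 1, "type": "login_new_location", "user_traveling": True},
    {"id": 2, "type": "email_forwarding_change", "user_traveling": False},
    {"id": 3, "type": "firewall_rule_change", "change_ticket": "CHG-1042"},
    {"id": 4, "type": "login_new_location", "user_traveling": False},
]

def score(alert: dict) -> float:
    """Return a rough probability the alert is malicious (illustrative rules)."""
    if alert["type"] == "login_new_location":
        return 0.05 if alert.get("user_traveling") else 0.70
    if alert["type"] == "firewall_rule_change":
        return 0.05 if alert.get("change_ticket") else 0.80
    return 0.50  # no context either way: leave it for a human

AUTO_CLOSE_BELOW = 0.10

for alert in ALERTS:
    s = score(alert)
    verdict = "auto-closed as benign" if s < AUTO_CLOSE_BELOW else "queued for analyst"
    print(f"alert {alert['id']} ({alert['type']}): score {s:.2f} -> {verdict}")
```

Even in this toy run, half the queue disappears before anyone looks at it, which is exactly the relief an overloaded team needs.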
In what ways does AI empower security professionals to focus on the most pressing threats?
AI acts like a first line of defense for alerts. It sifts through millions of notifications, filtering out the noise and reducing the number of incidents that need a human to step in. This frees up security teams to dive into complex threats that require critical thinking and nuanced decision-making. Instead of drowning in alerts, they can focus on strategy and response, which is where their skills really shine.
What are the biggest hurdles in telling apart harmless activity from malicious behavior when data or context is limited?
The biggest hurdle is the lack of a complete picture. Without enough data or context, every alert looks like a potential disaster. For example, a firewall change could be an admin doing their job, or it could be an attacker opening a backdoor. If you don’t have the full story—who made the change, why, and under what circumstances—you’re just guessing. This uncertainty slows down response times and increases the risk of missing real threats while chasing shadows.
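To show what "the full story" looks like in practice, here's a sketch of enriching a firewall-change alert before deciding anything. The CHANGE_TICKETS and ADMINS_ON_SHIFT sources are hypothetical stand-ins for a change-management system and an on-call roster; the takeaway is that the verdict is only as good as the context you can join to the alert.

```python
# Sketch of context enrichment for a firewall-change alert.
CHANGE_TICKETS = {
    "fw-rule-22": {"requested_by": "carol", "reason": "open port for new app"},
}
ADMINS_ON_SHIFT = {"carol", "dave"}

def enrich_firewall_alert(rule_id: str, actor: str) -> dict:
    """Attach who made the change, why, and under what circumstances."""
    ticket = CHANGE_TICKETS.get(rule_id)
    return {
        "rule_id": rule_id,
        "actor": actor,
        "has_ticket": ticket is not None,
        "actor_matches_ticket": bool(ticket) and ticket["requested_by"] == actor,
        "actor_on_shift": actor in ADMINS_ON_SHIFT,
    }

def decide(ctx: dict) -> str:
    if ctx["actor_matches_ticket"] and ctx["actor_on_shift"]:
        return "benign: documented change by an on-shift admin"
    if not ctx["has_ticket"]:
        return "escalate: undocumented firewall change"
    return "investigate: context is incomplete or contradictory"

print(decide(enrich_firewall_alert("fw-rule-22", "carol")))    # benign
print(decide(enrich_firewall_alert("fw-rule-99", "mallory")))  # escalate
```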
What’s your forecast for the role of AI in cybersecurity over the next few years?
I think AI is going to become indispensable in cybersecurity. As threats grow more sophisticated and data volumes explode, human teams alone won’t be able to keep up. AI will likely take on even more of the grunt work—triaging alerts, correlating data, and providing actionable insights in real time. But it’s not just about automation; I see AI evolving to predict threats before they happen, using patterns and behaviors to stop attacks in their tracks. It’s an exciting time, but it’ll also require careful integration to ensure trust and accuracy in these systems.