Dominic Jainy is a seasoned IT professional whose expertise at the intersection of artificial intelligence, machine learning, and blockchain provides a unique lens through which to view modern cybersecurity operations. With years of experience exploring how emerging technologies can both complicate and secure organizational infrastructures, he has become a vocal advocate for more meaningful performance measurement in the Security Operations Center. His approach prioritizes the human element of security, arguing that the psychological well-being of analysts is just as critical to defense as the algorithms they manage.
This discussion explores the inherent dangers of relying on quantitative “vanity metrics” that prioritize speed and volume over the actual detection of threats. We examine how focusing on the wrong numbers can lead to analyst burnout and a weakened security posture, while also looking at better alternatives like red teaming and hypothesis-led hunting. The conversation highlights the transition from a “ticket-focused” culture to one rooted in adversary awareness and technical mastery.
When security teams prioritize the volume of tickets processed or the speed of closing them, analysts often feel pressured to dismiss items as false positives. How does this environment change the way a team handles complex threats, and what specific behaviors emerge when log volume is valued over log quality?
When a team prioritizes speed over depth, the entire culture shifts from proactive defense to reactive survival. Analysts begin to view every alert as an obstacle to be cleared rather than a signal to be investigated, which breeds a pervasive “false positive” bias. This environment is particularly dangerous for complex, multi-stage threats, because these attacks require the slow, methodical unravelling for which high-pressure metrics leave no room. Specific behaviors emerge: analysts stop asking “why” an event occurred and start looking for any excuse to hit the close button, effectively turning a highly technical role into a repetitive administrative task.
Measuring success by the sheer number of detection rules can inadvertently flood a system with noise. What practical steps can a manager take to maintain strict thresholds for rule accuracy, and how can a team transition from focusing on quantity to emphasizing the value of true positives?
To break the cycle of noise, managers must implement and enforce “hard thresholds” for false positive rates before any detection rule is ever allowed into a production environment. If a rule generates a flood of alerts with a low rate of accuracy, it should be sent back for refinement or decommissioned entirely to preserve the team’s focus. The transition to emphasizing true positives involves changing what the organization celebrates; instead of rewarding an analyst for writing ten mediocre rules, the focus should be on the one rule that caught a stealthy, high-impact technique. This shift requires a commitment to quality over quantity, ensuring that every alert that reaches an analyst’s desk is worthy of their time and expertise.
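The gating logic described above can be sketched in a few lines. This is a minimal illustration, not a reference implementation: the rule names, counts, and the 0.80 precision threshold are assumptions chosen for the example, and a real pipeline would pull these statistics from the SIEM’s alert dispositions.

```python
# Sketch: gate detection rules on a "hard threshold" for precision before
# they reach production. All rule names and numbers below are illustrative.
from dataclasses import dataclass


@dataclass
class RuleStats:
    name: str
    true_positives: int
    false_positives: int

    @property
    def precision(self) -> float:
        """Share of this rule's alerts that were real threats."""
        total = self.true_positives + self.false_positives
        return self.true_positives / total if total else 0.0


PRECISION_THRESHOLD = 0.80  # assumed threshold; tune per organization


def triage_rules(rules):
    """Split staged rules into production-ready and sent-back-for-refinement."""
    production = [r for r in rules if r.precision >= PRECISION_THRESHOLD]
    back_to_dev = [r for r in rules if r.precision < PRECISION_THRESHOLD]
    return production, back_to_dev


rules = [
    RuleStats("credential-dump-detect", true_positives=9, false_positives=1),
    RuleStats("noisy-dns-rule", true_positives=3, false_positives=47),
]
production, back_to_dev = triage_rules(rules)
```

Here the high-fidelity rule (precision 0.90) is promoted, while the noisy one (precision 0.06) goes back for refinement or decommissioning, exactly the celebration of one good rule over ten mediocre ones.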
Time to detect (TTD) and time to respond (TTR) are frequently cited as the only reportable metrics that demonstrate a functional service. Why is red or purple teaming the most effective way to validate these numbers, and what specific data points should be captured to ensure an accurate assessment?
Red and purple teaming provide a necessary dose of reality because they simulate actual adversary behavior in a controlled but live environment, proving whether the SOC is truly effective. Unlike automated reports that can be manipulated by quick ticket closures, a live exercise forces the team to demonstrate how long it actually takes to see an intruder and how long it takes to stop them. To ensure an accurate assessment, organizations should capture the exact timestamp of the initial compromise, the moment the first alert was triggered, and the final time of containment. These specific data points bridge the gap between theoretical capability and actual operational performance, providing the only honest proof of a SOC’s health.
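The three data points above reduce to simple arithmetic once captured. The sketch below shows the derivation; the timestamps are invented exercise data, not results from any real engagement.

```python
# Sketch: derive TTD and TTR from the three timestamps a purple team
# exercise should record. The datetimes are illustrative placeholders.
from datetime import datetime


def detection_metrics(compromise: datetime, first_alert: datetime,
                      containment: datetime):
    """Return (time to detect, time to respond) as timedeltas."""
    ttd = first_alert - compromise    # how long the intruder went unseen
    ttr = containment - first_alert   # from first signal to containment
    return ttd, ttr


compromise  = datetime(2024, 3, 1, 9, 0)    # assumed initial-access time
first_alert = datetime(2024, 3, 1, 9, 42)   # first SOC alert fires
contained   = datetime(2024, 3, 1, 11, 30)  # host isolated, access revoked

ttd, ttr = detection_metrics(compromise, first_alert, contained)
```

Because the compromise timestamp comes from the red team’s ground truth rather than the ticketing system, the resulting 42-minute TTD cannot be gamed by quick ticket closures.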
Burnout is a significant risk for analysts who feel like “ticket monkeys” measured solely on how quickly they click through alerts. How do initiatives like hypothesis-led hunting and specialized tool training improve job satisfaction, and what is the direct relationship between analyst engagement and overall security posture?
The “ticket monkey” phenomenon is one of the primary drivers of industry burnout because it strips the intellectual challenge and agency away from the analyst. Initiatives like hypothesis-led hunting re-engage the brain by allowing analysts to use their understanding of threat actors to search for evidence of attacks that tools might have missed. When analysts are given the time to become experts in their tools and the threat landscape, their job satisfaction increases because they feel like they are contributing to a meaningful mission. This engagement is directly tied to security posture; a curious and empowered analyst is far more likely to spot a subtle, non-signature-based anomaly than an analyst who is exhausted and merely clicking through a queue.
Broad log coverage can become a blind spot if the data collected does not align with the actual techniques used by threat actors. How can an organization track whether their documentation and log ingestion are truly relevant, and what metrics help bridge the gap between technical data and organizational awareness?
Broad coverage is often a deceptive safety net because it creates the illusion of visibility while actually burying relevant signals under a mountain of useless data. An organization can track relevance by mapping their log ingestion directly to known adversary techniques and measuring the “completeness” of their documentation regarding specific threat actors. Bridging the gap requires metrics centered on analyst awareness, such as tracking whether training reports on new threats are actually being read and actioned within the SOC. By focusing on the percentage of relevant assets reporting the right logs rather than just total log volume, the organization ensures that its technical data serves a strategic defensive purpose.
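One way to make that mapping measurable is to score coverage as the share of tracked adversary techniques whose required log sources are all actually ingested. The sketch below assumes a tiny hand-built mapping; the technique IDs follow MITRE ATT&CK naming, but the source requirements shown are simplified illustrations.

```python
# Sketch: log-source coverage measured against adversary techniques rather
# than raw volume. Mappings below are illustrative, not authoritative.

# Which log sources each tracked technique needs for reliable detection.
REQUIRED_SOURCES = {
    "T1059": {"process_creation"},                     # scripting interpreters
    "T1003": {"process_access", "process_creation"},   # credential dumping
    "T1071": {"network_flow", "dns"},                  # C2 over app protocols
}


def technique_coverage(ingested: set) -> float:
    """Fraction of techniques whose required sources are all ingested."""
    covered = [t for t, srcs in REQUIRED_SOURCES.items() if srcs <= ingested]
    return len(covered) / len(REQUIRED_SOURCES)


# Two sources are flowing in, yet only one technique is fully covered:
coverage = technique_coverage({"process_creation", "dns"})
```

The result (one technique in three) illustrates the point in the paragraph: a SOC can ingest plenty of data while remaining blind to most of the techniques it claims to defend against.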
What is your forecast for the future of SOC metrics?
I forecast a major industry shift away from “vanity metrics” that look impressive on a slide deck but offer zero insight into actual security resilience. We will likely see a move toward adversary-aligned metrics, where the success of a SOC is measured almost exclusively by its ability to detect and disrupt specific, documented attacker techniques during unannounced purple team exercises. Management will eventually stop asking how many thousands of tickets were closed and instead focus on the quality of true positive detections and the intellectual growth of their analysts. The future of the SOC lies in prioritizing human expertise and high-fidelity data over the sheer volume of logs and alerts, finally treating cybersecurity as a strategic craft rather than a manufacturing assembly line.
