New Rules and Threats Reshape Telecom Cybersecurity

With a career spanning the intersection of artificial intelligence and critical infrastructure security, Dominic Jainy is a leading voice in navigating the complex cyber threats facing the telecommunications industry. As attackers move away from common malware to sophisticated, stealthy techniques designed to manipulate the very core of network functions, his work focuses on building proactive, intelligence-led digital resilience. Today, Cairon Peterson sits down with Dominic to explore this evolving landscape. They will discuss how security teams can unmask nation-state actors hiding in plain sight, the challenge of proving resilience to regulators before a crisis hits, and why context-aware, rapid response has become the new benchmark for defending the networks that underpin our connected world.

Given that telecom-specific threats often manipulate core network functions rather than using common malware, how do security teams distinguish sophisticated malicious activity from normal operational behavior? Could you share a step-by-step example of how such a stealthy attack might be uncovered?

That’s the central challenge we face. It’s not about looking for a loud bang; it’s about hearing a faint, unusual whisper. We distinguish these threats by moving away from just looking for known malware signatures and instead establishing a deep, analytical baseline of what “normal” looks like in a specific network. Think of it like a living blueprint of data flows, protocol usage, and user activity. When 95% of the incidents we see are unique to this sector, that blueprint is everything. A real-world example of uncovering a stealthy attack starts with an alert from a behavioral analytics engine flagging an unusual provisioning request. Step one, the system notices a command to route a small, specific block of subscriber data to a new, unfamiliar endpoint. This doesn’t trigger a traditional antivirus alert because no malicious file is involved. Step two, an analyst immediately cross-references this with historical data. Has this type of routing ever happened for this subscriber group? Is the endpoint associated with a legitimate partner? The answer is no. Step three, they pivot to examine the credentials used. The account is valid, but it’s an administrative account that hasn’t performed this specific action in over a year and is doing so outside of normal business hours. Finally, step four, this contextual evidence allows us to isolate the compromised system and see that the actor was “living off the land,” using the network’s own tools against it. It’s this fusion of automation and expert human investigation that turns a seemingly benign action into a confirmed, sophisticated intrusion.
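To make that behavioral-baseline idea concrete, here is a minimal Python sketch of the contextual checks described above: an unfamiliar endpoint, a dormant administrative action, and off-hours activity. All names and thresholds here (the ProvisioningEvent fields, the BASELINE profile, the business-hours window) are hypothetical illustrations, not any vendor's actual detection logic.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# Hypothetical event shape; field names are illustrative only.
@dataclass
class ProvisioningEvent:
    account: str
    action: str            # e.g. "reroute_subscriber_block"
    endpoint: str          # destination the subscriber data is routed to
    timestamp: datetime

# A toy "living blueprint": per-account history of actions and known-good endpoints.
BASELINE = {
    "admin-billing-07": {
        "known_endpoints": {"partner-gw.example.net"},
        "last_seen_action": {"reroute_subscriber_block": datetime(2024, 3, 1)},
        "business_hours": (8, 18),  # hours during which this account normally operates
    }
}

def contextual_findings(event: ProvisioningEvent) -> list[str]:
    """Return human-readable anomalies; an empty list means nothing unusual."""
    profile = BASELINE.get(event.account)
    if profile is None:
        return [f"no behavioral profile for account {event.account}"]

    findings = []
    if event.endpoint not in profile["known_endpoints"]:
        findings.append(f"endpoint {event.endpoint} never seen for this account")

    last = profile["last_seen_action"].get(event.action)
    if last is None:
        findings.append(f"action {event.action!r} never performed by this account")
    elif event.timestamp - last > timedelta(days=365):
        findings.append(f"action {event.action!r} dormant for over a year")

    start, end = profile["business_hours"]
    if not (start <= event.timestamp.hour < end):
        findings.append("activity outside normal business hours")
    return findings

# A valid credential performing a dormant action, off-hours, to a new endpoint:
event = ProvisioningEvent(
    account="admin-billing-07",
    action="reroute_subscriber_block",
    endpoint="203.0.113.50",
    timestamp=datetime(2025, 6, 14, 3, 12),
)
for finding in contextual_findings(event):
    print("FLAG:", finding)
```

Note that no single check is conclusive on its own; it is the accumulation of contextual findings, reviewed by an analyst, that justifies escalation.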

Nation-state actors frequently aim for long-term infiltration instead of immediate disruption. What specific vulnerabilities do they exploit to remain undetected for months, and what are the key, often subtle, indicators that an operator can look for to uncover these persistent threats early?

These advanced persistent threat groups, like APT41 or LightBasin, are masters of patience. They aren’t looking to smash the front door; they’re looking to pick the lock and then live inside the walls. They often exploit vulnerabilities in the seams of the network—the integration points between legacy billing systems and modern virtualized functions, or weaknesses in third-party supplier access. These are areas where security visibility is often fragmented. To stay hidden, they meticulously use the network’s own protocols and administrative tools, so their traffic looks almost identical to a network engineer’s. The key indicators are incredibly subtle and often dismissed as operational noise. An operator should look for things like a slight, unexplained increase in data exfiltration to an unusual geographic location, even if it’s just a few megabytes a day. Another indicator is a trusted administrative account suddenly accessing systems it has rights to but rarely uses, like a billing system account trying to query subscriber identity modules. It’s about spotting behavioral drift. We’re not looking for a single, blaring alarm; we’re searching for a pattern of quiet, anomalous actions that, when pieced together over time, tell the story of a silent intruder who has been there for months.
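Here is a sketch of what spotting that kind of low-and-slow drift might look like in practice, assuming per-destination daily byte counts are available from flow telemetry. The window sizes and z-score threshold are illustrative placeholders, not recommended values.

```python
from statistics import mean, stdev

# Hypothetical daily outbound byte counts to one destination, oldest first.
# A "low and slow" exfil channel: a few extra megabytes a day, never spiking.
daily_bytes = [2.1e6] * 60 + [5.8e6] * 14  # 60 quiet days, then two weeks of drift

def drifted(series, baseline_days=60, recent_days=14, z_threshold=3.0):
    """Flag if the recent daily average sits far above the long-run baseline.

    Thresholds here are illustrative; a real deployment would tune them
    per destination and traffic class.
    """
    baseline = series[:baseline_days]
    recent = series[-recent_days:]
    mu, sigma = mean(baseline), stdev(baseline)
    if sigma == 0:
        sigma = 0.05 * mu or 1.0  # avoid division by zero on perfectly flat baselines
    z = (mean(recent) - mu) / sigma
    return z > z_threshold, z

flagged, z = drifted(daily_bytes)
print(f"drift flagged: {flagged} (z-score {z:.1f})")
```

The point of the comparison window is that a single day's traffic looks like noise; only the sustained pattern across weeks reveals the drift.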

With new directives treating telecoms as critical national infrastructure, the focus has shifted to provable resilience. Beyond having controls in place, how can operators concretely demonstrate their preparedness and rapid response capabilities to regulators before a major incident even occurs?

This is a fundamental shift in regulatory thinking. It’s no longer enough to have a firewall and say you’re secure; you have to prove you can withstand and react to a sophisticated attack. Operators can demonstrate this in a few concrete ways. First is through continuous, intelligence-led penetration testing and red-teaming exercises that specifically simulate the stealthy tactics used by groups like Gallium. Instead of just checking for vulnerabilities, these exercises test the entire detection-to-response lifecycle. Can your team spot the subtle indicators we just discussed? How quickly can they move from detection to containment? Second, robust documentation is key. This isn’t just about technical logs; it’s about creating a clear, auditable trail of every decision made during a simulated incident. A regulator needs to see why an analyst escalated an alert, what evidence they used, and how the response was coordinated. This process-driven evidence is what transforms resilience from a theoretical concept into a demonstrable, measurable capability that satisfies bodies like Ofcom and meets the requirements of directives like NIS2.
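One way to make that auditable decision trail concrete is an append-only incident timeline from which detection-to-containment metrics fall out directly. The record structure below is a hypothetical sketch, not any regulator's required schema.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass(frozen=True)
class TimelineEntry:
    at: datetime
    actor: str        # the analyst or automation that took the step
    action: str       # e.g. "detected", "escalated", "contained"
    rationale: str    # the evidence behind the decision

@dataclass
class IncidentRecord:
    incident_id: str
    entries: list[TimelineEntry] = field(default_factory=list)

    def log(self, actor: str, action: str, rationale: str) -> None:
        # Entries are only ever appended, never edited, keeping the trail auditable.
        self.entries.append(
            TimelineEntry(datetime.now(timezone.utc), actor, action, rationale)
        )

    def elapsed_minutes(self, start_action: str, end_action: str) -> float:
        """Time between two milestones, e.g. detection to containment."""
        start = next(e.at for e in self.entries if e.action == start_action)
        end = next(e.at for e in self.entries if e.action == end_action)
        return (end - start).total_seconds() / 60

# Demo only: timestamps are generated at call time here; in production
# they would come from the actual telemetry and ticketing events.
rec = IncidentRecord("INC-2025-0142")
rec.log("analytics-engine", "detected", "provisioning request to unknown endpoint")
rec.log("analyst.jdoe", "escalated", "no historical precedent; dormant admin account")
rec.log("analyst.jdoe", "contained", "isolated host; revoked session tokens")
print(f"detection-to-containment: {rec.elapsed_minutes('detected', 'contained'):.1f} min")
```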

We’re seeing a significant trend toward “living-off-the-land” attacks that avoid traditional malware payloads. In a complex telecom environment mixing legacy and cloud-native systems, what are the biggest blind spots for conventional security tools, and how does this tactical shift challenge both defense and compliance?

The move to living-off-the-land techniques is a direct response to the industry’s investment in traditional defenses. Attackers know we have antivirus and sandboxing, so they’ve stopped bringing their own tools and started using ours. This creates massive blind spots. A conventional security tool is designed to spot a known piece of malware, a malicious file. But when the attacker is using PowerShell, a legitimate administrative tool, to move laterally through the network, that tool sees nothing wrong. Our own data from the Digital Universe Report H1 2025 showed that direct malware payloads accounted for exactly 0% of trending alerts—that number is jarring and speaks volumes. The biggest blind spot is context. A legacy system might not have the logging capability to show why a certain command was run, while a cloud-native function might produce so many logs that the malicious activity is lost in the noise. This tactical shift poses a huge challenge because it makes detection reliant on understanding intent and behavior, not just identifying a bad file. For compliance, it means that simply having signature-based tools is no longer a valid defense. Regulators now expect to see capabilities that can analyze behavior across this hybrid environment and connect the dots.
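As an illustration of behavior-over-signature detection, the sketch below flags a legitimate binary (PowerShell) purely by the context of its invocation: an unusual parent process and command-line patterns commonly associated with abuse. The heuristics and process names are illustrative examples, not a complete or authoritative rule set.

```python
import re
from dataclasses import dataclass

@dataclass
class ProcessEvent:
    host: str
    parent: str        # parent process image name
    image: str         # process image name
    command_line: str

# Illustrative heuristics: in living-off-the-land intrusions the binary is
# legitimate, so the signal lives in *how* and *from where* it is invoked.
EXPECTED_PARENTS = {"powershell.exe": {"explorer.exe", "cmd.exe", "services.exe"}}
SUSPICIOUS_PATTERNS = [
    re.compile(r"-enc(odedcommand)?\s", re.IGNORECASE),   # base64-packed payloads
    re.compile(r"downloadstring|invoke-webrequest", re.IGNORECASE),
    re.compile(r"-nop\b.*-w\s+hidden", re.IGNORECASE),    # no profile, hidden window
]

def lotl_indicators(ev: ProcessEvent) -> list[str]:
    hits = []
    expected = EXPECTED_PARENTS.get(ev.image.lower())
    if expected is not None and ev.parent.lower() not in expected:
        hits.append(f"unusual parent process: {ev.parent}")
    for pattern in SUSPICIOUS_PATTERNS:
        if pattern.search(ev.command_line):
            hits.append(f"command-line pattern: {pattern.pattern}")
    return hits

ev = ProcessEvent(
    host="bss-legacy-03",
    parent="w3wp.exe",  # a web worker spawning a shell is rarely routine
    image="powershell.exe",
    command_line="powershell.exe -nop -w hidden -enc SQBFAFgA...",
)
for hit in lotl_indicators(ev):
    print("FLAG:", hit)
```

Nothing in that event would trip a signature-based tool, because every artifact involved is a legitimate Windows component; the detection rests entirely on context.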

Effective response now seems to rely heavily on context and speed. How does a Managed Detection and Response (MDR) model help operators investigate threats in minutes, not hours, and how do teams document their decision-making process to satisfy stringent regulatory reporting requirements?

An MDR model is built for this exact challenge. Its power lies in combining sophisticated technology with expert human oversight, 24/7. When a behavioral analytics platform flags a potential threat, it’s not just thrown over the fence as another alert. An MDR analyst immediately picks it up, enriching it with sector-specific threat intelligence and historical network data. This provides the context needed to make a swift, accurate decision. This is how leading MDR operations achieve response times measured in minutes, not hours. Instead of an internal team spending half a day trying to figure out if an alert is a false positive, the MDR team has already investigated, correlated, and provided a recommended action. For regulatory reporting, the documentation is built into the workflow. Every step—from the initial alert, to the analyst’s queries, to the containment action—is logged in a case management system. This creates an immutable record that demonstrates not just what was done, but the rationale behind it. It answers the regulator’s core questions: Did you see it? Did you understand it? Did you act appropriately? That documented, context-rich narrative is precisely what is needed to prove due diligence and satisfy those stringent reporting obligations.
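A simplified sketch of that enrichment-plus-audit workflow follows: each enrichment step and the final decision are written to the case record as they happen, so the answers to "did you see it, understand it, act on it" are a byproduct of the workflow itself. The intel feed and history lookups are stand-in dictionaries; a real MDR stack would query live services.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Case:
    alert_id: str
    steps: list[tuple[datetime, str]] = field(default_factory=list)

    def record(self, note: str) -> None:
        # Every enrichment and decision is logged as it happens, so the
        # regulatory narrative does not have to be reconstructed afterwards.
        self.steps.append((datetime.now(timezone.utc), note))

# Stand-ins for a threat-intel feed and a historical telemetry store.
THREAT_INTEL = {"203.0.113.50": "infrastructure linked to telecom-focused intrusions"}
HISTORY = {"203.0.113.50": 0}  # prior sightings of this endpoint in our telemetry

def triage(alert_id: str, endpoint: str) -> Case:
    case = Case(alert_id)
    case.record(f"alert received: traffic to {endpoint}")

    intel = THREAT_INTEL.get(endpoint)
    case.record(f"intel enrichment: {intel or 'no match in feeds'}")

    seen = HISTORY.get(endpoint, 0)
    case.record(f"historical context: endpoint seen {seen} times before")

    if intel and seen == 0:
        case.record("decision: escalate and recommend isolation of source host")
    else:
        case.record("decision: monitor; insufficient evidence to contain")
    return case

case = triage("ALRT-88213", "203.0.113.50")
for at, note in case.steps:
    print(at.isoformat(timespec="seconds"), "|", note)
```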

What is your forecast for digital resilience in telecoms?

My forecast is that digital resilience will become fully synonymous with operational resilience. For decades, the industry has excelled at building physical and logical redundancy to handle hardware failures or fiber cuts. The future is applying that same engineering discipline and mindset to cybersecurity. We will see a deeper integration of security operations into network operations, where threat intelligence directly informs network architecture and routing decisions in near real-time. Resilience won’t be a separate program; it will be an inherent quality of the network itself, built on a foundation of continuous monitoring, automated response, and deep, evolving intelligence about attacker behavior. The distinction between keeping the network running and keeping it secure will simply cease to exist.
