We’re joined today by Dominic Jainy, an IT professional whose work at the intersection of AI, machine learning, and blockchain offers a unique perspective on technology’s role across industries. We’ll be exploring the critical but often overlooked challenge facing Communications Service Providers (CSPs): how to govern the powerful AI systems being integrated into their core operations. Our conversation will touch upon the gap between AI adoption and real business value, the emerging risks from interconnected AI “agents,” and the architectural and security shifts needed to build trust in a future of automated decision-making.
With so many companies using AI but only a small fraction reporting mature strategies, how can CSPs bridge this gap between mere adoption and tangible impact? What initial governance frameworks are essential to establish trust in AI-driven operational workflows?
That gap you mentioned is the central challenge. The McKinsey data is stark: nearly 8 in 10 companies are experimenting, but only 1% feel they have a mature strategy. This tells us that simply deploying an AI tool isn’t the finish line; it’s the starting gun. For CSPs, bridging this gap means shifting focus from the technology itself to the trust in its decisions. The initial governance framework can’t be an afterthought. It must be built around explainability and policy-driven automation from day one. This means ensuring that when an AI touches something as critical as billing or service configuration, every action can be traced, explained, and justified against a clear commercial or regulatory boundary. It’s about building the operational guardrails before you let the car drive itself.
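To make that traceability concrete, here is a minimal sketch of what such a decision record could look like. The names and fields are illustrative assumptions, not drawn from any specific BSS product.

```python
# A minimal sketch of a traceable AI decision record for a hypothetical CSP
# billing workflow. All names (PolicyBoundary, DecisionRecord) are illustrative.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass(frozen=True)
class PolicyBoundary:
    """A commercial or regulatory rule the decision was checked against."""
    rule_id: str          # e.g. "BILLING-CREDIT-01" (hypothetical)
    description: str

@dataclass(frozen=True)
class DecisionRecord:
    """One AI action, captured so it can be traced, explained, and justified."""
    agent_id: str                # which AI agent acted
    action: str                  # e.g. "adjust_invoice"
    inputs: dict                 # the data the model saw
    explanation: str             # human-readable rationale for the action
    boundary: PolicyBoundary     # the rule it was validated against
    approved: bool               # did it pass the policy check?
    timestamp: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))

# Example: a billing credit that is explainable and tied to an explicit rule.
record = DecisionRecord(
    agent_id="billing-agent-7",
    action="adjust_invoice",
    inputs={"account": "ACME-001", "delta": -12.50},
    explanation="Credit applied for a verified service outage.",
    boundary=PolicyBoundary("BILLING-CREDIT-01",
                            "Credits require a verified outage."),
    approved=True,
)
```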
The “Agentic AI Mesh” concept involves multiple AI agents acting as a coordinated network. What are the most significant business risks from coordination failures between these agents in core BSS/OSS systems, and what practical steps can leaders take to ensure clear accountability?
The Agentic AI Mesh is a powerful concept, but it also introduces a terrifying new class of risk: systemic failure at machine speed. The most significant danger in a core BSS/OSS environment is the cascading error. Imagine one agent misinterprets a new network capability, another agent uses that faulty data to create a new service plan, and a third agent’s billing model misprices it. Before a human can even review a report, you could have thousands of customers on a non-existent, unprofitable plan. The coordination failure isn’t a single bug; it’s a series of misaligned decisions. To ensure accountability, leaders must stop thinking of AI as a black box. They need to implement systems that enforce clear swimlanes for each AI agent, with defined authority limits and immutable audit logs for every interaction between them. Accountability here isn’t about blaming an algorithm; it’s about tracing a decision back to the policy and the process that governed it.
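As an illustration of those swimlanes, here is a minimal sketch in which a mediator checks every inter-agent action against an explicit authority scope and writes each attempt to an audit log. The agent names, actions, and limits are all hypothetical.

```python
# A sketch of "swimlanes" between agents: each agent has an explicit authority
# scope, and every inter-agent call is checked and logged before it runs.
from datetime import datetime, timezone

# Authority limits per agent: permitted actions and a ceiling on blast radius.
SWIMLANES = {
    "network-agent": {"actions": {"report_capability"},  "max_customers": 0},
    "catalog-agent": {"actions": {"draft_service_plan"}, "max_customers": 0},
    "billing-agent": {"actions": {"price_plan"},         "max_customers": 5000},
}

AUDIT_LOG = []  # append-only here; immutable storage in a real deployment

def mediate(agent_id: str, action: str, customers_affected: int) -> bool:
    """Allow the action only if it is inside the agent's swimlane; log either way."""
    lane = SWIMLANES.get(agent_id, {"actions": set(), "max_customers": 0})
    allowed = (action in lane["actions"]
               and customers_affected <= lane["max_customers"])
    AUDIT_LOG.append({
        "ts": datetime.now(timezone.utc).isoformat(),
        "agent": agent_id, "action": action,
        "customers": customers_affected, "allowed": allowed,
    })
    return allowed

# A billing agent trying to touch more customers than its limit allows is
# stopped before the cascading error described above can propagate.
assert mediate("billing-agent", "price_plan", 4000) is True
assert mediate("billing-agent", "price_plan", 90000) is False
```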
When an AI autonomously misprices a service, the financial and regulatory risks can escalate quickly. Can you describe how a policy-driven automation framework would prevent such an error and outline the key components needed to ensure AI operates within set commercial boundaries?
This is where theory meets reality, and the consequences are immediate. An AI that misprices a service can cause more revenue leakage in minutes than a human team could in a quarter. A policy-driven automation framework acts as a non-negotiable supervisor. It doesn’t just suggest boundaries; it enforces them. For instance, a commercial policy might state that no B2C mobile plan can be priced below a certain floor or discounted more than 20%. If an AI, perhaps learning from anomalous data, tries to price a service outside that rule, the framework blocks the action instantly. The key components are a centralized policy engine where business leaders define the rules, a continuous observability layer that monitors every AI decision against these policies in real time, and an exception-handling workflow that alerts a human supervisor when a rule is nearly breached or a new scenario emerges. It’s about building a system of automated checks and balances.
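A minimal sketch of that pricing check might look like the following. The floor, discount cap, and escalation margin are assumed values for illustration; a real engine would load them from the centralized policy store rather than hard-coding them.

```python
# A sketch of the pricing policy described above, with assumed rule values.
PRICE_FLOOR = 10.00        # no B2C mobile plan priced below this (illustrative)
MAX_DISCOUNT = 0.20        # no discount above 20%
NEAR_BREACH_MARGIN = 0.05  # escalate to a human within 5% of the limit

def check_pricing(list_price: float, proposed_price: float) -> str:
    """Return 'block', 'escalate', or 'allow' for an AI-proposed price."""
    discount = 1.0 - (proposed_price / list_price)
    if proposed_price < PRICE_FLOOR or discount > MAX_DISCOUNT:
        return "block"      # hard stop: outside the commercial boundary
    if discount > MAX_DISCOUNT - NEAR_BREACH_MARGIN:
        return "escalate"   # near-breach: route to a human supervisor
    return "allow"

# An AI pricing a 30.00 plan at 22.50 (a 25% discount) is blocked instantly;
# 24.50 (~18% off) is escalated for review; 27.00 (10% off) goes through.
assert check_pricing(30.00, 22.50) == "block"
assert check_pricing(30.00, 24.50) == "escalate"
assert check_pricing(30.00, 27.00) == "allow"
```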
As CSPs combine various vendor and in-house AI models, a composable architecture offers a foundation for control. How does this approach specifically help maintain security and auditability across different systems, and what are the first steps a telco should take to implement it?
A composable architecture is fundamentally about avoiding monolithic black boxes. When you have different AIs—one for network optimization from a vendor, another for customer churn built in-house—a composable approach allows you to wrap each one in a consistent governance layer. This means you can apply the same security rules, the same audit logging standards, and the same access controls to every model, regardless of its origin. This is critical for auditability because it creates a single, coherent story of how a decision was made, even if it passed through three different AI systems. The first step for a telco is to map its critical BSS/OSS workflows and identify the decision points. Then, instead of hard-coding a single AI into that process, it should build standardized APIs that allow different AI “agents” to be plugged in or swapped out. This architectural decision creates the flexibility and control needed to manage a multi-model future without compromising security.
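One way to picture that standardized plug-in point is the sketch below, where any model, vendor-supplied or in-house, sits behind the same interface and inherits the same access control and audit logging. The single-method interface and all names are assumptions for illustration.

```python
# A sketch of the composable pattern: every model is plugged in behind the
# same interface and wrapped in one shared governance layer.
from typing import Protocol

class Agent(Protocol):
    """The standardized API each AI agent must expose to be pluggable."""
    def decide(self, context: dict) -> dict: ...

class GovernedAgent:
    """Wraps any Agent with the same logging and access control, whatever its origin."""
    def __init__(self, inner: Agent, name: str, allowed_fields: set):
        self.inner, self.name, self.allowed_fields = inner, name, allowed_fields

    def decide(self, context: dict) -> dict:
        # Uniform access control: the agent sees only fields it is entitled to.
        visible = {k: v for k, v in context.items() if k in self.allowed_fields}
        decision = self.inner.decide(visible)
        # Uniform audit logging: one coherent record format for every model.
        print(f"[audit] agent={self.name} inputs={sorted(visible)} "
              f"decision={decision}")
        return decision

class ChurnModel:  # stand-in for an in-house model; a vendor model plugs in the same way
    def decide(self, context: dict) -> dict:
        return {"churn_risk": "high" if context.get("complaints", 0) > 2 else "low"}

churn = GovernedAgent(ChurnModel(), "churn-v1", allowed_fields={"complaints"})
churn.decide({"complaints": 4, "billing_secret": "never shown to this agent"})
```

The design choice here is that governance lives in the wrapper, not in the models, so swapping a vendor model for an in-house one leaves the security rules and the audit trail format untouched.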
Traditional telecom security focused on infrastructure and access control, which is insufficient for managing autonomous AI decisions. What specific new capabilities must security and operations teams develop to ensure the integrity of decisions that impact revenue, compliance, and customer outcomes?
This is a profound shift in mindset. For decades, telecom security was about protecting the pipes and the data centers. Now, it must be about protecting the integrity of the decisions flowing through them. The new capabilities required are less about firewalls and more about forensic accounting for automated systems. Security and operations teams need to develop skills in “decision integrity assurance.” This includes the ability to audit an AI’s logic, not just its output. They need tools for continuous observability to track how AI models adapt and change over time, flagging any drift that could lead to non-compliant or commercially unsound decisions. And critically, they need to build a rapid-response capability to investigate and remediate an automated error, understanding the full business impact—from revenue to regulation—of a single faulty decision.
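As a simple illustration of catching that drift, here is a sketch that flags when a model’s recent decision pattern deviates from its approved baseline. The rolling approval-rate check and thresholds are simplifying assumptions; production systems would use proper statistical drift tests.

```python
# A sketch of drift monitoring for decision integrity, using a rolling
# approval-rate check. Baseline, tolerance, and window size are assumptions.
from collections import deque

class DriftMonitor:
    """Flags when an AI's recent decisions drift from its approved baseline."""
    def __init__(self, baseline_rate: float, tolerance: float = 0.10,
                 window: int = 500):
        self.baseline_rate = baseline_rate  # e.g. approval rate at launch
        self.tolerance = tolerance          # allowed deviation before flagging
        self.recent = deque(maxlen=window)  # rolling window of recent outcomes

    def observe(self, approved: bool) -> bool:
        """Record one decision; return True once the model has drifted."""
        self.recent.append(approved)
        if len(self.recent) < self.recent.maxlen:
            return False                    # not enough data yet
        rate = sum(self.recent) / len(self.recent)
        return abs(rate - self.baseline_rate) > self.tolerance

monitor = DriftMonitor(baseline_rate=0.30)
# Feed 500 decisions where 60% are approvals: well above the 30% baseline.
drifted = [monitor.observe(i % 10 < 6) for i in range(500)][-1]
assert drifted  # the drift is flagged for investigation before it compounds
```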
What is your forecast for AI-driven telco operations over the next three to five years?
Over the next three to five years, I forecast a critical divergence in the industry. We will see a group of CSPs who successfully make the leap from isolated AI pilots to a truly trusted, automated operational core. These operators will have embraced governance not as a restriction but as a strategic enabler. Their “Agentic AI Mesh” will be a coordinated, resilient, and transparent system that drives efficiency and innovation. On the other hand, there will be a group that continues to deploy AI tools without the underlying governance frameworks. They will be plagued by opaque errors, regulatory fines, and revenue leakage, constantly struggling to explain why their automated systems made certain decisions. The winners won’t be the ones with the most AI models, but the ones who have mastered the art of trusted automation at scale.
