Is Agentic AI the Key to Faster Business Returns?

Dominic Jainy is a seasoned IT professional whose career has spanned the evolution of machine learning, blockchain, and now, the transformative rise of agentic artificial intelligence. With a background rooted in complex system architecture, he has witnessed firsthand how technology shifts from a peripheral tool to the central nervous system of a global enterprise. Jainy’s expertise lies in bridging the gap between experimental code and scalable business strategy, making him a sought-after voice for organizations looking to navigate the second wave of the AI revolution. In this conversation, we explore the transition from simple automation to autonomous agents, the financial realities of scaling these systems, and the regional nuances that dictate how AI is deployed across the globe.

Our discussion centers on the rapid maturation of AI agents, which are now being integrated into core business processes by over half of large organizations. We delve into why a small group of early adopters is seeing disproportionate financial returns and how they manage to move from concept to production in as little as three months. Jainy also addresses the critical hurdles of data security and system integration that remain top of mind for executives today.

Over half of large enterprises have deployed AI agents, with many running more than ten simultaneously. How do these organizations identify which workflows to automate first, and what specific steps ensure these agents transition from experimental tools to core business systems without disrupting current operations?

The most successful organizations don’t just look for “easy” tasks; they target the intersections of high-volume data and critical decision-making, such as customer service or marketing operations. Currently, about 52% of firms have already moved into this space, and the question has shifted from whether AI works to how fast it can be embedded. To ensure a smooth transition, the 39% of companies running ten or more agents often focus on redesigning the core business process rather than just layering a tool on top of an old way of working. This involves creating a clear roadmap where the agent moves from a planning stage to executing tasks with limited human input, effectively becoming a core engine for competitive growth. It’s a psychological shift for the organization, where employees stop seeing AI as a novelty and start relying on it as a reliable, high-performance teammate.

A small group of leaders allocates half their AI budget specifically to agentic systems, resulting in significantly higher ROI than their peers. What metrics define a successful rollout in these high-investment environments, and how do they effectively redistribute capital from traditional projects to fund these advanced agents?

These “agentic AI early adopters” make up about 13% of the market, and 88% of them report positive returns in at least one use case, a stark contrast to the 74% average seen elsewhere. The primary metric for success in these environments is speed to value, often measured by how quickly a concept can reach full operational deployment. To fund this, nearly half of the executives we observe are making the difficult but necessary choice to reallocate budgets away from non-AI projects that have stagnated. They are seeing that generative AI contributes to significant business growth, with 53% of those reporting gains estimating revenue increases between 6% and 10%. By championing AI as the central driver for revenue, these leaders find it easier to justify moving capital into systems that demonstrably improve productivity and time to market.

Different sectors prioritize unique agent tasks, from fraud detection in finance to quality control in retail. How do regional priorities—such as a focus on technical support versus customer service—shape the development of these tools, and what industry-specific challenges usually arise during the integration phase?

Geography plays a massive role in how these tools are architected and deployed, as local market demands dictate the agent’s primary mission. In Europe, we see a heavy emphasis on using agents for technical support to navigate complex infrastructure, while in the Asia-Pacific region, the focus shifts toward front-end customer service to manage high interaction volumes. Industry-specific challenges often stem from the “intelligence” requirement; for example, a telecommunications agent needs to understand network configuration, whereas a retail agent must master quality control nuances. The integration phase is frequently the most difficult part, as it requires embedding these intelligent systems directly into existing, often rigid, business architectures. When this integration is successful, however, it allows organizations to tackle complex, industry-specific tasks that were previously too labor-intensive to automate effectively.

With many firms seeing revenue growth between 6% and 10% from generative AI, the focus is shifting toward scaling production. What operational changes are necessary to move a concept to full production within a three-to-six-month window, and how can companies maintain this momentum as implementation costs decrease?

Moving from a pilot to production in under six months requires a radical shortening of the traditional development lifecycle, a feat now achieved by over half of the organizations surveyed. Operationally, this means breaking down the silos between IT and business units so that feedback loops are instantaneous and goals are perfectly aligned. As implementation costs decline, 77% of executives are actually increasing their spending on generative AI rather than pocketing the savings, reinvesting that capital into more sophisticated agentic capabilities. This momentum is maintained by a culture that prioritizes productivity gains and customer experience outcomes as the ultimate indicators of success. The shift from experimentation to scaling is no longer just a goal; it is a necessity for those who want to see that consistent revenue growth of up to 10% continue year over year.

Data privacy and system integration remain the primary hurdles for executives choosing AI providers. What foundational strategies should a company implement to secure its data architecture before deploying agents, and how can they ensure these systems remain compliant while interacting with sensitive internal workflows?

Foundational data security is the bedrock of the next chapter in the AI wave, and more than one-third of executives now rank it as their top priority when selecting providers. A company must first adopt a modern data strategy that emphasizes strong governance and clear visibility into where and how data flows through large language models. Before agents are given the “keys” to sensitive internal workflows, there must be a rigorous vetting of the infrastructure to ensure it can handle autonomous actions without compromising privacy. The goal for 2025 is to compound the successes of the previous year, and that is only possible if the systems are built on a secure, integrated architecture that prevents data leakage. By focusing on governance from the start, organizations can ensure that their AI agents remain compliant and trustworthy, even as they handle increasingly complex and sensitive business tasks.

What is your forecast for agentic AI?

My forecast is that by 2025, we will see a definitive separation between companies that use AI as a tool and those that operate as agentic-first enterprises. The conversation is moving away from the basic capabilities of large language models and toward the “compounding success” of systems that can plan, execute, and learn autonomously. I expect that the 13% of early adopters we see today will grow significantly, and those who have not integrated agents into at least ten core operations will find themselves at a massive competitive disadvantage. Ultimately, agentic AI will become the standard engine for business growth, where intelligence is not just an add-on, but the very fabric of how a company creates value and interacts with its customers.
