Dominic Jainy is a seasoned IT professional with a profound command of artificial intelligence, machine learning, and blockchain technology. With years of experience navigating the intersection of complex data systems and industry-specific workflows, he has become a leading voice on how emerging technologies can reshape traditional business sectors. In this conversation, we explore the transition from legacy robotic process automation to the new era of agentic AI within the tax function. We delve into how these goal-oriented systems manage the unpredictability of financial data, the evolving role of the tax professional in a highly automated environment, and the strategic shifts necessary to maintain a competitive edge in a rapidly changing regulatory landscape.
RPA relies on predefined sequences to move data, but it often fails when encountering unforeseen data errors. How do autonomous agents interpret goals differently when handling messy source formats, and what specific steps are required to transition a tax team toward this goal-oriented approach?
The fundamental difference lies in how the technology perceives the task; while RPA follows a rigid map, an autonomous agent focuses on the destination. When an agent encounters a “messy” source format that wasn’t part of its initial training, it doesn’t simply crash or trigger an error code like a traditional bot would. Instead, it interprets the objective—such as standardizing data for a tax return—and autonomously determines the necessary logical steps to reach that goal. To transition a team, leaders must first move away from the “step-by-step” mindset and begin training staff on how to define clear outcomes and parameters for the AI. This shift requires establishing a framework where users feel comfortable letting the agent work out the “how” while they focus on validating the “what.”
Modern tools analyze trial balances and flag unusual items, shifting the tax professional’s role toward reviewing rather than manual entry. How does this change affect overall team productivity, and what are the specific risks of moving human intervention further down the process chain?
This shift is a massive catalyst for productivity because it shrinks processes that used to consume hours of tedious labor into tasks that take only a few minutes. By flagging unusual line items automatically, the software allows tax experts to bypass the data-entry grind and jump straight into the high-value analytical work. However, the primary risk of moving human intervention further down the chain is the potential for “review fatigue” or a loss of context regarding the underlying data. Because the expert is no longer touching every single data point, they must develop sophisticated oversight skills to ensure that the agent’s logic remains sound and that critical nuances aren’t smoothed over by the automation.
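The "flag unusual line items" behavior can be approximated with a simple period-over-period comparison. The threshold, field names, and rules below are illustrative assumptions rather than any product's logic, but they show the shape of the review queue a tax professional would inherit:

```python
def flag_unusual(current: dict, prior: dict, pct_threshold: float = 0.5) -> list:
    """Flag accounts whose balance swung sharply versus the prior period."""
    flagged = []
    for account, amount in current.items():
        base = prior.get(account)
        if base is None:
            # an account with no prior-period history always merits a look
            flagged.append((account, "new account"))
        elif base != 0 and abs(amount - base) / abs(base) > pct_threshold:
            flagged.append((account, f"moved {amount - base:+,.2f} vs prior"))
    return flagged
```

Everything that is *not* flagged flows through untouched, which is exactly where the "review fatigue" risk lives: the reviewer must trust that the thresholds encode the right notion of "unusual" for that client.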
Automating research into cross-border structures and regulatory changes saves significant time during high-volume advisory work. Can you describe a scenario where an agentic tool handled a complex data challenge independently, and how does the human review process specifically validate those findings?
Imagine a scenario involving complex cross-border structuring where an agent is tasked with collating and cross-referencing shifting regulations across multiple jurisdictions. In such a case, the agent can independently identify relevant tax treaties, summarize regulatory changes, and prepare a draft response for a client. The human review process then acts as the final gate, where the expert scrutinizes the agent’s findings for legal accuracy and strategic alignment. It’s not just about checking for typos; it’s a deep-dive validation where the professional uses their years of experience to ensure the AI’s “independent” research holds up under the weight of current law. This collaboration allows the expert to spend significantly less time on document formatting or version control and more time on the actual advisory strategy.
Creating automation workflows through simple prompts allows non-technical staff to handle tasks previously reserved for software specialists. What training is necessary for tax teams to manage these tools effectively, and how does this change the cost-benefit analysis of maintaining legacy RPA platforms?
The training for modern tax teams is becoming less about coding and more about “prompt engineering” and logical process design. Staff need to learn how to communicate with AI products, such as Claude Cowork, using natural language to generate custom workflows without needing a background in software engineering. This democratization of technology completely upends the cost-benefit analysis of legacy RPA platforms, which often require expensive maintenance and specialized consultants to update. When a tax professional can update and re-test automation logic in real time as requirements change, the overhead costs of the $27 billion RPA industry start to look very unattractive compared to lightweight, custom-built AI tools.
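The "plain sentence in, workflow out" idea can be sketched with a toy step registry. To be clear, this is not how any specific product works; it is a minimal illustration of composing a pipeline from a natural-language description, with all step names invented for the example:

```python
# Registry of reusable steps a non-technical user can invoke by name.
STEPS = {
    "extract": lambda data: [r for r in data if r],              # drop empty rows
    "standardize": lambda data: [str(r).strip().lower() for r in data],
    "dedupe": lambda data: list(dict.fromkeys(data)),            # keep first occurrence
}

def build_workflow(prompt: str):
    """Select steps whose keyword appears in the prompt, in registry order."""
    selected = [fn for word, fn in STEPS.items() if word in prompt.lower()]
    def run(data):
        for step in selected:
            data = step(data)
        return data
    return run

clean = build_workflow("Extract the rows, standardize names, then dedupe them")
# clean([" Apple", "apple", "", "Pear"]) → ["apple", "pear"]
```

Because the workflow is regenerated from the prompt, changing the logic when requirements shift means editing a sentence and re-running it, which is the cost profile that makes heavyweight RPA maintenance contracts hard to justify.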
What is your forecast for agentic AI in the tax sector?
I believe we are entering a phase where the very definition of “compliance work” will be rewritten as agentic AI becomes the standard operating system for the industry. Within the next few years, the reliance on rigid, sequence-based bots will fade, and we will see a move toward “self-healing” workflows that adapt to new tax laws the moment they are published. The most successful firms will be those that view AI not as a replacement for human intellect, but as a way to position human expertise at the most critical decision-making stages. Ultimately, AI will handle the high-volume, repetitive tasks of the tax function, leaving the nuanced and complex advisory challenges to the humans who now have the bandwidth to solve them.
