With the AI mass adoption curve set to crest between 2026 and 2028, businesses face a critical inflection point. To navigate this transformative landscape, we sat down with Dominic Jainy, an IT professional and recognized expert in artificial intelligence and strategic organizational change. Dominic brings a wealth of experience in applying emerging technologies to reshape business models from the ground up.
Our conversation explores the urgent actions leaders must take to prepare their organizations for an AI-native future. We delve into the necessity of crafting a strategic vision beyond simple tool adoption, the monumental task of reskilling a workforce in which nearly 60% of employees will require new training, and the often overlooked but critical role of HR and ethical governance in this transition. Dominic provides a clear-eyed view of how to foster a culture of co-creation and what it will take not just to survive, but to thrive in the next decade.
The article emphasizes creating an “AI North Star.” How can a leader craft this vision to guide cultural redesign and measure ROI, rather than just piloting new tools? Please share a step-by-step approach or an example of a company that has done this well.
That’s the absolute core of the issue. So many leaders get mesmerized by a flashy new AI tool and start a pilot without ever asking the fundamental question: “Why?” An AI North Star isn’t about technology; it’s a strategic declaration of how you will win in your market using AI. The first step is to get your leadership team in a room and refuse to talk about specific vendors or models. Instead, talk about your biggest business challenges and boldest customer promises. From there, you articulate a clear, outcome-focused vision. For example, instead of “We will pilot an AI chatbot,” a North Star sounds like, “We will leverage AI to provide instant, personalized support to every customer, reducing resolution time by 50% and increasing customer lifetime value.” This vision then becomes the filter for every decision—it guides which operational workflows you redesign, which talent you hire, and, most importantly, it gives you a concrete benchmark to measure your return on investment against.
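To make that filtering function tangible, here is a minimal Python sketch of a North Star decision filter. Everything in it is an invented illustration: the initiative names, their projected contributions, and the 5% customer-lifetime-value target are assumptions; only the 50% resolution-time figure comes from the example vision above.

```python
# Hypothetical North Star decision filter. The vision's outcome targets
# become explicit numbers, and every proposed initiative is scored by how
# far it moves those numbers. All figures below are illustrative.
NORTH_STAR = {
    "resolution_time_reduction": 0.50,  # from the example vision
    "clv_lift": 0.05,                   # assumed CLV target, for illustration
}

proposals = [
    {"name": "AI triage agent",         "resolution_time_reduction": 0.35, "clv_lift": 0.02},
    {"name": "Generic chatbot pilot",   "resolution_time_reduction": 0.05, "clv_lift": 0.00},
    {"name": "Personalized follow-ups", "resolution_time_reduction": 0.10, "clv_lift": 0.04},
]

def north_star_score(p: dict) -> float:
    """Average share of each North Star target this proposal is projected to cover."""
    return sum(min(p[k] / target, 1.0) for k, target in NORTH_STAR.items()) / len(NORTH_STAR)

# Rank initiatives by alignment with the vision rather than by novelty.
for p in sorted(proposals, key=north_star_score, reverse=True):
    print(f"{p['name']}: covers {north_star_score(p):.0%} of North Star targets")
```

The point of the exercise is not the arithmetic but the discipline: an initiative that cannot state its expected contribution to the North Star metrics does not make the list.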
With reports suggesting 59% of workers will need retraining by 2030, how should a company conduct a skills gap audit for an AI-enhanced future? Could you walk us through the process of mapping roles, identifying functional gaps, and then segmenting talent by their “reskillability”?
This is a massive undertaking, and it can feel paralyzing. The key is to stop looking at current job descriptions and start mapping future-state workflows. First, you take a core business function, say, marketing, and map out how it will operate with AI copilots and agents deeply integrated. You’ll immediately see that roles will shift from manual execution to strategic oversight, creative direction, and exception handling. This process reveals the real gaps—not just in technical skills, but in the most in-demand competencies like analytical thinking and resilience. Once you have that map, you can segment your talent. You’ll find a group that needs light upskilling, perhaps through micro-learning. Another group will require more substantial reskilling, maybe through an intensive bootcamp. And a third group may need a complete career transition pathway. This “reskillability” segmentation allows you to invest your L&D budget surgically instead of just offering generic courses and hoping for the best.
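As a rough illustration of that segmentation step, here is a short Python sketch. The skill lists, the 60%/30% thresholds, and the tier labels are assumptions chosen to mirror the three groups described above; a real audit would draw on far richer data than a skills checklist.

```python
# Illustrative "reskillability" segmentation: compare each employee's
# current skills with the skills a future-state, AI-integrated role will
# demand, then bucket people into one of three development tracks.
from dataclasses import dataclass

@dataclass
class Employee:
    name: str
    current_skills: set[str]

# Skills the redesigned marketing role is assumed to require (example only)
FUTURE_ROLE_SKILLS = {
    "analytical thinking", "creative direction", "exception handling",
    "data literacy", "prompt fluency",
}

def reskillability(emp: Employee) -> float:
    """Fraction of the future role's skills the employee already holds."""
    return len(emp.current_skills & FUTURE_ROLE_SKILLS) / len(FUTURE_ROLE_SKILLS)

def segment(emp: Employee) -> str:
    score = reskillability(emp)
    if score >= 0.6:   # small gap: micro-learning is enough
        return "light upskilling"
    if score >= 0.3:   # substantial gap: intensive bootcamp
        return "substantial reskilling"
    return "career transition pathway"  # gap too wide to close within the role

team = [
    Employee("Ana", {"analytical thinking", "data literacy", "creative direction"}),
    Employee("Ben", {"manual reporting", "analytical thinking", "data literacy"}),
    Employee("Caro", {"manual reporting"}),
]
for emp in team:
    print(f"{emp.name}: {segment(emp)}")
```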
Given that only 21% of HR leaders are currently involved in AI strategy, what are the first critical steps HR should take to become a strategic partner? Please describe how they can architect AI talent pipelines and establish the ethical frameworks needed for workforce transformation.
That 21% figure is genuinely alarming because it positions HR as a downstream cleanup crew for a tech-led decision. The very first step is for HR leaders to build their own AI literacy. They must understand the technology enough to challenge assumptions and speak the language of the business. The second step is to demand a seat at the strategy table, armed with data about the workforce. They need to shift the conversation from “Which tool should we buy?” to “What talent, skills, and culture do we need to build to make any tool successful?” From that strategic position, they can begin architecting the future. This means creating internal mobility programs powered by AI to match employees with new roles, redesigning compensation to reward new skills, and, crucially, building the ethical guardrails. HR must lead the charge in establishing policies for how AI is used in hiring, performance reviews, and promotions to ensure fairness and transparency.
Ethical governance is crucial as AI becomes more agentic. Can you detail the ideal composition of a multidisciplinary AI governance board and outline the first three responsible use policies it should establish to ensure transparency and prepare for regulations like the EU AI Act?
You can’t treat AI governance as just an IT or legal problem; it touches every part of the organization. An effective multidisciplinary board must include leaders from HR, Legal, Risk, and Technology. You need HR to be the voice for the employee experience and fairness. You need Legal to navigate the complex and growing web of regulations like the EU AI Act. You need Risk to identify potential brand and operational damage. And you need Tech to explain what’s actually possible. The first three policies this group should establish are non-negotiable. First, a clear policy on data usage and privacy: what data can be used to train models, and how is it protected? Second, a standard for transparency and explainability, ensuring that you can understand and justify any AI-driven decision. Third, a “human-in-the-loop” protocol that explicitly defines which decisions require human oversight and can never be fully automated.
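One way to make that third policy auditable is to write the human-in-the-loop protocol down as an explicit table that any AI agent must consult before acting. The sketch below is hypothetical: the decision categories and rules are assumed for illustration, and a real policy would need to map to your actual workflows and to the EU AI Act’s risk tiers.

```python
# Hypothetical human-in-the-loop policy table. Unknown decision types
# deliberately default to human review rather than automation.
HITL_POLICY = {
    "hiring_screen":    "human_review_required",   # employment decisions are high-risk
    "promotion":        "human_decision_only",
    "support_reply":    "automate_with_audit_log",
    "invoice_matching": "automate_with_sampling",
}

def may_automate(decision_type: str) -> bool:
    """An agent may act autonomously only on explicitly whitelisted decision types."""
    rule = HITL_POLICY.get(decision_type, "human_review_required")
    return rule.startswith("automate")

assert may_automate("support_reply")
assert not may_automate("hiring_screen")
assert not may_automate("brand_new_case")  # unlisted decisions stay with humans
```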
The content suggests prioritizing 3-5 high-impact, agentic AI use cases. How should a leadership team identify these opportunities in areas like finance or customer service? Please detail a pilot process that effectively proves a measurable impact before a company decides to scale.
The biggest mistake is boiling the ocean. You have to be ruthless in your prioritization. The best way to identify these high-impact use cases is to look for areas with high volume, repetitive tasks, and significant potential for human error. In finance, think about automating the entire accounts payable process, not just one part of it. In customer service, think about an AI agent that can handle 80% of inquiries from start to finish. Once you have your 3-5 candidates, you launch a tightly scoped pilot with a cross-functional team. The pilot’s goal is singular: prove a measurable impact. You set a clear hypothesis, like “This AI agent will reduce average call handling time by 40% while maintaining a 95% customer satisfaction score.” You run it for a set period, measure relentlessly, and only after you’ve proven that specific, needle-moving impact do you even begin the conversation about a full-scale rollout.
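To show what “measure relentlessly” can look like in practice, here is a small Python sketch that evaluates a pilot against exactly that hypothesis. The sample figures are invented for illustration; only the 40% and 95% thresholds come from the example above.

```python
# Illustrative scale/no-scale gate for a customer-service pilot.
from statistics import mean

baseline_handle_times = [12.0, 10.5, 14.0, 11.0, 13.5]  # minutes, pre-pilot sample
pilot_handle_times    = [7.0, 6.5, 8.0, 6.0, 7.5]       # minutes, during the pilot
pilot_csat_scores     = [0.97, 0.95, 0.96, 0.94, 0.98]  # per-interaction CSAT

reduction = 1 - mean(pilot_handle_times) / mean(baseline_handle_times)
csat = mean(pilot_csat_scores)

# Hypothesis: >= 40% reduction in handling time at >= 95% satisfaction
passed = reduction >= 0.40 and csat >= 0.95
print(f"handling-time reduction: {reduction:.0%}, CSAT: {csat:.0%}")
print("decision:", "open the rollout conversation" if passed else "iterate or stop")
```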
Employees are often using AI three times more than leaders realize. How can an organization best foster a culture of co-creation to harness this grassroots adoption? Share some specific incentives or metrics you’ve seen work to reward innovation through internal task forces or challenges.
That statistic is a gift! It means you have a curious, motivated workforce that is already experimenting. The leadership’s job is not to clamp down with rigid policies but to provide guardrails and incentives to channel that energy. One of the most effective things I’ve seen is the creation of “AI task forces” within business units. You give them a real business problem to solve, a small budget, and the autonomy to experiment. To fuel this, you create internal challenges—an “AI Innovator of the Quarter” award, for example. The prize isn’t just a gift card; it’s a budget and leadership support to scale their idea. You start measuring and celebrating things like the number of processes automated or the time saved through employee-led AI initiatives. This transforms AI from a top-down mandate that people resent into a bottom-up movement that people own.
What is your forecast for the competitive landscape in 2028, specifically regarding the companies that fail to prepare for the 2026 AI adoption curve?
By 2028, I believe we’ll see a stark bifurcation in the market. There will be two types of companies: the AI-native and the AI-naïve. The companies that spend the next two years strategically rewiring their culture, reskilling their talent, and integrating agentic AI won’t just be more efficient; they will operate with a speed and intelligence that is simply unattainable for the laggards. For those who fail to prepare, 2028 will be a harsh reality check. They will be battling higher costs, slower decision-making, and a critical talent drain as their best people leave for more dynamic, AI-powered organizations. The competitive gap won’t be about who has the fanciest algorithm. It will be a chasm defined by organizational agility and human-machine collaboration. The real story won’t be the technology itself, but the bold leadership and cultural change that harnessed it.
