Today, we’re joined by Ling-Yi Tsai, an HRTech expert with decades of experience helping organizations navigate the complexities of technological change. She specializes in the human side of technology, focusing on how tools for recruitment, onboarding, and talent management can be integrated to support, rather than displace, the workforce.
We’ll be exploring the significant disconnect between executive confidence and employee anxiety surrounding AI adoption. Our conversation will cover how leadership can build trust in this new era, the practical steps for balancing rapid innovation with a people-first approach, and what a truly effective employee upskilling program looks like when a company is serious about investing in its people.
With over two-thirds of workers concerned about AI’s impact, yet a majority of executives feeling very prepared for its adoption, what specific communication strategies can leaders use to bridge this perception gap? Please share a step-by-step example for rolling out a new AI tool transparently.
The core of this issue is a massive communication breakdown. Leaders see a tool for efficiency, but employees hear a threat to their livelihood, and that fear is palpable. To bridge this, transparency can’t just be a buzzword; it has to be a detailed, multi-stage process. First, before a single line of code is integrated, leadership must hold town halls explaining the “why,” not just the “what.” Instead of saying, “We’re implementing an AI for data analysis,” they should say, “We know you spend 10 hours a week on repetitive reporting. We’re exploring a tool to reduce that to one hour, freeing you up for higher-level strategic work.” Second, create a pilot group with a mix of skeptics and enthusiasts, and share their honest feedback—the good, the bad, and the clunky—with the entire organization. Third, during the rollout, over-communicate through weekly updates, Q&A sessions, and dedicated support channels. The goal is to demystify the technology and show a clear path for how it helps employees, not replaces them.
Given that over a quarter of employees lack trust in their company’s ability to implement AI fairly, how can leadership concretely demonstrate fairness and build psychological safety? Please share key metrics a company could use to track employee sentiment during this transition.
Building trust when more than a quarter of your workforce is already skeptical is an uphill battle, and it’s won through actions, not announcements. Fairness must be visible. For instance, leadership needs to establish and publish clear guidelines on how AI will be used for performance evaluation, promotion, or task allocation, and—this is critical—how humans will remain in the loop for final decisions. A powerful way to build psychological safety is to admit when you’ve fumbled. If an initial rollout causes confusion or a tool doesn’t work as promised, a senior leader needs to stand up and say, “We missed the mark on this, and here’s how we’re fixing it based on your feedback.” To track this, you can’t rely on annual surveys. You need pulse surveys with specific questions like, “On a scale of 1 to 10, how confident are you that the new AI tool is being applied fairly to your team?” You can also monitor traffic to internal resource pages about AI and track the number and type of questions being asked in anonymous feedback channels. A decrease in anonymous questions and an increase in open-forum discussions are great signs that safety is growing.
The advice to “slow down” on AI adoption can conflict with market pressures to innovate quickly. What practical frameworks can leaders use to balance rapid deployment with a people-centric approach, ensuring employees are brought along effectively? Can you outline a successful change management plan?
The call to “slow down” isn’t about stopping progress; it’s about being intentional. The fastest way to fail is to rush a tool out the door that nobody trusts or uses correctly. A balanced framework starts with leadership alignment. Before anything is announced, the C-suite—from the CIO to the Chief People Officer—must agree on the communication strategy. They need one unified voice. A successful plan then follows three phases. Phase one is “Prepare and Listen”: for a month, you do nothing but talk to employees, understand their workflows, and identify their biggest anxieties. Phase two is “Pilot and Learn”: you roll out the AI to a small, controlled group for a quarter, gathering data and testimonials. This isn’t just about debugging the tech; it’s about debugging the human experience. Phase three is “Scale and Support”: you expand the rollout department by department, but only after you’ve created a dedicated support team and a center of excellence to guide the process. This measured approach feels slower, but it prevents the massive productivity loss and morale crash that come from a rushed, top-down implementation.
Establishing a culture of trust is seen as essential for successful AI adoption. Beyond forming a center of excellence, what specific, daily actions can senior leaders from different departments take to foster genuine two-way communication and prove they are collaborating effectively on this major change?
A center of excellence is a great structural idea, but it’s an empty gesture if the underlying culture isn’t built on trust. Culture is shaped by the small, daily actions of leaders. For example, the CIO shouldn’t just send emails; they should co-host weekly “office hours” with the head of HR where anyone can ask blunt questions about the AI roadmap. When a department head sees their team struggling with a new AI-powered workflow, they shouldn’t just file a support ticket. They should publicly document the issue, share it with the project leads, and champion their team’s feedback. This shows they’re a shield, not just a mouthpiece for corporate. The most powerful action is senior leaders publicly disagreeing, debating, and then coming to a unified conclusion. Seeing leaders work through challenges together, rather than presenting a sterile, perfect plan, proves that collaboration is real and that it’s okay for things not to be perfect right away.
Since over one-fifth of workers feel their employers are not investing in them to thrive amid AI adoption, what does an effective reskilling and upskilling program look like? Please detail the critical components that make employees feel valued rather than just managed.
When over 20% of your people feel abandoned, you have a crisis of confidence. An effective program isn’t a link to a generic online course library. It’s a personalized growth plan. First, it must be directly tied to the new roles that AI will create. The company must be able to say, “We’re automating data entry, so we’re offering certified training in data analysis and visualization because that’s where we need your talent next.” It’s about a clear path forward. Second, it has to provide dedicated time. Employees should be given paid hours each week to focus on this training, signaling that the company sees this as part of their job, not extra homework. Finally, it should include mentorship from those who have already mastered the new skills. Pairing a newly trained employee with a seasoned data analyst, for instance, makes them feel supported and integrated into the future of the company, not just processed through a system. That’s the difference between feeling valued and feeling managed.
What is your forecast for enterprise AI adoption and its effect on the workforce?
My forecast is one of turbulent, but ultimately necessary, transformation. We will continue to see a surge in enterprise AI adoption; the share of executives who feel prepared for it has already jumped from 38% last year to over half this year. However, the initial waves of implementation will be rocky. Companies that prioritize speed over people will face internal resistance, talent drain, and failed projects. The stark reality of AI-driven job cuts—with nearly 55,000 reported last year and likely more unreported—will continue to fuel workforce anxiety. The successful organizations of the next decade will be those that learn to treat AI adoption not as a tech project, but as a profound human change initiative. They will invest heavily in communication and reskilling, creating a culture where employees see AI as a collaborator for growth, not a competitor for their jobs.
