HR’s Growing AI Adoption Meets Widespread Mistrust

Today we’re speaking with Ling-Yi Tsai, an HRTech expert with decades of experience helping organizations navigate the complexities of technological change. Specializing in HR analytics and the integration of technology across the entire employee lifecycle, she offers a critical perspective on one of the most transformative forces in the workplace today: artificial intelligence. We’ll be exploring the paradox of AI in HR—a world where adoption is accelerating, yet fundamental trust is lacking. Our conversation will touch on the critical trust gap in AI-driven decision-making, the looming challenge of skill shortages, the impact of automation on the candidate experience, and the delicate balance leaders must strike between efficiency and authentic communication.

An overwhelming 98% of HR professionals report not trusting generative AI for major workforce decisions, yet many are comfortable using it for low-risk tasks like scheduling. What specific high-stakes risks are they concerned about, and what steps could technology vendors take to build that trust?

That 98% figure is staggering, and it speaks directly to the fear of the unknown. The primary concern is the “black box” nature of some AI when it comes to career-altering decisions like hiring, promotions, or terminations. HR leaders are worried about embedded biases, flawed data leading to discriminatory outcomes, and the legal and ethical fallout. They’re comfortable letting AI handle scheduling because the stakes are low—a scheduling error is an inconvenience, but a biased hiring decision can trigger a lawsuit and damage the company’s reputation. To build trust, vendors need to prioritize transparency. They must move beyond just selling a tool and instead provide clear insights into how their algorithms work, what data they’re trained on, and what safeguards are in place to mitigate bias. Offering customizable controls and robust validation features that allow HR to remain in the driver’s seat is the only way to close this trust gap.

With nearly half of HR leaders identifying AI skill shortages as a top challenge, and only 11% feeling confident in forecasting those needs, a significant gap exists. What practical methods can organizations use to better predict future skill requirements and start developing that talent internally?

This confidence crisis, with only 11% feeling secure in their forecasting ability, is a major roadblock. The irony is that the solution to an AI-driven problem often lies within AI itself. The real opportunity, as Dimitri Boylan of Avature noted, is in its application. A practical first step is for organizations to leverage AI-powered talent intelligence platforms to map the skills they currently have within their workforce. These tools can analyze employee profiles, project histories, and performance data to create a dynamic skills inventory. From there, the same technology can scan market trends and strategic business goals to predict which skills will be in high demand. This allows HR to move from reactive hiring to proactive talent development, creating targeted upskilling programs and internal mobility pathways to build the skills they need before the shortage becomes critical.
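The mapping-then-forecasting workflow described above can be sketched in a few lines. This is a minimal illustration, not any vendor's actual product: the employee records, the skill names, and the `demand_forecast` figures are all hypothetical, standing in for the profile, project, and performance data a real talent intelligence platform would aggregate.

```python
from collections import Counter

# Hypothetical employee records; a real platform would pull these from
# HRIS profiles, project histories, and performance data.
employees = [
    {"name": "Asha", "skills": ["python", "sql", "people-analytics"]},
    {"name": "Ben", "skills": ["sql", "recruiting"]},
    {"name": "Chen", "skills": ["python", "ml-ops"]},
]

# Assumed forecast: skills the business expects to need, with target headcounts.
demand_forecast = {"python": 3, "ml-ops": 2, "people-analytics": 1}

def skills_inventory(employees):
    """Count how many employees currently hold each skill."""
    inventory = Counter()
    for emp in employees:
        inventory.update(set(emp["skills"]))
    return inventory

def skill_gaps(inventory, forecast):
    """Return skills where forecast demand exceeds current supply."""
    return {
        skill: needed - inventory.get(skill, 0)
        for skill, needed in forecast.items()
        if needed > inventory.get(skill, 0)
    }

inventory = skills_inventory(employees)
print(skill_gaps(inventory, demand_forecast))  # -> {'python': 1, 'ml-ops': 1}
```

The output is the proactive-development signal: the gap between forecast demand and the current inventory tells HR which upskilling programs or internal mobility paths to build before the shortage becomes critical.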

Given that nearly 3 in 4 job candidates say an AI-led interview would alter their perception of a company, what are the primary brand risks and rewards of automation in recruiting? Please share some specific examples of how to balance efficiency with a positive candidate experience.

The brand risk is immense. When nearly three-quarters of your potential hires might view you differently—and not always for the better—you’re playing with fire. An impersonal, glitchy, or biased AI interview can make a company feel cold, disconnected, and uncaring, effectively poisoning the talent pool. It screams, “We don’t value you enough for a human conversation.” The reward, of course, is efficiency, but it’s a hollow victory if you drive away top talent. The key is balance. Use AI for the right stages. For example, use AI chatbots for initial screening questions or 24/7 scheduling, which candidates often appreciate for its convenience. However, when it’s time for a substantive interview, ensure a human is leading the conversation. A hybrid approach that uses automation to streamline logistics but preserves human connection for meaningful evaluation is the only sustainable path.

Research shows employees view supervisors who heavily rely on AI for writing messages as less sincere. How can leaders leverage AI for efficiency without compromising authentic communication, and what specific guidelines would you recommend for maintaining team trust?

This is about authenticity. While over 8 in 10 employees are fine with a supervisor using AI for tasks involving only light assistance, that trust evaporates with heavy reliance. When a message about a team member’s success or a difficult project update feels like it was generated by a machine, it erodes the very foundation of leadership—connection. Leaders should use AI as a thought partner, not a ghostwriter. For instance, use it to brainstorm ideas, check for tone, or summarize meeting notes into a first draft. The guideline I always recommend is the “personal touch” rule: before hitting send, a leader must infuse the message with their own voice, specific anecdotes, and genuine emotion. The final product must sound like them. AI can build the frame of the house, but the leader has to do the interior decorating to make it feel like a home.

Many employees use AI tools in their personal lives but are unaware of their employer’s official implementation strategy. What operational and cultural challenges does this transparency gap create, and how can HR lead the charge in communicating a clear and fair AI policy?

This transparency gap, highlighted in the recent Gallup report, is a ticking time bomb. Operationally, it creates a “shadow IT” problem where employees use unsanctioned tools, potentially exposing sensitive company data. Culturally, it breeds an atmosphere of uncertainty and mistrust. Employees start to wonder, “How is the company using AI to monitor me? Is my job at risk?” This anxiety kills psychological safety and innovation. HR must lead the charge by moving AI out of the shadows and into the light. This means developing and clearly communicating a comprehensive AI policy that outlines acceptable use, data privacy standards, and the company’s philosophy on AI in decision-making. HR should host town halls, create FAQs, and provide training to demystify the technology and reassure employees that AI is a tool to augment their work, not replace them.

What is your forecast for the role of AI in HR over the next five years?

My forecast is that the next five years will be defined by a shift from experimentation to integration. Right now, we see fragmented adoption—a tool for scheduling here, a chatbot for recruiting there. In the future, AI will become the connective tissue of the entire HR ecosystem. We will see hyper-personalized employee experiences, from onboarding plans tailored to an individual’s skills gaps to career paths that dynamically adjust based on performance and aspirations. The key challenge won’t be the technology itself, but rather the human side of the equation. The most successful organizations will be those that invest heavily in closing the AI skills gap, establishing strong ethical guardrails, and mastering the art of using technology to enhance, not erase, human connection.
