Ling-yi Tsai has spent decades at the intersection of human capital and emerging technology, helping global organizations navigate the often-turbulent waters of digital transformation. As an HRTech expert specializing in analytics and integrated recruitment workflows, she has a front-row seat to how automated tools are reshaping the relationship between employers and job seekers. Today, she joins us to discuss the growing tension in the recruitment landscape, where the promise of technological efficiency often clashes with the fundamental human need for transparency and fairness. From the alarming rates of candidate withdrawal to the subtle ways applicants are masking their personalities to please an algorithm, Ling-yi provides a deep dive into the current state of AI in hiring and what leaders must do to reclaim candidate trust.
The following conversation explores the data behind candidate dissatisfaction, the psychological shifts occurring during automated assessments, and the urgent need for robust auditing protocols and human oversight.
Many job seekers find themselves being evaluated by AI without prior notification, often discovering the technology’s presence only once the interview begins. How does this lack of transparency impact candidate trust, and what specific steps should companies take to integrate clear AI disclosure policies into their recruitment workflows?
The impact on candidate trust is immediate and often visceral; there is a distinct sense of betrayal when an applicant realizes mid-conversation that their “interviewer” is a set of algorithms rather than a person. According to recent data, a staggering 70% of candidates reported they were never told ahead of time that AI would be evaluating them, which creates an atmosphere of surveillance rather than collaboration. This lack of transparency leads to 1 in 5 job seekers discovering the AI’s presence only after the interview has already started, often leaving them feeling flustered and dehumanized. To fix this, companies must move beyond the current landscape where fewer than 1 in 5 employers have clear AI policies. Organizations should implement a “Transparency First” protocol that includes explicit notifications in the initial application confirmation and a clear explanation of what the AI is actually measuring before the candidate ever clicks “start.”
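To make that protocol concrete, here is a minimal sketch, in Python, of what a disclosure record attached to the application confirmation might look like. The `AIDisclosure` structure, its field names, and the vendor name are my own illustration rather than any real product's API; the point is simply that everything the candidate will later encounter is spelled out before they ever click "start."

```python
from dataclasses import dataclass

@dataclass
class AIDisclosure:
    """Disclosure embedded in the application confirmation, before any assessment."""
    tool_name: str               # the automated tool the candidate will encounter
    stages_used: list[str]       # e.g. ["resume screening", "video interview"]
    signals_measured: list[str]  # what the model actually scores
    human_reviews_output: bool   # does a recruiter validate the score?
    opt_out_contact: str         # where to request a human-led alternative

def render_disclosure(d: AIDisclosure) -> str:
    """Turn the record into the plain-language notice sent to the candidate."""
    review_line = (
        "A human recruiter reviews its output before any decision is made."
        if d.human_reviews_output
        else "Its output may be used without individual human review."
    )
    return "\n".join([
        f"Parts of this hiring process use an automated tool ({d.tool_name}).",
        f"It is used for: {', '.join(d.stages_used)}.",
        f"It evaluates: {', '.join(d.signals_measured)}.",
        review_line,
        f"To request a human-led alternative, contact {d.opt_out_contact}.",
    ])

print(render_disclosure(AIDisclosure(
    tool_name="ExampleScreen AI",  # hypothetical vendor name
    stages_used=["initial video interview"],
    signals_measured=["spoken keywords", "response structure"],
    human_reviews_output=True,
    opt_out_contact="recruiting@example.com",
)))
```

Sending this notice with the confirmation email, rather than burying it in a terms-of-service page, is what turns disclosure into trust.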
A significant portion of applicants currently withdraw from hiring processes when faced with pre-recorded video assessments or automated monitoring. What are the long-term risks to a company’s talent pipeline when using these tools, and how can hiring managers balance technological efficiency with the human-centric experience candidates expect?
The long-term risk is a hollowed-out talent pipeline where the most self-assured and in-demand candidates simply opt out, leaving the company with a narrower pool of applicants who are merely willing to tolerate a cold process. We are already seeing the fallout, with 38% of U.S. candidates admitting they have withdrawn from a hiring process specifically because it included an AI interview. The rejection of these tools is particularly sharp when there is no human presence: 33% of job seekers abandon the process when faced with pre-recorded videos scored by AI. When 26% of applicants drop out due to invasive AI monitoring, the company loses not just a resume but also the diverse perspectives and specialized skills that don't fit into a rigid, automated box. To balance efficiency with experience, hiring managers must reintroduce the human touchpoint early in the process, ensuring that AI serves as a support tool for a human recruiter rather than a replacement for one.
Candidates report perceiving similar levels of racial and age bias from AI as they do from human recruiters. In light of this, what auditing protocols are necessary to ensure these tools remain fair, and how should human oversight be structured to validate automated decisions before a final hiring choice is made?
It is a sobering reality that AI is often just “repackaging the same problem,” as candidates report feeling identical levels of bias from both machines and people. Specifically, 36% of applicants perceived age bias from both AI and human recruiters, while 27% noted bias regarding race or ethnicity in both scenarios. This suggests that without rigorous intervention, AI simply automates existing prejudices rather than eliminating them. To counter this, 29% of candidates are now demanding evidence that these tools are being audited for bias by independent third parties. Human oversight shouldn’t just be a rubber stamp at the end of the process; instead, 38% of job seekers want a structured review where a human professional validates the AI’s data points before any candidate is disqualified. We need to create a feedback loop where recruiters can challenge an AI’s score if it seems to disproportionately flag certain demographics, ensuring the “signal” we get is actually representative of talent.
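One concrete form such an audit can take is the EEOC's four-fifths rule of thumb: compare each group's pass-through rate on the AI stage against the highest group's rate, and flag anything below 80% for human review before any candidate is disqualified. The sketch below is a minimal Python illustration of that check on synthetic data; it is one heuristic among many, not a substitute for an independent third-party audit.

```python
from collections import Counter

def adverse_impact_ratios(records, threshold=0.8):
    """
    Flag demographic groups whose AI pass-through rate falls below the
    four-fifths (80%) rule of thumb relative to the highest-rate group.
    `records` is an iterable of (group_label, passed: bool) pairs.
    """
    totals, passes = Counter(), Counter()
    for group, passed in records:
        totals[group] += 1
        passes[group] += int(passed)

    rates = {g: passes[g] / totals[g] for g in totals}
    best = max(rates.values())
    # Impact ratio: each group's selection rate vs. the most-selected group.
    return {g: (rate / best, rate / best < threshold) for g, rate in rates.items()}

# Example: AI video-interview outcomes, grouped by age band (synthetic data).
outcomes = (
    [("under_40", True)] * 62 + [("under_40", False)] * 38
    + [("40_plus", True)] * 41 + [("40_plus", False)] * 59
)
for group, (ratio, flagged) in adverse_impact_ratios(outcomes).items():
    note = "  <- review before disqualifying" if flagged else ""
    print(f"{group}: impact ratio {ratio:.2f}{note}")
```

Run on every requisition, a check like this gives recruiters a concrete trigger for challenging an AI's scores rather than rubber-stamping them.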
Applicants are increasingly modifying their interview performance by emphasizing analytical traits and suppressing emotional intelligence to better “fit” automated scoring models. How does this behavior distort the data recruiters receive, and what strategies can be used to ensure an accurate assessment of a candidate’s true soft skills and intuition?
This behavioral shift creates a “performance mask” that makes it nearly impossible for HR managers to see the real person behind the screen. When candidates believe an algorithm is judging them, they intentionally downplay their intuition and emotional intelligence, choosing instead to project a hyper-analytical persona they think the software prefers. This distortion means the data recruiters receive is sanitized and artificial, often stripped of the very soft skills, like empathy and complex problem-solving, that are critical for modern leadership. To peel back this mask, recruiters should use AI for high-volume administrative screening but reserve the assessment of “human” traits for live, interactive sessions. We must stop relying on automated scoring for nuances like “passion” or “cultural fit.” Tellingly, 57% of job seekers believe these tools should be legally required to disclose exactly what they are measuring, precisely to end this cat-and-mouse game of performance hacking.
While most candidates do not want AI removed entirely, they are demanding safety measures such as the option to request a human interviewer. What are the operational challenges of offering these alternatives, and how do you define the specific metrics that should be shared with applicants regarding what AI measures?
The primary operational challenge is scalability; many companies turned to AI because their human recruiting teams were overwhelmed by an explosion of applications. However, ignoring the request for a human alternative is backfiring, as only 12% of candidates say they would sit through an AI interview if it were a strict requirement with no other choice. To meet this demand, companies need to define and share clear metrics, such as whether the AI is analyzing keywords, speech patterns, or facial expressions—details that 27% of candidates feel are currently hidden from them. We should offer an “opt-out” path for the 19% of candidates who want less AI involvement, even if it means a slightly longer wait time for a human-led screening. By being honest about the “black box” of AI, we can move away from a system that currently provides “more applications but less signal” and toward one that respects the candidate’s time and agency.
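To show how an opt-out path could work operationally, here is a minimal routing sketch. The capacity figures and the simple wait-time estimate are hypothetical assumptions for illustration: candidates who decline the AI interview go into a human-led queue and are told the longer wait up front, which is the honest trade-off.

```python
from dataclasses import dataclass

@dataclass
class ScreeningRoute:
    channel: str   # "ai_video" or "human_call"
    sla_days: int  # expected wait communicated to the candidate up front

def route_candidate(opted_out: bool, human_slots_per_week: int,
                    queue_length: int) -> ScreeningRoute:
    """Honor an opt-out request and be honest about the wait it implies."""
    if not opted_out:
        return ScreeningRoute(channel="ai_video", sla_days=2)
    # Rough wait estimate: queue position divided by weekly human capacity.
    weeks_to_wait = (queue_length + 1) / max(human_slots_per_week, 1)
    return ScreeningRoute(channel="human_call",
                          sla_days=max(2, round(weeks_to_wait * 7)))

print(route_candidate(opted_out=True, human_slots_per_week=25, queue_length=40))
# -> ScreeningRoute(channel='human_call', sla_days=11)
```

The design choice worth copying is not the arithmetic but the contract: the candidate learns the trade-off between human attention and speed at the moment of choosing, rather than discovering it after weeks of silence.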
What is your forecast for AI in hiring?
My forecast is that we are heading toward a “Great Calibration,” where the novelty of AI efficiency gives way to a demand for ethical accountability and human-in-the-loop systems. The 63% of candidates who already factor a company’s hiring tech stack into which employers they choose to engage with, a share that has jumped 13 percentage points in just six months, will only grow more discerning. In the near future, the most successful organizations won’t be those with the fastest algorithms, but those that use AI to handle the “noise” while empowering human recruiters to focus on the “signal.” I expect a rise in “Hybrid Hiring” models where AI manages scheduling and basic skill verification while the emotional and intuitive heavy lifting returns to humans, ensuring that the 38% of candidates currently walking away from the process feel seen and valued once again.
