HR Leaders Need a Framework for Ethical AI

With decades of experience guiding organizations through technological change, HRTech expert Ling-Yi Tsai has become a leading voice on the thoughtful integration of artificial intelligence. As companies rush to adopt AI, with spending on generative AI soaring by 600% year-over-year, she champions a framework built on integrity and strategy. We sat down with Ling-Yi to discuss how HR leaders can move beyond a “speed-first” mindset to implement AI ethically. Our conversation explores how to anchor AI to meaningful goals, the critical importance of proactive data security, the necessity of cross-functional oversight, and why transparent, agile governance is not a constraint but a strategic advantage in building a modern, trusted workplace.

The data shows a massive 600% year-over-year increase in spending on generative AI. How are you seeing HR teams shift their thinking from a ‘speed-first’ mindset to a more responsible approach, and what specific metrics should they be using to measure success beyond just efficiency?

It’s a fascinating and necessary shift. The pressure behind that 600% figure is palpable; there’s an intense push to adopt AI or risk being left behind. In the initial wave, from mid-2023 to early 2024, when we saw the number of HR leaders implementing AI double, the primary metric was speed: how many hours can we save? But now the conversation is maturing. I see leading teams asking much deeper questions. Instead of just automating what’s easy, they’re measuring success by the quality of the employee experience. For example, instead of just tracking the time saved on answering benefits questions, they’re measuring whether benefits literacy scores have improved or whether employees report higher confidence in their selections during open enrollment. The best metrics are human-centric: a decrease in support tickets for complex issues, an increase in engagement with wellness programs, or positive feedback scores on AI-assisted onboarding. Success isn’t just doing things faster; it’s about making work better, clearer, and more supportive for your people.
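To make those human-centric metrics concrete, here is a minimal sketch of how a people-analytics team might roll the signals Ling-Yi describes into a simple scorecard. The field names, scales, and sample values are illustrative assumptions, not metrics she prescribes.

```python
from dataclasses import dataclass

@dataclass
class ExperienceMetrics:
    """Hypothetical human-centric signals for one AI rollout (illustrative names)."""
    benefits_literacy_before: float   # average literacy quiz score, 0-100
    benefits_literacy_after: float
    complex_tickets_before: int       # escalated support tickets per quarter
    complex_tickets_after: int
    onboarding_feedback: list[float]  # 1-5 ratings on AI-assisted onboarding

def scorecard(m: ExperienceMetrics) -> dict[str, float]:
    """Summarize experience-focused outcomes rather than hours saved."""
    return {
        "literacy_gain_pts": m.benefits_literacy_after - m.benefits_literacy_before,
        "ticket_reduction_pct": 100 * (m.complex_tickets_before - m.complex_tickets_after)
                                / max(m.complex_tickets_before, 1),
        "avg_onboarding_rating": sum(m.onboarding_feedback) / len(m.onboarding_feedback),
    }

if __name__ == "__main__":
    quarter = ExperienceMetrics(62.0, 78.5, 140, 95, [4.2, 4.6, 3.9, 4.8])
    print(scorecard(quarter))
```

A scorecard like this sits alongside, rather than replaces, the usual efficiency reporting; the point is that the experiential numbers get equal billing.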

Your framework emphasizes anchoring AI to meaningful goals. Could you walk us through the practical, step-by-step process an HR leader should follow to identify and scope an AI project that genuinely improves the employee experience?

Absolutely. This is the most critical stage, and it starts by completely ignoring the technology for a moment. The first step is to identify a genuine point of friction in the employee journey. Sit down with your team and employees and ask, “Where do people get stuck? What process feels confusing, slow, or impersonal?” Maybe it’s the complexity of leave policies or the frustration of sifting through benefits documents. Once you’ve identified that pain point, the second step is to define what a better experience would look like from a human perspective. Don’t say “faster,” say “clearer” or “more supportive.” For instance, a goal could be “to provide employees with instant, easy-to-understand answers about their parental leave options, reducing their anxiety during a major life event.” Only then, as a third step, should you explore how AI can serve that specific goal. The final step is to pilot the solution with a small, diverse group of employees and gather qualitative feedback. Ask them, “Did this make you feel more confident? Did this tool understand what you needed?” That feedback is infinitely more valuable than a simple efficiency report.

Given the incredible sensitivity of benefits and HR data, your framework highlights prioritizing data protection from the very start. What are the first three actions an HR leader should take to engage their security team on a new AI project, and what data vulnerabilities do you find are most frequently overlooked?

The moment an AI project is even a whisper of an idea, that’s when the security team needs a seat at the table. The very first step is to bring your Chief Information Security Officer or their delegate into the initial vendor vetting process. Don’t just present them with a chosen tool; make them a partner in the selection. The second step is to collaboratively create a data flow map. This sounds technical, but it’s a simple concept: clearly outline every piece of sensitive data the AI will touch—medical details, family status, salary—and justify exactly why that access is necessary. The third step is to insist on a pre-deployment risk assessment, treating the AI tool as a new piece of critical infrastructure. As for overlooked vulnerabilities, I often see a failure to properly scrutinize how third-party AI vendors handle data. People focus on their own walls but forget the data is being sent elsewhere. Another common blind spot is the secondary use of data for model training; employees need to know if their anonymized data is being used to “teach” the AI, and clear protocols must be in place to ensure that information can never be re-identified.
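For readers wondering what a data flow map looks like in practice, here is a minimal sketch in the spirit of Ling-Yi's description: every sensitive element the AI touches, paired with its justification, destination, and handling. The specific fields, vendors, retention periods, and the review rule are hypothetical examples.

```python
# Minimal sketch of a data flow map for an HR AI tool.
# Data elements, destinations, and retention periods are hypothetical examples.
DATA_FLOW_MAP = [
    {
        "data_element": "medical plan enrollment",
        "justification": "needed to answer benefits questions in the assistant",
        "source_system": "benefits platform",
        "destination": "third-party AI vendor (hosted inference)",
        "used_for_model_training": False,
        "retention": "30 days",
    },
    {
        "data_element": "salary band",
        "justification": "required to estimate contribution scenarios",
        "source_system": "HRIS",
        "destination": "third-party AI vendor (hosted inference)",
        "used_for_model_training": False,
        "retention": "session only",
    },
]

def flag_for_review(flow_map):
    """Surface entries the security team should challenge before deployment."""
    return [row for row in flow_map
            if row["used_for_model_training"]
            or row["retention"] not in ("session only", "30 days")]

print(flag_for_review(DATA_FLOW_MAP))  # empty list here; anything returned needs scrutiny
```

Keeping the map as a living artifact, rather than a one-off spreadsheet, is what lets the security team challenge secondary data use before a vendor contract is signed.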

You recommend a cross-functional AI council for oversight. From your experience, what are the distinct roles that representatives from HR, legal, and IT should play on this council? Could you share an example of how this kind of collaboration helped prevent a potential crisis?

On a well-functioning AI council, each member has a distinct and vital lens. The HR representative is the “voice of the employee.” They own the use case, define the intended experiential outcome, and are responsible for ensuring the tool is equitable and supportive. The IT representative is the “guardian of the infrastructure.” They assess the tool’s technical viability, its security protocols, and how it integrates with existing systems without creating new vulnerabilities. The legal representative is the “guardian of trust and compliance.” They evaluate the tool against a landscape of evolving regulations, like BIPA, and assess the risk of unintended bias or discrimination. I remember one organization that wanted to use an AI tool to screen resumes for a high-volume role. HR was excited about the efficiency gains. However, during the council review, IT discovered the tool’s API was not secure, and legal pointed out that the algorithm, trained on historical company data, was systematically down-ranking candidates from non-traditional educational backgrounds. The council paused the rollout, preventing what would have become a discriminatory hiring practice and a potential class-action lawsuit. It was a perfect example of shared oversight catching a blind spot that one department alone would have missed.

The article stresses the importance of transparency, using the Wendy’s lawsuit as a cautionary tale. Beyond just disclosing that an AI tool is being used, how can HR teams effectively communicate the “why” and “how” of the technology and explain the data that informs its recommendations?

Transparency has to be more than a footnote in a privacy policy; it has to be an active, ongoing conversation. The Wendy’s case is a powerful reminder that “we didn’t know” is not a defense. To communicate effectively, you have to lead with the “why” from the employee’s perspective. Frame it as a benefit to them. For example, “We’re introducing a new scheduling tool powered by AI to help create more predictable and fair shifts based on your stated preferences.” When explaining the “how,” use simple, direct language. For a virtual benefits assistant, you might say, “Our AI assistant helps you compare plans by analyzing anonymized data on which options are most popular among employees with similar needs. It does not see your personal health history, and the final decision is always 100% yours.” It’s also about creating feedback channels. Give employees a clear way to ask questions or appeal a decision they feel was unfairly influenced by an AI recommendation. This builds a sense of agency and trust, showing that the technology is a guide, not a gatekeeper.

You note that governance must stay agile as technology evolves. Could you describe what a practical, recurring audit of an HR AI system looks like, including the process for checking for bias and deciding when it’s time to recalibrate or even retire a tool?

An agile governance model means AI is never “set it and forget it.” A practical audit should happen on a recurring basis, perhaps quarterly. The first step of this audit is a performance review: the HR team and stakeholders re-evaluate whether the tool is still meeting its original, meaningful goal. Is it still improving the employee experience? The second, more technical step involves partnering with data science or IT to test for “model drift” or bias. This means running sample data sets through the system to see whether its outputs have become skewed over time. For example, if an AI tool that writes job descriptions starts producing language that consistently uses masculine-coded words for leadership roles, that’s a red flag. The third and most important step is gathering fresh employee feedback. A tool that was helpful a year ago might feel clunky or intrusive today. The decision to recalibrate is made when bias is detected or when the tool is no longer hitting its experiential targets. A tool should be retired when it becomes obsolete, when a better alternative exists, or if evolving regulations, or your own company values, make its continued use an unacceptable risk.
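One simple way to run the bias-testing step Ling-Yi describes is to compare the tool's recommendation rates across groups on a re-run sample, for instance with the familiar four-fifths selection-rate check. The sketch below assumes you can obtain the tool's recommendations for a labeled quarterly sample; the group labels, sample values, and the 0.8 threshold are illustrative assumptions.

```python
from collections import defaultdict

def selection_rates(records):
    """records: list of (group_label, was_recommended) pairs from a sample run."""
    counts = defaultdict(lambda: [0, 0])  # group -> [recommended, total]
    for group, recommended in records:
        counts[group][1] += 1
        counts[group][0] += int(recommended)
    return {g: rec / total for g, (rec, total) in counts.items()}

def four_fifths_check(records, threshold=0.8):
    """Flag groups whose selection rate is below `threshold` of the best group's rate."""
    rates = selection_rates(records)
    best = max(rates.values())
    return {g: rate / best for g, rate in rates.items() if rate / best < threshold}

# Illustrative quarterly sample run back through a resume-screening tool.
sample = ([("traditional_degree", True)] * 45 + [("traditional_degree", False)] * 55
          + [("non_traditional", True)] * 28 + [("non_traditional", False)] * 72)
print(four_fifths_check(sample))  # {'non_traditional': ~0.62} -> red flag, recalibrate
```

A flagged ratio does not prove discrimination on its own, but it is exactly the kind of early signal that should trigger the council's recalibrate-or-retire conversation.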

What is your forecast for ethical AI governance in HR over the next five years, especially as a third of workplace decisions are projected to involve AI-driven input?

My forecast is that within five years, ethical AI governance will cease to be a niche topic and will become a core, non-negotiable function of every successful HR department. That projection—that a third of workplace decisions will be AI-influenced—is staggering, and it means the stakes are incredibly high. We will see the formalization of roles like “HR AI Ethicist” or “People Technology Risk Officer” on leadership teams. Companies that invest in robust, transparent, and agile governance won’t just be mitigating legal risk; they will be building a powerful competitive advantage. In a world where AI is everywhere, trust will be the ultimate differentiator for attracting and retaining top talent. The organizations that get this right will foster cultures of innovation and psychological safety, while those that prioritize speed over substance will find themselves dealing with eroded trust, higher employee turnover, and a constant fear of reputational damage. Ethical AI isn’t just a guardrail; it’s the engine of a more human-centric future of work.
