Exploring the Intersection of AI, Employment Decisions, and Anti-Discrimination Laws: A Case Study on the EEOC v. iTutorGroup Settlement

Now more than ever, employers should carefully weigh the benefits and risks of using AI or machine learning in recruiting and in employment decisions such as hiring, promotion, and termination. The Equal Employment Opportunity Commission (EEOC) has recognized the significance of this issue and has signaled that it intends to bring more litigation in this area. In 2021, EEOC Chair Charlotte A. Burrows launched an agency-wide initiative to ensure that the use of AI software, machine learning, and other emerging technologies complies with the federal civil rights laws the agency enforces. Employers therefore need to understand the implications and potential liabilities of relying on these tools in employment decisions.

The EEOC’s initiative is not limited to disparate impact and disparate treatment claims of gender and race discrimination under Title VII of the Civil Rights Act of 1964. The agency is broadening its focus and taking a comprehensive approach to protecting against discrimination in all its forms.

One notable lawsuit filed by the EEOC involved iTutorGroup, a company accused of using AI hiring programs in a way that violated the Age Discrimination in Employment Act (ADEA). The alleged practice came to light when an applicant submitted two otherwise similar applications, one listing a more recent birthdate, and the two applications were treated differently, suggesting a potentially unlawful rejection based on age. On May 5, 2022, the EEOC filed suit against iTutorGroup in the Eastern District of New York on behalf of the affected applicants.
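To make the paired-application idea concrete, here is a minimal sketch of how an employer might run a similar check against its own screening tool. Everything in it is hypothetical: `screen_application` is a stand-in for whatever vendor model is being audited, and the age-based rule inside it is invented solely so the check has something to catch; this is not a description of iTutorGroup's software or of the EEOC's methods.

```python
# Hypothetical paired-birthdate audit: submit two applications that are
# identical except for birth year and flag any difference in outcome.
from datetime import date


def screen_application(applicant: dict) -> bool:
    """Stand-in for a vendor screening model; returns True if the applicant advances."""
    # Invented rule that improperly keys on age, used only so the audit below
    # has a discriminatory behavior to detect.
    age = date.today().year - applicant["birth_year"]
    return age < 55


def paired_birthdate_test(base_applicant: dict, younger_year: int, older_year: int) -> bool:
    """Return True if two otherwise-identical applications receive different outcomes."""
    younger = {**base_applicant, "birth_year": younger_year}
    older = {**base_applicant, "birth_year": older_year}
    return screen_application(younger) != screen_application(older)


if __name__ == "__main__":
    applicant = {"degree": "B.A.", "experience_years": 20}
    if paired_birthdate_test(applicant, younger_year=1985, older_year=1960):
        print("Warning: the screening outcome changed when only the birthdate changed.")
```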

The case against iTutorGroup reached a settlement on August 9, 2023, after a contentious legal battle. While denying any wrongdoing, the company agreed to pay $365,000, to be distributed as back pay and compensatory damages among the applicants who were allegedly unlawfully rejected because of their age. The settlement also required iTutorGroup to adopt new anti-discrimination policies, conduct multiple anti-discrimination trainings, and stop requesting birthdates from applicants. The case is a significant example of the consequences employers may face when they use AI and machine learning in employment decisions without due diligence and compliance with federal law.

The implications for employers that use AI and machine learning software developed by outside vendors are also worth considering. An employer can violate federal law without knowing it simply by relying on a vendor's tool, and that exposure to discrimination claims can jeopardize the company's reputation and financial standing. Employers should therefore thoroughly evaluate the AI and machine learning tools they use, including by auditing screening outcomes for adverse impact, and confirm that those tools comply with federal employment laws; one illustrative check is sketched below.
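One concrete form such an evaluation can take is checking selection rates for adverse impact under the four-fifths (80%) rule of thumb from the EEOC's Uniform Guidelines on Employee Selection Procedures. The sketch below assumes hypothetical counts and group labels; a real audit would use the employer's actual applicant data and should be paired with legal advice, since the four-fifths rule is only a screening heuristic, not a legal standard in itself.

```python
# Minimal four-fifths (80%) rule check on hypothetical screening outcomes.


def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of applicants in a group that the tool advanced."""
    return selected / applicants if applicants else 0.0


def adverse_impact_ratio(rates: dict) -> tuple:
    """Return the group with the lowest selection rate and its ratio to the highest rate."""
    highest = max(rates.values())
    lowest_group = min(rates, key=rates.get)
    return lowest_group, rates[lowest_group] / highest


if __name__ == "__main__":
    # Hypothetical (selected, total applicants) counts per age band.
    outcomes = {"under_40": (48, 100), "40_and_over": (30, 100)}
    rates = {group: selection_rate(*counts) for group, counts in outcomes.items()}
    group, ratio = adverse_impact_ratio(rates)
    if ratio < 0.8:
        print(f"Possible adverse impact against '{group}': ratio {ratio:.2f} is below 0.80.")
```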

In conclusion, the use of AI and machine learning in employment decisions carries both benefits and risks. Employers must carefully evaluate and understand the potential implications of these technologies, especially in relation to compliance with federal civil rights laws. The heightened focus of the EEOC on this evolving area of the law serves as a reminder for employers to prioritize fairness and nondiscriminatory practices in their hiring, promotion, and termination processes. By doing so, employers can mitigate the risk of legal action, protect their employees’ rights, and foster a diverse and inclusive workplace.
