
Employment Law News

Using AI in Hiring Called Out as Discriminatory

Research suggests that over two-thirds of U.S. employers, and nearly all Fortune 500 companies, use some form of artificial intelligence (AI) to automate their hiring process. That practice is now being challenged as inherently discriminatory and is drawing increasing concern from employees and industry commentators alike. A February 21, 2023 federal complaint, for example, alleged that Workday Inc.’s hiring automation software incorporates discriminatory and subjective judgments in reviewing and evaluating applications for hire.

While incorporating AI into hiring processes offers tremendous advantages in time and efficiency, it may also increase, rather than reduce as one might expect, the likelihood of discriminatory decisions by the AI tool. This is because AI draws conclusions by predicting outcomes from the data it analyzes, such as an applicant’s communication or behavioral patterns. For example, an AI tool may treat a correlation between how a person speaks and their ability to problem-solve as meaningful, without accounting for nuanced speech differences or actual speech disabilities. Those factors have no bearing on a candidate’s intelligence or cognition, yet the tool may disqualify the candidate because of them. Similarly, as the plaintiff in the Workday case alleges, an employer can manipulate the AI tool to screen out applicants with certain characteristics. As a result, the AI hiring tool may discount and eliminate candidates for positions they are perfectly qualified to perform.

This recent NPR article on using AI in hiring and this 2017 report in The Economist on the use of AI to predict sexual identity are two interesting reads for understanding how, arguably, ethics and the human subtleties that differentiate one individual from another cannot be programmed into AI tools.

California is considering Assembly Bill 331, the Automated Decision Systems Accountability Act. If passed, AB 331 would regulate the use of automated systems in making “consequential” employment decisions such as compensation, promotion, hiring, termination, and automated task allocation. Specifically, AB 331 would: (1) mandate disclosure before AI is used in hiring, (2) require annual impact assessments, and (3) allow legal action against employers for the discriminatory impact of AI hiring tools.

The legal and moral dilemmas that AI poses in the employment space are no longer theoretical. AI has arrived in the employment screening process and is likely here to stay. But until AI can fully account for human nuance, its use in the hiring process will likely generate decisions that are reductive and potentially harmful.


Authors: Diana Maier, Partner, and Emily Harrington, Associate

This article has been prepared for general informational purposes only and does not constitute advertising, solicitation, or legal advice. If you have questions about a particular matter, please contact the Maier Law Group directly.