The use of Artificial Intelligence (AI) in the recruiting and hiring process has grown increasingly popular in recent years.  Many businesses, seeking to lower hiring costs and reduce potential claims of discrimination (by taking human discretion out of certain aspects of the hiring process), have turned to AI to handle functions such as locating talent, screening applicants, administering skills-based tests, and even conducting certain phases of the pre-hire interview process.


While automating various aspects of the hiring (and post-hire performance management) process can be effective in eliminating the potential for intentional discrimination, intentional discrimination is not the only type that federal and state anti-discrimination laws prohibit.  Under (1) Title VII of the Civil Rights Act of 1964, which protects against discrimination on the basis of race, color, national origin, religion, and sex (including sex-related factors such as pregnancy, sexual orientation, and gender identity); (2) the Americans with Disabilities Act (ADA), which prohibits discrimination on the basis of actual, perceived, or historical disability; and (3) the Age Discrimination in Employment Act (ADEA), which protects individuals 40 years of age or older, discrimination can also be found where employers use tests or selection procedures that, while intended to be neutral, have the effect of disproportionately excluding persons based on one or more of these protected characteristics.  This is known as “disparate impact” or “adverse impact” discrimination.

If the AI tool a business uses inadvertently screens out individuals with physical or mental disabilities (e.g., by assessing candidates based on their keystrokes and thereby excluding individuals who cannot type due to a disability), or poses questions that may be more familiar to one race, sex, or other cultural group than to another, the result could be a finding of disparate impact discrimination.

Recent guidance from the U.S. Equal Employment Opportunity Commission (EEOC) – the federal agency responsible for administering anti-discrimination laws – confirms that rooting out AI-based discrimination is among the Commission’s top strategic priorities.  EEOC guidance also confirms that where such discrimination occurs, the EEOC will hold the employer, not the AI vendor, responsible.  That means the employer could be held liable for many of the same types of damages as are available for intentional discrimination, including back pay, front pay, emotional distress and other compensatory damages, and attorneys’ fees.

Due to the risks involved, businesses should consult with employment counsel before implementing AI tools in the hiring and performance management processes.  While not an exhaustive list, the following are among the mechanisms counsel can use to help businesses mitigate risk.

1.  Question AI vendors about the diversity and anti-bias mechanisms they build into their products.  Many vendors boast that their AI tools actually foster, rather than hinder, diversity.  By selecting vendors that prioritize diversity, and by asking each vendor to explain how its products achieve this goal, businesses can potentially decrease the likelihood that their chosen AI solutions will yield a finding of discrimination.

2.  Understand what the AI product measures, and how it measures it.  As noted above, measuring typing speed or keystrokes, or using culturally biased hypotheticals, can increase the likelihood that an AI tool will be deemed discriminatory.  By questioning AI vendors in advance about the specific measuring tools built into the AI product, businesses can more easily distinguish between helpful and potentially costly AI.

3.  Ask for the AI vendor’s performance statistics.  Determining whether an AI-based technology causes a disparate impact involves a complex statistical analysis.  While not applied in every case, one rule of thumb the EEOC uses in assessing potential disparate impact is known as the “four-fifths rule.”  This rule compares the selection rate – the percentage of candidates who are hired, promoted, or otherwise selected through the use of the AI technology – for one protected classification (e.g., men) to the selection rate for another (e.g., women).  If the percentage of women chosen, divided by the percentage of men chosen, is less than 80% (or four-fifths), this can be an indication that discrimination occurred (a brief worked example appears after this list).  While even a passing score of 80% or more does not necessarily immunize employers from liability, when choosing an AI product, businesses should learn whether their AI vendors have analyzed their products using the four-fifths rule and other statistical and practical analyses, and what the results of those analyses have shown.

4.  Test the company’s AI results annually.  Just as businesses should question their AI vendors about their statistical findings before implementing an AI hiring solution, businesses should also self-monitor after the AI product is chosen and implemented.  At least annually, companies should consider running their own internal statistical analyses to determine whether, in the context of their unique business, the AI product yields fair, non-discriminatory results.

5.  Offer accommodations to disabled individuals.  Where a candidate discloses a physical or mental disability that prevents (or limits) their participation in AI-driven processes, the employer should work with the individual to determine whether there is another hiring or performance management process, or some other form of reasonable accommodation, that can be used in lieu of the AI at issue.

6.  When in doubt, seek indemnification.  Since the AI vendor is ultimately in the best position to design AI tools that avoid both intentional and unintentional discrimination, businesses should consider building indemnity language into the vendor agreement that protects the business in the event the vendor fails to design its AI in a manner that prevents actual and/or unintended bias.
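
To make the four-fifths rule described in item 3 concrete, the following is a minimal sketch, in Python, of the calculation an employer or vendor might run.  The group labels, applicant counts, and hire counts are hypothetical, and the 0.80 threshold is the EEOC rule of thumb discussed above; this is an illustration only, not a substitute for the fuller statistical and practical analyses counsel or an expert would perform.

```python
# Minimal sketch of the EEOC "four-fifths" (80%) rule of thumb.
# All applicant and hire counts below are hypothetical, for illustration only.

def selection_rate(hired: int, applicants: int) -> float:
    """Share of a group's applicants who were selected (hired, promoted, etc.)."""
    return hired / applicants

# Hypothetical outcomes from an AI screening tool.
men_rate = selection_rate(hired=60, applicants=100)    # 0.60
women_rate = selection_rate(hired=30, applicants=100)  # 0.30

# Compare the lower selection rate to the higher one.
impact_ratio = min(men_rate, women_rate) / max(men_rate, women_rate)  # 0.50

if impact_ratio < 0.80:  # below four-fifths
    print(f"Impact ratio {impact_ratio:.2f}: possible adverse impact; further review warranted.")
else:
    print(f"Impact ratio {impact_ratio:.2f}: passes the four-fifths rule of thumb "
          "(which, as noted above, does not by itself immunize the employer).")
```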


Author: Shannon Pierce is a director at Fennemore. Licensed in California and Nevada, Shannon is on the cutting edge of both technology and the changing business culture. She has nearly 20 years of experience litigating on behalf of management concerning claims of employment discrimination, wrongful termination, leaves of absence, and other traditional employment and commercial litigation.