On May 12, 2022, the United States Equal Employment Opportunity Commission (EEOC) released guidance on the use of software, algorithms, and artificial intelligence (AI) to assess job applicants and employees. The EEOC guidance explains how employers' use of algorithmic decision-making tools can violate the Americans with Disabilities Act (ADA).
In its guidance, the EEOC identifies different types of software incorporating algorithmic decision-making that employers may use at various stages of the hiring process. Examples include resume-scanning programs that prioritize applications containing certain keywords; employee-monitoring software that rates employees based on keystrokes or other factors; video interviewing software that assesses candidates based on their facial expressions and speech patterns; and testing software that generates "job fit" scores for personality, aptitude, or cognitive skills. The EEOC also explains that an employer may be liable under the ADA for its use of these tools even when they are designed or administered by another entity, such as a software vendor.
Common Ways AI and Algorithmic Tools May Violate the ADA
The EEOC guidance highlights several ways the use of AI or algorithmic tools can violate the ADA. For example, an algorithmic decision-making tool that "screens out" a person because of a disability may violate the ADA, even though that person could perform the job with a reasonable accommodation. According to the EEOC, screening out occurs when a qualified applicant or employee loses an employment opportunity because a disability prevents them from meeting a selection criterion or lowers their performance on it.
Screening out can occur when a person's disability prevents the algorithmic decision-making tool from measuring what it is intended to measure. For example, the EEOC states that video interviewing software that analyzes candidates' speech patterns to assess their problem-solving abilities will not fairly score a candidate whose speech impediment causes significant differences in speech patterns. If such a candidate is rejected because of a low score caused by the speech impediment, the candidate may have been unlawfully screened out.
According to the EEOC, screening out an individual because of a disability is unlawful if the person could perform the essential functions of the job, with a reasonable accommodation if one is legally required. For example, some employers assess applicants and employees using "gamified" tests, in which video games are used to measure abilities, personality traits, and other qualities. If an employer requires a certain score on a gamified memory assessment, a blind candidate will not be able to see the screen to play the game. That candidate, however, may still have an excellent memory and be fully capable of performing the essential functions of a job that requires one.
Another way algorithmic decision-making tools can violate the ADA is if the employer fails to provide a reasonable accommodation that is necessary for a job applicant or employee to be rated fairly and accurately by the algorithm. In its guidance, the EEOC gives the example of a job applicant whose manual dexterity is limited due to a disability. Such a candidate may have difficulty passing a knowledge test that requires the use of a keyboard or trackpad, so that type of test would not accurately measure that particular candidate's knowledge. According to the EEOC, the employer in this scenario should provide an accessible version of the test as a reasonable accommodation, such as one that allows oral responses, unless doing so would cause undue hardship.
Finally, an employer may violate the ADA by adopting an algorithmic decision-making tool that constitutes a disability-related inquiry or medical examination before making a conditional offer of employment to an applicant, even if the applicant does not have a disability. According to the guidance, an assessment includes "disability-related inquiries" if it asks job applicants or employees questions likely to elicit information about a disability, or asks directly whether they have a disability. An assessment is a "medical examination" if it seeks information about a person's physical or mental impairments or health.
However, not every algorithmic decision-making tool that requests health-related information makes "disability-related inquiries" or conducts "medical examinations." The EEOC states, for example, that a personality test does not make disability-related inquiries merely because it asks whether the individual is "described by friends as 'generally optimistic,'" even though the answer might be linked in some way to certain mental health diagnoses.
Tips for Preventing Discrimination
The EEOC guidance recommends several "promising practices" that can reduce the likelihood that an algorithmic decision-making or AI tool will violate the ADA:
- Use tools designed to be accessible to people with as many different types of disabilities as possible. If you use tools designed by a vendor, confirm with the vendor whether they developed the tool with people with disabilities in mind.
- Make it clear that reasonable accommodations, including alternate formats and alternative tests, are available for people with disabilities and provide clear instructions for requesting reasonable accommodations.
- Prior to the assessment, provide individuals with as much information about the tool as possible, including the traits or characteristics it measures, the methods by which they will be measured, and any disabilities that might potentially lower the results.
- Ensure that the tools measure only abilities or qualifications that are actually needed for the job, and that those abilities or qualifications are measured directly rather than through characteristics or scores that are merely correlated with them.