The California Fair Employment and Housing Council (FEHC) recently took a major step toward regulating the use of artificial intelligence (AI) and machine learning (ML) in employment decision-making. On March 15, 2022, the FEHC published Draft Employment Regulation Changes Regarding Automated Decision Systems that specifically incorporate the use of “automated decision systems” into existing rules governing employment and hiring practices in California.
The proposed regulations would make it unlawful to use automated decision systems that “screen out or tend to screen out” applicants or employees (or classes of applicants or employees) on the basis of a protected characteristic, unless the use is shown to be job-related and consistent with business necessity. The proposed regulations also contain significant and onerous record-keeping requirements.
The proposed regulations must undergo a 45-day public comment period (which has not yet begun) before the FEHC can move forward with final regulations.
“Automated decision systems” are broadly defined
The draft regulations define “automated decision system” broadly as “[a] computational process, including one derived from machine learning, statistics, or other data processing or artificial intelligence techniques, that screens, evaluates, categorizes, recommends, or otherwise makes a decision or facilitates human decision making that impacts employees or applicants.”
The proposed regulations provide the following examples of automated decision systems:
- Algorithms that screen resumes for particular terms or patterns;
- Algorithms that use facial and/or voice recognition to analyze facial expressions, word choices and voices;
- Algorithms that use game-based tests to make predictive assessments of an employee or applicant, or to measure characteristics including, but not limited to, dexterity, reaction time, or other abilities or physical or mental traits; and
- Algorithms that use online tests intended to measure personality traits, aptitudes, cognitive abilities, and/or cultural fit.
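To make the first example concrete, here is a minimal, purely hypothetical sketch of a resume keyword filter of the kind the draft regulations would reach. The keywords, function name, and logic are illustrative assumptions for this article, not anything drawn from the regulations themselves:

```python
# Hypothetical sketch of a naive resume keyword filter. Everything here
# (keywords, names) is illustrative only.

REQUIRED_KEYWORDS = {"python", "sql"}  # terms the hypothetical employer screens for

def screens_out(resume_text: str) -> bool:
    """Return True if this resume would be rejected by the keyword filter."""
    words = set(resume_text.lower().split())
    return not REQUIRED_KEYWORDS.issubset(words)

# Even a filter this simple "makes a decision ... that impacts applicants"
# and so would fall within the draft definition of an automated decision system.
print(screens_out("Experienced in Python and SQL reporting"))  # False (resume passes)
```

The point of the sketch is that the covered technology need not be sophisticated ML; a plain keyword match that rejects resumes is enough to trigger the definition, and could "tend to screen out" protected groups if the chosen keywords correlate with a protected characteristic.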
Relatedly, “algorithm” is defined as “[a] process or set of rules or instructions, typically used by a computer, to perform a calculation, solve a problem, or render a decision.”
Notably, this definition is quite broad and will likely cover some applications or systems that are only indirectly connected to employment decisions. For example, the phrase “or facilitates human decision making” is ambiguous; read broadly, it could sweep in technologies designed to aid human decision-making in small or subtle ways.
The proposed regulations would make it unlawful for any covered entity to use automated decision systems that “screen out or tend to screen out” applicants or employees on the basis of a protected characteristic, unless the use is shown to be job-related and consistent with business necessity.
The proposed regulations would apply to employer (and covered third-party) decision-making throughout the employment life cycle, from pre-employment recruitment and screening through employment decisions, including compensation, promotion, discipline, and termination. The proposed regulations would apply the limitations on automated decision systems to characteristics already protected under California law.
- For example, an automated decision system that measures an applicant’s reaction time may unlawfully screen out individuals with certain disabilities. Unless an affirmative defense applies (for example, the employer demonstrates that quick reaction time when using an electronic device is job-related and consistent with business necessity), employment actions based on decisions made or facilitated by such an automated decision system may constitute unlawful discrimination.
- Likewise, an automated decision system that analyzes an applicant’s tone or facial expressions during a recorded interview may unlawfully screen out individuals based on race, national origin, gender, or a number of other protected characteristics. Again, unless an affirmative defense applies to such use, employment actions based on decisions made or facilitated by such a system may constitute unlawful discrimination.
The precise scope and reach of the proposed regulations are ambiguous, in that the key definitions cover automated decision systems that screen out “or tend to screen out” applicants or employees based on a protected characteristic. The proposed regulations offer no clear explanation of the phrase “tend to screen out,” and the inherent ambiguity of that language presents a real risk that the regulations will extend to systems or processes that are not actually involved in screening out applicants or employees on the basis of a protected characteristic.
Proposed Regulations Apply Not Only to Employers, but Also to “Employment Agencies,” Potentially Including AI/ML Vendors
The proposed regulations apply not only to employers but also to “covered entities,” which include any “employment agency, labor organization[,] or apprenticeship training program.” Notably, “employment agency” is defined to include, but not be limited to, “any person who provides automated decision systems or services involving the administration or use of such systems on behalf of an employer.”
Accordingly, any third-party vendor that develops AI/ML technologies and sells those systems to employers that use them for employment decisions is potentially liable if its automated decision system screens out, or tends to screen out, an applicant or employee on the basis of a protected characteristic.
Proposed Regulations Require Significant Record Keeping
Covered entities are already required to maintain certain personnel and other employment records relating to employment benefits and to any applicants or employees. Under the FEHC’s draft regulations, the required retention period would increase from two years to four years. And, as relevant here, those records would include “machine learning data.”
Machine learning data includes “all data used in the process of developing and/or applying machine learning algorithms that are used as part of an automated decision system.” This definition expressly includes the datasets used to train an algorithm. It also includes data provided by individual applicants or employees. And it includes data produced by the operation of an automated decision system (i.e., the algorithm’s output).
Given the nature of algorithms and machine learning, this definition of machine learning data could require an employer or vendor to retain not merely the last four years of data provided to an algorithm, but all data (including training datasets) ever provided to it, for a period extending four years after the algorithm’s last use.
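As a rough sketch of the retention window described above: the four-year period comes from the draft regulations, while the function and the example dates are illustrative assumptions only.

```python
from datetime import date

RETENTION_YEARS = 4  # retention period under the FEHC draft regulations

def retention_end(last_use: date) -> date:
    """Earliest date machine learning data tied to an algorithm could be
    discarded, assuming the clock runs from the algorithm's last use."""
    return last_use.replace(year=last_use.year + RETENTION_YEARS)

# A hypothetical algorithm last used on March 15, 2022 would require its data,
# including training datasets, to be kept through at least March 15, 2026.
print(retention_end(date(2022, 3, 15)))  # 2026-03-15
```

Because the clock restarts with every use, data tied to an algorithm in continuous service may effectively never age out.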
The regulations add that anyone who advertises, sells, provides, or uses any screening tool, including but not limited to an automated decision system, to an employer or other covered entity must maintain records of the “assessment criteria used by the Automated Decision System for each employer or Covered Entity to which the Automated Decision System is provided.”
In addition, the proposed regulations would add causes of action for aiding and abetting where a third party provides unlawful assistance, solicitation, encouragement, or advertising, such as where the third party advertises, sells, provides, or uses an automated decision system that unlawfully limits, screens out, or discriminates against applicants or employees based on protected characteristics.
The draft regulations are still in a public workshop phase, after which they will be subject to a 45-day public comment period, and may undergo changes before final implementation. Although the official comment period is not yet open, interested parties can submit their comments now if they wish.
Given what we know about the potential for unintended bias in AI/ML, employers cannot simply assume that an automated decision system produces objective or bias-free results. Therefore, California employers are advised to:
- Know where and how automated decision systems are used in employment decision-making to prepare for these potential new regulations;
- Strive to understand the specific inputs and assessments made by the algorithms that underpin automated decision systems;
- Be prepared to demonstrate why the outcomes of automated decision systems that screen out or tend to screen out applicants or employees based on protected characteristics are job-related and consistent with business necessity; and
- Review agreements with vendors who provide automated decision systems.