States and localities begin to focus on the use of artificial intelligence

As artificial intelligence (AI) becomes increasingly embedded in products, services, and business decisions, state and local lawmakers have considered and passed a series of AI laws. These range from laws that promote AI to more regulatory approaches that impose obligations on specific uses of AI. In a development paralleling the evolution of privacy law, states and localities have taken the initiative themselves. Unlike privacy, however, where a range of legislative approaches has been debated for years, approaches to AI governance have been far more varied and dispersed. If it continues, this patchwork approach could create regulatory compliance challenges for many uses of AI across jurisdictions.

States and localities are starting to move forward with a piecemeal approach to AI

In 2021, five jurisdictions — Alabama, Colorado, Illinois, Mississippi, and New York City — enacted legislation specifically focused on the use of AI. Their approaches varied, from creating bodies to study the impact of AI to regulating the use of AI in settings where governments were concerned about the increased risk of harm to individuals.

Some of these laws focus on promoting AI. For example, Alabama law establishes a council to review and advise the governor, the legislature, and other interested parties on the use and development of advanced technology and AI in the state. Mississippi law implements a mandatory K-12 curriculum that includes instruction in AI.

Conversely, some laws are more regulatory and skeptical of AI. For example, Illinois has passed two AI laws: one that expands a working group to study the impact of emerging technologies, including AI, on the future of work, and another that imposes notice, consent, and reporting obligations on employers using AI in hiring. Under existing Illinois law, an employer that requires candidates to record video interviews and uses AI analysis must: (1) notify each candidate that AI may be used to analyze the candidate’s video interview and assess the candidate’s fitness for the position; (2) provide each candidate with information explaining how the AI works and the general types of characteristics it uses to evaluate candidates; and (3) obtain the candidate’s consent. The law also limits the sharing of videos and gives candidates a right to have their videos deleted. A 2021 amendment imposes reporting requirements on employers that rely solely on an AI analysis of a video interview to determine whether a candidate advances to an in-person interview. The state’s Department of Commerce and Economic Opportunity must analyze certain reported demographic data annually and report to the governor and the General Assembly if the data reveals racial bias in the use of AI.

Colorado law takes a sector-specific approach, prohibiting insurers from using any external source of information, or any predictive algorithm or model, in a way that produces unfair discrimination. Unfair discrimination includes “the use of one or more external consumer data and information sources, as well as algorithms or predictive models using external consumer data and information sources, that have a correlation to race, color, national or ethnic origin, religion, sex, sexual orientation, disability, gender identity, or gender expression, where that use results in a disproportionately negative outcome for such classification or classifications, which negative outcome exceeds any reasonable correlation to the underlying insurance practice, including losses and underwriting costs.” This law operates alongside Colorado’s comprehensive privacy law, the Colorado Privacy Act, which takes effect on July 1, 2023 and gives consumers the right to opt out of the processing of their personal data for purposes of targeted advertising, the sale of personal data, or automated profiling in furtherance of decisions that produce legal or similarly significant effects.

In late 2021, New York City enacted a first-of-its-kind algorithmic accountability law, becoming the first jurisdiction in the United States to require that algorithms used by employers for hiring or promotion be audited for bias. The New York City law prohibits the use of AI hiring systems that have not passed an annual audit checking for discrimination based on race or gender. The law requires developers of such AI tools to disclose the qualifications and job characteristics the tool will use, and to give job applicants the option of an alternative process through which employers can review their candidacy. The law imposes fines of up to $1,500 per violation on employers or employment agencies.

California privacy regulations may also target AI

The California Privacy Protection Agency (CPPA), the new agency charged with rulemaking and enforcement authority under the California Privacy Rights Act (CPRA), is expected to issue regulations governing AI by 2023. The law specifically addresses consumers’ rights to understand, and opt out of, automated decision-making technologies such as AI and machine learning. In particular, the agency is directed to “[i]ssu[e] regulations governing access and opt-out rights with respect to businesses’ use of automated decision-making technology, including profiling, and requiring businesses’ responses to access requests to include meaningful information about the logic involved in those decision-making processes, as well as a description of the likely outcome of the process with respect to the consumer.”

In September 2021, the CPPA issued an Invitation for Preliminary Comments on Proposed Rulemaking (Invitation) and accepted comments through November 8, 2021. The Invitation asked four questions regarding the interpretation of the agency’s authority over automated decision-making:

  1. What activities should be considered to constitute “automated decision-making technology” and/or “profiling”?

  2. When should consumers be able to access information about businesses’ use of automated decision-making technology, and what processes should consumers and businesses follow to facilitate access?

  3. What information must businesses provide to consumers in response to access requests, including what businesses must do to provide “meaningful information about the logic” involved in the automated decision-making process?

  4. What is the scope of consumers’ opt-out rights with respect to automated decision-making, and what processes should consumers and businesses follow to facilitate opt-outs?

While the law calls for final rules to be adopted by July 2022, at a February 17 CPPA board meeting, Executive Director Ashkan Soltani announced that the draft regulations would be delayed. As we have previously noted, California’s effort to regulate certain automated decision-making processes could open the door to broader regulation of AI and should be watched closely.

Even as the federal government takes a closer look at AI, some states and localities seem poised to forge ahead. Indeed, many more states continue to debate AI proposals in 2022. Companies developing and deploying AI should continue to monitor this area as the regulatory landscape develops.

© 2022 Wiley Rein LLP

The content of this article is intended to provide a general guide on the subject. Specialist advice should be sought regarding your particular situation.