Artificial Intelligence Briefing: Monitoring AI Regulations and Legislation | Faegre Drinker Biddle & Reath LLP

As more organizations use artificial intelligence and algorithms to drive decision-making processes, policymakers are beginning to address concerns about these tools, including their lack of transparency and their potential to generate bias and unintentional discrimination. In our inaugural AI briefing, we provide an overview of recent AI regulatory and legislative developments in the United States that should be a priority for any organization using AI or algorithms.

Regulatory and legislative developments

  • On Capitol Hill, Sens. Ron Wyden (D-OR) and Cory Booker (D-NJ) and Rep. Yvette Clarke (D-NY) introduced the Algorithmic Accountability Act of 2022, which updates a 2019 version of the bill that failed to gain traction. Among other things, the updated bill would affect algorithms relating to education and vocational training; employment; utilities and transportation; family planning and adoption; financial services; health care; housing; legal services; and other matters determined by the Federal Trade Commission (FTC) through rulemaking. The bill would require certain companies using algorithms to conduct impact assessments on bias and other issues and submit reports to the FTC.
  • Also in Congress, on February 4, the House of Representatives passed the America Creating Opportunities for Manufacturing, Pre-Eminence in Technology, and Economic Strength Act of 2022 (America COMPETES Act), which is now before the Senate for consideration. The bill includes an amendment proposed by Rep. Ayanna Pressley (D-MA) that directs the National Institute of Standards and Technology (NIST) to create a new office dedicated to studying bias in the use of artificial intelligence. Additionally, it directs NIST to publish guidance on reducing the disparate impacts that artificial intelligence can have on historically marginalized communities. NIST is already working on an AI risk management framework, which will be the subject of a two-part workshop in March.
  • The Rhode Island General Assembly is considering a bill that would restrict insurers’ use of external consumer data, algorithms and predictive models. The bill, which would mirror Colorado legislation enacted last year, would direct the director of business regulation, in consultation with the health insurance commissioner, to engage in a stakeholder process and adopt implementing regulations. On February 9, the House Corporations Committee recommended that the bill be held for further study.
  • The New York City Council recently passed a bill regulating the use of AI in employment decisions. When it takes effect on January 1, 2023, the city law will require employers to perform bias audits for any AI processes they use to screen applicants for jobs or promotions, and it states that these audits must be carried out no more than one year before the AI process is used. The law also requires employers to post information about these AI processes on the employer’s website and to provide candidates and employees with at least 10 days’ notice, including information about the process, before the AI process may be used. Candidates and employees have the right to opt out of the AI process and to require the employer to use an alternative process to assess the objecting candidate or employee.
  • The Illinois General Assembly last year passed amendments to its Artificial Intelligence Video Interview Act, originally enacted in 2019. The amendments, which took effect on January 1, 2022, impose data collection and reporting obligations on “[a]n employer that relies solely upon an artificial intelligence analysis of a video interview to determine whether an applicant will be selected for an in-person interview.” The law requires employers to annually collect and report to the Department of Commerce and Economic Opportunity the race and ethnicity of applicants who do or do not receive in-person interviews after AI analysis is used, as well as the race and ethnicity of all applicants hired. The law also requires the Department of Commerce and Economic Opportunity to “analyze the data reported and report to the Governor and General Assembly by July 1 of each year whether the data discloses a racial bias in the use of artificial intelligence.”
  • At the National Association of Insurance Commissioners, the Accelerated Underwriting (A) Working Group has posted an updated version of its educational document for comment through February 11. The working group has been developing the document since last year; it aims to help regulators better understand how accelerated underwriting is used by life insurers and makes recommendations for evaluating accelerated underwriting. Among other things, the revised draft includes a revised definition of “accelerated underwriting” with specific references to big data and artificial intelligence. The working group will meet at 4 p.m. ET on February 17 to discuss the draft and the comments received.

What we’re reading

  • AI and life insurers: Azish Filabi and Sophia Duffy of the American College of Financial Services wrote a recent paper on life insurers’ use of AI. Among other things, the paper proposes a self-regulatory organization that could work with the National Association of Insurance Commissioners to develop standards and oversee certification and auditing processes.
  • Transatlantic AI regulation: A recent article from the Brookings Institution summarizes AI regulatory developments in the EU and US and outlines “steps these two great democracies can take to align on reducing the harms of AI.”

Key upcoming events

  • February 17: Colorado stakeholder session on SB 169, which restricts insurers’ use of external consumer data, algorithms and predictive models.