Artificial selection

EU Artificial Intelligence Act – Amendments proposed by the EU Committee of the Regions

The European Committee of the Regions (the “ECR”) has published proposed amendments to the EU Artificial Intelligence Act (the “AI Act”). These proposed changes are yet to be reviewed by the European Commission, but they provide insight into how the AI Act could change. Here we highlight a selection of the more significant changes that would result from the ECR’s proposals.

These amendments are distinct from those proposed by the EU’s Committee on Culture and Education, which we have written about separately. For a recap on the AI Act, see our articles “Artificial intelligence – The European Commission publishes a draft regulation”, “EU Artificial Intelligence Act – what’s happened so far and what to expect next” and “The European law on artificial intelligence – recent updates”.

Proposed additions to the AI Act are shown in [square brackets].

What is AI? Is the definition appropriate?

How the AI Act defines AI matters. The definition will ultimately determine the scope of the legislation, and what the AI Act does (and does not) affect.

The AI Act offers a “single future-proof definition of AI” to help achieve uniform application of the AI Act. Stakeholders who took part in the consultations on how the AI Act should be drafted called for a “narrow, clear and precise definition of AI”.

The ECR considers that the definition of AI can be broadened and improved:

  • it should be clear that the list of AI techniques and approaches in the AI Act is not exhaustive and should be “regularly updated” (potentially allowing regulators or courts to broaden the definition);
  • AI is not just a matter of techniques and approaches – it is also part of social practices, identity and culture; and
  • algorithms developed by other algorithms should also be subject to the AI Act.

Proposed Amendment to Article 3 (Definitions)

“artificial intelligence system” (AI system) means software that is developed with one or more of the techniques and approaches listed [(non-exhaustively)] in Annex I, [combined with social practices, identity and culture,] and that can, for a given set of human-defined objectives, [by observing its environment through the collection of data, the interpretation of the structured or unstructured data collected, the management of knowledge or the processing of the information derived from those data,] generate outputs such as content, predictions, recommendations or decisions influencing the environments they interact with;

Annex I – AI techniques and approaches

(a) Machine learning approaches, including supervised, unsupervised and reinforcement learning, using a wide variety of methods including deep learning;

(b) Logic- and knowledge-based approaches, including knowledge representation, inductive (logic) programming, knowledge bases, inference and deductive engines, (symbolic) reasoning and expert systems;

(c) Statistical approaches, Bayesian estimation, search and optimization methods.

Harm includes economic harm… for some high-risk AI

The AI Act also seeks to prohibit the placing on the market, putting into service or use of any AI that exploits the vulnerabilities of specific groups, or that uses subliminal techniques to materially distort a person’s behavior in a way that causes harm to that person or to another person. But what types of harm are covered?

The ECR proposes to expand the types of harm covered by the prohibition on AI systems that use subliminal techniques to distort a person’s behavior. We wrote recently about how the EU’s Committee on Culture and Education (the “ECCE”) has also proposed changes to the scope of the AI Act. There are notable differences in approach between the ECCE and the ECR:

  • the ECR proposes to prohibit AI systems that use subliminal techniques which have or are likely to have an adverse effect on (in particular) consumers, including (but not limited to) “monetary loss or economic discrimination”. In contrast, the amendments proposed by the ECCE (1) refer instead to “economic harm” (not monetary loss or economic discrimination), and (2) do not limit the prohibition to consumers (i.e. such AI systems would be prohibited in respect of other groups too if they cause or are likely to cause economic harm).
  • the ECCE’s proposals to address the risk of economic harm relate to AI systems that use subliminal techniques (Article 5(1)(a)), but not to those that exploit the vulnerabilities of specific groups of people to materially distort their behavior (Article 5(1)(b)). Why the amendment is proposed for one but not the other is unexplained. The ECR, on the other hand, proposes to include economic harm in both of these prohibited AI practices.

Proposed Amendment to Article 5 (Prohibited AI Practices)

The following artificial intelligence practices shall be prohibited:

(a) the placing on the market, putting into service or use of an AI system that deploys subliminal techniques beyond a person’s consciousness in order to materially distort a person’s behavior in a manner that causes or is likely to cause that person or another person physical or psychological harm[, infringes or is likely to infringe the fundamental rights of another person or group of persons, including their physical or psychological health and safety, has or is likely to have an adverse effect on consumers, including monetary loss or economic discrimination, or undermines or is likely to undermine democracy and the rule of law];

Human intervention is (sometimes) required for high-risk AI used by public authorities

The AI Act requires human oversight of high-risk AI systems: “High-risk AI systems shall be designed and developed in such a way, including with appropriate human-machine interface tools, that they can be effectively overseen by natural persons during the period in which the AI system is in use.”

However, the ECR is concerned that oversight alone is sometimes not enough. Some decisions that would otherwise be made by high-risk AI alone should require human intervention. Two such high-risk AI systems are those used by public authorities:

5. Access to and enjoyment of essential private services and public services and benefits:

(a) AI systems intended to be used by public authorities or on behalf of public authorities to evaluate the eligibility of natural persons for public assistance benefits and services, as well as to grant, reduce, revoke, or reclaim such benefits and services;

(b) AI systems intended to be used to evaluate the creditworthiness of natural persons or establish their credit score, with the exception of AI systems put into service by small-scale providers for their own use;

Proposed addition to Article 14 (Human oversight of high-risk AI)

Any decision taken by the AI systems referred to in points (a) and (b) of Annex III(5) shall be subject to human intervention and based on a diligent decision-making process. Human involvement in these decisions must be guaranteed.

This article was not automatically generated – there was human intervention. And while we would like this article to be entirely original, our comment on the EU Committee on Culture and Education’s proposed amendments to the AI Act is equally applicable here:

The AI Act was always going to be debated and amended. We now see specific proposals as to what those changes should be. This does not mean they will be accepted, but they do give an indication of the areas of greatest risk and concern, as well as of instances where the AI Act might not be drafted as some would like (e.g. with more precision or flexibility). In other words, watch this space.

The Commission’s goal of making the EU a global leader in the responsible and human-centred development of AI can only be achieved if local and regional authorities play an important role. Local and regional authorities are best placed to help create the right environment to drive investment in, and trust in, AI in the years to come.