The European Parliament’s Committee on Culture and Education (the “Committee”) has proposed changes to the scope of the EU’s draft Artificial Intelligence Act (the “AI Act”), amending the list of high-risk AI systems and the provisions relating to prohibited AI practices. These proposed changes have yet to be reviewed and adopted, but they provide insight into how the AI Act could change. Here we highlight a selection of the main changes that would result from the Committee’s proposals.
These amendments are distinct from those proposed by the EU Committee of the Regions, which we have written about separately. For a recap on the AI Act, see our articles “Artificial intelligence – The European Commission publishes a draft regulation”, “EU Artificial Intelligence Act – what’s happened so far and what to expect next” and “The European law on artificial intelligence – recent updates”.
In the extracts below, the amended wording incorporates the Committee’s proposed additions to the AI Act, while wording that is proposed to be deleted is marked in square brackets, for example [Proposed deletion: …].
Publicly accessible spaces include virtual spaces
As more of our lives and work take place online – a trend likely to continue given developments in the metaverse – it is no surprise that the Committee proposes that publicly accessible spaces can be physical or virtual, in both cases “regardless of whether certain access conditions may apply”.
The AI Act prohibits the use of “real-time” biometric identification systems in publicly accessible spaces for law enforcement purposes, unless strictly necessary for specific purposes (e.g. the search for victims of crime or the prevention of a specific, substantial and imminent threat to life).
The Committee makes a number of proposals in relation to this prohibition, including removing the exceptions and prohibiting biometric identification systems whether or not they operate in real time.
The point we find particularly interesting is that the current draft of the AI Act states that the notion of publicly accessible spaces does not cover online spaces, because they are not physical spaces. The Committee, however, is clearly concerned that “real-time” biometric identification systems could be used in the virtual world and should be prohibited in virtual spaces as well (NB: no explanation is given as to why the proposed “virtual” is preferred over the deleted “online”).
The message is that AI can pose a risk of harm whether that harm occurs in physical, online or virtual public spaces, and regardless of whether conditions of access apply.
Proposed Amendments to Article 5 (Prohibited AI Practices)
For the purposes of this Regulation, the notion of publicly accessible space should be understood as referring to any physical or virtual place accessible to the public, regardless of whether the place in question is privately or publicly owned. Therefore, the notion does not cover places that are private in nature and normally not freely accessible to third parties, including law enforcement authorities, unless those parties have been specifically invited or authorized, such as homes, private clubs, offices, warehouses and factories. [Proposed deletion: Online spaces are not covered either, as they are not physical spaces.] The same principle should apply to publicly accessible virtual spaces. However, the mere fact that certain conditions for accessing a particular space may apply, such as admission tickets or age restrictions, does not mean that the space is not publicly accessible within the meaning of this Regulation. Consequently, in addition to public spaces such as streets, relevant parts of government buildings and most transport infrastructure, spaces such as cinemas, theatres, shops and shopping malls are normally also publicly accessible. Whether a given space is accessible to the public should, however, be determined on a case-by-case basis, having regard to the specificities of the individual situation at hand.
Harm includes economic harm
The AI Act also seeks to prohibit the placing on the market or putting into service of any AI that exploits the vulnerabilities of specific groups, or that uses subliminal techniques to materially distort a person’s behavior in a way that causes harm to that person or to another person. But what types of harm are covered?
The Committee proposes to amend the AI Act so that the harms in this case:
- include economic harm as well as psychological harm; and
- can explicitly include both material and immaterial harm.
Proposed Amendments to Article 5 (Prohibited AI Practices)
The following artificial intelligence practices are prohibited:
(a) the placing on the market, putting into service or use of an AI system that deploys [Proposed deletion: subliminal] techniques [Proposed deletion: beyond a person’s consciousness in order to] with the effect or probable effect of materially [Proposed deletion: distort] distorting a person’s behavior in a manner that causes or is likely to cause that person or another person material or immaterial harm, including physical, [Proposed deletion: or] psychological or economic harm;
(b) the placing on the market, putting into service or use of an AI system that exploits any of the vulnerabilities of a [Proposed deletion: specific group of persons] person due to their known or foreseeable personality or social or economic situation, or due to their age or physical or mental [Proposed deletion: disability] capacity, in order to materially distort a person’s behavior [Proposed deletion: pertaining to that group] in a manner that causes or is likely to cause that person or another person material or immaterial harm, including physical, psychological or economic harm;
Machine-generated news is high risk
The AI Act identifies specific types of AI systems as high-risk. These include AI systems used in the management and operation of critical infrastructure, education and vocational training, law enforcement, and the administration of justice and democratic processes. High-risk AI systems would be subject to specific obligations under the AI Act, such as appropriate human oversight and minimum requirements for technical specifications and documentation.
The Committee proposes an additional high-risk AI system: machine-generated news. The message here is that the list of high-risk AI systems is not static; it will need to be updated over time as AI systems (and the market in which they are used) change.
Proposal to add machine-generated news as a high-risk AI system
AI systems used in media and culture, especially those that create and deliver machine-generated news stories and those that suggest or prioritize audio-visual content, should be considered high-risk because these systems can influence society, spread disinformation and misinformation, negatively impact elections and other democratic processes, and impact cultural and linguistic diversity.
The AI Act was always going to be debated and amended, and we now see specific proposals as to what those changes should be. This does not mean they will be accepted, but they do give an indication of the areas of greatest risk and concern, as well as the areas where some feel the AI Act is not drafted as it should be (e.g. where more precision or flexibility is wanted). In other words, watch this space.
Overall, the rapporteur welcomes the European Commission’s proposal; however, the rapporteur would like to suggest a few changes, aimed primarily at expanding the list of high-risk AI applications in education, media and culture under Annex III and amending certain provisions on prohibited practices under Article 5.