European Parliament to vote on new draft of the Artificial Intelligence Act

The new draft is the work of the European Parliament's committees for Civil Liberties, Justice and Home Affairs (LIBE) and for Internal Market and Consumer Protection (IMCO), and contains extensive amendments to the draft originally proposed by the European Commission back in April 2021.

The European Parliament will consider the new draft (the Parliament Committees' Draft) during its June plenary session. If adopted, this draft will serve as the basis of the European Parliament's negotiating position when it enters into the "trilogue" negotiations with the European Commission and the Council (which adopted its own position in late 2022).

This article examines the most significant developments in this latest draft.

Defining AI

One of the key challenges facing the EU legislators drafting the AI Act is to craft a definition of the underlying concept, AI, that is, as far as possible, future-proof and technology-neutral. To this end, the Parliament Committees' Draft defines an AI system as:

"a machine-based system that is designed to operate with varying levels of autonomy and that can, for explicit or implicit objectives, generate outputs such as predictions, recommendations, or decisions that influence physical or virtual environments."

This definition represents a further distillation of the key subject matter. The European Commission draft defined an AI system as software developed with one or more of an annexed list of technical/mathematical techniques and approaches, and the subsequent Council position draft still hinged on underlying techniques and approaches, albeit described more broadly. The definition in the Parliament Committees' Draft is thus more succinct, but also arguably broader in scope.

Addressing new technologies

Of course, the most significant development in this space since the European Commission's original draft has been the recent widespread adoption of new AI technologies trained on massive datasets, including generative AI such as ChatGPT. The Parliament Committees' Draft includes new proposed obligations in relation to such technologies, building on the approach taken in the Council position draft.

The Parliament Committees' Draft addresses these technologies with the terms "foundation model", "general purpose AI system" and "generative AI".

A "foundation model" is defined as meaning

"an AI model that is trained on broad data at scale, is designed for generality of output, and can be adapted to a wide range of distinctive tasks".

Under the Parliament Committees' Draft, providers of foundation models would be subject to certain specific new obligations, including to:

  • demonstrate the identification, reduction and mitigation of reasonably foreseeable risks to health, safety, fundamental rights, the environment and democracy and the rule of law
  • train the foundation model only on datasets which are subject to appropriate data governance measures (in particular for suitability and to mitigate biases)
  • register the foundation model in an EU database

A "general purpose AI system" is defined as meaning

"an AI system that can be used in and adapted to a wide range of applications for which it was not intentionally and specifically designed".

The recitals explain that a general purpose AI system can be an implementation, or reuse, of a foundation model; the more stringent obligations would apply in relation to foundation models because of the central and fundamental role they play as a foundation on which further "downstream" uses can be based.

Foundation models that are used as "generative AI", i.e. used to generate content such as complex text, images, audio, or video, would be subject to further bespoke obligations. These include further transparency requirements, an obligation to ensure safeguards, in training and developing the model, against the generation of content in breach of EU law, and an obligation to document, and make publicly available, a summary of any training material used that is subject to copyright.

The risk-based approach

The Parliament Committees' Draft retains, but notably amends, the risk-based approach provided for in the previous drafts. Essentially, the AI Act addresses AI systems according to the risks they present: those which present an unacceptable risk are outright prohibited, while those which are high risk are subject to the most extensive obligations.

Prohibited AI systems

The list of AI practices which present unacceptable risk and are thus outright prohibited has been revised significantly. Changes proposed in the Parliament Committees' Draft include:

  • the prohibition on subliminal techniques or manipulative and deceptive techniques to distort a person's behaviour has been maintained, but subject to a new exception for "AI systems intended to be used for approved therapeutical purposes on the basis of specific informed consent of the individuals that are exposed to them"
  • a new prohibition on biometric categorisation systems that categorise natural persons according to sensitive or protected attributes or characteristics, which is also subject to a similar therapeutic exception
  • a new prohibition on AI systems that predict and/or assess the risks of individuals (re)committing criminal offences
  • a new prohibition on AI systems that create or expand facial recognition databases through the untargeted scraping of facial images from the internet or CCTV footage
  • a new prohibition on AI systems that infer the emotions of natural persons in the areas of law enforcement, border management, the workplace and educational institutions

High risk AI systems

As under the previous drafts, AI systems which are considered "high risk" are not prohibited, but are subject to extensive obligations. These are addressed in the Annexes: Annex II lists EU harmonisation legislation, and any AI system which is a product covered by such legislation, or which is intended to be used as a safety component of such a product, will be considered high risk. Annex III provides a list of critical areas and use cases. The approach here has been updated since the Commission draft: whereas previously, inclusion in the Annex III list automatically indicated high risk, the Parliament Committees' Draft includes an extra "hurdle", in that an AI system will be considered high risk to the extent that it falls under one or more of the listed categories and, in addition, poses "a significant risk of harm" to the health, safety or fundamental rights of individuals, or (in relation to AI systems for the management and operation of critical infrastructure such as energy grids or water management systems) to the environment.

All AI systems

The Parliament Committees' Draft sets out some general principles which should apply to all AI systems. The principles provide that AI systems should be:

  • subject to human agency and oversight, so that they can be appropriately controlled and overseen by humans
  • technically robust and safe
  • developed and used in accordance with existing privacy and data protection law
  • transparent – including that individuals who interact with such a system should be made aware that it is an AI system
  • developed in a way that includes diverse actors and promotes equal access, gender equality and cultural diversity, while avoiding discriminatory impacts and unfair biases
  • developed and used in a sustainable and environmentally friendly manner

Next steps

The European Parliament's plenary session is scheduled for 13 June. Should the European Parliament adopt the Parliament Committees' Draft, the legislation will likely go through numerous further iterations during the trilogue process before its final form is agreed. Industry stakeholders and civil society representatives can be expected to make their voices heard in seeking to ensure that the legislation strikes the right balance between fostering innovation in this space and safeguarding the fundamental rights and freedoms of individuals and society, and all involved will be watching the development of this ground-breaking legislation closely.

Article by A&L Goodbody LLP