
The European Commission's proposal for "trustworthy AI" regulation

Contributed by: Liz Blythe, Louise Taylor and Vaash Singh

Published on: April 28, 2021

The European Commission (EC) has now released the official version of its proposed regulation (Regulation) on the use of artificial intelligence (AI) in the European Union (EU), which follows last week's leak of a draft version to the media. Our initial comments on the key rules proposed in the unofficial draft can be found here.

The Regulation would have extraterritorial effect and wide-ranging implications for any organisations developing, supplying or using AI applications (or the outputs of those applications) in the EU (irrespective of whether that organisation is established in the EU).  

The EC considers that AI can bring a "wide array of economic and societal benefits across the entire spectrum of industries and social activities". By addressing the risks associated with some forms of AI and taking a "human centric" approach to regulation, the EC hopes to promote public trust in AI applications and, in turn, boost the uptake of such technology.

Who does the Regulation apply to?

The Regulation will generally apply to:

  • providers of AI systems which are sold or made available in the EU, regardless of whether those suppliers are established within the EU;
  • users of AI systems in the EU; and
  • providers and users of AI systems located outside the EU, where the output produced by the system is used in the EU.

Given its extraterritorial effect, the Regulation is likely to have a flow-down impact on the use, development and supply of AI solutions internationally, in a similar way to the global impact that the GDPR has had on organisations' privacy practices.

Key rules

Some of the key rules contained in the Regulation include:

  • Ban on certain AI systems: The Regulation bans a small number of AI applications that are categorised as unsafe or which are considered to violate fundamental human rights. Notably, these include:

    • the use of AI to deploy subliminal techniques to materially distort behaviour in a manner that can cause physical or psychological harm;
    • any AI system that exploits any vulnerable group; and
    • the use of real-time biometric ID systems in publicly accessible spaces for law enforcement purposes, except where broad exemptions apply (e.g. the targeted search for specific victims of crime).
 
  • "High-risk" AI systems: Certain applications of AI are categorised to be "high-risk" under the Regulation. Both suppliers and users of high-risk AI will have to comply with certain rules. These include systems for:

    • biometric identification and categorisation of humans;
    • management and operation of critical infrastructure (e.g. operation of road traffic and supply of utilities);
    • education and vocational training (e.g. assessing participants in tests required for admission);
    • employment recruitment practices and evaluating employee performance;
    • assessing eligibility for benefits;
    • law enforcement (e.g. to predict offending or reoffending);
    • migration, asylum and border control management (e.g. conducting "lie detection" tests and making decisions regarding applications for asylum or residency); and
    • administration of justice and democratic processes (e.g. assisting judicial decisions).

 
Providers of high-risk AI systems would need to comply with various requirements including:

  • maintaining a risk management system for the AI system;
  • implementing appropriate data governance and management practices;
  • maintaining technical documentation to demonstrate compliance with the Regulation;
  • ensuring transparency about the use of the AI system;
  • maintaining human oversight (including "kill switch" functionality that enables human intervention through a "stop" button or similar procedure); and
  • registering the high-risk AI system on a newly established, publicly accessible register of high-risk AI systems managed by the EC.

The requirements above can also apply to importers, distributors and users of high-risk AI systems in certain circumstances. Users of high-risk AI systems also need to comply with user-based rules and restrictions regarding AI system monitoring, the use of input data and the storing of logs automatically generated by the AI system.

  • Transparency requirements for general AI: AI systems intended to interact with humans – such as chatbots and deepfakes – must be designed and developed so that users are aware they are interacting with an AI system.

Penalties, administration and codes of conduct

  • Penalties: The proposed penalties for infringement of the rules regarding banned AI systems include fines of up to the greater of 6% of global annual turnover or €30 million and, for non-compliance with the rules relating to high-risk AI systems, fines of up to the greater of 4% of global annual turnover or €20 million.

  • Governance and administration: The Regulation will establish the European AI Board and processes for local oversight and management through EU Member State local authorities. The Regulation also provides for the establishment of a regulatory sandbox to facilitate the development and testing of compliant AI systems.

  • Codes of conduct: In addition to the rules set out above, the Regulation also requires the EC and EU Member States to encourage and facilitate the preparation of individual codes of conduct for AI systems that are not considered high-risk (e.g. in areas such as environmental sustainability and disability access). Codes of conduct may be drafted by providers of AI and other AI ecosystem participants.

 
The Regulation is likely to undergo some changes as feedback is received from the European Parliament and EU Member States. We will keep you updated with any material developments as they occur.

The full text of the Regulation can be accessed here.
 
