The European Commission (EC) has now released the official version of its proposed regulation (Regulation) on the use of artificial intelligence (AI) in the European Union (EU), following last week's leak of a draft version to the media. Our initial comments on the key rules proposed in the unofficial draft can be found here.
The Regulation would have extraterritorial effect and wide-ranging implications for any organisations developing, supplying or using AI applications (or the outputs of those applications) in the EU (irrespective of whether that organisation is established in the EU).
The EC considers that AI can bring a "wide array of economic and societal benefits across the entire spectrum of industries and social activities". By addressing the risks associated with some forms of AI and taking a "human centric" approach to regulation, the EC hopes to promote public trust in AI applications and, in turn, boost the uptake of such technology.
Who does the Regulation apply to?
The Regulation will generally apply to:
- providers of AI systems that are sold or made available in the EU, regardless of whether those providers are established within the EU;
- users of AI systems in the EU; and
- providers and users of AI systems located outside the EU, where the output produced by the system is used in the EU.
Given its extraterritorial effect, the Regulation is likely to have a flow-down impact on the use, development and supply of AI solutions internationally, in a similar way to the global impact that the GDPR has had on organisations' privacy practices.
Some of the key rules contained in the Regulation include:
Providers of high-risk AI systems would need to comply with various requirements including:
- maintaining a risk management system for the AI system;
- implementing appropriate data governance and management practices;
- maintaining technical documentation to demonstrate compliance with the Regulation;
- ensuring transparency about the use of the AI system;
- maintaining human oversight (including "kill switch" functionality that enables human intervention through a "stop" button or similar procedure); and
- registering the high-risk AI system in a newly established, publicly accessible EU register of high-risk AI systems managed by the EC.
These requirements can also apply to importers, distributors and users of high-risk AI systems in certain circumstances. Users of high-risk AI systems must additionally comply with rules on monitoring the AI system, the use of input data and the retention of logs automatically generated by the system.
Penalties, administration and codes of conduct
Penalties: The proposed penalties for infringement of the rules on banned AI systems include fines of up to 6% of global annual turnover or €30 million (whichever is greater); for non-compliance with the rules on high-risk AI systems, fines of up to 4% of global annual turnover or €20 million (whichever is greater).
Governance and administration: The Regulation would establish a European Artificial Intelligence Board at EU level, with day-to-day oversight and enforcement carried out by designated authorities in each EU Member State. The Regulation also provides for the establishment of regulatory sandboxes to facilitate the development and testing of compliant AI systems.
Codes of conduct: The Regulation also requires the EC and EU Member States to encourage and facilitate the preparation of voluntary codes of conduct for AI systems that are not considered high-risk (e.g. in areas such as environmental sustainability and disability access). These codes may be drafted by providers of AI systems and other AI ecosystem participants.
The Regulation is likely to undergo some changes as feedback is received from the European Parliament and EU Member States. We will keep you updated with any material developments as they occur.
The full text of the Regulation can be accessed here.