Contributed by: Liz Blythe, Louise Taylor and Vaash Singh
Published on: April 20, 2021
A leaked draft of the European Commission's (Commission) rules on the use of artificial intelligence (AI) in the European Union (EU) proposes tough new "human-centric" rules for "high-risk" AI. The proposed rules further seek to ban certain types of AI applications, including those used for mass surveillance and social credit scoring, and to regulate the use of others.
Similar to the GDPR, the rules are proposed to have extra-territorial effect, which could significantly affect the global AI ecosystem. New Zealand companies that develop or sell AI in the EU could be fined up to 4 per cent of their annual turnover for non-compliance.
The rules' objectives are to ensure that AI in the EU is transparent, has appropriate human oversight, and meets the EU's high standards of privacy. While an official version of the proposed rules is yet to be released, the leaked draft has already been heavily criticised for taking a simplistic approach to determining "high-risk" AI, with the potential to stifle innovation.
Some key rules that have been reported ahead of their official release are:[1][2]