Artificial intelligence: UK and EU take legislative steps - convergence or divergence?
The market will be watching to see how closely the UK's AI strategy aligns with the EU framework
In March this year, the UK government announced an assertive agenda on artificial intelligence (AI), launching the UK Cyber Security Council and revealing plans to publish a National Artificial Intelligence Strategy (the UK Strategy).
The details of the UK Strategy will be released later this year. At this point, we understand that it will focus on promoting economic growth through the widespread use of AI, alongside an emphasis on the ethical, safe, and trustworthy development of AI, including through a legislative framework designed to promote public trust and a level playing field.
Shortly after the UK government’s announcement, the EU Commission published a proposed EU-wide AI legislative framework (the EU Regulation) as part of the Commission’s overall “AI package”. The EU Regulation is focused on ensuring the safety of individuals and the protection of fundamental human rights, and categorises AI use cases as unacceptable, high-risk, or low-risk.
This article is authored by Morgan Lewis partner Mike Pierides and associate Charlotte Roxon
EU Regulation
The EU Regulation proposes to protect users “where the risks that the AI systems pose are particularly high”. The definition and categories of high-risk AI use cases are broad, capturing many, if not most, use cases relating to individuals, including AI used for biometric identification and categorisation of natural persons, management of critical infrastructure, and employment and worker management.
Much of the EU Regulation is focused on imposing prescribed obligations in respect of such high-risk use cases, including obligations to undertake relevant “risk assessments”, to have mitigation systems such as human oversight in place, and to provide transparent information to users. We expect that, as well as shaping AI policies within providers and users of AI, many of these obligations will be flowed down by customers into their contracts with AI providers.
The EU Regulation would ban AI use cases which the Commission considers to pose an “unacceptable” threat to the safety, livelihoods, and rights of people. These include the use of real-time remote biometric identification systems for law enforcement purposes in publicly accessible spaces (unless exceptionally authorised by law), and the use of systems which deploy subliminal techniques to distort a person’s behaviour, or which exploit “vulnerabilities” of individuals, so as to cause or be likely to cause physical or psychological harm.
The EU Regulation also defines “low-risk” AI use cases (for example, use in spam filters) on which no specific obligations are imposed, although providers of low-risk AI are encouraged to comply with an AI code of conduct to ensure that their systems are trustworthy.
Non-compliance with the EU Regulation could mean heavy GDPR-style penalties for companies and providers, with proposed fines of up to the greater of €30m or 6% of worldwide annual turnover.
The EU Regulation has extra-territorial application: AI providers who make their systems available in the European Union, or whose systems affect people or produce output in the European Union, will be required to comply with the new rules irrespective of their country of establishment.
UK Strategy: Legislative framework
From a legislative perspective, the United Kingdom’s starting point on AI is similar to the European Union’s: protection of individuals’ data is predominantly legislated for within GDPR, with its emphasis on putting the rights of individuals first. Post-Brexit, the United Kingdom has shown some signs of wanting to diverge from the “European approach” enshrined in GDPR, as Digital Secretary Oliver Dowden indicated in early March, although the details of any such divergence remain unclear.
This could signal that the United Kingdom’s AI legislative framework will consciously diverge from the proposed EU Regulation, most likely by being less prescriptive about the obligations placed on providers and users of what the EU Commission has labelled “high-risk” AI use cases. At this stage, however, that is conjecture. One challenge the United Kingdom will face, as it does with GDPR, is the extra-territorial reach of the EU Regulation and the need to ensure that data flows between the European Union and the United Kingdom continue relatively unaffected by Brexit.
Next steps
In the United Kingdom, the government has begun engaging with AI providers and consumers on the AI Council’s Roadmap, and that engagement will continue throughout the year as the UK Strategy is developed.
In the European Union, the European Parliament and EU Member States will need to adopt the EU Commission’s proposals on AI in order for the EU Regulation to become effective.
With the substantive detail of the UK Strategy still unknown, market participants will be watching intently to see how closely the United Kingdom’s new strategy will align with the legislative framework proposed by the EU Commission.
Mike Pierides is a partner at law firm Morgan Lewis