Artificial intelligence: UK and EU take legislative steps - convergence or divergence?


In March this year, the UK government announced an assertive agenda on artificial intelligence (AI) by launching a UK Cyber Security Council and revealing plans to publish a National Artificial Intelligence Strategy (the UK Strategy).

The details of the UK Strategy will be released later this year, but at this point we understand that it will focus in particular on promoting economic growth through the widespread use of AI, with a parallel emphasis on the ethical, safe, and trustworthy development of AI, including through a legislative framework that promotes public trust and a level playing field.

Shortly after the UK government’s announcement, the EU Commission published a proposed EU-wide AI legislative framework (the EU Regulation) as part of the Commission’s overall “AI package”. The EU Regulation is focused on ensuring the safety of individuals and the protection of fundamental human rights, and categorises AI into unacceptable-risk, high-risk, and low-risk use cases.

About the authors

This article is authored by Morgan Lewis partner Mike Pierides and associate Charlotte Roxon

EU Regulation

The EU Regulation proposes to protect users “where the risks that the AI systems pose are particularly high”. The definition and categories of high-risk use cases of AI are broad, and capture many, if not most, use cases that relate to individuals, including AI use in the context of biometric identification and categorisation of natural persons, management of critical infrastructure, and employment and worker management.

Much of the EU Regulation is focused on imposing prescribed obligations in respect of such high-risk use cases, including obligations to undertake relevant “risk assessments”, to have in place mitigation systems such as human oversight, and to provide transparent information to users. We expect that, as well as driving AI policies within providers and users of AI, many of these obligations will be flowed down by customers into their contracts with AI providers.

The European Union has banned AI use cases which it considers to pose an “unacceptable” threat to the safety, livelihoods, and rights of people.  These cases include the use of real-time remote biometric identification systems for law enforcement purposes in publicly accessible spaces (unless exceptionally authorised by law) and the use of systems which deploy subliminal techniques to distort a person’s behaviour, or exploit “vulnerabilities” of individuals, so as to cause or be likely to cause physical or psychological harm.

The EU Regulation has also defined “low-risk” AI use cases (e.g. use in spam filters) where no specific obligations are imposed, although providers of low-risk AI are encouraged to comply with an AI code of conduct to ensure that their AI systems are trustworthy.

Non-compliance with the EU Regulation could mean heavy, GDPR-style fines for companies and providers, with proposed penalties of up to the greater of €30 million or 6% of worldwide annual turnover.

The EU Regulation has extra-territorial application: AI providers who make their systems available in the European Union, or whose systems affect people or produce outputs in the European Union, will be required to comply irrespective of their country of establishment.

UK Strategy: Legislative framework

From a legislative perspective, the United Kingdom’s starting point on AI legislation is similar to the European Union’s, as both currently regulate the protection of individuals’ data primarily through GDPR, with its emphasis on putting the rights of individuals first. Post-Brexit, the United Kingdom is showing some signs of wanting to diverge from the “European approach” enshrined in GDPR, as announced by Digital Secretary Oliver Dowden in early March, although the details of any such divergence remain unclear.

This could signal that the United Kingdom’s AI legislative framework will consciously diverge from the proposed EU Regulation, most likely in order to be less prescriptive with respect to obligations placed on providers and users of what the EU Commission has labelled as “high-risk” AI use cases.  However, at this time this is merely conjecture.  One challenge the United Kingdom will face, as it does with GDPR, is the extra-territorial impact of the EU Regulation and the need to ensure that data flows between the European Union and United Kingdom continue relatively unaffected by Brexit.

Next steps

In the United Kingdom, the government has begun engaging with AI providers and consumers on the AI Council’s Roadmap; this engagement will continue throughout the year as the UK Strategy is developed.

In the European Union, the European Parliament and EU Member States will need to adopt the EU Commission’s proposals on AI in order for the EU Regulation to become effective.

With the substantive detail of the UK Strategy still unknown, market participants will be watching intently to see how closely the United Kingdom’s new strategy will align with the legislative framework proposed by the EU Commission.

