New EU rules would allow regulators to shut down AI before it gets dangerous
The European Union's Artificial Intelligence Act is one of the first attempts to regulate AI globally
Artificial Intelligence is everywhere: the rise of "thinking" machines has been one of the defining developments of the past two decades – and will only become more prominent as computing power increases.
The European Union has been working on a framework to regulate AI for some time, starting way back in March 2018, as part of its broader Digital Decade regulations.
Work on AI regulation has been relatively slow while the EU focuses on the Digital Markets Act and the Digital Services Act, which aim to rein in the American tech giants, but the work definitely continues.
EU AI tools
Any worthwhile legislative process should be open to critique and analysis, and the EU's AI Act is receiving a thorough treatment from the UK-based Ada Lovelace Institute, an independent research institution working on data policy.
The full report (via TechCrunch) includes a lot of detail on the pros and cons of the regulation, which is a global first, with the main takeaway being that the EU is setting itself up to have some pretty powerful tools at its disposal.
The EU plans to create and empower oversight bodies that could, in theory, order the withdrawal of an AI system deemed high risk before requiring the model to be retrained.
The draft AI Act has been under a lot of scrutiny – and has received a fair amount of criticism – and will likely still fall short of the EU's most expansive goals: creating the conditions for "trustworthy" and "human-centric" AI.
The next battleground
The Ada Lovelace Institute report is, first and foremost, a critique of the AI Act, mainly because the proposal is still at the draft stage, when changes can still be made.
But that isn't to say that the AI Act is a failure. "In short, the AI Act is itself an excellent starting point for a holistic approach to AI regulation," says the report. "However, there is no reason why the rest of the globe should unquestioningly follow an ambitious, yet flawed, regime held in place by the twin constraints of the New Legislative Framework ... and the legislative basis of EU law."
As computational power increases, AI will become even more integral to the lives of people around the world than it is today.
Many of the algorithms and code that power AI are black boxes with little to no recourse for oversight, even when things go wrong. In America, for example, AI systems that automatically cut benefits or send police to Black and Hispanic neighbourhoods have proved all but impossible to regulate.
As the most powerful tech-sceptical governmental body in the world, it makes sense for the EU to be leading the charge on regulating AI – and thorough critiques will only make the end result better.