I read the EU’s AI Act so you don’t have to - here are 5 things you need to know
What will the regulation mean for you?

AI is everywhere. It’s on the news, it's at your job, it's on your phone - you can't escape it, or at least that’s how it feels.
What you might’ve been able to avoid so far is the EU’s incoming AI Act - a pioneering piece of legislation that ‘ensures safety and compliance with fundamental rights, while boosting innovation’.
If your organization uses AI in any capacity in the EU - whether as a provider (a developer of AI systems), user, importer, distributor, or manufacturer - you will need to make changes based on this new legislation. The regulations apply even if your company is not established in the EU, as long as it serves the EU market.
The price of non-compliance
There are penalties for non-compliance - and they’re not small. The fines for infringements range from €7.5 million or 1% of the company’s global annual turnover up to €35 million or 7% - whichever amount is higher, depending on the severity of the violation.
Obviously, a €35 million fine is a pretty eye-watering amount, so you’d better make sure you know the legislation inside and out - and fast.
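To make the ‘whichever is higher’ mechanic concrete, here’s a minimal Python sketch of how the maximum fine scales with turnover. The three tier caps reflect the figures in the final text of the Act, but which tier a given violation falls into is a legal question - the simplified tier labels below are purely illustrative.

```python
# Illustrative sketch of how the AI Act's maximum fines scale: each penalty
# tier is capped at a fixed amount OR a share of global annual turnover,
# whichever is higher. Tier labels are simplified for illustration.

TIERS = {
    "prohibited_practice": (35_000_000, 0.07),   # banned (unacceptable-risk) systems
    "other_obligation": (15_000_000, 0.03),      # most other obligations
    "incorrect_information": (7_500_000, 0.01),  # supplying incorrect info to authorities
}

def max_fine(tier: str, global_turnover_eur: float) -> float:
    """Return the maximum possible fine for a violation in a given tier."""
    fixed_cap, turnover_share = TIERS[tier]
    return max(fixed_cap, turnover_share * global_turnover_eur)

# A company with €2bn global turnover committing a prohibited-practice
# violation: 7% of turnover (€140m) exceeds the €35m cap, so it applies.
print(f"€{max_fine('prohibited_practice', 2_000_000_000):,.0f}")  # €140,000,000
```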
Why do we need it?
As with any regulation, to get your head around it, it’s important to understand the spirit of the legislation and what it's trying to achieve.
AI is evolving undeniably fast, and until now dedicated regulation has been all but non-existent. Despite its infancy, AI has had its fair share of controversy - from the use of copyrighted materials to train large language models, to chatbots ‘hallucinating’ and offering factually incorrect answers. AI needs guidance.
The AI Act looks to establish a legal framework to ensure the ‘trustworthy development’ of AI systems - and prioritises safe use, transparency, and ethical principles.
Risk-based approach
The regulation will sort AI systems into four categories: unacceptable risk, high risk, limited risk, and minimal risk.
Unacceptable-risk systems are those the EU deems to pose a clear threat to people. These systems are outright banned, and face the highest fines for non-compliance, as they violate fundamental EU values. Examples include social scoring, behavioural manipulation such as the use of subliminal techniques, the exploitation of vulnerabilities of children, and live remote biometric identification systems (with narrow exceptions).
High-risk systems are not prohibited, but are subject to strict conformity requirements, as they have the potential to impact people’s fundamental rights and safety. Examples include credit, health, insurance, or public service eligibility evaluations, employment or education access systems, border control, or anything that profiles an individual. Developers and users of these high-risk models have a number of obligations, including human oversight, risk management, data governance, providing instructions for use, and record-keeping - amongst others.
Limited-risk systems require transparency, and must be developed to ensure that users are fully aware they are interacting with an AI model. Examples include chatbots and generative AI systems like image, video, or audio editors.
Minimal-risk systems are those that don’t fall into any of the above categories - and therefore aren’t subject to any new requirements. These typically include things like spam filters and AI-enabled video games.
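As a rough illustration of how the four tiers might feed into an internal review, here’s a hypothetical first-pass triage in Python. The tier names come from the Act, but the example use cases and the lookup-table approach are assumptions for illustration only - real classification requires legal analysis of the Act’s annexes.

```python
# Hypothetical first-pass triage of AI use cases against the Act's four
# tiers. The mapping below is illustrative, not legal advice.

from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "banned outright"
    HIGH = "strict conformity requirements"
    LIMITED = "transparency obligations"
    MINIMAL = "no new requirements"

# Illustrative mapping echoing the examples in the article.
EXAMPLE_TIERS = {
    "social scoring": RiskTier.UNACCEPTABLE,
    "live remote biometric identification": RiskTier.UNACCEPTABLE,
    "credit eligibility evaluation": RiskTier.HIGH,
    "recruitment screening": RiskTier.HIGH,
    "customer-facing chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}

def triage(use_case: str) -> RiskTier:
    """Default unknown cases to HIGH so they get reviewed, not ignored."""
    return EXAMPLE_TIERS.get(use_case, RiskTier.HIGH)

for system in ("social scoring", "customer-facing chatbot", "route planner"):
    print(f"{system}: {triage(system).value}")
```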
Exceptions
There are a few exceptions to the prohibited systems. Law enforcement may use certain remote biometric identification (RBI) systems, but only in very narrowly defined situations. ‘Real-time’ RBI can only be deployed under a strict set of safeguards - but as a general rule, for businesses not affiliated with law enforcement, the technology will be banned.
Your responsibility
If your business uses AI in any way, you’ll have some work to do before the regulations are fully implemented. First of all, update (or create) your AI policies and procedures - if anything goes wrong, these will come under scrutiny, so make sure internal and customer-facing policies are updated to reflect the AI Act’s values, like transparency, non-discrimination, and fairness.
Make sure you do a full audit of any AI systems and create an inventory. Identify all the models you use, assess their risk, and develop mitigation strategies so you can continue using them in the EU market. Compliance plans and strategies are key, so make sure you have a plan in place covering how you’ll comply - bias audits, risk assessments, and so on. One way to start that inventory is sketched below.
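What that inventory looks like is up to you, but as a hypothetical sketch, a record per system could capture the fields an audit would ask about. All field names here are illustrative assumptions, not anything mandated by the Act.

```python
# A hypothetical minimal AI-system inventory record - field names are
# illustrative. The point is to track, per system, the information a risk
# assessment and audit would need.

from dataclasses import dataclass, field
from datetime import date

@dataclass
class AISystemRecord:
    name: str                  # e.g. "CV screening model"
    vendor: str                # provider/developer of the system
    risk_tier: str             # unacceptable / high / limited / minimal
    purpose: str               # what the system is used for
    mitigations: list[str] = field(default_factory=list)  # bias audits, oversight, etc.
    last_reviewed: date | None = None

inventory = [
    AISystemRecord(
        name="Recruitment screening tool",
        vendor="ExampleVendor",   # placeholder name
        risk_tier="high",
        purpose="Shortlisting job applicants",
        mitigations=["human oversight", "annual bias audit"],
        last_reviewed=date(2024, 5, 1),
    ),
]

# Flag high-risk systems that have no documented mitigations yet.
for record in inventory:
    if record.risk_tier == "high" and not record.mitigations:
        print(f"Review needed: {record.name}")
```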
Train your staff and boost awareness. Staff who use these systems will be affected, some will certainly be required to carry out the human oversight the regulation demands, and risk management will be much easier to audit if everyone understands the dangers.
The Act will almost certainly be pretty fluid, and will change - especially given the dizzying rate at which AI is evolving. Make sure you keep a close eye on it and adapt your policies accordingly.
Ellen has been writing for almost four years, with a focus on post-COVID policy, which she developed whilst studying for a BA in Politics and International Relations at the University of Cardiff, followed by an MA in Political Communication. Before joining TechRadar Pro as a Junior Writer, she worked for Future Publishing’s MVC content team, working with merchants and retailers to upload content.