Europe’s sweeping new law on artificial intelligence (“AI Act”) kicked into gear this month. While it’s still early days, the new rules are already reshaping how companies think about building and using AI in the EU.
The law is complicated, but at a high level, it does three things: bans certain types of AI altogether, puts strict controls on high-risk systems, and requires enough transparency that people know when AI is being used and roughly how it works.
On February 2, the first major provisions of the AI Act went into effect: the rules that ban AI systems considered too dangerous or intrusive to be allowed at all, such as real-time facial recognition in public spaces or AI that manipulates vulnerable groups.
The rest of the law is being implemented in phases. Over the next couple of years, companies will have to adjust to broader obligations, including mandatory AI literacy training, transparency rules for generative models, and stricter oversight for AI used in high-risk areas like healthcare, policing, and hiring. But for now, the focus is on stopping what the EU calls “unacceptable” uses of the technology.
What’s Banned Right Now
Some of the most headline-grabbing rules are the bans. These kicked in immediately and target what the EU calls “unacceptable risk” systems. That includes stuff like:
- Social scoring, such as assigning people scores based on behavior or socioeconomic status.
- Emotion recognition at work or school, unless there’s a solid medical or safety reason.
- AI that manipulates vulnerable people, including kids or those with disabilities.
- Real-time biometric surveillance in public places, except in a narrow set of serious criminal cases, and even then, law enforcement needs prior authorization from a court or an independent administrative authority.
- Scraping facial images from the web or security footage to build massive biometric databases.
The penalties are steep: companies could face fines of up to €35 million or 7% of their global annual revenue, whichever is higher. But those fines can’t actually be levied until August, when the enforcement provisions take effect.
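To put that cap in concrete terms, here is a minimal sketch of the calculation; the “whichever is higher” rule is how the Act frames the cap, while the revenue figure below is purely hypothetical.

```python
# Illustrative only: the cap on fines for banned practices is €35 million
# or 7% of worldwide annual revenue, whichever is higher.
def max_fine_eur(annual_revenue_eur: float) -> float:
    return max(35_000_000, 0.07 * annual_revenue_eur)

# A hypothetical company with €2 billion in global revenue:
print(f"€{max_fine_eur(2_000_000_000):,.0f}")  # €140,000,000
```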
AI Literacy Is Now Mandatory
Another part of the law that is already live requires people working with AI, whether they’re developers, managers, or everyday users, to actually understand it. The law calls this “AI literacy.”
It’s not just a suggestion. Companies need to run training, document it, and prove their teams know how to spot bias, follow the rules, and explain how their systems work. This isn’t just for tech teams either: it includes decision-makers and anyone else in the loop.
The EU’s Risk Ladder for AI
The AI Act doesn’t treat every system the same. It divides AI into four risk levels:
- Unacceptable Risk: Banned, as covered above.
- High Risk: This covers AI in areas like healthcare, policing, hiring, critical infrastructure, and legal systems. If an AI system could affect someone’s safety or fundamental rights, it probably falls here. These systems have to be registered, audited before launch, and continuously monitored.
- Limited Risk: This is where generative AI like ChatGPT comes in. These systems aren’t banned, but they face new transparency requirements: labeling content as AI-generated, building in safeguards against illegal use, and disclosing summaries of the data used to train them.
- Minimal Risk: Most consumer AI tools, like spam filters or recommendation engines, land here. They’re mostly left alone.
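For teams trying to map their own products onto this ladder, here is a rough, hypothetical triage sketch. The four tier names come from the Act; the example categories, domains, and decision logic are illustrative assumptions, not legal criteria.

```python
# Hypothetical first-pass triage of an internal AI inventory against the
# four tiers described above. Not a legal test; real classification needs
# case-by-case review under the Act.
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "banned outright"
    HIGH = "register, assess before launch, monitor continuously"
    LIMITED = "transparency obligations"
    MINIMAL = "no new obligations"

# Illustrative examples drawn from the categories mentioned in this article.
BANNED_USES = {"social scoring", "workplace emotion recognition", "untargeted face scraping"}
HIGH_RISK_DOMAINS = {"healthcare", "policing", "hiring", "critical infrastructure", "justice"}

def triage(use_case: str, domain: str, generates_content: bool) -> RiskTier:
    """Very rough ordering of checks: bans first, then high-risk domains,
    then transparency-only systems, then everything else."""
    if use_case in BANNED_USES:
        return RiskTier.UNACCEPTABLE
    if domain in HIGH_RISK_DOMAINS:
        return RiskTier.HIGH
    if generates_content:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL

print(triage("resume screening", "hiring", False))  # RiskTier.HIGH
```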
A New Bureaucracy and a Few Helpful Tools
To oversee all this, the EU is setting up a central AI Office, plus national regulators in each country. There will also be a scientific panel of independent experts to weigh in on big-picture risks, especially from general-purpose AI models.
On the flip side, startups and smaller companies get access to “regulatory sandboxes”: supervised environments where they can test new systems with guidance from regulators before the full compliance burden applies.
Timeline: What’s Next?
The rollout is phased. Here’s how it breaks down:
- February 2025: Banned practices go into effect; companies must start AI literacy training.
- August 2025: The fines start; rules for general-purpose AI kick in.
- 2026–2027: Full rules apply to high-risk AI systems, including audits and real-world performance checks.
Big Picture: Europe Is Betting on Guardrails
This is the first major AI law of its kind anywhere in the world. Even companies outside the EU have to comply if their AI systems are sold or used in the region, kind of like GDPR all over again, but for AI.
The EU wants to walk a fine line: keep people safe and protect their rights, while still encouraging innovation. Whether that works is an open question. Some critics argue the rules are too strict and could scare off startups. Others say the safeguards don’t go far enough.
Either way, the law is here, and it’s going to force some hard choices.
What’s Still Unclear
There are still loose ends. It’s not yet clear how “systemic risk” thresholds for general-purpose AI models will be applied in practice. The auditing process is still fuzzy. And coordinating enforcement across 27 countries with very different tech ecosystems? Not going to be easy.
Companies also have to figure out how this meshes with existing rules like the GDPR. For example, if your AI system uses personal data to make decisions, you’re now dealing with two sets of regulators and possibly two sets of fines.
The AI Act is a big bet that democracy can keep up with technology. Whether or not that pans out, the next few years will show just how much control governments and societies can realistically have over AI.