At a Glance

  • On December 8, 2023, EU legislators provisionally agreed on the EU Artificial Intelligence Act, which is expected to put in place a fully functional AI regulatory system across the EU by mid-2026.
  • Provisions focus on transparency for the lowest-risk applications and become more stringent for “high-risk” systems that affect health, safety, fundamental rights, the environment, democracy, and the rule of law.
  • Banned AI uses include biometric categorization to infer sensitive characteristics, untargeted scraping to create facial recognition databases, emotion recognition in the workplace and educational institutions, social scoring, manipulative or exploitative AI systems, and predictive policing without human assessment.
  • Companies may face hefty fines of 1.5% to 7% of global turnover for non-compliance, with enforcement coordinated through a newly created European Commission AI Office.
  • The official text is not yet available, and detail will remain limited while technical experts translate the deal into law; businesses should watch for deeper analysis of the key implications once the final text is released to the public.

What It Is—and Isn’t

“We have an EU AI Act!” Strictly speaking, though, we don’t. We don’t have the words on the page, and we won’t for several weeks at least. Fortunately, we do have an outline of what legislators tell us they have agreed.

The European Parliament, Council of Ministers (EU Member States), and the European Commission reached a provisional agreement on the EU Artificial Intelligence Act on Friday, December 8, 2023. Technical experts will now begin work to translate the deal into law, after which it will be formally adopted by the Parliament and Council. The uncertain timing of this process feeds into uncertainty about exact compliance deadlines, but the bans on prohibited uses of AI will apply six months after formal adoption, the rules for companies developing foundation models after one year, and the remaining elements after two years. Assuming formal adoption in the first half of 2024, the EU AI regulatory system will likely be fully up and running by mid-2026.

What It Covers

The European Commission's initial strategy, which centered on assessing AI risks in a tiered approach, remains intact. While many observers focus on what will be banned or considered high risk, starting instead with the rules that affect the lowest-risk AI applications gives a different view of the EU approach: many of these obligations focus on transparency. Users will have to be told when they are interacting with various types of AI-powered systems, including chatbots and emotion recognition systems. Deepfakes and other AI-generated content will have to be labeled and detectable.

Much of the debate has been about foundation models and so-called general-purpose AI, which had not fully emerged in 2021, when the European Commission proposed the Act. Developers of foundation models, and of the AI systems built using them, will have to keep records, comply with EU copyright law, and provide information about their training data. The most powerful foundation models, identified by the computing power used in their training (reportedly a threshold of 10^25 floating-point operations) and therefore deemed to pose systemic risk, will face stricter rules; it is not yet clear whether OpenAI’s GPT-4 crosses that threshold. The rules will not apply to models in R&D, prototypes, or pre-commercial use, and open-source models are largely exempt.

AI systems deemed “high-risk”—those with the potential to impact health, safety, fundamental rights, the environment, democracy, and the rule of law—will be subject to further requirements, including a compulsory fundamental rights impact assessment. Citizens will have the right to file complaints and to obtain explanations of decisions made by high-risk AI systems that affect their rights.

The EU has taken the step of banning some AI uses outright:

  • categorization based on biometric features to infer political, religious, or philosophical beliefs, sexual orientation, or race;
  • the untargeted scraping of facial images from the internet or CCTV footage to create facial recognition databases (often mentioned alongside Clearview AI);
  • emotion recognition in the workplace and educational institutions;
  • social scoring based on social behavior or personal characteristics;
  • AI systems that manipulate human behavior to circumvent free will; and
  • AI that exploits people’s vulnerabilities due to age, disability, or social or economic situation.

Predictive policing will be banned unless it is accompanied by human assessment and grounded in facts. There are various law enforcement and national security exceptions, and the hotly contested use of facial recognition systems in public places will be allowed under certain conditions.

What Businesses Should Be Watching

Enforcement of the law will be coordinated by a new European Commission AI Office, with fines for non-compliance ranging from 1.5% to 7% of a company’s global turnover, depending on the infringement; for illustration, a company with €10 billion in global turnover could face fines of €150 million to €700 million. Since companies that want to deploy their AI systems in the EU will have to comply with the AI Act, the EU is banking on gaining both credibility and influence by moving early on regulation. Some say the regulation will stifle innovation in Europe or discourage AI companies from deploying there, even though there are provisions allowing regulatory sandboxes and real-world testing of systems. Others wonder whether the UK will benefit from sitting just outside the bloc while taking a more lenient approach to AI regulation. Still others fear that the EU rules will be weaker in their final version than recent announcements suggested, and that users’ rights will be significantly eroded. Expect a rush of further analysis and comment when the technical experts and lawyers finish their work on the details.