Understanding the EU AI Act: What You Need to Know

How Europe's Landmark Legislation Reshapes AI Development Globally

The EU AI Act comes into force on August 1, 2024, marking a turning point in global AI regulation and development.

This guide explains what the Act entails, why it’s significant worldwide, and what steps companies and individuals — both within and outside the EU — should take to prepare for its far-reaching implications.

What is the EU AI Act?

The Act is the world's first comprehensive legal framework for AI, aiming to:

  • Address risks associated with AI technologies

  • Foster innovation and competitiveness in the EU's internal market

  • Protect fundamental rights, safety, and democratic values

Its significance lies in its potential to set global standards for AI regulation, balancing technological advancement with ethical considerations.

Key Features of the Act

1. Risk-Based Approach

The Act categorises AI systems based on their potential risk level:


EU AI Act Risk Levels

Unacceptable Risk - Prohibited

Systems that threaten people's safety, livelihoods, and rights, including:

  • Social scoring systems used by governments

  • AI-powered toys encouraging dangerous behaviour in children

  • Cognitive behavioural manipulation of people or vulnerable groups

  • Real-time remote biometric identification in public spaces (with some law enforcement exceptions)

High Risk - Strictly Regulated

Systems allowed but subject to strict requirements, including those used in:

  • Critical infrastructure (e.g., transport)

  • Education or vocational training (e.g., exam scoring)

  • Employment (e.g., CV-sorting for recruitment)

  • Essential private and public services (e.g., credit scoring)

  • Law enforcement

  • Migration, asylum, and border control management

  • Legal interpretation and application

AI systems integrated into products under EU safety legislation (e.g., toys, aviation, cars, medical devices) are also considered high-risk.

Limited Risk - Transparency Requirements

Systems posing specific transparency risks, such as:

  • Chatbots (users must know they're interacting with a machine)

  • Deepfakes (must be labelled as artificially created)

  • Emotion recognition systems

  • Biometric categorisation systems

Minimal or No Risk - Freely Usable

The majority of current AI systems, including:

  • AI-enabled video games

  • Spam filters

  • Inventory management systems

  • Manufacturing robots

  • Smart home devices
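
To make the four-tier structure concrete, here is a minimal Python sketch of how an organisation might tag its systems by tier. The example systems and labels are illustrative simplifications drawn from the categories above, not legal classifications under the Act.

```python
# Illustrative only: the Act's four risk tiers as a simple enum,
# with hypothetical example systems. Real classification requires
# legal analysis of a system's intended purpose and context.
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited"
    HIGH = "strictly regulated"
    LIMITED = "transparency requirements"
    MINIMAL = "freely usable"

EXAMPLE_SYSTEMS = {
    "government social scoring": RiskTier.UNACCEPTABLE,
    "CV-sorting recruitment tool": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED,
    "email spam filter": RiskTier.MINIMAL,
}

for system, tier in EXAMPLE_SYSTEMS.items():
    print(f"{system}: {tier.name} ({tier.value})")
```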

2. General Purpose AI Models

Large-scale AI models, including generative AI like ChatGPT, must comply with transparency requirements and EU copyright law. This includes:

  • Disclosing AI-generated content

  • Preventing illegal content generation

  • Publishing summaries of copyrighted training data
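
As a rough illustration of the first point, disclosure can travel with generated content as machine-readable metadata. The sketch below is one possible approach, not the Act's prescribed mechanism; `generate_text` is a placeholder standing in for any model call, and the field names are hypothetical.

```python
# Sketch: wrapping generated text with an explicit AI-generated disclosure.
# `generate_text` is a stand-in, not a real API.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class LabelledOutput:
    text: str
    ai_generated: bool      # explicit disclosure flag
    model: str
    generated_at: str

def generate_text(prompt: str) -> str:
    return f"(model output for: {prompt!r})"  # placeholder

def generate_with_disclosure(prompt: str, model: str = "example-model") -> LabelledOutput:
    return LabelledOutput(
        text=generate_text(prompt),
        ai_generated=True,
        model=model,
        generated_at=datetime.now(timezone.utc).isoformat(),
    )

print(generate_with_disclosure("summarise the EU AI Act"))
```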

3. AI Governance

Organisations developing or deploying high-risk AI systems must implement robust governance structures to ensure compliance.

Timeline and Implementation

  • April 2021: European Commission proposal

  • May 2024: Council approves the Act

  • August 1, 2024: The Act enters into force

Phased application:

  • February 2025: Prohibited AI practices rules take effect

  • August 2025: General Purpose AI regulations apply

  • August 2026: Most high-risk AI systems must comply

  • August 2027: Compliance for some high-risk AI systems in regulated products

An AI Office within the European Commission will oversee implementation, providing guidance and support.

Business Implications and Compliance Steps

If your company develops or uses AI systems:

  1. ✅ Conduct an AI inventory: Identify and assess risk levels of all AI systems in your organisation (see the record sketch after this list).

  2. ✅ Implement compliance measures for high-risk systems:

    • Ensure data quality

    • Maintain thorough documentation

    • Implement human oversight

    • Ensure system robustness

  3. ✅ Establish AI governance structures: Develop policies for responsible AI development and use.

  4. ✅ Prepare for conformity assessments: Be ready to demonstrate compliance for high-risk systems.

  5. ✅ Stay informed: Keep up with updates and guidance from the EU AI Office.
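
For step 1, an inventory can start as simply as a structured record per system. A minimal sketch, with hypothetical field names to adapt to your own governance process:

```python
# Minimal sketch of an AI inventory entry (field names are illustrative).
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class AISystemRecord:
    name: str
    purpose: str
    risk_tier: str                    # e.g. "high", "limited", "minimal"
    vendor: Optional[str] = None
    human_oversight: bool = False
    documentation: list = field(default_factory=list)

inventory = [
    AISystemRecord(
        name="recruitment-screening",
        purpose="CV sorting for hiring",
        risk_tier="high",
        human_oversight=True,
        documentation=["data-quality-report.pdf"],  # hypothetical document
    ),
]

# Systems needing the strict high-risk compliance measures from step 2:
high_risk = [r for r in inventory if r.risk_tier == "high"]
print(high_risk)
```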

Penalties for Non-Compliance

  • Up to €35 million or 7% of global annual turnover (whichever is higher) for prohibited practices

  • Up to €15 million or 3% for other breaches

  • Up to €7.5 million or 1% for supplying incorrect, incomplete, or misleading information to authorities

Enforcement and penalties will be established by individual EU Member States, considering factors like infringement nature, duration, and company size.

The Act requires minimising administrative burdens for SMEs.
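
Because each cap is expressed as "whichever is higher" for companies, the applicable maximum scales with turnover. A quick arithmetic sketch using the caps listed above (an illustration, not legal advice):

```python
# The applicable maximum fine is the higher of the fixed cap and the
# turnover-based cap (figures from the tiers listed above).
def max_fine(turnover_eur: float, fixed_cap_eur: float, pct_cap: float) -> float:
    return max(fixed_cap_eur, pct_cap * turnover_eur)

# A company with EUR 1 billion global annual turnover, prohibited-practice breach:
print(max_fine(1_000_000_000, 35_000_000, 0.07))  # 70,000,000.0 -> the 7% cap applies
```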

Potential Impacts and Debates

Innovation vs Regulation

  • Concern: Some argue that strict requirements might stifle innovation, particularly for smaller companies.

  • Counterpoint: Others believe it will foster trust in AI technologies, ultimately driving adoption and growth.

Global Influence

The Act is expected to have a significant impact beyond the EU's borders, potentially influencing AI regulations worldwide. This phenomenon, known as the "Brussels effect," could lead to companies adopting EU standards globally to maintain market access, effectively making the Act a de facto international standard.

Enforcement Challenges

Questions remain about how effectively the Act will be enforced across different EU member states.

Consistency in interpretation and application will be crucial, requiring robust cooperation mechanisms between national authorities and the central EU AI Office to ensure uniform implementation.

Looking Ahead

As the EU AI Act comes into force, we can expect to see:

  • Increased investment in AI compliance technologies and services

  • A growing focus on developing AI systems that prioritise transparency, explainability, and fairness

  • The emergence of the EU as a global leader in 'trustworthy AI' development

For individuals, the Act promises:

  • Stronger protections against potential AI-related harms

  • Greater transparency in AI-driven decisions

  • A framework for addressing concerns about AI systems

The European Commission will review the Act after four years, potentially considering additional exemptions for small-scale providers or corporate-use AI.

Conclusion

The EU AI Act represents a significant milestone in the regulation of artificial intelligence. While it presents challenges for businesses and developers, it also offers an opportunity to build trust in AI technologies and establish a framework for responsible innovation.

As the EU AI Act continues to shape the global AI landscape, staying informed is crucial. As a Europe-based AI consulting company, we follow these developments closely. If you have specific questions about how these regulations might affect your AI initiatives, don't hesitate to get in touch. For ongoing updates and insights, follow us on social media (LinkedIn and X).
