The European AI Act is presented as legislation with a risk-based approach: the higher the risk of an AI system, the stricter the rules. In theory, this sounds balanced and proportionate. In practice, however, the approach falls short in crucial respects. An analysis of the regulation's structure reveals a rigid, all-or-nothing system that creates legal uncertainty, imposes potentially disproportionate burdens on companies, and may unintentionally inhibit innovation.
The AI Act's risk pyramid
The AI Act classifies AI systems into four levels of risk, with the goal of keeping the regulatory burden proportionate:
- Unacceptable risk: Systems that pose a clear threat to people's safety and fundamental rights, such as social scoring by governments or manipulative techniques, are banned outright.
- High risk: These are AI systems used in critical sectors and applications listed in Annex III of the regulation. Think of AI in medical devices, recruitment, credit assessment or law enforcement. These systems must meet strict requirements around risk management, data quality, transparency, human oversight and robustness.
- Limited risk: This category includes systems such as chatbots and deepfakes. They are mainly subject to transparency obligations: users must know that they are interacting with an AI or viewing manipulated content.
- Minimal or no risk: The vast majority of AI applications (e.g., spam filters, AI in video games) fall into this category. They are not subject to specific obligations under the AI Act.
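For the developers and providers this article later addresses, the tiered structure can be summarized in a short, hypothetical Python sketch. Everything in it (the tier names, the OBLIGATIONS mapping, the one-line summaries of the duties) is an illustrative simplification of the Act's structure, not its legal text; it merely shows how obligation sets attach to tiers, and it foreshadows the "all-or-nothing" discontinuity discussed below.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # prohibited outright
    HIGH = "high"                  # Annex III applications
    LIMITED = "limited"           # transparency duties only
    MINIMAL = "minimal"           # no AI Act-specific duties

# Hypothetical, heavily simplified mapping of tiers to headline duties.
# This illustrates the Act's structure; it is not legal advice.
OBLIGATIONS: dict[RiskTier, list[str]] = {
    RiskTier.UNACCEPTABLE: ["prohibited: may not be placed on the EU market"],
    RiskTier.HIGH: [
        "risk management system",
        "data quality and governance",
        "technical documentation",
        "transparency toward users",
        "human oversight",
        "accuracy, robustness, cybersecurity",
        "conformity assessment",
    ],
    RiskTier.LIMITED: ["disclose AI interaction / label manipulated content"],
    RiskTier.MINIMAL: [],
}

def obligations_for(tier: RiskTier) -> list[str]:
    """Return the headline obligations attached to a risk tier."""
    return OBLIGATIONS[tier]

# The discontinuity the article criticizes: one tier up from LIMITED,
# the duty set jumps from a single transparency rule to a full regime.
print(len(obligations_for(RiskTier.LIMITED)))  # 1
print(len(obligations_for(RiskTier.HIGH)))     # 7
```

The point is not the exact duty list but the shape of the mapping: there is no intermediate step between a single transparency duty and a full compliance regime.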
A critical analysis of the 'risk-based' approach
While this classification appears clear, its elaboration has fundamental weaknesses. Its claim to be "risk-based" is undermined by several design choices in the regulation. At the heart of the problem is that the AI Act is built according to the logic of traditional product safety legislation, whereas AI is a dynamic and context-dependent technology.
1. The "all-or-nothing" trap
For most companies, risk classification amounts to a binary choice. Systems with minimal risk are exempt from obligations, while limited-risk systems are mainly subject to transparency obligations. The real divide is between "limited risk" and "high risk".
An AI system classified as "high risk" is subject to a burdensome and costly regime of obligations: conformity assessments, quality management systems, extensive documentation and strict human oversight. If a system falls just outside that category, the obligations are suddenly drastically lighter.
This "all-or-nothing" approach creates a strong, and sometimes undesirable, incentive for companies to avoid the "high risk" category at all costs. This can lead to innovative functionalities not being offered on the European market purely to avoid the heavy regulatory burden.
2. The lack of a general proportionality test
One of the biggest shortcomings of the AI Act is the absence of a general obligation to take measures proportionate to the specific risk. The General Data Protection Regulation (GDPR) shows how this can work: Articles 24 and 32 require controllers to implement technical and organizational measures that are "appropriate" to the risk.
There is no such general clause in the AI Act. This has two adverse consequences:
- It creates a hard transition between "limited" and "high" risk. A system that falls just short of the high-risk threshold but still poses significant risks escapes the strict rules.
- AI applications that pose significant risks but are not explicitly listed as high risk (Annex III) fall outside the heavy regime, even though the potential impact is significant. Consider, for example, an advanced AI therapist or psychologist, an application that could deeply impact mental health but is not currently cataloged as high risk.
3. Rigid, predefined categories
Unlike a true risk-based model, which would assess risk on a case-by-case basis, the AI Act takes a top-down approach with a closed list of high-risk applications in Annex III. This approach inevitably leads to over- and under-regulation: simple tools may unnecessarily fall under the heavy regime, while innovative systems with new risks slip through the cracks until the list is updated.
Fundamental flaws in the risk approach
The problems are not merely practical; they stem from more fundamental choices made by the legislature. An analysis by Professor Martin Ebers in the European Journal of Risk Regulation exposes several deeper weaknesses.
1. No balanced risk-benefit analysis
A truly risk-based approach weighs not only the potential disadvantages (risks) but also the potential benefits of a technology. The AI Act focuses almost exclusively on mitigating harm to health, safety and fundamental rights. The great societal and economic benefits, the opportunity costs of not using AI, and the tradeoffs in dilemma situations (e.g., the accuracy of a "black box" medical AI versus transparency) are barely taken into account.
2. Risk categories based on politics, not evidence
The lists of prohibited and high-risk AI systems are not the result of a thorough, empirical and objective risk analysis. Their creation was a political process that ended in a negotiated compromise. There is no transparent methodology or externally verifiable evidence justifying why certain uses are listed in Annex III and others are not. This leads to a certain arbitrariness.
3. An overly broad definition of AI
The regulation's definition of an "AI system" is so broad that it potentially encompasses almost any modern software system, including systems that are predictable and deterministic. Yet these simpler systems, when used in a high-risk sector, are subject to the same strict and complex obligations that are actually intended for unpredictable, self-learning machine learning models. This violates the principle of proportionality and the principle of technology neutrality.
4. Overlapping legal regimes and uncertainty
The AI Act does not replace existing legislation, but comes on top of it. This creates a complex web of overlapping obligations including the GDPR, the Medical Device Regulation (MDR) and the new Machinery Regulation. These overlaps inevitably lead to legal uncertainty, conflicts of interpretation, and duplicative compliance burdens for companies, which runs counter to efficient, risk-based regulation.
These fundamental flaws are a major reason why it is doubtful that the AI Act will cause a "Brussels Effect," with the rest of the world adopting European legislation. The regulation is too complex, too intertwined with other specific EU regulations and, at its core, not flexible enough to serve as a universal model.
What this specifically means
- For AI developers and providers: The rigid classification creates legal uncertainty. Your system may not be high risk today, but a modification of Annex III may make it subject to the strict rules tomorrow. Focusing solely on avoiding the "high-risk" stamp is a dangerous strategy. A proactive approach, where you voluntarily implement a robust risk management framework (such as the NIST AI RMF or ISO 42001), is the only way to future-proof your business and demonstrate that you are acting with care.
- For companies deploying AI systems: You cannot blindly rely on a vendor's "limited risk" classification. The absence of a general proportionality test in the AI Act does not mean that you as a user do not bear liability for damage caused by the system. Your own critical risk assessment of the AI tools you implement remains essential.
FAQ (frequently asked questions)
Is my AI system "high risk" if it is not literally in Annex III of the AI Act?
In principle, no. The qualification as high-risk depends on the list in Annex III. However, the European Commission has the power to supplement this list via delegated acts. Moreover, under general liability law, you remain responsible for any damage caused by your AI system.
Why doesn't the AI Act contain a general rule like the GDPR to take "appropriate measures"?
The AI Act is structured as product safety legislation, operating with clear, predefined categories and requirements. This contrasts with the more principles-based approach of the GDPR, which imposes general accountability that is proportionate to the risk.
Should I now wait for further guidance from the European Commission to know what to do?
Waiting is a risky strategy. The AI Act does provide for the Commission to issue guidance and delegated acts, and given rapid technological developments, especially in generative AI, the high-risk list in Annex III is expected to be updated regularly. However, the ambiguity and gaps in the regulation make it wiser to proactively establish an internal AI governance model and a robust risk management framework now.
Conclusion
The AI Act is an ambitious and necessary first step in the regulation of artificial intelligence. However, the claim of a purely risk-based approach is not fully realized in practice. Its rigid categories, lack of an overall proportionality obligation and politically driven basis create a complex and sometimes unpredictable legal framework. For companies, it is crucial not only to follow the letter of the law, but also to understand its spirit and develop a proactive, thoughtful risk strategy.