The introduction of the European AI Act means not only a new set of rules for companies but also a restriction of the Belgian legislator's powers. By opting for maximum harmonization, the EU has taken control of the policy area surrounding artificial intelligence. As a result, the Belgian legislature, both federal and regional, largely loses the power to set its own, divergent rules for AI systems, even where the EU itself hardly regulates those systems.
Maximum harmonization and primacy
In the media and politics, attention is mainly focused on the substantive obligations of the AI Act: prohibited applications, rules for high-risk systems, and transparency requirements. However, one important aspect is often overlooked: the impact on the distribution of powers.
To understand the impact on the Belgian legal system, we need to look at the legal basis of the AI Act. The regulation is based on Article 114 TFEU, which focuses on the functioning of the internal market. The aim is not only to regulate, but above all to prevent market fragmentation between Member States.
EU law applies the principle of primacy: where national rules conflict with Union rules, Union law prevails, and where the Union has legislated exhaustively in an area, Member States are pre-empted from acting in that same area. The European legislator has made it clear, including in Recital 1, that the AI Act prevents Member States from imposing restrictions on the development, marketing, and use of AI systems, unless the regulation explicitly allows this. This creates a legal ceiling: national rules may not be stricter, but not different either, unless a specific opening has been left.
Partial regulation
However, there is an imbalance between the scope of the AI Act and the actual rules in the regulation.
- The scope of application: The definition of “AI system” in Article 3(1) is very broad. It covers almost all software that generates outputs through machine learning or logic. As a result, virtually every modern AI application falls under the exclusive jurisdiction of the EU.
- The regulated area: The actual obligations (compliance, conformity assessment, risk management) apply only to some of these systems, namely the “high-risk” systems and prohibited practices.
For the vast majority of AI systems (those that are not high-risk, such as spam filters, simple chatbots, games, and optimization software), the EU does not impose any substantive obligations. At the same time, the EU prohibits Member States such as Belgium from creating their own rules for these systems.
The European legislator has therefore implicitly determined that these low-risk systems are to be left to the free market, without restrictions imposed by national legislators. A Belgian bill that, for example, imposed additional ethical requirements on AI in video games, or extra requirements on simple AI chatbots, would therefore immediately breach EU law, since Belgium no longer has the power to regulate the "use" of AI systems.
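The imbalance between the Act's broad scope and its narrow set of substantive obligations can be made concrete with a small illustrative sketch. This is not a legal tool: the tier names and one-line summaries are my own informal simplification of the regulation's structure.

```python
# Illustrative simplification of the AI Act's structure: the "AI system"
# definition (Article 3(1)) sweeps broadly, but substantive duties attach
# only to the top tiers. Tier names and summaries are informal shorthand.
RISK_TIERS = {
    "prohibited": "banned outright (Article 5)",
    "high-risk": "full compliance duties: risk management, conformity assessment",
    "transparency": "disclosure duties only (e.g. Article 50, deepfake labelling)",
    "minimal": "no substantive EU duties, and no national duties allowed either",
}

def obligations(tier: str) -> str:
    """Return the (simplified) regulatory consequence for a risk tier."""
    return RISK_TIERS[tier]

# A spam filter or a simple chatbot falls within the broad definition of an
# "AI system", yet sits in the tier that carries no substantive duties:
print(obligations("minimal"))
```

The last line is the point of this article: the "minimal" tier is occupied by EU law, yet deliberately left empty of obligations, so Belgium cannot fill it either.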
National initiatives are not possible
It is inevitable that this restriction of national autonomy will lead to conflicts. After all, politicians want to respond to societal concerns about AI, such as deepfakes. Recent examples from our neighboring countries demonstrate the tension:
- Germany (Bavaria): A proposal was made to criminalize malicious deepfakes (such as non-consensual pornography) under criminal law.
- France: A bill sought to require that every AI-generated image be explicitly marked as such.
Although the intentions are legitimate, such initiatives are likely to conflict with the AI Act: regulating the "use" of AI systems falls exclusively within the competence of the EU. If the EU has decided to regulate deepfakes solely through transparency obligations (as in Article 50 of the AI Act), a Member State cannot unilaterally impose more far-reaching prohibitions or technical obligations. Even criminal law provisions may fall within the scope of EU law if they fragment the internal market for AI services.
Where does the Belgian legislator still have room for maneuver?
Harmonization is maximal, but not total. There are specific openings and exceptions where national or regional legislators in Belgium still retain jurisdiction.
1. Labor law and employee protection
Article 2(11) of the AI Act explicitly states that the regulation does not prevent Member States from maintaining or introducing laws that are more favorable to workers.
AI systems for recruitment or employee monitoring are considered "high-risk" under the AI Act and must meet its technical requirements. Belgium may nevertheless go further: it may prohibit such systems in certain contexts, or give the works council a right of veto that goes beyond the European rules. On this point, the AI Act functions as minimum harmonization.
2. National security and defense
Article 2(3) excludes AI systems used exclusively for military or national security purposes from the scope of the regulation.
This exception is strict. Many systems are “dual use” (suitable for both civilian and military use). As soon as a system is also used for civilian purposes, it falls under the AI Act. Moreover, the line between law enforcement (police = AI Act applies) and national security (intelligence services = AI Act does not apply) is often blurred in practice.
3. Biometric identification
The use of real-time remote biometric identification in public spaces was a political bone of contention. Article 5(1)(h) of the Act allows it under strict conditions for law enforcement purposes. However, Article 5(5) explicitly gives Member States the right to apply stricter rules or to prohibit this use altogether. Belgium therefore retains full competence to ban this technology.
Research and development
There is often confusion about the status of R&D. The crucial distinction lies in the development phase: purely scientific research versus commercial product development ("regular" R&D).
- Scientific research: AI systems that have been specifically developed and put into use for the sole purpose of scientific research and development are completely exempt from the AI Act (Article 2(6)).
- Commercial development (prior to market launch): An important "safe harbor" also applies to companies. As long as an AI system is in the research, testing, and development phase and has not yet been placed on the market or put into service, the obligations of the AI Act do not apply (Article 2(8)). Note that testing in real-world conditions is not covered by this exclusion.
Does this mean that Belgium can fill these areas excluded from the AI Act with rules of its own? No. In Recital 1 of the AI Act, the European legislator has explicitly stated that Member States may not restrict the development, placing on the market, and use of AI systems, and this recital does not carve out scientific use. Within its exclusive competence, the European legislator has therefore decided that any form of development and any form of scientific use must remain free of rules. A Belgian law imposing strict conditions on the training or development of AI systems in Belgian labs, or on AI systems used for scientific research, is therefore not permitted.
Strategic implications for businesses and governments
The shift in powers has direct consequences for your compliance strategy and risk management.
- Legal certainty for businesses: If you market an AI system that complies with European standards, a Belgian regulator or local authority cannot, in principle, impose additional technical requirements. This reduces the administrative burden for companies operating across borders.
- Be careful with indirect effects: The AI Act affects sectors in which Belgium traditionally has jurisdiction, such as education and justice. Because the AI Act classifies certain applications in education (e.g., AI for marking exams) as high-risk and regulates them, these applications are implicitly “authorized” by the EU. A Flemish decree that would completely ban the use of such AI in schools may conflict with the EU's principle of primacy.
- Limited enforcement: Although the EU has jurisdiction in legal terms, the European Commission is expected to be cautious in the initial phase when it comes to infringement proceedings against Member States that nevertheless make their own rules. This could lead to a period of legal uncertainty in which national rules that are in fact illegal remain in place until the Court of Justice intervenes.
Frequently Asked Questions (FAQ)
Can a Belgian municipality prohibit the use of certain AI cameras?
In principle, the regulation of the marketing and use of AI systems is harmonized at the European level. A local ban that contravenes the authorizations granted under the AI Act might conceivably be upheld if it is specifically justified on grounds of public order, but a general ban on a technology permitted by the EU is problematic because of the principle of primacy.
Can a Belgian city ban “smart cameras” if the AI Act allows them?
If the cameras use biometric identification, Belgium (and therefore also lower authorities via national legislation) may be stricter and ban them. However, if other AI applications (e.g., traffic flow analysis) that are permitted by the EU are involved, a local ban on the technology may be in violation of the AI Act. A ban based on the GDPR (data protection) remains possible.
Does the AI Act also apply to AI systems developed in Belgium for scientific research?
No, the AI Act contains specific exceptions for research and development. The regulation does not apply to AI systems that are specifically developed and put into service for the sole purpose of scientific research. In this way, the EU does not want to hinder innovation within the Union.
What about AI systems that do not pose a high risk? Can Belgium establish ethical rules for them?
No, not in principle. Because the EU has occupied the entire field of “AI systems” but chose not to substantially regulate low-risk systems, there has been a deliberate deregulation. National ethical rules would create new barriers in the internal market, which is exactly what the AI Act aims to prevent.
Does the AI Act also apply to AI models (such as GPT-5) or only to systems (such as ChatGPT)?
The AI Act makes a distinction. Most rules apply to systems, but there is a specific regime for "general-purpose AI models" (GPAI). Although Recital 1 of the AI Act refers only to Member States not adopting rules for "AI systems", it is legally very likely that this pre-emption also extends to AI models. The EU claims this power precisely to prevent the internal market from becoming fragmented by 27 different national laws on AI models.
Conclusion
The message is clear: the EU AI Act is more than a list of obligations; it is an instrument that protects the internal market from fragmentation. For companies, this offers legal certainty: in principle, you do not have to worry about 27 different national AI laws within the EU.
However, the line between what can and cannot be regulated at the national level will be the subject of legal battles in the coming years. National political pressure will undoubtedly lead to attempts to regulate locally after all.