Italy has become the first EU member state to pass a comprehensive national law on Artificial Intelligence (AI) that goes beyond the European AI Act. This law, which enters into force on Oct. 10, 2025, does not implement the European regulation - as a regulation, the AI Act requires no national implementation - but supplements it with specific rules for crucial societal areas such as health care, employment and copyright. It provides a fascinating blueprint for how other countries, including Belgium, can address the practical and legal challenges of AI.
The AI Act as a foundation, not an endpoint
It is crucial to understand the legal context correctly. The European AI Act (Regulation (EU) 2024/1689) is, like the GDPR, a regulation. This means it is directly applicable in all EU member states. Businesses and citizens can directly invoke the articles of the AI Act without the need for a national transposition law.
The AI Act creates a harmonized framework based primarily on the risks posed by AI systems. The Italian law does something different. It uses the limited space that the AI Act leaves for national interpretation (protection of workers, the use of remote biometric identification systems and the reuse of personal data for AI training) and, in addition, addresses a series of new problems caused by AI that the European legislator has not (yet) addressed.
The Italian approach: pragmatic rules for a digital reality
The Italian law is remarkably concrete and addresses today's hot topics. It lays a foundation for a "correct, transparent and responsible" use of AI from an "anthropocentric" perspective.
Protecting the individual: from minors to victims of deepfakes
The law strengthens citizen protections on several fronts:
- Minors: In line with GDPR principles, the age limit for consent is clearly defined. Parental consent is required for AI services to process the personal data of children under 14 years of age. From the age of 14, a young person can give valid consent themselves.
- Criminal approach to deepfakes: Italy makes the malicious distribution of AI-manipulated content (images, videos, voices) a specific crime. Anyone who distributes such content without permission in order to cause harm risks a prison sentence of one to five years. This is a strong signal against disinformation and attacks on personal integrity.
Healthcare: innovation with clear ethical boundaries
The healthcare sector is identified as a priority area for AI innovation, while being subject to strict ethical safeguards:
- No discrimination: The law explicitly prohibits AI from being used to select or restrict access to health care based on discriminatory criteria.
- Human final decision: AI systems in healthcare are solely a support tool. The final medical decision on diagnosis, treatment or therapy remains at all times the preserve of the physician or healthcare professional.
- Legal basis for data: Crucially, the law creates an explicit legal basis for the secondary use of health data for the development of AI models. This is done under the heading of "substantial public interest," an exception provided for in the GDPR (Art. 9(2)(g)), which should facilitate innovation within a strict legal framework.
Labor law: AI as ally, not controller
In the workplace, AI must be used to improve working conditions, protect workers' psychophysical integrity and increase productivity. The law emphasizes that the use of AI must never violate human dignity or invade privacy. Employers are explicitly required to inform employees about the use of AI systems.
Professions and justice: human decision remains sacrosanct
For intellectual professions, such as that of an attorney, the law states that AI may only be used for instrumental and supporting tasks. The core of intellectual performance must remain human. Moreover, the professional must inform his client in a clear and complete manner about the AI systems used.
Within the judiciary, this line continues: AI may be used for organizational and administrative support, but any decision on the interpretation of the law, the evaluation of facts and evidence, and the final verdict is always reserved for the magistrate.
👉 Also read our blog on the judge's use of AI
Economy and innovation: strategic choices for the future
The law is not only protective, but also contains a clear economic and industrial strategy:
- Tenders: When purchasing AI systems, the government should give preference to solutions that process and store data in Italy, and in particular to solutions offered by SMEs. This is a clear move toward technological sovereignty.
- Investments: Up to one billion euros will be released for investment in start-ups and scale-ups in the sectors of AI, cybersecurity and quantum computing.
- Institutional framework: Several new bodies, such as an Observatory for AI in the Workplace, are being established to monitor the impact and devise a national strategy.
Knowledge and creation: copyright and education in the AI era
Finally, the law addresses two fundamental cultural and social domains:
- Copyright: The law takes a clear stand on one of the most debated AI topics. Italian copyright law is amended to specify that only "human" creations ("opere dell'ingegno umano") are eligible for protection. Works generated entirely autonomously by AI are thus excluded from copyright. At the same time, it confirms that AI training via text and data mining is in principle allowed under the existing legal exceptions.
- Education: The law enshrines the need for AI literacy in the education system, with a focus on the development of STEM (Science, Technology, Engineering, and Mathematics) competencies.
Conclusion: a call to action for the Belgian legislature
The Italian AI law is an impressive and pragmatic exercise in modern legislation. It shows that beyond the general European framework, there is a clear need for national rules that respond to concrete problems and make strategic choices. From criminalizing deepfakes to clarifying copyright to protecting the human touch in the workplace, Italy provides a thoughtful blueprint. For the Belgian legislator, here lies the challenge and opportunity not to wait, but to proactively create a coherent national framework that offers legal certainty to citizens and businesses in the AI era.
For companies offering AI applications in the European market, this situation creates a new strategic reality. Waiting for the deadlines of the AI Act is no longer enough. The emergence of national legislation, such as that in Italy, means anticipating country-specific obligations. The most robust approach is to adopt an "EU-plus" strategy: a compliance model that combines the core requirements of the AI Act with the stricter or additional rules from the most progressive national laws. Only in this way can the risks of a fragmented legal landscape and unexpected enforcement actions in individual member states be mitigated.