Artificial Intelligence (AI) is nowadays unavoidable in everyday life. Governments, businesses and consumers alike are confronted with AI applications and are increasingly aware of their advantages and disadvantages. Policymakers in particular are coming to the realization that AI can have far-reaching consequences and are considering regulations to prevent or mitigate its undesirable effects.
In 2019, the Flemish government launched an AI action plan, and in 2022 the federal government adopted a national convergence plan for the development of AI. In addition, the industry established AI4Belgium some time ago.
Below we give an overview of the legal framework for AI in Belgium. As yet, there is little Belgian legislation in this area. It is mainly the (European) AI Act that has the ambition to create a uniform legal framework for AI within the European Union.
What is the legal definition of AI?
As of today, Belgian law does not have a definition of artificial intelligence.
A commonly used definition is that of the European Commission, which defines Artificial Intelligence as "systems that exhibit intelligent behavior by analyzing their environment and taking action - to some extent independently - to achieve specific objectives. AI-based systems can consist solely of software and operate in the virtual world (e.g., voice-activated assistants, image analysis software, search engines and voice and facial recognition systems), but AI can also be integrated into hardware devices (e.g., advanced robots, self-driving cars, drones or applications of the Internet of Things)". The AI4Belgium report also uses this definition.
What is the essence of the AI Act?
The AI Act introduces a legal framework for AI based on a risk-based approach: it imposes obligations on providers of AI systems that vary according to the risks posed by their AI technology. AI systems are divided into a number of risk levels:
- Unacceptable risks: AI systems that pose a threat to individuals will be banned. Social scoring systems as well as real-time and remote biometric identification systems are cited as examples.
- High risk: AI systems that have a major impact on safety or fundamental rights. Such AI systems must be subject to a thorough assessment before they can be introduced to the market, and will be subject to continuous monitoring throughout their lifetime.
- Generative AI: AI systems that create specific content, such as ChatGPT, must comply with transparency requirements. This includes disclosing that the generated content was generated by AI, taking measures to prevent illegal content from being generated, and publishing summaries of copyrighted data used for training.
- Limited risk: AI systems with limited risk must meet minimum transparency requirements so that users can make informed decisions. After interacting with these applications, users can choose whether to continue using them. Users should also be aware that they are engaging with AI systems, particularly in the case of systems that generate or manipulate image, audio or video content (e.g., deepfakes).
What rules apply to "faulty" AI systems?
What if AI systems are defective and unsafe? Since Belgian law does not yet contain a specific liability framework for AI systems, one must fall back on the general rules applicable to defective products or services.
Particular reference should be made here to the rules in the Civil Code (Book 6, extra-contractual liability) on liability for defective products, which hold the producer of a product liable for damage resulting from that product. If an AI system can be considered a product (which is the case when it is incorporated into a corporeal movable) and is defective, the producer can be held liable for the damage it causes.
Of course, flawed AI systems can also be viewed from the perspective of consumer law, such as the provisions of the Old Civil Code relating to sales to consumers. Digital content or services (including AI systems) must meet objective conformity requirements (being in line with what the general public expects from similar AI systems) and subjective conformity requirements (being in line with what has been specifically agreed with the consumer). If an AI system is deemed to be non-compliant, consumers can demand that the AI system be brought into conformity, request a price reduction or terminate the contract.
Finally, it should also be pointed out that AI systems often process personal data and must therefore comply with the requirements of the GDPR. AI producers must observe these rules when processing personal data, and where flaws in AI systems result in unauthorized access to personal data, data breaches or other violations of the GDPR, individuals can seek redress against the AI producer.
Can AI systems claim intellectual property rights?
Under Belgian law, an AI system cannot be designated as the inventor in a patent application. Indeed, the inventor must be a human being, a natural person. This was also confirmed by the European Patent Office (EPO) in 2021 in the DABUS case.
The same can be said for copyright. Under Belgian law, only a human being, a natural person, can be considered an author (Article XI.170 Code of Economic Law). Thus, AI systems cannot claim protection on creations they allegedly made. Read more about this on our page about AI and copyright.
