The AI Act

On August 1, 2024, the European AI Act entered into force, making the European Union the first jurisdiction in the world to introduce a comprehensive legal framework for artificial intelligence (AI). This European regulation has significant implications for Belgian companies that develop, implement or use AI systems. Below we discuss the key points of the AI Act and the steps companies must take to comply with the new regulations. Our lawyers will be happy to assist you in this regard.

I. The AI Act: a risk-based approach

The AI Act aims to support the free movement of AI systems within the European Union while ensuring that AI systems are developed and deployed in a safe, reliable and human-centered manner. It emphasizes the protection of health, safety and fundamental rights as enshrined in the Charter of Fundamental Rights of the European Union. It aims to improve the functioning of the internal market by creating a uniform legal framework for AI systems.

The AI Act uses a risk-based approach, with obligations on AI providers and users depending on the risk the AI system may pose to society. It also provides for some exceptions, such as AI systems developed specifically for scientific research or military applications, which fall outside the scope of the AI Act.

II. The operation of the AI Act

A. Scope

The AI Act is a regulation and is therefore directly applicable throughout the European Union. EU member states (such as Belgium) are not required to transpose the new rules into their own national legislation.

The AI Act has a broad scope and establishes rules for the development, commercialization and use of AI systems in all industries.

The definition of what is considered an AI system is intentionally broad and technology-neutral in order to respond to rapid developments within AI. An AI system is defined as: a machine-based system that is designed to operate with varying levels of autonomy, that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers from the input it receives how to generate outputs such as predictions, content, recommendations or decisions that can influence physical or virtual environments. The core concepts of an AI system are thus autonomy, inference capability and adaptiveness.

The Act applies to all providers of AI systems operating in the European Union, as well as to providers and users outside the EU, if their systems are used within the Union. This is intended to ensure a uniform level of protection for users in the EU, regardless of where the AI systems were originally developed.

B. Risk qualification

Like the GDPR, the AI Act takes a risk-based approach. This classification is based on the potential risk that an AI system may pose to the health, safety or fundamental rights of individuals. The AI Act distinguishes between four levels of risk:

  1. Unacceptable risk: AI systems that pose a serious risk and whose use is prohibited.
  2. High risk: AI systems that pose significant risks and therefore require strict regulation.
  3. Limited risk: AI systems with specific transparency requirements, but without strict compliance obligations.
  4. Minimal risk: AI systems with minimal risk that are not subject to additional legal obligations.

C. Prohibited AI systems

Under the AI Act, certain AI applications considered to pose an unacceptable risk are prohibited. These include systems that use subliminal techniques to manipulate behavior, AI systems that exploit vulnerabilities of specific groups, such as children or the elderly, and government social scoring applications. Real-time remote biometric identification in publicly accessible spaces is also banned, with narrow exceptions for law enforcement in specific cases, such as the fight against serious crime.

D. AI systems with limited risk

For AI systems that fall under the "limited risk" category, there are mainly transparency requirements. This means that users must be informed when interacting with an AI system. Examples include chatbots and synthetic media generated by AI, such as deepfakes. Users must be informed about the artificial nature of such applications so that they can make informed decisions in their interaction with the system.

E. General-purpose AI models

The AI Act also provides specific rules for general-purpose AI models ("General-Purpose AI" or GPAI), such as large language models. Because these models can be used in a wide variety of contexts and for many different purposes, their providers are subject to specific transparency and documentation obligations. GPAI providers are required to provide detailed information about the data and methodology they have used to train and evaluate these models; models that pose systemic risks are subject to additional obligations.

F. Interaction with other legislation

The AI Act does not stand alone, but operates within a broader legal context and interacts with other European regulations, such as the GDPR and product safety directives. For example, AI systems processing personal data must comply with both the requirements of the AI Act and the GDPR to ensure privacy protection. In addition, the AI Act may overlap with sectoral regulations, such as the Medical Device Regulation, when AI is deployed for health purposes. These cross-links mean that companies must consider multiple legal frameworks when developing and implementing AI applications, and an integrated approach to compliance is necessary.

For products incorporating AI systems, the AI Act can work in conjunction with the directives under the so-called New Legislative Framework (NLF), which provide a harmonized framework for product safety. This means that companies developing AI systems for vehicles or medical devices, for example, must consider both product safety regulations and the specific AI requirements.

G. Temporal scope

The AI Act provides for a phased implementation, giving businesses and government agencies time to adjust to the new regulations. The main timelines are as follows:

  • February 2, 2025: The ban on certain AI practices, such as social scoring by governments and the use of manipulative techniques, comes into effect.
  • August 2, 2026: As of this date, high-risk AI systems must meet all specific requirements, including conformity assessments and risk management processes.
  • August 2, 2027: AI components integrated into physical products, such as medical devices or vehicles, must comply with the AI Act from this date in addition to existing product regulations within the NLF.

This phased approach allows companies to make the necessary adjustments and align their AI processes with the new requirements. At the same time, leading up to these deadlines, companies can benefit from support and advice within the testing environments established by the AI Act (regulatory sandboxes).

III. High-risk AI (HRAI) under the AI Act

A. What is a high-risk AI system?

An AI system is considered "high risk" (High Risk AI or HRAI) when it is deployed in sectors where it has the potential to have significant impacts on the health, safety and fundamental rights of individuals. The AI Act defines specific industries and applications that fall within this category, such as:

  • Healthcare: AI applications that support diagnosis or treatment decisions.
  • Education and vocational training: AI systems affecting access to education and subsequent professional opportunities.
  • Public and private services: AI models making decisions about access to essential services, such as social assistance and insurance.
  • Law enforcement and justice: AI applications in law enforcement operations or legal proceedings.

In addition to the identified sectors, the list of high-risk AI systems may be expanded by the European Commission as new technologies emerge that pose similar risks. The categorization as high risk specifically targets AI systems that directly affect humans in critical situations.

B. What must an HRAI itself comply with under the AI Act?

High-risk AI systems are subject to specific technical and organizational requirements to ensure their security and transparency. The main obligations are as follows:

  1. Risk Management: Providers must develop and implement a detailed risk management process, with ongoing assessments throughout the life cycle of the AI system. This process includes identifying, analyzing and mitigating all potential risks to health, safety and fundamental rights.
  2. Data governance: HRAI systems must train and operate with high-quality data. The data sets must be representative and free of bias to avoid unfair or harmful outcomes. Data governance also includes specific checks on the accuracy and completeness of the data used.
  3. Technical Documentation: Detailed technical documentation must be available to demonstrate system compliance with the AI Act. This documentation describes the operation of the AI system, the datasets used, risk management measures, and the results of all tests and evaluations.
  4. Logging: HRAI systems should include log features that record data on usage and performance. This allows errors or incidents to be identified and analyzed, and ensures that there is a complete record of the AI system's operation over time.
  5. Transparency: Users of HRAI systems must be well informed about the operation, limitations, and intended purposes of the AI system. This transparency requirement ensures that users understand how the system arrives at decisions and what factors influence the output.
  6. Human oversight: The AI system must be designed to enable effective human oversight. This means that operators can intervene when a system operates outside its intended scope or when its output could adversely affect individuals.
  7. Accuracy, robustness and cybersecurity: HRAIs must perform consistently within the established accuracy and reliability criteria. In addition, appropriate cybersecurity measures must be in place to protect the AI system from attacks and tampering.

IV. Obligations of parties in the value chain of an HRAI

1. Obligations of providers of an HRAI

Providers of HRAIs have significant obligations under the AI Act. Their responsibilities include:

  • Conformity Assessment: Providers must conduct a conformity assessment to prove that their system meets the requirements of the AI Act. This process is similar to CE marking and requires technical documentation and extensive testing to demonstrate compliance.
  • Quality Management: Providers must implement a quality management system that complies with the AI Act and focuses on the safety and reliability of the AI system.
  • Technical Documentation: Providers must make available detailed technical documentation documenting the design, development and operation of the system. This documentation must remain accessible to regulatory bodies throughout the life of the AI system.
  • Corrective measures: If a system no longer meets the requirements, providers must take corrective action. This may include modifications to the system, reassessments of compliance, or a temporary suspension of the system.
  • CE mark: Providers of HRAIs must affix the CE mark to the system to indicate that the system complies with the AI Act and conforms to regulatory requirements.

2. Obligations of importers of an HRAI

Importers of HRAIs placing an AI system from outside the EU on the European market also have a number of specific obligations. They are responsible for ensuring compliance with the AI Act before the system is placed on the market:

  • Verify conformity: Importers must verify that the system meets the requirements of the AI Act and has the necessary CE markings and documentation.
  • Name and contact information: Importers must ensure that their name, registered trademark and contact information are visible on the AI system.
  • Corrective measures: If a system is non-compliant, importers must take the necessary corrective measures, withdraw the system from the market or recall it if necessary.

3. Obligations of distributors of an HRAI

Distributors of HRAIs who supply or market the system to end users must verify that the system contains the appropriate markings and compliance documents:

  • Control of CE marking and conformity documents: Distributors must verify that the system has the required markings and documentation before it is marketed.
  • Cooperation with supervisors: Distributors must allow regulators access to documentation and cooperate with audits and investigations.
  • Corrective measures: If a system does not meet the legal requirements, distributors are required to take corrective action or withdraw the system from the market.

4. Obligations of deployers of an HRAI

Deployers (those responsible for the use of an AI system) have obligations to ensure that HRAIs are used in accordance with legal requirements. They must:

  • Use as directed: Deployers must use AI systems according to the provider's instructions and ensure that the input data is relevant and representative of the intended use.
  • Incident reporting: If incidents occur that affect the safety or performance of the system, deployers are required to report them to the provider and the appropriate authorities.
  • Training and supervision: Deployers must provide adequate supervision and ensure that employees working with the AI system are adequately trained to use it safely and efficiently.

5. Obligations of other parties in the AI value chain

The AI Act also recognizes the role of other parties who may be involved in HRAIs. These may include, for example:

  • Integrators: Parties that modify or integrate an HRAI into another product may need to conduct a new conformity assessment to ensure that the system still complies with the AI Act.
  • Maintenance Parties: Companies or individuals providing technical support or maintaining the system must also comply with the safety and compliance requirements set forth by the AI Act.

Through this distribution of obligations within the value chain, the AI Act aims to create shared responsibility for ensuring the security, reliability and transparency of high-risk AI systems.

V. Governance, oversight & sanctions under the AI Act

A. AI Office and Codes of Practice

The AI Act provides for the establishment of a European AI Office at the EU level, which is responsible for monitoring compliance with the AI Act and developing codes of practice. The AI Office works with national supervisory authorities and plays a coordinating role in implementing the AI Act across member states. It also provides guidance and makes templates available to assist companies in complying with and implementing the AI Act.

Codes of practice, prepared by the AI Office in collaboration with sectoral stakeholders, serve as guidelines for specific AI applications and help companies interpret and apply the legislation. These codes of practice are not binding but are recommended as a supplement to the compliance requirements of the AI Act.

B. AI Committee (AI Board)

The AI Committee, composed of representatives from each EU Member State, serves as a coordinating body to ensure the consistent application of the AI Act. Its role is to facilitate cooperation between national authorities and to jointly address interpretative questions and compliance issues. The committee works closely with the AI Office and supports member states in implementing the legislation by harmonizing enforcement practices within the EU.

C. Advisory Forum and Scientific Panel

The Advisory Forum consists of representatives from various stakeholders, including business, consumer organizations, labor unions and civil society. The purpose of this forum is to provide input on practical aspects of the AI Act and to keep stakeholders informed of developments within the legislation.

In addition, a Scientific Panel has been established, consisting of technical and legal experts in the field of AI. This panel provides scientific and technical support to the AI Office and the AI Committee. It acts as a knowledge platform and provides advice on complex issues, such as the evaluation of high-risk applications and ethical questions within AI.

D. National competent authorities and AI regulatory test environments

Each EU Member State designates national supervisory authorities responsible for enforcing the AI Act within their own jurisdiction. These national competent authorities are tasked with verifying the compliance of AI systems, handling complaints and conducting compliance investigations. They are authorized to conduct inspections and audits of companies and to impose penalties for violations of the AI Act.

In addition, the AI Act introduces the concept of regulatory sandboxes or AI test environments, in which companies can test their AI systems under controlled conditions. These testing environments are set up and managed by national regulators and allow companies to develop and evaluate AI applications in compliance with the AI Act, without having to immediately comply with all the requirements. These sandboxes are especially useful for SMEs and start-ups looking to innovate and validate their AI technologies.

E. Sanctions

The AI Act employs a robust penalty framework similar to the GDPR. Violations of the AI Act can result in significant fines, depending on the nature and severity of the violation. The AI Act provides for three levels of penalties:

  1. Prohibited AI practices: The deployment of prohibited AI systems can result in fines of up to 35 million euros or 7% of global annual turnover, whichever is higher.
  2. Breaches of other obligations: Violations of most other obligations, including the requirements for high-risk AI systems and the transparency rules, can result in fines of up to 15 million euros or 3% of global annual turnover.
  3. Supplying incorrect information: Providing incorrect, incomplete or misleading information to notified bodies or national competent authorities can result in fines of up to 7.5 million euros or 1% of global annual turnover.

It is important to note that the AI Act sets fines proportionately based on the size and capacity of the enterprise. A modified fine structure is used for small and medium-sized enterprises (SMEs) to keep the economic impact proportionate to their scale.

VI. How can we help you?

Our law firm provides specialized legal support for companies seeking to prepare for the AI Act. Our attorneys help you with:

  • Risk assessment: Identification of risk categories of your AI systems.
  • Conformity Assessment: Guidance through the conformity assessment process, including support with required technical documentation.
  • Establishing transparency and oversight procedures: Design and implementation of processes that meet transparency and human oversight requirements.
  • Data Governance and Compliance: Advice on data quality compliance and establishing a robust data management process.
  • Legal Representation and Training: Training for your team on the legal obligations under the AI Act and legal assistance in case of audits or enforcement.

Contact

Questions? Need advice?
Contact Attorney Joris Deene.

Phone: 09/280.20.68
E-mail: joris.deene@everest-law.be
