Artificial intelligence and the employment relationship: legal framework and practical implications

Introduction

Artificial intelligence (AI) has now found its place within labor relations and can support both employers and employees in making work processes more efficient, from hiring to day-to-day functioning within the enterprise. While AI offers numerous benefits, its use within the employment context also raises important legal questions. A set of rules and legislation provides the framework within which AI applications must operate in the employment relationship.

Below we discuss the legal framework applicable to AI systems in labor relations, with particular attention to the European AI Act, the protection of personal data and the involvement of consultation bodies within the company.

1. What is an AI system?

Regulation (EU) 2024/1689 (the "AI Act") defines an artificial-intelligence system as:

"A machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments."

This broad definition includes many applications already in use in the employment context, such as recruitment software, performance monitoring systems and chatbots for internal communication.

2. The AI Act: A risk-based approach

2.1 Background to the AI Act

With the AI Act, the European Union has created a regulatory framework to ensure that AI systems are safe and respect the fundamental rights of citizens, in line with European values. It entered into force on Aug. 1, 2024, and is being implemented in stages.

The AI Act takes a risk-based approach: the greater the risk posed by the AI system, the stricter the regulations. As a result, providers and users of AI systems are not all affected in the same way by the new regulations.

2.2 Classification of AI systems according to risk level

Unacceptable risk

Systems with unacceptable risk have been banned since Feb. 2, 2025. These are AI systems that violate EU norms and values and lead to flagrant violations of fundamental rights, such as:

  • Systems that exploit personal vulnerabilities, employ manipulation or use subliminal techniques
  • Social scoring for public or private purposes
  • Predictive policing targeting individuals based solely on profiling
  • Untargeted collection of images from the Internet or surveillance cameras to build facial recognition databases
  • Emotion recognition in the workplace and educational institutions (except for medical or safety reasons)
  • Biometric categorization to draw inferences about race, political views, union membership, religious or philosophical beliefs or sexual orientation
  • Remote real-time biometric identification in public places by law enforcement agencies (subject to strictly limited exceptions)

High risk

Annex III of the AI Act identifies eight domains in which the use of AI may be particularly sensitive. An AI system is classified as "high risk" if it is intended for use in any of these contexts.

In the area of employment and workforce management specifically, this covers AI systems used for:

  • Recruitment or selection of individuals, including posting targeted job openings, analyzing and filtering applications and reviewing candidates
  • Making decisions on terms and conditions of employment, promotion or termination of the employment relationship
  • Assigning tasks based on individual behavior or personal characteristics
  • Monitoring and evaluating performance and behavior of individuals in the context of the employment relationship

These systems can have significant impacts on career opportunities, livelihoods and workers' rights. They can also lead to discrimination or violations of workers' fundamental rights regarding data protection and privacy.

Providers must subject high-risk AI systems to a conformity assessment before placing them on the EU market. They must demonstrate that the system meets the mandatory requirements for trustworthy AI (including data quality, documentation, transparency and human oversight).

Users of these systems (deployers, in the AI Act's terminology) must take technical and organizational measures to ensure that the system is used in accordance with the instructions provided.

Before putting a high-risk AI system into service, employers must inform employee representatives and the affected employees that they will be subject to its use.

These obligations take effect on Aug. 2, 2026.

Limited risk

This concerns AI systems with specific transparency needs, such as customer service chatbots. Users should know they are communicating with a machine and should have the option to speak to a human instead. The key principles here are transparency, informed consent and opt-out capabilities.

Minimal risk

The lowest risk level includes applications such as video games or AI-based spam filters. No special obligations apply to these systems.

2.3 Phased implementation of the AI Act

The AI Act will take effect in phases:

  • As of Feb. 2, 2025: Prohibition of AI systems with unacceptable risk. Obligation for providers and users to ensure an adequate level of AI literacy among personnel working with AI systems.
  • As of Aug. 2, 2025: Obligations for general-purpose AI models (such as the models underlying ChatGPT) take effect. Penalties become applicable, including fines of up to €35 million or up to 7% of annual global turnover for companies using AI systems with unacceptable risk.
  • As of Aug. 2, 2026: The bulk of the AI Act's provisions become applicable, in particular the rules for high-risk AI systems under Annex III, including recruitment, personnel management and workplace monitoring systems.
  • As of Aug. 2, 2027: Rules for high-risk AI systems from Annex I (toys, radio equipment, medical devices, etc.) take effect.

3. The AI Act and the General Data Protection Regulation (GDPR)

The AI Act and the GDPR (Regulation (EU) 2016/679) complement each other to ensure lawful, fair and transparent processing of personal data in the context of AI systems.

The Belgian Data Protection Authority (GBA) has published guidelines to help employers integrate AI into their HR operations while respecting the GDPR. Recommended practices include:

  • Ensuring transparency: Clearly explain in the data protection statement how personal data are collected, used and stored in the AI system.
  • Ensuring minimal data processing: Only collect and use the minimum amount of personal data necessary for the AI system.
  • Ensuring storage limitation: Keep personal data no longer than necessary, in accordance with the legitimate purposes for which they are processed.

For more information on the GDPR in relation to AI systems, consult the guidelines of the Belgian Data Protection Authority.

4. AI in the different phases of the employment relationship

4.1 In recruitment and selection

AI systems are increasingly being used during the recruitment phase. The applications can take different forms:

  • Systems that analyze candidates, including resume screening, interview analysis and even facial expressions during video interviews
  • Predictive analytics that generate data on education level, family status or behavior to determine which profiles best fit the business

It is important to note that such systems are usually classified as "high risk" under the AI Act because of their potential impact on career opportunities and risk of discrimination.

4.2 During the employment relationship

AI systems also find applications during the employment relationship:

  • Monitoring and evaluation: Technological developments and the increase in telecommuting have led employers to seek new methods for monitoring work performance. AI systems have proven useful, but their use also raises legal and ethical questions.
  • Improving well-being at work: AI can help reduce occupational accidents and psychosocial risks by relieving workers from repetitive or stressful tasks, for example by deploying chatbots.

4.3 Upon termination of the employment relationship

AI can also play a role in ending employment relationships:

  • Automatic generation and possible signing of dismissal letters
  • Systems that measure employee performance and generate reports that can lead to employment termination decisions

Clearly, existing and future legislation sets limits on the use of AI systems to protect fundamental rights and prevent abuse.

5. Role of consultative bodies in AI implementation

Implementing AI systems in the employment relationship in many cases requires the involvement of the consultation bodies within the company.

5.1 The Works Council (OR)

In accordance with Article 9 of CLA No. 9, the employer "must, at the request of the employee representatives, inform the works council of the rules followed regarding personnel policy."

The Works Council must also be informed of "the designs and measures that may modify one or more elements of personnel policy," including rules on recruitment, selection, promotion and information and communication systems in the company.

CLA No. 39 applies when the use of AI in the workplace can be considered "a new technology that has significant collective impact on employment, work organization or working conditions." This collective bargaining agreement requires employers with an average of at least 50 employees to:

  • At least 3 months before the implementation of the new technology, provide written information on the nature of the technology, its justification and its social implications
  • Consult with employee representation on the social consequences of introducing the new technology

If these procedures are not complied with, the employer cannot unilaterally terminate the employment contracts of the employees concerned, except for reasons unrelated to the introduction of the new technology.

5.2 The trade union delegation

The trade union delegation must be informed in advance of changes that may alter contractual or customary conditions of employment and remuneration, excluding information of an individual nature.

Depending on the purpose for which the AI application is used, it may be necessary to inform the trade union delegation.

5.3 The Committee for Prevention and Protection at Work (CPBW)

The use of AI may have an impact on well-being at work and therefore create the obligation to seek the prior opinion of the CPBW. This applies to all projects, measures and resources that may, directly or indirectly, immediately or eventually, have an impact on well-being at work.

Conclusion

Artificial intelligence offers significant opportunities for optimizing work processes and improving labor relations. At the same time, the use of AI systems brings legal challenges, particularly in the areas of data protection, nondiscrimination and employee rights.

The European AI Act provides a comprehensive regulatory framework that seeks to balance innovation with the protection of fundamental rights. It is essential for employers to become familiar with the classification of AI systems and their obligations, as well as the role of consultative bodies in implementing AI in the workplace.

Our law firm has specialized expertise in the areas of labor law and legal aspects of AI implementation. We can assist you in navigating this complex regulatory landscape and ensure that your use of AI technology is fully compliant with applicable laws.


Contact

Questions? Need advice?
Contact Attorney Joris Deene.

Phone: 09/280.20.68
E-mail: joris.deene@everest-law.be
