Transparency under the AI Act: why saying “I'm a chatbot” is not enough

Many companies operate under the assumption that the transparency obligations in the European AI Regulation (AI Act) boil down to a single rule: a chatbot must identify itself as such. This is a dangerous misconception. The transparency rules extend much further and apply to any AI system that interacts directly with humans, as well as to systems deployed in the workplace or systems that make far-reaching decisions. A mere mention in the general terms and conditions is rarely enough.

Direct interaction: more than just chatbots

Article 50(1) of the AI Act imposes a clear obligation: providers must ensure that AI systems intended for direct interaction with natural persons are designed so that users know they are communicating with a machine.

The focus is often on customer service chatbots, but the legislation covers a much broader spectrum of applications. Consider, for example:

  • Intelligent access terminals or self-service kiosks: Physical points where customers or visitors check in.
  • AI-based intake tools: Systems that collect information via adaptive questionnaires (e.g., “What is your date of birth?”).
  • Content moderation: Interfaces that scan uploads and provide immediate feedback to the user.

The law provides an exception where it is obvious to a “reasonably well-informed, observant and circumspect” natural person that they are interacting with AI. Be careful with this exception, however. Although everyone recognizes ChatGPT as AI, the same does not necessarily hold for your specific enterprise application. Users may mistakenly believe they are dealing with a classic, rule-based software system or even a human operator.

The notification must be timely, clear and contextual. Hiding this information in a privacy statement or terms and conditions is not enough.

AI in the workplace: a specific information requirement

For Belgian employers, the AI Act entails specific obligations that affect social dialogue. When you, as an employer (“deployer”), deploy a high-risk AI system in the workplace, there is a strict information obligation (Art. 26(7) AI Act).

Even before the system goes into operation, both employees and their representatives (works council or union delegation) must be informed that they will be subject to the use of such a system. This applies even if employees do not operate the system themselves but are affected by it (e.g. AI tools for performance analysis or recruitment). This obligation is in addition to the existing rules around information and consultation in Belgian labor law.

Decision-making and the right to explanation

The most onerous transparency obligations apply to high-risk AI systems that make (or help make) decisions that have legal consequences for individuals. Examples include credit ratings, educational evaluations or social benefit decisions.

Two crucial rules apply here:

  1. Notification requirement: The deployer must inform the natural person concerned that an AI system is being applied to them (Art. 26(11) AI Act).
  2. Right to explanation: Any person subject to a decision based on a high-risk AI system that adversely affects their health, safety or fundamental rights has the right to obtain clear and meaningful explanations (Art. 86 AI Act).

This means that as a company you must not only report that AI is being used, but also be able to explain how the system arrived at a particular decision. “The computer says no” is no longer legally tenable.

Emotion recognition and deepfakes

The legislature pays specific attention to technologies that may be perceived as intrusive or misleading:

  • Emotion recognition & biometric categorization: Whoever deploys a system that infers emotions or categorizes individuals on the basis of biometric data (e.g., age, gender) must explicitly inform the individuals concerned (Art. 50(3) AI Act). Note that use in the workplace or in education is in many cases even prohibited (Art. 5(1)(f) AI Act).
  • Deepfakes: Deployers of AI systems that generate image, audio or video content resembling existing persons or events (deepfakes) must clearly disclose that the content has been artificially generated or manipulated (Art. 50(4) AI Act).

Frequently Asked Questions (FAQ)

Does the transparency requirement apply to every piece of software with an algorithm?
No. The Article 50 obligation applies specifically to AI systems intended for direct interaction with humans, or systems that generate synthetic content (such as text, images or sound). For high-risk AI systems, heavier obligations apply regardless of whether there is direct interaction.

Should I inform my employees about every AI tool we use?
Under Article 26(7) of the AI Act, the explicit obligation applies to high-risk AI systems (as defined in Annex III, e.g., recruitment or evaluation tools). However, from the perspective of good employment practice and the GDPR, transparency about workplace monitoring is always advisable.

Conclusion

Transparency under the AI Act is not a matter of ticking a box once. It requires an active, contextual approach in which the user, whether a customer, employee or citizen, understands at the appropriate time that technology plays a role in the interaction or decision-making.


Joris Deene

Attorney-partner at Everest Attorneys

Contact

Questions? Need advice?
Contact Attorney Joris Deene.

Phone: 09/280.20.68
E-mail: joris.deene@everest-law.be
