AI Chatbots and AI Agents: legal concerns under Belgian law

General

The deployment of AI Chatbots and AI Agents offers organizations innovative opportunities to communicate with customers, but at the same time brings complex legal challenges that need to be thoroughly analyzed. Below, we provide an overview of the legal framework surrounding the use of AI Chatbots and AI Agents in Belgium.

What are AI Chatbots or AI Agents?

AI Chatbots and AI Agents are digital systems that can process, understand and respond to natural language. They form an interface between companies and their (potential) customers. However, behind this seemingly simple interaction lies a complex technological process.

Modern AI Chatbots are based on large language models (LLMs) that process huge amounts of text. In a compact process, the system analyzes user input, places it in context of the conversation, recognizes the intent of the question, generates a response based on its training, and applies filters where necessary to ensure the quality and safety of the response.

AI Agents go one step further. In addition to communicating, these systems can perform actions such as searching databases, booking appointments, or generating personalized documents. They possess a certain autonomy (or ability to act), allowing them to accomplish certain tasks independently within predefined parameters.

It is important to note that throughout this process, data is stored, processed and often analyzed. This may happen on servers of the company itself, but more often on those of external service providers such as OpenAI (ChatGPT), Google (Gemini) or Anthropic (Claude).

What legal obligations does the European AI Act impose on companies implementing AI technologies?

The European AI Act (Regulation (EU) 2024/1689), which takes effect in phases until 2026, introduces a series of obligations for companies using AI systems, depending on the level of risk associated with the application. For companies deploying AI Chatbots and AI Agents, the following provisions are particularly relevant:

  1. Transparency obligation: Users must be clearly informed that they are interacting with an AI system, not a human (Art. 50). The artificial nature of the interaction must be made known.
  2. Risk Classification: Depending on the purpose of use of the AI Chatbot or Agent, the classification varies:
    • Chatbots for general customer service usually fall under the "limited risk" category with transparency obligations (Art. 50)
    • Systems that make decisions about creditworthiness or contracting may be considered "high risk" (Art. 6)
  3. Requirements for high-risk systems:
    • Conducting risk assessments (art. 9)
    • Implementing risk management systems (art. 9)
    • Preparation of technical documentation (art. 11)
    • Maintaining logs (art. 12)
    • Ensuring human supervision (art. 14)
    • Ensuring accuracy, robustness and cybersecurity (Art. 15)
  4. Prohibited practices: Article 5 of the AI Act prohibits specific applications relevant to chatbots and AI agents:
    • Manipulative or deceptive techniques: Prohibit AI systems that use subliminal or manipulative techniques to substantially distort behavior, causing users to make decisions they would not otherwise have made. For chatbots, this means that they should not be designed to manipulate users undetected.
    • Abuse of vulnerabilities: AI systems should not exploit vulnerabilities based on age, disability or social/economic circumstances. For example, chatbots should not be specifically programmed to identify and manipulate the elderly or people with cognitive disabilities.
    • Social scoring and classification: Ban AI systems that evaluate people based on social behavior, resulting in adverse treatment in contexts separate from where the data was collected, or in a disproportionate manner. Customer service chatbots should not categorize users in a way that results in discrimination.
    • Emotion recognition in the workplace and in education: Explicitly prohibit AI systems that infer emotions in work environments and educational contexts (except for medical/safety purposes). Chatbots in these environments should thus not be designed to analyze emotional states of employees or students.
  5. Fines: For non-compliance with the AI Act, penalties can reach up to 35 million euros or 7% of a company's annual global turnover, depending on the nature and severity of the violation.

For enterprises using AI Chatbots, it is essential to develop their compliance strategy in a timely manner, as most of the provisions of the AI Act take full effect from 2025-2026. Specifically, the following phased timeline comes into effect:

  • Feb. 2, 2025: provisions for prohibited AI practices already in place
  • Aug. 2, 2025: rules for general AI models become applicable
  • Aug. 2, 2026: requirements for high-risk AI applications and transparency obligations
  • Aug. 2, 2027: full entry into force of the AI Act

For more details on the Ai Act, you can also visit our page on the AI Act consult.

Contractual provisions

What legal aspects deserve special attention in contracts with external AI suppliers?

When using third-party AI Chatbots or AI Agents, the following points of interest are important:

  1. Service Level Agreements (SLAs).: Provide clear agreements on availability, response times and technical support in a SLA. An AI Chatbot that is unavailable during peak hours can cause significant commercial damage.

  2. Liability distribution: Accurately determine who is liable in different scenarios:
    • If the AI provides false information
    • In the event of data loss or data breach
    • When service is unavailable
    • For violations of applicable laws
  3. Processor Agreement: Under the General Data Protection Regulation (AVG/GDPR) is a processing agreement required when the third party processes personal data, as stipulated in Article 28 GDPR. It must contain specific provisions on:
    • The purpose and nature of the processing
    • The rights and obligations of both parties
    • Security measures
    • Audit capabilities
  4. Ownership of data and training data: Make clear agreements about:
    • Who owns the data provided by users
    • Whether the supplier may use this data to train its own models
    • What happens to the data after termination of the contract
  5. Exit strategy: Provide provisions that allow for a smooth transition upon termination of cooperation:
    • Data portability to a new provider
    • Return or destruction of data
    • Transition period with support
  6. Compliance Guarantees: Request assurances that the AI solution meets:
    • The AI Act
    • GDPR and other data protection legislation
    • Industry-specific regulations (e.g., for financial or medical applications)

Example: a Belgian webshop implements a chatbot from an American provider without adequate contractual protection. If this chatbot were then to provide incorrect pricing information that customers claim, there could be ambiguity about liability. This could lead to a costly legal dispute and significant reputational damage for the company.

How do you draft optimal terms of use for AI applications and in what way do you make them legally binding on end users?

Terms of use are the contractual basis for the relationship between the company and the end user of the AI Chatbot or Agent. To be legally sound, these terms must include the following:

  1. Clear identification of the AI system: Transparent communication that the user is interacting with an AI, not a human collaborator (in accordance with the AI-Act).

  2. Limitations of liability:
    • Disclaimer for erroneous information
    • Limitation of liability to direct damages
    • Exclusion of warranties of fitness for specific purposes
  3. Use restrictions:
    • Prohibition of misuse or manipulation of the system
    • Ban on importing illegal or harmful content
    • Provisions against automated access (scraping)
  4. Intellectual property rights:
    • Ownership of AI-generated content
    • Rights granted to the user
    • Restrictions on the use of outputs
  5. Privacy provisions:
    • What data is collected
    • How these are used
    • Retention periods
    • Rights of data subjects

To make these conditions opposable (legally binding), the following measures are essential:

  • Active acceptance: Implement a mechanism where users must actively agree (e.g., via a checkbox) before using the chatbot. "Browse-wrap" agreements (where terms can only be accessed via a link at the bottom of a web page) are insufficient to create binding terms, especially in B2C contexts (see Court of Justice ruling).
  • Accessibility: Make sure the terms are easily accessible, in plain language and in the relevant languages for your target audience.
  • Version Control: Keep a record of different versions of the terms of use, including when they were in effect.
  • Proof of acceptance: Keep evidence of user acceptance, including timestamp and IP address.
  • Updates: Develop a procedure for updating terms and conditions, requiring users to re-authorize significant changes.

What specific consumer protection rules apply when deploying AI Chatbots in customer communications?

Special attention should be paid to consumer protection law when deploying AI Chatbots and Agents in B2C relationships. The following aspects are important:

  1. Precontractual disclosure obligationsAccording to Book VI of the Economic Law Code (Articles VI.45 and VI.64), companies must provide clear information prior to the conclusion of a contract about:
    • The main characteristics of the product or serviceThe total price, including taxes and additional costsThe payment, delivery and performance termsThe right of withdrawal (if applicable)
    If an AI Chatbot or Agent provides this information, it must be complete and accurate. Incorrect or incomplete information may result in nullity of the agreement or liability.

  2. Ban on unfair trade practices (Article VI.93 et seq. WER): AI systems may not be used for:
    • Misleading business practices (such as providing false information)
    • Aggressive business practices (such as unwanted persistent messages)
    • Dark patterns (manipulative design elements)
  3. Right of Withdrawal: Agreements concluded via an AI Chatbot are in principle subject to the standard 14-day right of withdrawal for distance contracts (Article VI.47 WER). The AI must provide correct information on:
    • The existence of this right
    • The conditions and deadlines
    • The procedure for revocation
  4. Automated decision-making: If the AI Chatbot or Agent makes automated decisions with significant implications for the consumer (for example, about creditworthiness), special rules under Article 22 GDPR apply:
    • Right to human intervention
    • Right to express views
    • Right to challenge the decision

Example: A Belgian travel company deploys an AI Chatbot that automatically assembles and books travel packages. If this chatbot were to fail to provide clear information about cancellation conditions and additional costs, this could lead to complaints to the Economic Inspectorate. This could then rule that there has been a violation of the information requirements of the WER, potentially resulting in a significant fine and mandatory compensation for duped consumers.

Liability

The use of AI Chatbots and AI Agents poses specific liability risks, both at the pre-contractual stage and during and after contract performance.

What strategies can enterprises use to effectively manage the liability risks of AI implementations?

Containing liability risks when using AI Chatbots and Agents requires a layered approach:

  1. Technical measures:
    • Implement robust control mechanisms that validate outputs before showing them to users
    • Set clear limits on what the AI can do (such as maximum transaction amounts)
    • Provide human supervision for critical decisions
    • Systematically document all interactions for possible evidence
  2. Contractual measures:
    • Include clear limitations of liability in the terms of use in accordance with Article 5.89 of the Civil Code on release clauses
    • Specify that information provided by the AI is indicative and does not constitute binding offers within the meaning of Article 5.19 BW
    • Limit liability to direct damages and exclude consequential damages where possible
    • Implement disclaimers that make it clear that ultimate responsibility lies with the user
  3. Organizational measures:
    • Train staff in monitoring and intervening in problematic AI interactions
    • Implement a rapid escalation procedure for situations where the AI provides incorrect information
    • Conduct regular audits of system operation
    • Document all steps taken to mitigate risks
  4. Insurance measures:
    • Take out specific cyber insurance that provides coverage for AI-related incidents
    • Check whether existing business liability insurance policies cover AI risks
    • Consider specialized policies for technology companies
  5. Compliance measures:
    • Conduct regular risk assessments in accordance with the AI-Act
    • Document all measures taken to comply with regulations
    • Stay abreast of developments in relevant legislation and case law

Important: Limitations of liability in B2C relationships under Belgian law may be significantly limited by consumer protection provisions. Full exclusions of liability are generally considered unreasonably onerous and may be annulled under Article VI.82 of the Code of Economic Law.

What are the legal consequences when an AI Chatbot provides incorrect information?

The legal consequences of erroneous information provided by an AI system vary depending on when it was provided:

Before concluding a contract (pre-contractual phase):

  • Under Belgian law, this can lead to pre-contractual liability based on Article 5.17 BW and Article 6.5 BW.
  • Providing false information can be considered a violation of the pre-contractual information duty as stipulated in Article 5.16 BW
  • The company may be held liable for damages suffered by the customer by relying on this information
  • In some cases, this can even lead to error (Article 5.34 BW), which can be a ground for nullity of the subsequent agreement

During the conclusion of the agreement:

  • Misinformation about essential elements of the contract can lead to error or fraud (Articles 5.34 and 5.35 BW)
  • This may result in the voidability of the agreement
  • If there are significant differences between what the AI communicated and what is in the final agreement, this may be considered a violation of consensus

After the conclusion of the agreement:

  • Erroneous information about the performance of the contract can be considered a contractual breach (Articles 5.73 and 5.83 BW) on penalties for non-performance)
  • This can lead to damages (Articles 5.86-5.89 BW), suspension of obligations by the client (Article 5.98 BW), or, in serious cases, dissolution of the agreement (Articles 5.90-5.96 BW)
  • Incorrect advice on the use of products can lead to product liability

In all cases, an important question is: can the enterprise distance itself from the statements of its AI system? The answer is usually in the negative: since the company chooses to use the AI as a communication tool, the information provided by the AI is basically imputed to the company itself.

Example: Suppose an energy company's AI Chatbot were to provide erroneous rate information to potential customers. If the company subsequently refused to honor these rates, a court could rule that the information provided by the AI was nonetheless binding. Such a ruling could be justified on the grounds that the company deliberately chose this communication channel and that customers could reasonably rely on this information.

How can companies legally protect themselves from reputational damage caused by inappropriate language from their AI systems?

Insulting, discriminatory or otherwise inappropriate language used by an AI Chatbot can cause significant legal and reputational damage. To mitigate this risk, the following measures are essential:

  1. Technical safeguards:
    • Implement robust content filters that block inappropriate expressions
    • Use "guardrails" that exclude certain topics or wording
    • Set clear parameters for the tone and style of the AI
    • Apply regular red-teaming to identify vulnerabilities
  2. Monitoring and supervision:
    • Provide human supervision when discussing sensitive topics
    • Implement systems that automatically escalate potentially problematic interactions
    • Perform spot checks on conversations
    • Regularly analyze aggregate data to identify patterns of inappropriate behavior
  3. Contractual measures:
    • Include specific clauses in contracts with AI providers about preventing inappropriate language
    • Lay down penalty clauses for cases where the AI provider has taken insufficient measures
    • Specify guarantees about the quality and appropriateness of AI interactions
  4. Communication and expectation management:
    • Make it clear to users that they are interacting with an AI system
    • Create an easily accessible reporting system for inappropriate speech
    • Develop an incident crisis communication plan
  5. Correction procedures:
    • Ensure rapid response to incidents
    • Implement a formal apology protocol
    • Document all cases and actions taken
    • Use incidents as learning opportunities to improve the system

Example: a Belgian municipality is implementing an AI Chatbot to answer citizen questions. If this chatbot were to make inappropriate comments about an ethnic minority, it could lead not only to negative publicity, but also to a formal complaint to Unia or the Flemish Human Rights Institute. In such a scenario, the municipality could potentially be held liable even if it were to argue that the utterances were not intentional and resulted from a technical error. This underscores the importance of preventive measures.

Data Protection

The deployment of AI Chatbots and AI Agents raises important questions about privacy and data protection. For more background on these topics, see our specialized pages on privacy and data protection and the GDPR.

What consent requirements apply specifically to the collection and processing of personal data by AI systems?

The deployment of AI Chatbots and Agents almost always involves the processing of personal data. The required consent depends on several factors:

  1. Legal basis for data processing (Article 6 GDPR):
    • Permission (Article 6(1)(a) GDPR): Where no other legal basis applies, explicit, informed consent is required. This must:
      • Be freely given (not a condition of service when not required)
      • Being specific (not a general blank check)
      • Being informed (clear information about what happens to the data)
      • Being unambiguous (through an active action)
    • Execution of an agreement (Article 6(1)(b) GDPR): If the data processing is necessary for the performance of a contract, separate consent is not required
    • Legitimate interest (Article 6(1)(f) GDPR): In some cases, the company's legitimate interest may be invoked, subject to a careful balancing of interests
  2. Special categories of personal data (Article 9 GDPR): In principle, if the AI Chatbot or Agent processes special categories of personal data (such as health data, religious beliefs, or biometric data), explicit consent is required unless a specific exception applies.

  3. Automated decision-making (Article 22 GDPR): Automated decision-making (including profiling) that significantly affects the data subject requires specific consent, unless the decision-making:
    • Necessary for the performance of a contract
    • Is permitted by EU or member state law
    • Is based on the explicit consent of the data subject
  4. Practical implementation:
    • Implement a clear consent mechanism before interaction with the AI begins
    • Provide an easy way to withdraw consent
    • Document when and how consent was obtained
    • Request consent again when the purpose of processing changes

Example: Suppose a Belgian company were to collect customer data through an AI Chatbot and use it for targeted marketing purposes without adequate consent. In such a case, the Data Protection Authority could impose a fine. The regulator could rule that the opt-in mechanisms were not sufficiently clear and that customers were not adequately informed about the use of their data for profiling purposes, which would constitute a violation of the GDPR.

How does the data flow after information is collected by AI Chatbots and what rules apply to it?

Data collected by AI Chatbots and Agents typically go through a complex journey, with different legal implications at each stage:

  1. Collection and initial storage:
    • Data is collected during user interaction
    • These are stored in temporary or permanent databases
    • The GDPR requires that only data necessary for the intended purpose be collected (data minimization, Article 5(1)(c) GDPR)
  2. Processing and analysis:
    • Data is often analyzed to improve AI performance
    • They can be used to personalize future interactions
    • Machine learning algorithms can identify patterns for commercial purposes
    • These secondary processing operations may require additional legal basis
  3. Transfer to third parties:
    • Data may be shared with:
      • AI solution provider
      • Affiliates within a group
      • Commercial partners
    • Any transfer requires an adequate legal basis and transparency
    • Transfer outside the EEA requires additional safeguards (Articles 44-50 GDPR)
  4. Use for training purposes:
    • Collected data is often used to train AI models
    • This requires explicit consent, unless the data is fully anonymized
    • Pseudonymization alone is insufficient to fall outside the GDPR framework
  5. Retention periods and disposal:
    • Data must not be kept longer than necessary for its intended purpose (Article 5(1)(e) GDPR)
    • Clear retention periods must be established and communicated
    • Procedural and technical measures should be in place for effective disposal
  6. Rights of data subjects:
    • Users have rights to access, rectification, deletion, restriction, portability and objection (Articles 15-21 GDPR)
    • These rights must be effectively exercisable, including with respect to data processed by AI systems

Importantly, many AI providers use standard conversational data to train their models. When using third-party AI systems, one should be aware of how user data is processed and stored. The deployment of these systems requires attention to the proper legal basis for data processing and transparency to data subjects.

What does the right of inspection of users in data processing by AI applications include?

The right to inspection is a fundamental right under the GDPR (Article 15) and has specific implications for the use of AI Chatbots and Agents:

  1. Scope of the right of inspection: Affected persons have the right to:
    • To know whether their personal data are being processed
    • Accessing this personal data
    • Receive information on processing purposes, categories of data, recipients, retention periods, rights, and origin of data
    • Get information about automated decision-making, including the underlying logic
  2. Practical implementation for AI systems:
    • Provide a system that can identify and extract all personal data collected by the AI
    • Document what data is used for what purposes
    • Implement processes to process inspection requests within the statutory one-month deadline
    • Develop methods to provide insight into the logic behind automated decision making
  3. Technical challenges:
    • In complex AI systems, it can be difficult to determine exactly what data contributed to a specific result
    • Data may be scattered across different systems and databases
    • For deep learning models, explaining the "underlying logic" can be technically complex
  4. Recommended measures:
    • Provide extensive logging of all interactions with the AI
    • Develop user-friendly interfaces for inspection requests
    • Train staff in properly handling inspection requests
    • Implement technical solutions for "explainable AI" to make decision-making more transparent
  5. Restrictions on the right of inspection:
    • Trade secrets or intellectual property protect the underlying algorithms
    • However, this does not relieve companies of the obligation to provide meaningful information about processing logic (see Court of Justice February 27, 2025 C-203/22)
    • A balance must be struck between transparency and protection of legitimate business interests

Example: A financial institution that used an AI Chatbot for initial credit assessment receives a review request from a rejected customer. Following a complaint to the Data Protection Authority, the institution could be required to (i) provide a full copy of all personal data collected, (ii) provide detailed information about the factors that had led to the rejection, and (iii) provide insight into the weighting of various criteria in the decision-making process. It is not enough to merely communicate the outcome of an AI decision; meaningful information must be provided about the "why" and "how" of the decision.

What cybersecurity risks pose the greatest threat when deploying conversational AI technologies?

The deployment of AI Chatbots and Agents introduces specific cybersecurity risks that require adequate protective measures:

  1. Prompt injection attacks:
    • Attackers can use carefully designed prompts to manipulate the AI
    • This can lead to inadvertent disclosure of sensitive information or circumvention of security measures
    • Protective measures include input validation, rate-limiting, and robust filtering of prompts
  2. Data poisoning:
    • Malicious actors can manipulate training data to influence AI behavior
    • This risk is particularly relevant when user interactions are used for ongoing training
    • Implement careful validation of training data and monitor unusual patterns
  3. Model extraction attacks:
    • Attackers can perform systematic queries to replicate the AI's functionality
    • This may lead to infringement of intellectual property rights or discovery of exploitable weaknesses
    • Protect against these attacks by using rate-limiting and detection of suspicious patterns
  4. Data leakage:
    • AI systems may inadvertently reveal sensitive information from training data
    • The risk of data breach is increased when the AI has access to internal databases or systems
    • Implement strict access controls and ensure sensitive data is recognized and protected
  5. Adversarial attacks:
    • Advanced attack techniques can fool AI systems by slightly altering inputs
    • This can lead to incorrect classifications or undesirable behavior
    • Increase system robustness through adversarial training and regular security audits
  6. Infrastructure vulnerabilities:
    • The infrastructure on which AI systems run may contain vulnerabilities
    • Traditional attack routes such as SQL injection or cross-site scripting remain relevant
    • Ensure regular updates, patch management, and penetration testing

For a broader understanding of the legal framework surrounding cybersecurity, please also see our pages on NIS2 and cybersecurity consult.

Intellectual property rights

The issue of intellectual property rights on AI-created content is complex and still evolving:

  1. Copyright status of AI-generated works:
  2. Allocation of rights in mixed creations:
    • When an AI Chatbot produces content under substantial human control, copyrights may arise
    • In commercial contexts, the question arises: who is the rights holder?
      • The person who drafted the prompts?
      • The company implementing the AI?
      • The developer of the AI?
    • Contractual provisions are essential to clarify these issues
  3. Practical implications for businesses:
    • Document human input into the creation process
    • Include clear provisions in terms of use about ownership of AI-generated content
    • Consider licensing agreements for use of customer-generated prompts
    • Check terms of AI providers regarding property rights
  4. Risks of breach by AI systems:
    • AI systems trained on protected works may unintentionally infringe on copyrights
    • Implement mechanisms to detect potential breaches
    • Consider safeguard clauses in contracts with AI providers
    • Provide a response procedure for claims by rights holders
  5. Protecting proprietary AI implementations:
    • Consider protection via:
      • Patent Law (for technical innovations in implementation)
      • Trade secrets (for unique prompts, configurations and training methods)
      • Trademark law (for the name and visual aspect of the AI)
      • Database Law (for structured collections of training data)

Example: A Belgian marketing company deploys an AI Chatbot to generate personalized marketing texts. If a customer then claimed ownership rights to the generated content, arguing that it was based on his specific prompts and company information, it could lead to a legal dispute. In such a hypothetical scenario, a court could potentially rule that:

  • The generated content is not fully copyrighted
  • The unique combination of prompts and business information possibly constitutes an original intellectual creation
  • The marketing company's terms of use, if claiming ownership of all outputs, may be decisive

This hypothetical scenario highlights the importance of clear contractual provisions on property rights, especially in contexts where clients provide substantial input into the creation process.

Conclusion

The use of AI Chatbots and AI Agents presents significant opportunities for enterprises, but also presents complex legal challenges. A proactive approach is essential to mitigate risks and ensure compliance.

To legally optimize the deployment of these technologies, we recommend:

  1. A thorough legal risk analysis prior to implementation
  2. Customized terms of use and privacy statements
  3. Contractual protection In relationships with AI providers
  4. Robust technical and organizational measures to limit liability
  5. Regular audits and updates To ensure compliance with evolving legislation

Our attorneys have extensive experience providing legal guidance to AI implementations in various industries. We provide support at every stage, from initial compliance analysis to the preparation of legal documentation and managing incidents.

Contact

Questions? Need advice?
Contact Attorney Joris Deene.

Phone: 09/280.20.68
E-mail: joris.deene@everest-law.be

Topics