Many providers of AI systems assume that any application listed in Annex III of the EU AI Act is automatically classified as ‘high risk’. This is a misconception that can lead to unnecessary compliance costs. Article 6(3) provides an important but complex escape route for systems that do not pose a material risk to health, safety or fundamental rights. Below we analyze the strict terms of this escape route.
The main rule: the presumption of high risk
The core of the EU AI Act (Regulation 2024/1689) is a risk-based approach. AI systems that fall under the specific domains in Annex III - such as education, employment, essential services or law enforcement - are in principle considered high-risk AI systems.
These systems are subject to onerous obligations: conformity assessment, risk management systems, data governance and registration in the EU database. But what if your system technically falls within an Annex III domain, but in practice has little impact on decision-making?
The nuance of Article 6(3)
Article 6(3) of the AI Act introduces a filtering mechanism. A system listed in Annex III is not considered high risk where it:
“does not pose a significant risk of harm to the health, safety or fundamental rights of natural persons, including by not materially influencing the outcome of decision making.”
This sounds like an open standard, but the legislature immediately framed it in the second subparagraph of Article 6(3). The exception applies only if at least one of the following four alternative conditions is met:
- Limited procedural task: The system performs a narrow, procedural task (e.g., sorting documents by file format).
- Improving human activity: The system merely improves the result of a previously completed human activity (e.g., rewriting a text for tone or style).
- Pattern detection (without decision making): The system detects decision-making patterns or deviations from them, but does not replace or influence human judgment without proper review (e.g., a tool that alerts teachers to grades that deviate from the average).
- Preparatory task: The system only performs a preparatory task for an assessment (e.g., smart file management or translation).
Discussion point: exhaustive or illustrative?
A debate rages among legal scholars about the nature of this list. Is this list exhaustive (limitative) or merely illustrative?
- The strict reading: The legal text states: “The first subparagraph shall apply where any of the following conditions is fulfilled”. This suggests an exhaustive list. If your AI system does not fit into one of the four boxes, it remains high risk.
- The pragmatic reading: Others point to Recital 53, which states that “there may be cases (...) where AI systems (...) do not lead to a significant risk.” They argue that the main criterion (“not materially influencing decision-making”) should be decisive.
As a law firm, given the potential penalties and enforcement risk, we recommend a conservative approach for now. Do not simply assume that you are out of scope if you do not squarely meet one of the four conditions. The burden of proof rests entirely on you as the provider.
Practical examples: high risk or not?
To make the abstract rules tangible, we look at some specific scenarios:
1. Emotion recognition in customer service (quality monitoring)
A company uses AI to analyze the emotional tone of recorded customer conversations (e.g., “customer was frustrated”) in order to measure overall quality trends.
- Analysis: If this is done in aggregate and does not result in individual assessment or profiling of the employee, it can be argued that it does not materially affect decision-making about individuals (Article 6(3)(c)).
- Note: Even if it is not a high-risk system under Annex III, emotion recognition systems do fall under the transparency obligations of Article 50 (unless exempted by law). Data subjects must know that the system is active.
2. Assistance with permit applications
An AI tool helps citizens fill out a building application by pointing out missing fields or unclear language, without itself judging whether the permit should be granted.
- Analysis: This likely falls under the exception of Article 6(3)(a) (narrow procedural task) or 6(3)(b) (improving the result of a human activity). The AI makes no judgment on the substantive merits of the application, but merely structures the input.
The hard border: profiling
There is one exception to the exception. Article 6(3), third subparagraph, is crystal clear:
“Notwithstanding the first subparagraph, an AI system referred to in Annex III shall always be considered to be high-risk where the AI system performs profiling of natural persons.”
If your system uses personal data to evaluate aspects of a natural person (such as work performance, economic situation, health or personal preferences), the exception immediately ceases to apply. Profiling in an Annex III context is by definition high risk.
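For readers who think in flowcharts, the structure of the test can be sketched as a simple decision procedure. The following Python sketch is our own illustrative model (the names Condition and is_high_risk are invented for this post); it captures only the Annex III route of Article 6(3) and is emphatically not a substitute for case-by-case legal analysis.

```python
from enum import Enum, auto

class Condition(Enum):
    """The four alternative conditions of Article 6(3), second subparagraph."""
    NARROW_PROCEDURAL_TASK = auto()    # (a) narrow, procedural task
    IMPROVES_HUMAN_ACTIVITY = auto()   # (b) improves a completed human activity
    PATTERN_DETECTION = auto()         # (c) detects patterns, no unreviewed influence
    PREPARATORY_TASK = auto()          # (d) preparatory task for an assessment

def is_high_risk(listed_in_annex_iii: bool,
                 performs_profiling: bool,
                 conditions_met: set[Condition]) -> bool:
    """Illustrative model of the Article 6(3) filter -- not legal advice."""
    if not listed_in_annex_iii:
        return False  # Article 6(3) only concerns Annex III systems
    if performs_profiling:
        return True   # third subparagraph: profiling => always high risk
    # Second subparagraph: meeting any one condition rebuts the presumption,
    # on the strict reading that the list is exhaustive.
    return len(conditions_met) == 0

# Example: the permit-assistance tool from scenario 2 above
print(is_high_risk(True, False, {Condition.NARROW_PROCEDURAL_TASK}))  # False
```

Note how the profiling check sits before the conditions: that ordering is exactly the “exception to the exception” described above.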
Your obligations when invoking the exception
If you, as a provider, believe that your system - despite its listing in Annex III - falls under the Article 6(3) exception, you are not exempt from administrative burdens. Under Article 6(4), you must:
- Document: Prepare a written assessment before placing the system on the market, justifying why the system does not pose a significant risk.
- Keep available: This documentation must be provided to the national regulators (in Belgium, the competent market surveillance authority) upon request.
- Register: You are required to register in the upcoming EU database and indicate there that you are relying on the exception.
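To give that documentation duty some shape, a provider could keep a record along the following lines. This is merely a minimal sketch: the class and field names are our own invention, and the AI Act prescribes the substance of the assessment, not this format.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Article6_4Assessment:
    """Minimal record of an Article 6(4) self-assessment (hypothetical format).

    The Act requires a documented assessment before placing the system on
    the market, availability to authorities on request, and registration
    in the EU database; the exact structure below is our own suggestion.
    """
    system_name: str
    annex_iii_area: str        # e.g. "employment" or "education"
    condition_relied_on: str   # which Article 6(3) condition applies
    justification: str         # why no significant risk arises
    profiling_excluded: bool   # must be True to rely on the exception at all
    assessment_date: date
    eu_database_registered: bool = False
```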
Frequently asked questions (FAQ)
What if AI autonomously writes technical documentation based on raw notes?
If an AI system (for example, a GenAI model) generates technical documentation for medical devices or machines (which are themselves covered by safety legislation), the question arises whether this is a safety component. If there is a human-in-the-loop (a developer who validates and finalizes the output), it can be argued that the AI merely improves a human activity (Article 6(3)(b)). However, if the AI generates unsupervised output that directly serves as compliance documentation, the risks are significantly greater.
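The human-in-the-loop argument from this answer can be made concrete in system design: unreviewed model output should be technically incapable of ending up in the compliance file. A minimal sketch, with invented names (DraftDoc, release_as_compliance_doc):

```python
from dataclasses import dataclass

@dataclass
class DraftDoc:
    """A GenAI-produced draft awaiting human validation (hypothetical)."""
    content: str
    human_approved: bool = False

def release_as_compliance_doc(draft: DraftDoc) -> str:
    # The gate the FAQ answer relies on: a developer must validate and
    # finalize the output before it serves as compliance documentation;
    # unreviewed drafts are blocked.
    if not draft.human_approved:
        raise PermissionError("Human review and sign-off required before release.")
    return draft.content
```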
Is the list of exceptions in Article 6(3) final?
Currently, yes. However, Article 6(6) empowers the European Commission to amend or expand the list of conditions via delegated acts if there is evidence of new ‘safe’ AI applications. Thus, the regulations are dynamic.
Conclusion
Qualification as ‘high-risk’ AI is not a black-and-white matter. Article 6(3) allows for nuance, but it is not a free pass. The line between a ‘limited procedural task’ and ‘materially influencing decision-making’ is thin in practice and legally complex. A misjudgment can lead to enforcement by regulators for non-compliance with the requirements for high-risk systems.