Can a judge use AI to reach a verdict, and will you still have a fair trial?

The rise of artificial intelligence (AI) tools such as ChatGPT raises fundamental questions within the courtroom. While AI can increase efficiency, its unregulated use by judges poses a direct threat to your right to a fair trial. At the heart of the problem lie the opacity of AI - the so-called "black box" - and the risk of an inhumane, uncontrollable judicial process.

The legal context: The promise of efficiency versus the reality of fundamental rights

The judiciary throughout Europe is struggling with high workloads and significant backlogs, leading to unreasonably long processing times for court cases. These delays could jeopardize the right of access to justice, which must be "practical and effective". In theory, generative AI (GenAI) can provide a solution here by assisting judges in summarizing cases, looking up legal doctrine or even drafting judgment texts. The temptation to use this technology to speed up the administration of justice is therefore strong.

However, this drive for efficiency is at odds with one of the most fundamental rights in our democracy: the right to a fair trial, enshrined in Article 6 of the European Convention on Human Rights (ECHR). Recent cases show that this is no longer a theoretical discussion: a Dutch subdistrict court judge, for instance, openly consulted ChatGPT in a judgment to estimate the technical lifespan of solar panels. Although the judge emphasized that this was not decisive, such a practice raises serious questions about the reliability and accountability of the administration of justice. It is precisely this tension that prompted the European legislature to intervene.

The AI Act

The European Union has recognized the risks and adopted the AI Act in 2024. This legislation takes a risk-based approach. Key is the classification under Article 6(2) in conjunction with Annex III (point 8), which provides that "AI systems intended to be used by a judicial authority or on their behalf to assist a judicial authority in researching and interpreting facts and the law and in applying the law to a concrete set of facts, or to be used in a similar way in alternative dispute resolution" qualify as high-risk systems.

This classification is motivated by the "potentially significant impact on democracy, the rule of law, individual freedoms as well as the right to an effective remedy and to a fair trial". These high-risk systems are subject to strict obligations, including:

  • Transparency (Art. 13 AI Act): The operation of the AI system must be "sufficiently transparent" so that the deployer (here: the judge) can correctly interpret and use its output. The accompanying instructions for use must contain "concise, complete, correct and clear information".
  • Human oversight (Art. 14 AI Act): There must always be effective human oversight aimed at preventing risks to fundamental rights. Those exercising oversight must be aware of the tendency to over-rely on AI output ("automation bias"). The AI Act explicitly states that final decision-making must remain a human activity; AI should support, not replace, the judge.
  • Fundamental Rights Impact Assessment - FRIA (Art. 27 AI Act): Public bodies, such as courts, deploying a high-risk AI system must conduct a prior assessment of its impact on fundamental rights. This assessment must describe, among other things, the processes and measures for human oversight, and its results are shared with the relevant market surveillance authority.

Legal analysis and interpretation: The 'black box' and the human touch

Despite the framework of the AI Act, there remain fundamental legal sticking points that go to the heart of due process.

The 'black box' problem and the pillars of Article 6 ECHR

The biggest obstacle is the "black box" nature of many AI models: it is often impossible to trace exactly how an AI arrives at a particular conclusion. Its internal logic is complex and self-learning, and it differs fundamentally from the reasoning of judges, who also factor in social values and intuition. This lack of transparency clashes head-on with the guarantees of Article 6 ECHR:

  • Independence and impartiality: A judge who relies too heavily on an opaque AI model is no longer fully independent. The decision is influenced by an external factor whose data, algorithms and built-in biases are unknown. This risk is greater with externally developed, commercial AI models, where trade secrets stand in the way of full transparency. The notorious COMPAS algorithm in the US, which disproportionately assigned higher risk scores to black defendants, is a stark example.
  • Hearing both sides: This principle guarantees that parties can respond to all evidence on which the decision is based. If a judge uses information from an AI that is not disclosed to the parties, or if the parties cannot challenge the reliability of that AI-generated information, this right becomes meaningless.
  • The right to a reasoned decision: A judgment must show the parties that their arguments were actually "heard". It must explain why the judge decided as he or she did, so that the parties can assess the prospects of an appeal. If the AI's reasoning is partly incomprehensible even to the judge, he or she cannot provide a conclusive justification either. A possible solution is sought in "Explainable AI" (XAI), which attempts to render a model's internal processes in human-understandable explanations, but this technique too is still under development.

The threat of dehumanization: The implicit right to a human judge

A fair trial is more than a technically correct outcome; it is fundamentally anchored in human dignity, which is the "essence" of the ECHR. A litigant has the right to be heard by a human being, not to be reduced to a set of data points for an algorithm ("datafication"). This leads to the conclusion that there is an implied right to a human judge, based on elements that AI cannot replicate:

  • 'Voice' (Being Heard): Feeling heard is crucial to the perception of a fair trial. A human judge can listen, show empathy and understand context; an AI processes data without true understanding. At best it creates the illusion of a dialogue, which can actually deepen the sense of being misunderstood and dehumanized.
  • Neutrality and empathy: Although AI has no personal feelings, it reproduces and reinforces the biases in its enormous training datasets. True neutrality requires the ability to see the world from another's perspective - an ability AI lacks, along with the emotional authority and moral compass of a human judge.
  • Respect and trust: Respect in the courtroom means recognizing the inherent dignity of each person. An AI cannot show this sincerely; it treats a person as a "thing". Trust is further undermined by the lack of accountability: who is liable for an AI's wrong decision? This diffusion of responsibility creates a historically unprecedented disconnect between the exercise of power and individual responsibility.

What this specifically means

  • For the litigant: You have the right to a transparent and humane process. A judgment must be based on reasoning that is verifiable to you. If you suspect that AI influenced the decision in an opaque way, this may be grounds to challenge the reasoning and fairness of the trial on appeal.
  • For the court: The message is clear: be extremely careful. AI can be a useful tool for support tasks (e.g., transcription, anonymization), but should never replace human judgment. Transparency is essential: if AI is used, it must be clear how and why, and the final reasoning and decision must always come from the judge.
  • For the attorney: Be vigilant. Your job is to protect your client's rights. This means looking critically at the sources and reasoning in a judgment and being prepared to challenge the use of unverifiable technological tools where they violate due process rights.

Frequently asked questions (FAQ)

Is the use of AI by judges in Belgium already a reality?
Although there is not yet widespread, overt use as there is abroad, it is inevitable that AI tools will also make their appearance in the Belgian magistracy, if only for preparatory work. The legislation is anticipatory, but the issues are real and current.

What is the greatest risk when a judge uses ChatGPT?
There are two fundamental risks. First, the unverifiability of the output (the "black box" problem): because the AI's reasoning cannot be traced, the right to a reasoned decision and the possibility of an effective appeal are undermined. Second, the risk of dehumanization: your case is treated as a data puzzle rather than a human dispute, which erodes the core of justice.

Does the AI Act guarantee a completely fair trial?
The AI Act is a crucial step: it imposes strict rules on AI in the judiciary, mandating human oversight and transparency. However, vague wording such as "sufficiently transparent" leaves room for interpretation, and its effectiveness depends on implementation and enforcement. Ultimately, the fundamental guarantees of Article 6 ECHR and the principle of human dignity remain the most important protections.

Conclusion

Artificial intelligence can make justice more efficient, but it should never replace its soul. The right to a fair trial is inextricably linked to human interaction, verifiable reasoning and the guarantee that you will be judged by a human judge who truly hears your case. The European AI Act provides a framework, but vigilance by litigants, attorneys and judges themselves remains crucial to maintaining a fair and humane justice system.


Joris Deene

Attorney-partner at Everest Attorneys

Contact

Questions? Need advice?
Contact Attorney Joris Deene.

Phone: 09/280.20.68
E-mail: joris.deene@everest-law.be
