Generative AI systems such as Stable Diffusion are trained on billions of images retrieved from the Internet. Many of those images are copyrighted. So the question on the minds of creatives and lawyers worldwide is: is this process legal? A groundbreaking ruling of the High Court of England and Wales of 4 November 2025 in the case Getty Images v Stability AI provides an initial, nuanced answer.
The court ruled that the AI model itself is not an “infringing copy,” primarily because the model does not store or reproduce the protected works. This is a win for AI developers, although Getty Images did secure a very limited trademark infringement victory.
The facts: what did the case revolve around?
The famous image bank Getty Images challenged Stability AI, the developer of the popular AI image generation model Stable Diffusion, in a British court. Getty Images' claims were threefold:
- Primary copyright infringement: Getty argued that training the AI model on millions of its images was a direct reproduction and thus a copyright infringement.
- Secondary copyright infringement: Getty argued that offering and importing the trained AI model (Stable Diffusion) in the UK amounted to distributing an “infringing copy”.
- Trademark infringement: Finally, Getty claimed that the AI model generated images containing a distorted version of the “Getty Images” or “iStock” watermark, thereby violating its trademark rights.
The High Court's decision
The 205-page ruling is complex, but the outcome can be summarized in three parts, which match Getty's claims.
The claim about the AI training itself: unanswered
This is perhaps the most crucial point that was not decided. Getty Images dropped its claim of primary copyright infringement (the actual training) before the end of the trial.
The reason was purely territorial: Stable Diffusion's training had not taken place in the United Kingdom. As a result, the British court could not rule on the fundamental question of whether the training of an AI on copyrighted material constitutes infringement per se or not.
The claim about the AI model as ‘infringing copy’: rejected
This was the biggest victory for Stability AI. The judge completely rejected the claim of secondary copyright infringement.
Getty's argument was that the model, which had been trained on pirated copies, had itself become an “infringing copy” within the meaning of British copyright law (the Copyright, Designs and Patents Act or CDPA).
The court did not follow this reasoning. The core motivation is technical:
“Although an “article” may be an intangible object for the purposes of the CDPA, an AI model such as Stable Diffusion which does not store or reproduce any Copyright Works (and has never done so) is not an “infringing copy” such that there is no infringement under sections 22 and 23 CDPA.”
In other words, the AI model learns from the data, but it does not contain the data. The model consists of ‘model weights’ (a set of mathematical parameters) that have learned patterns, but it does not store copies of the original photographs. Because the model itself is not a reproduction of the works, it also cannot be an “infringing copy”.
The claim of trademark infringement: partially upheld
Getty Images did win a small but significant trademark infringement victory.
The judge found that older versions of Stable Diffusion (v1.x and v2.x) could indeed generate images that contained (distorted) Getty or iStock watermarks. This was considered trademark infringement because it could cause confusion among the public about the origin of the image.
However, the judge emphasized that this finding was “both historical and extremely limited in scope.” The claims relating to the newer models (SD XL and v1.6) were dismissed because Stability AI had clearly addressed this problem and these models no longer generated the watermarks. The claims of damage to the marks' reputation and of taking unfair advantage of their repute were also dismissed.
Legal analysis and interpretation (in a Belgian context)
Although this ruling comes from the United Kingdom and thus has no binding force in Belgium, its impact is nevertheless significant. The UK judge's reasoning will undoubtedly be cited in similar proceedings in the EU.
The territoriality problem
The fact that the claim about the training itself foundered on the territoriality issue illustrates the big hurdle for creatives. A Belgian photographer whose work was used would have to prove where the training took place (presumably in the U.S.) and file a lawsuit there. This is a huge hurdle financially and practically.
The training and the Belgian TDM exception
The British court has not ruled on the legality of the training process. Should this process take place in Belgium, this would be the crux of the debate.
During the training, protected works are analyzed. This requires making reproductions, even if they are only temporary, indirect or partial. These acts fall under the very broad definition of the reproduction right in Article XI.165, §1 of the Belgian Code of Economic Law (CEL).
In Belgium (and the EU), however, the AI developer could rely on the text and data mining (TDM) exception, transposed in Art. XI.190, 20° CEL. This exception explicitly allows the making of reproductions for training purposes unless the rightholder has expressly and appropriately opposed it (the ‘opt-out’). The legal battle in Belgium over the training process is therefore not likely to be about whether reproductions have been made, but rather about whether the author's opt-out has been respected.
The AI model and Belgian ‘idea/expression’ theory
The British court ruled only on the final product: is the model itself an illegal copy? The answer was no, because the model contains only patterns and ‘learned’ concepts, but not the original photographs (the ‘expression’).
This reasoning aligns very closely with a fundamental Belgian copyright principle: the idea/expression dichotomy. Copyright, as confirmed by international treaties (TRIPS, WIPO) and the Belgian Court of Cassation (e.g. the ruling of 17 February 2017), does not protect the idea, concept, method or style behind a work. It protects only the concrete form of expression (the ‘expression’).
Stability AI's argument, followed by the British court, is that the AI model only distilled the ‘ideas,’ ‘patterns,’ and ‘style’ from the training data. Thus, the final product contains only unprotected elements, not protected expression. There is a real chance that a Belgian court, based on this same principle, would reach a similar conclusion regarding the final product.
What this specifically means
For AI developers
The ruling is largely positive. The argument that the AI model itself is not an infringing copy is a powerful defense. However, the limited finding of trademark infringement is a clear warning: AI developers should actively filter their output to prevent the generation of recognizable trademarks, logos or watermarks. Meanwhile, the fundamental legality of the training process in the EU remains subject to the TDM exception and respect for opt-outs.
For creatives and rights holders
The ruling is a disappointment. The route of ‘secondary infringement’ (challenging the model itself) seems closed in the UK. The focus is now best placed on enforcing the TDM opt-out. Creatives should label their works in a ‘machine-readable’ manner to prohibit TDM. Additionally, the fight against outputs (where too similar to a specific work) remains a possible avenue.
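What a ‘machine-readable’ opt-out looks like in practice is not settled by the ruling or by the EU directive itself. Two mechanisms in current use are a robots.txt file blocking known AI training crawlers and the TDM Reservation Protocol (TDMRep) drafted by a W3C Community Group. A minimal sketch; the crawler names below are illustrative examples, not an exhaustive list:

```text
# robots.txt — ask known AI training crawlers not to fetch this site's content
User-agent: GPTBot
Disallow: /

User-agent: CCBot
Disallow: /

# TDMRep alternative: serve an HTTP header on each protected resource
# tdm-reservation: 1
```

Note that robots.txt is a voluntary convention, and whether any given signal satisfies the ‘expressly and appropriately’ standard of the EU opt-out is ultimately for the courts to decide.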
For enterprises using AI tools
Be aware of the risks of output. This ruling confirms that if an AI tool generates an image containing a trademark (such as a watermark or logo), you risk committing trademark infringement by using that image commercially. The end user bears significant responsibility here. A thorough review of AI-generated images before publication is essential.
Frequently asked questions (FAQ)
Is training AI on copyrighted material now legal?
Not necessarily. This UK ruling did not answer that key question because the training took place outside the court's jurisdiction. In Belgium and the EU, legality depends on the text and data mining (TDM) exception and whether the rightholder has opposed it (an ‘opt-out’).
My AI tool created an image with a logo in it. Can I use that?
No, be very careful. The ruling confirmed that generating watermarks (a trademark) can indeed be trademark infringement. If your AI output contains a recognizable trademark, as a commercial user you are at high risk of a trademark infringement claim.
Is the AI model itself now an ‘illegal copy’?
According to this British judge: no. Because the model does not contain or store the original images in their original form, the model itself is not an “infringing copy”. The judge sees the model as a new product made with the data, but it is not the data.
Conclusion
The ruling Getty v Stability AI is an important victory for AI developers, allaying fears that their AI models themselves could be labeled illegal copies. However, the fundamental question of whether the training itself is permissible remains unanswered.
For creatives, this shows that the legal battle must shift from the model to the training process (and the TDM opt-out in the EU) and to the specific outputs that are too close to their work. The limited trademark infringement conviction is a clear warning to all parties: AI's output is not royalty-free.