The meteoric rise of generative AI tools such as DALL-E, Midjourney and Stable Diffusion has made it child's play to generate or manipulate images based on text prompts. But may you simply use these images? And what is the legal position on misleading or erroneous content, such as images that look "more real than real"?
AI images: artistic freedom or legal trap?
AI images circulate abundantly on social media. Think of videos about historical events that go viral on TikTok and Instagram. While aesthetically impressive, these images are often full of historical errors. Yet they are particularly compelling precisely because of their realism.
This raises the question: may you simply publish or use such images?
No blanket ban on inaccurate AI content
In Belgium (and more broadly within the EU), disinformation is not punishable per se. There is no general prohibition on spreading factually incorrect information, even when it was created with AI. Freedom of speech and artistic expression remain important factors here.
Yet this does not mean that every use is without risk.
New European rules: DSA and AI Act
Meanwhile, there are European rules regulating AI content, especially with regard to transparency and platform accountability:
- Digital Services Act (DSA): Article 34 imposes obligations on very large online platforms (VLOPs) and very large online search engines (VLOSEs) to identify and address systemic risks. They must conduct a risk assessment and take measures against, among other things, misleading or deceptive content, including disinformation (recital 84).
- AI Act:
- Article 50(2) requires providers of generative AI systems to clearly mark when image, video, audio or text content has been artificially generated or manipulated. The marking must be in a machine-readable format and detectable (e.g., via Content Credentials and/or labels such as "made by AI").
- Specifically for so-called deepfakes, Article 50(4) requires that deployers of GenAI systems explicitly disclose that the material is artificially generated or manipulated.
Where are the legal boundaries?
Although a purely false image is not punishable as such, the following legal risks may come into play:
- Portrait rights and privacy - If an AI image resembles an existing person, that person may invoke portrait rights (Art. XI.174 WER) or even privacy protection (Art. 8 ECHR).
- Trademark or copyright law - Recognizable logos or original works may give rise to trademark or copyright infringement if the AI reproduces protected distinctive signs or works.
- Deception or reputational damage - When AI images are used in a commercial context (such as advertising), the rules of consumer law or trade practices law (Book VI WER) apply, particularly in cases of fraudulent misrepresentation.
- Criminal liability - In extreme cases, AI images can be used for defamation, hate speech or identity fraud (e.g., fake videos of public figures).
Practical recommendations
As lawyers, we recommend the following best practices when using AI imagery:
- Mark images as generated or manipulated, especially if they seem realistic or can create misunderstandings.
- Do not use AI images of existing people without permission, unless a clear exception (such as satire or public figure in public context) applies.
- Avoid using misleading AI images in commercial communications without clear context.
- Follow the evolution of European regulation: both the AI Act and the DSA impose increasingly strict transparency obligations on companies and platforms.



