Will AI render you invisible as an author?

Artificial intelligence (AI) such as ChatGPT is trained on your creations, but does it respect your fundamental rights as an author? The conflict between AI and copyright is not always about money; it is often about your moral rights: the inalienable right to recognition for your work and to the protection of its integrity. Below we explain how these moral rights are coming under pressure and how you might defend them.

The dual realities of generative AI

On the one hand, generative AI offers unprecedented opportunities, especially in the world of research and education. Consider AI systems that remove language barriers by instantly translating academic papers, or help students with visual or hearing impairments by converting text into spoken word or vice versa. AI can also create personalized learning experiences that increase motivation and help reduce learning gaps. This technology has the potential to democratize knowledge and accelerate innovation, which fits perfectly with the United Nations' Sustainable Development Goals (SDGs), such as quality education for all (SDG4).

On the other hand, a significant risk lurks. The output of AI models threatens the integrity of authors and of science in general. Because AI systems often do not cite their sources, it becomes impossible to verify whether the information is reliable. This leads to a direct loss of control and recognition for the original authors. The consequences can be devastating. In a recent, high-profile example, a professor discovered that an entirely AI-generated scientific article had been published under his name in a professional journal, without his knowledge or consent. Although the article was quickly removed, the digital footprint remains, with potential damage to his scientific reputation. Imagine your life's work being used to train an AI that decontextualizes your ideas, omits your name (leading to a drop in citations and damage to your career) or, worse, uses your research to generate content you never want to be associated with, such as discriminatory or misleading writing.

When technology clashes with copyright: the heart of the problem

The discussion of AI and copyright often focuses on the economic aspects, such as licensing fees for the use of data. Research by Christophe Geiger and Francesca Di Lazzaro, however, emphasizes that the main challenge lies with the author's moral rights, which protect the personal and inalienable bond between the creator and their work. In Belgium, these moral rights are enshrined in Article XI.165, §2 of the Code of Economic Law (CEL). Two of these rights are under severe pressure from AI.

The right to paternity: who is the author?

The right of paternity entitles you to demand that your name be mentioned with your work (Art. XI.165, §2, fifth paragraph CEL). When an AI uses your work as training data and then generates new content without crediting you, this right is violated. The AI's output then appears to come out of nowhere, when in reality it is built on the shoulders of countless invisible creators. This not only undermines the recognition that is crucial to academic and creative careers, but also makes it impossible for users to verify the reliability of sources.

The right to integrity: the soul of the work

Perhaps even more fundamental is the right to integrity (Art. XI.165, §2, sixth paragraph CEL). This allows you to oppose any disfigurement, mutilation or other alteration that could damage your honor or reputation. An AI may summarize your research while omitting crucial nuances, tear your arguments out of context or even use them to generate content that is diametrically opposed to your own beliefs, such as discriminatory or misleading texts. In such a case, not only is your work compromised, but so is your intellectual and moral integrity.

Sustainability as a guide to a new balance

How do we balance the benefits of AI with the protection of authors? The researchers propose a "sustainable copyright," a system that enables technological progress without sacrificing the fundamental rights of creators. This balance rests on two pillars.

Pillar 1: Radical transparency

The first pillar is transparency. AI developers must provide clarity about the copyrighted works they use to train their models. This transparency is essential to allow authors to enforce their rights and to allow users to critically evaluate the output. It is the basic prerequisite for a fair and auditable system.

Pillar 2: A human-centered approach

The second pillar is a "human-centered" approach, with the human author at the center of any regulation. This means that technology should serve humans, not the other way around. A sustainable framework must include robust mechanisms to protect authors' moral rights when their work is used to generate AI output.

Toward a concrete solution: how can you protect your rights?

Theory is one thing, but how can these principles be put into practice? The researchers suggest a proactive approach in which a regulatory body, such as the recently established EU AI Office, plays a central role.

Authors who find that their moral rights are being violated (e.g., through false attribution or decontextualization) should be able to file a complaint with such an authority through a simple and fast online portal. This authority would then have the power to investigate the complaint and, if warranted, require the AI developer to take corrective action. In extreme cases, this could even lead to an order to remove the work from training datasets, a sort of enforced "opt-out" to safeguard the integrity of the author.

Frequently asked questions (FAQ)

What exactly is the difference in copyright law between economic rights and moral rights?
Economic rights are those you can sell or license, such as the right to copy, distribute and publish a work. Moral rights are personal and inalienable; they protect your connection as an author to your work, such as the right to be named as the author (paternity) and the right to oppose alterations (integrity).

Can I already prevent AI systems from using my online work?
This is technically and legally complex. Some platforms and websites offer tools (such as directives in a robots.txt file, illustrated below) to block web crawlers, but their effectiveness against AI developers is not guaranteed. The legal battle over this is ongoing and there is no clear-cut answer yet.
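By way of illustration only, here is a minimal sketch of a robots.txt file placed at the root of a website, asking known AI training crawlers to stay away. The crawler names shown (GPTBot, CCBot, Google-Extended) reflect publicly documented user agents at the time of writing and may change; compliance is voluntary, so this is a request, not a technical or legal guarantee.

# Illustrative robots.txt: asks documented AI training crawlers not to use this site
User-agent: GPTBot
Disallow: /

User-agent: CCBot
Disallow: /

User-agent: Google-Extended
Disallow: /

Note that such a file only addresses future crawling; it does not remove works that were already collected, which is precisely why the transparency and complaint mechanisms discussed above matter.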

Is the output generated by an AI itself protected by copyright?
In most legal systems, including the Belgian one, copyright protection requires an "original creation" bearing the "personal stamp of the author." Because an AI is not (for now) considered a human author, output generated purely by AI is in principle not protected. The discussion becomes more complex when there is significant human input in the creation of the output, for example through the detailed wording of a prompt.

Conclusion: your rights in a changing world

Generative AI is a revolutionary technology, but it does not operate in a legal vacuum. Your moral rights as an author, the right to paternity and the right to integrity, remain fundamental in Belgium. While legislators are under pressure to respond to these new challenges, a system is needed that guarantees transparency and gives you effective means to act against abuse. The future of creativity and knowledge sharing depends on a balance in which innovation and the fundamental rights of the creator go hand in hand.


Joris Deene

Attorney-partner at Everest Attorneys

Contact

Questions? Need advice?
Contact Attorney Joris Deene.

Phone: 09/280.20.68
E-mail: joris.deene@everest-law.be
