Many companies are currently focusing exclusively on the new AI Act to ensure their artificial intelligence is legally compliant. However, this is a misconception: the AI Act does not stand alone, but forms part of a complex “triptych” together with the Digital Markets Act (DMA) and the Data Act (DA). For developers and users of AI, this means that compliance with one regulation does not automatically lead to compliance with another; in practice, these rules create a layered system of obligations that deeply impacts how data may be collected, shared and used for AI training.
The European digital playing field: a trio of regulations
The European Union is building a comprehensive digital regulatory framework. Whereas the AI Act focuses primarily on product safety, fundamental rights and risk classification of AI systems, the DMA and the Data Act have other goals. The DMA acts as a competition tool to curb the power of large ‘gatekeepers’ (such as Big Tech), while the Data Act democratizes access to data from connected devices (Internet of Things).
The interface between these laws is inevitable: AI needs data, and data is exactly what the DMA and DA regulate.
The AI Act and the Digital Markets Act: clashing interests?
The Digital Markets Act (DMA) focuses on so-called ‘gatekeepers’: large digital platforms that provide core platform services (such as search engines and social networks). These players are often also the largest developers of advanced AI models.
1. Limitations on data use versus data quality
An interesting area of tension arises when training AI. The AI Act requires high-risk AI systems to be trained on high-quality datasets (Article 10 AI Act) to avoid errors and bias. However, the DMA prohibits gatekeepers from combining personal data from different services without explicit user consent (Article 5 DMA).
While at first glance this may seem contradictory (less data can lead to lower-quality AI), legally this is not regarded as a conflict. The quality requirement in the AI Act is qualitative rather than purely quantitative; gatekeepers must simply build the best possible AI within the limits of the data they are allowed to combine.
2. The ban on self-preference and AI rankings
Algorithms largely determine which products or services you see online. The DMA prohibits gatekeepers from favoring their own services in these rankings (Article 6(5) DMA).
When AI is used to generate these rankings, it creates a double transparency obligation:
- Under the AI Act, transparency is primarily aimed at the user (deployer) and regulators.
- Under the DMA, transparency is aimed at end users and business users, so that they understand how a particular ranking comes about.
It is important to note that an ‘AI audit’ under the AI Act does not automatically prove that you meet the DMA's fairness standards. The DMA requires a specific assessment of whether there is any discrimination or self-preferencing, something that the AI Act's technical standards do not necessarily cover.
3. Generative AI as a new core platform service?
Currently, AI models (such as LLMs) are not yet explicitly listed as a separate ‘core platform service’ in the DMA. However, with the rapid rise of services such as ChatGPT, which increasingly act as gateways to the internet, the debate arises as to whether these systems should fall within the DMA's scope.
The European Commission may interpret existing categories, such as ‘virtual assistants’ or ‘online search engines,’ broadly to include generative AI. Thus, if an AI service becomes essential infrastructure for other companies, it may be subject to the strict DMA obligations in the future.
The Data Act and AI: accessing data from smart devices
While the DMA targets the giants, the Data Act (DA) also affects smaller players, particularly in the Internet of Things (IoT) sector. The Data Act aims to give users back control over the data generated by their ‘connected products’.
1. Is an AI system a ‘connected product’?
The Data Act defines a connected product as an ‘item’, which presupposes a tangible, physical good. Because software and AI systems are not in themselves physical goods, they are not, strictly speaking, covered by this definition.
However, AI systems are often embedded in physical products (think of a smart camera, a car or a robotic vacuum cleaner) or function as a ‘related service’. In those cases, the Data Act's data-sharing obligations do apply.
2. The distinction between raw data and derived data
A crucial distinction for AI developers is what data to share with the user:
- Readily available data: The Data Act mandates the sharing of data generated by the product (raw data or pre-processed data).
- Derived data: Information that is the result of significant investment and processing through proprietary, complex algorithms is excluded from the sharing obligation.
For AI systems, this boundary is often vague. Input data (e.g., sensor data) falls under the Data Act, but the output of a complex AI model probably does not. This distinction will cause much legal debate in practice and requires accurate ‘data mapping’ within your organization.
3. Contractual pitfalls for AI training
The Data Act limits what data holders (e.g., manufacturers) may do with non-personal data. Manufacturers may not simply use data from connected products to train their own AI models or to derive insights about the user's economic situation, unless this has been contractually agreed with the user.
Moreover, the Data Act prohibits sharing this data with third parties for purposes other than contract performance. This can be an obstacle for manufacturers who want to share data with external AI partners for further development.
Frequently Asked Questions (FAQ)
Does the Digital Markets Act (DMA) apply to my SME?
As a rule, not directly. The DMA specifically targets ‘gatekeepers’; these are very large companies with a systemic role in the internal market (such as Alphabet, Amazon, Apple, ByteDance, Meta, Microsoft). However, if you as a business user depend on these platforms (for example, for your app or web shop), the DMA does provide you with important rights and protection against unfair competition.
Can I use data from my IoT devices to improve my AI?
Under the Data Act, as a data holder, you may use non-personal data (“readily available data”) only on the basis of a contract with the user. You may not use this data to gain insights that undermine the user's commercial position. For training AI models, it is therefore essential to lay down clear arrangements in your user agreements.
What if the rules of the AI Act and the Data Act contradict each other?
Although these pieces of legislation have different purposes, they are meant to be complementary. There is no direct hierarchy. In practice, this means that you have to comply with both. For example, you must ensure that your AI system is secure (AI Act) AND that the user has access to the data generated (Data Act). This requires a holistic compliance strategy.
Conclusion: an integrated approach is necessary
The time for preparation is over. As of September 12, 2025, most provisions of the Data Act apply. This means that contracts you enter into today on data sharing and AI development must immediately comply with the strict requirements against unfair terms (Chapter IV of the Data Act).
The interaction between the AI Act, the DMA and the now active Data Act creates a complex legal landscape. For providers of connected products and AI services, it is no longer a theoretical exercise: the rules apply now. Those operating today with outdated contracts or data streams that do not account for user access rights are at acute compliance risk.



