The EU's 'Apply AI' strategy: a legal breakthrough or a paper tiger?

The European Commission today unveiled its long-awaited 'Apply AI' strategy, an ambitious plan to accelerate the adoption of artificial intelligence across the Union. The strategy recognizes that, despite a strong industrial base, the EU lags behind in the concrete adoption of AI, particularly among SMEs. The central message is a shift from development to application, with the goal of strengthening European competitiveness and technological sovereignty. While bursting with ambition, the plan simultaneously raises fundamental legal and strategic questions that will determine its success.

Core strategy: from lab to workplace

The "Apply AI" strategy is built on a simple but crucial premise: Europe must not only develop AI, but above all, use it. It introduces an "AI first policy," encouraging companies and governments to look first to AI solutions, preferably European-made, when solving problems.

To accomplish this, the strategy unfolds a two-pronged approach:

  1. Sectoral 'flagships': Targeted initiatives in strategic sectors such as healthcare, industry, defense, mobility and the public sector are intended to boost AI adoption. Think of AI-driven screening centers for cancer diagnosis or the development of AI models for the manufacturing industry.
  2. Horizontal challenges: The strategy also addresses overarching issues. These include better support for SMEs through European Digital Innovation Hubs, educating an 'AI-ready' workforce and, crucially, removing uncertainty around the implementation of the AI Act.

The Frontier AI Initiative: the strategic and legal center of gravity

Although the sectoral approach is broad, the true strategic core lies in the so-called Frontier AI Initiative. This initiative focuses on developing the most advanced, 'frontier' AI models: the foundation models that form the basis for countless applications. It is here that the ambition of technological sovereignty is brought into sharpest focus. But it is also here that the biggest legal and practical hurdles emerge. The success of this initiative, and by extension the entire strategy, depends on how the EU addresses the following five crucial aspects.

1. From model to measurable impact

A European AI model is only valuable if it is used in practice. The strategy should avoid having the EU itself try to build a model from scratch; the focus should be on targeted support for existing private and academic initiatives. The real test is whether a manufacturing SME or a local public service actually derives a measurable benefit from these technologies. Simple "EU-wide competitions" may not be enough in this regard; concrete, approachable support is needed.

2. Security and reliability as the foundation

In the current geopolitical context, AI without robust security guarantees is an unacceptable risk. The Frontier AI Initiative must integrate safety from the design stage onwards ('safety by design'). This goes beyond mere technical robustness and touches on product liability, the security of critical infrastructure and the prevention of misuse for disinformation or cyber attacks.

3. The legal sticking points: copyright, data and fairness

The strategy recognizes the need for "trustworthy AI", but the road to it is littered with legal obstacles. The Frontier AI Initiative should play a leading role in resolving outstanding issues:

  • Copyright: The ambiguity about the use of copyright-protected material for training AI models is a major bottleneck. The strategy promises a study, but the market urgently needs concrete guidance. For example, what qualifies as a valid, machine-readable opt-out under the Text and Data Mining (TDM) exception? Without clarity on this, developers and creatives alike will continue to operate in a legal vacuum.
  • Data and privacy: AI models need huge amounts of data. The strategy refers to the European Health Data Space and Data Spaces for Manufacturing, but the practical and legal elaboration of data-pooling respecting the GDPR, trade secrets and intellectual property remains an immense challenge.
  • Fairness and bias: Avoiding discrimination and "hallucinations" is a core requirement of the AI Act. European models must be trained on diverse and representative datasets to comply with these principles and gain the trust of citizens and consumers.

4. Sovereignty: are money and political will enough?

The ambition to be sovereign in a market dominated by non-European tech giants requires more than good intentions. The Commission is mobilizing about 1 billion euros, but the question is whether this is enough. True sovereignty requires bold choices:

  • Concrete deliverables: Instead of mere competitions, concrete levers are needed, such as standardized templates for SMEs, vouchers for AI training and the guarantee of computing power running on renewable energy.
  • European tendering rules: A powerful but politically sensitive tool could be the introduction of new procurement rules ("Buy European"). By creating significant demand from the public sector for European AI models, market development could be significantly stimulated.
  • Coordinated action: The strategy calls on member states to coordinate their national strategies. Initiatives such as a Franco-German summit on digital sovereignty could be crucial in strengthening this coordination.

5. Governance: who is keeping a finger on the pulse?

The strategy provides for a governance structure with an "Apply AI Alliance" and an "AI Observatory". This is a good step, but its effectiveness depends on how it is implemented. An independent advisory board with experts can monitor quality and keep ambitions high. Moreover, it is essential that not only industry but also civil society and consumer organizations are given a full voice in the debate. This is the only way to get a full picture of what Europe really needs and to ensure that AI is at the service of all of society.

What does this mean for your business?

The Apply AI strategy is not an abstract policy document. It signals a clear direction that will eventually affect every business. Companies would do well to act proactively now:

  • Anticipate the AI Act: Don't wait until all guidelines are published. Start mapping your (planned) use of AI systems now and evaluate whether they may fall under the "high risk" category.
  • Evaluate your data: The quality and legality of your data are the fuel for AI. Make sure your data governance is in line with the GDPR and that you have the necessary permissions to use data for AI applications.
  • Check for copyright: If you are using or developing generative AI, be aware of copyright risks. Document sources and be careful of output that may infringe on the rights of third parties.

Conclusion

The "Apply AI" strategy is a necessary and ambitious attempt to turn Europe's lag in AI adoption into an advantage. The focus on practical implementation and technological sovereignty is justified. However, success will depend not on rhetorical declarations, but on the extent to which the EU succeeds in actually resolving the complex legal, financial and governance bottlenecks, especially within the crucial Frontier AI Initiative. Without concrete answers to the questions around copyright, data and security, this strategy risks remaining a paper tiger.


Joris Deene

Attorney-partner at Everest Attorneys

Contact

Questions? Need advice?
Contact Attorney Joris Deene.

Phone: 09/280.20.68
E-mail: joris.deene@everest-law.be
