What is the difference between a FRIA and a DPIA?

The advent of the European AI Act (or 'AI Regulation') introduces a wave of new obligations for organizations that develop or use artificial intelligence. Central to these new rules is the Fundamental Rights Impact Assessment (FRIA), a mandatory impact assessment. Many companies mistakenly assume that this is simply a new name for the Data Protection Impact Assessment (DPIA) we already know from the General Data Protection Regulation (GDPR).

This is a misconception. While both analyses evaluate risk, the FRIA is fundamentally different and significantly broader than a DPIA. Understanding this difference, the methodology and the new operational challenges is critical to your organization's compliance.

The fundamental difference in scope

The first and most important distinction is the scope of protection.

The DPIA (GDPR).

The DPIA, or "Data Protection Impact Assessment," required under Article 35 GDPR, is strictly focused on one specific fundamental right: the protection of personal data. A DPIA is required when processing of personal data, in particular using new technologies, is likely to result in a high risk to the rights and freedoms of natural persons. The focus is on risks such as data breaches, unauthorized access or loss of control over personal information.

The FRIA (AI Act).

The FRIA, or "fundamental rights impact assessment," required under Article 27 of the AI Act, has a much broader and deeper scope. This analysis requires organizations to evaluate the impact of an AI system on all fundamental rights enshrined in the Charter of Fundamental Rights of the European Union.

The AI Act thus forces an organization to look beyond data protection. The FRIA must evaluate risks to fundamental rights such as:

  • Human dignity;
  • The right to non-discrimination;
  • Freedom of expression and information;
  • Workers' rights;
  • The right to education;
  • Consumer protection;
  • The right to an effective judicial remedy.

The FRIA thus forces an organization to consider systemic risks, such as the likelihood that an AI model used for recruitment will unintentionally discriminate, or that an AI system in healthcare will violate human dignity. Where the DPIA places personal data at the center, the FRIA places the system at the center and looks at its full societal impact.

DPIA vs. FRIA: a methodological difference

The approach of the two analyses differs significantly, even though they share a similar logic of risk management.

Obligation and focus

A DPIA is required for any high-risk processing of personal data, whether it involves AI or another technology (e.g., large-scale camera surveillance).

The FRIA is specifically and exclusively required before putting a high-risk AI system into use. The AI Act defines precisely, in its annexes, which systems qualify as high-risk (e.g., AI in critical infrastructure, education, employment, law enforcement, etc.).

Analysis of risks

Both analyses require an assessment of necessity and proportionality. However, the FRIA explicitly requires the user also to consider possible non-AI alternatives.

In analyzing the risks (based on probability and severity), the FRIA requires a deeper, more contextual assessment. This includes variables such as the effort required for recovery (remediation) and the degree of exposure of affected groups.

A crucial methodological difference is that the FRIA requires an evaluation by fundamental right. It is not permissible to "offset" a negative impact on one right (e.g., non-discrimination) with a positive impact on another right (e.g., efficiency).

Measures and monitoring

If a DPIA reveals a high residual risk, the organization is required to consult the Data Protection Authority (DPA) in advance (prior consultation).

The AI Act follows a different logic. The FRIA leads to a holistic package of measures (technical, organizational and contextual). The user must then notify the results to the relevant national market surveillance authority. This is a form of ex post supervision rather than ex ante consultation.

Transparency

A DPIA is an internal document that must be kept available to the DPA. There is no general publication requirement.

The FRIA introduces a new form of public transparency. For certain users, such as government agencies, there will be an obligation to publish a summary of the FRIA in a central EU database. This significantly increases public accountability.

The operational challenge: who does what?

The AI Act creates two major operational challenges that do not arise in the same way under the GDPR: unclear governance and the information gap.

The 'AI Officer' does not exist (yet)

The GDPR institutionalized the role of the Data Protection Officer (DPO): a mandatory, independent expert role within the organization.

The AI Act does not do this. It does not create a mandatory "AI Officer". The Act leaves governance flexible, giving organizations the freedom to choose their own model (e.g., an ethics committee, a multidisciplinary team, outside experts). While this offers flexibility, it also creates the risk that the quality and depth of FRIAs will vary widely (and may be low) in practice.

The question arises whether the DPO should take on this task. This is not without risk: the DPO's independence may be compromised if he or she is also given a decisive role in the development or deployment of AI systems. A DPO role limited to purely advisory compliance input, however, may be workable.

The information gap between provider and deployer

Perhaps the biggest challenge is information asymmetry. The AI Act creates a value chain with "providers" (developers) and "deployers" (users). The ultimate responsibility for the FRIA lies with the deployer.

However, to perform a proper FRIA, the deployer is entirely dependent on the provider's technical documentation, test results and risk analyses. The provider is required to provide information about the features and residual risks of the system. However, there is a real risk that providers will be reluctant to share crucial information, for example, because of trade secret protection or the sheer technical complexity of "black box" machine learning models.

This asymmetry makes the deployer vulnerable. Without complete information from the provider, it is virtually impossible to establish a compliant FRIA. This highlights the critical importance of watertight contractual agreements. Just as the GDPR led to mandatory data processing agreements, the AI Act will force specific contract clauses that clearly regulate the duty to cooperate, information sharing and division of liability between provider and deployer.

Existing tools and the way forward

The requirement to conduct a FRIA will take effect on 2 August 2026. The European AI Office has not yet published an official, harmonized model for a FRIA.

In the meantime, however, a multitude of tools and methodologies have already been developed by governments, academia and private players. Some examples are:

  • The Dutch 'FRAIA': One of the first initiatives developed for the Dutch government. It is a substantial (nearly 100 pages) discussion tool, but a pilot study also revealed that the process can be perceived as slow.
  • The Catalan model: An initiative of the Catalan DPA that proposes a more empirical and streamlined approach to reduce the burden on organizations.
  • The ALTAI Checklist: Developed by the EU High-Level Expert Group. This tool was influential before the AI Act, but has not been updated since 2020 and may be outdated.
  • Other tools: UNESCO, Microsoft and various academic groups have also published their own impact assessment methodologies.

It is hoped that the AI Office will soon propose a harmonized framework that provides legal certainty while allowing the necessary flexibility.

Frequently asked questions (FAQ)

What is a "high-risk AI system"?
The AI Act defines "high-risk systems" in Annex III of the regulation. These are systems used in specific domains where they may pose a significant risk to health, safety, or fundamental rights. Consider AI for selecting job applicants, making medical diagnoses, or assessing creditworthiness.

When does the FRIA requirement take effect?
The requirement to conduct a FRIA for high-risk systems becomes applicable as of 2 August 2026. However, given the complexity of the analysis, it is essential to start preparing for this now.

Are there already official templates for a FRIA?
No. The "AI Bureau" of the European Commission has yet to develop an official model or template. In the meantime, various methodologies and tools have been developed by national governments (such as the Dutch FRAIA), academics and private companies (such as Microsoft), but they have not yet been harmonized.

Conclusion: start preparing today

The FRIA is a complex and sweeping requirement that significantly adds to the compliance burden of using AI. It is not a "check-the-box" exercise and goes far beyond the DPIA you are used to. Organizations must evaluate not only the technical risks, but also the broad societal and ethical impact on all fundamental rights.

Supplier dependence and the lack of an imposed governance model require proactive action. It is essential to start thinking about internal processes now and to have your contracts with AI providers reviewed by legal counsel.


Joris Deene

Attorney-partner at Everest Attorneys

Contact

Questions? Need advice?
Contact Attorney Joris Deene.

Phone: 09/280.20.68
E-mail: joris.deene@everest-law.be
