April 26, 2024

On the relative importance of the AI Act right to explanation

The initial Commission proposal for the AI Act focused on establishing requirements and safeguards applicable to the various operators of AI systems but lacked individual rights and remedies for persons on the receiving end of such systems. In contrast, the AI Act version approved by the European Parliament on 14th March 2024 includes some rights and remedies available to natural and legal persons interacting with, or impacted by, AI systems. One such right – a right to explanation for individual decision-making – is given closer attention in this contribution. Although a welcome addition to the protection of fundamental rights, it is argued that the practical impact of this right to explanation may be limited, as it lingers in the shadow of the equivalent right in the GDPR and is restricted to the use of high-risk AI systems which may adversely impact the health, safety, or fundamental rights of affected persons.

Amid the heated debates over the prohibited uses of AI, the classification of high-risk AI systems and the governance framework for generative AI, the right to explanation for individual decision-making entered Article 86(1) AI Act relatively quietly, with little discussion of its implications.

Meanwhile, a ‘right to explanation’ for automated decision-making has long been sought in the provisions of the GDPR. Diverging opinions have been voiced over the existence and material scope of this right. By now, the scales seem to have tilted in favour of reading a right to explanation into the GDPR. In a recent opinion, Advocate General Pikamäe interpreted the right to receive ‘meaningful information about the logic involved’ in automated decision-making (Article 15(1)(h) GDPR) to encompass both “sufficiently detailed explanations of the method used to calculate the [output] and the reasons for a certain result” (para. 58).

The soon-to-be-adopted AI Act contains no such ambiguity. Following the trilogue negotiations, it now includes an explicit “right to request from the deployer clear and meaningful explanations of the role of the AI system in the decision-making procedure and the main elements of the decision taken”.

However, if we recognise that the right to explanation exists in the GDPR, the added value of the new provision in the AI Act becomes questionable. In this contribution, I will examine some of the limitations on the exercise of the AI Act right to explanation and argue that, despite them, it can still prove a useful tool for protecting individual rights in a time of increasing use of AI systems.

Omission from the initial AI Act proposal

The Commission’s decision to omit a right to explanation from the initial AI Act proposal is the first reason to question its added value. According to the explanatory memorandum, the Commission was confident that “the obligations for ex ante testing, risk management and human oversight will also facilitate the respect of other fundamental rights by minimising the risk of erroneous or biased AI-assisted decisions in critical areas such as education and training, employment, important services, law enforcement and the judiciary.” Regardless of these ex ante obligations and safeguards, the Commission acknowledged that infringements of fundamental rights may still occur. However, it trusted that the transparency and traceability requirements established for AI systems would ensure effective redress for affected persons. By effective redress, the Commission was presumably referring to the existing legal pathways under EU and member state laws. As Panigutti et al. demonstrate, even in the absence of a right to explanation, the AI Act proposal included requirements ensuring a degree of algorithmic transparency from the start. Seen in this light, the practical significance of the AI Act right to explanation appears limited.

Overlap with the GDPR right to explanation

The added value also seems limited after the CJEU’s rather wide interpretation of Article 22(1) GDPR which, when applicable, subjects automated decision-making to additional safeguards such as the right to information established in Article 15(1)(h) GDPR. In its SCHUFA judgment of 2023, the CJEU clarified that the concept of a ‘decision’ based solely on automated processing within the meaning of Article 22(1) GDPR should be understood to also include a fully automated part of the final decision if that part plays a determining role in the decision (para. 48). Whereas Article 22(1) GDPR was formerly thought to apply only to decisions based on solely automated processing, the Court’s interpretation now opens it up to certain partially automated decisions as well. Combined with AG Pikamäe’s interpretation of Article 15(1)(h) GDPR, this judgment points to a rather widely applicable right to explanation in the GDPR. Arguably, the AI Act right to explanation adds little meaningful protection on top of this.

Furthermore, Article 86(3) AI Act specifies that “[t]his Article shall apply only to the extent that the right referred to in paragraph 1 is not otherwise provided for under Union law.” It is safe to assume that the EU legislator is here referring, among other things, to the precedence of the right to explanation arising from the combination of Articles 15(1)(h) and 22(1) GDPR. The AI Act right to explanation is thus relevant only when, and to the extent that, a person does not have a right to receive an explanation under these provisions.

The limited scope of Article 86(1) AI Act

In addition to the above, the AI Act right to explanation can only be enforced against an AI system deployer if two cumulative conditions are met.

First, the right will exist only if a decision has been made using a high-risk AI system belonging to one of the areas listed in Annex III of the AI Act. This means that it only applies in relation to systems in the areas of:

1- biometrics,
2- education and vocational training,
3- employment, workers’ management and access to self-employment,
4- access to and enjoyment of essential private services and essential public services and benefits,
5- law enforcement,
6- migration, asylum and border control management, and
7- administration of justice and democratic processes.

However, Article 6(3) AI Act introduces a number of carve-outs from the high-risk categories listed in Annex III. Such carve-outs apply where a system does not pose a significant risk of harm to the health, safety or fundamental rights of natural persons, including by not materially influencing the outcome of the decision-making. Examples where no significant risk of harm is considered to exist include AI systems intended to:

1- perform a narrow procedural task,
2- improve the result of a previously completed human activity,
3- detect decision-making patterns or deviations from prior decision-making patterns (provided the system is not meant to replace or influence the previously completed human assessment without proper human review); or
4- perform a preparatory task to an assessment relevant for the purpose of the use cases listed in Annex III.

Although essential to relieve AI system deployers of unnecessary administrative burdens, these carve-outs may prove to be devils in disguise. The option to argue that a system falls under one of these exceptions creates the risk that systems which should be subject to the AI Act’s high-risk safeguards will not be classified as high-risk by their operators. A system excluded from the high-risk categories of Annex III in this manner would not have to be explained to affected persons in accordance with Article 86(1) AI Act.

Second, the right to explanation will exist only if a decision produces legal effects or similarly significantly affects a person in a way that they consider to adversely impact their health, safety or fundamental rights. Since this requirement echoes the wording of Article 22(1) GDPR, the terms ‘legal effect’ and ‘similarly significant’ effect will likely be interpreted in accordance with CJEU case law and the guidance of the European Data Protection Board. The additional requirement that the ‘health, safety or fundamental rights’ of an affected person be adversely impacted likely refers back to the carve-outs listed in Article 6(3) AI Act. It spares high-risk AI system deployers from having to explain every AI system within their domain of activities, including those with insignificant impact on decision subjects. Systems optimizing document handling or information storage, or improving indexing, searching, or text processing, for example, are unlikely to cause fundamental rights infringements.
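To make the structure of these conditions explicit, the applicability test of Article 86(1) AI Act can be sketched as a simple boolean check. This is a schematic illustration only: the field and function names are hypothetical, and the underlying legal tests are qualitative, case-by-case assessments rather than flags.

```python
# Schematic sketch of the two cumulative conditions in Article 86(1) AI Act.
# All names and the boolean modelling are illustrative assumptions: the real
# legal tests are qualitative, case-by-case assessments, not boolean flags.

from dataclasses import dataclass


@dataclass
class Decision:
    uses_annex_iii_system: bool          # made using a system in an Annex III area
    falls_under_art_6_3_carve_out: bool  # an Article 6(3) exception applies
    has_legal_or_similar_effect: bool    # legal or similarly significant effect
    adverse_impact_alleged: bool         # person considers health, safety or
                                         # fundamental rights adversely impacted


def right_to_explanation_applies(d: Decision) -> bool:
    # Condition 1: a high-risk Annex III system not carved out by Article 6(3).
    condition_1 = d.uses_annex_iii_system and not d.falls_under_art_6_3_carve_out
    # Condition 2: a significant effect that the person considers adverse.
    condition_2 = d.has_legal_or_similar_effect and d.adverse_impact_alleged
    return condition_1 and condition_2
```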

The gaps left by the GDPR right to explanation

To assess how much weight these doubts about the added value of the AI Act right to explanation carry, we must first establish when the GDPR right to explanation does not apply and where, consequently, the AI Act version of this right could fill a gap.

A data subject enjoys the right to obtain the information outlined in Article 15(1)(h) GDPR where the cumulative conditions established in Article 22(1) GDPR are met (SCHUFA, para. 56). For that, a decision must: a) be based solely on automated processing, b) involve the processing of personal data, and c) have legal effects or similarly significantly affect the decision subject.

Article 22(1) GDPR does not apply to situations where a decision is reached based on solely automated processing but no personal data is processed, even if the resulting decision may lead to legal or similarly significant effects. For instance, if the processing involves anonymised GPS data, or if privacy-enhancing technologies are used to perform analytics on aggregated data which no longer includes personal data, the second condition established in Article 22(1) GDPR would not be satisfied.

Article 22(1) GDPR also does not apply to most situations where personal data is processed but the decision-making procedure is only partially automated. For instance, where public bodies use algorithms for fraud detection, the algorithms are often used only to filter out a subset of individuals ‘suspected’ of fraud, who are then further investigated by human officials. The distinction between solely and partially automated decisions requires careful consideration following the SCHUFA judgment, but even where the case-by-case assessment leads to some human-machine decision-making combinations entering the scope of application of Article 22(1) GDPR, many combinations will fall outside it.

Thus, there are circumstances in which the GDPR right to explanation will not apply, but where citizens and companies might still require access to information about AI systems affecting them. In those circumstances, the AI Act right to explanation could be useful. As specified in Article 86(1) AI Act, the explanation owed by the deployer must cover the AI system’s role in the decision-making procedure. This indicates that the AI Act right to explanation can apply to decisions in which AI systems are involved in different steps and capacities, ranging from merely advisory to fully determinative. Consequently, it may fill a considerable gap left by the GDPR right to explanation.
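The contrast between the two regimes can be summarised in the same schematic fashion. Again, this is only an illustrative sketch: the function and parameter names are hypothetical simplifications standing in for qualitative legal assessments.

```python
# Schematic contrast of when each right applies (illustrative assumptions only;
# the boolean parameters stand in for qualitative legal assessments).


def gdpr_art_22_applies(solely_automated: bool, personal_data: bool,
                        significant_effect: bool) -> bool:
    # Cumulative conditions read into Articles 22(1) and 15(1)(h) GDPR.
    return solely_automated and personal_data and significant_effect


def ai_act_art_86_applies(annex_iii_high_risk: bool, carved_out: bool,
                          significant_adverse_effect: bool) -> bool:
    # Cumulative conditions of Article 86(1) AI Act; the system's role may
    # range from merely advisory to fully determinative.
    return annex_iii_high_risk and not carved_out and significant_adverse_effect


# Example gap: a fraud-detection algorithm only pre-selects cases for human
# review, so the decision is not solely automated and the GDPR right fails...
print(gdpr_art_22_applies(solely_automated=False, personal_data=True,
                          significant_effect=True))              # False
# ...but a non-carved-out Annex III system with a significant adverse effect
# still triggers the AI Act right to explanation.
print(ai_act_art_86_applies(annex_iii_high_risk=True, carved_out=False,
                            significant_adverse_effect=True))    # True
```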

The final verdict: valuable regardless of the limitations

The added value of the AI Act right to explanation is somewhat limited due to a substantial overlap with the GDPR right to explanation and the limitations built into the AI Act provision itself. In light of these limitations, the Commission’s initial approach of leaving individuals to rely on EU and member state laws to protect their rights may have some merit after all. Instead of arguing that Article 86(1) AI Act applies, it could be easier for an affected person to petition a court under national law for access to the technical documentation (Article 11), system logs (Article 12) and instructions for use (Article 13) of a high-risk AI system.

Nevertheless, even with considerable limitations to its scope of application, the utility of the AI Act right to explanation should not be underestimated. When applicable, the new provision grants the affected person a right to receive clear and meaningful information about a decision directly from the decision-maker; exercising this right does not presuppose entering into any legal proceedings. Hence, there are conceivable circumstances in which the AI Act right to explanation could serve its purpose to “provide a basis on which the affected persons are able to exercise their rights,” as specified in recital 171 AI Act. For instance, administrative decisions concerning public services and benefit allocation often include algorithmic components, involve personal data, and have significant consequences, but may not qualify for the protections afforded by Articles 22(1) and 15(1)(h) GDPR. Similarly, optimization solutions used in areas such as workers’ management, education services or law enforcement, where system inputs may focus on data points other than personal data, fall outside the scope of the protections afforded by the GDPR provisions. If these entities use algorithms which qualify as high-risk AI systems within the meaning of Annex III and are not claimed to fall under the carve-outs of Article 6(3) AI Act, the AI Act right to explanation would provide affected persons with insight into the role of these systems in the resulting decisions. Even if this right is not used often, it retains relevance as a last-resort measure for obtaining the information needed to exercise one’s right to an effective remedy.

This article was first published on the Digital Constitutionalist on April 25, 2024.
