June 18, 2024

AI in the justice sector: a force to be reckoned with

On June 6, 2024, Algosoc hosted an online interactive roundtable on the AI Act and its impact on the justice sector. The AI Act classifies justice-sector applications of AI systems as high-risk (AI Act Annex III, point 8) and therefore subjects them to detailed rules on risk management, reporting and human oversight. With the AI Act freshly approved by the Council of the EU, the discussion could not have been better timed.

The roundtable brought together practitioners and academics, each with their own perspective on the justice sector, and was moderated by Eleni Kosta, Professor of Technology Law and Human Rights at Tilburg University. The discussion was kicked off by a short presentation by the president of the Amsterdam Court, Eveline de Greeve. She laid the foundation for the discussion by describing the current state of affairs regarding digitalisation and the use of AI in the judiciary. Her presentation was followed by Corien Prins, Professor of Law and Information Technology at Tilburg University, Chair of the Netherlands Scientific Council for Government Policy (WRR) and Principal Investigator of Algosoc. She highlighted a number of relevant principles and concerns arising from publications by the European Commission for the Efficiency of Justice (CEPEJ) and the Consultative Council of European Judges (CCJE), which provide a framework for interpreting how the AI Act will affect the justice sector. Following the institutional and academic take on the subject, Hans Graux, an ICT lawyer and founding partner of the law firm Timelex, offered his perspective on the key challenges and potential benefits of introducing AI systems into the justice sector. Finally, Lokke Moerel, senior counsel at law firm Morrison & Foerster and Professor of Global ICT Law at Tilburg University, weighed in on the topics raised by the previous speakers.

The roundtable covered a number of fascinating topics – some closely related to the AI Act and some concerning the use of AI in the justice sector more generally. At the core of the discussion were questions of judicial independence, the role and capabilities of the human judge, and the call to avoid over-generalizations when discussing the impacts of using AI in the justice sector.

The justice sector, digitalisation, automation and AI

Eveline de Greeve’s account of the current state of automation in the Dutch courts gave a realistic overview of the challenges the courts face and the potential they see in AI. Although the Dutch courts are experimenting with certain AI applications, they currently face the bigger challenge of digitalising existing case law and automating case management. While these activities do not involve AI systems, digitalisation and automation present dilemmas of their own. One such dilemma is the tension between judicial independence and the concentration of control over digital infrastructure in the hands of a few large technology companies.

While actively engaged with digitalisation, the courts are also exploring ways in which AI could, in the future, improve the efficiency of case handling and citizens’ access to justice. One possibility under consideration is using AI to bundle similar cases, such as those related to traffic fines. Another is a dashboard offering case-based algorithmic recommendations, which would allow people to assess their legal dispute against existing case law and receive suggestions for further action. Importantly, the AI applications under consideration would not restrict or in any way affect a person’s right to have their case decided by a judge. In other words: the dystopian image of robot judges will remain a fantasy for the foreseeable future.

Use of AI in the justice sector demands caution

Hans Graux outlined why robot judges would, in general, be a bad idea. He started by contrasting judicial decision-making with medical diagnosis, where AI solutions have successfully increased diagnostic accuracy and are now considered invaluable decision-support tools. As he readily added, however, legal disputes rarely have solutions as clear-cut as medical issues, and thus call for further scrutiny and reflection. One distinct issue, inherent to both human and AI decision-making, is bias. Whereas a human can learn to recognize and avoid bias, a machine indiscriminately repeats the (biased) patterns in its input data. Even if careful system design could help overcome this problem, another issue likely to arise if AI systems enter the domain of judicial decision-making is judicial stagnation. Human judges are unique in their capacity to interpret existing rules in light of changing social norms and societal circumstances. AI systems, in contrast, make predictions based on historical training data and cannot take into account factors that are not represented in that data. Hans Graux illustrated this point with the example of how courts are increasingly being asked to (re-)interpret state obligations to protect human rights in the climate crisis (e.g. KlimaSeniorinnen v. Switzerland).

But why not use AI systems as judicial decision-support tools? That way, the human judge would remain in control of the final decision and could compensate for the qualities the AI lacks, while still benefiting from the AI system’s capacity to broaden the scope and quality of the information available for deliberation. In relation to this, Lokke Moerel referred to recent CJEU case law indicating that the line between assisted and fully automated decision-making is thin. In OQ v SCHUFA Holding (Case C-634/21), the CJEU ruled that, under certain circumstances, a fully automated part of a decision can be subject to the safeguards intended for solely automated decision-making (Article 22 GDPR) even if a human is later involved in issuing the final decision. Her point was that in certain instances AI-assisted decision-making can, or according to the CJEU should, be treated similarly to fully automated decision-making. To explain why, she pointed to automation bias: the phenomenon of placing ‘disproportional trust in the validity and rationality of algorithms’, which leads human decision-makers to follow algorithmic outcomes as a default, without sufficiently scrutinizing them. To help counter this effect, Article 14 of the AI Act provides some much-awaited clarity on what effective human oversight should look like. But even with human oversight measures in place, the possible implications for decision quality must be carefully assessed when considering AI-assisted decision-making in the justice sector.

A more hopeful outlook

To counterbalance the various problems related to AI, some potential benefits of AI systems were also discussed. Lokke Moerel urged the audience to distinguish the more controversial use of AI systems in judicial decision-making from the more mundane applications intended to increase the efficiency of the court system. AI tools aimed at increasing the courts’ efficiency could help remove bottlenecks and improve access to justice without jeopardizing the decision-making autonomy of judges or the independence of the judiciary. As one tangible solution, Hans Graux suggested using AI to build out-of-court alternative dispute resolution platforms for disputes with long-standing and uniform case law where the subject of the dispute is the distribution of assets (e.g. family matters). An AI system built for this purpose could propose a settlement based on similar historic cases, without the need for expensive and time-consuming court proceedings. Naturally, the parties should still have the option to decline the AI system’s proposal and opt for traditional court proceedings.

Beyond potential use cases of AI systems within the judiciary itself, Corien Prins highlighted that the courts should offer guidance to lawyers and litigants who already use, or are considering using, AI tools to draft court submissions. She referred to how US district court judges have approached the subject, and asked whether lawyers and judges in the EU receive sufficient support in dealing with court submissions drafted with the help of AI. Among the open questions she raised: should lawyers and litigants be under a duty to disclose the use of AI systems in their submissions, and how should judges assess the reliability of AI-generated content? Answers could be sought in the 2022 guide on the use of AI-based tools by lawyers in the EU. That guide, however, reflects the understanding of the EU lawyers’ associations. Perhaps the judiciary should also weigh in on this matter?

By the end of the session, all speakers agreed that the risks and benefits that AI systems pose for the functioning and independence of the judiciary are manifold, and that the discussion should therefore be nuanced. We should not flatly rule out the benefits that AI systems could bring to the justice sector for fear of worst-case scenarios. Instead, we should assess specific AI systems in their particular implementation context to understand their potential implications for the judiciary. Hopefully, the rules the AI Act foresees for high-risk AI systems will provide a structure for doing just that.
