Operationalising human oversight of AI-supported judicial decision-making: A systemic perspective

Partners: University of Amsterdam & Tilburg University
Type: PhD
Duration: 2023–2027

Decision support systems that use artificial intelligence (AI) to automate aspects of judicial decision-making – and the private companies often driving their development – challenge traditional notions of legal authority in ways that may compromise the fundamental rights of litigants and bring about a consequential shift in public values and expectations around justice (Re & Solow-Niederman 2019). At the same time, they promise greater efficiency, accuracy, consistency, and convenience that – if realized – could make justice more accessible (Susskind 2019; Sourdin 2021). These AI systems and the prospect of “robojudges” presiding over courts of the future raise the question of whether adjudication is a uniquely human activity or one that can gradually be taken over by machines. 

The unique capacities and limitations of human and AI judges make human-machine collaboration in adjudication – and, more specifically, human oversight of decision support systems – seem like an intuitive way to deliver the best of both worlds and avoid the worst of each (Crootof et al. 2023; Van Domselaar 2022). Such oversight is typically organized by keeping a human professional “in-the-loop” to interpret, review, and, if necessary, challenge automated outputs (Binns 2022).

The importance of human agency and oversight has long been recognized in the judicial context. As early as 2018, the European Commission for the Efficiency of Justice emphasized the need for judicial AI systems to remain “under user control.” This requires users of these systems to be sufficiently informed, autonomous, and enabled to review automated judicial decisions and the data relied upon to produce them. Human oversight is also a central feature of the EU AI Act, which classifies AI systems used to support judicial decision-making as high-risk.

However, many questions remain about how this human-in-the-loop model of oversight should be organized in judicial practice. Judges responsible for using and overseeing decision support systems may exercise their discretion in disparate ways, resulting in an unequal distribution of individual justice and the loss of any consistency-related benefits of automation. The experimentation that would likely be needed to calibrate effective human-machine collaboration is particularly difficult in the judicial context, “where human liberties are at stake” and AI systems can “fail in ways that harm litigants” (Crootof 2019, 243).

This PhD project aims to address the uncertainties around human oversight in the judicial context by understanding decision support systems as part of complex socio-technical systems. Through theoretical, empirical, and doctrinal research, it seeks to answer the question: How should human oversight of AI-supported judicial decision-making be conceptualized, organized, and governed? The dissertation is structured around the following sub-questions:

  • How should human oversight of AI-supported judicial decision-making be conceptualized? Specifically, what does human oversight aim to achieve in the judicial context, and what does it mean for judges to be both professionals and “humans-in-the-loop”? This question will be addressed through a comprehensive and critical analysis of human oversight that unpacks the concept, surfaces its underlying assumptions, and applies it to the judicial context. 
  • How is human oversight of AI-supported judicial decision-making organized (or envisioned) in practice? This question will be explored through semi-structured interviews and, ideally, participant observation, focused on understanding the practical mechanics of (or professional and policy discussions around) human oversight in a specific judicial context.
  • How could the organization of human oversight in the judicial context be improved? This question will be addressed through a co-design session that convenes experts in law, technology, and human-machine interaction. Participating experts will be confronted with the combined insights of the previous two studies and asked to engage in a requirements analysis focused on effective human oversight of AI-supported judicial decision-making. 
  • How could the governance of human oversight in the judicial context be improved? This question will be addressed through a critical, doctrinal analysis of the human oversight requirement of the EU AI Act (Article 14) and other relevant governance mechanisms (e.g., guidelines, professional codes, or other forms of self-regulation or professional practice) in the judicial context.

This project sits at the intersection of AlgoSoc’s Justice and Governance work streams, meaning it will additionally help to address two of the consortium’s overarching research questions: How can the justice sector define and realize public values surrounding the rule of law in the algorithmic society? How can responsibility for public values be organized, and decision-making power regulated, in the algorithmic society? 

By identifying gaps in how human oversight of AI-supported judicial decision-making is understood in theory, in law, and in practice, this cumulative dissertation aims to support the development of more realistic, systemic, and socio-technical models of human oversight in the judicial context. It will also seek to inform better governance of judicial decision support systems through self-regulation and the newly adopted EU AI Act.
