October 02, 2025
Interactive workshop: Designing a ‘fair’ human-in-the-loop

What does it really mean for humans-in-the-loop (HITLs) to make AI systems more “fair”? At the Fourth European Workshop on Algorithmic Fairness (EWAF), AlgoSoc PhD candidates Isabella Banks and Jacqueline Kernahan tackled this question in their interactive workshop “Designing a ‘fair’ human-in-the-loop.” Drawing on case studies from the justice and health sectors, they invited participants to reflect on the real challenges, limits, and possibilities of human oversight in sociotechnical systems.
On 2 July 2025, AlgoSoc PhD candidates Isabella Banks (Institute for Information Law, University of Amsterdam) and Jacqueline Kernahan (Technology, Policy, and Management, Delft University of Technology) facilitated an interactive workshop entitled ‘Designing a “fair” human-in-the-loop’ at the Fourth European Workshop on Algorithmic Fairness (EWAF) in Eindhoven. Isabella researches the governance of automated decision support systems (ADSS) in the justice sector and Jacqueline researches the governance of ADSS in the health sector, but both are interested in understanding human oversight through a sociotechnical lens and, more specifically, in the capacity of humans-in-the-loop (HITLs) to make ADSS more fair.
The workshop was joined by experts from academia, civil society, and government. This blog post briefly introduces its topic and structure and then highlights a few of the key insights it generated among EWAF participants. In the final section, Isabella and Jacqueline outline the workshop in detail and provide its materials so that anyone interested can replicate or adapt it for their own educational purposes. They are also open to facilitating the workshop again in a different setting and can be reached at i.banks@uva.nl and j.a.kernahan@tudelft.nl to arrange this.
Human oversight in sociotechnical AI systems
The European Union’s new Artificial Intelligence Act (AI Act) makes human oversight central to the governance of AI systems classified as high-risk. This means that such systems must be “designed and developed in such a way…that they can be effectively overseen by natural persons.” The primary purpose of human oversight under the AI Act is to prevent or minimize risks to health, safety, or fundamental rights that may emerge while an AI system is used, in particular those risks that persist despite the application of the other high-risk system requirements. Although responsibility for human oversight is placed primarily with providers of high-risk AI systems, the institutions that deploy them are required to assign human oversight to individuals with the necessary competence, training, authority, and support.
In the context of decision support systems that provide information to professionals to help them make decisions, the individual tasked with oversight will typically be the system’s user. This HITL acts as a check on the system by reviewing its outputs and retaining authority over the final decision.
In their 2023 paper ‘Humans in the Loop,’ Crootof, Kaminski, and Nicholson Price identify eight different roles that HITLs may implicitly or explicitly be expected to play:
- Improving system accuracy (e.g. detecting and correcting bias and error)
- Improving system resilience (e.g. by minimizing the harms from bad outcomes)
- Justifying decisions (e.g. by providing reasons)
- Respecting the dignity of decision subjects (e.g. by making decisions more humane)
- Being the accountable (legally liable or morally responsible) party
- Signalling through their presence that some kind of regulatory work has been done
- Slowing down system operation in useful ways (by creating friction)
- Preserving human jobs
This workshop, however, was geared towards understanding what it takes for humans-in-the-loop to live up to the ideal of making systems more fair. It therefore focused on the oversight roles that prevent unfairness or unequally distributed harm, as opposed to those that are legal or purely performative, namely:
- Improving system accuracy (e.g. detecting and correcting bias and error)
- Improving system resilience (e.g. by minimizing the harms from bad outcomes)
- Respecting the dignity of decision subjects (e.g. by making decisions more humane)
- Slowing down system operation in useful ways (by creating friction).
Workshop structure and insights
The central question the workshop explored was how misalignments in mental models and asymmetries in epistemic power between developers, users/HITLs, and decision subjects affect how these stakeholders understand the ‘fairness’ of ADSS and the role of the HITL in actualizing that fairness in practice. Through a comparative analysis of case studies in the healthcare and welfare domains, participants were invited to think together about how the contexts in which these systems were deployed, and the positionality of their assigned stakeholder group (system developer, user/HITL, or decision subject), affected how they conceived of a ‘fair’ and ‘effective’ HITL. The workshop concluded with a presentation of the real-world outcomes of each case and a discussion about what HITLs can (and cannot) be expected to achieve in practice, and whose knowledge and experiences are (and should be) reflected in the way ADSS are designed, used, and overseen.
Insights and reflections from participants in the workshop highlight some of the critical challenges that stakeholders face in systems that rely on HITLs.
Insight 1: Risks were difficult to anticipate due to system complexity
Participants in each stakeholder group found it difficult to anticipate the unfairness or harms that might emerge from the wider system. Following the presentation of the negative outcomes that resulted in the real-world case studies, one participant noted that while the risks seemed obvious in hindsight, it had been much easier to create a narrative about the benefits of each system while answering the workshop questions.
Insight 2: A feeling of powerlessness to meaningfully affect the system was shared across stakeholder groups
None of the stakeholder groups felt that they had enough control over or knowledge about the system to meaningfully anticipate or mitigate its risks. They could not see a way to prevent the system from being built and used (even the developers), and they also felt powerless to disengage from it. Each stakeholder group felt that their actions were constrained by other (more powerful) actors and by organisational, political, and legal requirements that they did not know how to push back on. While groups easily identified the limits of their own control and information, they tended to assume that the other groups had more information about and control over the system. That is, boundaries of control were seen as small for one’s own group and large for others, even though all groups received the same information in their handouts.
Insight 3: Humans-in-the-loop may not be in a position to recognize system unfairness, or to take action to change it
After learning the real-world outcomes of each case, participants observed that both systems seemed to have been reinforcing rather than ameliorating the biases of the designated humans-in-the-loop. That is, HITLs were being used as a check on unfairness in systems that were developed based on their biased intuitions. This made it difficult for them to identify or respond to unfairness in either system. Participants also noted that an overriding faith in the objectivity of each system led all stakeholders, including the HITLs, to place undue trust in its outputs.
More broadly, participants shared that the workshop helped them to interrogate their assumptions about the risks of complex sociotechnical systems and the capacity of HITLs to prevent them. It gave them insight into the level of information, control, and engagement with other actors that is necessary for system users acting as HITLs to effectively identify and mitigate unfairness.
Designing effective human oversight
The cases presented in the workshop reveal that, rather than living up to this ideal, HITLs often perform more symbolic or legal functions that serve to distract from harms caused by the sociotechnical system as a whole and the punitive policies underlying it.
Designing a ‘fair’ HITL therefore requires an understanding of the wider system, the political environment, and the ways that stakeholder positioning can affect the capacity of HITLs to prevent harm. Frameworks and methods that take these questions into account are needed to design systems that can be meaningfully overseen by humans. Towards this end, Isabella and Jacqueline are exploring the following concepts and approaches in their own research:
- Systems safety
- Meaningful human control
- Epistemic injustice
- Collective/institutional approaches to human oversight
- Restorative justice
You can learn more about Isabella’s and Jacqueline’s research here and here.
Workshop materials for download
A detailed outline of the workshop as well as all material related to its activities can be accessed in this folder (password: 4aFLY9w9Ki).
The materials you can find there were developed based on the following sources:
- Szalavitz, M. (2021). The pain was unbearable. So why did doctors turn her away? Wired.
- Oliva, J. D. (2022). Dosing discrimination: Regulating PDMP risk scores. California Law Review, 110, 47.
- Amnesty International (2024). Coded Injustice: Surveillance and Discrimination in Denmark’s Automated Welfare State.
© Image at the top: Fanny Maurel & Digit / https://betterimagesofai.org / https://creativecommons.org/li...