Rules, tools and metrics – the social construction of legitimate algorithmization

Partners: Tilburg University & Utrecht University
Type: PhD
Duration: 2023–2026

The increasing use of algorithmic and automated decision-making systems within the public sector has been accompanied by a societal call for stronger oversight of these mechanisms. Several instruments – such as human rights impact assessments, or legal, technical, or ethical audits – can be used both to check whether the use of an algorithm falls within the scope of government work and to legitimize working with it. A checkmark showing that an algorithm has been audited by an external party can increase acceptance of the government using such systems, whether a system is fully automated (an automated decision-making system, or ADM) or merely used as decision support.

This research project will investigate how instruments designed to ensure the responsible and legitimate use of ADMs shape decision-making processes within public organizations. To do so, the research will first map the societal debate surrounding the call for oversight mechanisms for these systems. Second, a study will examine the intended effects of these instruments – for example, the FRAIA (Fundamental Rights and Algorithm Impact Assessment) was written by several authors, who had to make decisions about which values and checks to include in the instrument. Additionally, the project will include one or more case studies to examine the actual effects these instruments have on decision-making processes within public organizations. The intended effects might, after all, not match how automated decision-making systems are actually used in practice. Finally, based on the research conducted within this project and the literature, the PhD project will produce an overview of current auditing methods and the ways in which they might contribute to the legitimization of algorithmization.

Beyond the academic output, this research project aims to provide actionable insights for public organizations. Understanding how these instruments actively shape decision-making processes can inform the development of more effective governance instruments and strategies. Recent scandals and controversies (particularly in Dutch society) have eroded public confidence, and the responsible and legitimate use of algorithmic systems is a crucial step towards rebuilding this trust. By shedding light on how these instruments affect societal perceptions, the research can offer guidance on how public organizations can improve transparency and accountability. Ultimately, this research strives to ensure that algorithmic systems are harnessed for the benefit of society while addressing the concerns that have arisen in an era of increasing automation and algorithmic decision-making.

This project is part of the Justice sector of the AlgoSoc project: it examines decision-making within the public sector, which can have a significant impact on (distributive) justice, as well as on the possibility of tailoring decisions on, for example, welfare to the individual (in Dutch: ‘de menselijke maat’). Furthermore, by looking at the values embedded in the oversight instruments and how these play out in day-to-day routines and practices, the project focuses on the effects of automated decision-making systems on society. This will also create further opportunities to discuss which values we want included in both the algorithmic systems and the oversight mechanisms.
