March 06, 2023
What kind of algorithmic society do we want to strive for and live in? Interview with AlgoSoc directors Natali Helberger and Claes de Vreese (part I)
Algorithmic decision systems are increasingly central to our lives. In some situations they even replace human decision-making. Renowned scientists from the universities of Amsterdam, Rotterdam, Utrecht, Delft and Tilburg have joined forces in a long-term research programme that investigates how society can ensure that these (semi-)automated decision systems are designed to respect public values and human rights.
“It is essential to understand in what kind of algorithmic society we want to live and what kind of values should be central in such a society.” — Prof. dr. Natali Helberger
The research programme Public Values in the Algorithmic Society (AlgoSoc) is directed by Natali Helberger, who is University Professor in Law and Digital Technology with a special emphasis on artificial intelligence at the University of Amsterdam (UvA). Together with co-director and UvA colleague Claes de Vreese, University Professor of Artificial Intelligence and Society, she is a member of a team of six principal investigators who are jointly responsible for ensuring that AlgoSoc results in an integrated vision of the relation between society and advanced algorithms and artificial intelligence.
We interviewed Natali Helberger (NH) and Claes de Vreese (CdV) to gain insight into AlgoSoc and its connections to current technological and regulatory developments.
We see increasing societal concerns about algorithmic technology because of its potentially disruptive effects. How does AlgoSoc relate to these concerns?
NH: “From a governance perspective, we currently see a very strong focus on human-centered AI design and compliance with fundamental rights and values. But actually we take all these values very much for granted and don't really ask ourselves what these values are, how to operationalise and translate them into algorithmic technology, or who is currently taking decisions about them. And I think an important contribution of the AlgoSoc programme is to dig deeper into these questions and study the dynamics between values and automated decision-making, including the humans behind these technologies and the powerscapes in which they operate.”
NH: “I think that public values are the DNA of our society: the public values we cherish determine what kind of society we live in. What we see at the moment is that, under the influence of these new technologies and automated decision-making, entirely new players, mostly from technology, get a very determining role in deciding what these values are and how they are to be understood. And that is why AlgoSoc is also so important, because it develops a societal perspective on automated decision-making. And such a perspective is key for building a governance structure that centers upon societal interests.”
“The public values we cherish determine what kind of society we live in.” — Prof. dr. Natali Helberger
CdV: “It is important to note that, at the moment, a lot of automated decisions are very much still hybrid decisions. There is still a lot of human agency involved before we roll out systems that take larger decisions and have the self-learning capacity where every new decision feeds into the next one. However, some of the societal concerns about AI and self-learning systems are very legitimate. Many of these are rooted in the fact that machine learning models are only as good as the training data they are built on. If biases go in, then biases come out. In AlgoSoc, we are particularly interested in how one can increase the transparency of the outcomes of these technologies. But also, very important from a user or citizen perspective: what is the degree of explainability? And accountability? Can you contest their outcomes? These are some of the fundamental human rights-related concerns that people have about these systems and which are central to AlgoSoc.”
CdV: “Next to that, we are also looking into specific sectors, as the discussions on the implementation and societal embedding of this technology might vary from sector to sector. In AlgoSoc, we consider the justice, health, and media sectors. There might be core values that are important to respect across each of these, but there might also be values that are of particular importance to only one.”
Apart from the technology itself, research on self-learning and automated decision systems is also thriving. How does AlgoSoc fit in? To what extent is the program pioneering?
CdV: “Indeed, it all goes very fast right now. A year ago, very few people were talking about generative AI. It was available to only a few industrial stakeholders, let alone individual consumers. AlgoSoc lasts for ten years, and given the current speed with which algorithmic technology is developing, it is impossible to look that far into the future and anticipate what’s coming. But the research questions of AlgoSoc are so fundamental in nature that it does not matter so much what is happening today or what tomorrow’s new hype in AI will be. And yet, having said that, the 50+ research projects that the programme comprises will have to be sufficiently agile to take new technological developments into account and include them as cases when they become available.”
NH: “What is really pioneering about AlgoSoc is that we do not look at an algorithm or a value in isolation. We study them by looking at the broader ecology of players that help shape and realize a particular value, and at how the relationships within such an ecology change under the influence of automation. We combine this with conceptual and empirical research into what these values are, how they are perceived, how they are operationalized and implemented in algorithms, what effects these algorithms have on the people using them, and what insights we can gain for governance. This is really important because, for example, if we look at regulation, there is a very strong tendency to concentrate narrowly on the algorithm, without acknowledging that algorithms operate in the context of a particular organization or institution. What I think is missing is a better and deeper understanding of this broader social-organizational context in which public values in automated decision-making are realized. This is what AlgoSoc contributes, and it is what makes the program unique.”
“The research questions of AlgoSoc are so fundamental in nature that it does not matter so much what is happening today and what tomorrow’s new hype in AI will be.” — Prof. dr. Claes de Vreese
What would be the effect if we do not take values into careful regulatory consideration?
CdV: “You would then allow the market to operate without boundaries. You really do not want to leave fundamental rights and values out of the systems we create, as it could result in additional discrimination and more inequality. Values like diversity and equality need to be safeguarded and embedded in automated decision-making systems. Your own imagination is the only limit when thinking about the implications of having systems that are not guided by either rights or values.”
NH: “Values very much represent the kind of society we want to live in. That's why it's so important to understand what kind of algorithmic society we want to strive for and live in, and what kind of values should be central in it, because it affects our living and working conditions. It is certainly not the case that tech companies have no values, but their values may be guided by efficiency, productivity and scalability. Those are important values. They can contribute to economic growth, but there are other values, like inclusivity, diversity, equality, respect for human dignity and autonomy, that are not as easily represented or operationalized in algorithmic design. And I therefore think it's really important that we keep an eye on the overall balance between different kinds of values, that we discuss the kind of society we want to live in, and that we reflect on the values that fit this society. And also that we study how to realize them in automated decision-making and regulate the way these systems are implemented and used in our society.”
To what extent are values themselves an object of study in your research program?
CdV: “A large part of the program is actually devoted to problematizing, defining, carving out and identifying these values. And values also serve as inspiration for other research projects that are more empirical in nature, projects which also seek to understand how some values are perceived by citizens or how they are ingrained in regulation. Values are thus central to the research program. They are studied in relation to the three sectors that AlgoSoc is focusing on, but values are also central in its cross-sectoral synergetic part in which we also identify which of these might be more universally applicable.”
Do citizens sufficiently recognize the importance of values? Does AlgoSoc also make them more aware of what’s at stake?
NH: “The program is about doing excellent research, but also about reaching out to society and involving citizens. For example, we have a citizen panel through which we engage in participatory approaches to better understand and conceptualize values. There will be outreach activities and literacy initiatives too. And we publish our findings not only in academic journals, but also in mainstream media. So I think this program has not only an important academic, but also an important societal role to play, for the Netherlands and beyond.”
What kind of literacy activities will you engage in?
NH: “These can take the form of training for, for example, people who operate automated decision-making systems, like judges or newsroom professionals, who impact the realization of public values and the algorithmic society.”
CdV: “Indeed, it is important to target key people in the stakeholder chain because they might be involved in decision-making procedures about whether or not to implement specific values. It is one thing to have a sort of generally enlightened public and have our results feed a public debate about public values in the algorithmic society, but it is equally important to engage with policy, sector and industry stakeholders.”
To continue reading, click for interview part II