AlgoSoc / Opinion
November 30, 2023

How’s life? About the role of public values on well-being in an AI-dominated world

Keynote delivered by Prof. Corien Prins, Professor of Law and Informatization at Tilburg University, chair of the Netherlands Scientific Council for Government Policy (WRR), The Hague, and principal investigator of AlgoSoc, on the occasion of AlgoSoc's festive kick-off (November 29, 2023).

Today we celebrate the kick-off of an exciting research program: AlgoSoc. In many ways this program brings together the best of many different worlds. It brings together different universities: the University of Amsterdam, Utrecht University, Erasmus University, Delft University of Technology and Tilburg University. It brings together disciplines: media studies, law, ethics, public administration, computer science, machine learning. It also combines research in different sectors: media, health and justice. It brings together different research methods: quantitative and qualitative panels and surveys, desk research, field work, and so on. And of course: it brings together people of different backgrounds, countries and generations. In terms of content, AlgoSoc brings together technology and values. On the one hand, it is about artificial intelligence – and more recently generative AI – and computational infrastructures. On the other, it is about public values, such as privacy, autonomy, individual freedom, and solidarity. Or, put differently, AlgoSoc aims to bridge technological innovation and societal interests. And finally, AlgoSoc aims to contribute to knowledge distribution by bridging the worlds of academia and society, by fostering the use of science for policy.

Let me, in this keynote, offer yet another glance at what AlgoSoc might offer in building bridges. For this, I turn to an organization that – for six decades now – has brought together countries: the Organisation for Economic Co-operation and Development (OECD). The OECD is an organization that aims to shape policies for better lives, thus fostering prosperity, equality, opportunity and well-being for all. As you probably know, the OECD works on establishing evidence-based solutions to a range of challenges and advises on best practices, policies and international standard-setting – among them in the domain of digital technologies.

As part of its work, the OECD developed the so-called well-being framework. This framework is key in measuring progress that is based on more than merely macro-economic statistics, such as GDP. It also measures societal progress, the living conditions that ordinary people experience. As part of the Better Life Initiative, the OECD thus aims to contribute to the credibility and accountability of public policies as well as the functioning of democracy. To measure well-being, the framework uses various dimensions. Dimensions for current well-being, such as health, work, income and knowledge. And dimensions for future well-being: social capital, human capital, economic capital and natural capital.

“To measure well-being, the OECD uses various dimensions. For future well-being, these dimensions are social capital, human capital, economic capital and natural capital. But shouldn't we add a fifth dimension: digital capital?”

But what if we added a fifth dimension: digital capital? Although I am not sure whether ‘digital’ is the correct term, let me in this keynote use ‘digital capital’ as a lens: a lens to assess the opportunities and risks of the digital transformation, and in particular AI, and to link it with public values.

Such a lens would show us three things. First, it would show us once again the importance of using a broader perspective than merely one discipline can offer – and, when applied to policy making, than one ministry or one European Union Directorate-General can offer as well as handle.

Second, the lens helps us systematize and measure what AI offers, as well as what effects it has, as an instrument for facilitating well-being.

Third, it offers us a glance at a complex but important question for further thinking: how to get a better grip on – and that includes: how to measure – the second- and third-order effects of AI on individuals and society, and thus the systemic effect of AI on well-being?

In the remainder of my talk, I will discuss in more detail these three perspectives offered by the lens of digital capital.

Firstly, the need for a broader perspective. In Dutch policy-making and research, the OECD term ‘well-being’ is translated into ‘broad prosperity’. In an overarching research initiative of my own university – Tilburg University – broad prosperity is coined as ‘Broad is Better’. In looking at AI, we acknowledge of course the tremendous results of technological innovation. We could not do without the first tentative steps in research areas such as machine learning, natural language processing or speech recognition. But once technology is used and embedded in society, other areas become relevant – I should say: important – as well.

“There is a close relationship between public values and well-being; realizing broad prosperity in an algorithmic society implies taking public values into account.”

Throughout history, public values have steered and conditioned the actual use of technological innovation. The history of earlier technologies teaches us that adaptations in the environment – think of the grid system for electricity and public roads for automobiles – are not the only thing crucial to making a technology work in practice. What is also needed is public acceptance in society, for which public values are crucial. Realizing broad prosperity in an algorithmic society implies taking public values into account. And thus there is a close relationship between public values and well-being: between perceptions of public values and well-being, and between the actual effects of AI on public values on the one hand and on well-being and prosperity on the other.

With AI now increasingly present throughout people’s daily lives, the challenge is to enrich our knowledge of what exactly well-being is in an algorithmic society. What does it imply when it comes to the interaction between AI innovation and public values? With AlgoSoc we aim to play our role in enriching academic knowledge on what could be called a well-being-based algorithmic society: knowledge on perceptions of well-being at the individual level as well as at the level of society at large.

“With AlgoSoc we aim to play our role in enriching academic knowledge on what could be called a well-being based algorithmic society.”

But we also intend to inform politics and policy makers with the insights gained. Let me therefore wrap up my observations on the first perspective – broad is better – with a final word on policy making and AI. As mentioned, the perspective of well-being – albeit not yet related to AI – is explicitly on the radar of different governments. The Dutch cabinet, for example, while monitoring policy and budget decisions, focuses on several priorities in the field of broad prosperity that follow from the coalition agreement. What if one of these priorities were digitalization, including AI? Aware of how difficult it is for policy makers to join efforts across different ministerial departments and governmental organizations when it comes to AI-related policy making, I wonder: would the umbrella of broad prosperity offer an opportunity to take additional steps in structurally integrating certain AI-related policy ambitions? I do not know the answer. But I do see that for the seven priorities chosen by the Dutch cabinet Rutte IV (among others social security, health, the democratic legal order, and sustainability) steps towards a coordinated and broader perspective have indeed been taken. Perhaps for the very reason that it is a somewhat familiar instrument in politics and policy making, broad prosperity could act as a facilitating instrument for more coordinated AI policy making as well.

Let me now turn to the second perspective: digital technologies – AI – as an instrument for realizing well-being and broader prosperity. Here, the work of the OECD can inspire us again. During the past decade the OECD started using different indicators to measure the digital-technologies component of well-being. Among these indicators are:

  • governments providing equal internet access to all,
  • ensuring an inclusive use of digital technologies, and
  • means to facilitate a user-driven approach to improve the use of digital instruments and online services.

An OECD study presenting detailed evidence on the opportunities and risks associated with the digital transformation (based on 33 indicators) is the 2019 report How’s Life in the Digital Age? It offers a rich overview of how different countries capitalize on digital technologies. But in combination with data on, among other things, social demographics, the OECD's work tells us much more. Not surprisingly, it shows that the risks of the digital transformation fall more heavily on people with lower levels of education and skills, and that its opportunities may not benefit all. While mobile phones have surely improved the living conditions of the world’s poor (providing connections to people who had none before), the growth of large digital companies with high capitalization has also contributed to wealth concentration at the top. In the health field, the OECD reports that the use of expensive technologies may also contribute to higher inequalities. In other words, while the digital transformation offers people opportunities to attain higher levels of well-being, it also confronts societies with a risk of higher inequalities in many well-being outcomes. Although the OECD does not report specifically on AI, we can certainly expect that similar conclusions can be drawn here.

And there is more to uneven distribution than merely the benefits and risks of AI instruments as such. What is important to realize is that some people may be better able than others to leverage the digital impact on public values. Certain groups will, for example, be more exposed to AI-related risks of privacy invasion or disinformation than others. Differences can also be seen in the extent to which people are able to capitalize on public values such as autonomy and personal freedom. And, of course, through specific interventions, governments may help mitigate these effects. But conversely, policies that fail to recognize a potential uneven distribution may leave certain people behind, exposed to the risks of the algorithmic society.

The OECD concludes that the evidence base needed to assess the well-being impacts of the digital transformation should be expanded. I cite from a report: “Harmonised data on many aspects of the digital transformation are currently lacking. This limits research on key impacts, such as the impacts of online networking on people’s social lives, the mental health effects of extreme Internet use, or the effects of automation of jobs and earnings. National Statistical Offices, other data collectors, researchers and policy analysts should design and implement new instruments to better capture the well-being impacts of the digital transformation.”

“With AI instruments being used in these sectors, it becomes crucial to not only collect harmonized data on how the use of these instruments facilitates well-being, but also collect data on the role public values play in this respect.”

Health, media and justice are key sectors for broad prosperity and well-being. With AI instruments being used in these sectors, it becomes crucial not only to collect harmonized data on how the use of these instruments facilitates well-being, but also to collect data on the role public values play in this respect. In other words, we also need insights and data on which public values are perceived as important by citizens confronted with the justice sector, by patients, and by news consumers. Are these indeed the same public values as those discussed in academia or by interest groups? It might very well be that citizens prioritize different public values in relation to AI than those we as academics usually discuss in our work, or than those NGOs advocate as crucial. In other words, which public values does the broader society see as most important when it comes to well-being and the use of AI instruments?

In the upcoming years the AlgoSoc program will contribute to the required expansion of the evidence base, in particular with our research on effects in the three sectors we focus on: media, health and justice. Our AlgoSoc Longitudinal Survey Panel, other survey work and experiments will also feed us a rich amount of insights and data. I expect us to gain more detailed data on perceptions, use and effects of public values in the three sectors we study – data that allow us to say something about how public values are distributed among different groups of people. Are certain (groups of) people indeed better able to leverage opportunities while mitigating risks? Do we see differences between the sectors studied? And can we see patterns in the ability of institutions and individuals to capitalize on public values when AI instruments are being used?

Dear audience, let me turn to the third and final perspective. What if we look through the lens of well-being in order to get more solid information on the second- and third-order effects of AI on individuals and society? For there is more to say about the effects of AI on well-being than what we see when regarding AI merely as an instrument.

All of us present here today realize that by regarding a technology as merely a more efficient version of something familiar, we underestimate its ultimate impact on society. Although we are unable to foresee the true impact of AI, because its embedding in society is to a large extent an unpredictable process, we know there will be qualitative and even systemic effects. An algorithmic society will be transformed: not only with new instruments, products and services, but also with new ways of thinking and working, new principles of organization, and changing and new power relations.

“At some point in the future AI will grow a mind of its own. A mind of its own that needs to be sensitive to public values. A mind of its own that needs to reason and help decide while recognizing as well as balancing public values.”

Programmable infrastructures, for example, will affect power relations. With platforms, the boundaries between objects, between user and platform, and even the walls of our homes have become porous due to interconnection and intermingling. Social media expose citizens to deepfakes, influencing their opinions and contributing to a polarization of political views. Disinformation misleads people in their worldviews and has the potential to destabilize democracies and societies. Harari argued that AI may even “undermine our basic understanding of human civilization, given that our cultural norms, from religion to nationhood, are based on accepted social narratives. Human beings pick up emotional cues from each other, enabling them to civilize their behavior.”

These systemic effects ultimately relate to the prosperity of societies and the well-being of their citizens. I have just mentioned a few systemic effects that potentially undermine prosperity and well-being. But there is crucial potential as well. Let me illustrate this for the justice sector. Today, in many countries around the world, there are more lawyers than ever before. Still, the law is very far from accessible to ordinary people. This has a profound effect on both the quality of their lives and their position in society. What is more, the accessibility of the justice system has an impact on promoting social order and on communicating and reinforcing public values and norms. With AI no longer restricted to simple processes and decisions, we will be able to deploy the technology for decisions that until now have been considered the exclusive domain of the human mind. A justice sector that embraces the algorithmic society has the opportunity to provide legal services that are often out of reach for the vulnerable members of our society. I personally do not believe that AI will replace judges (if only because of the EU AI Act). But I do see potential for a future where humans and machines cooperate and provide better, quicker and more just legal outcomes than judges alone can today. Adding to the human intelligence of judges, AI has the potential to recall case-related information flawlessly, to multitask and operate without interruption, to filter through vast amounts of earlier court cases, relevant legal documents and records, and to help prepare judgements.

But this also means that at some point in the future AI will grow a mind of its own. A mind of its own that needs to be sensitive to public values. A mind of its own that needs to reason and help decide while recognizing as well as balancing public values. But what does such a mind look like? What does it run on? Training AI systems to be sensitive to public values requires not only that we are able to conceptualize and articulate public values; we also need to know more about how these values are perceived and balanced in society. Again, which values are considered more important than others? We will not be able to properly train AI systems if we keep considering the potential set of public values as merely a large and rather indistinct set of values in a random order of priority. Nor will we be able to properly train AI systems if we ignore how people in society prioritize public values in different contexts. And if our goal is to realize an algorithmic society with AI having a mind of its own that is sensitive to public values, this requires evaluation standards and indicators to somehow measure its level of attainment. But public values are difficult to quantify, which hampers their use as an evaluation metric. More generally, the challenge I see is this: how to measure the systemic effects of AI on well-being and broader prosperity? How to get a clearer picture of the qualitative effects of the transformation to an algorithmic society? How to measure the effects of governance and regulatory initiatives on guaranteeing and realizing public values?

“What if we were to measure the algorithmic society against the bar of digital capital, being society’s ability to protect citizens against arbitrariness, against abuse of power and against the law of the strongest? Being part of building this digital capital is what AlgoSoc is in essence about.”

Let me conclude. The year 2023 has shown that AI is advancing at an unprecedented pace – not merely as an acceleration in the deployment of an instrument that affects well-being. We also find ourselves on the brink of fundamental societal transformations, many of which we cannot foresee: systemic changes that also affect the well-being of individuals and the prosperity of society. To return once more to the OECD: broad prosperity is often defined as “the quality of life in the here and now and the extent to which this is (or is not) at the expense of that of later generations and/or of people elsewhere in the world”. In reflecting on this definition, I was thinking: should digital capital not also be a dimension that the OECD, policy makers and researchers need to reflect on? In other words, a fifth dimension to be taken into account in addition to the key dimensions now used in policy making, statistics, academia and elsewhere: social capital, human capital, economic capital and natural capital. The reason for putting this question on the table is that merely looking at digital technologies as an instrument in light of well-being does not suffice. There is much more to it.

What if we were to measure the algorithmic society against the bar of digital capital, being society’s ability to protect citizens against arbitrariness, against abuse of power and against the law of the strongest? Digital capital, being society’s ability to expressly and consistently rely on public values such as inclusion, freedom, tolerance, responsibility and the prohibition of discrimination. Digital capital, being society’s ability to foster inclusiveness in the use and role of public values and to prevent the algorithmic society from compounding existing socio-economic inequalities. Digital capital, also, as society’s ability to conceptualize, implement and use meaningful ways to achieve an algorithmic society that fosters true and lasting well-being and prosperity.

Being part of building this digital capital is what AlgoSoc is in essence about. With its four central research questions on ecology, values, effects and governance. Focusing on three sectors where public values are crucial – media, health and justice. With inspiring colleagues. Bringing together the best of many different worlds. With this, AlgoSoc is a form of digital capital as well.
