AlgoSoc / Publication
December 07, 2023

Observe, inspect, modify: Three conditions for generative AI governance

In a world increasingly shaped by generative AI systems like ChatGPT, the absence of benchmarks to examine the efficacy of oversight mechanisms is a problem for research and policy. What are the structural conditions for governing generative AI systems? To answer this question, it is crucial to situate generative AI systems as regulatory objects (Fisher, 2014): material items that can be governed. On this conceptual basis, AlgoSoc PI Prof José van Dijck, Dr Fabian Ferrari and Prof Antal van den Bosch, in their recent article Observe, inspect, modify: Three conditions for generative AI governance, introduce three high-level conditions to structure research and policy agendas on generative AI governance: industrial observability, public inspectability, and technical modifiability. Empirically, they explicate these conditions with a focus on the EU’s AI Act, grounding the analysis of oversight mechanisms for generative AI systems in their granular material properties as observable, inspectable, and modifiable objects. These three conditions, the authors argue, amount to an action plan that helps us perceive generative AI systems as negotiable objects, rather than as mysterious forces that pose existential risks to humanity.

Across the world, governments, from democratic to authoritarian, are now confronted with the complex challenge of setting up oversight mechanisms for generative AI systems. Consider Italy, which, in response to concerns about violations of user data privacy, imposed a temporary ban on ChatGPT in early 2023 (Satariano, 2023). Or take China, which has rolled out some of the world’s earliest generative AI regulations. China’s Interim Administrative Measures for Generative AI Services, which came into effect on 15 August 2023, combine several oversight instruments. Generative AI service providers are required to file their algorithms for a security assessment, which authorities list in an “algorithm registry” (Sheehan, 2023). Other instruments include disclosure obligations regarding the training data sets used to develop generative AI applications and the mandatory technical integration of watermarks for AI-generated content. Such measures show that the toolkit of oversight approaches is evolving rapidly in response to the growing sophistication of generative AI systems.
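To make these oversight instruments more concrete, here is a minimal sketch of what a single filing in such an algorithm registry might record, assuming a schema derived only from the measures summarized above. The field names and values are hypothetical illustrations, not the actual schema used by Chinese regulators.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of an algorithm-registry filing. The schema is an
# illustrative assumption based on the three instruments described in the
# text (security assessment, training-data disclosure, watermarking); it is
# not the actual format used by any regulator.

@dataclass
class RegistryFiling:
    provider: str                      # who offers the generative AI service
    service_name: str                  # the consumer-facing application
    security_assessment_filed: bool    # filed for a security assessment?
    training_data_sources: list[str] = field(default_factory=list)
    watermarks_ai_content: bool = False  # technical watermark integration

# Example filing with invented values
filing = RegistryFiling(
    provider="ExampleCo",
    service_name="ExampleChat",
    security_assessment_filed=True,
    training_data_sources=["licensed corpus", "public web crawl"],
    watermarks_ai_content=True,
)
print(filing)
```

Even this toy schema shows how the three instruments, assessment, disclosure, and watermarking, can be captured as auditable record fields.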

Maneuvering through this volatile empirical landscape is challenging, not only because governance approaches are inconsistent at the international level, but also because new variations of generative AI technologies emerge every week. Without a clear conceptual framework to interpret these fleeting, quick-paced developments as empirical expressions of structural conditions for the governance of generative AI, it is difficult to navigate the global AI policy landscape and compare measures across countries. Consequently, there is a need to develop a conceptual vocabulary for high-level oversight conditions: categories that are not limited to one particular regulatory context, but can serve as a systematic basis for cross-country comparisons and inform research and policy agendas. In other words, what are the structural conditions for governing generative AI systems?

To address the governance question, Van Dijck, Ferrari and Van den Bosch introduce a nested structure of three oversight conditions for generative AI systems: industrial observability, public inspectability, and technical modifiability. The first condition refers to the macro-level observation of the AI industry, including the relationship between consumer-facing applications and computational infrastructure. The second condition points to the bitwise inspection of the technical underpinnings of generative AI systems, especially their underlying “foundation models” (Bommasani et al., 2021). The third condition addresses the extent to which those foundation models can be modified, for instance by embedding digital watermarks. Importantly, these conditions are interdependent. Only when they come together, the authors claim, do they form a coherent framework for research and policy upon which regulators can act. Observation of industry structures alone is futile without the ability to inspect and modify generative AI systems. Conversely, the inspection and modification of those systems never occur in a political-economic vacuum, but always within identifiable industry and government structures that need to be observed closely.
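To illustrate what technical modifiability can look like in practice, below is a minimal sketch of a toy “green list” text watermark, loosely inspired by published watermarking research; it is not the method proposed by the authors or mandated by any regulation. The vocabulary, secret key, and parameters are illustrative assumptions, and real schemes partition a model’s full vocabulary during decoding.

```python
import hashlib
import random

# Toy "green list" watermark sketch. All names and parameters here are
# illustrative assumptions; real schemes operate on full model vocabularies.

VOCAB = ["the", "model", "output", "policy", "system", "data", "text", "token"]
SECRET_KEY = "demo-key"  # hypothetical key held by the model provider

def green_list(prev_token: str, fraction: float = 0.5) -> set[str]:
    """Derive a pseudo-random 'green' subset of the vocabulary from the
    previous token, reproducible by anyone holding the key."""
    seed = int(hashlib.sha256((SECRET_KEY + prev_token).encode()).hexdigest(), 16)
    rng = random.Random(seed)
    return set(rng.sample(VOCAB, int(len(VOCAB) * fraction)))

def generate_watermarked(n_tokens: int, start: str = "the") -> list[str]:
    """Toy 'generation' that always samples from the green list, so the
    detector score below approaches 1.0."""
    out, rng = [start], random.Random(0)
    for _ in range(n_tokens):
        out.append(rng.choice(sorted(green_list(out[-1]))))
    return out

def watermark_score(tokens: list[str]) -> float:
    """Fraction of tokens found in their green list: near the green-list
    fraction (0.5) for ordinary text, near 1.0 for watermarked text."""
    hits = sum(tokens[i] in green_list(tokens[i - 1]) for i in range(1, len(tokens)))
    return hits / max(len(tokens) - 1, 1)

marked = generate_watermarked(20)
rng = random.Random(1)
plain = [rng.choice(VOCAB) for _ in range(21)]
print(f"watermarked: {watermark_score(marked):.2f}")  # 1.00
print(f"ordinary:    {watermark_score(plain):.2f}")   # around 0.5
```

The point of the sketch is that watermarking intervenes in the generation process itself, which is exactly the kind of modification the third condition asks foundation model providers to support.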

The crux of this framework lies in the fact that those three anchor points enable us to perceive generative AI systems as inherently negotiable objects, rather than seeing them as mysterious forces that pose existential risks for humanity. Their negotiability derives from the recognition that we can ground the study of oversight structures for generative AI systems in their granular material properties as observable, inspectable, and modifiable objects. Systems can be more or less observable, more or less inspectable, and more or less modifiable. As Rieder and Hofmann (2020: 3) argue, “unlike transparency, which nominally describes a state that may exist or not, observability emphasizes the conditions for the practice of observing in a given domain.” The same nuance applies to inspectability and modifiability. It is not a yes-or-no question but a more-or-less question: a negotiable field of action. Nonetheless, for this argument to carry empirical weight, it must be developed vis-à-vis a specific regulatory framework; it cannot remain an abstract claim. Therefore, in their recent article Observe, inspect, modify: Three conditions for generative AI governance, Van Dijck, Ferrari and Van den Bosch explicate the real-world implications of these three conditions with a focus on the EU’s proposed Artificial Intelligence Act, analyzing to what extent the regulation accounts for the granular properties of generative AI systems.

This text is a lightly edited copy of the introduction to the open-access article Ferrari, F., van Dijck, J., & van den Bosch, A. (2023). Observe, inspect, modify: Three conditions for generative AI governance. New Media & Society, 0(0). You can find it, and continue reading, here.

References

Bommasani R, Hudson DA, Adeli E, et al. (2021) On the opportunities and risks of foundation models. arXiv. Available at: https://arxiv.org/abs/2108.07258.

Fisher E (2014) Chemicals as regulatory objects. Review of European, Comparative & International Environmental Law 23(2): 163–171.

Rieder B, Hofmann J (2020) Towards platform observability. Internet Policy Review 9(4): 1–28.

Satariano A (2023) ChatGPT is banned in Italy over privacy concerns. The New York Times, 31 March. Available at: https://www.nytimes.com/2023/03/31/technology/chatgpt-italy-ban.html.

Sheehan M (2023) What the U.S. can learn from China about regulating AI. Foreign Policy, 12 September. Available at: https://foreignpolicy.com/2023/09/12/ai-artificial-intelligence-regulation-law-china-us-schumer-congress/.
