AlgoSoc / Opinion
January 10, 2024

On the importance of a more diverse and inclusive tech sector (and what we can do to help support it)

On the occasion of her nomination as one of the "100 Brilliant Women in AI Ethics", we interviewed Dr. Corinne Cath about the importance of a more diverse and inclusive tech sector. According to Cath, the tech sector is still heavily skewed: dominated by men, white people, and people from Western backgrounds. She emphasizes that this imbalance poses a significant problem.

With whom are you on the list of 100 Brilliant Women in AI Ethics?

The list is 100 women long, and all of them are leaders in their respective fields. There are many names among them that people in the academic community might recognize, such as Berhan Taye, Lama Ahmad, and Melissa Heikkilä. I mention these three because I have had the pleasure of working with them directly in the past, but I would encourage everyone to look at the full list. You can find it here: https://womeninaiethics.org/the-list/of-2024/

Why is such a list important?

The tech sector still skews heavily male, white, and Western. This is a problem. We know from research that this subsection of the population is often late to understand how AI technologies might impact communities of color, the Majority World, or minority groups. This list demonstrates both the need for and the incredible richness of the diversity that exists in the field of AI ethics. The list also makes it much harder to argue that "well, we just could not find any good women" for this panel on AI, or these technical roles in an AI team, or this policy position—which, sadly, is an excuse I still hear all too often. In my experience, it's women, non-binary, queer, and trans folks, as well as communities of color, migrant groups, and people from the Majority World who are doing some of the most cutting-edge work in the field, and that should be recognized. I appreciate the work the organization behind the list is doing to make that recognition real.

Is the problem of imbalance also recognized by the tech sector itself?

I would love to answer "yes" to this question. But if I am honest, I am not so sure. There certainly is a lot of lip service being paid to the importance of diversity along several axes, not just gender. Companies and universities are aware of the importance of being seen as caring, of signaling that "diversity and inclusion" matter. Yet when we look at company headcounts or who makes it to senior faculty, this commitment to diversity is not always reflected. This is of course a universal problem that exists beyond the field of AI ethics, but it stings a bit more given the discrepancy between what companies, and universities for that matter, publicly say and what they privately do.

I have experienced this discrepancy many times personally. For example, as a PhD student I supported my professor in his role as one of a few academics on the EU Commission's High-Level Expert Group on AI (HLEG), which was set up to develop early input for what is now EU AI regulation. After one corporate representative espoused the importance of diversity on stage, off-stage he asked me "whether I was my supervisor's secretary and if I could help him set up a meeting"—implicitly assuming that I, the young woman trailing around an Oxford professor, could not possibly be working on these issues as an academic.

What other initiatives are being developed in this regard and how can we take responsibility ourselves?

There is a plethora of initiatives being developed across different organizations and companies, some with more success than others. Getting this right requires much more than good pipelines into the sector, better recruitment, or good complaint mechanisms; it takes a fundamental shift in worldview. You need to be willing to move away from soft and malleable terms like diversity, and instead name and undo how implicit gender, racial, and other biases are present in your organizational structures and teams. Something that, as scholar Sara Ahmed reminds us in her groundbreaking work On Being Included, is too radical for many organizations.

What can we do ourselves? Well, that depends on who this "we" is. I do think there is a role for us, as scholars of AI systems in The Netherlands, to take our relative positions of power and privilege into consideration. What are we doing to make sure our research teams are safe and welcoming? How do we enforce our codes of conduct and take instances of discrimination seriously? What training should we follow to prevent biases from guiding our hiring decisions, how we treat complaints, or who we help make it up the academic ladder? Are we willing to take uncomfortable political positions, ones that might hurt our future job prospects, for the right causes? There is always more room for introspection on our positionality, action to use any comparative privilege we might have to ensure equitable cultures, and humility about how much we don't know because we've never had to experience it.
