Why we need diversity & agreement in AI

In my book AI by Design, I advocate for diverse teams and a unified approach in developing AI. Here’s why.

Catriona Campbell
4 min read · Jun 20, 2023

At our London Bridge headquarters, I recently welcomed Toju Duke, Google’s Responsible AI Programme Manager and founder of Diverse AI; Kerensa Jennings, BT’s Group Director of Data Platforms; and our very own Frank De Jonghe for a panel discussion on diversity and agreement in artificial intelligence (AI).

Generative AI tools like ChatGPT and Midjourney have taken the world by storm. Their ability to undertake tasks once exclusive to humans, and in the blink of an eye, is helping more people understand that AI will soon dominate every aspect of life.

The panel agreed that we need ethical standards developed by diverse multidisciplinary teams working in a unified way. Why though? I offer some answers in the future-back plan for living with AI that I set out at the end of my book AI by Design. Here are a couple of tweaked excerpts:

Diverse teams

A common challenge in creating any standards or principles is having the right people around the table. In addition to a mix of technical and industry experts, public policymakers, and lawmakers, the table should also include representatives of diverse demographics. Having users of AI technology in the room is essential to developing relevant and credible answers.

The start of any design thinking exercise is to empathise with your users. To really connect with and understand them, you first need to ask who the users are. In the case of AI, the answer is everyone. That isn’t the most helpful response, but it demonstrates that creating AI principles requires effort.

Suppose we don’t assemble broadly diverse user groups?

Then people design in their own image, and we become destined to replicate our current problems in future AI design. Because the development of AI principles usually originates in universities or associated institutes, there is always a core of academics and, where government funding is involved, politicians. These groups then co-opt technical experts from Big Tech and a smattering of others interested in these things.

But three things appear to be missing:

  1. Citizens
  2. Children
  3. The developing world

The developing world has a relatively small voice in global affairs or the running of financial markets. Only when a poorer country becomes a more prosperous, vigorous competitor is its voice heard. We need to skip this phase and involve citizens, children, and poorer societies now, so that we build something better.

A unified approach

Leanne Pooley, director of We Need To Talk About AI, once said:

“We don’t have a plan as a species, and we need to have a plan. But before we can create one, we need to get together as a species and have a think about where we should even start.”

A unified approach worked well for Covid-19 vaccine research, which came at us from nowhere and saw multiple stakeholders galvanised around a single challenge. Although we soon reverted to nationalist vaccine distribution, we proved that we can work together across organisations and research institutes, wherever they are in the world, to run vaccine trials and deliver vaccines in a short space of time.

Could we achieve the same with AI? In the book, my original answer was:

“In time perhaps, but probably not right now. Countries need to see AI as an existential threat like Covid-19 for that to work. At the moment, not all governments are looking at AI in that way. Because the danger isn’t specific to the moment we’re in right now, it’s easier — and right — to look at the problems we face today. The reality is that we will never successfully come together until the threat is looming right here in front of us.”

What a difference a year makes, huh? The capability of generative AI has focused attention on the global need for an immediate, unified response. So, today, I say: yes, we can achieve the same with AI.

My approach is to let the players play, as it will take years to reach a complete global agreement. In the interim, we should collectively decide on one framework with the countries and companies that want to join. Early adopters of the framework will be passionate, more engaged, and supportive. Over time, consumers will come to trust the firms that follow it. The potential loss of sales for organisations remaining outside such a framework (especially if it is adopted as a government supplier standard) could create a virtuous circle. Together we can start to address the bigger questions instead of rewriting the old ones.


Catriona Campbell

Behavioural psychologist; AI-quisitive; EY UK&I Client Technology & Innovation Officer. Views my own & don’t represent EY’s position. catrionacampbell.com