Mind over machine — understanding the challenges of generative AI

At Innovation Realized in Focus in London, we explored both the transformational potential and the challenges of generative AI. In this blog, I focus on the latter.

Catriona Campbell
4 min read · Jul 10, 2023

We recently had the pleasure of hosting Innovation Realized in Focus at our London Bridge HQ, where we convened senior leaders to explore the immense transformational potential of generative AI tools like OpenAI’s ChatGPT and how businesses can unlock it.

Soon after, our resident AI guru Harvey Lewis published a fantastic piece looking at the steps organisations should take to get the most out of generative AI. Although it “serves as a beacon, guiding us towards a level of productivity and creativity enhancement not seen since the first Industrial Revolution,” Harvey says, “it does present challenges, notably in forthcoming changes to the regulatory landscape around AI and the need for equally rapid corporate governance.”

In this blog, I’d like to consider the challenges of AI regulation and corporate governance in greater detail, but let’s start with another challenge: trust.

Trust

If businesses are to harness generative AI, they need people to trust it. Trust can be an elusive beast, especially in relation to general-purpose AI models such as ChatGPT, where the stakes are higher than with, say, the algorithms making Netflix recommendations. But we only need to look to history to see that it can be captured.

As I say in my book, AI by Design, although the automated elevator had been around since the early 1900s, fear meant people preferred to take the stairs. The innovation got its big break during the 1945 elevator operators’ strike in NYC, which cost the city an estimated $100 million. Suppliers and property developers worked hard over the following four decades to build trust in the automated elevator, and looking at its ubiquity today, their efforts clearly paid off.

There are three ways we can build trust:

  1. Education and skills development: Helping people understand the importance and relevance of AI through training is critical.
  2. Risk assessment of all AI systems: Organisations could keep a record of all AI systems, or systems using AI, with every application subject to risk assessment. High-risk items would be flagged, reviewed internally, and replaced if need be (a minimal sketch of such a register follows this list).
  3. Independent AI assessment: A company’s datasets and systems could be subject to third-party assessment, which would provide much-needed assurance.
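
To make the second point concrete, here is a minimal sketch of what a register of AI systems with risk flagging might look like. Everything in it, including the record fields, the risk tiers, and the example entries, is an illustrative assumption rather than an established standard or methodology:

```python
from dataclasses import dataclass
from enum import Enum

# Illustrative risk tiers; a real programme would align these with a
# recognised framework, such as the EU AI Act's risk categories.
class RiskTier(Enum):
    MINIMAL = 1
    LIMITED = 2
    HIGH = 3

@dataclass
class AISystemRecord:
    name: str        # hypothetical identifier for the system
    owner: str       # team accountable for the system
    use_case: str    # what the system is used for
    risk_tier: RiskTier
    reviewed: bool = False  # set to True once reviewed internally

def flag_for_review(register: list[AISystemRecord]) -> list[AISystemRecord]:
    """Return every high-risk system that has not yet been reviewed."""
    return [r for r in register if r.risk_tier is RiskTier.HIGH and not r.reviewed]

# Example register with one high-risk system that gets flagged.
register = [
    AISystemRecord("chat-assistant", "customer-ops", "support triage", RiskTier.LIMITED),
    AISystemRecord("credit-scorer", "lending", "loan decisions", RiskTier.HIGH),
]
for record in flag_for_review(register):
    print(f"Review needed: {record.name} ({record.use_case})")
```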

Corporate governance

Trust goes hand in hand with ethics. Given that with AI we’re trying to replicate humanity, the technology should embody human ethics and values. As part of their corporate governance programmes, I encourage large organisations to establish an AI ethics committee.

In a recent blog, I highlighted a common challenge in AI ethics: ensuring the right people have a seat at the table. It’s important that members of this group come from a range of backgrounds, including law, ethics, technology, science, engineering, and philosophy. It’s equally important that those individuals represent diverse demographics; otherwise, we run the risk of people designing AI in their own image and replicating today’s problems in tomorrow’s systems.

The committee would be tasked with advising on ethical matters relating to AI, as well as developing ethical guidelines. And there’s no need to start from scratch, given the existing wealth of fantastic ethics guidelines available.

Regulation

The EU is the clear frontrunner in AI regulation with its AI Act. Once enacted, this will be the world’s first comprehensive regulatory attempt to ensure AI systems developed and used in the EU are safe, transparent, traceable, and non-discriminatory. Over two years in the making, it takes a risk-based approach, classifying AI systems according to the risk they pose to users, with different risk categories subject to varying degrees of regulation.

The UK government published a pro-innovation white paper on AI regulation earlier this year and will host a global summit on AI in the autumn, and the US recently published a framework for developing AI regulation that prioritises goals like security, accountability, and innovation. Canada, Japan, South Korea, Singapore, and China are also working on their own AI regulation.

However, these varied approaches to AI regulation have further complicated matters for international organisations already striving to understand the benefits and risks of generative AI. As I say in this blog, in an ideal world it would be great to see a more unified strategy: a collective framework developed as countries work on their own individual approaches.

We covered these topics and others during Innovation Realized in Focus, a series of events that EY curated across multiple cities around the globe to build understanding, encourage collaboration and help realise new opportunities for business value and positive human impact. Get in touch if you’d like to discuss how EY can help your organisation explore the impact of generative AI and unlock its potential.


Catriona Campbell

Behavioural psychologist; AI-quisitive; EY UK&I Client Technology & Innovation Officer. Views my own & don’t represent EY’s position. catrionacampbell.com