Roadrunner: A Film About Anthony Bourdain — When Celeb Chef Meats AI

In his new documentary, Morgan Neville controversially uses AI to bring the late Anthony Bourdain’s words to life. But what’s all the fuss about?

Catriona Campbell
4 min read · Jul 30, 2021

In life, Anthony Bourdain could cook up a right storm in the kitchen. In death, the celebrity chef has cooked up a storm of a whole different flavour. Only it wasn’t really him who cooked up that storm. Or was it?

Of course, I’m referring to the uproar around Roadrunner: A Film About Anthony Bourdain. In his new documentary, film-maker Morgan Neville drops a pinch of artificial intelligence into his recipe, using the technology to construct a few lines of voiceover by Bourdain.

A lot of critics — including Bourdain’s ex-wife Ottavia — find Neville’s creative choice completely unpalatable, suggesting the end result amounts to little more than a deepfake. You’ve probably heard of deepfakes; if you haven’t, they’re AI-generated synthetic media that show people saying or doing things they never actually said or did. Do you recall the 2020 Channel 4 Christmas video of the Queen dancing atop a desk? That was a deepfake, although one made specifically to highlight the dangers of the tech!

Neville defends his decision as a “modern storytelling technique”, insisting that his ethics haven’t been diced into a million tiny pieces like some believe they have. Why does he think so? Well, although the dialogue in question is AI-generated just like a deepfake, it isn’t actually manipulated in the same way. Instead, we hear words Bourdain himself once wrote in an email to his friend, artist David Choe:

“My life is sort of shit now. You are successful, and I am successful, and I’m wondering: Are you happy?”

For the award-winning director of the 2018 Mister Rogers documentary Won’t You Be My Neighbor?, it all boils down to the distinction between putting words in someone’s mouth and bringing that person’s own words to life. In his mind, because Bourdain’s reading of the words out loud is fake while the words themselves are not, it’s an entirely different kettle of fish.

My opinion on the whole thing isn’t fully baked just yet. It’s getting there, with most of me inclined to side with Neville at present — especially seeing as he had allegedly already captured Bourdain speaking the contentious words and simply misplaced the original audio. But there are still some little dollops of nagging doubt around whether Neville’s actions are really acceptable.

In reality, is the use of AI in this context an innocent slice of cod? Or is it a piece of fatally ill-prepared pufferfish that Neville has us convinced is an innocent slice of cod? What are your thoughts? I posed the same question to my LinkedIn, Instagram and Twitter followers last week, and I received a couple of interesting answers: divisive, but interesting nonetheless.

One follower, who thinks this is a smart use of tech, argues: “If I wrote something, then I said it”, adding that a manipulated version of him, such as a deepfake, is far more dangerous than his real voice and words meshed together by AI. He closes by saying that “bringing back the voices of those we’ve lost fills me with emotion”, which reminds me of the recently patented Microsoft chatbot I wrote about earlier this year: a tool that would allow people to bring their deceased loved ones back to life, an equally divisive innovation!

Another follower asks whether Neville’s application of artificial intelligence in the documentary is “necessary for storytelling”. His answer? A decisive no. However, he concludes that what we’re seeing here isn’t bad as such, but rather a lesson for us all: misuse of AI technologies like this encourages us to create better rules governing their creation and application.

And on that, my opinion is baked to perfection. We’ve seen AI badly misused a lot in recent years, and there are justifiable fears that future abuses could be even worse. This is why, in April, the EU released its draft AI regulations, which represent the world’s first meaningful attempt at regulating artificial intelligence and establishing greater algorithmic fairness.

I’ll be covering the proposed new rules in more detail in next week’s blog, so come back then for more on those. Until then, feel free to share your thoughts on the Neville-Bourdain controversy in the comments below.

Catriona Campbell

Behavioural psychologist; AI-quisitive; EY UK&I Client Technology & Innovation Officer. Views my own & don't represent EY’s position. catrionacampbell.com