Tensions Between the Requirements of Trusted AI
What kind of conflicts arise between the requirements of trusted AI? How do we strike a balance? When can’t we strike a balance?
I’ve just wrapped up at this year’s Mobile World Congress (MWC) in Barcelona, where I had the pleasure of moderating two fascinating and insightful panel discussions: one called Open Innovation and CVC and the other In AI We Trust?
In today’s blog, I’ll focus on the latter, where I was joined by Capgemini’s Yanick Martel, Telefonica’s Maria Loza, Kaspersky’s Vladislav Tushkanov, and Microsoft’s David Carmona to explore how organisations can increase trust in artificial intelligence. During the discussion, we spoke of the requirements of trusted AI:
- No bias
- Resilience
- Explicability
- Transparency
- Performance
Since the discussion, I’ve been left wishing the panel and I had had more time to dig deeper into certain ideas, one being the potential conflicts between the above requirements, all of which are equally important.
Such tensions can arise in much the same way they do between the principles of the various ethical frameworks developed for AI systems across the globe. We need to keep them in mind when applying the requirements, especially when the systems in question affect humans and animals.
There are far too many conflicts to cover in a blog post like this, so let’s just stick to a couple of examples.
Common tensions
I caught up with EY’s Global Data and AI Leader Beatriz Sans Saiz, who joined me at MWC 2022. She explains: “The greater tensions are between explicability and performance.”
Explicability is one of the most important ways of ensuring trust in AI. Linked with transparency (which relates to keeping an AI model’s decisions and outputs in the open), explicability is about making sure all of those decisions and outputs are as explainable as possible to the people whose lives they impact in some way.
Basically, to fully trust AI systems, people need to understand what’s going on under the bonnet — why they’re doing what they’re doing, their strengths and weaknesses, what we can expect of them in future, and so on. That said, this isn’t always possible.
According to Beatriz: “Not always but quite often, the algorithms with greater performance tend to be quite a black box.”
Black box algorithms are those where it is either impossible or extremely challenging to state with certainty exactly why they have arrived at a given decision or output. And that’s problematic in terms of explicability.
Black box algorithms also create a separate tension between explicability and resilience. They can make it tricky to properly assess the resilience of a system — in other words, its ability to adapt to risks like hacking, which could cause it to make poor or dangerous decisions, or even shut down altogether.
In the past, this has led to a number of major AI failures: crashes involving self-driving vehicles, racially or sexually biased facial recognition models, and so on.
Solutions to conflicts
As with the conflicts between ethical principles, those between trusted AI requirements can be incredibly difficult to solve. But where to look for solutions in the first place? Probably not the requirements themselves.
At the end of the day, the requirements are much like ethical principles: while they offer some general guidance for building trusted AI, they ultimately amount to little more than sweeping ethical recommendations. For that reason, those working with AI would find it difficult to develop effective solutions using them as a basis. Instead, they should tackle tensions rationally, using methods rooted in evidence rather than gut instinct.
Returning to the conflict between explicability and performance, Beatriz proposes one such rational, evidence-based solution:
“In these cases, a good solution is to combine techniques so you get the highest performance from a set of algorithms, but in parallel you apply other types of algorithms to explain and understand the outcome and where it comes from.”
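To make that idea concrete, here is a minimal sketch of one way the “combine techniques” approach could look in practice: train a high-performing model purely for accuracy, then fit a simple, inherently interpretable surrogate model on its predictions to approximate and explain its behaviour. The dataset, model choices, and depth limit below are my own illustrative assumptions, not anything Beatriz or the panel prescribed.

```python
# Illustrative sketch: pair a high-performance "black box" model with an
# interpretable surrogate that explains (approximately) how it behaves.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0
)

# 1. Black-box model, optimised purely for predictive performance.
black_box = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)
print("Black-box accuracy:", accuracy_score(y_test, black_box.predict(X_test)))

# 2. Interpretable surrogate trained to mimic the black box's decisions.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X_train, black_box.predict(X_train))

# 3. Fidelity: how closely the surrogate reproduces the black box on new data.
fidelity = accuracy_score(black_box.predict(X_test), surrogate.predict(X_test))
print("Surrogate fidelity:", fidelity)

# 4. Human-readable rules approximating the black box's behaviour.
print(export_text(surrogate, feature_names=list(data.feature_names)))
```

The surrogate never replaces the black box in production; it simply gives stakeholders a readable approximation of what the high-performance model is doing, and the fidelity score tells you how far that approximation can be trusted.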
We can also turn to alternative explainability measures, including traceability and auditability. These may not always be necessary, as the extent to which explainability is required depends on the circumstances and on the gravity of the impact its absence could have.
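As a hypothetical illustration of what a traceability measure might look like, the snippet below logs each model decision together with its inputs, model version, and timestamp so that decisions can be audited after the fact. The field names and the loan-approval scenario are assumptions made purely for the example.

```python
# Hypothetical traceability sketch: one auditable record per model decision.
import json
import time
import uuid


def log_prediction(model_version, features, prediction, log_path="audit_log.jsonl"):
    """Append a single auditable record describing one model decision."""
    record = {
        "id": str(uuid.uuid4()),          # unique identifier for this decision
        "timestamp": time.time(),          # when the decision was made
        "model_version": model_version,    # which model produced it
        "features": features,              # the inputs it saw
        "prediction": prediction,          # the output it gave
    }
    with open(log_path, "a") as f:
        f.write(json.dumps(record) + "\n")


# Example usage (hypothetical loan-approval decision).
log_prediction("credit-model-v1.2", {"income": 42000, "age": 31}, "approved")
```

Even when a model’s inner workings remain opaque, a record like this lets an auditor reconstruct what was decided, about whom, and by which version of the system.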
Whatever the solution, it’s crucial that the system, on the whole, does not undermine the basic rights of those it affects, and that the overall benefits of any compromise outweigh the individual risks.
Unfortunately, there will always be scenarios where no ethically justifiable balance can be struck. Some human rights (such as the right to ‘freedom from torture and inhuman or degrading treatment,’ as conferred by the Human Rights Act 1998) and ethical principles linked with those rights are simply inviolable.
For instance, when a conflict between explicability and resilience threatens security or privacy, Beatriz is unequivocal: “Maintaining privacy and security will always come first.”
…
I could go on forever about the conflicts between the requirements of trusted AI. However, I’m afraid I have a flight to catch back to the UK, where I return from MWC 2022 with some amazing ideas that I’m impatient to discuss with my team!