Using AI to Make Sure Queer Doesn’t Mean Fear

How designing for the minority can help protect the LGBTQ+ community and improve the internet for everybody

We’re nearing the close of Pride month, 30 colourful days when people come together in unity and solidarity to celebrate the queer community and how far LGBTQ+ rights have come. In fact, just yesterday I had the pleasure of presenting an Allyship award at the 2021 EY Unity LGBT+ Awards.

June (the month of the landmark 1969 Stonewall riots) may soon come to an end, but what doesn’t end is the need to continue to teach LGBTQ+ history, promote tolerance, challenge homophobia and campaign for equality and diversity in all areas of life.

Of course, in some places, there’s a great deal of scope for improvement. And I’m not just talking about places where queer people are still persecuted today. For the community, one massive problem area is the internet. This should be a safe space for folk across the world to connect with like-minded souls who understand their feelings and experiences. But thanks to the web’s worst keyboard warriors and trolls, the online sphere often looks more like a hornet’s nest of hate, cruelty, and abuse. This is the internet’s Achilles heel, and healing it for the LGBTQ+ community will help not only them but everyone.

Artificial intelligence can be both a problem and a solution here, capable of intensifying or lessening social discord. Let me explain.

In 2018, LGBTQ+ non-profit organisation GLAAD announced a collaboration with Jigsaw, a unit within Google’s parent company Alphabet that strives to ensure a safer internet and preserve the digital lifeline for the queer community. At the time, GLAAD found that social media platforms can accidentally block positive content. Why? Because content on marginalised groups usually attracts a lot of criticism, meaning the platforms’ algorithms perceive LGBTQ+ vocabulary negatively and can consequently censor and delete the content.
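To see how this happens, here’s a deliberately tiny illustration (not GLAAD’s or Jigsaw’s actual system, and the data is invented): a naive word-counting toxicity scorer. Because identity terms like “gay” show up disproportionately in comments that drew abuse and were labelled toxic, the model learns the word itself as a toxic signal and penalises affirming content too.

```python
from collections import Counter

# Hypothetical training data: identity terms appear mostly in
# abusive comments, so the labels carry human bias into the model.
training_data = [
    ("you are so stupid", 1),
    ("gay people are disgusting", 1),  # abuse targeting the community
    ("being gay is wrong", 1),
    ("have a lovely day", 0),
    ("great article thanks", 0),
    ("what a beautiful photo", 0),
]

toxic_counts, clean_counts = Counter(), Counter()
for text, label in training_data:
    (toxic_counts if label else clean_counts).update(text.split())

def toxicity_score(text):
    """Fraction of words seen more often in toxic than clean examples."""
    words = text.split()
    flagged = sum(1 for w in words if toxic_counts[w] > clean_counts[w])
    return flagged / len(words)

# An affirming sentence is penalised purely for containing "gay":
print(toxicity_score("proud to be gay"))   # flagged above zero
print(toxicity_score("proud to be here"))  # scores 0.0
```

Real moderation systems are vastly more sophisticated than this sketch, but the underlying failure mode is the same: if the training labels encode hostility towards a community, the model inherits it.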

So, GLAAD and Jigsaw want to change the way artificial intelligence understands LGBTQ+ content online, ensuring social media algorithms don’t adopt and perpetuate the same ugly human biases queer people face offline. Working together, the two organisations seek to build better public data sets and machine learning research resources, which will make online conversations more inclusive for the LGBTQ+ community.

A step in the right direction, without doubt, but is it enough? Probably not. The laudable efforts of GLAAD and Jigsaw stand out from…well…nothing much really. Their work should be part of a bigger movement to keep the queer community included as artificial intelligence advances, but sadly no such movement exists. Anthropologist Mary L. Gray believes that AI will, by default, “always fail LGBTQ+ people.” How so?

If you read last week’s blog on AI in the Navy, you might recall that I said these technologies work best in circumstances involving predictability and worst with those entailing unpredictability. The same applies here. Gray praises the beautifully dynamic nature of the LGBTQ+ community (and I guess she’s referring to things like the shifting language around gender identity). However, she argues that, because the community is constantly evolving, AI doesn’t get the chance to learn before changes occur. And so AI can never fully protect queer people.

For that reason, the LGBTQ+ community has to rely on itself and its allies, working harder together to guarantee inclusivity. More collabs like that between GLAAD and Jigsaw are needed for AI to accurately represent queer people, and I hope to see them very soon. Luckily, artificial intelligence and experts in the field are just as dynamic as the LGBTQ+ community itself, so we do have access to the resources and tools needed to do the right thing.

Doing the right thing includes using AI systems for the right reasons. For instance, a while back we heard about an algorithm that could accurately predict a person’s sexuality based on their face. The results of the Stanford University study support the idea that people are born gay and that sexual preference is not a choice, which is a win for the LGBTQ+ community. But what if that tech is misused to harm queer people in some way — parents using it on their kids, employers on staff, oppressive regimes on citizens?

Not only do we need to build better, human-centric systems fed on impartial data, but we also need to make certain they’re used in the fairest ways possible. We can accomplish this in various ways, including governance and regulation — a major thread in my upcoming book, AI by Design.

We’ll get there. And as we design for the minority, we know that everything works even better for the majority. I made a similar point in this blog on mainstream users enjoying products like voice recognition, originally designed for accessibility. Insights gleaned from GLAAD can be applied more broadly and help us build a better, positive, and inclusive internet.

The most important thing right now is that there’s a conversation around using AI to protect our queer friends, colleagues, and family members. The more of us who participate in that conversation the better. Will you join and be part of something life-changing and world-altering?

Behavioural psychologist; AI-quisitive; EY UK&I Client Technology & Innovation Officer. Views my own & don't represent EY’s position. catrionacampbell.com