The Role of AI in Suicide Prediction & Prevention

On my socials this morning, I proudly shared news of my sister-in-law Laura Campbell appearing on the Telegraph’s podcast Bryony Gordon’s Mad World.

The Suicide Prevention Manager for Govia Thameslink Railway, and the first of her kind at that, Laura joined the show’s host on World Suicide Prevention Day to speak on training railway staff to carry out suicide interventions at stations, share invaluable tools to support vulnerable people, and open up about the painful childhood experience that drew her towards this role.

Suicide is a major global public health problem, one only worsened by the surge in depression and anxiety precipitated by the pandemic. The latest data from mental health charity Samaritans shows that 6,002 people died by suicide in England, Wales and Scotland in 2020, and 209 in Northern Ireland in 2019.

Despite a decrease of 477 deaths between 2019 and 2020 in England, Wales and Scotland, and despite sustained action to raise awareness of suicide and treat people experiencing suicidal thoughts, rates have remained stubbornly high over the years. In some countries they have even risen: the United States saw a staggering increase of almost 25% between 1999 and 2014.

Suicide may be a complex and widespread issue, but it’s also a preventable one — even if global prevention efforts to date suggest otherwise. For this reason, my sister-in-law’s appearance on the podcast (and also on Sky News this morning) got me thinking about the contribution of artificial intelligence to the problem. And a little research demonstrates there’s a lot of potential for these technologies to make a significant dent in suicide rates.

Here’s why:

Many people who die by suicide actually speak to a medical professional in the days, weeks or months beforehand. This points to an extremely worrying problem with risk detection among the people best qualified and most trusted to detect risk, even when they are given clear opportunities to do so. Others don't engage with a medical professional about their worries, or with anyone for that matter, for fear of stigmatisation or forced hospitalisation.

Young people are more likely to reveal signs of suicidal intent on social media sites like Facebook and Twitter than to healthcare workers, highlighting differences in risk detection between age groups. Differences in risk detection, as well as in risk itself, also exist depending on a person's gender, ethnicity, socio-economic status, geographical location, and access to healthcare.

For instance, Samaritans found that suicide rates in the UK are higher among men than women, and higher among people living in deprived areas.

Combined, all of this highlights the need for thorough studies of suicide risk, which can be expensive, time-consuming and held back by challenges like researcher bias when traditional research methods are employed. That's likely why so few such studies have been conducted to date, and none that reduces risk to a sufficient degree.

Enter AI…

The tech could have a huge impact on the accurate prediction of suicide risk and the development of effective suicide prevention programmes.

By exploiting vast datasets, researchers can use AI-driven predictive modelling techniques to achieve far more accurate suicide risk predictions. One example is the application of machine learning to electronic medical records.
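To make this concrete, here is a minimal sketch of what such a model might look like. Everything in it is hypothetical: the features (visit counts, a prior-diagnosis flag, prescription counts) and the data are invented stand-ins for the kinds of de-identified signals researchers extract from medical records, and a toy logistic regression stands in for the far more sophisticated models used in real studies. It is an illustration of the technique, not a clinical tool.

```python
# Illustrative sketch only: a toy risk model trained on synthetic,
# EHR-style features. All feature names and coefficients are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)
n = 2000

# Hypothetical de-identified features one might derive from records.
visits = rng.poisson(3, n)          # visits in the past year
prior_dx = rng.integers(0, 2, n)    # prior relevant diagnosis (0/1)
rx_count = rng.poisson(2, n)        # number of active prescriptions
X = np.column_stack([visits, prior_dx, rx_count])

# Synthetic outcome: by construction, risk rises with each feature.
logits = -4.0 + 0.3 * visits + 1.2 * prior_dx + 0.2 * rx_count
y = rng.random(n) < 1 / (1 + np.exp(-logits))

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y)

# Fit the classifier and score held-out patients by predicted risk.
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
probs = model.predict_proba(X_test)[:, 1]
print(f"Held-out AUC: {roc_auc_score(y_test, probs):.2f}")
```

The output of interest is a ranked risk score per patient rather than a hard yes/no label, which is how such models are typically used: to flag people for human follow-up, not to make decisions on their own.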

For anyone who closely follows the well-documented progress of artificial intelligence in the press, this may well ring a bell. That's because we're talking about tools already used successfully to predict risk in a number of other medical fields: mortality in premature babies, sepsis in patients with severe infections, rare clinical outcomes, and more.

Although the use of artificial intelligence to predict and prevent suicide is still in its infancy, it all looks very promising. The number of research studies in the area is fast increasing, and so I feel super confident that AI can and will help save lives.

But like I always say, as with the application of such technologies in any field, it’s important to stay within ethical and legal boundaries — especially when data as sensitive as that contained in electronic medical records is involved.




Behavioural psychologist; AI-quisitive; EY UK&I Client Technology & Innovation Officer. Views my own & don't represent EY’s position.

Catriona Campbell
