The Role of AI in Suicide Prediction & Prevention

Suicide is a widespread public health problem, but it's also a preventable one. How can we leverage artificial intelligence to predict risk and save lives?

Catriona Campbell
Sep 10, 2021

On my socials this morning, I proudly shared news of my sister-in-law Laura Campbell appearing on the Telegraph’s podcast Bryony Gordon’s Mad World.

The Suicide Prevention Manager for Govia Thameslink Railway, and the first of her kind at that, Laura joined the show's host on World Suicide Prevention Day to talk about training railway staff to carry out suicide interventions at stations, share invaluable tools for supporting vulnerable people, and open up about the painful childhood experience that drew her towards this role.

Suicide is a widespread global public health problem, one only made worse by the surge in depression and anxiety precipitated by the pandemic. The latest data from the mental health charity Samaritans shows that 6,002 people died by suicide in England, Wales and Scotland in 2020, and 209 in Northern Ireland in 2019.

Despite a fall of 477 deaths between 2019 and 2020 in England, Wales and Scotland, and despite action to raise awareness of suicide and to treat people experiencing suicidal thoughts, rates have remained stubbornly high over the years. They've even risen in other countries, such as the United States, which saw a staggering increase of almost 25% between 1999 and 2014.

Suicide may be a complex and widespread issue, but it's also a preventable one, even if global prevention efforts to date suggest otherwise. For this reason, my sister-in-law's appearance on the podcast (and also on Sky News this morning) got me thinking about what artificial intelligence could contribute to solving the problem. And a little research suggests there's a lot of potential for these technologies to make a significant dent in suicide rates.

Here’s why:

Many people who die by suicide actually speak to a medical professional in the days, weeks and months beforehand. This points to an extremely worrying gap in risk detection among the people best qualified, and most trusted, to detect risk, especially when they're given clear opportunities to do so. Others don't raise their worries with a medical professional, or with anyone at all, for fear of stigmatisation or forced hospitalisation.

Young people are more likely to reveal signs of suicidal intent on social media sites like Facebook and Twitter than they are to healthcare workers, highlighting differences in risk detection between age groups. Differences in risk detection, as well as in risk itself, also exist depending on a person's gender, ethnicity, socio-economic status, geographical location and access to healthcare.

For instance, Samaritans found that suicide rates in the UK are higher among men than among women, and higher among those living in the most deprived areas.

Combined, all of this highlights the need for thorough studies of suicide risk, which can be expensive, time-consuming and held back by challenges like researcher bias when traditional research methods are employed. That's likely why no such studies have been conducted to date, at least none that have reduced risk to a sufficient degree.

Enter AI…

The tech could have a huge impact on the accurate prediction of suicide risk and the development of effective suicide prevention programmes.

Exploiting vast datasets, researchers can use predictive modelling techniques unique to AI to achieve far more accurate predictions of suicide risk. One example would be the application of machine learning to electronic medical records.
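
For readers who want a feel for what that looks like in practice, here's a minimal sketch in Python. It trains a gradient-boosted classifier on entirely synthetic, EMR-style features and ranks patients by predicted risk. Every feature name, value and outcome below is made up purely for illustration; a real clinical model would need carefully curated records, external validation and ethical oversight.

```python
# Minimal, illustrative sketch: a machine-learning risk model trained on
# synthetic, EMR-style features. Not a clinical tool.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)
n_patients = 5_000

# Hypothetical features one might derive from electronic medical records:
# prior diagnoses, recent emergency visits, age, active prescriptions.
X = np.column_stack([
    rng.integers(0, 2, n_patients),    # prior depression diagnosis (0/1)
    rng.integers(0, 2, n_patients),    # prior self-harm record (0/1)
    rng.poisson(1.5, n_patients),      # emergency visits in the last year
    rng.integers(18, 90, n_patients),  # age
    rng.poisson(2.0, n_patients),      # active prescriptions
])

# Synthetic outcome: risk is made to depend on the first three features.
logits = -4 + 1.5 * X[:, 0] + 2.0 * X[:, 1] + 0.4 * X[:, 2]
y = rng.random(n_patients) < 1 / (1 + np.exp(-logits))

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0
)

model = GradientBoostingClassifier(random_state=0)
model.fit(X_train, y_train)

# Output a risk score per patient rather than a hard yes/no, so clinicians
# can prioritise follow-up for the highest-risk group.
risk_scores = model.predict_proba(X_test)[:, 1]
print(f"ROC AUC on held-out patients: {roc_auc_score(y_test, risk_scores):.3f}")
```

The key design choice in a sketch like this is that the model produces a ranked risk score rather than an automated verdict, leaving the decision about follow-up care with the clinicians who review it.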

For anyone who closely follows the well-documented progress of artificial intelligence in the press, this may well ring a bell. That's because we're talking about tools already used successfully to predict risk in a number of other medical fields, including death in premature babies, sepsis in patients with severe infections, and other rare outcomes; the list goes on.

Although the use of artificial intelligence to predict and prevent suicide is still in its infancy, it all looks very promising. The number of research studies in the area is increasing fast, and so I feel super confident that AI can and will help save lives.

But like I always say, as with the application of such technologies in any field, it’s important to stay within ethical and legal boundaries — especially when data as sensitive as that contained in electronic medical records is involved.


Catriona Campbell

Behavioural psychologist; AI-quisitive; EY UK&I Client Technology & Innovation Officer. Views my own & don't represent EY’s position. catrionacampbell.com