In the UK, a quarter of people who die by suicide had been in contact with a healthcare professional in the previous week, and most had spoken to someone within the last month. However, assessing patients' risk of suicide remains extremely difficult.
There were 5,219 deaths by suicide recorded in England in 2021. While the suicide rate in England and Wales has fallen by around 31% since 1981, most of this decline occurred before 2000. Suicide is three times more common among men than among women, and this gap has widened over time.
A study published in October 2022, led by the Black Dog Institute at the University of New South Wales, found that artificial intelligence models outperformed clinical risk assessments. The researchers reviewed 56 studies published between 2002 and 2021 and found that artificial intelligence correctly predicted 66% of people who would experience a suicidal outcome and 87% of people who would not. In comparison, traditional scoring methods performed by healthcare professionals are barely better than chance.
Artificial intelligence is the subject of much research in other medical fields, such as cancer. However, despite their promise, artificial intelligence models for mental health are not yet widely used in clinical settings.
Prediction of suicide
A 2019 study from the Karolinska Institutet in Sweden found that four traditional scales used to predict suicide risk after recent episodes of self-harm performed poorly. The challenge of suicide prediction comes from the fact that a patient's intent can change quickly.
The guidance on self-harm used by health professionals in England explicitly states that suicide risk assessment tools and scales should not be relied upon. Instead, professionals should use a clinical interview. While clinicians do conduct structured risk assessments, these are used to get the most out of interviews rather than to provide a scale that determines who receives treatment.
The Black Dog Institute study showed promising results, but if 50 years of research into traditional (non-artificial-intelligence) prediction has produced methods that are barely better than chance, we have to ask whether we should trust artificial intelligence. When a new development gives us something we want (in this case, better suicide risk assessments), it can be tempting to stop asking questions. But we can't afford to rush this technology. The consequences of a mistake are literally life or death.
AI models always have limitations, including in how their performance is evaluated. For example, using accuracy as a measure can be misleading if the dataset is imbalanced. A model can achieve 99% accuracy by always predicting there will be no risk of suicide if only 1% of the patients in the dataset are at high risk.
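The accuracy trap described above can be demonstrated in a few lines of code. This is a minimal sketch with made-up numbers (1,000 hypothetical patients, 1% at high risk), not real patient data:

```python
# Hypothetical imbalanced dataset: 1,000 patients, 1% (10) genuinely at high risk.
labels = [1] * 10 + [0] * 990  # 1 = at risk, 0 = not at risk

# A useless model that always predicts "no risk".
predictions = [0] * len(labels)

# Accuracy looks excellent...
correct = sum(p == y for p, y in zip(predictions, labels))
accuracy = correct / len(labels)
print(f"Accuracy: {accuracy:.0%}")  # 99%, yet the model misses every at-risk patient

# ...but sensitivity (the share of at-risk patients actually identified) exposes the failure.
at_risk_found = sum(p == 1 and y == 1 for p, y in zip(predictions, labels))
sensitivity = at_risk_found / sum(labels)
print(f"Sensitivity: {sensitivity:.0%}")  # 0% of at-risk patients identified
```

This is why evaluations of such models report measures like sensitivity and specificity (the 66% and 87% figures from the Black Dog Institute review) rather than raw accuracy.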
It is also essential to evaluate AI models on data different from the data on which they are trained. This avoids overfitting, where models learn to predict outcomes perfectly from their training material but struggle to work with new data. A model may appear to work perfectly during development, yet make incorrect diagnoses for real patients.
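Overfitting is easy to reproduce. The sketch below (entirely synthetic data, no real signal to learn) uses a model that simply memorises its training set: it scores perfectly on the data it has seen and no better than chance on data it has not:

```python
import random

random.seed(0)

# Synthetic "patients": random features with random labels, so there is
# no genuine pattern for any model to discover.
def make_data(n):
    return [([random.random() for _ in range(5)], random.randint(0, 1))
            for _ in range(n)]

train, test = make_data(200), make_data(200)

# A 1-nearest-neighbour "model" that memorises the training set:
# it predicts the label of the closest training example.
def predict(x):
    dist = lambda a, b: sum((u - v) ** 2 for u, v in zip(a, b))
    return min(train, key=lambda row: dist(row[0], x))[1]

def accuracy(data):
    return sum(predict(x) == y for x, y in data) / len(data)

print(f"Training accuracy: {accuracy(train):.0%}")  # 100%: pure memorisation
print(f"Test accuracy: {accuracy(test):.0%}")       # roughly 50%: no better than chance
```

Evaluating only on the training set would make this model look flawless; the held-out test set reveals that it has learned nothing generalisable.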
For instance, an artificial intelligence model used to detect melanoma (a type of skin cancer) overfitted to surgical marks on patients' skin. Doctors use blue pens to highlight suspicious lesions, and the model learned to associate these marks with a higher likelihood of cancer. This led to misdiagnoses in practice when blue highlighting was not used.
It can also be difficult to understand what AI models have learned, such as why they predict a particular level of risk. This is a pervasive problem with artificial intelligence systems in general, and has led to an entire field of research known as explainable artificial intelligence.
The Black Dog Institute found that 42 of the 56 studies analysed were at high risk of bias. In this context, bias means that the model over- or under-predicts the average suicide rate. For example, the data may have a suicide rate of 1%, but the model predicts a rate of 5%. High bias leads to misdiagnosis, either by missing high-risk patients or by attributing too much risk to low-risk patients.
These biases stem from factors such as participant selection. For example, several studies had high case-control ratios, meaning the suicide rate in the study was higher than in reality, so the AI models were likely to assign too much risk to patients.
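The arithmetic behind this inflation can be made concrete. The numbers below are assumptions for illustration only: a 1:1 case-control study against a real-world outcome rate of around 1%:

```python
# Assumed numbers for illustration, not taken from any real study.
real_world_rate = 0.01      # ~1% of patients experience a suicidal outcome
cases, controls = 100, 100  # a 1:1 case-control study design

# A naive model calibrated to the study data learns the study's base rate...
study_rate = cases / (cases + controls)
print(f"Base rate in the study: {study_rate:.0%}")  # 50%

# ...which overstates real-world risk dramatically.
print(f"Overestimation factor: {study_rate / real_world_rate:.0f}x")  # 50x
```

A model trained on such a sample will, unless explicitly recalibrated, carry the study's inflated base rate into its predictions for ordinary patients.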
A promising prospect
The models primarily used data from electronic health records, but some also included data from interviews, self-report surveys and clinical notes. The advantage of using artificial intelligence is that it can learn from large amounts of data faster and more efficiently than humans, and spot patterns missed by overworked healthcare professionals.
Although progress is being made, the artificial intelligence approach to suicide prevention is not ready to be used in practice. Researchers are already working to solve many of the problems with AI-based suicide prediction models, such as the difficulty of explaining why the algorithms made their predictions.
However, predicting suicide is not the only way to reduce suicide rates and save lives. An accurate prediction is useless if it does not lead to an effective response.
On its own, predicting suicide with artificial intelligence will not prevent every death. But it could give mental health professionals another tool to treat their patients. It could be as life-saving as cutting-edge heart surgery if it raised alarm bells for patients who would otherwise be overlooked.
Joseph Early is a PhD candidate in Artificial Intelligence at the University of Southampton.
This article first appeared on The Conversation.