SAFETYLIT WEEKLY UPDATE


Journal Article

Citation

Chaiyachati KH. Mayo Clin. Proc. Digit. Health 2024; 2(1): 41-43.

Copyright

(Copyright © 2024, Elsevier Publishing)

DOI

10.1016/j.mcpdig.2023.11.009

PMID

unavailable

Abstract

Screen for suicidality. This mantra was imprinted in me during medical school. My subconscious, however, asks me to reconsider. Omission bias stems from behavioral science.[1] The human brain can desire inaction because that choice is perceived as easier and less risky. By avoiding the topic of suicidality (omission), a provider has no moral obligation to subsequently act or intervene. Blissful ignorance. By contrast, screening for suicidality (commission) increases the risk of feeling at fault, or judged, for having asked about suicidality but then producing an inadequate safety plan, or of inadvertently introducing the thought of suicide when the thought was not previously there.

Enter modern-day advances in artificial intelligence (AI): natural language processing, large language models, and machine learning. These tools are being developed and tested in health care settings for a variety of use cases, including the prediction of suicide. AI's perceived advantage is that it can be disentangled from behavioral biases, unmoved by fears of regret or failure. Trained correctly, AI can be immune to omission bias, operating with mechanical precision when predicting suicide.

Moreover, AI can continuously process and learn from a limitless expanse of data. Human providers combine medical knowledge (eg, textbooks, scientific journals, and training experiences), information in patients' medical records, and cues from patients during one-on-one interactions: word choices, vocal patterns, and body language. AI can use all these same data points, or at least has the potential to, and can go beyond this corpus of information. It can absorb a vastly greater quantity of information while adding novel data into its algorithm from sources not traditional in health care, such as consumer wearable devices, social media, financial data, or travel patterns. And AI can improve without the need for rest or sleep. Undoubtedly, AI has the means to become ever more precise than providers.

In this edition of the Proceedings, Bhandarkar et al[2] outline a methodology for developing a predictive algorithm to detect suicide-related events (SREs). The authors focus on a novel data set: patient portal communications. They identify 420 patient-provider communications sent in the 30 days before an SRE and compare them with a randomized control set on 3 parameters: keyword frequencies, metadata (eg, punctuation patterns and message lengths), and message sentiment (ie, positive or negative). The neural network machine learning model achieved the highest area under the receiver operating characteristic curve, at 0.71. As the authors note, key improvements could enhance the model, such as a greater volume of SREs and analyzing how within-patient communication patterns change over time rather than focusing solely on comparing patterns between patients. Even without these enhancements, their model performed remarkably better than current screening tools used in routine clinical settings, such as the Columbia-Suicide Severity Rating Scale or the SAD PERSONS scale.
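
To make the shape of such a pipeline concrete, the sketch below (in Python, using scikit-learn) builds toy keyword-frequency, metadata, and sentiment features from short messages and scores a small neural network by area under the ROC curve. It is not Bhandarkar et al's model; the keyword lists, example messages, and model settings are invented purely for illustration.

```python
# Minimal, illustrative sketch of a message-level SRE classifier:
# hand-crafted features (keyword frequencies, punctuation/length metadata,
# a crude sentiment score) feeding a small neural network, evaluated by
# area under the ROC curve. NOT the authors' implementation; the keywords,
# toy messages, and hyperparameters are assumptions made for this example.
import numpy as np
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Hypothetical keyword and sentiment word lists (illustrative only).
KEYWORDS = ["pain", "hopeless", "alone", "pills", "goodbye"]
POSITIVE = {"better", "thanks", "good", "improving", "hope"}
NEGATIVE = {"no", "never", "worse", "nothing", "awful"}


def featurize(message: str) -> list:
    """Map one portal message to keyword, metadata, and sentiment features."""
    words = message.lower().split()
    keyword_counts = [float(words.count(k)) for k in KEYWORDS]   # keyword frequencies
    metadata = [
        float(len(message)),                                     # message length
        float(message.count("!") + message.count("?")),          # punctuation pattern
    ]
    sentiment = float(sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words))
    return keyword_counts + metadata + [sentiment]


# Toy stand-in data: label 1 marks a message sent in the 30 days before an SRE.
messages = [
    ("I feel hopeless and alone, nothing helps anymore", 1),
    ("Thanks, the new medication is good and I am improving", 0),
    ("The pain is worse than ever, please refill my pills!", 1),
    ("Appointment confirmed, see you next week", 0),
] * 50  # repeated so the toy train/test split is large enough to fit a model

X = np.array([featurize(text) for text, _ in messages])
y = np.array([label for _, label in messages])
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y
)

model = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
model.fit(X_train, y_train)
risk_scores = model.predict_proba(X_test)[:, 1]
print("ROC AUC on held-out messages:", roc_auc_score(y_test, risk_scores))
```

A real system would, of course, learn its vocabulary from data, use validated sentiment tools, and evaluate within-patient changes over time, as the authors themselves suggest.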

Other AI-based models for predicting SREs have been...


Language: en
