SAFETYLIT WEEKLY UPDATE

Journal Article

Citation

Volkmer S, Meyer-Lindenberg A, Schwarz E. Psychiatry Res. 2024; 339: e116026.

Copyright

(Copyright © 2024, Elsevier Publishing)

DOI

10.1016/j.psychres.2024.116026

PMID

38909412

Abstract

The ability of large language models (LLMs) to analyze and respond to freely written text is generating increasing excitement in psychiatry; such models present unique opportunities and challenges for psychiatric applications. This review offers a comprehensive overview of LLMs in psychiatry, covering their model architecture, potential use cases, and clinical considerations. LLMs such as ChatGPT/GPT-4 are trained on vast amounts of text data and are sometimes fine-tuned for specific tasks. This opens up a wide range of possible psychiatric applications, such as predicting individual patients' risk factors for specific disorders, delivering therapeutic interventions, and analyzing therapeutic material, to name a few. However, adoption in psychiatric settings presents many challenges, including the inherent limitations and biases of LLMs, concerns about explainability and privacy, and the potential harm caused by generated misinformation. This review covers these opportunities and limitations and highlights considerations that arise when such models are applied in a real-world psychiatric context.


Language: en

Keywords

Therapy; Hallucination; BERT; Transformer; GPT; Llama; Medical question answering; PaLM
