SAFETYLIT WEEKLY UPDATE

We compile citations and summaries of about 400 new articles every week.

Journal Article

Citation

Matsler N, Pepin L, Banerji S, Hoyte C, Heard K. Clin. Toxicol. (Phila) 2024; ePub(ePub): ePub.

Copyright

(Copyright © 2024, Informa - Taylor and Francis Group)

DOI

10.1080/15563650.2024.2348107

PMID

38864738

Abstract

INTRODUCTION: Efficient and complete medical charting is essential for patient care and research purposes. In this study, we sought to determine whether Chat Generative Pre-Trained Transformer could generate cogent, suitable charts from recorded, real-world poison center calls and could abstract and tabulate data.

METHODS: De-identified transcripts of real-world, hospital-initiated poison center consults were summarized by Chat Generative Pre-Trained Transformer 4.0. Additionally, Chat Generative Pre-Trained Transformer organized data points, including vital signs, test results, therapies, and recommendations, into tables. Seven trained reviewers, including certified specialists in poison information and board-certified medical toxicologists, graded the summaries on a 1 to 5 scale to determine their appropriateness for entry into the medical record. Intra-rater reliability was calculated. Tabulated data were quantitatively evaluated for accuracy. Finally, reviewers selected their preferred documentation: the original chart or the Chat Generative Pre-Trained Transformer-organized version.
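
The abstract does not include the study's prompt or tooling, so the following Python sketch is purely illustrative: the unified prompt wording, the model identifier, and the summarize_consult helper are assumptions rather than the authors' implementation.

# Illustrative sketch only. The study's actual prompt, model access method, and
# output format are not given in the abstract; everything named here is assumed.
from openai import OpenAI

client = OpenAI()  # assumes an API key is configured in the environment

UNIFIED_PROMPT = (
    "You are documenting a hospital-initiated poison center consult. "
    "From the transcript, write a chart-style summary, then tabulate vital signs, "
    "test results, therapies, and recommendations."
)

def summarize_consult(transcript: str) -> str:
    """Return a chart summary plus tabulated data points for one de-identified transcript."""
    response = client.chat.completions.create(
        model="gpt-4",  # stand-in for "Chat Generative Pre-Trained Transformer 4.0"
        messages=[
            {"role": "system", "content": UNIFIED_PROMPT},
            {"role": "user", "content": transcript},
        ],
        temperature=0,  # assumption: deterministic output is preferable for charting
    )
    return response.choices[0].message.content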

RESULTS: Eighty percent of summaries had a median score high enough to be deemed appropriate for entry into the medical record. In three duplicate cases, reviewers changed their scores, yielding moderate intra-rater reliability (kappa = 0.6). Among all cases, 91 percent of data points were correctly abstracted into table format.
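
The abstract reports the reliability statistic but not its computation; assuming it is Cohen's kappa over repeated 1-to-5 scores, a minimal Python sketch (with hypothetical scores) could look like the following.

# Hedged illustration: the abstract does not state how kappa was computed or whether
# a weighted variant was used; the scores below are made up for demonstration.
from sklearn.metrics import cohen_kappa_score

# Hypothetical 1-to-5 appropriateness scores from the same reviewer on duplicate cases.
first_pass = [4, 3, 5]
second_pass = [4, 4, 3]

kappa = cohen_kappa_score(first_pass, second_pass)  # a weighted kappa may also be appropriate
print(f"intra-rater kappa: {kappa:.2f}")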

DISCUSSION: By using a large language model with a unified prompt, charts can be generated directly from conversations in seconds, without the need for additional training. Charts generated by Chat Generative Pre-Trained Transformer were preferred over the extant charts, even when they were deemed unacceptable for entry into the medical record prior to the correction of errors. However, our study had several limitations, including poor intra-rater reliability and the limited number of cases examined.

CONCLUSIONS: In this study, we demonstrate that large language models can generate coherent summaries of real-world poison center calls that are often acceptable for entry into the medical record as is. When errors were present, they could often be fixed by adding or deleting a word or phrase, presenting an enormous opportunity for efficiency gains. Our future work will focus on implementing this process prospectively.


Language: en

Keywords

ChatGPT; Artificial intelligence (AI); Artificial intelligence summary; poison center charting
