SAFETYLIT WEEKLY UPDATE

We compile citations and summaries of about 400 new articles every week.

Journal Article

Citation

McKernan LC, Clayton EW, Walsh CG. Front. Psychiatry 2018; 9: e650.

Affiliation

Department of Medicine, Vanderbilt University Medical Center, Nashville, TN, United States.

Copyright

(Copyright © 2018, Frontiers Media)

DOI

10.3389/fpsyt.2018.00650

PMID

30559686

PMCID

PMC6287030

Abstract

In the United States, the suicide rate increased by 24% over the past 20 years, and suicide risk identification at the point of care remains a cornerstone of the effort to curb this epidemic (1). Because risk identification is difficult owing to symptom under-reporting, timing, or lack of screening, healthcare systems rely increasingly on risk scoring and now on artificial intelligence (AI) to assess risk. AI is the science of solving problems and accomplishing tasks that normally require human intelligence through automated or computational means. This science is decades old and includes both traditional predictive statistics and machine learning. Only in the last few years has it been applied rigorously to suicide risk prediction and prevention. Applying AI in this context raises significant ethical concerns, particularly in balancing beneficence against respect for personal autonomy. To navigate the ethical issues raised by suicide risk prediction, we provide recommendations in three areas (communication, consent, and controls) for both providers and researchers (2).


Language: en

Keywords

artificial intelligence; code of ethics; ethics; machine learning; suicide
