SAFETYLIT WEEKLY UPDATE


Journal Article

Citation

Zhang H, Wang Y, Zhang Z, Guan F, Zhang H, Guo Z. Am. J. Bioeth. 2021; 21(7): 43-45.

Copyright

(Copyright © 2021, MIT Press)

DOI

10.1080/15265161.2021.1928793

PMID

unavailable

Abstract

The target article by Laacke et al. (2021) focuses on the specific context of identifying people on social media at high risk of depression by using artificial intelligence (AI) technologies. It proposes an extended concept of health-related digital autonomy that draws on the classic concept of patient autonomy developed by Beauchamp and Childress. However, as the authors note, autonomy is not the only relevant and necessary principle in this context.

According to Beauchamp and Childress (2019), the principle of beneficence, one of the four principles of biomedical ethics (respect for autonomy, beneficence, nonmaleficence, and justice), refers to a general moral obligation to act for the benefit of others, and the duty to rescue is a form of obligatory beneficence. In biomedical ethics, it is inaccurate to assign moral priority to any one basic principle over the others. Rather, balancing occurs in circumstances of contingent conflict and allows for due consideration of all factors, including the relative weights and strengths of different moral norms.

Suicide is a serious international public health problem. The World Health Organization (2019) estimates that approximately 800,000 people die by suicide worldwide each year (one person every 40 seconds). A review article on suicide (Fazel and Runeson 2020) states that in high-income countries an estimated half of suicide deaths are linked to mental illness, particularly depression and bipolar disorder. People with depression, including those who are potentially suicidal, are often underdiagnosed and undertreated because of concerns about stigmatization, discrimination, or forced medical treatment, or because of a lack of available services; worse still, they may not recognize themselves as being at risk and may have poor insight into their mental state. As a result, they do not seek help from medical offices and hospitals but are likely to disclose suicidal thoughts and risk factors on social media (D'Hotman and Loh 2020; Laacke et al. 2021).

In this commentary, we highlight AI-enabled suicide prevention using data from social media and emphasize the principle of beneficence (preventing suicide and saving lives) for potentially suicidal individuals...


Language: en
