SAFETYLIT WEEKLY UPDATE


Journal Article

Citation

Netzer NC. Sleep Breath. 2024; ePub(ePub): ePub.

Copyright

(Copyright © 2024, Holtzbrinck Springer Nature Publishing Group)

DOI

10.1007/s11325-024-03129-7

PMID

unavailable

Abstract

"A tool doesn't harm people, people harm people." The National Rifle Association (NRA) even stretches this slogan to interpret the second amendment in the Bill of Rights in a way that everybody should have the right to buy and own any kind of weapon. While this Janus-faced interpretation of using a tool is already questionable based on the original text of the second amendment, which speaks of a militia to defend the state in combination with weapon ownership, it becomes kind of absurd when it allows children in early puberty in the State of Missouri to own an AR-15 and bring it to school in their school backpack with Disney characters.

But what might be an inadequate argument for firearms fits our use of artificial intelligence (AI) figuratively quite well: AI is a Janus-faced tool, like a kitchen knife, normally extremely useful but one that can also be misused to injure someone. AI can help interpret clinical symptoms faster and more extensively than a doctor's brain and find the right diagnosis in a shorter time. It does so not much differently than Dr. House does: it scans probabilities, but from a far larger sample base than any human ever could.

To give a simple example of how the Generative Pretrained Transformer (GPT) works: Finish the sentence "I put the toast in the ……….." Now ChatGPT scans a billion sentences with toast and gets a 99.99999% probability that the toast should be put in the toaster and not in the mailbox (Generative). It memorizes this action and uses it the next time immediately without the scanning process (Pretrained). It has been pretrained that an envelope should be put in the mailbox. Now finish the sentence "I wanted to show my enemy that he is toast, so I put a toast in an envelope and put it in……" From all it had learned, ChatGPT will finish the sentence with "………. the mailbox" (Transformer). In this form it can meanwhile put together complicated texts, elaborate diagnoses from images, etc. But it is, at least at the moment, generative and transforming, not fully creative; it cannot invent something completely new out of nowhere, as a human like Albert Einstein did when a thought about gravity led him to the theory of relativity. And it is not equipped to prove the sense of its solution. The problem is, we are not all Einsteins.
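The probability-scanning idea sketched above can be illustrated with a toy model. This is a minimal sketch with invented counts, not how GPT is actually implemented: real models use learned neural-network weights over subword tokens, not lookup tables of phrases. The contexts and counts below are hypothetical, chosen only to mirror the toast/mailbox example.

```python
# Toy next-token predictor: pick the continuation with the highest
# probability, as if those probabilities had been learned in pretraining.
# All contexts and counts here are invented for illustration.
from collections import Counter

# Hypothetical "pretraining": how often each continuation followed each
# context in an imagined training corpus.
pretrained_counts = {
    "I put the toast in the": Counter({"toaster": 999_999, "mailbox": 1}),
    "I put the envelope in the": Counter({"mailbox": 999_950, "toaster": 50}),
}

def next_token(context: str) -> str:
    """Return the most probable continuation for a known context."""
    counts = pretrained_counts[context]
    total = sum(counts.values())
    token, count = counts.most_common(1)[0]
    print(f"{context} ... -> {token!r} (p = {count / total:.5f})")
    return token

next_token("I put the toast in the")     # the toaster wins by probability
next_token("I put the envelope in the")  # the mailbox wins by probability
```

A real transformer generalizes across contexts it has never seen verbatim, which is what lets it combine "toast", "envelope", and "mailbox" in the editorial's second sentence; this table-lookup sketch cannot do that.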

The positive aspect of AI for writing manuscripts is of course finding the best references, the best statistical methods, etc. For example, we can ask ChatGPT a certain scientific question like "Does OSAS increase symptoms of the metabolic syndrome in geriatric patients?" If I put the keywords osa, metabolic syndrome, and geriatric patients into PubMed, I get seven results. ChatGPT finds far more, and I can ask it to produce a meta-analysis of the results [1].

AI can help to write nursing protocols and save valuable nursing time for the care of patients at a time when the shortage of nurses is becoming an almost desperate situation in health care in most Western countries. However, even for this possible use of AI, authors who tested it and wrote an article about it see possible problems of integrity and possible nonsense output which, if not double-checked by a registered human nurse, could lead to mistreatment of patients...


Language: en


