Delirium Adventures with ChatGPT.

I still think one of the most valuable skills psychiatrists have is to help distinguish psychiatric illness from “delirium”, which, for the purposes of this post, we can call “acute brain failure”. Other organs can abruptly stop working for a variety of reasons. Hepatitis infections can cause acute liver failure; dehydration can lead to acute kidney failure; we’re all familiar with acute heart failure, too.

Delirium is a symptom of an underlying medical condition. It’s like a fever or a cough: Many conditions can cause fevers or coughs, so you have to seek out the “real” reason. When people develop delirium, their thinking, behavior, and levels of consciousness change abruptly. People can get confused about who or where they are; they might start seeing things or hearing things that aren’t there; sometimes they seem to “space out” for periods of time. These are all vast departures from their usual ways of thinking. (The abruptness here is key; people with dementia may have similar symptoms, but those typically develop over months to years.)

(Fellow psychiatrists and hospital internists recognize that delirium isn’t always that dramatic. Sometimes people lie quietly in bed, hallucinating and confused, and nothing in their behavior suggests it.)

Because I spent a few years working in medical and surgical units (where the risk of delirium is higher than in the community), it is still my habit to consider delirium when I am meeting with people. Given the disease burdens that people experiencing homelessness and poverty face, this is prudent. (Fellow health care workers might also be more likely to believe a psychiatrist when we report that someone might be delirious, rather than psychiatrically ill.)

I wondered if there is any evidence that psychiatrists are more likely to detect delirium than other health care professionals. Enter ChatGPT.

ChatGPT cited two papers that reported that, yes, psychiatrists are more likely to detect delirium, though it shared only the journal and the year, along with a summary of results. I asked for a list of authors for one, thinking that might help narrow down the search. It did not. So I then asked for the titles of the two papers.

I could not find either title on PubMed. This was curious. And concerning.

I then asked ChatGPT to share with me the PubMed ID (a unique number assigned to each article indexed in PubMed) for each paper. Here’s what happened:

ChatGPT said that the first paper, “Detection of Delirium in the Hospital Setting: A Systematic Review and Meta-Analysis of Formal Screening Tools”, was published in the Journal of the American Geriatrics Society in 2018. ChatGPT said that the ID was 26944168. In PubMed, this leads to an article called “Probable high prevalence of limb-girdle muscular dystrophy type 2D in Taiwan”.

The second paper reportedly had the title of “Detection of delirium in older hospitalized patients: a comparison of the 3D-CAM and CAM-S assessments with physicians’ diagnoses”. (CAM stands for Confusion Assessment Method, which is a real, validated tool to help measure delirium.) ChatGPT said that the ID was 29691866. In PubMed, this leads to an article called “Gold lotion from citrus peel extract ameliorates imiquimod-induced psoriasis-like dermatitis in murine”. (I did learn that “gold lotion” is “a natural mixed product made from the peels of six citrus fruits, has recently been identified as possessing anti-oxidative, anti-inflammatory, and immunomodulatory effects.”)
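
(If you want to run this kind of check yourself, PubMed’s public E-utilities API will tell you what any given ID actually points to. Here is a minimal Python sketch of my own, just for illustration, using the two IDs ChatGPT offered:)

```python
# Look up what title PubMed actually records for a given PubMed ID,
# using the NCBI E-utilities "esummary" endpoint.
# Minimal sketch: no error handling or API-key etiquette.
import json
import urllib.request

def pubmed_title(pmid: str) -> str:
    """Return the article title PubMed stores for this PMID."""
    url = (
        "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esummary.fcgi"
        f"?db=pubmed&id={pmid}&retmode=json"
    )
    with urllib.request.urlopen(url) as response:
        data = json.load(response)
    return data["result"][pmid]["title"]

# The two IDs ChatGPT provided:
for pmid in ("26944168", "29691866"):
    print(pmid, "->", pubmed_title(pmid))
```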

It makes me wonder how ChatGPT generated these articles and their titles, where the summaries came from, and how it came up with the PubMed ID numbers.

Indeed, ChatGPT is artificial, but not so intelligent. And it will take me a bit more time to find the answer to my question.