Dr. Patricia Farrell on Medika Life

The Decision of Life/Death Should Be Left to Whom?

End-of-life circumstances present difficult ethical and personal dilemmas, and deciding who should make the final decision is itself problematic.

Life is precious, but there may come a time when painful, emotional decisions must be made about maintaining it. Who should be the ultimate authority, and how do we decide who/what gets to make this momentous decision?

The technology field believes it has the answer, one that removes the decision from human hands, medical or otherwise. Should we give up this decision to something as automated as an algorithm? Ethicists will surely have input on this decision, but what are the coders saying? Aren’t humans more complex than mathematical equations or software programs? Doesn’t our very humanity account for more than that?

I have written about how really, really dumb AI is sometimes. In fact, ChatGPT, while a wonderful resource, is somewhat stolid in its responses and may even point to non-existent websites or make contradictory statements. We’re all told to be knowledgeable about what we want AI to do, but it is dense unless given very specific prompts. Even then, it doesn’t always provide accurate or helpful information.

Request certain images or videos, and it is concrete often enough to make me laugh. Still, we must persist because it is like a child learning. But would you give a child this kind of power?

As with many medical decisions, the thorny question of ending someone’s life now draws in artificial intelligence programs that promise relief. Serious illness communication (SIC) is being put forward as the occasion for a new hybrid decision-making model.

Numerous AI models are being tested, and at least some of them are directed toward evaluating clinicians’ notes in electronic medical records to estimate the probability that continuing care will be effective.
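To make the idea concrete, here is a minimal sketch of how such a model is typically assembled, not any vendor’s actual product: free-text clinical notes are converted to numeric features, and a classifier returns a probability. The notes, labels, and example below are all invented for illustration.

```python
# Minimal sketch of a note-based prognosis classifier (illustrative only).
# The notes, outcome labels, and example are invented; real models are
# trained on thousands of records and validated clinically.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training data: snippets of clinician notes and whether continued
# treatment was judged beneficial (1) or not (0).
notes = [
    "patient responding well to therapy, stable vitals, good appetite",
    "steady functional decline, recurrent infections, poor response",
    "tumor shrinking on imaging, tolerating chemotherapy",
    "multi-organ failure progressing despite maximal support",
]
benefit = [1, 0, 1, 0]

# TF-IDF turns free text into numeric features; logistic regression
# then yields a probability between 0 and 1.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(notes, benefit)

new_note = "declining renal function, minimal response to treatment"
prob = model.predict_proba([new_note])[0][1]
print(f"Estimated probability that continuing care helps: {prob:.2f}")
```

The point of the sketch is how mechanical the step is: the model sees word statistics, not the patient.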

If AI is given this much power, the accuracy of the EHR becomes all the more important. AI can’t correct errors in the record; it only synthesizes the information it is given and makes no judgment about its veracity. Caution must be applied because even if AI is directing a clinician to relevant information, there is no guarantee the material is correct. Herein lies a concern for all.

One area where researchers believe NLP (natural language processing) may help is precisely this: “NLP also has the potential to address barriers resulting from poor EHR design that prevent or inhibit the extraction and flow of meaningful advanced care planning information across the care continuum.” Ultimately, the coding of the EHR is where the unseen problem may lie. Who could know that? We know now that coding can be biased. How do we know that this bias, unrecognized by programmers or those who use the program, isn’t doing damage rather than offering solutions to difficult decisions?
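As a rough illustration of what that extraction might look like, here is a minimal rule-based sketch; the note and the term list are invented, and real clinical NLP systems rely on trained models with negation handling rather than keyword matching.

```python
# Illustrative sketch: pulling advance care planning (ACP) mentions out of
# a free-text note with simple pattern matching. The term list and note
# are invented; production clinical NLP is far more sophisticated.
import re

# Hypothetical ACP vocabulary a real system would curate or learn.
ACP_TERMS = [
    r"do[- ]not[- ]resuscitate", r"\bDNR\b", r"\bPOLST\b",
    r"goals of care", r"health ?care proxy", r"comfort care",
    r"advance directive",
]

def find_acp_mentions(note: str) -> list[str]:
    """Return each sentence of the note that mentions an ACP term."""
    sentences = re.split(r"(?<=[.!?])\s+", note)
    pattern = re.compile("|".join(ACP_TERMS), re.IGNORECASE)
    return [s for s in sentences if pattern.search(s)]

note = (
    "Patient admitted with pneumonia. Daughter is healthcare proxy. "
    "Family meeting held to discuss goals of care. Patient reaffirmed "
    "DNR status documented in prior advance directive."
)
for mention in find_acp_mentions(note):
    print("-", mention)
```

Notice that any bias or gap baked into the vocabulary, or into how the EHR was coded in the first place, flows silently into whatever the system extracts.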

Should AI Make All Decisions for Healthcare Professionals?

Applying AI to decisions about end-of-life care is a difficult and contested subject. Although AI has shown promise in some areas, such as spotting patterns in patient data that help clinicians make better-informed judgments, there are several reasons it may not be appropriate for these decisions.

Unlike human caregivers, AI lacks empathy and emotional intelligence. Making decisions about end-of-life care requires a thorough understanding of the patient’s unique needs and preferences, as well as the capacity to offer the patient and their loved ones emotional support. AI may be able to evaluate data and make recommendations, but it cannot match the level of individualized care and support that human caregivers provide.

Making end-of-life decisions requires nuanced ethical and moral judgments that are hard to quantify and build into AI systems. When deciding whether to withhold or discontinue life-sustaining therapy, for instance, one must weigh the benefits and burdens of such interventions along with the patient’s values, beliefs, and preferences. These are individualized, context-specific elements that an AI system cannot fully account for. Can we really expect HAL to make these decisions for us?

The loss of human autonomy and control is a problem when AI is used to make end-of-life decisions. Patients and their relatives might believe that a computer is deciding against their wishes and values, leaving them feeling helpless and distrustful. The standard of treatment may suffer, and patients and their loved ones may experience greater emotional anguish as a result. Some are suggesting a hybrid model of AI and human input, but how would that work, and how would it be tested?
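One plausible reading of a hybrid model, offered purely as a sketch and not any published design, is that the algorithm only flags and recommends, while a clinician must confirm or override before anything happens; every name and threshold below is invented.

```python
# Sketch of a hybrid (human-in-the-loop) decision flow: the model only
# recommends; a clinician must confirm or override before anything happens.
# The threshold and data structures are invented for illustration.
from dataclasses import dataclass

@dataclass
class Recommendation:
    patient_id: str
    model_probability: float   # model's estimate that continued care helps
    suggestion: str            # what the model proposes discussing

def model_recommendation(patient_id: str, prob: float) -> Recommendation:
    # Hypothetical rule: a low probability triggers a goals-of-care talk.
    if prob < 0.3:
        return Recommendation(patient_id, prob, "schedule goals-of-care talk")
    return Recommendation(patient_id, prob, "continue current plan")

def clinician_review(rec: Recommendation, clinician_agrees: bool) -> str:
    # The human decision is final; the model's output is advisory only.
    if clinician_agrees:
        return f"Clinician confirmed: {rec.suggestion}"
    return "Clinician overrode the model; decision made with patient and family"

rec = model_recommendation("patient-042", prob=0.22)
print(clinician_review(rec, clinician_agrees=False))
```

Even in this arrangement, the open questions remain: who audits the threshold, and how often do busy clinicians simply accept the default?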

While AI has potential applications in healthcare, the difficult decisions surrounding end-of-life care require empathy and moral judgment that are uniquely human. These decisions are best left to people who can give patients and their families individualized attention and support.


PATIENT ADVISORY

Medika Life has provided this material for your information. It is not intended to substitute for the medical expertise and advice of your health care provider(s). We encourage you to discuss any decisions about treatment or care with your health care provider. The mention of any product, service, or therapy is not an endorsement by Medika Life.

Pat Farrell PhD
https://medium.com/@drpatfarrell

DR PATRICIA FARRELL

Medika Editor: Mental Health

I'm a licensed psychologist in NJ/FL and have been in the field for over 30 years, working in most areas of mental health: psychiatric research, consulting, post-graduate teaching, private practice, consulting for WebMD, and writing self-help books. Currently, I am concentrating on writing articles and books.

Patricia also acts in an editorial capacity for Medika's mental health articles, providing invaluable input on a wide range of mental health issues.



All articles, information and publications featured by the author on these pages remain the property of the author. Creative Commons does not apply and should you wish to syndicate, copy or reproduce, in part or in full, any of the content from this author, please contact Medika directly.