Unpredictable: AI’s Limitation in End-Of-Life Decision Making

We frequently discuss personalized medicine, yet the concept of personalized death remains largely unexplored.

End-of-life decisions pose significant challenges and anxieties for both patients and healthcare providers. Although surveys consistently indicate that most people would prefer to die at home, individuals in developed nations still frequently die in hospitals, often in acute care settings. Several factors contribute to this gap, including underutilization of hospice services, often the result of late referrals. Healthcare professionals may also hesitate to broach end-of-life discussions, whether out of concern about infringing on patient autonomy or because they lack the training to navigate these sensitive conversations.

The fear of death encompasses multiple dimensions. Drawing on my experience as a physician specializing in palliative care, I have identified three primary fears among patients: the fear of pain, the fear of separation, and the fear of the unknown. Living wills and advance directives offer a degree of control over the process, but they are often either absent or too vague to guide specific decisions, leaving family members burdened with challenging choices.

In addition to the emotional burden faced by families, research has shown that surrogate decision makers often struggle to accurately predict the preferences of terminally ill patients. Personal biases, intertwined with familial roles and belief systems, can cloud their judgment, complicating the decision-making process.

Could this burden on families and healthcare providers be eased by delegating end-of-life decisions to automated systems? And if it could, should it be?

Artificial Intelligence in End-Of-Life Decision Making

Conversations surrounding a “patient preference predictor” have gained traction within the medical community, with recent advancements in AI technology blurring the lines between theoretical bioethical debates and practical applications. However, the integration of end-of-life AI algorithms into clinical practice remains a work in progress.

A notable study by researchers in Munich and Cambridge introduced a machine-learning model, the Medical ETHics ADvisor (METHAD), designed to offer guidance on complex ethical dilemmas in medicine. The need to select specific moral principles on which to train such an algorithm underscores a fundamental challenge in developing end-of-life decision support systems: someone must choose the values that guide them.
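To make the value-selection problem concrete, here is a deliberately simplified, hypothetical sketch; it is not the study's actual model, and every criterion, weight, and threshold below is invented for illustration. The point it demonstrates is that the same clinical facts can yield opposite recommendations depending on how the underlying principles are weighted.

```python
# Hypothetical illustration only: a toy scorer that combines principlist
# criteria into a single recommendation. All names, weights, and the 0.5
# threshold are invented; a real system would need clinically validated rules.
from dataclasses import dataclass


@dataclass
class CaseAssessment:
    # Clinician-supplied ratings in [0, 1]; higher means the principle more
    # strongly bears on continuing aggressive treatment.
    patient_autonomy: float   # how clearly the patient has endorsed treatment
    expected_benefit: float   # beneficence: likelihood of meaningful recovery
    expected_burden: float    # non-maleficence: suffering imposed by treatment


def recommend(case: CaseAssessment, weights: dict[str, float]) -> str:
    """Weight the three principles and return a coarse recommendation."""
    score = (
        weights["autonomy"] * case.patient_autonomy
        + weights["beneficence"] * case.expected_benefit
        - weights["non_maleficence"] * case.expected_burden
    )
    return "continue life-sustaining treatment" if score > 0.5 else "shift to comfort care"


if __name__ == "__main__":
    case = CaseAssessment(patient_autonomy=0.9, expected_benefit=0.3, expected_burden=0.8)

    # Two value systems, identical clinical facts: the recommendation flips.
    autonomy_first = {"autonomy": 0.8, "beneficence": 0.3, "non_maleficence": 0.2}
    harm_averse = {"autonomy": 0.2, "beneficence": 0.3, "non_maleficence": 0.9}

    print(recommend(case, autonomy_first))  # score 0.65 -> continue life-sustaining treatment
    print(recommend(case, harm_averse))     # score -0.45 -> shift to comfort care
```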

Unlike diagnostic algorithms, which can be trained against clear-cut outcomes, end-of-life decisions offer no universally objective metric for training AI models. Predictive algorithms could instead aim to learn individual preferences, but the scarcity of comprehensive datasets that capture relevant factors beyond medical history poses a significant hurdle. The emergence of advanced language models like ChatGPT offers new possibilities for analyzing previously inaccessible data, such as unstructured clinical notes.
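As a purely illustrative sketch of what a "patient preference predictor" might look like in its simplest form, the following trains a classifier to guess whether a patient would choose comfort-focused care. The features, labels, and data are all synthetic and invented here; no validated dataset of this kind is assumed to exist.

```python
# Hypothetical sketch of a "patient preference predictor" on synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 500

# Invented structured features: age, number of chronic conditions, prior ICU
# admissions, and whether an advance directive is on file (0/1).
X = np.column_stack([
    rng.integers(40, 95, n),   # age in years
    rng.integers(0, 6, n),     # chronic conditions
    rng.integers(0, 4, n),     # prior ICU stays
    rng.integers(0, 2, n),     # advance directive on file
])

# Synthetic labels: 1 = would prefer comfort-focused care. Generated from an
# arbitrary rule plus noise, purely so the example runs end to end.
logits = 0.05 * (X[:, 0] - 70) + 0.4 * X[:, 1] + 0.3 * X[:, 2] + 0.5 * X[:, 3] - 1.0
y = (1 / (1 + np.exp(-logits)) + rng.normal(0, 0.1, n) > 0.5).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# The model outputs a probability, not a rationale: the score below says
# nothing about *why* a given patient might prefer one path over another.
print(f"held-out accuracy: {model.score(X_test, y_test):.2f}")
print(f"P(comfort care) for one new patient: {model.predict_proba([[82, 3, 1, 0]])[0, 1]:.2f}")
```

Even in this toy version, the two hard questions from the surrounding text are visible: where the training data would come from, and what held-out accuracy would be high enough to justify acting on the prediction.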

In the absence of empirical data, training on hypothetical scenarios raises questions about how reliably such models would predict real-life preferences. Moreover, establishing the minimum accuracy that would be acceptable for an end-of-life algorithm presents a critical ethical dilemma, particularly when algorithmic recommendations must be conveyed to patients and families in moments of acute distress.

The opacity of many machine learning algorithms further complicates matters, as the inability to decipher the rationale behind algorithmic decisions hampers transparency and accountability. While AI holds promise in augmenting decision-making processes, fostering a proactive approach to end-of-life choices can reduce reliance on personalized algorithms. Embracing personal agency in decision-making may obviate the need for algorithmic interventions altogether.