
Rebecca Weintraub Brendel, head of Harvard Medical School’s Center for Bioethics.
It’s terminal cancer. Should AI determine the next steps?
The emergence of advanced language models is fueling debate over expanding technology’s role in patient care, and over what that means for our humanity.
Artificial intelligence is already used in healthcare to help analyze imaging results such as X-rays and scans. But the recent arrival of advanced large language models has prompted discussion about extending the technology to other facets of patient care. In this edited conversation with the Gazette, Rebecca Weintraub Brendel, head of Harvard Medical School’s Center for Bioethics, examines end-of-life decisions and stresses that just because we can doesn’t mean we should.
When discussing artificial intelligence and end-of-life choices, what key questions arise?
End-of-life decision-making shares similarities with other types of decision-making; fundamentally, we adhere to what patients wish, as long as they are competent to make those judgments and their desires are medically appropriate — or at least not directly harmful.
One complication arises when a patient is so ill that they cannot articulate their preferences. A second is understanding, both intellectually and emotionally, what the decision will mean.
People sometimes say, “I would never want to live that way,” yet may decide differently when they actually face the situation. Patients who have lived for a long time with a progressive neurological disease such as ALS often know when they have reached their limit. They may not be depressed or fearful; they are ready to make their choice.
Conversely, depression is notably common in some cancer patients, leading many to reconsider their desire to end their lives once their symptoms are addressed.
If a young person says, “If I lose my legs, I wouldn’t want to live,” should we acknowledge that perspectives can evolve as people approach the end of life?
When we face circumstances that disrupt our sense of physical integrity and our identity as fully functioning people, it is understandable, even expected, that our capacity to cope may be overwhelmed.
Yet even after profoundly challenging injuries, including severe spinal cord injuries and quadriplegia, some people report a better quality of life a year later than they had before. We can endure a great deal, and our capacity for transformation and hope must be taken into account.
So, how do we, as healers of the mind and body, assist patients in making choices about their end-of-life care?
“I’d be hard-pressed to say that we’d ever want to give away our humanity in making decisions of high consequence.”
For individuals with chronic conditions, standard care involves ongoing decision-making, and AI could play a supportive role. Yet, at the moment of diagnosis — should I pursue treatment or choose palliative care from the outset — AI may provide insights into potential outcomes, the extent of impairments, whether pain can be alleviated, or what the critical juncture might be for an individual.
So AI’s ability to aggregate and evaluate far more data than the human mind can manage, free of the biases that come from fear, anxiety, duty, or personal relationships, could offer useful perspectives.
What about patients who are incapacitated, without family or advance directives, leaving the decision to the care team?
We need to approach these decisions with humility. Access to information can be profoundly beneficial. When an individual will never regain capacity, we have only a few options. If we truly do not know their preferences, particularly if they were averse to treatment and preferred not to be admitted to a hospital, or had few connections, we tend to assume they would not have sought treatment for a life-threatening condition. Still, we must recognize that we are making many assumptions on limited knowledge, even if our actions are not inherently wrong. A clearer understanding of possible future scenarios is essential to making such decisions, which is, again, where AI could contribute.
I’m less hopeful about the ability of large language models to make decisions about capacity or to determine what someone might have preferred. For me, it’s fundamentally about respect. We honor our patients and strive to make informed estimations, recognizing that human beings are complex, often deeply troubled, yet also capable of being cherished and, ideally, loved.
Are there actions that AI should be prohibited from taking? I imagine it could generate end-of-life recommendations rather than merely compiling information.
We must exercise caution in distinguishing between “is” and “ought” in decision-making.
If AI told you that there is less than 5 percent chance of survival, that alone is not enough to tell us what we ought to do. In cases of severe trauma or violent assault, we would interpret that 5 percent chance quite differently than for someone who has been managing a chronic disease and says, “I don’t want to endure this any longer, nor do I wish to impose it on others. I’ve had a fulfilling life.”
“If AI told you that there is less than 5 percent chance of survival, that alone is not enough to tell us what we ought to do.”
In diagnostic and prognostic evaluations, AI has begun to surpass human practitioners, but that does not answer the essential question of how we interpret those findings, or what our default principles for acting on them should be.
What it can do is help us be more transparent and accountable, while honoring one another, by stating clearly that, as a community, if certain situations arise we will not intervene unless told otherwise. Or that we will, when we believe there is a good chance of recovery.
I do not wish to undervalue AI’s potential influence, but we cannot relinquish our duty to prioritize human significance in our choices, even when they are data-driven.
Should such decisions always be made by humans?
“Always” is a very strong word, but I’d be hard-pressed to say that we’d ever want to give away our humanity in making decisions of high consequence.
Are there domains of medicine where human involvement is essential? Should a baby’s initial encounter with the world always occur through human hands? Or should we primarily concentrate on the quality of care provided?
I would want people present, even if a robot performed the surgery and achieved better outcomes. We should preserve the human significance of life’s crucial moments.
Another question that arises is: What does it mean to be a doctor, a caregiver, a healthcare provider? We hold a wealth of information, and that information imbalance has contributed to medical and healthcare professionals being held in high regard. But the profession is also about how we use that knowledge: being excellent diagnosticians, having a good bedside manner, and supporting patients through their most difficult times. How do we redefine the profession when the skills we believed we excelled at may no longer be our strongest attributes?
At some point, we may need to examine the role of human interaction within the system. Does it introduce bias, and how essential is processing by the human mind? Will a large language model generate new information, identify a new diagnostic category, or recognize a new disease entity? What responsibilities should patients and doctors have toward one another in a highly technological era? These are significant questions that require our attention.
Are these discussions taking place?
Indeed. Within our Center for Bioethics, one of our focus areas is how artificial intelligence can address some of healthcare’s enduring challenges. Technology typically follows the path of capital and resources, but advances in LLMs and AI could enable us to provide care to large populations in regions where no physician can be reached within a day’s travel. Holding ourselves accountable on equity, justice, and the promotion of global health is crucial.
There are ethical leadership questions in medicine. How do we ensure that the outputs from LLMs and future AI developments align with the individuals we aspire to be? How should we educate ourselves to ensure that the principles of healing professions remain central in care delivery? How do we find equilibrium between public health and individual health, and how does this manifest in different countries?
So when we talk about patients in under-resourced areas, and about what AI can do versus what it truly means to be human, do we need to be aware that in some regions of the world being human often means suffering and lacking access to care?
Absolutely, because, more than ever, we have the capability to address this. As we create tools that can bring about significant changes in practical and affordable manners, we must consider, “How do we accomplish this while adhering to our principles of justice, care, and respect for individuals? How do we ensure that we do not neglect those in need when we have the ability to assist?”