Used wisely and to its fullest (positive) potential, augmented intelligence can extend the empathy quotient, making the wise healer more expansive. AI can unlock tremendous insight and perspective for the savvy, self-aware health provider.
But what of the company that prioritizes this powerful technology to become faster and more efficient – using tools like ChatGPT and GenAI primarily to bolster the bottom line without adding value? Without policy bumper guards, AI might become a 21st-century dehumanizing sharp pencil in dispassionate bureaucrats’ hands.
“Digital technologies are changing the healthcare sector at an unprecedented pace,” notes Stan Kachnowski, PhD, HITLAB chair and host of the organization’s ongoing Innovators Summit 2023, which brings together digital health leaders. “It is a priority area for the health ecosystem at large.”
At the 2023 Fall HITLAB New York City Conference, Downstate Health Sciences University CEO Dr. David Berger offered counsel on which functions medical institutions can shift to AI-driven tools. Berger shared a simple, thoughtful chart developed by Vineeta Agarwala, MD, PhD, a general partner at the venture capital firm Andreessen Horowitz (known as a16z).
The Timeless Tussle Between Bureaucracy and Humanity
Vice President Kamala Harris spoke recently in London about the double-edged sword of AI in health delivery and why the government will need to create business guidelines:
“AI has the potential to do profound good – to develop powerful new medicines to treat and even cure the diseases that have for generations plagued humanity, to dramatically improve agricultural production to help address global food insecurity, and to save countless lives in the fight against the climate crisis. But just as AI has the potential to do profound good, it also has the potential to cause profound harm.”
Why is the Veep worried? Is she overreacting, or reacting to people’s irrational fears? Is the administration in Washington sending a message to big business that the executive branch sees potential risk to people’s health via AI? This hand-wringing is more than bluster. Just look at AI’s entry into the payer side of access to care: perhaps an algorithm run amok?
UnitedHealthcare and Cigna Healthcare are already facing consumer backlash over claims that these payer behemoths use automated systems to deny beneficiaries needed medical care. The allegations have ignited a broader conversation about whether insurers rely too heavily on artificial intelligence algorithms when processing claims or prior authorization requests.
Wounded reputations, customer outcries, and diminished care reinforce why companies must not wire AI into the corporate decision-making motherboard at the expense of the human touch. Indeed, having already touched the hot stove, Cigna stepped forward to acknowledge that human connection is key to maximizing AI’s potential to advance the healing process.
Humanity in the Loop
“Above all else, a human must be in the loop,” suggests Andy Fanning, vice president of intelligent automation, AI enablement, and business transformation at The Cigna Group, in a statement. “Our AI solutions must be used to augment, never replace, the human experience — allowing experts to spend more time in the areas where they can apply their expertise.”
Ever evolving, the health sector is poised to take a giant leap forward by inviting augmented intelligence and large language models like ChatGPT into the fold as technology partners in care decisions, clinical trial design, drug discovery, health information, patient diagnosis, medical supply manufacturing, and, yes, the dicey decision-making process of payer insurance claims. The operative word must remain ‘augment’ – with these new technologies never fully replacing the human element in service of technological potential.
All concerned observers hope smart technologies will advance diagnoses, treatment plans, and preventive care, improve the accuracy of medical records, and refine cost management. Machine learning algorithms – like newbies on the job scene – require mentors. Technology cannot work in a vacuum. People must keep their hands on the wheel when creating its algorithms.
As the adage goes, “Garbage in, garbage out.” AI applications are only as sound as the cognitive sophistication of their mentors. But who is doing the mentoring? Pencil pushers? Bean counters? Compassionate healers? AI and ChatGPT have a prominent place in healthcare, but the power they hold over the delicate balance between humanity in care and corporate objectives remains uncharted territory. Do we succumb to our fears? Do we embrace inevitable advances, no matter the cost? That remains a “people” call.
The US and EU are walking the fine line between spurring intelligent innovation and protecting patient interests as they grapple with regulating GenAI. The outcome of these deliberations will be pivotal in shaping the future of health, ensuring the technology’s safe and responsible integration into medical practice while safeguarding patient well-being and data privacy.
Change is a Perpetual Threat
Always looking around the corner at innovation’s impact on humanity, John Nosta, a go-to voice on global health and technology, writes in Psychology Today:
“The intersection of AI and human cognition is as much about philosophy and ethics as it is about technology. If history is any guide, every significant technological advancement brings with it societal trepidation. The printing press, electricity, and the internet all were met with a mix of awe and apprehension. The introduction of AI into our cognitive domain is no different.”
For centuries, change has threatened the status quo – how we live and how we earn our livelihoods. Blacksmiths faced the horseless carriage with understandable fear. Today, toll booth clerks and fast-food workers see automated tolling and ordering systems as a threat. Innovation often creates employment disruption. It also helps us redeploy talent in new ways.
Nearly 60 years ago, physicians fought the legislation establishing Medicare and Medicaid with full-page New York Times ads. For many, it signaled the rise of “socialized medicine” and the end of their decision-making power. Today, those programs are embraced by Americans – regardless of political affiliation – as a social benefit and a fundamental right. Now AI is cast as health professionals’ new nemesis: it might question their clinical calls or offer added perspective. As baseball Hall of Famer Yogi Berra said, “The future ain’t what it used to be.”
We must not draw lines of entrenchment in the necessarily symbiotic relationship between empathy and augmented intelligence. The yearning for healing and the desire to heal have always called upon human touch, compassion, and knowledge. Of any technological innovation we’ve seen to date, AI offers the greatest potential to serve as an extension of our intellectual, experiential, and emotional capabilities.
ChatGPT and GenAI provide tremendous value, but they should be tools to enhance and complement human capabilities rather than replace them.
Empathy is at the Heart and Soul of Health
To understand the profound significance of human empathy in health, we must first acknowledge its irreplaceable role. Empathy is the cornerstone of effective patient care. It’s the ability to understand and recognize the feelings of the “other,” offering solace, compassion, and a comforting presence when people and their families face medical challenges.
When someone receives a life-altering diagnosis or undergoes a challenging medical procedure, the human touch provides reassurance and support. Empathetic clinicians, with their ability to connect on an emotional level, instill trust, alleviate fear, and foster a healing connection. No machine, no matter how intelligent, can replicate the power of human empathy.
AI and ChatGPT as Extensions of Human Capabilities
But AI, and ChatGPT in particular, may take a front seat in healthcare. These technologies serve as invaluable extensions of our cognitive abilities, excelling at tasks that require vast data analysis, pattern recognition, and information retrieval. By handling the data-intensive aspects of health synthesis, they free health professionals to focus on what they do best: providing amazing medical (and compassionate) care.
AI is not a competitor but a collaborator. It can sift through mountains of medical data, identify trends, and suggest potential treatment options, enabling healthcare providers to make more informed decisions. AI can rapidly process radiological images, analyze genetic data, consider the possibilities of undiagnosed rare diseases, and even predict disease outbreaks, all of which contribute to more accurate diagnoses and better patient outcomes.
Tom Lawry – former national director of AI at Microsoft, author of the business bestseller Hacking Healthcare, and now an advisor to health leaders – is one of the world’s most forward-looking thinkers on AI in medicine. He suggests:
“Generative AI is the latest flavor of many flavors of what’s known as artificial intelligence. So, I like keeping things simple. So to your point, let’s assume AI is really related to IT systems that sense, comprehend, act, and learn. Probably more importantly, it’s intelligence demonstrated by software with the ability to depict or mimic human brain functions. And I want to emphasize mimic human brain functions, not replace.”
Acknowledging this delicate dance is critical to harnessing the full potential of ChatGPT and GenAI in health. AI can enhance health efficiency and accuracy. Yet, it should always work in conjunction with human empathy. The two can harmoniously coexist, with AI as a valuable tool in the healthcare provider’s toolkit.
We must ensure that technology remains a faithful servant of empathy, not a quick and efficient replacement for it. AI can handle repetitive tasks and hone data-driven insight – it can scrape data from unwieldy electronic medical records. But the human touch provides emotional support, compassion, and a connection that cannot be replicated. ChatGPT can make a health consumer comfortable mining information about sensitive and potentially embarrassing questions. But getting medical help requires a relationship between healer and seeker.
Cognitively Sharp Physicians and the Future of Patient Care
Cognitively sharp individuals who embrace AI and ChatGPT as extensions of their abilities can leverage these technologies most effectively. More and more medical schools will need to shift their curricula to focus on the psychological power of empathy in healing and patient adherence. More and more physicians – with their iconic white coats and rank-signaling stethoscopes – will need to hone their people skills to secure their positions as medical superstars.
AI will assist in diagnosing complex diseases, suggesting personalized treatment plans, and even providing real-time information during surgical procedures. When healthcare professionals integrate AI into their workflow, they become empowered with a wealth of data-driven insights, enabling them to make more precise decisions and deliver better patient care.
The Needed Partnership
The connection between human empathy and AI is not a zero-sum game. The essence of humanity in health delivery will continue to lead the way, with AI acting as a supportive ally. The health industry will thrive in this ever-evolving landscape by striking a harmonious balance between providers’ emotional intelligence and AI’s fact-finding possibilities. United, human and machine will drive innovation, improve patient outcomes, and ensure humanity remains at the heart of health. In the delicate dance between human empathy and AI, the patient must always be the center of our focus and the defining voice that guides our response.