Today’s provocative discussions about technology’s replacement of the physician are both interesting and relevant. But the context for this discussion may extend beyond the walls of the hospital to include the courtroom.
Medical malpractice “standards of care” are the generally accepted norms and practices that healthcare professionals are expected to follow when providing medical treatment to patients. These standards are based on the medical community’s collective knowledge and experience, as well as on established medical guidelines, protocols, and best practices. They form the basis for our expectation of quality care, or at least average care.
The specific standards of care that apply to a particular case depend on various factors, such as the patient’s medical condition, the nature of the treatment being provided, and the relevant laws and regulations. For example, a doctor performing surgery would be expected to follow established surgical protocols and guidelines, while a psychiatrist treating a patient with depression would be expected to follow established guidelines for the treatment of mental illness. In any instance, it often is a human standard: a personal judgment or interpretation of how technology should factor into care.
This leads to a fundamental medical and legal question: How do we define and debate the evolving standards of care in the context of available medical technology, particularly artificial intelligence and platforms like ChatGPT?
The first and critical perspective is liability for using artificial intelligence and machine learning in medicine. While this area is certainly in flux, given the rapid emergence of GPT, the concerns are significant and relevant. Even papers published only months ago fall short of a comprehensive and timely discussion. A 2021 paper provides a succinct analysis.
The relatively unsettled state of AI/ML and its potential liability provide an opportunity to develop a new liability model that accommodates medical progress and instructs stakeholders on how best to respond to disruptive innovation.
These issues are arriving more quickly than expected, and the inertia of progress will demand action. But it’s essential to look beyond this point in time and consider the trajectory of AI in medicine. There’s little doubt that AI will become “augmented intelligence” that will expand the cognitive domain of all clinicians. Artificial intelligence and language models like GPT have the potential to advance the practice of medicine by helping clinicians make more accurate and informed decisions. AI can assist in medical imaging, clinical diagnosis, and other areas where data analysis is critical.
In fact, one can argue that AI in medicine is here. Even Harvard Business School recognizes the fundamental reality of AI in medicine today.
Medical artificial intelligence (AI) can perform with expert-level accuracy and deliver cost-effective care at scale. IBM’s Watson diagnoses heart disease better than cardiologists do. Chatbots dispense medical advice for the United Kingdom’s National Health Service in lieu of nurses. Smartphone apps now detect skin cancer with expert accuracy. Algorithms identify eye diseases just as well as specialized physicians. Some forecast that medical AI will pervade 90% of hospitals and replace as much as 80% of what doctors currently do.
Radiology is another good example of where AI is driving significant changes. From workflow to post-scan image reconstruction, radiology is at the leading edge of how AI-based medicine is shifting from an option to an imperative. And today, there are over 500 FDA-approved AI algorithms, with the vast majority in radiology.
A fundamental question emerges: what is the expectation of care given a growing body of evidence for the utility of AI? Should every differential diagnosis have a “computer assist” as part of the process? Should the distant lub-dub of a heart sound live only in the ear of the clinician, or be cognitively amplified by technology? And most importantly, what are the consequences of failing to leverage lifesaving technology that has clinical validation and availability?
Today, new questions will be asked about the best care, the available care, and the standard of care that medicine will be held to. New standards and expectations for excellence will challenge the core capabilities of the practice of medicine. The cognitive domain of the clinician, once held as sacrosanct, will come under scrutiny as AI offers the accuracy and speed that are fundamental to care. The path is defined by ambiguity. But what may be most important about that early path are the guardrails that are put in place for all stakeholders.