
From AI Excitement to Execution: Why Health Leaders Must Now Master the “How”

Artificial intelligence is advancing in health care faster than almost any other technology in modern medical history. According to research from McKinsey & Company, the technology could generate as much as $100 billion annually across healthcare systems worldwide through improved clinical decision support and workflow efficiency, as well as advances in drug development and population health analytics. The promise is extraordinary, and the pace of implementation shows little sign of slowing.

History, however, offers a useful caution. Breakthrough technologies in medicine rarely achieve their full potential simply because they exist. Their real impact depends on whether the institutions responsible for health-care delivery know how to adopt them wisely, integrate them responsibly and align them with their mission to improve patient health.

Artificial intelligence now stands at that same threshold. The industry has moved beyond fascination with what algorithms can do and entered a more demanding phase: determining how these tools should be evaluated, governed, and integrated into the environments where care is delivered. At the same time, some health professionals are turning to AI not to augment their knowledge but on the assumption that its output is already patient-care ready.

Across the health ecosystem, leaders are discovering that the most important questions about artificial intelligence are not technological. They are organizational, ethical and operational. Which AI systems genuinely improve clinical decision-making? Which tools strengthen the efficiency of hospitals and health systems? Which innovations introduce complexity without delivering measurable benefit?

Answering those questions requires a perspective that bridges policy leadership, real-world care delivery, and the scientific foundations of biomedical informatics. That convergence of experience sits at the center of a “Views From the Top” mainstage discussion at the HIMSS Global Health Conference & Exhibition, where some 35,000 leaders whose work spans the global health ecosystem will examine how organizations can recognize the true value proposition of artificial intelligence applications before embedding them into health-care systems.

The perspectives shaping this discussion reflect three essential dimensions of responsible artificial intelligence in health: governance frameworks that guide innovation, operational insights from large-scale health care delivery, and scientific rigor grounded in biomedical informatics. Together, these vantage points illuminate the path from technological promise to practical value.

Governing Innovation in a Rapidly Changing Health Ecosystem

Digital transformation in health rarely succeeds simply because technology exists. It succeeds when organizations develop leadership frameworks capable of evaluating innovation, managing risk and aligning new tools with patient-centered goals.

Few leaders have observed the evolution of digital health across as many national systems and institutional environments as Hal Wolf, president and chief executive officer of HIMSS; Ran Balicer, MD, PhD, chief innovation officer of Clalit Health Services; and Isaac Kohane, MD, PhD, chair of biomedical informatics at Harvard Medical School. The three will step onto the mainstage at HIMSS to share their “View from the Top” in a session titled “Recognizing the ‘Value Proposition’ Criteria While Selecting AI Applications.”


Through his work with global government health ministries, hospital networks, and technology innovators worldwide, Wolf has consistently emphasized that technological progress must be anchored in governance and trust.

“Digital health transformation is not about technology alone. It is about leadership, governance, and the trust that allows innovation to improve care,” Wolf has said in discussions about global digital health transformation.

Artificial intelligence intensifies this leadership challenge because its influence extends far beyond traditional clinical tools. AI systems increasingly operate across multiple layers of healthcare delivery. Some applications assist clinicians by analyzing medical data or suggesting treatment options. Others function within hospitals’ and health systems’ operational infrastructure, helping manage patient flow, prioritize diagnostic reviews, and allocate scarce resources.

These operational algorithms rarely capture headlines; however, they shape the environment in which health care is delivered. Decisions about which cases are reviewed first, how clinicians allocate their attention, and how health systems manage capacity can profoundly influence patient outcomes.

For leaders responsible for health systems, artificial intelligence cannot be treated as simply another technological upgrade. It must be evaluated through governance structures capable of understanding how algorithms function, what assumptions shape their recommendations, and how their use aligns with institutional priorities.

Without that oversight, innovation risks amplifying complexity rather than improving care. Instead of informing, it can spread misinformation.

Aligning Artificial Intelligence With the Values of Medicine

Governance provides the policy foundation for responsible adoption of artificial intelligence, but real-world implementation reveals a second challenge: ensuring that AI systems operate effectively within healthcare delivery itself.

Large population health systems increasingly use advanced analytics to anticipate risk, manage chronic disease, and allocate clinical resources across diverse communities. Within these environments, artificial intelligence is no longer a theoretical innovation. It is already influencing how health organizations prioritize patients, coordinate care and deploy limited resources.

That operational perspective is embodied by Ran Balicer, MD, PhD, of Clalit Health Services, one of the world’s most advanced data-driven health systems. Clalit’s integrated infrastructure connects hospitals, clinics, and community health programs through longitudinal datasets that support predictive analytics at national scale.

Experience within such systems reinforces an important insight: artificial intelligence models do not function independently of human judgment. They reflect priorities embedded in their design and the assumptions guiding their deployment.

“Algorithms are opinions embedded in code,” Balicer has observed in discussions about the role of artificial intelligence in population health.

In practice, this means that AI systems interpret clinical data through frameworks shaped by human choices. The way a model defines risk, prioritizes cases, or recommends interventions reflects decisions about what matters most within a healthcare environment.

Those decisions carry ethical implications. When artificial intelligence helps determine which patients receive immediate attention or which cases are escalated for further review, transparency about how algorithms function becomes essential to maintaining trust among clinicians and patients alike. The scientific frontier of health-care AI reinforces that concern.

Isaac Kohane, MD, PhD, who co-authored the Institute of Medicine report on precision medicine that became the template for national efforts, has spent decades exploring how machine learning can advance medicine while preserving the judgment that defines clinical practice. His research emphasizes that artificial intelligence in healthcare must align with the ethical traditions and professional responsibilities of medicine.

“AI systems in medicine must ultimately reflect the values of the profession they serve,” Kohane has written in discussions about AI alignment in biomedical informatics.

This perspective highlights a crucial distinction between technological capability and clinical responsibility. Many AI models entering healthcare environments were originally designed for broader computational tasks rather than the nuanced realities of patient care. Medicine operates within a landscape shaped by uncertainty, empathy, and accountability, and technologies introduced into that environment must reflect those values.

Ensuring that artificial intelligence aligns with the principles guiding health-care delivery, therefore, represents one of the most important scientific and ethical challenges facing the future of health.

The Discipline Required to Make Innovation Matter

The health sector has experienced waves of technological enthusiasm before. Electronic health records promised seamless information exchange but introduced administrative burdens on health professionals when implemented without thoughtful workflow design. Data analytics promised unprecedented insight but sometimes led to fragmentation when systems failed to communicate across institutions.

Artificial intelligence now stands at a similar moment in the evolution of health technology.

Its capabilities in supporting clinical decision-making are extraordinary, yet realizing them will require disciplined leadership to evaluate, integrate and govern AI tools within health-care delivery systems. Health leaders must learn to ask deeper questions before embracing the next algorithmic breakthrough. What problem does this system truly solve? How does it strengthen clinical practice? What assumptions guide its recommendations? How does its use advance the mission of improving patient health?

These questions move the conversation beyond technological novelty toward operational practicality. It’s among the many reasons these three global leaders step to the HIMSS stage together.

Artificial intelligence will undoubtedly reshape the health ecosystem in the years ahead. Its long-term impact, however, will not be determined solely by the sophistication of algorithms or the speed of technological progress. Beyond learning to leverage AI, ChatGPT and other large language models, users will need heightened cognitive awareness of what these tools actually produce.

It will be determined by whether the health community develops the discipline and ability required to translate innovation into systems that strengthen care, support clinicians and improve the health of the populations they serve.

The real story of artificial intelligence in health is no longer about what machines can do. It is about how wisely the health sector chooses to use them.

Gil Bashe, Medika Life Editor

Health advocate connecting the dots to transform biopharma, digital health and healthcare innovation | Managing Partner, Chair Global Health FINN Partners | MM&M Top 50 Health Influencer | Top 10 Innovation Catalyst. Gil is Medika Life editor-in-chief and an author for the platform’s EcoHealth and Health Opinion and Policy sections. Gil also hosts the HealthcareNOW Radio show Healthunabashed, writes for Health Tech World, and is a member of the BeingWell team on Medium.

