There is a moment at the HIMSS Global Health Conference when the conversation shifts. It moves away from what artificial intelligence can do and toward how it is already being used. Not in controlled pilots or planned rollouts, but in real time, by countless clinicians making decisions under pressure. Artificial intelligence is no longer a future state. It is present, embedded and influencing care before many organizations have fully decided how it should be governed. The industry is not lacking innovation. It is navigating its consequences.
Health systems are not stepping into artificial intelligence from a place of calm or control. In the United States, spending now exceeds $4.5 trillion, with a significant share tied up in administrative work that adds complexity more than clarity. Clinicians are caring for more patients, navigating more data and making more decisions under pressure than ever before. The system is stretched. Artificial intelligence is entering at a moment when change is no longer a choice.
The discussion drew on the experience of three leaders who are not observing this shift. They are guiding it. Hal Wolf leads HIMSS, influencing digital health policy and implementation across more than 100 countries. Isaac Kohane, MD, PhD, Chair of Biomedical Informatics at Harvard Medical School, has spent four decades defining how data informs clinical care. Ran Balicer, MD, Chief Innovation Officer at Clalit Health Services, operates within one of the world’s most integrated health systems, where data and care are aligned across generations.
These are not just star panelists. They are system-wide architects. What emerged from the hour-long conversation was not a catalog of what artificial intelligence can do. It was a recognition that it is already doing more than most systems are prepared to guide and govern.

Dr. Kohane captured the tension immediately. “I think that we have to worry about the fact that we’re going both too slow and too fast.”
That statement reflects a reality many leaders feel but rarely express. Governance takes time because it must. Patient safety, validation and accountability require structure. Practice moves in real time. Clinicians do not have the luxury of waiting for perfect systems.
“They’re so desperate to do right by their patients to use other resources,” Dr. Kohane added.
That instinct is not a weakness. It reflects a commitment to doing what is right for the patient. When clinicians turn to external AI tools, they are seeking clarity, speed, and confidence in their decisions. Artificial intelligence is already present at the point of care, shaping how physicians assess information, validate thinking, and move forward. The system is not adopting AI. The system is catching up.
This creates a condition that is difficult to measure and even harder to manage. Different clinicians turn to different AI chat platforms, such as ChatGPT. Those tools produce different answers. Different assumptions shape those answers. Over time, consistency erodes. The system begins to operate with multiple definitions of truth, and with the risk of varied outcomes.
Dr. Kohane’s warning is not about misuse. It is about misguided permanence. “The worst outcome will be if the worst parts of medicine get concrete poured over it, by AI.”
Artificial intelligence does not fix a system; without leadership, it accelerates the integration of incorrect assumptions. If workflows are inefficient, they become more efficiently inefficient. If bias exists in data, it becomes more precise. If fragmentation defines care, it scales.
This is not a failure of technology. It is a mirror held up to system-wide leadership.
Hal Wolf, among the health sector’s leading policy and operational voices, grounded this moment in precedent. Health care has seen this pattern before. When internet connectivity entered hospitals, clinicians moved faster than governance. They created access where it was needed. Systems responded later. Risks were discovered after adoption.

Artificial intelligence now follows that same trajectory, though at far greater speed and with far greater consequences. Web connectivity gave quick access to information. Artificial intelligence influences how that information is interpreted and acted upon.
“We have to go faster,” Mr. Wolf said. “But there needs to be structure around it.”
That is the leadership challenge of this moment. Speed without structure creates exposure. Structure without speed creates irrelevance. The tension between the two is not something to resolve. It is something to manage continuously.
The industry’s response to artificial intelligence has been predictable. It has started where risk is lowest and return is clearest. Documentation, scheduling and revenue cycle optimization have become the entry points. These applications reduce burden and improve efficiency. They are necessary. However, they are not transformational.
The shift occurs when artificial intelligence moves into clinical decision-making. At that point, the question is no longer whether the system works. The question becomes whether it should be trusted.
Who owns a decision informed by an algorithm? How is accuracy validated? What happens when a clinician disagrees with a recommendation? These are not technical questions. They are questions of accountability. Artificial intelligence does not assume responsibility. It does not carry consequence. That remains with leadership.
Dr. Balicer reframed how the room thought about artificial intelligence. “There’s no such thing as AI neutrality. Algorithms are just opinions embedded in code.”

That insight is easy to acknowledge and difficult to operationalize. Every model reflects choices. What data is included? What outcomes are prioritized? What trade-offs are accepted? Those decisions are embedded in the system, shaping how it interprets information.
When a health system adopts an AI tool, it is not simply implementing technology. It is adopting a perspective.
At Clalit Health Services, alignment across payer and provider creates a system where priorities are consistent. Even there, external AI models introduce new assumptions. Those assumptions may not align with the system’s goals. If leadership does not define its own values, it inherits someone else’s.
This becomes real in proactive care. Artificial intelligence enables systems to identify patients at risk before they present. It allows for earlier intervention, often improving outcomes.
It also creates a new kind of pressure. “The toughest choice is what not to do,” Dr. Balicer said.
That statement deserves more attention than it receives. Health care has been built around responding to need. Artificial intelligence introduces the ability to anticipate it. When every patient can be flagged, every risk predicted and every intervention suggested, the system is no longer constrained by insight. It is constrained by capacity.
Artificial intelligence expands what can be done. It does not expand who can do it. Leadership becomes the act of choosing who does what based on validated data.
There is a moment that captures this shift. Imagine a primary care physician starting the day not with a schedule of patients who have called for appointments, but with a list generated by AI identifying individuals who are likely to experience clinical complications in the next six months. Some will develop chronic conditions. Some will require hospitalization. Some can be helped now – preventively.
The physician cannot see them all. Artificial intelligence expands what is possible. Leadership decides what is essential and permissible.
The industry often responds to complexity with activity. Organizations pilot, test and explore. They engage broadly without committing deeply. This creates motion. It rarely creates progress. Pilots are nothing more than experiments. At some point, leadership must decide what to scale, what to stop and what defines value.
Hal Wolf grounded the conversation in discipline. Without a defined, shared objective, effort becomes noise. Pilots create learning, though they often avoid decision-making. Leadership requires clarity. What problem are we solving? What outcome defines success? What are we willing to prioritize? Without those answers, artificial intelligence adds another layer of complexity to an already complex system.
Dr. Kohane brought the conversation back to the discipline of leadership. It cannot remain abstract. It must be informed by experience.
“Go and pay a few bucks and use three or four of the models… get a feel for what this does,” Dr. Kohane advised.
That is not a call for technical fluency. It is a call for leadership proximity. Leaders cannot guide what they do not understand. Artificial intelligence does not behave consistently across models. It produces different answers, shaped by different assumptions. Without direct engagement, those differences remain hidden, and leadership becomes removed from the very decisions it is responsible for guiding.
This is where many organizations hesitate. Artificial intelligence feels complex and complexity invites delegation. At this moment, delegation creates distance. Leadership is required to move closer, not further away.
Artificial intelligence is not reducing the role of leadership. It is redefining it.

This is not a gradual transition. It is already underway. Artificial intelligence is embedded in workflows, shaping decisions and influencing behavior in real time. The system is adapting whether leadership is ready or not.
The question is no longer whether artificial intelligence will shape the future of health. It will. The question is whether leadership will shape how it is applied.
Artificial intelligence will not fix health. It will scale whatever we allow it to touch. The question is whether it will scale what is best in health or what we have yet to fix.


