AI in 2026 – Boom, Bust or Backlash in Healthcare?

It was the fall of 2022 when large language models and Generative AI burst out of research labs and onto Main Street. Since then, every day seems to bring another AI breakthrough that challenges how work gets done.

In my role advising organizations on AI strategy and deployments, I see a consistent pattern among healthcare leaders: excitement about what AI could unlock, paired with exhaustion from the volume of noise, pressure, and competing claims.

Welcome to 2026.

As predictions about what AI might do next flood inboxes and social feeds, I want to ground the conversation in something more useful. Rather than forecasting outcomes, let's focus on three forces already at work, forces that will determine whether AI delivers real value in healthcare or quietly stalls.

Will 2026 be a year of boom, bust, or backlash?

The honest answer is yes.

Boom: Early Wins—and an AI Arms Race

Let’s start with what’s working.

Healthcare is seeing real, if narrow, gains from AI:

  • Ambient documentation is reducing administrative burden
  • Imaging and pathology tools are improving speed and consistency
  • Operational and revenue cycle applications are driving incremental efficiency

These are not moonshots. They are targeted solutions addressing specific pain points. And they matter.

At the same time, healthcare is now firmly in an AI arms race.

Every EHR vendor, medical device company, life sciences firm, and digital health startup is racing to declare itself “AI-native.” Roadmaps are packed with copilots, assistants, agents, and automation claims. No vendor wants to be perceived as falling behind.

That pressure is accelerating innovation—but it’s also compressing timelines, encouraging over-promising, and pushing organizations to adopt faster than they can realistically absorb.

Boom energy is real.

But it is also uneven and fragile.

Prediction: Within two years, most AI used in provider organizations will arrive embedded inside core systems and devices already in use. Intelligence will not be something teams “add on”; it will be something they inherit.

Recommendation: Understand where AI is already embedded across your vendor ecosystem and what's coming next. Engage early through advisory councils or pilots, and prepare clinicians before introducing these capabilities into workflows. AI should never arrive as a surprise.

Bust: When Pilots Multiply, but Value Doesn’t

Generative AI has dominated innovation agendas, yet only a fraction of pilots ever reach sustained production. A widely cited MIT study reports that roughly 95% of business AI pilots fail to generate measurable returns.

This is not evidence that AI lacks value.

It is evidence that many organizations lack discipline.

High failure rates are normal in early markets. Technology matures. Tools improve. But value only materializes when leaders focus on fundamentals: design, data readiness, workflow integration, and ownership.

Most AI initiatives fail not because the technology doesn’t work, but because success is never clearly defined. Projects are launched out of curiosity, vendor pressure, or fear of being left behind. Clinical impact, operational accountability, and economic value are clarified too late—if at all.

Equally damaging is the underestimation of the human systems AI enters. Healthcare work is relational, regulated, and trust-dependent. When AI is introduced without redesigning workflows, preparing staff, or clarifying responsibility, it creates friction—not relief. Adoption then stalls quietly.

Prediction: In 2026, organizations will run fewer AI pilots—but with much higher expectations. Boards and executives will require clearer evidence of clinical, workforce, or financial value before approving new initiatives.

Recommendation: Move from “fail fast” to “fail before you scale.” Define success upfront, assign ownership early, and redesign workflows in tandem with technology. AI initiatives without a credible path to value should be halted immediately.

Backlash: Fear, Workforce Anxiety, and the Trust Gap

The most underestimated force shaping AI’s trajectory in 2026 is neither technical nor financial.

It’s human.

History offers context. When automobiles first appeared, they were seen as dangerous and socially disruptive. Red Flag laws required people to walk ahead of vehicles waving flags and capped speeds at just a few miles per hour. These laws weren’t about innovation—they were about fear, control, and adjustment.

Healthcare AI is entering a similar phase.

Workforce research shows healthcare workers are among the most cautious about AI adoption, citing concerns about trust, transparency, and job impact. This caution is not irrational. Healthcare has a long history of technology being imposed rather than co-designed.

As a result, scrutiny is increasing—particularly from labor organizations and state legislators. Recent bills, including those limiting AI’s role in clinical decision-making and licensed practice, reflect not anti-innovation sentiment, but unresolved trust and knowledge gaps.

Innovation does not scale without trust.

In 2026, that scrutiny will only intensify.

As I write this, the Chair of the New York State Senate Committee on Internet and Technology just introduced a bill (S7263) to "protect patients and front-line care workers from the adverse effects of AI tools in risky or untested settings." The bill prohibits chatbots from performing the duties of licensed nurses and puts strong guardrails around the use of AI in healthcare settings.

I often write about the need for a balanced approach to defining both the “gas and guardrails” that guide AI’s use in health and medicine. Incentives and safeguards are equally important.

Prediction: Expect increased legislative activity and labor engagement around AI in healthcare throughout 2026. Such actions should not be dismissed simply as anti-innovation. They reflect something deeper: a trust and knowledge gap that needs to be closed.

Recommendation: Create durable AI value by investing in workforce and consumer education. Clinicians need clarity—not just on how AI works, but on how it supports professional judgment rather than replaces it.

From Awe to Analysis

The year ahead will test the resolve of leadership. Transformation in healthcare is rarely linear—and never clean.

Vendors will continue to showcase breakthroughs, and the hype will keep building. But 2026 is not the year for cheerleading.

It is the year for realism.

The most effective leaders are moving from awe to analysis—recognizing that AI value does not come from the technology itself, but from the opportunity it creates to rethink how work gets done.

In that sense, AI value is—and always will be—a uniquely human process.

PATIENT ADVISORY

Medika Life has provided this material for your information. It is not intended to substitute for the medical expertise and advice of your health care provider(s). We encourage you to discuss any decisions about treatment or care with your health care provider. The mention of any product, service, or therapy is not an endorsement by Medika Life.

Tom Lawry
https://www.tomlawry.com/
Tom is the Managing Director of Second Century Tech and formerly served as Microsoft’s National Director for AI in Health & Life Sciences as well as Director of Worldwide Health. He is also the best-selling author of Hacking Healthcare and the newly released Health Care Nation – The Future Is Calling and It’s Better Than You Think.
