January 6, 2026

The year ahead will test how quickly health care organizations can turn AI ambition into real change.

The future of health care is here. Are leaders prepared for the rapid pace at which this transformation is unfolding?

By Aaron Sorensen, Senior Partner at Lotis Blue Consulting – in partnership with SullivanCotter

Originally published by Healthcare Business Today


AI is advancing at a pace that defies intuition. Most people think in linear terms: steady, incremental progress. But AI is improving exponentially. The capabilities of AI models are increasing even as computing becomes faster and cheaper – and these forces compound each other. As Jensen Huang, CEO of Nvidia, noted in a recent interview, “Every new generation of AI is not just better; it is building the next generation.” Progress is layering on itself: Huang points out that AI tools have become “100 times more powerful” in just two years.

This is why many describe the moment as a new Industrial Revolution. In the past, machines replaced physical labor. Today, AI is taking on tasks that once required human cognition and judgment. As models continue to advance, computation costs will decline, and applications will become more abundant and more deeply integrated into the foundations of daily life. Some estimates suggest that AI may soon play a dominant role in generating the first-pass summaries, drafts, analyses, translations, and technical scaffolding that underpin modern knowledge work.

And by the latter half of the decade, the convergence of AI and robotics will reshape physical work as well, from logistics and pharmacy operations to elements of clinical workflow. AI agents will coordinate tasks, initiate next steps, and remove friction across complex clinical and administrative workflows and systems. In the not-too-distant future, it seems inevitable that autonomous vehicles will pick patients up at home and drive them to appointments, and that robots will take on the role of patient access specialists and care coordinators, both inside and outside the home.

The question for health care is no longer whether AI will transform the workforce, but whether organizations will be ready for the speed of the shift already underway.

Health Care as an Industry Is Behind the Curve

Despite AI’s rapid evolution, health care has traditionally been slower to adopt it – particularly in clinical roles where the impact could be most significant. While new research shows that this may be starting to change, deeply risk-averse cultures, complex regulations, and fragmented data have created a protective posture that sometimes slows experimentation and rewards caution within the industry.

Abdelwanis and colleagues, in a recent Safety Science review, aptly captured this reality: “Organizational challenges such as infrastructure limitations, inadequate leadership support, and regulatory constraints remain significant barriers to AI adoption in clinical practice.”

Meanwhile, other industries have moved forward. A decade ago, self-driving cars were treated as implausible. Today, full self-driving capability demonstrates that iterative improvement, despite setbacks, can lead to meaningful autonomous performance. Tesla’s vehicle safety data shows that vehicles operating with Full Self-Driving experience substantially fewer collisions than national human-driving benchmarks. Progress didn’t come from avoiding risk; it came from learning through it.

Health care has struggled to build similar momentum – and for good reason. Although AI can already outperform humans in pattern recognition, summarization, and administrative processing, adoption is slowed by concerns about safety, changes to professional roles, unclear regulatory pathways, and in some cases, patient uneasiness about AI in care delivery. Payers also introduce administrative friction that could be alleviated by AI and automation. However, the deeper issue is structural. The industry must balance its commitment to patient safety with the incentives and operating models needed to accelerate responsible innovation around AI.

Changing the Narrative – From Fear to Elevating Purpose and Practice 

To move forward, health care must shift its narrative about AI. Much of today’s discourse centers on risk. Will AI make mistakes? Will roles be diminished? Will the clinician’s craft be devalued? However, this framing overlooks the real opportunity: returning people to the purpose of their work, rather than the tasks that have accumulated around it.

Huang articulated this distinction clearly in a recent interview, arguing that jobs are built around a core purpose: creating value or addressing a human need. But over time, layers of tasks accumulate, documentation grows, and administrative work expands. Eventually, the mechanics of the job overshadow its purpose and the meaning that humans derive from it. AI’s real power, Huang suggested, is not in replacing people, but in stripping away everything that was never the point of the job in the first place.

To illustrate the idea, Huang revisited a widely cited prediction made nearly a decade ago. In 2016, Geoffrey Hinton, often referred to as the “godfather of AI,” warned that people should reconsider training as radiologists because AI would soon outperform humans in image recognition. At the time, the prediction fueled concerns that AI would render the profession obsolete entirely. The irony, Huang noted, is that the opposite has happened. The number of radiologists has increased, and today, nearly every radiologist utilizes AI in some capacity.

The explanation lies in returning to purpose. The purpose of a radiologist is not to study images for their own sake; it is to diagnose disease. Image analysis is a task in service of that goal. As AI has made image interpretation faster and more precise, radiologists have been able to read more studies, handle greater complexity, and support higher clinical volumes. Better productivity has improved economics for hospitals, which in turn has driven demand for more, not fewer, radiologists.

Recent workforce projections published in the Journal of the American College of Radiology suggest continued growth in the U.S. radiology profession over the coming decades. Furthermore, meaning and purpose, as evidenced by decades of research in the psychological literature, represent the highest-order drivers of engagement and joy from work.

The lesson extends well beyond radiology. Clinicians did not go into medicine to type notes, navigate prior-authorization portals, or click endlessly through EHR menus. These tasks are artifacts of the system, not expressions of clinical purpose. When AI automates documentation, coding, summarization, scheduling, pattern matching, and protocol retrieval, clinicians can operate more consistently at the top of their license – diagnosing, interpreting, communicating, and caring.

This shift is more than just cultural – it’s structural. AI becomes the first draft of everything. The assistant works ahead of the clinician, not behind. The system tracks what matters so humans can focus on what matters most.

What to Expect in 2026 – How AI Will Reshape the Workforce

If recent years were marked by pilots and experimentation, 2026 will be the year AI becomes integrated into the everyday fabric of health care work. AI will also begin to show a step-change impact in health care by moving from information gathering and pattern recognition to reasoning and judgment. The shift will be apparent in current and new health care jobs, leadership expectations, care models, team structures, workforce strategies, learning programs, and daily workflows.

In 2026, the most visible clinical workforce impact will be in the administrative “time sinks” that divert clinicians away from patient care. Research examining physician workflow and time allocation found that documentation and administrative work consume nearly twice as much time as direct patient care.

The biggest shift is that AI will increasingly produce the first draft of clinical work (notes, summaries, and orders), while clinicians concentrate on higher-level tasks such as validation, interpretation, and decision-making.

Ambient listening and documentation technology will rapidly improve and become mainstream. Evidence is already accumulating that ambient documentation technology is associated with reduced clinician burnout and improved well-being. In practice, this means physicians and APPs will spend less time in the EHR after hours and more time with patients (and with clinical reasoning and decision-making rather than administrative clerical work).

Decision support will expand from imaging into everyday care pathways. AI’s pattern-recognition advantage will continue to strengthen diagnostics and prioritization workflows. Radiology has demonstrated earlier proof points than other specialty areas, with AI tools increasingly supporting scan prioritization, detection, and, in some cases, workflow efficiency – augmenting clinicians rather than replacing them. The workforce effect is subtle but powerful: faster reads and better triage support more favorable outcomes, change staffing models, and raise demand for clinicians who can supervise and integrate AI outputs responsibly.

Nursing and care team workflows will start to be redesigned to automate repetitive tasks. The American Hospital Association highlights that automation can free meaningful portions of repetitive work and posits that GenAI can be a productivity lever in clinical operations – especially when leaders move beyond pilots into workflow redesign. In 2026, expect to see more virtual nursing, AI-assisted triage, and predictive tools that help teams anticipate patient deterioration, manage capacity, and coordinate follow-up care. This will support clinicians as they shift toward “top-of-license” work.

AI governance will also emerge as a core clinical competency. As predictive and generative tools spread, hospitals will formalize oversight, including accuracy evaluation, bias assessment, and post-implementation monitoring, because clinical leaders will be held accountable for safe performance in production, not just pilot success.

How AI Will Impact Non-Clinical and Administrative Work in 2026

In 2026, administrative functions are expected to see faster “hard ROI” adoption because the work is often rules-based, high-volume, and measurable. The change will not simply be efficiency; it will be job redesign. Specifically, fewer roles will be responsible for shepherding transactional workflows, and more roles will focus on handling exceptions, ensuring quality, and maintaining governance.

Contact centers and patient access will shift to AI-augmented service. The AHA points to real-world examples where GenAI-augmented call centers have reduced wait times and improved first-call resolution, a preview of 2026 gains that will begin to scale: fewer rote calls handled by humans, and more complex cases escalated to people with better context and tools.

Revenue cycle capabilities will move from “processing” to “exception handling.” Administrative teams will increasingly supervise automated drafting, sorting, and routing (including claims preparation, documentation support, and appeals packets), intervening when edge cases arise. The AHA also cites how AI-enabled appeals processes reduce handling time and misrouting, exactly the kind of measurable workflow where adoption tends to accelerate.

Clinical documentation integrity (CDI) and coding support are becoming increasingly reliant on AI-driven solutions. Expect CDI functions to lean more heavily on AI assistance and embedded guidance tools as systems push for accuracy and completeness at scale. The CDI discipline emphasizes scalable approaches to documentation accuracy and improvement – fertile ground for AI copilots that reduce manual lookup and standardize best practices.

AI-focused workforce capability-building will also become formalized programs, driven by collaboration between progressive HR leaders and executive leadership. 2026 is the year many organizations will standardize baseline AI literacy – especially in areas such as privacy, transparency, monitoring, and human-in-the-loop expectations. The responsible-use principles from the Association of American Medical Colleges underscore the broader direction: human-centered use, transparency, privacy protection, and ongoing evaluation – concepts that will increasingly appear in onboarding and role expectations, extending well beyond clinicians.

In a health system where clinical talent will remain in short supply for the foreseeable future, AI can be viewed as one way to rebalance labor supply and demand. It’s an opportunity to address what continues to be the most often-cited challenge in health care: access to clinicians who practice at the top of their license.


Frequently Asked Questions

How will AI meaningfully change health care by 2026?

By 2026, AI is expected to move beyond experimental pilots and become embedded in the daily operations of health care organizations. Rather than replacing clinicians, AI will increasingly function as an augmentation layer – supporting clinical decision-making, automating administrative tasks, improving patient access, and enabling more predictive and personalized care. The most visible impact will likely be in workflow efficiency, capacity management, revenue cycle optimization, and earlier identification of clinical risk, helping organizations operate more sustainably under mounting financial and workforce pressures.

Which areas of health care are most likely to benefit first from AI?

Near-term value from AI is most likely in areas with high data volume, repeatable processes, and clear performance metrics. These include revenue cycle management, scheduling and access optimization, documentation support, population health analytics, imaging interpretation, and early warning systems for clinical deterioration. These use cases tend to deliver measurable ROI without requiring radical changes to care delivery models, making them more feasible for broad adoption.

Will AI replace physicians or other clinicians?

The article emphasizes that AI is far more likely to reshape clinical roles than replace them. Physicians and clinicians will remain central to diagnosis, judgment, empathy, and complex decision-making. AI tools are designed to reduce cognitive load, surface relevant insights faster, and handle routine or administrative tasks. The organizations that succeed will be those that intentionally design AI to complement clinical expertise rather than compete with it.

What operational challenges does AI adoption create for health systems?

AI adoption introduces challenges related to workflow integration, data quality, governance, accountability, and change management. Even highly capable algorithms can fail to deliver value if they disrupt clinician workflows, lack trust, or are poorly aligned with operational realities. Organizations must also address issues such as model oversight, bias mitigation, data privacy, cybersecurity, and ongoing performance monitoring. These challenges make AI as much an organizational transformation as a technical one.

Why is governance so critical to successful AI use in health care?

As AI systems increasingly influence clinical and operational decisions, governance becomes essential to ensure safety, fairness, accountability, and regulatory compliance. Effective governance defines who approves AI use cases, how models are validated, how performance is monitored over time, and how clinicians can escalate concerns. Without clear governance, organizations risk inconsistent adoption, loss of trust, regulatory exposure, and unintended harm.

How does AI affect clinician trust and engagement?

Clinician trust is one of the most decisive factors in whether AI delivers real value. Trust is built when AI recommendations are transparent, explainable, and demonstrably accurate, and when clinicians retain appropriate decision authority. Conversely, “black box” tools imposed without engagement or training often face resistance. Successful organizations actively involve clinicians in AI selection, testing, and refinement so tools are seen as partners rather than threats.

What skills will health care leaders need in an AI-enabled future?

Health care leaders will increasingly need hybrid skills that span clinical, operational, and digital domains. This includes understanding AI capabilities and limitations, asking the right questions of vendors and data teams, managing ethical and governance considerations, and leading workforce change. Leaders do not need to become data scientists, but they must be fluent enough to guide strategy, prioritize investments, and align AI initiatives with organizational goals.

How should organizations prioritize AI investments?

The article suggests prioritizing AI initiatives that address the organization’s most pressing pain points rather than chasing novelty. High-value opportunities typically align with access constraints, margin pressure, clinician burnout, or quality variation. Clear success metrics, strong executive sponsorship, and phased implementation help ensure that AI investments translate into measurable impact rather than isolated proofs of concept.

What role does data quality play in AI success?

AI systems are only as effective as the data that feeds them. Inconsistent documentation, fragmented systems, biased datasets, or outdated information can significantly degrade model performance. Organizations must invest in data governance, interoperability, and standardization to support reliable AI outputs. Improving data quality often delivers benefits beyond AI by strengthening analytics, reporting, and decision-making overall.

What distinguishes organizations that succeed with AI from those that struggle?

Organizations that succeed with AI treat it as a strategic capability rather than a standalone technology. They align AI initiatives with enterprise priorities, invest in governance and change management, engage clinicians early, and continuously measure impact. Those that struggle often focus narrowly on tools without addressing workflow integration, trust, leadership alignment, or long-term sustainability.

What does the future of AI-enabled health care ultimately look like?

By 2026 and beyond, AI-enabled health care is likely to be quieter and more embedded than today’s hype suggests. Success will look less like dramatic disruption and more like incremental gains – smoother operations, more proactive care, reduced clinician burden, and better use of scarce resources. Over time, these cumulative improvements can meaningfully reshape how care is delivered, financed, and experienced by both patients and clinicians.
