In partnership with

AI-Dx - Your weekly dose of healthcare innovation

Estimated reading time: 4 minutes

Your AI tools are only as good as your prompts.

Most people type short, lazy prompts because writing detailed ones takes forever. The result? Generic outputs.

Wispr Flow lets you speak your prompts instead of typing them. Talk through your thinking naturally - include context, constraints, examples - and Flow gives you clean text ready to paste. No filler words. No cleanup.

Works inside ChatGPT, Claude, Cursor, Windsurf, and every other AI tool you use. System-level integration means zero setup.

Millions of users worldwide. Teams at OpenAI, Vercel, and Clay use Flow daily. Now available on Mac, Windows, iPhone, and Android - free and unlimited on Android during launch.

TL;DR

  • Stanford-Harvard ARISE network published 10 clinical AI predictions for 2026

  • Three concerning: AI arms race (insurers vs providers), 90% AI-generated notes losing clinical reasoning, first malpractice lawsuit

  • Progress: AI agents expanding access, human-machine collaboration research, prospective deployment results

  • Key tension: Deployment outpaces regulation; administrative AI growing faster than clinical AI

  • Bottom line: AI becomes ubiquitous but not always in ways that improve outcomes

The Stanford-Harvard ARISE research network just published 10 predictions for clinical AI in 2026.

Some point toward genuine progress. Others reveal deployment patterns that benefit vendors more than patients.

Together, they sketch a future where AI becomes ubiquitous in healthcare - but not always in ways that improve outcomes.

The 10 predictions:

  1. First malpractice lawsuit where AI plays significant role

  2. AI agents for urgent care hit mainstream

  3. Labeling systems for clinical data integrated into workflows

  4. Health systems locked in AI-bot arms race

  5. FDA explores new regulatory mechanisms but makes no significant progress

  6. More people getting medical advice from AI than from humans

  7. 90% of clinical note text is AI-generated (ambient scribe)

  8. Continued results from prospective deployments of clinical AI CDS

  9. Frontier models outperform humans; research focuses on collaboration

  10. Scribe market growth; capabilities expand to workflow tasks

Here are three predictions that reveal deeper tensions:

Prediction #4: The AI Arms Race

Insurers deploy AI to deny claims faster. Health systems deploy AI to appeal denials faster.

Neither side improves patient care. Vendors sell tools to both sides.

This is perhaps the most troubling prediction because it describes AI optimizing a system that doesn't serve patients.

The administrative burden of prior authorizations, claims denials, and coding optimization already consumes enormous healthcare resources. AI that accelerates this arms race makes the system more efficient at being dysfunctional.

The AI succeeds technically; it does exactly what it's designed to do. But the system it's optimizing doesn't benefit patients.

Prediction #7: Documentation Without Reasoning

When 90% of clinical notes are AI-generated, we face a documentation crisis of a different kind.

The notes may be complete, detailed, billable. But they may not reflect actual clinical reasoning.

Ambient AI captures what was said during the encounter. It doesn't necessarily capture:

  • What the clinician was thinking but didn't verbalize

  • The differential diagnoses considered and ruled out

  • The clinical judgment that led to specific decisions

  • The uncertainties and contingencies informing the plan

Documentation optimized for AI generation risks becoming documentation optimized for AI consumption - structured and complete, but potentially disconnected from the messy reality of clinical reasoning.

Prediction #1: Liability Without Clarity

Not "if" but "when." As clinical AI deployment accelerates, liability exposure grows.

The first malpractice case where AI played a significant role will set legal precedent for how courts allocate responsibility between clinicians, health systems, and AI vendors.

The legal framework doesn't exist yet. The first case will establish it.

Will clinicians be held responsible for AI errors they couldn't detect? Will health systems face liability for deploying unvalidated tools? Will AI vendors bear responsibility for harmful outputs?

These questions don't have clear answers. The legal framework will be established by litigation rather than thoughtful policy development.

The progress amid the problems:

Not all predictions are troubling.

AI agents for urgent care could expand access if deployed thoughtfully.

Prospective deployment results will help us understand what actually works in practice, as opposed to what works in validation studies.

Research focusing on human-machine collaboration addresses the right question: not whether AI is better than humans, but how they work together most effectively.

Scribe capabilities expanding to downstream workflow tasks could reduce administrative burden if integrated well.

My take:

These predictions reveal several troubling patterns:

Deployment is outpacing regulation and legal framework development. We're implementing AI systems whose liability implications and regulatory requirements aren't yet clear.

Administrative AI is growing faster than clinical AI. More investment and deployment energy goes into tools that optimize billing, coding, and claims processing than into tools that directly improve clinical decision-making.

Documentation automation is creating questions about clinical reasoning transparency. When AI generates notes, how do we ensure clinical reasoning is captured and available when needed?

The technology is advancing faster than our frameworks for deploying it responsibly, regulating it effectively, and ensuring it serves patients rather than just optimizing existing dysfunction.

The question for healthcare organizations isn't whether AI will transform clinical practice (it already is).

The question is whether we'll guide that transformation toward outcomes that actually improve care, or let it unfold driven by vendor interests and administrative optimization rather than clinical benefit.

What matters most:

We're past the hype phase where everyone agreed AI would change everything. We're before the clarity phase where we know exactly what it does well.

That middle phase is uncomfortable. But it's where the real work happens.

ARISE's predictions force us to confront an uncomfortable reality: AI deployment in healthcare is being shaped more by what's profitable to vendors and what reduces administrative burden than by what improves patient outcomes.

If we don't change that trajectory, we'll get exactly the healthcare AI system these predictions describe - ubiquitous, efficient at administrative tasks, and largely disconnected from the clinical reasoning and patient care we actually need it to support.

Dr. Bhargav Patel, MD, MBA

Physician-Innovator | AI in Healthcare | Child & Adolescent Psychiatrist

P.S. Which prediction concerns you most? Or which one do you think will have the biggest impact on your practice?

Hit reply and let me know… I'm curious which of these 10 predictions resonate most with clinicians in different specialties.
