AI-Rx - Your weekly dose of healthcare innovation
Estimated reading time: 3 minutes
TL;DR:
JAMA study shows AI scribes save 13 minutes/day but 70% of notes contain errors (avg 2.9 per note).
Most common error type: omissions, information that simply vanishes from the record. Meanwhile, 600+ health systems have deployed these tools to 100,000+ clinicians.
The audit trails that would catch these errors? Deleted immediately. Trust in healthcare AI is dropping while deployment accelerates.
Header Image Suggestion: Create a split-screen visual:
Left side: Clean, professional AI scribe interface with "✓ Note Generated"
Right side: Same note with red highlighting showing omissions, errors
Text overlay: "70% contain errors"
Last month, JAMA published the largest multisite study of AI scribes ever conducted.
8,500+ clinicians. Five major health systems. Two years of data. Three different vendor platforms.
The headline finding?
AI scribes saved clinicians 13 minutes of EHR time per day. That's a 3% reduction.
The finding nobody's talking about?
A separate MedStar Health study found that 70% of AI scribe-generated notes contained errors, with an average of 2.9 errors per note.

The most common error type? Omissions.
Information discussed in the visit that simply vanished from the medical record.
These are the hardest errors to catch. A clinician would have to remember everything said in a 15-minute visit and notice what's missing (while seeing 20 more patients that day).
Meanwhile:
600+ health systems now use Microsoft's ambient scribe
100,000+ clinicians are using these tools
Mayo Clinic rolled it out to 2,000+ clinicians
UCSF expanded to 575+ physicians
This isn't a pilot anymore. This is infrastructure.
The Audit Trail Problem
Here's what almost nobody knows:
Most AI scribe systems delete the audio and transcript data immediately after generating the note.
Why? Liability. If the recording doesn't exist, it can't be subpoenaed. If the transcript is gone, no one can audit it later.
The data you'd need to verify error rates is designed to disappear.

The Trust Collapse
In 2024: 52% of Americans were open to AI in healthcare
In 2026: only 42% were open
The technology got better. The trust got worse.
A new Ohio State national survey shows:
Belief that AI makes healthcare more efficient: 64% → 55%
All this while the public is being told AI will "save medicine."

What Clinicians Actually Got
From the JAMA study on AI scribe time savings:
✓ Documentation time dropped 16 minutes
✓ Revenue gain: $167 per clinician per month
✗ Pajama time (after-hours documentation): didn't change
✗ Only 32% actually used the tool for at least half their visits
✗ The remaining 68% barely touched it
The workload paradox:
Hospitals used the modest time savings to add roughly half an additional patient visit per week. The time AI gives back, the system takes away.
The Questions Nobody's Answering
While we scale AI scribes to hundreds of thousands of clinicians:
❓ What's the error rate compared to human documentation?
❓ Are we systematically measuring omissions and hallucinations?
❓ Do we have audit mechanisms?
❓ Are we tracking disparities across patient populations?
The data suggests we're making procurement decisions based on vendor pitches, not independent evidence.
What You Can Do
If you're a clinician:
Review AI-generated notes carefully before signing
Report errors through your institution's reporting system
Ask leadership about error rate monitoring
If you're in hospital leadership:
Require vendors to disclose error rates and performance metrics
Implement audit processes that can verify accuracy
Don't delete audio/transcript data immediately—create audit trails
If you're a patient:
Request copies of your clinical notes
Review them for accuracy
Report errors you find
We're not against AI in healthcare. We're against deploying tools at scale without measuring what they actually produce.
A 70% rate of notes containing errors isn't acceptable… even if the tools save 13 minutes per day.
Physician-Innovator | AI in Healthcare | Child & Adolescent Psychiatrist
Continue reading here:
Topaz npj Digital Medicine commentary: https://www.nature.com/articles/s41746-025-01895-6