The Builder’s Notes: The Healthcare AI Projects That Actually Worked (And Why No One Talks About Them)
Everyone shares their AI lighthouse projects. Nobody shares the boring workflows that actually delivered ROI. Here are 5 unglamorous AI wins that paid back within 6 months.

Healthcare conferences are full of AI success stories that never happened.
The keynote speaker shows slides about their “AI-powered clinical transformation.” Six months later, the project is quietly shelved. The press release stays up. The reality gets buried.
I’ve sat through dozens of these presentations. The pattern is consistent: impressive demos, inspiring vision, zero discussion of whether it actually worked in production.
What you don’t hear about:
The hospital that spent $800K on an AI sepsis early warning system that physicians ignored because it generated 40 false alerts per day.
The health system that deployed AI prior authorization and discovered it auto-approved treatments that should have required medical director review.
The academic medical center that built an AI discharge summary generator that produced clinically accurate notes in the wrong voice, wrong format, and wrong level of detail for their workflow.
These failures don’t make it to conference stages.
But there’s another category that also doesn’t make it to conferences: the boring AI projects that actually worked.
No sexy use case. No breakthrough innovation. No “AI transforms healthcare” narrative.
Just unglamorous workflow automation that paid back its investment within 6 months and kept running for years.
Here are five of them.
Project 1: Insurance Eligibility Verification (Boring, Profitable)
What it does: Checks patient insurance eligibility and benefit details automatically when appointments are scheduled.
Why it’s unglamorous: This isn’t clinical AI. It’s not saving lives. It’s just automating a phone call that registration staff used to make manually.
Why nobody talks about it: You can’t write a press release about “AI checks if patient has active insurance.” It doesn’t sound innovative. It sounds like basic automation that should have existed 20 years ago.
Why it actually worked:
Problem it solved:
- Registration staff spending 15–20 minutes per patient calling insurance companies
- Patients showing up for appointments with inactive insurance
- Surprise bills when insurance doesn’t cover expected services
- Claim denials due to eligibility issues discovered after service delivery
Implementation:
- AI agent calls insurance verification APIs automatically when appointment scheduled
- Checks: active coverage, copay amount, deductible status, prior auth requirements
- Flags issues before appointment (inactive coverage, wrong plan on file, prior auth needed)
- Updates patient account in EHR with current benefit details
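To show how small this automation really is, here is a minimal sketch of the eligibility check fired at scheduling time. The clearinghouse endpoint, payload fields, and flag rules below are illustrative assumptions, not any specific vendor's API; real integrations typically run an X12 270/271 eligibility transaction through a clearinghouse or payer API.
```python
# Minimal sketch of an automated eligibility check triggered at scheduling.
# The URL, payload fields, and EHR write-back are hypothetical placeholders.
import requests

CLEARINGHOUSE_URL = "https://clearinghouse.example.com/eligibility"  # placeholder

def check_eligibility(patient: dict, appointment: dict) -> dict:
    """Query coverage for the scheduled service and return any issues to flag."""
    response = requests.post(
        CLEARINGHOUSE_URL,
        json={
            "member_id": patient["member_id"],
            "payer_id": patient["payer_id"],
            "date_of_service": appointment["date"],
            "service_type": appointment["service_type"],
        },
        timeout=30,
    )
    response.raise_for_status()
    benefits = response.json()

    issues = []
    if not benefits.get("active_coverage"):
        issues.append("Coverage inactive on date of service")
    if benefits.get("plan_id") != patient.get("plan_id_on_file"):
        issues.append("Plan on file does not match payer response")
    if benefits.get("prior_auth_required"):
        issues.append(f"Prior auth required for {appointment['service_type']}")

    return {
        "copay": benefits.get("copay"),
        "deductible_remaining": benefits.get("deductible_remaining"),
        "issues": issues,  # surfaced to registration staff before the visit
    }
```
The point of the sketch: there is no model to train and no clinical judgment involved, just an API call and a handful of flag rules written back to the patient account.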
Metrics that mattered:
- Registration time per patient: 15 min → 3 min (80% reduction)
- Appointments with insurance issues discovered at check-in: 12% → 2%
- Claim denials due to eligibility issues: 8% → 1%
- Revenue cycle improvement: $2.3M annually (fewer denials, faster payment)
Cost:
- Implementation: $120K (vendor SaaS + integration)
- Annual licensing: $60K
- Payback period: 4 months
- 5-year ROI: 2,100%
Why it worked when sexy projects failed:
- Solved a clear, measurable problem: Registration staff were overwhelmed, insurance issues were constant
- Required no behavior change: Staff still verified insurance, just faster with better data
- Had obvious success metrics: Time saved, denials reduced, revenue improved
- Wasn’t trying to replace clinical judgment: Just automated a phone call
- Worked with existing EHR: Simple API integration, no custom development
The lesson: The best AI projects aren’t the ones that sound revolutionary. They’re the ones that eliminate 15 minutes of annoying work that everyone hated doing anyway.
Project 2: No-Show Prediction and Outreach (Unglamorous, Effective)
What it does: Predicts which patients are likely to no-show for appointments and automatically reaches out to confirm or reschedule.
Why it’s unglamorous: Predicting no-shows isn’t clinically interesting. It’s operational efficiency. Healthcare AI thought leaders don’t write about operational efficiency.
Why nobody talks about it: Because the headline “AI reduces no-shows by 30%” doesn’t get clicks. Everyone assumes this already exists. (It doesn’t, in most hospitals.)
Why it actually worked:
Problem it solved:
- 15–20% no-show rate across outpatient clinics
- Lost revenue from unused appointment slots
- Wasted provider time (physician ready, patient doesn’t show)
- Patients who need care not getting it
Implementation:
- Model trained on historical no-show data: patient demographics, appointment type, day/time, past behavior, distance to clinic
- Predicts no-show probability for each scheduled appointment
- High-risk patients (>40% probability) receive automated SMS/call 48 hours before appointment
- Offers easy rescheduling option via text
- Fills newly-opened slots from waitlist automatically
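The modeling piece is deliberately plain. Here is a minimal sketch, assuming a gradient-boosted classifier and a handful of illustrative feature columns; the hospital's actual feature set and pipeline are not public, so treat the names and the 0.40 threshold as assumptions taken from the description above.
```python
# Minimal sketch of the no-show model: train on historical appointments,
# score upcoming ones, and flag anything above the outreach threshold.
# Feature names and the threshold are illustrative assumptions.
import pandas as pd
from sklearn.ensemble import HistGradientBoostingClassifier

FEATURES = ["lead_time_days", "prior_no_show_rate", "distance_miles",
            "day_of_week", "hour_of_day", "is_new_patient"]

def train_model(history: pd.DataFrame) -> HistGradientBoostingClassifier:
    """history: one row per past appointment with a binary no_show label."""
    model = HistGradientBoostingClassifier()
    model.fit(history[FEATURES], history["no_show"])
    return model

def flag_for_outreach(model, upcoming: pd.DataFrame,
                      threshold: float = 0.40) -> pd.DataFrame:
    """Return upcoming appointments whose predicted no-show risk exceeds the threshold."""
    upcoming = upcoming.copy()
    upcoming["no_show_risk"] = model.predict_proba(upcoming[FEATURES])[:, 1]
    # These rows feed the automated SMS/call reminder 48 hours out.
    return upcoming[upcoming["no_show_risk"] > threshold]
```
Quarterly retraining, mentioned later in this section, is just re-running `train_model` on the newest history.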
Metrics that mattered:
- No-show rate: 18% → 12% (33% reduction)
- Provider utilization: 73% → 81% (more patients seen per day)
- Patient satisfaction: Up 8 points (patients appreciated reminder + easy reschedule)
- Revenue impact: $1.8M annually (more appointments kept, waitlist patients seen sooner)
Cost:
- Implementation: $80K (vendor platform + SMS integration)
- Annual licensing: $45K
- Payback period: 3 months
- 5-year ROI: 1,800%
The surprising insight:
They tracked why patients no-showed after intervention. Top reasons:
- Forgot (37%) — reminder helped
- Transportation issue (22%) — reminder allowed earlier reschedule
- Feeling better (18%) — offered telehealth alternative
- Couldn’t get time off work (12%) — offered evening/weekend slots
- Other (11%)
The AI didn’t just predict no-shows. It gave the hospital time to solve the underlying problem before appointment time.
Why it worked:
- Clear success metric: No-show rate is easy to measure
- Didn’t require staff training: Automated outreach happened in background
- Patients liked it: Reminders were helpful, not annoying
- Easy integration: SMS/call system plugged into scheduler
- Continuous improvement: Model retrained quarterly on new no-show data
The lesson: Sometimes the most impactful AI isn’t in the exam room. It’s in the operational workflows that make healthcare accessible.
Project 3: Radiology Report Structured Data Extraction (Invisible, Valuable)
What it does: Reads free-text radiology reports and extracts structured findings for downstream analytics and clinical decision support.
Why it’s unglamorous: This is data plumbing. It doesn’t diagnose disease. It doesn’t save lives directly. It just makes data usable.
Why nobody talks about it: Because “AI converts unstructured text to structured data” sounds like IT infrastructure work, not healthcare innovation.
Why it actually worked:
Problem it solved:
- Radiology reports are narrative text (example: “Small nodule noted in right upper lobe, measuring approximately 8mm”)
- EHR only captures structured impression codes (example: “Lung nodule”)
- Clinical decision support can’t trigger on text (example: “8mm nodule requires 3-month follow-up per Fleischner criteria”)
- Quality reporting requires manual chart review to find specific findings
Implementation:
- NLP model reads every radiology report
- Extracts: anatomical location, finding type, size measurements, comparison to prior studies
- Writes structured data to EHR discrete fields
- Triggers clinical decision support rules (follow-up recommendations, incidental finding alerts)
- Populates quality dashboards automatically
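To make the extraction target concrete, here is a toy, rule-based stand-in using the nodule example above. The production system used a trained NLP model; this regex only illustrates the kind of structured record (finding, location, size) that gets written back to discrete EHR fields.
```python
# Toy stand-in for the NLP extraction step: pull finding, location, and size
# out of narrative text. Real systems use a trained model, not a single regex.
import re

SIZE_PATTERN = re.compile(
    r"(?P<finding>nodule|mass|lesion)[^.]*?"
    r"(?P<location>right|left)\s+(?P<lobe>upper|middle|lower)\s+lobe[^.]*?"
    r"(?P<size_mm>\d+(?:\.\d+)?)\s*mm",
    re.IGNORECASE,
)

def extract_findings(report_text: str) -> list[dict]:
    findings = []
    for match in SIZE_PATTERN.finditer(report_text):
        findings.append({
            "finding": match.group("finding").lower(),
            "location": f"{match.group('location')} {match.group('lobe')} lobe".lower(),
            "size_mm": float(match.group("size_mm")),
        })
    return findings

report = "Small nodule noted in right upper lobe, measuring approximately 8mm."
print(extract_findings(report))
# [{'finding': 'nodule', 'location': 'right upper lobe', 'size_mm': 8.0}]
```
Once the finding exists as a discrete record, downstream rules (follow-up reminders, incidental-finding alerts, quality dashboards) can trigger on it instead of on free text.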
Metrics that mattered:
- Incidental findings with no documented follow-up: 23% → 4%
- Quality reporting: Manual chart review → automated extraction (eliminated 200 hours/month)
- Clinical decision support triggering: 15% of relevant cases → 94%
- Malpractice risk reduction: 18 “lost to follow-up” cases prevented in first year
Cost:
- Implementation: $150K (NLP model + EHR integration)
- Annual licensing: $75K
- Payback period: Effectively immediate; one avoided malpractice claim ($800K+ potential exposure) covered the cost
- 5-year ROI: Immeasurable (prevented patient harm)
The unexpected benefit:
They discovered they could track radiologist performance on incidental finding documentation. Some radiologists mentioned findings in narrative but didn’t code them. Others coded them but didn’t specify size/location clearly.
The AI extraction revealed documentation quality gaps they didn’t know existed. Led to targeted training and template improvements.
Why it worked:
- Solved a genuine problem: Incidental findings were being missed
- Invisible to end users: Radiologists kept writing reports normally
- Measurable safety impact: Fewer patients lost to follow-up
- Enabled other improvements: Quality reporting, decision support, performance feedback
- No workflow disruption: Extraction happened in background
The lesson: The most valuable AI might be the infrastructure layer nobody sees. If it enables 10 other improvements downstream, that’s transformative even if it’s “just” data extraction.
Project 4: Discharge Medication Reconciliation (Tedious, Critical)
What it does: Compares hospital discharge medications to patient’s pre-admission medications and flags discrepancies for pharmacist review.
Why it’s unglamorous: Medication reconciliation is nobody’s favorite task. It’s tedious, time-consuming, and gets skipped when units are busy.
Why nobody talks about it: Because “AI helps with med rec” doesn’t sound like innovation. It sounds like finally doing what we should have been doing all along.
Why it actually worked:
Problem it solved:
- Medication reconciliation required manually comparing two lists (admission meds vs discharge meds)
- High error rate: medications accidentally continued that should be stopped, or stopped that should continue
- Time-consuming: 15–20 minutes per discharge
- Often incomplete when nurses are overwhelmed
Implementation:
- AI reads admission medication list and discharge medication list
- Identifies: medications started, stopped, dose changes, unexplained discrepancies
- Flags for pharmacist review: “Patient was on lisinopril 20mg daily pre-admission, not on discharge list — intentional or oversight?”
- Generates reconciliation report with specific questions
- Pharmacist reviews and approves/corrects in 3–5 minutes
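The core comparison is simple enough to sketch. This is a simplified illustration that assumes medication names are already normalized; real reconciliation has to map brand and generic names (typically via RxNorm) and compare doses and frequencies far more carefully.
```python
# Minimal sketch of the discrepancy check: compare pre-admission and discharge
# medication lists and generate the questions a pharmacist reviews.
def reconcile(pre_admission: list[dict], discharge: list[dict]) -> list[str]:
    pre = {m["name"].lower(): m for m in pre_admission}
    post = {m["name"].lower(): m for m in discharge}
    flags = []

    for name, med in pre.items():
        if name not in post:
            flags.append(
                f"Patient was on {med['name']} {med['dose']} pre-admission, "
                "not on discharge list -- intentional or oversight?"
            )
        elif post[name]["dose"] != med["dose"]:
            flags.append(
                f"{med['name']} dose changed from {med['dose']} to "
                f"{post[name]['dose']} -- confirm intentional."
            )

    for name, med in post.items():
        if name not in pre:
            flags.append(f"{med['name']} newly started -- confirm indication documented.")

    return flags

# Example mirroring the cases described in this section.
for flag in reconcile(
    pre_admission=[{"name": "apixaban", "dose": "5mg BID"},
                   {"name": "lisinopril", "dose": "20mg daily"}],
    discharge=[{"name": "lisinopril", "dose": "20mg daily"},
               {"name": "azithromycin", "dose": "250mg daily"}],
):
    print(flag)
```
The output is the reconciliation report the pharmacist reviews in 3 to 5 minutes; the AI prepares the questions, the pharmacist makes the calls.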
Metrics that mattered:
- Medication discrepancies caught before patient discharge: 4% → 23%
- Pharmacist time per medication reconciliation: 18 min → 6 min
- Patient callbacks for medication confusion: 12% → 4%
- Preventable adverse drug events: 8 cases in 6 months → 1 case
Cost:
- Implementation: $90K (NLP + EHR integration)
- Annual licensing: $50K
- Payback period: 5 months (time savings + prevented ADEs)
- 5-year ROI: 1,200%
The clinical impact story:
82-year-old patient admitted for pneumonia. On 8 home medications. Hospitalists started antibiotics, continued most home meds, held anticoagulant during acute illness.
At discharge, anticoagulant wasn’t restarted (oversight).
AI flagged: “Patient on apixaban pre-admission for atrial fibrillation. Not on discharge list. Intentional hold or error?”
Pharmacist caught it. Anticoagulant restarted before patient left hospital.
Two weeks later, the patient had experienced no adverse events. Without the AI flag, the patient would have gone home on an incorrect medication list.
Why it worked:
- Prevented actual harm: Multiple medication errors caught before discharge
- Made tedious work faster: Pharmacists still did the review, just more efficiently
- Improved compliance: Easier process meant it actually got done every time
- Clear value proposition: Safety improvement + time savings
- Integrated into existing workflow: Pharmacists already reviewed med lists, AI just prepared the comparison
The lesson: Sometimes AI’s value isn’t doing something new. It’s making sure the important thing that should happen actually happens every time.
Project 5: Prior Authorization Auto-Routing (Bureaucratic, Essential)
What it does: Reads prior authorization requests and automatically routes them to the right person based on complexity, payer requirements, and clinical specialty.
Why it’s unglamorous: This is administrative AI. Not clinical. Not patient-facing. Just moving paperwork around faster.
Why nobody talks about it: Because “AI routes prior auths” isn’t a compelling conference talk. It’s infrastructure. It’s plumbing.
Why it actually worked:
Problem it solved:
- Prior authorization requests land in general queue
- Staff manually reviews each request to determine: which payer, which service, does it need medical director review, which specialist should handle it
- Mis-routing causes delays (sent to wrong reviewer, bounced back, re-routed)
- Complex cases sit in queue while staff handles simple ones first
- No prioritization based on urgency
Implementation:
- AI reads prior auth request: patient demographics, payer, service requested, clinical information
- Determines: payer-specific requirements, clinical complexity, required reviewer credentials
- Auto-routes to appropriate queue: simple pharmacy auths to pharmacy techs, complex surgical auths to medical directors, specialty-specific requests to relevant specialists
- Prioritizes by urgency: urgent/expedited requests flagged, routine requests processed in order
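Once the NLP step has pulled out the payer, service, and clinical details, the routing itself is ordinary branching logic. A minimal sketch follows, with queue names and complexity rules that are illustrative assumptions rather than the health system's actual configuration.
```python
# Minimal sketch of the routing logic. Assumes the payer, service category,
# and reviewer requirements have already been extracted from the request.
from dataclasses import dataclass

@dataclass
class PriorAuthRequest:
    payer: str
    service_category: str        # e.g. "pharmacy", "surgery", "imaging"
    needs_medical_director: bool
    specialty: str | None
    expedited: bool

def route(request: PriorAuthRequest) -> dict:
    if request.needs_medical_director:
        queue = "medical_director_review"
    elif request.service_category == "pharmacy":
        queue = "pharmacy_tech_queue"
    elif request.specialty:
        queue = f"{request.specialty}_specialist_queue"
    else:
        queue = "general_review_queue"

    return {
        "queue": queue,
        # Expedited requests jump the line; routine requests keep FIFO order.
        "priority": "urgent" if request.expedited else "routine",
    }

print(route(PriorAuthRequest(
    payer="ExamplePayer", service_category="surgery",
    needs_medical_director=True, specialty="orthopedics", expedited=True,
)))
```
Note what the code does not do: it never approves or denies anything. It only decides who sees the request first.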
Metrics that mattered:
- Prior auth turnaround time: 4.2 days → 1.8 days (57% reduction)
- Mis-routing rate: 18% → 3%
- Staff time on routing decisions: 25% of workload → 5%
- Patient satisfaction (able to schedule procedures sooner): Up 12 points
- Revenue impact: $1.2M annually (procedures scheduled faster, fewer cancellations)
Cost:
- Implementation: $100K (NLP + workflow automation)
- Annual licensing: $55K
- Payback period: 6 months
- 5-year ROI: 1,400%
The operational insight:
They tracked what happened after better routing:
- Simple requests approved same day (staff could focus without complex cases mixed in)
- Complex requests got to right expert immediately (no bouncing between reviewers)
- Urgent cases never got buried (automatic prioritization)
- Staff morale improved (less time on tedious routing, more time on clinical review)
The AI didn’t make the approval decision. It just made sure the right person saw it at the right time.
Why it worked:
- Clear, measurable improvement: Turnaround time cut in half
- Staff loved it: Eliminated tedious routing decisions
- Patients benefited: Got procedures scheduled faster
- Revenue positive: Faster approvals meant faster scheduling
- Easy to validate: Could audit routing decisions to ensure accuracy
The lesson: The least sexy AI — the stuff that optimizes bureaucratic workflows — often has the highest ROI because it eliminates pure waste.
What These Projects Have in Common
None of them are “AI-powered clinical breakthroughs.” All of them are operational automation applied to annoying workflows.
Common success factors:
1. They Solved Clear, Measurable Problems
Not “transform healthcare.” Specific problems:
- Registration takes too long
- Patients don’t show up
- Incidental findings get lost
- Medication errors at discharge
- Prior auths take too long
If you can’t measure the problem, you can’t measure if AI fixed it.
2. They Didn’t Require Behavior Change
Staff kept doing their jobs. AI just:
- Made tedious tasks faster (insurance verification, no-show prediction)
- Made difficult tasks easier (medication reconciliation)
- Made time-consuming tasks automatic (report extraction, prior auth routing)
The best AI is invisible. Workflow looks the same, just works better.
3. They Had Obvious Business Cases
- Insurance verification: Fewer denials = more revenue
- No-show prediction: More patients seen = more revenue
- Report extraction: Fewer malpractice claims = lower risk
- Med reconciliation: Fewer adverse events = lower cost + better outcomes
- Prior auth routing: Faster approvals = faster procedures = more revenue
CFO could understand ROI in 5 minutes.
4. They Weren’t Clinically Risky
None of these projects involved AI making clinical decisions:
- Not diagnosing disease
- Not recommending treatments
- Not interpreting imaging
- Not predicting patient outcomes
They automated administrative and operational tasks where error consequences were lower.
5. They Integrated Easily
All of them plugged into existing systems:
- Insurance verification: API calls to payers
- No-show prediction: SMS/call integration
- Report extraction: NLP on existing text
- Med reconciliation: Reading existing EHR data
- Prior auth routing: Workflow automation
No custom EHR development required. No complex data pipelines.
Why Nobody Talks About These
Reason 1: They’re Not Impressive
“AI checks insurance eligibility” doesn’t get you invited to keynote HIMSS.
“AI prevents patient harm through medication reconciliation” is important but not exciting.
Healthcare AI thought leaders want to talk about breakthrough innovations, not operational efficiency.
Reason 2: They’re Not Defensible
These aren’t proprietary innovations. Any vendor can build insurance verification AI. Any hospital can deploy no-show prediction.
There’s no moat. No competitive advantage. Just blocking and tackling.
Reason 3: They Don’t Make Good Stories
The narrative arc of “we automated tedious workflow and saved 15 minutes per patient” doesn’t inspire.
Healthcare wants transformation stories: “AI detected sepsis 6 hours earlier and saved a life.”
Even when that story isn’t true.
Reason 4: They Don’t Attract Venture Capital
VCs want billion-dollar markets and 10x innovations.
“We reduce no-shows” isn’t a venture-backable pitch.
“We’re going to replace radiologists with AI” is. (Even if it’s not true.)
Reason 5: They Don’t Win Awards
Innovation awards go to breakthrough ideas, not operational improvements.
Nobody’s winning “Most Innovative Healthcare AI” for prior auth routing.
But prior auth routing might be the most valuable AI in the building.
What This Means for Your AI Strategy
If you’re a healthcare CIO or CMIO evaluating AI projects:
Stop asking: “What’s the breakthrough innovation?”
Start asking: “What annoying workflow takes too long and shouldn’t exist?”
Stop looking for: AI that replaces clinical judgment
Start looking for: AI that eliminates administrative waste
Stop measuring: Clinical accuracy on benchmark datasets
Start measuring: Time saved, denials reduced, patients seen faster
Stop deploying: Sexy pilots that never scale
Start deploying: Boring automation that pays back in 6 months
The Projects You Should Actually Build
Based on these five successes, here are the healthcare AI projects most likely to work:
1. Insurance Prior Auth Status Tracking
- Auto-check status of submitted prior auths
- Alert staff when payer needs additional info
- Predict approval likelihood based on historical patterns
- ROI: Faster approvals, fewer missed requests
2. Appointment Reminder Optimization
- Send reminders at optimal time for each patient (not one-size-fits-all)
- Use channel patient prefers (SMS vs email vs call)
- Offer easy reschedule if patient can’t make it
- ROI: Reduced no-shows, better patient satisfaction
3. Lab Result Routing
- Flag critical lab results for immediate physician review
- Route routine results to appropriate care team member
- Identify results requiring follow-up action
- ROI: Faster critical result handling, fewer missed follow-ups
4. Supply Chain Demand Forecasting
- Predict supply needs based on scheduled procedures
- Alert when inventory is running low, before a stockout occurs
- Optimize ordering to reduce waste from expiration
- ROI: Lower supply costs, fewer procedure delays
5. Staff Scheduling Optimization
- Predict census and acuity to optimize staffing levels
- Reduce overstaffing (cost) and understaffing (burnout)
- Account for historical patterns, seasonality, local events
- ROI: Lower labor costs, better staff satisfaction
None of these are revolutionary. All of them would pay back in months.
The Question You Should Be Asking
Not “What AI should we deploy?”
But “What task do our staff hate doing that AI could automate?”
Talk to registration staff. They’ll tell you insurance verification is tedious.
Talk to schedulers. They’ll tell you no-shows wreck the schedule.
Talk to pharmacists. They’ll tell you med reconciliation takes too long.
Talk to the prior auth team. They’ll tell you routing decisions are mind-numbing.
The best AI projects aren’t the ones that sound innovative. They’re the ones that solve problems people actually have.
The Uncomfortable Truth
Healthcare AI fails when it tries to do something impressive.
Healthcare AI succeeds when it does something boring.
The projects that make conference keynotes are usually the ones that failed in production.
The projects that actually work are the ones nobody bothers to talk about.
If you want healthcare AI that delivers ROI, stop chasing breakthroughs.
Start automating the annoying workflows everyone hates.
That’s where the value is.
Building AI that actually works instead of AI that sounds impressive. Every Tuesday and Thursday in Builder’s Notes.
Deployed boring AI that actually delivered ROI? Drop a comment with what worked — let’s build the catalog of unglamorous wins.
Piyoosh Rai is Founder & CEO of The Algorithm, building native-AI platforms for healthcare, financial services, and government sectors. After watching dozens of impressive AI pilots fail while boring automation projects paid back in months, he writes about the unglamorous infrastructure work that actually delivers value.
Further Reading
For why healthcare AI projects actually fail:
Epic, Oracle, and Cerner Are Blocking Healthcare AI. Here’s the Proof.
For what happens when AI models drift over time:
The Builder’s Notes: Your “Cutting-Edge” Healthcare AI Is Already Out of Date