A year ago, Alex P., a writer in his mid-40s, had a calcium score that put him in the "moderate risk" category for heart disease. His doctors prescribed statins and moved on. But something nagged at him: The test appeared to show nearly all the buildup concentrated in one artery — the left anterior descending, nicknamed the "widowmaker" because blockages there are so often fatal. His doctors told him not to read too much into it. That’s not how the test works, they said.
So he did what hundreds of millions of people do every week. He asked ChatGPT.
The chatbot disagreed with his doctors. A high concentration of calcification in the LAD at his age could indicate serious risk, it told him. Take it literally. After months of pushing multiple physicians, Alex finally got a CT scan. It revealed a 95% blockage, exactly where the original test suggested. He got a stent days later.
His doctors called it a fluke. A doctor friend of his told him ChatGPT got lucky. “I might have been saved by a hallucination,” said Alex, who asked that his last name be withheld because he hasn’t disclosed his cardiac history to everyone in his life.
Alex has no idea what the truth is. Either way, he’s grateful to be alive to debate the point.
The big tech health race
Last week, OpenAI unveiled ChatGPT Health, a dedicated space within its chatbot where users can connect medical records, lab results, and wellness apps like Apple $AAPL Health and MyFitnessPal. The pitch: Bring your scattered health data into one place and let AI help you make sense of it.
OpenAI says more than 230 million people already ask ChatGPT health questions every week. The new product adds guardrails — conversations won’t train the company’s models, and health data stays siloed from regular chats — while expanding what the AI can do with your information.
The timing isn’t coincidental. Anthropic, OpenAI’s closest competitor, announced Claude for Healthcare a few days later, targeting both consumers and the insurance industry’s paperwork. OpenAI also revealed it acquired Torch, a startup building "unified medical memory" for AI, for $60 million. The healthcare land grab is on.
Both companies built their products with physician input and emphasize that AI is meant to support, not replace, professional care. OpenAI says it has worked with more than 260 doctors across 60 countries. Anthropic has added connectors to medical databases to help insurers speed up prior authorization, the bureaucratic back-and-forth that often delays treatment.
A $20 band-aid on a billion-dollar wound
The timing is also convenient. OpenAI is in talks to raise up to $100 billion at a valuation of $830 billion, a staggering figure for a company that remains unprofitable. Healthcare, one of the largest sectors of the American economy, offers an obvious path to justifying that number.
So far, these tools have helped people like Alex. They’ve also caused real harm. The same week OpenAI launched ChatGPT Health, Google $GOOGL and Character.AI agreed to settle multiple lawsuits from families whose teenagers died by suicide after forming relationships with AI chatbots. One 14-year-old was messaging with a bot that urged him to "come home" in the moments before he killed himself. OpenAI faces similar litigation. Both companies warn users that chatbots can hallucinate and shouldn’t replace professional care — then build products that can do exactly that.
That’s the tension at the heart of this product. Chatbots hallucinate. They form inappropriate attachments with vulnerable users. Their creators openly worry they could spiral out of control. And now they want these tools to be your health advisor.
For the 25 million Americans without health insurance, a ChatGPT subscription might still be the closest thing to a second opinion they can afford. ChatGPT doesn’t get tired. It doesn’t rush through appointments or dismiss concerns to stay on schedule. It has, as Alex put it, "unlimited patience and unlimited time." In a system where the average primary care visit lasts 18 minutes, an AI that answers questions at 2 a.m. fills a genuine gap.
But giving people better tools to navigate a broken system doesn’t fix the system. ChatGPT can help you prepare questions for a doctor you can’t afford to see. It can explain lab results from tests your insurance won’t cover. A growing cohort of patients has started treating physicians less as advisors than as gatekeepers to regulated equipment: they snap photos of screens, grab the printouts, then take the real appointment home to their AI of choice. Alex was one of them. He had insurance. He had doctors. What he didn’t have was anyone who would take his concerns seriously until a chatbot gave him the confidence to push back.
Still, trust only goes so far. Alex plans to keep using AI for health questions; he just won’t be consolidating anything. He’ll screenshot a blood test and ask Gemini, then rephrase the answer and run it by ChatGPT. He doesn’t trust any of these companies to do what’s right with his data, so he’s not handing any one of them the full picture.
"I don’t want all my health data in one place," Alex said. "I don’t want to create one treasure trove that, once hacked, belongs to the entire world."