OpenAI has introduced ‘ChatGPT Health’, a new feature that allows users to upload and analyse medical documents, interpret data from wearable devices, and seek diet, exercise, and even GLP-1 advice through integrations with wellness apps.
“ChatGPT can help you understand recent test results, prepare for appointments with your doctor, get advice on how to approach your diet and workout routine, or understand the trade-offs of different insurance options based on your healthcare patterns,” OpenAI said in a blog post announcing the launch.
OpenAI has launched a waitlist for the new feature, which will initially be available only to a small group of early users. Although the company says it plans to expand ChatGPT Health to all its users in the coming weeks, only US users will be able to upload medical records and sync with certain apps, due to European data privacy laws. OpenAI also cautioned that the chatbot is “not intended for diagnosis or treatment” or to replace medical care.
It’s the latest major move by a US tech company into the lucrative health market, as AI’s rapidly evolving ability to interpret huge amounts of personal health data moves in step with consumers’ ever-growing wellness obsession. But experts caution that the launch raises privacy and safety concerns around sharing sensitive health data and receiving health advice, and could undercut the selling points of traditional wearable health trackers.
Privacy concerns
OpenAI says it plans to keep the new health feature within its own encrypted space on ChatGPT, separate from users’ non-health chats, and to add enhanced privacy features. Users can also delete their ChatGPT Health conversations and uploaded files at any time. But the platform may use context from users’ non-Health chats, like “a recent move or lifestyle change”, when it deems it “helpful” for a Health chat, OpenAI said.
OpenAI says it will not use conversations within ChatGPT Health to train its foundation models, a protection that does not extend to conversations in its existing chatbot. The company says that if users start a health-related conversation in ChatGPT, it will suggest moving it to the Health space so the additional privacy protections apply.
By stopping short of diagnosing conditions or prescribing treatment, ChatGPT Health avoids being classified as a medical device in the US, along with the regulations that come with that status. Like some wearable trackers, the new feature operates in a regulatory grey zone, escaping more intense oversight by positioning itself as an information-focused wellness product rather than a device designed to diagnose or treat disease.
Regulators in the US and Europe are split on how to approach AI. On Wednesday, the same day that OpenAI launched ChatGPT Health, the US Food and Drug Administration (FDA) announced it will ease the regulation of AI-enabled digital health products to promote their adoption.
But what’s viable in the US may require material changes to launch into the EU and UK markets — AI tools used for medical purposes may qualify as medical devices under UK law, which would mean OpenAI has to register with the UK’s medicines regulator and potentially undertake further assessments before launching in the UK, legal experts say. If the chatbot is classified as a medical device, the EU’s AI Act will treat it as a high-risk AI system, triggering additional obligations.
“It’s no surprise that the marketing is plastered with disclaimers about the tool being designed to support, not replace medical care; and that it’s not intended for diagnosis or treatment,” says Ali Vaziri, partner at law firm Lewis Silkin. “As well as trying to influence the extent to which it is considered a regulated medical device, these disclaimers are vital given accuracy issues and the potential for AI to hallucinate — when it comes to health, errors in AI outputs have the potential to quite literally be a matter of life and death.”
And where more data is shared, Vaziri warns that, from a legal perspective, this always means more risk. “It introduces more opportunities for data to be compromised by expanding the attack surface. It can also mean less control, with unforeseen secondary uses of data becoming more likely,” he adds.
A win for wellness brands, a challenge for wearables
At launch, ChatGPT Health will allow users to connect with Apple Health on iOS to sync their health and fitness data; the Function Health app for lab test insights and nutrition plans; MyFitnessPal for nutrition advice and recipes; WeightWatchers for personalised meal ideas for GLP-1 users; AllTrails and Peloton for workout suggestions; and Instacart, which will convert the meal plans the chatbot suggests into shoppable lists.
This Instacart integration enables health and wellness brands to drive more sales through AI chat, against the backdrop of OpenAI’s wider plans for an integrated checkout, where users can complete purchases of personalised product recommendations within their chats.
“You could argue this is the starting gun for truly agentic commerce,” says Max Sinclair, CEO of Azoma AI, who points to the trust ChatGPT could earn among consumers seeking answers about their most personal issues. By this logic, if ChatGPT Health becomes consumers’ most trusted source for health guidance, equipped with their blood test, diet, sleep and activity data, its recommendations for health supplements and wellness products could carry more weight than sponsored ads in search, or even influencer recommendations.
In the near term, this means wellness brands should prioritise AI optimisation to up their chances of becoming the chatbot’s recommended product, healthcare marketing experts say. “Some of the most impactful work here is unglamorous, such as metadata, content consistency, and structured libraries,” says Julie O’Donnell, global head of digital at healthcare consultancy Inizio Evoke. “Being honest, these are areas where many health and wellness brands have historically been weak. In an AI-driven landscape, that has to change. You don’t need to be the loudest or the biggest spender — brands that work smarter and build credibility systematically can make meaningful gains.”
Experts are divided on what this shift to AI-driven health means for traditional trackers, however. Some warn that while the first generation of health-tracking devices, such as Whoop and the Oura ring, brought consumers data, wearables and their apps could be reduced to mere sensor-data suppliers if ChatGPT Health becomes the go-to platform for interpreting health data and delivering personalised recommendations.
But US-based health-tracking ring Oura, which offers wearers an AI-powered “Oura Advisor” within its app, tells Vogue Business that it views the new feature as validation of user demand and a tool to “complement” its own Advisor.
“Oura Advisor is unique in that it’s built on each member’s continuous biometric data and long‑term baselines inside the Oura app, so it can turn real patterns and behaviors into contextual, actionable recommendations with clear guardrails around clinical adjacency,” a spokesperson for the company said. “In that sense, general‑purpose AI tools and Oura Advisor are complementary — one offers broad information, while the other translates your personal data, measured directly from a wearable, into tailored guidance.”
And if consumers increasingly feed their wearables data into general-purpose AI models, experts say it places a greater onus on wearables brands to improve the quality and provenance of the data they collect.
“Many wearable metrics are proxies or estimates of underlying physiological processes, and as conversational interfaces take center stage, the accuracy and stability of those inputs become critical,” says Billie Whitehouse, CEO of Wearable X. “Generalized models like GPT can provide broad reasoning and synthesis, but they do not always replace the domain-specific insights that some wearable platforms have spent years refining.”
Accuracy concerns
OpenAI says it’s worked with more than 260 clinical physicians over the past two years to help shape ChatGPT Health’s responses. While purpose-built AI software already used in clinical settings is regulated and tested for healthcare use, general-purpose chatbots are not required to meet the same standards. Large language models (LLMs) like ChatGPT draw on thousands of internet sources, some more reliable than others.
Doctors, on the other hand, consult the latest peer-reviewed, empirical research, which often resides behind paywalls in medical journals. They’re also trained to be empathetic — something AI chatbots have historically struggled with, and a crucial factor in managing the mental health risks associated with obsessively tracking our health.
Healthcare experts, like those in other areas of AI development, say it’s crucial that human practitioners stay in the loop. While ChatGPT Health could help save doctors’ time, it’s vital that a trained, licensed physician signs off on the output, they say.
“Misleading statements about medicine have been a concern for a while. On one hand, ChatGPT could be more accurate than social media. On the other hand, patients should absolutely not self-diagnose because a vast amount of nuance is incorporated by physicians to calibrate a diagnosis,” says Dr. Charlie Cox, a consultant at Reborne Longevity. “It’s important that there are very clear guardrails in place — particularly for more vulnerable individuals — to reduce risk of misdiagnosis, but on balance the net benefit should be positive.”