ChatGPT Health lets you connect medical records to an AI that makes things up

Ars Technica
Published Jan 8, 2026

New feature will allow users to link medical and wellness records to AI chatbot.

On Wednesday, OpenAI announced ChatGPT Health, a dedicated section of the AI chatbot built for “health and wellness conversations” and intended to connect a user’s health and medical records to the chatbot securely.

But mixing generative AI technology like ChatGPT with health advice or analysis of any kind has been a controversial idea since the launch of the service in late 2022. Just days ago, SFGate published an investigation detailing how a 19-year-old California man died of a drug overdose in May 2025 after 18 months of seeking recreational drug advice from ChatGPT. It’s a telling example of what can go wrong when chatbot guardrails fail during long conversations and people follow erroneous AI guidance.

Despite the known accuracy issues with AI chatbots, OpenAI’s new Health feature will allow users to connect medical records and wellness apps like Apple Health and MyFitnessPal so that ChatGPT can provide personalized health responses for tasks like summarizing care instructions, preparing for doctor appointments, and understanding test results.

OpenAI says more than 230 million people ask health questions on ChatGPT each week, making it one of the chatbot’s most common use cases. The company worked with more than 260 physicians over two years to develop ChatGPT Health and says conversations in the new section will not be used to train its AI models.

“ChatGPT Health is another step toward turning ChatGPT into a personal super-assistant that can support you with information and tools to achieve your goals across any part of your life,” wrote Fidji Simo, OpenAI’s CEO of applications, in a blog post.

But despite OpenAI’s talk of supporting health goals, the company’s terms of service directly state that ChatGPT and other OpenAI services “are not intended for use in the diagnosis or treatment of any health condition.”

It appears that policy is not changing with ChatGPT Health. OpenAI writes in its announcement, “Health is designed to support, not replace, medical care. It is not intended for diagnosis or treatment. Instead, it helps you navigate everyday questions and understand patterns over time—not just moments of illness—so you can feel more informed and prepared for important medical conversations.”

The SFGate report on Sam Nelson’s death illustrates why maintaining that disclaimer matters legally. According to chat logs reviewed by the publication, Nelson first asked ChatGPT about recreational drug dosing in November 2023. The AI assistant initially refused and directed him to healthcare professionals. But over 18 months of conversations, ChatGPT’s responses reportedly shifted. Eventually, the chatbot told him things like “Hell yes—let’s go full trippy mode” and recommended he double his cough syrup intake. His mother found him dead from an overdose the day after he began addiction treatment.

While Nelson’s case did not involve analysis of doctor-sanctioned healthcare instructions of the kind ChatGPT Health will link to, it is not unique: many people have been misled by chatbots that provide inaccurate information or encourage dangerous behavior, as we have covered in the past.

That’s because AI language models can easily confabulate, generating plausible but false information in a way that makes it very difficult for some users to distinguish fact from fiction. The AI models that power services like ChatGPT use statistical relationships in training data (like the text from books, YouTube transcripts, and websites) to produce plausible responses rather than necessarily accurate ones. Moreover, ChatGPT’s outputs can vary widely depending on who is using the chatbot and what has previously taken place in the user’s chat history (including notes about previous chats).

Then there’s the issue of unreliable training data, which companies like OpenAI use to create the models. Fundamentally, all major AI language models rely on information pulled from sources collected online. Rob Eleveld of the AI regulatory watchdog Transparency Coalition told SFGate: “There is zero chance, zero chance, that the foundational models can ever be safe on this stuff. Because what they sucked in there is everything on the Internet. And everything on the Internet is all sorts of completely false crap.”

So when summarizing a medical report or analyzing a test result, ChatGPT could make a mistake that the user, not being trained in medicine, would not be able to spot.

Even with these hazards, the quality of health-related chats with the AI bot will likely vary dramatically between users because ChatGPT’s output partially mirrors the style and tone of what users feed into the system. Anecdotally, some users say they find ChatGPT useful for medical issues, but success among a few users who know how to navigate the bot’s hazards does not necessarily mean that relying on a chatbot for medical analysis is wise for the general public. That’s doubly true in the absence of government regulation and safety testing.

In a statement to SFGate, OpenAI spokesperson Kayla Wood called Nelson’s death “a heartbreaking situation” and said the company’s models are designed to respond to sensitive questions “with care.”

ChatGPT Health is rolling out to a waitlist of US users, with broader access planned in the coming weeks.