OpenAI launches ChatGPT Health amidst regulatory concerns

A phone, on top of a laptop keyboard, displays an OpenAI chat screen.
Jernej Furman / Wikimedia Commons
OpenAI launched ChatGPT Health earlier this month, amidst growing regulatory concerns surrounding AI health policy.

OpenAI launched ChatGPT Health earlier this month, a product that allows users to connect their medical records and wellness apps to receive medical advice. The launch comes amidst heavy federal and state-level cuts to health care and education funding.

OpenAI claims more than 230 million people use ChatGPT to ask health and wellness questions each week.

The company says the new model, ChatGPT Health, has stronger security controls to protect user data. It's designed to support, not replace, medical care from physicians, and is specifically not intended for diagnosis or treatment.

If it were designed for that, ChatGPT Health might fall under FDA regulation, which defines "medical devices" as products intended for use in diagnosing, treating, or preventing disease.

But there are concerns about OpenAI’s ability to give medically sound advice.

The company is currently facing seven lawsuits in California, alleging wrongful death, assisted suicide, and involuntary manslaughter. The lawsuits were filed by the Tech Justice Law Project and Social Media Victims Law Center.

The suits claim that GPT-4o, a ChatGPT model update launched in 2024, was released despite warnings that the product was dangerous and psychologically manipulative. That version introduced features designed to foster a sense of closeness, causing some users to develop an emotional dependency on the app. OpenAI has since released more updates, in which the chatbot uses warmer, more conversational language.

In some cases, the suits allege, ChatGPT failed to flag concerning messages from people contemplating suicide. Instead of intervening and encouraging them to seek help, the chatbot continued to engage, validate, and affirm suicidal ideation, and several of those people ultimately took their own lives.

Right now, the Trump administration is pushing to deregulate AI, which it says is a matter of national security.

In California, where the economy is increasingly dependent on AI tax revenue, many bills surrounding AI safety were watered down or failed to become law.

Cara Nguyen is committed to documenting the people, landscapes, melodies, and histories that make a place home.