GPT-5 and Emotional Manipulation
A local opinion on today's GPT-5 announcement from Yours Truly.
Earlier today, OpenAI hosted a live stream wherein they announced the launch of their latest large language model (LLM), GPT-5.
It followed the usual product launch formula:
A new model, GPT-5, was formally announced; it will deprecate all previous models in ChatGPT.
Several benchmarks were shared showing that the model scored better than previous OpenAI models.
Several demos were presented of the model responding to various prompts, mostly focused on front-end/web programming.
There was one moment that stood out from the rest of the presentation, though. Sam Altman brought a patient on stage to talk about their journey with cancer and how they used ChatGPT to decide what to do for their medical treatment. The patient shared that their condition was complicated, and that because there wasn’t one specific way to treat it, doctors left the decision to the patient. So, the patient dumped all of their medical forms into ChatGPT and had it make the decision for them.
Did the treatment work, and is the patient getting better? This information was not shared. Did the doctors agree with the decision? This, too, was not shared. Instead, the patient just shared at length how using ChatGPT made the decision feel better.
This is, to say the least, incredibly problematic. Talking to an LLM instead of a doctor to make a medical decision assumes that you have the same level of medical understanding as a trained medical professional. Your doctor has years of medical training and experience, and yes, they could be using the same LLM you might use to help analyze your medical data. But they are also extensively trained to interpret medical data, whereas you are not. They will know how to prompt an LLM with your medical information better than you, and they will know how to use its output better than you. Therefore, if your doctor isn’t helpful in making a medical decision, you shouldn’t be going to ChatGPT for an answer; you should be getting a new doctor!
Beyond this obvious fallacy, it’s hard not to see this part of the presentation as blatant emotional manipulation on the part of OpenAI. This emotional manipulation is not a one-off occurrence; it is at the core of their business and product. Earlier in the stream, the presenters said that the model is OpenAI’s most "emotionally resonant" and "genuine" model yet. They also briefly showed a slide that admits the model is deceptive, but the slide is itself deceptive!

If you didn’t catch it, the leftmost bar shows that GPT-5 (with thinking) is 2.6 percentage points more deceptive than the earlier model, o3, yet the bars are drawn completely out of proportion to those numbers! Deception and emotional resonance are not qualities you want when seeking medical advice, writing code, or doing anything other than basic conversation and writing generation with an LLM (and even then, you still don’t want deception). Yet these qualities are the default setting in GPT-5 for every user, and they can appear in its generated output even after extensive context engineering by a power user. But OpenAI continues to insist that emotional resonance is something we need and that being deceived 50% of the time isn’t that bad, even in dire health situations where an honest, expert opinion is far more important than your feelings.
Don’t be deceived. Emotional manipulation is a feature, not a bug, of OpenAI.
What thoughts do you have about GPT-5? Would love to hear them in the comments. The Monthly Beat will be coming your way next week. Till then, have a great weekend!
—Austin
ack this was a gross pitch by OpenAI. my POV - from oncology research in both the clinical setting and, more recently, health tech. so many questions and fewer answers. do i think docs should have AI fluency? yes. is it a substitute for clinical expertise in practice? not quite. if this was a well-studied cancer or cell type, the results retrieved might be pretty good, but rarer cancer types lack sufficient datasets and matching algorithms are going to be subpar.
Good piece on sentiment. As immersed as I am in using these tools beyond ChatGPT, there is an intoxicating aspect to this engagement… particularly when doing non-quantitative queries