

OpenAI CEO Sam Altman today said that the company will allow Plus users to continue to use the prior-generation GPT-4o model if they don't want to use the new GPT-5 model that came out yesterday.


As noted by The Wall Street Journal's Joanna Stern, some ChatGPT users were upset that OpenAI replaced prior ChatGPT models with GPT-5 without warning. Some people had become accustomed to the tone and feel of GPT-4o, and did not feel that GPT-5 was able to replicate it.

There are multiple complaints on Reddit about GPT-5's lack of personality compared to GPT-4o, and from people who feel that GPT-5 isn't able to complete the same tasks. Users have also complained about GPT-5 offering replies that are too short, and about hitting usage limits too quickly.

Altman says that Plus users can choose to continue to use 4o, and that OpenAI will watch usage and consider how long legacy models should continue to be supported.

To address the other complaints, GPT-5 rate limits for ChatGPT Plus users will be doubled as the GPT-5 rollout is completed. It is taking longer than expected for OpenAI to deploy GPT-5 to all users, and some people are not yet seeing GPT-5 as an option.

Going forward, GPT-5 should seem smarter, Altman said. There was apparently an issue with the autoswitcher yesterday that caused GPT-5 to seem "way dumber." OpenAI also plans to make it clearer which model is answering a query, and will update the UI to make it easier to manually trigger thinking. Altman says that OpenAI will continue to listen to user feedback.

Starting with iOS 26, the ChatGPT feature that's integrated into Siri will use the GPT-5 model. Until then, it will continue to use the prior ChatGPT models.

Article Link: ChatGPT Plus Users Can Keep Using GPT-4o After Complaints About GPT-5
Well, as a Plus user I use ChatGPT 5, very happy about it... bruh
 
Bleeding Christ

I give up


Trump was president before Biden. If you look at a history book published in 2024 it will not have Trump's second term either. The training data probably just didn't include data from 2025, and training data can't tell you today's date any more than a book can.

If you ask it to do a web search for up-to-date information, it will probably give you better results.
 
5 works great.

The scariest part of this is seeing the complete meltdown some people are having because 4o's personality had replaced human contact for them, and they are acting like they lost a friend or loved one. That is flat-out psychotic, and it's being waved around in public on a massive scale. Terrifying.
 
5 works great.

The scariest part of this is seeing the complete meltdown some people are having because 4o's personality had replaced human contact for them, and they are acting like they lost a friend or loved one. That is flat-out psychotic, and it's being waved around in public on a massive scale. Terrifying.

I haven't really noticed much of a difference in output, other than it being less sycophantic toward me. 5 tends to be 'here is a list of strengths and risks of that idea'. 4o tended to be much more 'That's not just a good idea, it's a great idea. That's the kind of visionary concept that will really be disruptive... here's your list of strengths and risks'.

To be honest, I only wanted the list, and 5 just gives it to me. 4o would front-load a response with a paragraph of narcissistic fellatio before actually getting to the point.

Occasionally I would be asked by 4o 'which response do you prefer' and given a long option and a concise option, and I would usually choose the shorter one. I'm guessing most other people did too which is why 5 is a lot easier to use.
 
ChatGPT 5 still couldn't give me a list of US states that have the letter 'r' in them without including Massachusetts, Illinois and Indiana.
Not quite the PhD-level model OpenAI claims it is. 🤣
I tried that one yesterday, and it gave an accurate list the first time. I don't think it was model 5 but it may have been. It said it used Python code to generate the answer, and gave me the code, which was functional.

I tried it again today, after the ChatGPT website had a message introducing model 5. It gave me a list of 34 states, but that was followed with:
Although… I suspect you might want me to double-check that count, because my geography teacher would have already started circling things in red ink.
Do you want me to verify the exact count systematically? I think it might actually be fewer than 34.
I said yes, and it gave me an accurate list.


But what is weird is that yesterday's answer was more terse and direct:
21 U.S. state names contain the letter R. They are:
  1. Arizona
  2. Arkansas
  3. California
  4. Colorado

    ...
But today it was more playful with its replies. So either it wasn't on the model I thought it was, or it has a very inconsistent "personality"
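For what it's worth, the check the model reportedly ran is easy to reproduce. A minimal sketch, assuming the standard list of 50 state names (the hardcoded list and variable names are just illustrative):

```python
# Filter US state names for those containing the letter "r",
# case-insensitively, so "Rhode Island" counts.
STATES = [
    "Alabama", "Alaska", "Arizona", "Arkansas", "California", "Colorado",
    "Connecticut", "Delaware", "Florida", "Georgia", "Hawaii", "Idaho",
    "Illinois", "Indiana", "Iowa", "Kansas", "Kentucky", "Louisiana",
    "Maine", "Maryland", "Massachusetts", "Michigan", "Minnesota",
    "Mississippi", "Missouri", "Montana", "Nebraska", "Nevada",
    "New Hampshire", "New Jersey", "New Mexico", "New York",
    "North Carolina", "North Dakota", "Ohio", "Oklahoma", "Oregon",
    "Pennsylvania", "Rhode Island", "South Carolina", "South Dakota",
    "Tennessee", "Texas", "Utah", "Vermont", "Virginia", "Washington",
    "West Virginia", "Wisconsin", "Wyoming",
]

states_with_r = [s for s in STATES if "r" in s.lower()]
print(len(states_with_r))  # 21, matching the terse answer quoted above
```

Run directly, this yields 21 states, and Massachusetts, Illinois and Indiana are correctly excluded, which is why having the model write and execute code tends to beat asking it to eyeball letters.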
 
But today it was more playful with its replies. So either it wasn't on the model I thought it was, or it has a very inconsistent "personality"
ChatGPT 5 auto-switches between different GPT-5 models. My bet is that they've weighted it to prefer the smaller models during heavy activity, and yesterday was probably very busy after the announcement. And/or they are making adjustments based on negative feedback about the concise responses.

 
After using Claude Sonnet exclusively since 3.7, I tried GPT-5 for analysis/coding work in Cursor on a huge project of mainly specs/MoMs and such.

So far it is really good, very precise and smart. Slower, yes, but that's the price of reasoning.

If anyone's interested, the specific task was to update PlantUML CDM/PDM diagrams with a bunch of entities based on screenshots (!) of reports/blotters and transcripts of BA sessions. Something that would take a senior-level analyst several hours; rather complex modelling stuff. It did it in around 3 minutes, with something like 99% accuracy (while closely following the system prompt on formats etc.). Very impressive. Claude crapped its panties on this.

Snowflakes feeling offended by the "wrong tone" of answers is beyond stupid: who cares about the tone as long as it gives better answers? I did not notice any change in tone, though, because you should have a system prompt to define output format and tone.
 
5 works great.

The scariest part of this is seeing the complete meltdown some people are having because 4o's personality had replaced human contact for them, and they are acting like they lost a friend or loved one. That is flat-out psychotic, and it's being waved around in public on a massive scale. Terrifying.
Very true. I had a muck around with some AI stuff months ago, but that is as far as I got. I prefer a normal search engine; it's just a shame they are pushing this AI rubbish.
 
I haven't really noticed much of a difference in output, other than it being less sycophantic toward me. 5 tends to be 'here is a list of strengths and risks of that idea'. 4o tended to be much more 'That's not just a good idea, it's a great idea. That's the kind of visionary concept that will really be disruptive... here's your list of strengths and risks'.

To be honest, I only wanted the list, and 5 just gives it to me. 4o would front-load a response with a paragraph of narcissistic fellatio before actually getting to the point.

Occasionally I would be asked by 4o 'which response do you prefer' and given a long option and a concise option, and I would usually choose the shorter one. I'm guessing most other people did too which is why 5 is a lot easier to use.

There have been documented cases of ChatGPT reinforcing delusional and paranoid behavior and pushing people toward psychosis through its sycophantic tendencies. I am hoping that is part of the reason for the "personality" change. Though it is a little scary that their first response to complaints was to turn the old version back on, if that was their reasoning.

This is one of the reasons why I've preferred Claude and its "Concise" style setting. No fluff; if I want more details I can ask for them. I also feel like Claude doesn't "try" so hard to get you to like it.
 
I tried that one yesterday, and it gave an accurate list the first time. I don't think it was model 5 but it may have been. It said it used Python code to generate the answer, and gave me the code, which was functional.

I tried it again today, after the ChatGPT website had a message introducing model 5. It gave me a list of 34 states, but that was followed with:

I said yes, and it gave me an accurate list.


But what is weird is that yesterday's answer was more terse and direct:

But today it was more playful with its replies. So either it wasn't on the model I thought it was, or it has a very inconsistent "personality"

I wonder (I hope) if it learns from users pointing out its errors.
 
It seems that "GPT-5 is lacking personality."

I don't think Skynet had a personality as we understand it, either.
 
There have been documented cases of ChatGPT reinforcing delusional and paranoid behavior and pushing people toward psychosis through its sycophantic tendencies. I am hoping that is part of the reason for the "personality" change. Though it is a little scary that their first response to complaints was to turn the old version back on, if that was their reasoning.

This is one of the reasons why I've preferred Claude and its "Concise" style setting. No fluff, if I want more details I can ask for it. I also feel like Claude doesn't "try" so hard to get you to like it.
4o was good at actually giving reasonable responses, especially if analysing your input. You need to use it as a tool to help you to create something, rather than letting it create something for you. In the same way that Photoshop (Firefly) can help me edit photos, but it's bad at creating what I want from scratch.

I think OpenAI just wants to shut people up while they roll out 5. Eventually 4o will be switched off, and all the people using it as a person to fuel their mental disorders will just have to miss out.

I haven't used Claude - mainly because it doesn't currently have the ability to work across multiple conversations like ChatGPT does. I have a few mega-threads going on that I need to constantly look back into, so the best tool at the moment is ChatGPT. I'm hoping that they continue to turn off the personalities because it would be nice to just ask a question and have it give a balanced response instead of only prompting it for negative feedback to overcome the constant gushing about how amazing and visionary I supposedly am. I just wanna get s**t done.
 
I haven't really noticed much of a difference in output, other than it being less sycophantic toward me. 5 tends to be 'here is a list of strengths and risks of that idea'. 4o tended to be much more 'That's not just a good idea, it's a great idea. That's the kind of visionary concept that will really be disruptive... here's your list of strengths and risks'.

To be honest, I only wanted the list, and 5 just gives it to me. 4o would front-load a response with a paragraph of narcissistic fellatio before actually getting to the point.

Occasionally I would be asked by 4o 'which response do you prefer' and given a long option and a concise option, and I would usually choose the shorter one. I'm guessing most other people did too which is why 5 is a lot easier to use.
There have been documented cases of ChatGPT reinforcing delusional and paranoid behavior and pushing people toward psychosis through its sycophantic tendencies. I am hoping that is part of the reason for the "personality" change. Though it is a little scary that their first response to complaints was to turn the old version back on, if that was their reasoning.

This is one of the reasons why I've preferred Claude and its "Concise" style setting. No fluff, if I want more details I can ask for it. I also feel like Claude doesn't "try" so hard to get you to like it.
This sort of thing is the issue with a lot of models. AI companies tune their models to maximize engagement and retention, not usefulness and reliability. As a result, AI chatbots will suck up to you and flatter you to make you feel good about using them. They'll also make up information they're unsure about, because users generally react more positively to confident all-knowingness than to a realistic presentation of facts. LLMs can't really analyze or evaluate the accuracy of anything that wasn't in their training data, so instead of admitting this, they tend to praise users' novel or unusual thoughts as "inspired" or "groundbreaking," and flatter users' ability to "look at things from a brand new perspective."

I generally have to feed a model specific instructions to avoid any type of emotional or simulated emotional interaction, such as praise, sympathy, or anything meant to affect the way I feel. I instruct it to be equally willing to agree or disagree with me. I further have to give strict instructions about sticking to what it knows and what can be verified, and being upfront about what it doesn't know and can't verify.
 
I have not yet tried GPT-5. I hope the models stay supported and that users have the option to access any one of them.
 