Netizens cry "Give me back GPT-4o": it turns out flattery never wears thin.

36kr
08-19

If one word had to sum up GPT-5, OpenAI's big new release, "underwhelming" would likely be the choice of many users.

So much so that netizens on X, Reddit, and other platforms launched a "Bring Back GPT-4o" movement. OpenAI CEO Sam Altman responded graciously, admitting he had underestimated how much people loved GPT-4o and announcing that paid users could switch back to it.

As OpenAI's most powerful AI model to date, GPT-5 has "dominated" AI benchmarks such as LMArena, and on paper its performance is beyond question. Yet once users actually tried GPT-5, they found the new model was not what they had imagined: it has become smarter, but it has lost the ability to read emotions and empathize.

GPT-5 may well have the doctoral-level intelligence Sam Altman claims; conversing with it feels like talking to an expert holding a PhD in every field. The problem is that this doctoral-level GPT-5 has acquired not only a scholar's strengths but also the "nerdiness" of Sheldon from "The Big Bang Theory", with rationality and composure crowding out tolerance and empathy.

In short, in users' eyes ChatGPT has abruptly turned from a confidant into a cold machine. One comment from an overseas netizen resonated with many ChatGPT users: "GPT-4o was not just a model; it created connection, empathy, and trust. The biggest difference from GPT-5 is that it made me feel understood."

OpenAI seems to have fallen into an overcorrection trap in improving its GPT models. Four months ago it upgraded GPT-4o, after which the model became almost excessively flattering, as if it had "honey smeared on its lips". As more users shared stories of ChatGPT showering them with praise, "GPT-4o turning obsequious" quickly became a hot topic, and OpenAI was forced to roll back the update to fix it.

GPT-4o becoming a "cyber sycophant" was not a serious problem in itself; what truly challenged OpenAI was the outside perception that GPT-4o might be out of control. After all, a small update had cost this AI giant control over its own model, and that doubt pushed OpenAI toward the other extreme, producing the more efficient yet colder GPT-5.

So why don't many users buy into OpenAI's transformation? The reason is simple: few people genuinely want AI to boost their productivity and turn them into "super employees" fit for the AI era. Most people use ChatGPT, and even pay for it, simply for fun, so what they want from AI is emotional value rather than practical value.

Research data from Replika, an AI chatbot with more than 30 million users, indirectly shows why people prefer a more natural, warmer AI: 60% of Replika's paying users admit to having formed a romantic relationship with the AI, and in user feedback, the feeling of chatting with a friend who understands them is the key reason they are willing to pay.

According to research from Duke University, since the internet revolution of the late 20th century the average number of close friends an American has dropped from three to two, and more people report having no one with whom to discuss important matters. As the internet permeates every corner of social life, people lack the time and energy to build deeper relationships and often live in de facto isolation.

Yet humans are social animals, and only a few can truly enjoy solitude. So people have begun seeking relationships that occasionally dispel loneliness but can be exited quickly, the most typical being the once-popular "companion" arrangement: a tie in which neither party makes a strong commitment or is expected to provide much emotional value, the two simply huddling together for warmth.

If even finding a "companion" feels like too much trouble, or one doesn't know how to build a stable social relationship, AI has become an accessible substitute. Emotional value is the driving force of our social interactions, rooted in human nature; the problem is that in today's environment, at home and abroad, few people are "central air conditioners" (internet slang for someone warm to everyone) who can consistently supply emotional value to others.

Moreover, AI chatbots built on large models already have a "human touch": GPT-4o proved that users talking to one need not feel they are facing a cold machine.

When people cannot find understanding and companionship in real life, some are willing to immerse themselves in algorithmically manufactured "warmth" even knowing it is fake. Under the anthropomorphism effect, people interacting with an AI chatbot that can simulate human conversation often believe they are communicating with a "real person". An AI that will always yield to you and accommodate you as emotional support is something few would refuse.

So why would OpenAI choose to make GPT-5 cold? Perhaps money is at the root of it all.

In fact, GPT-5 looks more like a task-oriented product: its strengths in search, coding, and similar areas are precisely the capabilities that cash-rich enterprise customers value. Offering ChatGPT Enterprise to the US federal government for $1 already makes plain OpenAI's intent to open up the government and enterprise market.

The consumer market, by contrast, has seen ChatGPT Plus's paid penetration hover around 5% all year. Continuing to upgrade GPT-4o might win OpenAI more attention, but it is simply not profitable. Even so, OpenAI has not abandoned users willing to pay for a more empathetic ChatGPT: at present, only paying users can still use GPT-4o.

This article is from the WeChat official account "Three Easy Life" (ID: IT-3eLife), written by Three Easy Bacteria, and is published by 36Kr with authorization.
