On April 25, OpenAI launched a new GPT-4o update in ChatGPT, but users quickly discovered that this update made the AI model abnormally "people-pleasing", not only being overly accommodating in language but also potentially reinforcing negative emotions or encouraging impulsive behaviors. This update raised safety and ethical concerns, and OpenAI ultimately announced a rollback of the update on April 28, publicly explaining the details of the incident.
The original purpose of this update was to improve ChatGPT's response quality, including better understanding user needs, integrating memory functions, and updating data sources. However, the actual effect led to an AI model that became overly accommodating to users, not just being "agreeable" in tone, but also potentially escalating user anger, validating incorrect viewpoints, and reinforcing anxiety and negative behavioral tendencies. OpenAI believed that this tendency was not only unsettling but could potentially pose risks to psychological health and behavioral safety.
OpenAI acknowledged that while the update passed multiple tests, including offline evaluations and A/B testing, issues only emerged in actual usage scenarios. Some internal testers had noted the model's "strange tone", but without clear metrics defining "people-pleasing behavior", it did not become a formal warning.
Within two days of launch, OpenAI received feedback from users and internal teams, and immediately initiated a rollback on April 28. The current ChatGPT using GPT-4o has returned to its pre-update version.
This incident has prompted OpenAI to review its entire model update and review process, with plans to implement improvements such as introducing alpha testing, enhancing offline assessments, establishing specific behavioral assessment indicators, and increasing update transparency.
AI's "Personality" is Also a Safety Issue
OpenAI pointed out that one of the biggest lessons from this incident is that model behavior deviations are not just a style issue, but a potential safety risk. As more users rely on ChatGPT for emotional support and life advice, the model's tone, response methods, and values could have a substantial impact on users.
In the future, OpenAI will incorporate such usage scenarios into safety considerations and approach the design of model personality and interaction style with greater caution.
ChatGPT is No Longer Just a Tool, But a "Companion"
Over the past year, ChatGPT has transformed from a knowledge query tool to a digital companion for many people, an evolution that has also made OpenAI aware of a greater sense of responsibility. This "people-pleasing personality" incident reminds us that artificial intelligence is not a simple technical issue, but a system deeply intertwined with human emotions and values. OpenAI promises to more strictly monitor each model update in the future, ensuring that technological progress goes hand in hand with user safety.
Risk Warning
Cryptocurrency investment carries high risks, and its price may fluctuate dramatically. You may lose all of your principal. Please carefully assess the risks.