On the 8th of this month, OpenAI officially launched its latest model, GPT-5, but the community's response was mixed. Some felt it had become smarter, while more users complained that GPT-5's output was worse than before and clearly fell short of the AGI-level capability Sam Altman had suggested. This has sparked debate over whether LLMs, having already been trained on nearly all available human data, are approaching a ceiling in intelligence.
Altman on GPT-6
Against this backdrop, OpenAI CEO Sam Altman seemed to acknowledge as much in a recent interview. Discussing the next-generation large language model GPT-6, he said the team will focus on "memory" and "personalization", rather than emphasizing, as in the past, how smart the new model is or how well it performs on exams.
In his words: "Memory is the key to truly personalizing ChatGPT. It needs to remember who you are - your preferences, habits, and quirks - and adjust accordingly."
After users criticized GPT-5 for not being significantly smarter and for coming across as colder, this shift is particularly striking. It signals OpenAI's intent to reposition the product from an "all-knowing answerer" to a "long-term conversationalist", hoping that memory features will increase stickiness; it also seems to tacitly acknowledge that another major leap in intelligence is becoming increasingly difficult.
Trump's Executive Order and Ideological Neutrality
US AI policy is also shaping product development. In July, the Trump administration issued an executive order requiring federally procured AI systems to maintain "ideological neutrality" and support customization. Altman responded that GPT-6 will adopt a "centrist" default while retaining flexibility for users to adjust its tone, stating:
"I think our product should have a more moderate, centrist position, and then you can push it quite far."
That language models must now accommodate different perspectives shows how AI has shifted from a purely technical achievement to public infrastructure that must serve diverse values.
Technical Ceiling and Capital Paths
While we all hope AI will keep growing exponentially, it may be time to consider where growth will come from once large language models' gains in raw intelligence slow down.
OpenAI has placed its bet on "memory" and "personalization", while also exploring adjacent fields such as neural interfaces, energy, and robotics. Investors may likewise need to rethink the compute race and shift toward AI application layers more closely aligned with human needs (though competition there will be fiercer, reminiscent of the dot-com era, when over 95% of web applications did not survive).
To sum up Altman's remarks: GPT-6 seems to prioritize "understanding you better" over "being smarter". Whether you are an investor, an enterprise, or an individual adapting to AI, how to plan the next step will be the key thing to watch.