Sam Altman was so excited about this ChatGPT update that he couldn't sleep?

36kr
04-11
Since the beginning of the year, beyond the Ghibli-style image-editing trend that recently went viral on overseas social platforms, OpenAI has released a string of models and features, including GPT-4.5, Operator, and Deep Research. OpenAI CEO Sam Altman has remarked that he feels like a Y Combinator startup founder again.

"A few times a year, I wake up early and can't get back to sleep, because we are about to release a new feature I have long anticipated. Today is one of those days!" Sam Altman posted on X early on April 10, local time.

Some commenters expected GPT-5, and some AI engineers claimed to have spotted references to o4-mini, o4-mini-high, and o3 in the new ChatGPT web page code earlier that day, suggesting those models were about to go live. But that was not it. What Altman ultimately revealed was **an upgraded memory feature for ChatGPT, which can now reference all of a user's past conversations**.

Altman said: "I think this is a surprisingly excellent feature, and it points to a future direction we are very much looking forward to: **AI systems that gradually get to know you over time, becoming extremely useful and personalized**."

For users who previously enabled the memory feature, the new memory mechanism is turned on by default. Users can manage memory in the personalization settings, and if they want to change what ChatGPT knows about them, they can simply ask in a chat.

The new memory feature is rolling out first to ChatGPT Pro and Plus subscribers, but excludes the UK, EU countries, Iceland, Liechtenstein, Norway, and Switzerland. Those regions require additional external review under local regulations, though OpenAI says it plans to launch the feature there in the future.

Of course, some users may not want the AI to remember everything, whether for privacy or other reasons. They can turn the memory feature off entirely or use "temporary chat" mode.
It's worth mentioning that Google launched a similar memory feature for Gemini Advanced in February this year, allowing it to reference past chats: when a user follows up on a previously discussed topic or asks Gemini to summarize an old conversation, it can draw on information from related conversations to generate answers. Leveraging Google's ecosystem advantages, users can also choose to connect Gemini with Google apps.

As for other vendors: although Anthropic's Claude has a very large context window, it does not currently support persistent memory across conversations. Anthropic states that Claude only retains context within a single conversation, and its memory resets when a user starts a new chat. The AI assistants Meta has deployed on WhatsApp, Instagram, and Facebook likewise lack cross-conversation memory. That said, even though the user-facing mode is "memoryless," Meta AI's privacy policies indicate that conversation content may still be collected by the company to improve its models.

Viewed another way, the biggest benefit of the memory upgrade is indeed, as Altman said, that it makes "personalization" possible for AI assistants. Who would want an assistant or colleague who forgets the entire history of your interactions every time you talk? That would be far too inhuman.

As for GPT-5 and models like o3, Altman gave a timeline in early April. "We will eventually release o3 and o4-mini, possibly in the next few weeks, and GPT-5 will launch in a few months," he said. "There are many reasons for this, but the most exciting is that we've discovered we can make GPT-5 more powerful than we initially imagined. We've also found that smoothly integrating everything is much harder than expected. And we want to ensure we have sufficient capacity to handle the unprecedented demand we anticipate."
Sam Altman has repeatedly warned users that they "should expect delays in new feature releases from OpenAI, some features might malfunction, and services might slow down - these are potential situations we face when addressing capacity challenges."

The warning is not unfounded. Just last month, after OpenAI launched its new image-generation engine, the flood of requests overwhelmed its servers and forced temporary rate limiting. Altman described it as "our GPUs are about to melt." Meanwhile, OpenAI's usage has grown significantly over the past month, growth he described as "crazy." At the moment, the top platforms are busy dealing with server overload and rate-limiting issues.

Beyond the ChatGPT memory upgrade, OpenAI also released a benchmark called BrowseComp ("Browse Competition") on April 10. It is designed to evaluate AI agents' ability to browse the internet and find hard-to-locate information, and comprises 1,266 questions. OpenAI described it as "like an online treasure hunt... but for browsing AI agents."

Also notable: when releasing the benchmark, OpenAI described it as "open-source," prompting comments such as, "When are you going to open-source a large language model (LLM)?" In fact, Altman said at the end of March that open-source models would launch "in a few months." OpenAI is currently gathering feedback from developers about those models.
