Media report: OpenAI's GPT-5 training runs into obstacles, delays, high costs, and a data shortage

36kr · 13 hours ago
AI's next leap appears to be delayed. According to a Wall Street Journal report on the 20th, OpenAI's new artificial intelligence project GPT-5 (codenamed Orion) is facing major difficulties. The project has been in development for over 18 months at enormous cost, yet has not achieved the expected results. Insiders revealed that Microsoft, OpenAI's biggest backer, had originally expected to see the new model by mid-2024. OpenAI has conducted at least two large-scale training runs, each taking months and consuming massive amounts of data, but new problems arose each time, and the software fell short of the researchers' expectations. One analysis suggests there may not be enough data in the world to make it smart enough.

Staggering Costs, Slow Progress of the GPT-5 Project

Analysts previously predicted that tech giants could invest $1 trillion in artificial intelligence projects in the coming years. One estimate puts the computing cost alone of a six-month GPT-5 training run at about $500 million, and OpenAI CEO Sam Altman has said the cost of future AI models is expected to exceed $1 billion. However, people familiar with the project said:

"Although Orion's performance is somewhat better than OpenAI's current products, it is not enough to justify the huge operating costs."

OpenAI's $157 billion valuation in October this year rested largely on Altman's forecasts. He had previously said that GPT-5 would be a "major breakthrough": where GPT-4 performs like a smart high-school student, GPT-5 would effectively be a PhD at some tasks. According to the report, GPT-5 should be able to unlock new scientific discoveries and complete everyday human tasks such as scheduling appointments or booking flights. Researchers hope it will make fewer mistakes than existing AI, or at least acknowledge uncertainty, since current models can hallucinate. However, there is no fixed standard for when a model counts as "smart enough"; it is more a matter of feel, and so far the GPT-5 under development still does not feel strong enough. Altman said in November that "no product called GPT-5 will be released in 2024".

Data Shortage as the Main Bottleneck

To avoid wasting a huge investment, researchers try to minimize the chance of failure through small-scale trials. The GPT-5 effort, however, seems to have run into trouble from the start. In mid-2023, OpenAI began a training run that doubled as a test of Orion's proposed new design, but the process proved slow, suggesting that larger-scale training would take a very long time and therefore cost an extraordinary amount. OpenAI's researchers decided to make technical adjustments to strengthen Orion, and they also found that making Orion smarter would require more high-quality, diverse data. Training a model is an ongoing process: a large-scale run can take months, with trillions of tokens "fed" to the model. But the public internet's supply of news articles, social media posts, and scientific papers is no longer sufficient. As Datology AI CEO Ari Morcos said:

"It's becoming very expensive, and it's hard to find more data of the same high quality."

To solve this problem, OpenAI has chosen to create data from scratch. It hires software engineers, mathematicians, and other professionals to write new code or solve math problems, which are then used as training data. The company also works with experts in fields such as theoretical physics, who explain how they would tackle the hardest problems in their fields. But this process is very slow: GPT-4's training used about 13 trillion tokens, and even 1,000 people writing 5,000 words a day would produce only about 1 billion tokens after months of work. OpenAI has also begun developing "synthetic data", training Orion on AI-generated data, and believes it can avoid failure modes by using data generated by its other AI model, o1.
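The scale mismatch above can be checked with back-of-envelope arithmetic. The sketch below uses the article's figures (13 trillion tokens for GPT-4, 1,000 writers at 5,000 words a day); the words-to-tokens conversion rate of roughly 0.75 words per token is an assumed rule of thumb for English text, not a figure from the report.

```python
# Back-of-envelope check: how fast could hired experts hand-write training data,
# and how does that compare to a GPT-4-scale corpus?
TOKENS_NEEDED = 13_000_000_000_000  # ~13T tokens reportedly used to train GPT-4
WRITERS = 1_000                     # people writing full time (article's figure)
WORDS_PER_DAY = 5_000               # words per person per day (article's figure)
WORDS_PER_TOKEN = 0.75              # assumed rough average for English text

tokens_per_day = WRITERS * WORDS_PER_DAY / WORDS_PER_TOKEN
days_per_billion = 1_000_000_000 / tokens_per_day
years_for_corpus = TOKENS_NEEDED / tokens_per_day / 365

print(f"{tokens_per_day:,.0f} tokens/day")             # ~6.7 million
print(f"{days_per_billion:,.0f} days per 1B tokens")   # about five months
print(f"{years_for_corpus:,.0f} years for 13T tokens")
```

Under these assumptions the writers produce roughly 1 billion tokens every five months, and assembling a 13-trillion-token corpus by hand would take thousands of years, which is consistent with the article's point that manual data creation cannot close the gap on its own.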

Google Catching Up, OpenAI Panicking?

This year, with Google launching its most popular new AI application, NotebookLM, OpenAI has grown more anxious. With Orion stalled, the company has started other projects and applications, including a slimmed-down version of GPT-4 and the AI video generator Sora. Insiders say, however, that this has forced the teams building new products and the Orion researchers to compete for limited computing resources.

OpenAI is also developing more advanced reasoning models, on the theory that letting an AI "think" for longer enables it to solve complex problems it never encountered during training. These new strategies face challenges of their own: Apple researchers have found that reasoning models, including OpenAI's o1, may simply be imitating their training data rather than truly solving new problems, and o1's approach of generating multiple candidate answers greatly increases operating costs. Nevertheless, OpenAI continues to push forward with GPT-5. On Friday, Altman announced plans for a new reasoning model smarter than any previous product, but did not say when, or whether, a model worthy of the GPT-5 name will be released.

This article does not constitute personal investment advice and does not represent the platform's views. Markets carry risk; invest with caution and make independent judgments and decisions.

This article is from the WeChat public account "Wall Street Insider", author: Huang Wenwen, published with authorization by 36Kr.
