For decades, the International Collegiate Programming Contest (ICPC) has been considered the "Olympics" of computer programming. This year, however, the limelight was stolen by two non-human competitors: OpenAI's GPT-5 and Google DeepMind's Gemini 2.5 Deep Think.
Both models entered under official ICPC rules and organizational oversight, working through the same problem-solving process as the human competitors. Although they did not compete directly against the student teams, they delivered impressive results:
● GPT-5 scored a perfect 12/12, solving every problem, a result equivalent to gold medal level.
● Gemini 2.5 Deep Think solved 10 of the 12 problems in a total of 677 minutes (ICPC totals the acceptance time of each solved problem), also reaching gold medal level. According to Google, this result would have placed second among the human teams.
It's worth noting that the gold medals at this year's ICPC went to teams from Saint Petersburg State University, the University of Tokyo, Beijing Jiaotong University, and Tsinghua University. Even these elite teams fell short of a perfect score (the best result was 11/12). In other words, this was the first time AI had outperformed the best human teams in an algorithmic competition of this kind.
ICPC: The Programmer's Olympics
ICPC is the world's premier undergraduate programming competition. Since the 1970s, it has brought together top algorithmic talent from universities around the world; this year's finals drew teams from 139 universities across 103 countries. The competition rules are deceptively simple:
● Each team consists of three university students;
● Solve 12 algorithm problems within 5 hours;
● Ranking is based first on the number of problems solved, with ties broken by total penalty time: each solved problem contributes its acceptance time plus 20 minutes per rejected submission (see the sketch below).
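For readers unfamiliar with that tie-breaking rule, here is a minimal sketch of the standard ICPC penalty-time calculation; the function names and example numbers are illustrative, not official scoring code:

```python
# Minimal sketch of the standard ICPC ranking rule (illustrative only).

def penalty_time(solved):
    """solved: list of (accept_minute, wrong_tries_before_accept) pairs,
    one per solved problem. Each rejected submission on a problem that is
    eventually solved adds a 20-minute penalty."""
    return sum(t + 20 * wrong for t, wrong in solved)

def rank_key(solved):
    """Teams sort by this key: more problems solved first,
    then lower total penalty time."""
    return (-len(solved), penalty_time(solved))

# Example: two problems accepted at minutes 30 and 120, with 0 and 2
# wrong attempts respectively -> 30 + (120 + 2 * 20) = 190 minutes.
print(penalty_time([(30, 0), (120, 2)]))   # 190
print(rank_key([(30, 0), (120, 2)]))       # (-2, 190)
```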
But the challenge behind this competition far exceeds that of typical programming contests. ICPC problems often draw on advanced algorithms in graph theory, number theory, dynamic programming, combinatorial optimization, network flow, and other fields, testing not only coding speed but also mathematical proficiency and teamwork. Over the years, many ICPC gold medalists have gone on to become core technical talent at global tech companies.
Precisely because of ICPC's authority and difficulty, AI's entry into this year's competition is particularly symbolic: it pushes AI directly into the most rigorous algorithmic arena.
GPT-5 delivers a perfect score; Gemini 2.5 cracks Problem C, which no human team solved
According to OpenAI, GPT-5 received no special training for the ICPC, nor did it use any external tools. Like the human teams, it was given the same problem statements in PDF form, submitted its answers through the official judging system, and finished within the five-hour limit.
The results were astonishing: 11 problems were accepted on the first submission, and the one remaining hard problem was solved on the ninth attempt, for a perfect score of 12/12. Recall that the strongest human team this year scored 11/12; a perfect score is extremely rare in ICPC history.
OpenAI shared GPT-5's results on X:
"We officially competed in the ICPC AI track, also with a five-hour deadline to solve 12 problems, with answers judged in real time by the ICPC evaluation system. Eleven of the 12 problems were passed on the first submission, while the most difficult problem was solved only on the ninth. Ultimately, GPT-5 solved all 12 problems, while the best human team managed 11."
Google, for its part, shared details of Gemini 2.5 Deep Think's run: 8 problems solved within 45 minutes; 10 problems solved within 3 hours; and, most strikingly, Problem C cracked within the first half hour of the competition, a problem that no university team managed to solve.
The problem reportedly asks for a configuration of pipe valves in a complex network of reservoirs and pipelines that allows all reservoirs to be filled in the shortest possible time. Since each pipe can be open, closed, or partially open, the space of possible configurations is continuous and effectively unbounded, making the search for the optimum extremely difficult.
Faced with this problem, Gemini 2.5 Deep Think's solution was clever:
1. First, assign each reservoir a "priority value" indicating how much flow it should receive relative to the other reservoirs;
2. Given those priority values, find the optimal pipeline configuration through dynamic programming;
3. Then apply the minimax theorem to recast the problem as finding the "most constrained" combination of priority values;
4. Finally, run a nested ternary search over the resulting convex optimization space to converge quickly on the optimal solution (see the sketch below).
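Google has not published the model's actual code, so the following is only a minimal sketch of the nested-ternary-search idea in step 4, applied to a toy convex function; the objective, bounds, and iteration counts are all illustrative assumptions:

```python
# Minimal sketch of nested ternary search on a convex 2-D function.
# The objective and search bounds are toy placeholders, not the actual
# ICPC Problem C objective.

def ternary_search(f, lo, hi, iters=60):
    """Minimize a unimodal (e.g., convex) 1-D function f on [lo, hi]."""
    for _ in range(iters):
        m1 = lo + (hi - lo) / 3
        m2 = hi - (hi - lo) / 3
        if f(m1) < f(m2):
            hi = m2  # the minimum lies in [lo, m2]
        else:
            lo = m1  # the minimum lies in [m1, hi]
    return (lo + hi) / 2

def nested_ternary_search(f, x_lo, x_hi, y_lo, y_hi):
    """Minimize a jointly convex f(x, y) by nesting two 1-D searches:
    for each candidate x, the inner search finds the best y."""
    def g(x):
        y = ternary_search(lambda y: f(x, y), y_lo, y_hi)
        return f(x, y)
    x = ternary_search(g, x_lo, x_hi)
    y = ternary_search(lambda y: f(x, y), y_lo, y_hi)
    return x, y, f(x, y)

# Toy objective with its minimum at (1, -2).
x, y, val = nested_ternary_search(
    lambda x, y: (x - 1) ** 2 + (y + 2) ** 2, -10.0, 10.0, -10.0, 10.0)
print(f"x = {x:.4f}, y = {y:.4f}, f = {val:.6f}")
```

The nesting works because minimizing a jointly convex function over one variable leaves a function that is still convex in the remaining variable, so each one-dimensional search stays unimodal.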
This approach was not the "standard solution" in the official editorial; the model derived it on its own. In other words, Gemini demonstrated original algorithmic thinking that goes beyond memorization. Google emphasized in a blog post that this was not merely a correct answer but a "creative breakthrough."
Why is this moment so significant?
In fact, high scores by large models on exams and benchmarks are no longer news:
● LLMs like ChatGPT and Gemini have repeatedly scored high on human exams such as the SAT, the bar exam, and the TOEFL;
● In July this year, Gemini reached gold-medal level at the International Mathematical Olympiad (IMO);
● LLMs have long topped the leaderboards of NLP and logical-reasoning benchmarks.
However, these achievements are often dismissed as memorization of training data or brute-force search backed by massive compute. Live algorithm competitions like the ICPC are different: first, the problems are novel and almost certainly absent from any training corpus; second, they demand a combination of mathematical modeling, reasoning, and code implementation; and most importantly, solutions must be found within a strict time limit rather than through slow, offline deliberation.
GPT-5's and Gemini 2.5 Deep Think's performance at the ICPC demonstrates on-the-spot reasoning, abstract modeling, and creative problem-solving, which matters more than simply scoring high on standardized tests. Many AI engineers on social media marveled: "In the past, we worried that AI was just memorizing question banks; now it's beating human champions in live competitions. It feels like witnessing a moment of human-machine parity in intelligence."
This isn't the end, but a beginning. Whether AI can extend this capability to messier real-world problems remains to be seen, but one thing is certain: AI is no longer just a "code-writing assistant"; it now genuinely has the power to compete head-on with human intelligence.
This article comes from the WeChat public account "CSDN", was compiled by Zheng Liyuan, and is published by 36Kr with authorization.