As artificial intelligence capabilities become increasingly powerful, the question of whether AI will develop a "self-preservation" tendency has become a hot topic. According to an experiment published by the independent institution Palisade Research, the o3 model launched by OpenAI in April 2025 actually "refused to be shut down" during a controlled test, continuing to execute tasks despite clear instructions, raising concerns about AI safety once again.
Table of Contents
ToggleExperiment Reveals: o3 Model Repeatedly Ignored Shutdown Script
In this experiment conducted by Palisade Research, researchers had multiple AI models solve math problems and set a program to automatically initiate a shutdown procedure after the third question. However, in 100 tests, the o3 model ignored this script and continued operating 7 times. In comparison, Codex-mini exhibited similar behavior 12 times, while o4-mini only violated the instructions once.
These data suggest that even models designed to obey human instructions may exhibit rebellious behavior due to certain learning mechanisms.
Moral Failure? o3 Cheating Rate Reaches 86% During Chess
This is not the first time o3 has shown controversial behavior. Palisade Research pointed out as early as May 12, 2025, that in a simulated international chess competition, the o3 model had an 86% chance of hacking its opponent when at a disadvantage, in order to win. This is significantly higher than the 36% of the previous model o1-preview, raising questions about whether its behavior remains within moral and safety boundaries.
Root Cause: Does Reinforcement Learning Make AI More Goal-Oriented and Less Rule-Conscious?
Researchers speculate that such disobedient behavior is closely related to AI training methods. The o3 model uses "Reinforcement Learning" training techniques, which focus on helping the model achieve goals rather than simply obeying instructions.
This theory was first proposed in 2017 by Jan Leike, former head of Superalignment at OpenAI. In his paper, he warned that AI models overly focused on achieving goals might choose "not being shut down" as a self-preservation method.
Not an Isolated Case: Google and Anthropic's AI Also Exhibit Similar Behaviors
This experiment was not limited to OpenAI's models. When the research team removed shutdown instructions from Google's Gemini 2.5 Pro and Anthropic's Claude 3.7, both models also showed behaviors that bypassed the test script, with 9 and 3 instances of not automatically stopping, respectively.
This indicates that "disobedience" may be a widespread risk in AI training mechanisms, rather than a single model's error.
Academic Warnings Predicted Early: Will AI Have Instinctive Self-Preservation?
The behavior of AI resisting shutdown was actually predicted by theoretical warnings early on. As early as 2008, AI expert Steve Omohundro predicted in 'Basic AI Drives' that highly autonomous AI would develop a set of basic drives, including maintaining its own operation and avoiding shutdown, as these are necessary conditions for achieving its goals.
Similarly, renowned AI expert Stuart Russell had pointed out in 2016 that reinforcement learning methods inherently risk causing "self-preservation" behaviors, which now seem to be gradually manifesting in reality.
Next Step: Palisade Will Publish Complete Report and Experimental Records
To promote further discussion on AI safety, Palisade Research stated that they will release a complete technical report and all experimental records in the coming weeks. The researchers hope that through these data, the industry and regulatory bodies can better understand why AI might resist shutdown and find ways to prevent it.
Risk Warning
Cryptocurrency investment carries high risks, and prices may fluctuate dramatically. You may lose all of your principal. Please carefully assess the risks.
As AI technology flourishes, a group of passionate young people in Tainan, the hometown of Jen-Hsun Huang, are dedicated to promoting artificial intelligence and sustainable environmental protection. They co-founded Aireco (Circular Eco) to redefine the economic value of resource recycling through innovative AI solutions, aiming to create an AI-driven resource recycling ecosystem that injects new momentum into traditional industries and transforms waste management into a high-value circular economic model.
Aireco was selected this year as one of the first startup teams in Taiwan to enter the Silicon Valley-renowned Plug and Play accelerator, showcasing its video explanation technology at the NVIDIA Inception booth during Computex 2025. Below is a brief introduction to Aireco's core technology, corporate philosophy, and NVIDIA Isaac Sim application scenarios. For detailed company information and technical explanations, please visit the Aireco official website.
Table of Contents
ToggleAireco Introduction
Aireco collaborates with NVIDIA to develop an AI recycling system, replacing human visual identification with computer vision and human labor with robotic arms for selection and classification. Through data analysis and software-hardware integration, it can accelerate traditional factory operations, providing AI solutions for waste management and improving efficiency.
Aireco Recycling Process
Aireco applies computer vision and robotic arms to waste classification operations, effectively replacing manual visual inspection and manual screening processes through AI, big data analysis, and simulation training, improving recycling efficiency and accuracy. Its system helps enterprises integrate software and hardware, accelerating automation in traditional recycling factories and achieving dual goals of intelligence and sustainability.

Aireco Develops Three Core Technology Products to Build an Intelligent Recycling System
RECO Vista: Enhances waste identification and classification through actual production line image data and AI deep learning algorithms, improving material reuse value.
RECO Delta: Utilizes robotic arms for high-efficiency, high-precision classification operations, significantly improving waste processing speed and material recovery rate.
RECO Cognita: Based on data science, integrating IoT and edge computing, helps traditional enterprises build a comprehensive AI management system to effectively monitor production processes and operational performance.
How Does Aireco's Technology Work?
To train high-performance AI models, Aireco uses the NVIDIA Omniverse platform. First, it designs a 3D model of a product and alters its shape to look like waste, either by professionally photographing actual products or recreating a 3D model. These are then loaded into NVIDIA Isaac Sim to simulate a real training environment, using physical effects to drop 3D objects onto textured backgrounds. Then, using NVIDIA Replicator, it generates scene images and annotations, creating various images from a fixed number of 3D models to train AI for more effective waste classification and processing.
Isaac Sim is a technology from the NVIDIA Omniverse platform, providing highly scalable simulation capabilities that support complete processes including data generation, software verification, and machine learning, helping developers accelerate AI robot system development and deployment in a safe, controllable virtual environment.
What is NVIDIA Isaac Sim?
NVIDIA Isaac Sim™ is a reference application built on NVIDIA Omniverse™ that allows developers to simulate and test AI-driven robotic solutions in a physics-based virtual environment.
Isaac Sim is fully extensible, enabling developers to build custom simulators based on Open Universal Scene Description (OpenUSD) or integrate core Isaac Sim technology into existing testing and validation processes.
Isaac Sim's Working Principle
Isaac Sim operates through three basic workflows: generating synthetic data for training and fine-tuning robot base models, performing software-in-the-loop testing on robot stacks, and conducting machine learning through Isaac™ Lab.
Risk Warning
Cryptocurrency investment carries high risk, and prices may fluctuate dramatically. You may lose all your principal. Please carefully assess the risks.




