Ethereum co-founder Vitalik Buterin shared his unique views on AGI (Artificial General Intelligence) and ASI (Artificial Superintelligence) on Twitter, emphasizing the need to focus on AI tools rather than pursuing the development of superintelligent life forms that could replace humans. He expressed significant concerns about the excessive development of AGI and ASI.
AGI as an independent AI that can maintain civilization
Vitalik defined AGI as a very powerful AI: if all humans suddenly disappeared and this AI were installed in robots, it could operate independently and sustain the development of civilization on its own. He added that this marks an evolution from the "tool-like" nature of traditional AI into a "self-sustaining life form".
Vitalik pointed out that current technology cannot simulate such a scenario, so we cannot truly test whether an AI could maintain civilization without humans. It is also hard to define the standard of "civilizational development", or which conditions count as civilization continuing to operate; these questions are inherently complex. Still, this may be the clearest feature by which people can distinguish AGI from ordinary AI.
(Note: A self-sustaining life form is an entity or living system that can independently obtain and use resources to maintain its own activities, adapt to environmental change, and persist under given conditions.)
Emphasizing intelligence-augmenting tools rather than AI that replaces humans
Vitalik defines Artificial Superintelligence (ASI) as the stage at which AI's progress exceeds the value that human participation can add, reaching full autonomy and greater efficiency. He cited chess as an example: only in the last decade has AI truly entered this stage, with its play surpassing the best results achieved through human-AI collaboration. Vitalik admitted that ASI frightens him, because it means humans may genuinely lose control over AI.
Vitalik argued that instead of developing superintelligent life forms, we should focus on building tools that enhance human intelligence and capability. In his view, AI should assist humans, not replace them, and this development path reduces the risk of AI slipping out of control while improving society's overall efficiency and stability.