Vitalik fears AGI and ASI: humans should prioritize intelligence-enhancement tools instead of letting AI replace them

ABMedia
12-23

Ethereum co-founder Vitalik Buterin shared his views on AGI (Artificial General Intelligence) and ASI (Artificial Superintelligence) on Twitter, arguing that development should focus on AI tools rather than on superintelligent life forms that could replace humans. He voiced serious concern about the headlong pursuit of AGI and ASI.

AGI as an independent AI that can maintain civilization

Vitalik defines AGI as an AI so powerful that, if all humans suddenly disappeared and it were installed in robots, it could operate independently and keep the entire civilization developing. He added that this marks an evolution from the "tool-like" nature of traditional AI into a "self-sustaining life form".

Vitalik pointed out that current technology cannot simulate such a scenario, so there is no real way to test whether an AI could sustain civilization without humans. It is also difficult to pin down what "civilizational development" means and which conditions count as civilization continuing to operate; these questions are inherently complex. Even so, this may be the most direct feature by which people can distinguish AGI from ordinary AI.

(Note: a self-sustaining life form is an organism or living system that can independently acquire and use resources to maintain itself, adapt to environmental changes, and persist over time.)

Emphasizing intelligence-enhancing tools rather than letting AI replace humans

Vitalik defines Artificial Superintelligence (ASI) as the stage at which AI's progress surpasses the value that human participation can add, reaching full autonomy and higher efficiency. He cited chess as an example: only in the past decade has AI truly entered this stage, with its play now surpassing the best results achieved through human-AI collaboration. Vitalik admitted that ASI frightens him, because it means humans could genuinely lose control of AI.

Rather than developing superintelligent life forms, Vitalik argued, we would do better to build tools that enhance human intelligence and capability. AI should assist humans, not replace them. He believes this path reduces the risk of AI slipping out of control while also improving society's overall efficiency and stability.


