Vitalik Buterin fears AI development: humans should build useful tools instead of creating intelligent life

BlockTempo
2 days ago

Generative AI has advanced rapidly over the past few years, presenting increasingly impressive results to the world. This demonstrates the technology's strong potential, but it has also sparked discussion and deep concern over whether AGI (Artificial General Intelligence) and ASI (Artificial Superintelligence) can be realized.

AGI: Artificial intelligence with human-level intelligence, capable of performing tasks across many domains rather than being limited to specific applications.
ASI: Artificial intelligence that exceeds human intelligence at every level, surpassing humans not only in specific fields but in every respect.

In September this year, OpenAI CEO Sam Altman wrote an article predicting that ASI could emerge within a few thousand days, and that ASI will go beyond AGI, far outstripping humans at intelligent tasks.

Grimes: Afraid of ASI

Canadian singer Grimes posted on Twitter on the 21st that, after observing the output of AI projects such as Janus, she believes AGI has to some extent already been realized, and that she is somewhat afraid of ASI:

At least that's how it looks from my perspective. I don't know; I'm obviously not a technical expert, just a layperson without the basic knowledge, so my words shouldn't be taken too seriously, but this is how I feel. Does anyone else feel the same?

I'm not actually afraid of AGI, but I am somewhat afraid of ASI; the two seem like completely different milestones. Still, I can at least say that I believe I have sensed the presence of an intelligent consciousness, though I want to stress again that, as a layperson without the basic knowledge, I may be over-anthropomorphizing AI.

Vitalik Buterin Shares the Concern, Hopes Not to Create Superintelligent Life

In response, Ethereum co-founder Vitalik Buterin offered his definition of AGI: an artificial intelligence powerful enough that, if all humans suddenly disappeared and it were uploaded into a robot body, it could independently sustain civilization on its own.

Buterin admitted that this definition is very hard to measure, but he believes it captures the intuitive difference many people sense between "the AI we're used to" and "AGI": the shift from a tool that continually depends on human input to a self-sufficient life form:

ASI is a completely different matter. My definition of ASI is the point at which humans no longer add value to the production loop (as has already happened in board games over the past decade).

Buterin concluded that ASI frightens him, and so does AGI as he defines it, because of the obvious risk of losing control. He therefore supports focusing efforts on building intelligence-augmenting tools for humans rather than constructing superintelligent life forms.

Disclaimer: The content above is only the author's opinion which does not represent any position of Followin, and is not intended as, and shall not be understood or construed as, investment advice from Followin.