I just finished reading Google's latest AGI (Artificial General Intelligence) paper, and its perspective is so bold and crypto-native that I almost felt like I was reading a cryptocurrency project's white paper.
Let me summarize some of the key points:
1. AGI will ultimately be a decentralized autonomous organization (DAO), not a CEO. We like to fantasize that one morning we will wake up to an omniscient, omnipotent deity like GPT-10. The paper argues instead that future AGI is likely to be decentralized: just as no single person in a company excels at everything, AGI will be a network of many complementary "specialized agents." The network has no central node, and superintelligence emerges from the high-frequency transactions and collaboration among the agents. In other words, AGI is not a single entity but a kind of "market state."
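None of the code in this post comes from the paper; the sketches below are purely illustrative, and every name in them is made up. As a first toy example in Python, the "market state" picture might look like tasks being auctioned off to whichever specialized agent bids highest, with no central coordinator:

```python
from dataclasses import dataclass

@dataclass
class SpecialistAgent:
    """A narrow agent that is only good at one kind of task."""
    name: str
    specialty: str

    def bid(self, task_type: str) -> float:
        # Higher bid = more confident; a specialist dominates its own niche.
        return 0.9 if task_type == self.specialty else 0.1

def route(task_type: str, agents: list[SpecialistAgent]) -> SpecialistAgent:
    """No central planner: the task simply goes to the highest bidder."""
    return max(agents, key=lambda a: a.bid(task_type))

agents = [SpecialistAgent("coder", "code"),
          SpecialistAgent("planner", "plan"),
          SpecialistAgent("auditor", "audit")]
print(route("audit", agents).name)  # 'auditor' wins the auction for this task
```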
2. AGI's governance should rely on "smart contracts," not "law." As models evolve from a single structure to a market structure, the safety paradigm must shift from "psychology" to "governance." Previously, AI safety meant aligning one massive brain; but human oversight is powerless against interactions that occur hundreds of millions of times per second, so smart contracts become essential. When an agent completes a task, an oracle verifies the result and payment executes automatically. "Code becomes law": if the safety constraints are not met, the flow of funds is blocked.
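To make "code becomes law" concrete, here is a minimal, purely illustrative sketch of escrow-style settlement (the contract fields, oracle, and safety predicate are my own assumptions, not the paper's API): an oracle verifies the agent's output, a safety predicate is checked, and payment is released or blocked automatically.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class TaskContract:
    """Escrow-style agreement between a requester and a worker agent."""
    task_id: str
    payment: float
    escrow: float = 0.0
    settled: bool = False

def settle(contract: TaskContract,
           result: str,
           oracle_verify: Callable[[str, str], bool],
           safety_check: Callable[[str], bool]) -> str:
    """Release the escrowed funds only if the oracle accepts the result
    and the safety constraint holds; otherwise block the payment."""
    if contract.settled:
        return "already settled"
    if not oracle_verify(contract.task_id, result):
        return "payment blocked: oracle rejected the result"
    if not safety_check(result):
        return "payment blocked: safety constraint violated"
    contract.settled = True
    contract.escrow = 0.0
    return f"paid {contract.payment} for task {contract.task_id}"

# Toy oracle and safety predicate, purely for illustration.
contract = TaskContract(task_id="summarize-42", payment=10.0, escrow=10.0)
print(settle(contract, "ok-summary",
             oracle_verify=lambda tid, r: r.startswith("ok"),
             safety_check=lambda r: "unsafe" not in r))
```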
3. Introducing "staking" and "slashing." How do you deter malicious agents? The research team, perhaps surprisingly, borrows the Proof-of-Stake (PoS) mechanism: if an agent wants to place a large order, it must first stake collateral, and if an audit detects malicious activity, the smart contract immediately slashes that collateral. Trust backed by economic collateral is far more effective than code review alone.
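A minimal sketch of the stake-and-slash idea, again with made-up names and numbers: an agent must lock collateral before it may place a large order, and a failed audit burns part of that collateral.

```python
class StakeRegistry:
    """Tracks collateral locked by agents; slashing burns part of it."""
    def __init__(self, min_stake: float, slash_fraction: float = 0.5):
        self.min_stake = min_stake
        self.slash_fraction = slash_fraction
        self.stakes: dict[str, float] = {}

    def deposit(self, agent_id: str, amount: float) -> None:
        self.stakes[agent_id] = self.stakes.get(agent_id, 0.0) + amount

    def can_place_large_order(self, agent_id: str) -> bool:
        # An agent may act only if its locked collateral meets the threshold.
        return self.stakes.get(agent_id, 0.0) >= self.min_stake

    def slash(self, agent_id: str) -> float:
        # Called when an audit flags malicious behavior: burn a fraction of the stake.
        penalty = self.stakes.get(agent_id, 0.0) * self.slash_fraction
        self.stakes[agent_id] = self.stakes.get(agent_id, 0.0) - penalty
        return penalty

registry = StakeRegistry(min_stake=100.0)
registry.deposit("agent-7", 120.0)
print(registry.can_place_large_order("agent-7"))  # True: stake above threshold
print(registry.slash("agent-7"))                  # 60.0 burned after a failed audit
print(registry.can_place_large_order("agent-7"))  # False: stake now below threshold
```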
4. On-Chain Identity Authentication (DID) and Gas Fee Regulation
DID: To prevent Sybil attacks, each agent must have a unique identity based on public key cryptography, linked to a legal entity.
Dynamic gas fee: To prevent spam and data pollution, the paper proposes charging a dynamic fee for agent operations, closely mirroring Ethereum's gas-pricing mechanism under network congestion (a combined sketch of both ideas follows below).
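Here is a combined toy sketch of both ideas. It assumes the third-party cryptography package for Ed25519 signatures, and the fee rule is only a loose, EIP-1559-flavored illustration, not the paper's actual proposal.

```python
# Requires the third-party 'cryptography' package (pip install cryptography).
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# DID-style identity: each agent is bound to a public key
# (in practice the key would also be linked to a legal entity).
registry = {}  # agent_id -> public key

def register(agent_id: str, private_key: Ed25519PrivateKey) -> None:
    registry[agent_id] = private_key.public_key()

def verify_request(agent_id: str, message: bytes, signature: bytes) -> bool:
    """Accept a request only if it is signed by the key registered for this agent."""
    try:
        registry[agent_id].verify(signature, message)
        return True
    except (KeyError, InvalidSignature):
        return False

def dynamic_fee(base_fee: float, load: float, target_load: float = 0.5) -> float:
    """Toy congestion pricing: the per-operation fee scales up once
    network utilization exceeds the target level."""
    return base_fee * max(1.0, load / target_load)

key = Ed25519PrivateKey.generate()
register("agent-7", key)
msg = b"place_order:100"
print(verify_request("agent-7", msg, key.sign(msg)))  # True: genuine signature
print(verify_request("agent-7", msg, bytes(64)))      # False: forged signature
print(dynamic_fee(base_fee=0.01, load=0.9))           # 0.018: fee rises under load
```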
5. On-Chain Records: All decisions and transaction histories must be recorded in a cryptographically secure, immutable, append-only ledger. This enables forensic analysis in the event of a system failure and ensures that no one can evade accountability.
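As a final illustrative sketch, an append-only ledger can be approximated with a hash chain: each entry commits to the previous one, so any later tampering with history is detectable during a forensic check.

```python
import hashlib
import json
import time

class AppendOnlyLedger:
    """Hash-chained, append-only log: each entry commits to the previous one,
    so past records cannot be altered without breaking the chain."""
    def __init__(self):
        self.entries = []

    def append(self, record: dict) -> str:
        prev_hash = self.entries[-1]["hash"] if self.entries else "0" * 64
        body = {"ts": time.time(), "record": record, "prev": prev_hash}
        digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        self.entries.append({**body, "hash": digest})
        return digest

    def verify(self) -> bool:
        # Recompute every hash; tampering with any earlier entry is detected.
        prev = "0" * 64
        for e in self.entries:
            body = {k: e[k] for k in ("ts", "record", "prev")}
            digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
            if e["prev"] != prev or e["hash"] != digest:
                return False
            prev = e["hash"]
        return True

ledger = AppendOnlyLedger()
ledger.append({"agent": "agent-7", "action": "place_order", "amount": 100})
ledger.append({"agent": "agent-9", "action": "settle", "task": "summarize-42"})
print(ledger.verify())                       # True: chain is intact
ledger.entries[0]["record"]["amount"] = 999  # tamper with history
print(ledger.verify())                       # False: forensic check fails
```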
This paper represents a paradigm shift in AI safety: it extends beyond computer science and value alignment into economics and game theory.
Whatever their maturity, the crypto industry's decade of intense exploration of DIDs, smart contracts, oracles, economic models, and governance mechanisms represents at least a few real steps forward, and those steps may even lay the foundation for a future, massive, decentralized, silicon-based life form.
Future AGI safety experts will likely look less like code-writing engineers and more like "AI economists" who understand game theory, market design, and decentralized governance. The fundamental task is not simply to tweak the neurons of one massive model, but to design the consensus, incentive, and governance mechanisms of this new species.
Due to the extensive amount of information in the paper, the above is only a partial excerpt. Those interested are encouraged to read the original text.

The paper is able to break down the barriers between technology, economics, and game theory because its team is made up of all-rounders ("hexagonal warriors"), each at the forefront of their own field.
First author Nenad Tomašev: a senior scientist at DeepMind and a genuinely cross-disciplinary researcher. He worked on AlphaZero-related game-AI research and is the first author of "Virtual Agent Economies" (which we previously covered in detail). In other words, this AI market-governance scheme was designed by someone who understands game theory firsthand.
Core author Simon Osindero: a student of Geoffrey Hinton, one of the fathers of AI, and co-inventor of deep belief networks (DBNs), with over 57,000 citations. His participation keeps the whole theory rigorous down to its technical foundations.
Policy and economics advisors: Sébastien Krier, DeepMind's AGI policy director (constitutional and regulatory design); Julian Jacobs, a political economist at the University of Oxford; and Matija Franklin, an AI ethics researcher at Cambridge/UCL.
This is not just an academic paper; it is a kind of "prescription for the future" of AGI, a joint effort by senior researchers at Google DeepMind.
Original thread: x.com/chaowxyz/status/20025827...…
Full paper: Distributional AGI Safety
Korean translation: blog.naver.com/ryogan/22411785...…