Manus heralds the dawn of AGI, but AI safety still deserves scrutiny

MarsBit
03-08

Manus achieved state-of-the-art (SOTA) performance on the GAIA benchmark, surpassing OpenAI's large models of the same tier. In practice, this means it can independently complete complex tasks such as cross-border business negotiations: decomposing contract terms, predicting strategy, generating solutions, and even coordinating legal and financial teams. Compared with traditional systems, Manus's advantages lie in dynamic goal decomposition, cross-modal reasoning, and memory-augmented learning: it can break a large task into hundreds of executable subtasks, process multiple data types simultaneously, and use reinforcement learning to continuously improve decision efficiency and reduce error rates.


While the pace of technological progress is astonishing, Manus has reignited a debate within the industry over AI's evolutionary path: will the future be dominated by a single AGI or by multi-agent system (MAS) collaboration?

This starts with the design philosophy of Manus, which implies two possibilities:

One is the AGI path: keep raising the intelligence of a single agent until it approaches humans' comprehensive decision-making ability.

The other is the MAS path: act as a super-coordinator, directing thousands of vertical-domain agents to work in concert.

On the surface this is a debate about divergent paths, but underneath lies the fundamental tension in AI development: how should efficiency and security be balanced? As individual intelligence approaches AGI, the risk of black-box decision-making grows; multi-agent collaboration can spread that risk, but communication latency may cause it to miss critical decision windows.

The evolution of Manus has inadvertently amplified the inherent risks of AI development:

  • The data-privacy black hole: in medical scenarios, Manus needs real-time access to patients' genomic data; in financial negotiations, it may touch companies' unpublished financials.
  • The algorithmic-bias trap: in recruitment negotiations, Manus may recommend below-average salaries to candidates of specific ethnicities; in legal contract review, its misjudgment rate on emerging-industry clauses approaches 50%.
  • Vulnerability to adversarial attacks: hackers can inject specific audio frequencies to make Manus misjudge the counterparty's bidding range during a negotiation.

We must confront a harsh truth about AI systems: the more intelligent the system, the wider its attack surface.

Security, however, has always been a central theme in Web3. Under the framework of the blockchain trilemma (a blockchain network cannot simultaneously achieve security, decentralization, and scalability), a range of cryptographic approaches has emerged:

  • Zero Trust Security Model: Its core idea is "never trust, always verify" — no device is trusted by default, even inside the internal network. The model demands strict authentication and authorization for every access request to keep the system secure.
  • Decentralized Identity (DID): DID is a set of identifier standards that allow entities to obtain verifiable and persistent identification without the need for a centralized registry. This realizes a new decentralized digital identity model, often referred to as self-sovereign identity, and is an important component of Web3.
  • Fully Homomorphic Encryption (FHE): FHE is an advanced encryption technique that allows arbitrary computations to be performed on encrypted data without decrypting it. This means that a third party can operate on ciphertext, and the result after decryption is consistent with the result of performing the same operation on the plaintext. This feature is of great significance for scenarios that require computation without exposing the original data, such as cloud computing and data outsourcing.

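A full FHE scheme is far beyond a short snippet, but the core idea of the bullet above — computing on ciphertexts so that decrypting the result matches computing on the plaintexts — can be illustrated with a toy Paillier cryptosystem. Note the simplification: Paillier is only *additively* homomorphic (real FHE libraries, such as ZAMA's, support arbitrary computation), and the primes below are illustratively tiny, offering no real security.

```python
import random
from math import gcd, lcm

def keygen(p=1000003, q=1000033):
    # Toy primes for demonstration -- real deployments use >= 2048-bit moduli.
    n = p * q
    lam = lcm(p - 1, q - 1)
    g = n + 1                      # standard simplified generator choice
    mu = pow(lam, -1, n)           # modular inverse of lambda mod n
    return (n, g), (lam, mu, n)

def encrypt(pk, m):
    n, g = pk
    r = random.randrange(1, n)     # fresh randomness per ciphertext
    while gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n * n) * pow(r, n, n * n)) % (n * n)

def decrypt(sk, c):
    lam, mu, n = sk
    x = pow(c, lam, n * n)
    return ((x - 1) // n * mu) % n   # L(x) = (x - 1) / n, then scale by mu

pk, sk = keygen()
c1, c2 = encrypt(pk, 17), encrypt(pk, 25)
c_sum = (c1 * c2) % (pk[0] ** 2)   # multiplying ciphertexts adds plaintexts
assert decrypt(sk, c_sum) == 42    # 17 + 25, computed without ever decrypting
```

The third party holding `c1` and `c2` never sees 17 or 25, yet produces a ciphertext of their sum — the property that lets encrypted genomic or financial data participate in analysis without exposure.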
Zero-trust and DID projects have attempted breakthroughs across multiple bull markets; some succeeded, while others were swallowed by the crypto tide. As the youngest of these cryptographic approaches, fully homomorphic encryption (FHE) is also a powerful tool for the security problems of the AI era.

How can FHE address these problems?

First, at the data level: all user input (including biometrics and voice characteristics) is processed in encrypted form, and even Manus itself cannot decrypt the raw data. In a medical-diagnosis scenario, for example, the patient's genomic data participates in the analysis entirely as ciphertext, preventing any leakage of biological information.

At the algorithm level: with FHE-enabled "encrypted model training", even the developers cannot peer into the AI's decision path.

At the collaboration level: multi-agent communication uses threshold encryption, so the breach of a single node cannot leak global data. Even in supply-chain attack-and-defense exercises, an attacker who penetrates several agents still cannot assemble a complete view of the business.
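Threshold encryption is typically built on secret sharing. A minimal sketch using Shamir's scheme illustrates the property claimed above: a key split among n agents can be reconstructed by any k of them, while an attacker holding fewer than k shares learns nothing about it. All parameters and values here are illustrative.

```python
import random

PRIME = 2**127 - 1   # a Mersenne prime; all arithmetic is in GF(PRIME)

def split(secret, k, n):
    """Split `secret` into n shares; any k of them reconstruct it."""
    # Random polynomial of degree k-1 whose constant term is the secret.
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(k - 1)]
    def f(x):
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the constant term."""
    total = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % PRIME
                den = (den * (xi - xj)) % PRIME
        total = (total + yi * num * pow(den, -1, PRIME)) % PRIME
    return total

shares = split(123456789, k=3, n=5)            # 3-of-5 threshold
assert reconstruct(shares[:3]) == 123456789    # any 3 shares suffice
assert reconstruct(shares[1:4]) == 123456789
```

In a multi-agent deployment, each agent would hold one share of a decryption key, so compromising one or two nodes yields ciphertext the attacker still cannot open.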

Because of its technical barriers, Web3 security may have no direct connection to most users, yet it touches their interests in countless indirect ways. In this dark forest, those who do not make every effort to arm themselves will never escape the role of cannon fodder.

  • uPort was launched on the Ethereum mainnet in 2017, and may be the earliest decentralized identity (DID) project launched on the mainnet.
  • In the zero-trust camp, NKN launched its mainnet in 2019.
  • Mind Network is the first FHE project to go live on the mainnet and has taken the lead in collaborating with ZAMA, Google, DeepSeek, and others.

The editor had never heard of uPort or NKN, which suggests that security projects are indeed not where speculators focus. Whether Mind Network can escape that curse and become the leader of the security sector remains to be seen.

The future is already here. The closer AI comes to human intelligence, the more it needs a non-human defense system. The value of FHE lies not only in solving today's problems, but in paving the way for the era of strong AI. On this perilous road to AGI, FHE is not an option but a necessity for survival.
