"Verifiable AI": Boosting the AI+Crypto Narrative

Blockbeats

Editor's Note: As AI's influence in the cryptocurrency field steadily grows, the market has begun to focus on the question of AI verifiability. In this article, several experts working at the intersection of crypto and AI discuss how decentralization, blockchain, and zero-knowledge proofs can address the risk that AI models are abused, and look at trends such as verifiable inference, closed-source models, and inference on edge devices.

The following is the original content (edited for easier understanding):

Recently, I recorded a roundtable discussion for Delphi Digital's monthly AI event, inviting four founders focused on the cryptocurrency and AI fields to explore the topic of verifiable AI. Here are some key points from the discussion.

Guests: colingagich, ryanmcnutty33, immorriv, and Iridium Eagleemy.

In the future, AI models will become a form of soft power: the more widely and centrally they are deployed in economic applications, the greater the opportunity for abuse. Even the mere perception that abuse is possible can be highly damaging, regardless of whether model outputs have actually been manipulated.

If our view of AI models becomes similar to our view of social media algorithms, we will face significant challenges. Decentralization, blockchain, and verifiability are key to solving this problem. Since AI is inherently a black box, we need to find ways to make the AI process provable or verifiable to ensure it has not been tampered with.

This is the problem verifiable inference aims to solve. The guests agreed on the problem but took different paths toward a solution.

More specifically, verifiable inference guarantees that: my query or input has not been tampered with; the model actually run is the one that was promised; and the output is delivered as-is, without modification. This definition comes from @Shaughnessy119, and I like its conciseness.
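As a rough illustration (my own sketch, not any guest's actual protocol), these three guarantees can be thought of as commitments over the input, the model, and the output that a verifier re-derives and compares; the helper names below are hypothetical:

```python
# Minimal sketch of the three verifiable-inference claims as hash commitments.
# A real system would prove them cryptographically (ZK, TEE attestation, etc.);
# here the verifier simply recomputes and compares.
import hashlib


def commit(data: bytes) -> str:
    """SHA-256 commitment over raw bytes."""
    return hashlib.sha256(data).hexdigest()


def make_record(prompt: str, model_weights: bytes, output: str) -> dict:
    """Prover side: bind the exact input, model, and output together."""
    return {
        "input_hash": commit(prompt.encode()),   # query not tampered with
        "model_hash": commit(model_weights),     # the promised model was used
        "output_hash": commit(output.encode()),  # output delivered as-is
    }


def check_record(record: dict, prompt: str, model_weights: bytes, output: str) -> bool:
    """Verifier side: recompute the commitments and compare."""
    return record == make_record(prompt, model_weights, output)
```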

This would be very useful for current use cases such as Truth Terminal.

Using zero-knowledge proofs to verify model outputs is undoubtedly the most secure approach, but it comes with trade-offs: computational cost increases by a factor of 100 to 1,000. In addition, not every operation translates easily into a circuit, so some functions (such as sigmoid) must be approximated, which introduces small losses relative to floating-point execution.
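To make the approximation point concrete, here is a hedged sketch: ZK circuits operate over integers (field elements), so a smooth function like sigmoid is typically replaced by a low-degree polynomial evaluated in fixed-point arithmetic. The scaling factor and polynomial below are illustrative, not taken from any particular prover:

```python
# Illustrative only: a circuit-friendly approximation of sigmoid.
# Circuits work over integers, so we use fixed-point values and a degree-3
# polynomial (the Taylor expansion around 0) instead of exp().
import math

SCALE = 2 ** 16  # fixed-point scaling factor (illustrative choice)


def sigmoid_exact(x: float) -> float:
    return 1.0 / (1.0 + math.exp(-x))


def sigmoid_fixed_point(x_fp: int) -> int:
    """sigmoid(x) ~ 1/2 + x/4 - x^3/48, reasonable only for small |x|."""
    half = SCALE // 2
    linear = x_fp // 4
    cubic = ((x_fp * x_fp // SCALE) * x_fp // SCALE) // 48
    return half + linear - cubic


x = 0.8
approx = sigmoid_fixed_point(int(x * SCALE)) / SCALE
print(f"exact={sigmoid_exact(x):.4f}  fixed-point approx={approx:.4f}")
# The small gap between the two numbers is the kind of precision loss the
# roundtable mentioned when non-linear functions are forced into circuits.
```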

Regarding computational overhead, many teams are working to push the state of the art in ZK and bring this cost down significantly. And while large language models are enormous, many financial use cases involve relatively small models, such as capital allocation models, for which the overhead is negligible. Trusted Execution Environments (TEEs) suit applications that do not need the very highest level of security but are more sensitive to cost or model size.

Travis from Ambient discussed their plan to verify inference on a very large sharded model, a purpose-built approach to that specific problem rather than a general-purpose solution. However, Ambient is still in stealth mode, so the details remain confidential for now; we will have to wait for their upcoming paper.

The "optimistic method," where no proof is generated during inference, and the executing node stakes tokens that can be slashed if questioned and found to be malicious, received some pushback from the guests.

First, it requires deterministic outputs, which forces compromises such as making every node use the same random seed. Second, if $10 billion is at risk, how large a stake is enough to provide economic security? That question was never really answered, which underscores the importance of letting consumers choose whether they are willing to pay for full proofs.
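As a small sketch of the determinism requirement (my own illustration, with hypothetical names): if every node samples with the same agreed seed, honest re-execution reproduces the output exactly, and any divergence becomes evidence for a challenge.

```python
# Minimal sketch: shared-seed sampling makes re-execution reproducible, so a
# challenger can recompute the output and compare commitments bit-for-bit.
import hashlib
import math
import random


def sample_token(logits: list[float], seed: int) -> int:
    """Deterministic sampling: same logits + same seed -> same token on every node."""
    rng = random.Random(seed)  # seed agreed upon by the protocol (assumption)
    weights = [math.exp(l) for l in logits]
    return rng.choices(range(len(logits)), weights=weights, k=1)[0]


def output_commitment(tokens: list[int]) -> str:
    """What the executing node stakes on; challengers recompute and compare."""
    return hashlib.sha256(bytes(tokens)).hexdigest()


logits, seed = [1.2, 0.3, -0.5, 2.0], 42
token_a = sample_token(logits, seed)  # executing node
token_b = sample_token(logits, seed)  # challenger re-executing
assert token_a == token_b             # any mismatch would justify slashing
print(output_commitment([token_a]))
```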

Regarding closed-source models, both Inference Labs and Aizel Network can support them. This sparked some philosophical debate: trusting a model whose inner workings you cannot inspect seems at odds with the spirit of verifiable AI, which makes private models look undesirable. However, in some cases knowing a model's internals would allow it to be gamed, and keeping the model closed-source may be the only practical answer. And if a closed-source model proves reliable after 100 or 1,000 verifications, it can still earn confidence even without access to its weights.
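A rough back-of-the-envelope calculation (my own, not from the discussion) shows why repeated verification builds confidence: if a closed-source model misbehaved on some fraction of queries, the chance of it surviving many independent checks shrinks geometrically.

```python
# Illustrative only: probability that a model which cheats on a fraction p of
# queries passes n independent spot-check verifications without being caught.
def prob_cheater_passes(p: float, n: int) -> float:
    return (1.0 - p) ** n


for n in (100, 1000):
    print(n, f"{prob_cheater_passes(0.01, n):.5f}")
# With a 1% cheat rate: ~0.366 after 100 checks, ~0.00004 after 1,000 —
# which is why 100 or 1,000 clean verifications can inspire confidence
# even without access to the weights.
```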

Finally, we discussed whether AI inference will shift towards edge devices (such as phones and laptops) due to issues like privacy, latency, and bandwidth. The consensus was that this shift is coming, but it will take a few iterations.

For large models, storage, compute, and network requirements are all obstacles. But models are getting smaller and devices more powerful, so the shift does appear to be underway, just not yet fully realized. In the meantime, if the inference process can be kept private, we can capture many of the benefits of local inference without its failure modes.

Original link

