Decentralized artificial intelligence

BTCdayu
01-27

Source of this article: https://www.caseycaruso.com/thoughts/decentralized-ai

Translation: https://twitter.com/BTCdayu

Here are the categories I found interesting at the intersection of cryptocurrency and artificial intelligence.

I believe that openness drives innovation. In recent years, AI has crossed the chasm into global utility and influence. Because computing power compounds as resources are consolidated, AI naturally pushes toward centralization: those with more compute gradually become dominant. That poses a risk to our pace of innovation. I believe decentralization and Web3 are strong contenders for keeping AI open.

This list and these sample companies change daily. Please treat this not as a source of truth but as a snapshot in time. If I missed companies or you think I'm wrong, DM me on Twitter; I'd love to debate.

Decentralized computing for pre-training + fine-tuning

Crowdsourced computing (CPU + GPU)

Argument: The Airbnb/Uber crowdsourced-resource model could extend to compute, aggregating underutilized hardware into marketplaces. Problems this could solve: 1) cheaper compute for use cases that can tolerate some downtime/latency; 2) censorship-resistant compute for training models that may be regulated or censored in the future.
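The marketplace mechanics described above reduce to a matching problem: pick the cheapest crowdsourced provider that meets a job's downtime tolerance. A minimal sketch, assuming hypothetical provider names, fields, and thresholds (none drawn from any real project):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Offer:
    provider: str             # hypothetical provider ID
    price_per_gpu_hr: float   # USD per GPU-hour
    uptime: float             # fraction of time the node is reachable

def match_job(offers: list[Offer], min_uptime: float) -> Optional[Offer]:
    """Pick the cheapest offer that meets the job's downtime tolerance."""
    eligible = [o for o in offers if o.uptime >= min_uptime]
    return min(eligible, key=lambda o: o.price_per_gpu_hr) if eligible else None

offers = [
    Offer("home-rig-1", price_per_gpu_hr=0.40, uptime=0.90),
    Offer("home-rig-2", price_per_gpu_hr=0.25, uptime=0.70),
    Offer("datacenter", price_per_gpu_hr=1.10, uptime=0.999),
]

# A fault-tolerant training job can accept cheap, flaky consumer GPUs...
batch_job = match_job(offers, min_uptime=0.85)
# ...while a latency-sensitive job pays up for reliability.
serving_job = match_job(offers, min_uptime=0.99)
```

The interesting design space is exactly this trade-off: jobs that tolerate downtime get cheaper compute, which is point 1) of the argument.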

Objection: Crowdsourced computing cannot achieve economies of scale, and most high-performance GPUs are not owned by consumers. "Decentralized computing" is an oxymoron; it is effectively the opposite of high-performance computing. Ask any infrastructure/ML engineer!

Example projects: Akash, Render, io.net, Ritual, Hyperbolic, Gensyn

Decentralized inference

Run inference on open-source models in a decentralized manner

Argument: Open-source (OS) models are in some ways approaching closed-source performance ( 1 ) and gaining adoption. To run inference on open-source models, most people use centralized services such as HuggingFace or Replicate, which introduces privacy and censorship issues. One solution is to run inference through a decentralized or distributed provider.

Objection: There is no need to decentralize inference, because local inference will win. Dedicated chips that can handle inference on 7B+ parameter models are now available. Edge computing is our answer to privacy and censorship.
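One way to reconcile the two views is a client that prefers local inference and falls back to remote peers only when no local accelerator is available. The sketch below uses stand-in functions for the backends; any real endpoint (an Ollama server, a P2P network) is an assumption, not something the article specifies:

```python
from typing import Callable

# Each backend maps prompt -> completion; raising means "unavailable".
Backend = Callable[[str], str]

def local_backend(prompt: str) -> str:
    # Stand-in for on-device inference (e.g. a 7B model on a dedicated chip).
    raise RuntimeError("no local accelerator available")

def peer_backend(prompt: str) -> str:
    # Stand-in for a decentralized provider serving an open-source model.
    return f"[peer completion for: {prompt}]"

def infer(prompt: str, backends: list[Backend]) -> str:
    """Try backends in preference order; first success wins."""
    errors = []
    for backend in backends:
        try:
            return backend(prompt)
        except Exception as exc:
            errors.append(exc)
    raise RuntimeError(f"all backends failed: {errors}")

# Prefer edge (privacy), fall back to the decentralized network (availability).
answer = infer("What is a Merkle tree?", [local_backend, peer_backend])
```

Under this framing, edge computing and decentralized inference are complements rather than competitors.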

Example projects: Ritual, gpt4all (hosted), Ollama (Web2), Edgellama (Web3, P2P Ollama), Petals

On-chain AI agents

On-chain applications using machine learning

Argument: AI agents (applications that use AI under the hood) need a coordination layer to transact. It may make sense for agents to pay with cryptocurrency, since crypto is natively digital and agents obviously cannot pass KYC to open a bank account. Decentralized AI agents also carry no platform risk. For example, OpenAI just randomly decided to change their ChatGPT plugin architecture, which broke my Talk2Books plugin without notice. True story. Agents built on-chain do not face the same platform risk.
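The payment argument can be made concrete with a toy ledger: an agent holds a balance and pays a service per call, with no account-opening or KYC step anywhere in the flow. This is a minimal in-memory sketch with made-up wallet names, not a real chain or any project's API:

```python
class Ledger:
    """Toy in-memory ledger standing in for an on-chain token balance."""
    def __init__(self):
        self.balances = {}

    def fund(self, account: str, amount: int):
        self.balances[account] = self.balances.get(account, 0) + amount

    def transfer(self, sender: str, receiver: str, amount: int):
        if self.balances.get(sender, 0) < amount:
            raise ValueError("insufficient funds")
        self.balances[sender] -= amount
        self.balances[receiver] = self.balances.get(receiver, 0) + amount

def agent_call_api(ledger: Ledger, agent: str, service: str, fee: int) -> str:
    # The agent pays per call; no bank account or KYC required.
    ledger.transfer(agent, service, fee)
    return "api-result"

ledger = Ledger()
ledger.fund("agent-wallet", 100)   # hypothetical wallet identifiers
agent_call_api(ledger, "agent-wallet", "search-api", fee=3)
```

On a real chain the `transfer` would be a token transaction, and the "no platform risk" claim rests on that settlement layer being neutral.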

Objection: Agents are not production-ready... at all. BabyAGI, AutoGPT, etc. are toys! Besides, for payments, whoever builds the AI agent can just use the Stripe API; there is no need for crypto payments. As for the platform-risk argument, that is a well-worn crypto use case we have yet to see play out... why is this time different?

Example projects: AI Arena, MyShell, Operator.io, Fetch.ai

Data and model sources

Self-custody your data and machine-learning models to capture the value they generate

Argument: Data should be owned by the users who generate it, not the companies that collect it. Data is the most valuable resource of the digital age, yet it is monopolized by big tech and poorly financialized. The hyper-personalized web is coming, and it requires portable data and models. We will move our data and models from application to application across the Internet, just as we move crypto wallets from dapp to dapp. Data provenance, and especially deepfakes, is a huge problem; even Biden acknowledges it. Blockchain architectures may well be the best solution to the provenance challenge.
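A minimal version of the provenance idea: the creator commits to a hash of the content and signs it, so anyone can later check who originated it and whether it was altered. The sketch below uses an HMAC as a stand-in for the public-key signature and on-chain anchoring a real system would use:

```python
import hashlib
import hmac

def commit(content: bytes) -> str:
    """Content hash that could be anchored on-chain as a provenance record."""
    return hashlib.sha256(content).hexdigest()

def sign(content: bytes, secret: bytes) -> str:
    # Stand-in for a wallet signature over the content hash.
    return hmac.new(secret, commit(content).encode(), hashlib.sha256).hexdigest()

def verify(content: bytes, signature: str, secret: bytes) -> bool:
    return hmac.compare_digest(sign(content, secret), signature)

creator_key = b"creator-secret"   # hypothetical key material
photo = b"original photo bytes"
sig = sign(photo, creator_key)
```

Verification then distinguishes the original from tampered content: `verify(photo, sig, creator_key)` succeeds, while the same check against altered bytes fails, which is the property a deepfake-provenance scheme needs.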

Objection: No one cares about owning their data or their privacy. We see this again and again in revealed user preferences; just look at sign-ups on Facebook/Instagram! In the end, people will simply trust OpenAI with their data. Let's be realists.

Example projects: Vana, Rainfall

Token-incentivized applications (e.g. companion applications)

Think Character.ai with crypto token rewards

Argument: Crypto token incentives are very effective at bootstrapping networks and behavior, and we will see AI-centric applications exploit this mechanism. One compelling market may be AI companions, which we believe will be a multi-trillion-dollar AI-native market. In 2022, the US spent $130B+ on pets; AI companions are pets 2.0. We have already seen AI companion applications reach product-market fit (PMF), with Character.ai's average session time exceeding one hour. We wouldn't be surprised to see crypto-incentivized platforms take market share here and in other AI application verticals.

Objection: This is just an extension of crypto's speculative mania and will not generate lasting usage. Tokens are Web3's CAC (customer acquisition cost). Haven't we learned our lesson from Axie Infinity?

Example projects: MyShell, Deva

Token-incentivized MLOps (e.g. training, RLHF, inference)

Think ScaleAI with crypto token rewards

Argument: Crypto incentives can be used throughout the machine-learning workflow to incentivize actions such as optimizing weights, fine-tuning, and RLHF, where humans judge a model's outputs for further fine-tuning.
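One way to sketch token-incentivized RLHF: labelers stake tokens, preference votes between two completions are stake-weighted, and labelers who agree with the majority are rewarded. Every mechanism detail below is hypothetical, chosen only to make the incentive loop concrete:

```python
from collections import defaultdict

def aggregate_preferences(votes: dict[str, str], stakes: dict[str, int]):
    """votes: labeler -> preferred completion ('A' or 'B').
    Returns the stake-weighted winner and per-labeler rewards."""
    tally = defaultdict(int)
    for labeler, choice in votes.items():
        tally[choice] += stakes[labeler]
    winner = max(tally, key=tally.get)
    # Reward labelers who agreed with the stake-weighted majority.
    rewards = {name: 1 for name, choice in votes.items() if choice == winner}
    return winner, rewards

votes = {"alice": "A", "bob": "A", "carol": "B"}
stakes = {"alice": 10, "bob": 5, "carol": 20}
winner, rewards = aggregate_preferences(votes, stakes)
```

Note that this example also illustrates the objection below: carol's large stake outweighs two dissenting labelers, which is exactly the quality risk of coordinating labeling with tokens.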

Objection: MLOps is a bad use case for crypto rewards because quality matters too much. Crypto tokens are good at incentivizing consumer behavior where some entropy is acceptable, but bad at coordinating behavior where quality and accuracy are critical.

Example projects: BitTensor, Ritual

On-chain verifiability (ZKML)

Prove which model was run, verifiably and efficiently on-chain, and plug it into the crypto world

Argument: On-chain model verifiability unlocks composability, meaning outputs can be leveraged across DeFi and crypto. Five years from now, when agents run physician models for us instead of our going to the doctor, we will need some way to validate their knowledge and exactly which model was used in a diagnosis. Model verifiability is akin to intellectual reputation.
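Stripped of the ZK machinery, the verifiability claim reduces to a commitment check: the prover publishes a commitment to the model's weights, and a consumer verifies that an output was attested against exactly that commitment. The sketch below uses a plain hash commitment and a toy linear "model"; it illustrates the interface only and omits the zero-knowledge proof that real ZKML systems provide:

```python
import hashlib
import json

def model_commitment(weights: list[float]) -> str:
    """Public commitment to a model's weights (what a ZK proof would bind to)."""
    return hashlib.sha256(json.dumps(weights).encode()).hexdigest()

def run_model(weights: list[float], x: float) -> float:
    # Toy polynomial "model" standing in for a real network.
    return sum(w * x**i for i, w in enumerate(weights))

def attested_inference(weights: list[float], x: float):
    # Prover returns the output plus the commitment of the weights actually used.
    return run_model(weights, x), model_commitment(weights)

physician_v1 = [0.5, 2.0]                  # hypothetical published model
expected = model_commitment(physician_v1)  # e.g. registered on-chain

output, attestation = attested_inference(physician_v1, x=3.0)
# A verifier checks the diagnosis came from the advertised model version.
assert attestation == expected
```

In real ZKML the hash comparison is replaced by verifying a succinct proof, so the prover cannot lie about which weights produced the output.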

Objection: No one needs to verify which model is being run; this is the least of our concerns, and we are putting the cart before the horse. No one runs Llama 2 and worries that a different model is secretly running in the background. This is a solution in search of a problem, a consequence of zero-knowledge (ZK) cryptography getting too much hype and venture funding.

Example projects: Modulus Labs, UpShot, EZKL
