TAO, with a market value of US$20 billion, has come under heavy criticism. Is it really just a token-printing scheme that deserves to be "reset to zero"?

Eric Wall, a crypto analyst known as the "Altcoin Killer" for his often sharp and critical views, has published a harsh takedown of the well-known AI coin project Bittensor (TAO) on X, all but declaring the project worthless. The post has drawn more than 1.8 million views and over 4,000 likes and bookmarks.


The following content was compiled by Followin:

The sheer uselessness of the $TAO token is genuinely astonishing. This "AI coin" now has a market cap of US$10 billion.

So, what is TAO good for?

TAO works by running a bunch of different subnets. So what is a subnet?

Simply put, it's a text-prompting tool, and it's very easy to understand.

Next, let's go through how it actually works. You can check the official documentation ( https://github.com/opentensor/prompting ).

Miners on the subnet run two different LLMs: Zephyr and a wiki agent.

But for the most part, they're just running the base Zephyr model (now being switched to the Solar model).

The whole principle is very simple:

You send a prompt, and an LLM run by the miners replies with an answer, just like ChatGPT.

Miners are rewarded with TAO tokens for their answers. This is how new TAO tokens are created.
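To make the flow concrete, here is a minimal sketch of that loop in Python. Every name here is a hypothetical illustration, not the actual Bittensor API:

```python
# Minimal sketch of the miner loop described above. All names are
# hypothetical illustrations, not the real Bittensor/opentensor API.

def run_local_llm(prompt: str) -> str:
    """Stand-in for a real Zephyr/Solar inference call."""
    return "Water is a compound with the molecular formula H2O."

def miner_step(prompt: str) -> str:
    # Every miner on the subnet performs the same step for the same prompt:
    # run the base model, send the completion back, and hope it earns TAO.
    return run_local_llm(prompt)

print(miner_step("What is water?"))
```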

But it also means that for every prompt, roughly a thousand miners are redundantly completing the exact same task.

The network then validates these answers by checking how similar they are to one another. A miner whose answer is an outlier receives no TAO tokens.

Here's an example. Suppose you send the prompt: "What is water?"

The miners will all answer "Water is a compound with the molecular formula H2O", and they are all incentivized to run the same LLM, because if you are the only one giving an anomalous response, you get penalized.

The result is that a thousand different miners repeat the same answer a thousand times in parallel.

There is no AI magic that can verify a model was actually run. Nothing stops miners from copying each other's responses and tweaking them to fake the work.
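As a rough illustration of why, consider this sketch, which uses Python's standard difflib as a stand-in for whatever similarity measure the subnet actually applies. A copied-and-tweaked answer scores essentially the same as honest work:

```python
# Sketch of the copy-and-tweak attack: a lazy miner never runs an LLM,
# it just copies an honest miner's answer and perturbs it slightly.
# difflib is only a stand-in for the subnet's real similarity measure.
from difflib import SequenceMatcher

honest_answer = "Water is a compound with the molecular formula H2O."
copied_answer = honest_answer.replace("a compound", "the compound")

similarity = SequenceMatcher(None, honest_answer, copied_answer).ratio()
print(f"similarity = {similarity:.2f}")  # ~0.97: passes a similarity check
```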

The validation mechanism itself is very basic:

In the current version, the validator generates one or more reference answers, and every miner's response is compared against them. The responses most similar to the reference answers receive the highest rewards, and therefore the greatest incentives.
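A minimal sketch of that scoring step, assuming a crude text-similarity measure (the subnet's real reward logic is more elaborate, and all names here are illustrative):

```python
# Minimal sketch of the reference-answer comparison described above.
# difflib's ratio is a crude stand-in for the subnet's actual reward model.
from difflib import SequenceMatcher

def similarity(reference: str, answer: str) -> float:
    # 0.0 = no overlap, 1.0 = identical.
    return SequenceMatcher(None, reference, answer).ratio()

def allocate_rewards(reference: str, answers: dict) -> dict:
    # Miners closest to the validator's reference earn the most;
    # outliers (below a hypothetical threshold) earn nothing.
    scores = {m: similarity(reference, a) for m, a in answers.items()}
    return {m: (s if s >= 0.5 else 0.0) for m, s in scores.items()}

reference = "Water is a compound with the molecular formula H2O."
answers = {
    "miner_1": "Water is a compound with the molecular formula H2O.",
    "miner_2": "Water is the chemical compound H2O.",
    "miner_3": "Honestly, no idea.",  # the outlier gets penalized
}
print(allocate_rewards(reference, answers))
```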

Setting aside how easy this is to cheat, the system is incredibly inefficient. For every prompt, 1,000 miners do the same job. And this is "decentralized intelligence"?

Look, guys. Just run one miner in Tanzania. Send it a prompt. If it gets shut down or is found to be outputting bad data, move to another miner somewhere else. You don't need 1,000 redundant LLMs running these basic language models in parallel when you can't even stop them from copying and tweaking each other's answers to fake their work.

And what is the point of running these models "decentralized"? Zephyr, Solar, and the wiki agent have the same kind of content filters as ChatGPT; Zephyr was even trained on the output of ChatGPT conversations. So you have 1,000 miners giving you the same underlying answer, 1,000x less efficiently than a single centralized miner, and you still can't verify that 1,000 separate answers were actually generated, because the only thing you're doing is a similarity check.

Now, here's the thing about this piece of trash wearing a crown: you can't even send prompts to this network as a normal user.

Don't believe me? Try actually connecting to the network as a user and getting those Zephyr-generated replies from 1,000 miners.

You'll find that you can't.

Everything in this subnet happens internally: the validator generates the challenge prompts, and 1,000 miners generate the same basic LLM responses and earn TAO tokens for free.

Those TAO tokens are then sold at a US$10 billion FDV to retail idiots trying to get exposure to "decentralized AI" by buying this shitty AI memecoin.

If you asked a high school student to design an AI coin project, he would almost certainly come up with Bittensor (TAO): "Well, I guess I just need 1,000 miners to generate answers to the prompts, so it's, like, uh, decentralized?"

"Okay, so how do you check the answer? How do you verify it?"

“Uh, maybe the network could check? Just check whether the answers are similar to each other, or whether they’re shit?”

This is a pointless decentralization exercise that does something only vaguely resembling "decentralized AI". That's certainly a cool meme, but it gives you no actual guarantees and has no practical utility beyond a 1,000x-less-efficient ChatGPT bot that can only answer its own questions, existing so the project has an excuse to print tokens and dump them on retail investors.

So, fuck it back to zero.

Wall's criticism of TAO attracted a lot of attention and discussion, including from Bittensor founder @const_reborn, who pushed back on the claim that miners are simply scored against shared reference answers. Quoting Wall's line:

“The validator generates one or more reference answers, against which all miners’ responses are compared.”

Const replied that, importantly, the validator first retrieves source documents to construct these reference answers. Miners do not have that additional context and must use RAG (retrieval-augmented generation) to arrive at the correct answer.

This creates an incentive system that rewards the best retrieval designs. The redundancy exists to measure this accurately, and inference on the network does not execute redundant queries.

For those who understand machine learning, the design is quite profound: an incentive system isomorphic to an autoencoder, or a one-way function.

Note that when the validator constructs a reference answer, it has the actual reference at hand; that is, it is looking at the specific document the question refers to.

Miners, on the other hand, must start from the question alone, figure out which data to retrieve, and then compose the answer. This verification asymmetry forces miners to get better and better at smart retrieval, and the speed component of the reward forces them to run fast infrastructure to do it.
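A toy sketch of that asymmetry, with naive keyword matching standing in for real RAG (the corpus and function names here are invented for illustration):

```python
# Toy sketch of the verification asymmetry Const describes. The validator
# holds the source document; the miner sees only the question and must
# retrieve the right document first. Keyword overlap stands in for real RAG.
import re

CORPUS = {
    "water.txt": "Water is a compound with the molecular formula H2O.",
    "gold.txt": "Gold is a chemical element with the symbol Au.",
}

def tokens(text: str) -> set:
    return set(re.findall(r"\w+", text.lower()))

def validator_reference(doc_id: str) -> str:
    # Validator: reference document in hand, no retrieval needed.
    return CORPUS[doc_id]

def miner_answer(question: str) -> str:
    # Miner: must first work out which document the question refers to.
    q = tokens(question)
    return max(CORPUS.values(), key=lambda doc: len(q & tokens(doc)))

print(validator_reference("water.txt"))
print(miner_answer("What is water?"))
```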

Source
Disclaimer: The content above is only the author's opinion which does not represent any position of Followin, and is not intended as, and shall not be understood or construed as, investment advice from Followin.