Recently, a16z proposed the concept of "Staked Media," which is quite interesting. Considering that social media is now filled with AI accounts, fake news can look just as real, and ordinary users simply don't have the time or energy to distinguish between truth and falsehood.
Staked media is not a pipe dream; it may well arrive within the next two years.
So what does staked media actually mean?
In simple terms, cryptographic technology such as zero-knowledge (zk) proofs lets a media outlet or individual prove its credibility, similar to "signing a written agreement" online. The agreement is recorded on the blockchain and cannot be tampered with. Signing alone isn't enough, though: collateral such as ETH, USDC, or other crypto assets must be staked to back the authenticity of the published content. If the information is proven fake, the staked assets are forfeited. This creates an environment that rewards telling the truth.
AI-generated articles and videos are everywhere, and fake news is rampant. Staked media is meant to make content creators more careful about what they publish. For example, a YouTuber praising a product might stake some ETH or USDC on Ethereum; if the video turns out to be fake, the money is lost, and viewers can feel reassured. Or imagine you're a blogger recommending a phone: you stake $100 worth of ETH and declare, "If the phone's beauty mode doesn't achieve the claimed effect, I'll compensate you." Viewers see the stake and treat you as reliable; if the video is AI-generated, the $100 is forfeited.
How would the staking game actually work? Here is one way to imagine it.
Whether you're a major KOL or media outlet or a minor influencer, publishing an article would require "signing" it on a blockchain such as Ethereum (a signature for verification is enough) and simultaneously depositing a certain amount of tokens (such as ETH or USDT) into a dedicated smart contract. If the content is false, the money is confiscated (given to the victims or burned). If the content is legitimate, the stake is returned after a lock-up period, possibly with a reward (for example, tokens issued by the staked media platform itself, or funds confiscated from others' false content).
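The publish, stake, and resolve flow above can be sketched as a toy escrow. This is a minimal illustration in Python, not an actual on-chain contract; the names (StakeEscrow, publish, resolve) and the payout logic are assumptions made for illustration.

```python
from dataclasses import dataclass

@dataclass
class StakedPost:
    author: str
    stake: float          # amount locked, e.g. in ETH or USDT
    resolved: bool = False

class StakeEscrow:
    """Toy escrow modeling the publish/resolve lifecycle described above."""

    def __init__(self):
        self.posts = {}      # post_id -> StakedPost
        self.balances = {}   # author -> withdrawable funds

    def publish(self, post_id: str, author: str, stake: float) -> None:
        # Author signs the content and locks tokens at publication time.
        self.posts[post_id] = StakedPost(author, stake)

    def resolve(self, post_id: str, is_fake: bool, reward: float = 0.0) -> float:
        post = self.posts[post_id]
        post.resolved = True
        if is_fake:
            return 0.0                   # stake forfeited (paid to victims or burned)
        payout = post.stake + reward     # stake returned, possibly with a reward
        self.balances[post.author] = self.balances.get(post.author, 0.0) + payout
        return payout
```

A real deployment would need signature verification, a lock-up period, and a dispute window before `resolve` can be called; those are omitted here to keep the lifecycle visible.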
How much to stake depends on the platform's rules. Major outlets and KOLs publishing important news would stake more, say hundreds or thousands of dollars or beyond; smaller influencers posting everyday content might only need to stake tens of dollars. The amount can also be linked to the content's reach via a dynamic algorithm: the greater the influence, the larger the required stake.
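A reach-linked stake schedule could look like the following sketch. The log-scale formula and all parameters are invented for illustration; the article only says the amount should grow with influence.

```python
import math

def required_stake(base_stake: float, followers: int) -> float:
    """Required stake grows with audience size.

    The log scale is an assumed design choice: it makes big accounts
    stake meaningfully more without the amount becoming ruinous.
    """
    influence = 1 + math.log10(max(followers, 1))
    return round(base_stake * influence, 2)
```

Under this schedule, with a $10 base, an account with 1,000 followers would stake $40 and a million-follower account $70; a platform could of course pick a steeper curve.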
For media outlets, staking does add a financial cost, but it buys audience trust, and trust is exactly what is scarce in the age of fake news.
But how is authenticity determined? Through a combination of community and algorithm. On the community side, users with voting rights (who must themselves stake crypto assets) vote on-chain; if some threshold, say 60% or more, declares a piece fake, it is ruled fake. An algorithm additionally analyzes the data to assist verification. If the content creator disagrees, they can initiate arbitration, which is handled by an expert committee. Voters caught manipulating maliciously have their funds confiscated. Voting and serving on the expert committee are rewarded, with rewards drawn from confiscated funds and the platform's own tokens.
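The 60% community threshold could be tallied as follows. Stake-weighting the votes (rather than one account, one vote) is my assumption here, since the article only says voters must stake assets.

```python
from __future__ import annotations

def community_verdict(votes: dict[str, tuple[bool, float]],
                      fake_threshold: float = 0.6):
    """votes maps voter -> (says_fake, staked_weight).

    Returns True if the stake-weighted 'fake' share reaches the
    threshold, False otherwise, and None when there are no votes.
    """
    total = sum(weight for _, weight in votes.values())
    if total == 0:
        return None
    fake_weight = sum(weight for says_fake, weight in votes.values() if says_fake)
    return fake_weight / total >= fake_threshold
```

The arbitration path in the text would sit on top of this: a verdict here is only provisional until the dispute window closes or the expert committee overrides it.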
In addition, content creators can use zk technology to generate a proof of authenticity from the outset, for example producing a zk proof of provenance alongside a video.
What if the rich cheat? A wealthy actor could stake a large sum and still publish fake news; as long as the payoff is big enough, they might do it.
This is where history and reputation come in, not just the staked funds. Accounts with a record of penalties and confiscations get tagged, and their required stake for future content goes up. After three or four confiscations, people will simply stop trusting the account's content, and there may also be legal consequences. So faking information carries real costs: not only the lost funds, but also the trust built up over time, the on-chain track record, reputation, and actual legal liability.
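The reputation escalation described above can be sketched as a simple multiplier. The 50% step per confiscation and the three-strike flag are assumed parameters, not figures from the article.

```python
def account_trust(base_stake: float, confiscations: int):
    """Returns (required_stake, flagged).

    Each past confiscation raises the required stake by 50%, and
    accounts with three or more confiscations are flagged as
    low-trust. Both numbers are illustrative assumptions.
    """
    required = base_stake * (1 + 0.5 * confiscations)
    flagged = confiscations >= 3
    return required, flagged
```

The point is that the penalty compounds: a repeat offender both pays more per post and loses the audience signal that made posting worthwhile.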
Perhaps a staked media project is already underway somewhere.
This is a completely imagined demand. Who pays for it, and who consumes it? I just don't get it.
I've mentioned this idea before, and it seems quite feasible.

陈剑Jason
@jason_chen998
12-08
This "verify the truth" protocol that Swarm is building is something I was also mulling over a while ago. These days, from major media to self-media, fake news keeps pouring out; a single story can flip, flip, and flip again. At its root, fabrication costs too little while paying too much. If every role that spreads information is a node, why not use the economic game mechanics of blockchain PoS to solve this? For example, before publishing an opinion, each node would need to x.com/GetSwarmed/sta…
No one needs to pay.
This is indeed quite interesting. Many social style prediction markets we've discussed before have also had similar designs, with the principle of "No Bet, No BB!"
Exactly. No Bet, No BB!
The core issue should be the evaluation and incentive of the value of information.
I made dumplings just for a dish of vinegar, but I couldn't find anyone to eat them with. 😂
Remember Bihu?
Remember Augur? Now everyone only knows Polymarket. The climate has changed, the soil has changed, and the plants that can grow are different now.
The staked media system has too many loopholes; economic incentives can't solve a trust crisis.
The profits from fabrication can far outweigh the cost of the stake.
The poor have no right to speak.
High-quality content cannot be disseminated.
Is the truth chosen by the majority vote truly the truth? What if there are vested interests involved?
You can't solve problem A by introducing problem B. The answer to AI-generated fabrication and information overload lies in upgrading the AI algorithms themselves. Musk's recent interview about Twitter's algorithm upgrade made some points that come closer to the truth and to what users actually need.
Nothing here says you can't post without staking. It targets so-called trusted entities, like verified accounts. There are many ways to suppress the vested interests involved; nothing works 100%, but the problem can be greatly curbed.
These are all minor details, not the core issue.
For me there's only one real question: can the majority tolerate fake news? If most people are indifferent to it, this is a false need. If some segment of the population needs a clean information environment, that's a genuine need. As long as the need exists, everything else is detail; if not, that's a different story.
There is demand, but demand alone doesn't support a product with sound business logic. Fake news and information overload have always existed; if the business logic worked, staked media outlets would already exist. And even if a staked media platform existed, with no one publishing on it there would be no readers either. Under this new set of rules, why would anyone choose to publish there? It's a bit of a chicken-and-egg problem.
Let's wait and see, don't rush to conclusions.
The next Story Protocol is already on its way – an upgraded version of "owing money for reading and writing".
I hope the next one won't be in debt.
I suspect he's been liquidated and forced to pay off his debts.
I'm happy to try this model; it feels great.
Useful and reasonable
It has also drawn plenty of ridicule and controversy.
One of the inherent characteristics of crypto is that even without real consumer spending, a protocol or product can be assetized by issuing a token. All that's needed is a self-consistent demand logic; real-world payment isn't required. Reputation mechanisms like ETHOS, which appear to have no consumer spending at all, can still generate revenue simply by issuing a token. That is precisely the core function of cryptocurrencies.
Experienced investors no longer believe in narratives; they just want to make a quick buck when someone does.
The main concern is malicious behavior. Even UMA on Polymarket misfires frequently, and malicious actions by untraceable on-chain users would cost even less.
From Twitter
Disclaimer: The content above is only the author's opinion which does not represent any position of Followin, and is not intended as, and shall not be understood or construed as, investment advice from Followin.