There was a funny story about an AI programming project with a very impressive demo that raised a lot of money, only to be exposed as having outsourced the coding to India using outdated methods. This is quite representative: many so-called AI projects leave you wondering whether the AI is doing any work at all. Especially in crypto, everyone talks about AI automatically generating strategies, but none of it can be verified on-chain; it essentially comes down to "trust me."

Recently I saw a project trying to solve this problem: Flap's AI Oracle. It writes the AI prompt into a smart contract; when the contract is triggered, it calls an LLM (large language model) and sends the result back for execution. This way the AI logic is actually executed as part of the on-chain process, and a commit-reveal mechanism (a standard technique for decentralized verification built on cryptographic hash functions, which I won't elaborate on here) proves that the AI logic was really executed.

Compared with previous solutions, the advantages are: first, putting the AI prompt at the smart contract level makes it more decentralized and blockchain-native, suitable for projects with high decentralization requirements; second, the reasoning process is visible, and the results can be verified on-chain.

In short, this kind of thing seems genuinely useful in certain scenarios, such as prediction markets or any mechanism that requires fairness; at the very least you can prove the decision wasn't made arbitrarily by a human. From that perspective, I think AI oracles may become a fairly necessary tool and piece of infrastructure.
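To make the commit-reveal idea concrete, here is a minimal sketch in Python of the general pattern (not Flap's actual implementation, whose details aren't public in this post): the prover first publishes only a hash of the result plus a random nonce (the commit), then later reveals both, so anyone can recompute the hash and confirm the result existed before the reveal and wasn't changed afterward.

```python
import hashlib
import secrets

def commit(result: str, nonce: bytes) -> str:
    # Commit phase: publish only the hash of (nonce || result).
    # The nonce hides the result from brute-force guessing.
    return hashlib.sha256(nonce + result.encode()).hexdigest()

def verify(commitment: str, result: str, nonce: bytes) -> bool:
    # Reveal phase: anyone can recompute the hash and check
    # that it matches the earlier on-chain commitment.
    return hashlib.sha256(nonce + result.encode()).hexdigest() == commitment

# Example: an oracle commits to the LLM's answer, then reveals it.
nonce = secrets.token_bytes(32)
c = commit("YES", nonce)

print(verify(c, "YES", nonce))  # the honest reveal checks out
print(verify(c, "NO", nonce))   # a swapped result fails verification
```

On-chain, the same check would typically use `keccak256` inside the contract, but the principle is identical: the hash binds the oracle to its answer before anyone can see it.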
From Twitter
Disclaimer: The content above is only the author's opinion which does not represent any position of Followin, and is not intended as, and shall not be understood or construed as, investment advice from Followin.