
Larry Ellison: Models such as ChatGPT, Gemini, Grok, and Llama are mostly trained on the same publicly available internet data. When everyone is training with the same information, models inevitably become homogenized. The real moat isn't the model itself, but the proprietary data behind it. Companies that can train using exclusive datasets gain a core advantage that competitors simply cannot replicate.

柴郡
@Crypto+AI Plus
03-10
The last explosion of the TON ecosystem was driven by Telegram's staggering monthly active traffic. This time, @trdEverything is entering directly through TG mini-apps, which is exactly the right approach. The project is incubated and lead-invested by @Humanityprot, with backing from top-tier institutions such as @animocabrands. It is essentially a one-stop Web3 super financial app benchmarked against Robinhood (covering x.com/trdEverything/…
From Twitter
Disclaimer: The content above is only the author's opinion which does not represent any position of Followin, and is not intended as, and shall not be understood or construed as, investment advice from Followin.