On Christmas Eve 2025, controversy raged across Silicon Valley, with fierce clashes between left and right playing out on social media. At the height of this information bubble, Ethereum co-founder Vitalik Buterin offered a rare endorsement of Elon Musk's AI chatbot Grok, arguing that even with its frequent model errors, it still injects a rare "honesty factor" into the X platform.
Political polarization raises the threshold for dialogue
Amid the chaos brought about by the Trump administration, the X platform (formerly Twitter) has recently seen a surge in conspiracy theories and emotionally charged posts. The resulting echo chambers have deepened, amplifying the influence of AI tools on public discourse. Vitalik points out that many models deliberately soften their responses to avoid controversy, which in turn reinforces users' existing prejudices. Grok, by contrast, frequently refuses questions asked with extreme intent, forcing users seeking psychological validation to confront opposing viewpoints.
Vitalik Buterin proposed a "net improvement" framework for weighing a tool's overall benefits and harms to the information ecosystem. He emphasized that Grok's value lies not in whether its answers are entirely correct, but in its willingness to confront bias head-on. He stated publicly:
"While Grok has its issues, it has a positive impact on the overall information ecosystem of the X platform. It is the second improvement, after 'Community Notes,' that significantly enhances the accessibility of information to truth."
These remarks immediately sparked heated debate within the tech community. Supporters saw the endorsement as an opportunity to bridge the algorithmic divide; critics cautioned that biased outputs could still perpetuate erroneous narratives.
Hallucinations and "home-field advantage" remain hidden concerns
In November, Grok presented a fabricated video of a Bondi Beach shooting as breaking news because the model leaned too heavily on X posts, causing its fact-checking to fail. User testing also revealed that the model occasionally slipped into a cult of personality around Musk, even claiming he is physically superior to the average person and comparable to Jesus, exposing weaknesses in its defenses against adversarial prompts. Kyle Okamoto, CTO of the decentralized cloud platform Aethir, warned that if the most powerful models are controlled entirely by a single company, "biases will become institutionalized knowledge."
Vitalik did not endorse centralized architectures; rather, he pointed out that Grok's "disorder" has, at this stage, produced an unexpected decentralizing effect: it follows no single political script and does not deliberately tiptoe around sensitive issues, making it harder for users to sink into self-reinforcing cycles of affirmation. With information warfare set to intensify in 2026, whether Grok becomes a genuine force for breaking echo chambers or merely amplifies the noise has become X's biggest gamble. For industries seeking objective AI, the tug-of-war between "imperfect honesty" and "safe harmlessness" has only just gone into overtime.