Author: Mario Gabriele
Translator: Block unicorn

The Holy War of Artificial Intelligence
"I would rather live my life as if there is a God and die to find out there isn't, than live my life as if there isn't and die to find out there is." - Blaise Pascal
Religion is an interesting thing, perhaps because it is completely unprovable in any direction - or perhaps because, as my favorite quote goes, "you can't use facts to argue against feelings."
The hallmark of religious belief is that as it takes hold, it accelerates at an incredible speed, until doubting the existence of God becomes almost impossible. When more and more people around you believe, how can you question a sacred presence? When the world rearranges itself around a doctrine, where is the foothold for heresy? When temples and cathedrals, laws and norms are all arranged according to a new, unshakable gospel, what room is left for opposition?
When the Abrahamic religions first appeared and spread across the continents, or when Buddhism spread from India across Asia, the tremendous momentum of belief created a self-reinforcing cycle. As more people converted and built complex theological systems and rituals around these beliefs, it became increasingly difficult to question these basic premises. In a sea of credulity, it was not easy to become a heretic. The grand cathedrals, the complex religious texts, and the thriving monasteries all served as physical evidence of the sacred existence.
But history also tells us how easily such structures can collapse. As Christianity spread to the Scandinavian peninsula, the ancient Nordic beliefs crumbled in just a few generations. The religious system of ancient Egypt lasted for thousands of years, only to disappear when new, more enduring beliefs arose and a greater power structure emerged. Even within the same religion, we have seen dramatic schisms - the Reformation tore apart Western Christianity, and the Great Schism led to the split between the Eastern and Western churches. These schisms often began with seemingly trivial doctrinal differences, only to evolve into completely different belief systems.
The Scriptures
"God is a metaphor for that which transcends all levels of intellectual thought. It's as simple as that." - Joseph Campbell
Believing in God is simply religion. Perhaps creating God is no different.
Since the field's inception, optimistic AI researchers have imagined their work as a form of creationism - the creation of God. In the past few years, the explosive development of large language models (LLMs) has only strengthened the believers' conviction that we are on a sacred path.
It has also vindicated a blog post written in 2019. Although until recently it was little known outside the AI community, Canadian computer scientist Richard Sutton's "The Bitter Lesson" has become an increasingly important text within it, evolving from obscure lore into a new, all-encompassing religious foundation.
In 1,113 words (every religion needs a sacred number), Sutton summarizes a technical observation: "The biggest lesson that can be read from 70 years of AI research is that general methods that leverage computation are ultimately the most effective, and by a large margin." The progress of AI models has benefited from the exponential increase in computational resources, riding the great wave of Moore's Law. At the same time, Sutton points out, much of AI research has focused on optimizing performance through specialized techniques - encoding human knowledge or building narrow tools. While these optimizations may help in the short term, in Sutton's view they are ultimately a waste of time and resources, akin to fiddling with your fins or trying a new wax on your surfboard as a giant wave rolls in.
This is the foundation of what we call the "Bitter Religion". It has only one commandment, commonly referred to in the community as the "Scaling Law": Exponential growth in computation drives performance; everything else is foolishness.
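For concreteness, the commandment is usually written down as an empirical power law. A minimal sketch, assuming the Chinchilla-style parameterization reported by Hoffmann et al. (2022); the exact constants and exponents vary from study to study:

```latex
% A common empirical form of the scaling law (Chinchilla-style), where
% L is loss, N is parameter count, D is training tokens, E is the
% irreducible loss, and A, B, \alpha, \beta are fitted constants.
L(N, D) = E + \frac{A}{N^{\alpha}} + \frac{B}{D^{\beta}}
```

Grow N and D (and therefore compute) and the loss falls predictably; nothing in the formula rewards cleverness.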
The Bitter Religion has expanded from large language models (LLMs) to world models, and is now spreading rapidly through the as-yet-unconverted temples of biology, chemistry, and embodied intelligence (robotics and self-driving vehicles).
However, as the Sutton doctrine spreads, its definitions are beginning to shift. This is the hallmark of every active, vibrant religion - debate, extension, annotation. "The Scaling Law" no longer means just scaling computation (the ark is not merely a boat); it now refers to a variety of approaches aimed at boosting transformer and compute performance, with a few tricks thrown in.

Now, the canon encompasses attempts to optimize every part of the AI stack, from techniques applied to the core models themselves (model merging, Mixture of Experts (MoE), and knowledge distillation) to generating synthetic data to feed these ever-hungry deities, with a lot of experimentation in between.
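To make one of those optimizations concrete, here is a minimal, illustrative sketch of knowledge distillation: a small "student" model is trained to imitate the softened output distribution of a large "teacher". This is a toy numpy version under simplifying assumptions; real pipelines use a deep-learning framework, temperature schedules, and a blend with the ordinary hard-label loss.

```python
import numpy as np

def softmax(logits, temperature=1.0):
    # Softened probabilities; a higher temperature exposes more of the
    # teacher's "dark knowledge" about near-miss classes.
    z = logits / temperature
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    # Cross-entropy between the teacher's soft targets and the student's
    # predictions; minimizing it pushes the student toward the teacher.
    p_teacher = softmax(teacher_logits, temperature)
    p_student = softmax(student_logits, temperature)
    return -(p_teacher * np.log(p_student + 1e-9)).sum(axis=-1).mean()

# Toy usage with random logits standing in for real model outputs.
teacher = np.random.randn(8, 32)
student = np.random.randn(8, 32)
print(distillation_loss(student, teacher))
```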
The Warring Sects
Recently, a question with a whiff of holy war has arisen in the AI community: is the "Bitter Religion" still correct?

This week, researchers from Harvard, Stanford, and MIT published a new paper, "Scaling Laws for Precision", that has sparked the conflict. It examines the end of the efficiency gains from quantization, a family of techniques that improve the performance of AI models and have greatly benefited the open-source ecosystem. Allen Institute for AI research scientist Tim Dettmers outlined its significance in a post, calling it "the most important paper in a long time". It continues a conversation that has been heating up over the past few weeks and reveals a notable trend: the consolidation of two rival religions.
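For readers who have not met quantization before, here is a minimal, illustrative sketch of the idea: weights stored as 32-bit floats are rescaled onto an 8-bit integer grid, trading a little accuracy for a lot of memory and speed. This is a toy symmetric-quantization example, not how any particular lab does it; production systems use per-channel scales, calibration data, and fused kernels.

```python
import numpy as np

def quantize_int8(weights: np.ndarray):
    # Map the largest-magnitude weight to 127 and round everything else
    # onto the resulting int8 grid.
    scale = np.abs(weights).max() / 127.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    # Recover an approximation of the original float weights.
    return q.astype(np.float32) * scale

w = np.random.randn(4, 4).astype(np.float32)
q, s = quantize_int8(w)
print("max absolute error:", np.abs(w - dequantize(q, s)).max())
```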

OpenAI CEO Sam Altman and Anthropic CEO Dario Amodei belong to the same sect. Both are confident that we will achieve Artificial General Intelligence (AGI) in the next 2-3 years. Altman and Amodei are arguably the two figures most dependent on the sanctity of the "Bitter Religion". All of their incentives point toward over-promising and generating maximum hype in order to accumulate capital in a game dominated almost entirely by economies of scale. If the Scaling Law is not the "Alpha and Omega", the first and the last, the beginning and the end, then what do you need $22 billion for?

Former OpenAI Chief Scientist Ilya Sutskever adheres to a different set of principles. He, along with other researchers (including many from within OpenAI, according to recent leaks), believes that scaling is approaching its limits. This group believes that maintaining progress and bringing AGI into the real world will inevitably require new science and research.
The Sutskever faction reasonably points out that the Altman camp's keep-on-scaling ideology is economically infeasible. As AI researcher Noam Brown asked, "After all, are we really going to train models that cost tens or hundreds of billions of dollars?" And that does not even count the additional tens of billions of dollars in inference compute if scaling shifts from training to inference.
The true believers are intimately familiar with their opponents' arguments. The missionary at your door can handle your Epicurean trilemma with ease. In response to Brown and Sutskever, the faithful point to the possibility of scaling "test-time compute". Rather than relying on ever-larger budgets to improve training, test-time compute devotes more resources to execution: when an AI model needs to answer your question or generate a piece of code or text, it can be given more time and more computation to do so. It is akin to shifting your attention from cramming for the math exam to persuading your teacher to give you an extra hour and let you use a calculator. For many in the ecosystem, this is the new frontier of the Bitter Religion, as teams move from orthodox pre-training to post-training and inference-time methods (a minimal code sketch of the pattern appears a few paragraphs below).

Pointing out the flaws of other belief systems and criticizing other doctrines without exposing one's own position is easy enough. So what do I believe?

First, I believe the current batch of models will deliver very high returns on investment over time. As people learn to work around the constraints and leverage the existing APIs, we will see truly innovative product experiences emerge and succeed. We will move past the anthropomorphizing, incrementalist stage of AI products. Rather than "Artificial General Intelligence" (AGI), a framing whose definition is flawed, we should think in terms of a "Minimum Viable Intelligence" that can be tailored to different products and use cases.

Reaching Artificial Superintelligence (ASI) requires more structure. Clearer definitions and boundaries would help us weigh the potential economic value each might bring against its economic cost. AGI, for instance, may deliver economic value to a subset of users (merely a local belief system), while ASI could exhibit unstoppable compounding effects and transform the world, our belief systems, and our social structures. I do not believe that simply scaling transformers will get us to ASI; but, as some might say, that is just my own atheism.

A loss of faith could trigger a chain reaction reaching beyond large language models (LLMs) and into every industry and market. To be fair, in most areas of AI/machine learning we have not yet fully explored the scaling laws; more miracles are surely coming. But if doubt does quietly creep in, it will become harder for investors and builders to sustain the same high conviction about the end-state performance of "early on the curve" categories like biotech and robotics. In other words, if large language models begin to slow down and stray from the chosen path, the belief systems of many founders and investors in adjacent fields will collapse with them.

In the long run, the debate over specialized models may be moot. Anyone building ASI (Artificial Superintelligence) is presumably aiming, ultimately, at an entity that can self-replicate, self-improve, and create without limit across every domain. Holden Karnofsky, former OpenAI board member and founder of Open Philanthropy, calls this creation "PASTA" (Process for Automating Scientific and Technological Advancement). Sam Altman's original plan for profitability seems to rest on a similar principle: "Build AGI, then ask it how to make money."
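Here is the promised sketch of one common test-time-compute pattern, self-consistency: sample many candidate answers and keep the most frequent one. The generate() function is a hypothetical stand-in for whatever model API you use; this illustrates the idea only, not any lab's actual method.

```python
import random
from collections import Counter

def generate(prompt: str, temperature: float = 0.8) -> str:
    # Hypothetical stand-in for an LLM sampling call; here it just returns
    # a random canned answer so the sketch runs end to end.
    return random.choice(["42", "42", "41", "43"])

def answer_with_more_compute(prompt: str, n_samples: int = 16) -> str:
    # Spending more inference-time compute means drawing more samples
    # (or letting the model reason longer), instead of training a bigger model.
    candidates = [generate(prompt) for _ in range(n_samples)]
    # Majority vote (self-consistency): more samples buy more reliability,
    # paid for in proportionally more inference compute.
    return Counter(candidates).most_common(1)[0][0]

print(answer_with_more_compute("What is 6 * 7?"))
```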
That is the apocalyptic vision of AI, the ultimate destiny. But unlike apocalyptic religions, these companies must demonstrate a steady stream of progress along the way. They will be companies built around the engineering problem of scaling, not scientific organizations conducting applied research; the end goal is products.

The believers are unlikely to lose their sacred faith in the short term. As noted earlier, religions on the rise compile scriptures, rites of worship, and a set of heuristics. They build physical monuments and infrastructure that reinforce their power and wisdom and demonstrate that they "know what they are doing". In a recent interview, Sam Altman said of AGI (emphasis mine): "This is the first time I feel like we really know what we're doing. From here to building an AGI, there's still a huge amount of work to be done. We know there are some known unknowns, but I think we basically know what we need to do, and it will take time; it will be very difficult, but it's also incredibly exciting."

If scaling stalls, I expect to see a wave of bankruptcies and mergers. The remaining companies will focus increasingly on engineering, an evolution we can anticipate by tracking talent flows. We are already seeing signs of OpenAI moving in this direction as it increasingly productizes. This shift will create space for the next generation of startups to leapfrog the incumbents, relying on innovative applied research and science rather than engineering to blaze new trails past the established players.

Lessons from Religion
My view on technology is that anything that appears to have an obvious compounding advantage usually does not last very long, and a common observation is that any business that appears to compound in an obvious way tends to grow at a much slower rate, and to a smaller scale, than expected.

The early signs of religious schisms often follow predictable patterns that can serve as a framework to continue tracking the evolution of "The Bitter Religion".
It typically starts with the emergence of competing interpretations, whether for capitalistic or ideological reasons. In early Christianity, different views on the divinity and the trinitarian nature of Christ led to schisms and divergent biblical interpretations. In addition to the schisms we have already mentioned in AI, there are other emerging fissures. For example, we see some AI researchers rejecting the core orthodoxy of transformers and turning to other architectures such as State Space Models, Mamba, RWKV, Liquid Models, etc. While these are still just weak signals, they indicate the sprouting of heretical ideas and a willingness to rethink the field from first principles.
Over time, the premature pronouncements of prophets also sow distrust. When religious leaders' predictions fail to materialize, or the promised divine intervention never arrives, the seeds of doubt are planted.
The Millerite movement predicted Christ's return in 1844; when Jesus did not arrive on schedule, the movement collapsed. In the tech world, we tend to quietly bury failed predictions and let our prophets keep sketching optimistic, long-horizon futures even as their deadlines slip again and again (hey, Elon). But if faith in the laws of scaling is not sustained by continual improvements in underlying model performance, it may face a similar collapse.
A corrupt, bloated, or unstable religion is vulnerable to apostasy. The Protestant Reformation succeeded not only because of Luther's theological views, but also because it emerged during the decline and turmoil of the Catholic Church. When cracks appear in the mainstream institutions, long-standing "heretical" ideas suddenly find fertile ground.
In the AI field, watch for smaller models or alternative approaches that achieve similar results with far less compute or data, such as the work done by various Chinese corporate labs and open-source teams (like Nous Research). Those who break through the limits of biological intelligence and overcome long-held barriers may also create a new narrative.
The most direct and timely way to observe the transformation is to track the movements of practitioners. Before any formal schism, religious scholars and clergy often privately hold heretical views while publicly conforming. The corresponding phenomenon today may be AI researchers who outwardly adhere to the laws of scaling but are quietly pursuing radically different methods, waiting for the right moment to challenge the consensus or leave their labs in search of theoretically broader horizons.
The tricky part about orthodoxies, religious or technological, is that they are often partially correct - just not as universally correct as their most devout adherents believe. Just as religions embed basic human truths within their metaphysical frameworks, the laws of scaling accurately describe something real about how neural networks learn. The question is whether that reality is as complete and immutable as the current enthusiasm implies, and whether these religious institutions (the AI labs) are agile and strategic enough to lead the zealots forward while also building the printing presses (chat interfaces and APIs) that propagate their knowledge.
The Endgame
"Religion is true to the common people, false to the wise, and useful to the rulers." - Lucius Annaeus Seneca
One potentially outdated view of religious institutions is that once they reach a certain scale, they become subject to the survival instincts of any human-run organization, fighting to endure against competition. In the process, they neglect truth and nobler motivations (which are not mutually exclusive).
I have written before about how capital markets become narrative-driven echo chambers, and how incentive structures perpetuate those narratives. The consensus around the laws of scaling bears an ominous resemblance to this: a deeply entrenched belief system that is mathematically elegant and extremely useful for coordinating large-scale capital deployment. Like many religious frameworks, it may be more valuable as a coordination mechanism than as a fundamental truth.





