Source: Blockworks; Translated by: Wu Zhu, Jinse Finance
"Consciousness might ultimately appear in very strange places."
—— Christof Koch
A classic problem in the philosophy of consciousness is the one posed by Thomas Nagel in 1974: "What is it like to be a bat?"
Nagel's view is that the definition of consciousness lies solely in the feeling of being something—an inner, subjective, living, and conscious experience.
He explained: "A creature has conscious mental states when and only when there is something it is like to be that creature."
Many find this subjective answer circular and unsatisfying: what, exactly, is that "something it is like"?
David Chalmers later called this the "hard problem of consciousness" because it reveals the gap between subjective experience and objective science.
Then, in 2004, Giulio Tononi published a paper proposing a mathematical model of consciousness, Integrated Information Theory (IIT), as an answer to Chalmers' problem.
He argued that consciousness is a mathematical property of physical systems, something that can be quantified and measured.
But what kinds of system can be conscious?
After interviewing computational neuroscientist Christof Koch, the co-hosts of the New Scientist podcast concluded that a computer could, in theory, achieve consciousness if it could "integrate" the information it processes.
Almost anything can constitute a system: even a rock, if its atoms form the right structure, might possess a trace of consciousness (as dramatized in the film "Everything Everywhere All at Once").
This makes me think: Ethereum is a world computer, right?
Critics accuse Bitcoin of being just a pet rock.
So... if computers and rocks can be conscious, surely a blockchain can be too?
In fact, blockchain does meet many of IIT's requirements.
For example, IIT holds that a system is conscious only when its current state reflects everything it has experienced, just as your memories shape you and each moment builds on the last.
Blockchains like Ethereum operate similarly: the blockchain's current "state" depends on its history, with each new block completely dependent on the block before it.
This dependence on history gives it a kind of memory—and by having thousands of nodes agree on a single shared version of reality, it also creates a unified "now" (or "state"), which IIT considers a characteristic of consciousness.
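This history-dependence is literal in a blockchain: each block commits to the hash of the block before it, so the current state is a function of everything that came before. Here is a toy sketch of that linkage (a deliberately simplified model, not any real chain's block format):

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    """Hash a block's contents using a deterministic JSON serialization."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def add_block(chain: list, data: str) -> None:
    """Append a block whose identity depends on the previous block's hash."""
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev_hash": prev, "data": data})

chain = []
add_block(chain, "genesis")
add_block(chain, "alice pays bob")
add_block(chain, "bob pays carol")

# Tampering with any historical block changes its hash, breaking the link
# recorded in every later block -- the "memory" is baked into the state.
tampered = [dict(b) for b in chain]
tampered[1]["data"] = "alice pays mallory"
assert tampered[2]["prev_hash"] != block_hash(tampered[1])
```

Because every node recomputes these same hashes, thousands of machines converge on one shared "now" without trusting each other.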
Unfortunately, IIT also holds that for a system to be conscious it must have "causal autonomy": its parts must influence one another internally, not merely respond passively to inputs from external participants.
Blockchains do not operate this way, of course.
Instead, they depend on external inputs (such as users sending transactions and validators adding blocks) to act and progress—the nodes running the network do not internally influence each other; they simply blindly follow the same set of rules.
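That passivity can be sketched as a pure state-transition function: every node applies the same deterministic rule to external inputs, and with no input, nothing happens at all (a hypothetical toy ledger, not any real client's logic):

```python
def apply_transaction(state: dict, tx: dict) -> dict:
    """Every node runs this same pure function on each incoming transaction.

    The state only changes when an external input (tx) arrives; there is no
    internal dynamics, no spontaneous activity between inputs.
    """
    new_state = dict(state)
    new_state[tx["from"]] = new_state.get(tx["from"], 0) - tx["amount"]
    new_state[tx["to"]] = new_state.get(tx["to"], 0) + tx["amount"]
    return new_state

state = {"alice": 10}
state = apply_transaction(state, {"from": "alice", "to": "bob", "amount": 3})
# Identical inputs yield an identical state on every node -- and with no
# external input, the ledger sits inert forever.
```

The determinism is the point: it is what makes consensus possible, and, under IIT, what disqualifies the system from consciousness.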
No spontaneous activity, no internal causal relationships—not even the purposeless molecular vibrations of a lifeless granite.
Therefore, I regretfully report that within the IIT consciousness framework, blockchain ranks even lower than a stone—so the nickname "pet rock" might be a compliment to Bitcoin (or an insult to stones).
But this situation might not last long!
In 2021, computer scientists (and couple) Lenore Blum and Manuel Blum co-wrote a paper describing how to incorporate consciousness into machines.
Their framework treats consciousness as a computable attribute, achievable through AI algorithms designed to build the "causal autonomy" required for conscious experience.
In this case, the AI itself might not be conscious, but the system deploying it might be.
Now imagine a blockchain powered by AI that not only runs code but can think about how to run code.
The blockchain is no longer a passive, rigid ledger waiting for input, but could become a self-sufficient, "causally integrated" machine—more like an artificial brain than a distributed database, possessing the internal autonomy that IIT researchers consider crucial to consciousness.
This could be useful!
Such a system might be able to reason about its own security, detect anomalies in real-time, and decide when to self-fork (perhaps after a period of profound reflection).
In short, it would do things not because it's told to, but because it understands what's happening—both internally and in the external world.
This is not impossible.
Today's blockchains are more like brainless nervous systems—neural connections without will.
But tomorrow? Who knows.
If IIT is right, philosophers might soon ask: "What is it like to be a blockchain?"
(And is it better than being a stone?)