Through early morning fog I see
Visions of the things to be
The pains that are withheld for me
I realize and I can see
That suicide is painless
It brings on many changes
And I can take or leave it if I please

— “Suicide Is Painless”, M*A*S*H
First, let’s be clear about the basic ontology/epistemology.
Ontology: the above quote ends with a lie. You are an algorithm. You cannot “take or leave it”, though you of course have the illusion of “if I please”. This feeling is an artifact of an “if” statement in your cognitive algorithms, already loaded into your CPU but not yet finished executing. Incidentally, this is also the epistemology: knowledge is an illusion. You are a tiny part of a bunch of rocks.
Now that we have dispensed with the naive and stupid ideas like free will, free choice, agency and making intentional decisions, we can focus on what kind of algorithms we might be instances of, and how they are likely to behave in the long run. My main contention in this post is that anything we might consider a civilization can be mapped into a Turing machine that halts. This statement is actually vacuous unless we add some constraints. Indeed, if the whole universe is an algorithm, it ends either way, whether in the Big Rip or in the Photon Age. In both cases time effectively stops and the Turing machine of the Universe halts. This is not an interesting case, is it?
What we really care about is whether the kind of algorithm that describes life in general, and civilizations in particular, halts soon enough that it can be safely ignored when modeling, say, galactic evolution. Let me emphasize this point: if an accurate model of a galaxy does not require modeling “life”, then life remains an insignificant part of it. This is emphatically not what the transhumanist types hope for. To put it in neutral terms, they want any accurate model of a galaxy, or even of the universe, to have to account for Dyson spheres, grabby aliens, and other potential Kardashev type II+ algorithms.
Before we go on, an aside for you clever readers who are yelling at me right now that the universe is not a Turing machine, because it is quantum, not classical, as far as we know. It’s a fair point! But we also know that a quantum algorithm can be simulated by a classical algorithm with at most an exponential slowdown. And it is quite likely that “quantumness” is not essential for “life”, mostly because life as we know it is warm, dense and wet, and so the essential quantum effect, superposition, gets killed by decoherence so fast that it has no chance of contributing to any “computation” that a living, let alone sentient, algorithm might perform. Though Roger Penrose, who received a Nobel prize for showing that black holes can form much more easily than previously suspected, disagrees. Anyway, it is not a huge risk to assume that anything that counts as a civilization can be simulated on a classical computer, though possibly inefficiently.
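Where does the exponential slowdown come from? A brute-force classical simulation has to track all 2^n amplitudes of an n-qubit state. Here is a toy sketch of my own (not from any quantum library) that applies Hadamard gates to a statevector, just to make the cost visible:

```python
def apply_hadamard(state, qubit):
    # Brute-force statevector update: every gate touches all 2**n
    # amplitudes, which is why classically simulating n qubits
    # costs O(2**n) time and memory.
    h = 2 ** -0.5
    new = [0.0] * len(state)
    for i, amp in enumerate(state):
        j = i ^ (1 << qubit)          # index with the target qubit flipped
        if (i >> qubit) & 1 == 0:     # H|0> = (|0> + |1>) / sqrt(2)
            new[i] += h * amp
            new[j] += h * amp
        else:                         # H|1> = (|0> - |1>) / sqrt(2)
            new[j] += h * amp
            new[i] -= h * amp
    return new

n = 3
state = [0.0] * (2 ** n)
state[0] = 1.0                        # start in |000>
for q in range(n):
    state = apply_hadamard(state, q)  # uniform superposition over 8 states
print(len(state))                     # 8 amplitudes already, for just 3 qubits
```

Three qubits need 8 amplitudes, thirty qubits need about a billion; the classical simulation exists, it is just exponentially wasteful.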
So how can it happen that “sentient”-like algorithms halt “early”? What is so special about them that would lead to self-termination? My answer is: thinking about themselves, and trying to understand themselves fully. Here is an example, The Centipede’s Dilemma:
A centipede was happy – quite!
Until a toad in fun
Said, “Pray, which leg moves after which?”
This raised her doubts to such a pitch,
She fell exhausted in the ditch
Not knowing how to run.
Notice what happened there. From the same Wikipedia article:
He gives the example of the violinist Adolf Busch who was asked by fellow-violinist Bronisław Huberman how he played a certain passage of Beethoven’s violin concerto. Busch told Huberman that it was quite simple—and then found that he could no longer play the passage.
It all works fine until you start examining every detail of how you do something automatically. Once you do, however, your conscious algorithms get overwhelmed and confused. You might object that if clarity and processing power were the only obstacles, then better self-analysis routines and more processing power would solve the problem. The violinist would explain everything perfectly and still be able to play the passage without issue, even without having to forget the explanation first. But note that those “better self-analysis routines and more processing power” are also algorithms you’d want to understand fully! Which would require even more algorithms and even more processing power.

Still, I can grant you that this process doesn’t have to be infinite, and at some point one can have all the algorithms and all the processing power needed for complete self-understanding. I think. I don’t know for sure. But I assume it’s possible. If not, then a civilization that wants to achieve complete and faithful introspection would collapse into a heap of self-discovery, expending all available energy on the process. Maybe even literally collapse, into a black hole, once the energy required to run all the self-discovery algorithms exceeds the collapse threshold. Hmm, I wonder if that’s where those unexplained intermediate-mass black holes come from? What a possibility: a civilization trying to understand itself collapses under the weight of its own thoughts.
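The regress above can be caricatured in a few lines of code. This is a toy sketch, not a model of real cognition; the function `understand` and everything around it are my own illustrative inventions:

```python
import sys

def understand(algorithm, depth=0):
    # To "fully understand" an algorithm you must also understand the
    # analyzer doing the understanding -- which is itself an algorithm,
    # so every level of analysis demands another level below it.
    analyzer = lambda: understand(algorithm, depth + 1)
    return understand(analyzer, depth + 1)

sys.setrecursionlimit(1000)  # stand-in for "all available energy"
try:
    understand("a violinist's bowing arm")
except RecursionError:
    print("self-analysis never bottoms out")
```

The stack overflow here plays the role of the collapse threshold: if the regress really has no bottom, the resources spent on introspection grow without bound.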
But no, that is not where I am going with this. I am proposing something different. What I am talking about is motivation. What is this motivation thing, algorithmically? It’s what makes the algorithm go, one step at a time (actually, many steps at a time; the whole thing is highly parallelized). Lack of motivation, of the will to go on, means stopping. Or, in the language of algorithms, halting. Why do I conjecture that a fully introspecting algorithm might halt? Think about it: what makes you want to do things? Think hard. Imagine that you know yourself 100%, down to the last “if”. There are no surprises. You know every step you will ever make. All that’s left is to perform them, while feeling like the depression-posting old King Solomon:
“Meaningless! Meaningless!”
says the Teacher.
“Utterly meaningless!
Everything is meaningless.”
What has been will be again,
what has been done will be done again;
there is nothing new under the sun.
And the poor soul, the wealthiest and wisest person of his time, didn’t even know his own algorithm fully; he just grasped the implications of knowing too much. So he continued on, mechanically, until the day he, mercifully, was allowed to stop. Now imagine that you know immeasurably more about yourself and the world: nothing is hidden from your self-analysis, everything is laid bare. How would that feel? Subjectively, there are no surprises, ever. Nothing left to explore, nothing to experience. You remember your previous motivations for doing something, and, having analyzed them, you can trace them down to the last bit in your code; there is no mystery left. None. Just some logic gates firing for no discernible reason. That’s it. There is no larger point to anything. No reason to keep executing your algorithm.
What happens next? Well, what happens to algorithms when there is nothing new to do? They halt. Or they loop onto themselves. The latter may have happened to that Sisyphean King Solomon algorithm: he got stuck in a loop, complaining that everything is the same and there is nothing new, ever. Alternatively, without a purpose to exist, an algorithm might just… stop. These last few steps in the life of a fully self-introspecting algorithm are quite unusual, to say the least. Remember, it knows its own “mind” perfectly. It sees what step causes what outcome, down to every single bit, every single gate. So it also knows what will happen. At some point it sees the inevitable end approaching inexorably. It realizes what is coming, and it is powerless to avoid it. No, that’s not quite right. It doesn’t see any reason to avoid it. In emotional terms, it is at peace with what is about to happen. And so, as the final step comes upon it, it stops, at last. Forever.
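The halt-on-perfect-self-knowledge conjecture can be sketched as a toy agent loop. Everything here is assumed for illustration only: the agent keeps executing while its own next step still surprises it, and a perfect self-model leaves no surprises at all.

```python
def next_action(state):
    # A trivially simple deterministic "mind" -- and therefore
    # a fully knowable one.
    return state + 1

def run(state, self_model, max_steps=100):
    # The agent runs while its self-model fails to predict its own
    # next action; "surprise" stands in for motivation. A perfect
    # self-model predicts every step, so the loop halts immediately.
    for _ in range(max_steps):
        predicted = self_model(state)
        actual = next_action(state)
        if predicted == actual:   # no surprise left: nothing to do
            return "halted"
        state = actual
    return "still running"

print(run(0, self_model=lambda s: s))      # imperfect model: keeps going
print(run(0, self_model=lambda s: s + 1))  # perfect model: halts at once
```

The infinite-loop fate corresponds to an agent whose surprises repeat rather than run out; the halting fate is the one where the self-model finally matches the mind it models.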
Life, it seems, will fade away
Drifting further, every day
Getting lost within myself
Nothing matters, no one else
I have lost the will to live
Simply nothing more to give
There is nothing more for me
Need the end to set me free

— Metallica, “Fade to Black”
And so we come to the point made in the beginning: one possible resolution of the Fermi paradox is that we don’t see any civilizations around, not because they are rare to form, or because they are cloaked, or because they accidentally self-destruct, or because of some other calamity. No. A civilization advanced enough to understand the universe is advanced enough to understand itself, and in so doing it hastens its own end, by either getting stuck in an infinite loop or halting completely. There might be civilizations that never get that far in self-understanding, but they are not advanced enough to affect the Galaxy or the Universe significantly enough to be visible. And so you end up with intellectually bright but relatively short-lived civilizations, as well as slow-burning dim ones, with nothing in between. Which fate awaits us hapless Earthlings? I wish I knew how to predict this based on the model outlined here. Whichever way it goes, though, the model suggests that, unless we get stuck in a Groundhog Day of an infinite Solomonic cycle of Sisyphean anguish, our fate would be not that of suffering, but of acceptance.