2. Yardsticks of the Computable Universe

To recap, the hypothesis is, “all computable things exist, maybe other things, too, but we’re pretty sure about the computable things, as it is easy to trust that they are well-defined enough to be necessary truths.”

But what do “computable things” look like, and is this hypothesis powerful enough to account for our reality? Can we use this hypothesis to make any testable predictions? I think the answer is yes to both. But it’s going to take a while to get to the latter.

Astronomy made great advances after the discovery of “standard candles”. In this post I want to give some analogous tools that can be used to think about the computable universe.

We start our tour of the computable universe with Turing Machines, Conway’s Game of Life, the number π, and the halting problem. If you know what these things are, skip down until you see [STOP SKIPPING]. Otherwise, I will give an extremely terse overview of each; follow the links to wikipedia if you need more detail.

Turing Machines (henceforth TMs) are an extremely simple model of computation. The simplicity makes it easier to prove things about them, but it also makes their "source code" (state table) harder to understand than, say, Python. TMs operate on a single infinitely long "tape", marked either 1 or 0 at each location. (Using more than two symbols, or more than one tape, doesn't provide more power.) TMs have a current state and a table they use to switch states. At each step, the TM reads its current location, optionally writes 1 or 0, moves right or left, and switches to another state (or halts). Input is provided by preloading the tape; output is read off the tape when the program halts. Or you can peek while it's in progress if you really want to understand what it's doing.
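
If you'd rather see that in code than in prose, here is a minimal sketch of a TM simulator in Python. The dict-based state table, the halting convention, and the toy two-rule machine at the end are illustrative choices of mine, not any standard encoding.

```python
from collections import defaultdict

def run_tm(table, tape, state="start", pos=0, max_steps=1000):
    """Run a Turing machine on a tape of 0s and 1s.

    `table` maps (state, symbol) -> (write, move, next_state), where
    move is +1 (right) or -1 (left) and next_state may be "halt".
    Unvisited tape cells default to 0.
    """
    cells = defaultdict(int, enumerate(tape))
    for _ in range(max_steps):
        if state == "halt":
            break
        write, move, state = table[(state, cells[pos])]
        cells[pos] = write
        pos += move
    return [cells[i] for i in range(min(cells), max(cells) + 1)]

# Toy machine: scan right over 1s, write a 1 over the first 0, then halt.
table = {
    ("start", 1): (1, +1, "start"),
    ("start", 0): (1, +1, "halt"),
}
print(run_tm(table, [1, 1, 0, 1]))  # -> [1, 1, 1, 1]
```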

Some TMs are special, in that you can compute, for any given program P, a prefix for your input that will cause the TM to run program P on the rest of the input. These are called Universal Turing Machines (UTMs) and they'll be important later on.
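
Writing out a real UTM encoding is fiddly, but here is a loose Python analogy for the "prefix" idea: the tape starts with an encoding of a program (here, plain Python source text), followed by a separator I made up for this sketch, followed by the actual input.

```python
# Hypothetical separator between the program prefix and the input proper.
SEP = "\n###INPUT###\n"

def universal(tape: str) -> str:
    """Treat the start of the tape as a program and the rest as its input."""
    program_src, data = tape.split(SEP, 1)
    namespace = {}
    exec(program_src, namespace)   # "load" the encoded program
    return namespace["run"](data)  # run it on the remaining input

# The prefix below makes the universal program behave like a string reverser.
tape = "def run(x):\n    return x[::-1]" + SEP + "hello"
print(universal(tape))  # -> "olleh"
```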

The Game of Life (GoL) is one of the simplest 2D cellular automata. It's "played" on a 2D grid where each cell is either alive or dead. At each step, we count each cell's live neighbors: a live cell with 2 or 3 live neighbors stays alive, a dead cell with exactly 3 is "born", and every other cell dies or stays dead. These simple rules lead to a lot of rich behavior. Similarly complex behavior can be produced by even simpler cellular automata, such as Rule 30 or Rule 110 of the 1D elementary rules.
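
Here is one way to write a single GoL step in Python, keeping only a sparse set of live cells; the glider at the end is the classic pattern that drifts diagonally forever.

```python
from collections import Counter

def life_step(alive):
    """One Game of Life step; `alive` is a set of (x, y) live cells."""
    # Count how many live neighbors each candidate cell has.
    counts = Counter(
        (x + dx, y + dy)
        for (x, y) in alive
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # Birth on exactly 3 neighbors; survival on 2 or 3.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in alive)}

glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
for _ in range(4):
    glider = life_step(glider)
print(sorted(glider))  # the same glider shape, shifted one step diagonally
```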

[STOP SKIPPING] Most of you have heard of pi before. I’ll use pi as an example because it’s the most famous infinite string of digits. I want to disambiguate between that infinite sequence of digits, and the means by which those digits can be produced. For our purposes, we’ll think about the family of algorithms which compute the digits of pi. These algorithms (programs) take the number of digits to compute and then return that many digits.
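
For concreteness, here is a sketch of one member of that family, based on Gibbons' unbounded spigot algorithm for pi; the function names are mine.

```python
from itertools import islice

def pi_digits():
    """Yield the decimal digits of pi one at a time, forever
    (Gibbons' unbounded spigot algorithm)."""
    q, r, t, k, n, l = 1, 0, 1, 1, 3, 3
    while True:
        if 4 * q + r - t < n * t:
            yield n
            q, r, n = 10 * q, 10 * (r - n * t), (10 * (3 * q + r)) // t - 10 * n
        else:
            q, r, t, n, k, l = (q * k, (2 * q + r) * l, t * l,
                                (q * (7 * k + 2) + r * l) // (t * l), k + 1, l + 2)

def digits_of_pi(count):
    """Return the first `count` digits of pi as a string."""
    return "".join(str(d) for d in islice(pi_digits(), count))

print(digits_of_pi(10))  # -> "3141592653"
```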

Now for the fun stuff. We’re going to be doing a lot of thinking about finding programs inside the output of other programs, and we’re going to do it with programs we don’t understand very well, programs that we currently haven’t even found the source code for. So let’s practice now with programs that we do understand relatively well. Specifically, let’s compare pi and GoL.

One can construct a TM that runs the GoL, compressing the 2D grid onto a 1D tape. The program can even grow the grid as needed. Perhaps surprisingly, one can construct a setup in GoL that will run arbitrary programs, just like a UTM. (Having this property is called being "Turing complete".) Yes, you can make a big stack of "TM running GoL running TM running GoL running…" But every layer comes with a pretty hefty slowdown penalty, so you wouldn't see the innermost program making much progress if you actually tried to run it.

Similarly, one can construct a TM that outputs as many digits of pi as one wishes. However, one cannot use the program that generates pi to run an arbitrary program. This is a critical difference. It is widely believed (though unproven) that for any program P with finite output, you can find a run of digits of pi that exactly matches P's output. Suppose P matched digits 100–1000 of pi (maybe P's source code is something like "print 821480865132823066…"). Now suppose we modify P so that it prints slightly different output ("print 91037…"). Do we expect digits 100–1000 of pi to change as well? No. We expect to find that this new program P' merely matches a different location in pi. Pi remains unchanged by our shenanigans.
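
As a toy version of this matching game, here is a sketch that hunts for a target digit string inside pi's digit stream. It assumes the pi_digits generator from the earlier sketch, and the search cutoff is an arbitrary choice.

```python
from itertools import islice

def find_in_pi(target, limit=10_000):
    """Return the index where `target` first appears in the first `limit`
    digits of pi (index 0 is the leading 3), or -1 if it never shows up."""
    digits = "".join(str(d) for d in islice(pi_digits(), limit))
    return digits.find(target)

print(find_in_pi("8214808651"))  # -> 101, just past the 100th decimal digit
```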

Pi is usually thought of as a constant, for good reason. In fact, you can think of the output of any program as a constant: given its state table and input bits, a given TM always does exactly the same thing, so our program P's output was also a constant. The above thought experiment thus demonstrates that there can be "logical coincidences" between constants.

Can there similarly be "logical causality" between constants? Yes! The constants have a causal structure. Think back to the toy example of a TM running GoL. The final output is constant. Would you say it was caused to be what it is by the TM? By the rules of GoL? By the input bits? In contrast to our previous thought experiment, we know that if we changed the rules of GoL in this program stack, the output would be completely different.
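
To make that concrete, here is a sketch with a configurable rule set; the "variant" survival rule below is an arbitrary tweak, not a standard automaton. The same seed that sits still forever under GoL's rules dies out immediately under the tweaked rules.

```python
from collections import Counter

def step_with_rules(alive, born, survive):
    """One cellular-automaton step with configurable birth/survival counts."""
    counts = Counter(
        (x + dx, y + dy)
        for (x, y) in alive
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    return {c for c, n in counts.items()
            if (n in survive if c in alive else n in born)}

block = {(0, 0), (0, 1), (1, 0), (1, 1)}  # a "still life" under GoL's rules
after_gol     = step_with_rules(block, born={3}, survive={2, 3})
after_variant = step_with_rules(block, born={3}, survive={2})
print(after_gol == block)  # -> True: the block is unchanged
print(after_variant)       # -> set(): under the variant rule, every cell dies
```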

Now, the hard question: if we didn't happen to know what the stack of programs was, would it be reasonable for us to believe in that causal structure? How could we tell the difference between causal structure and coincidences? We'll be revisiting this question in the future, but we have a few more stops to make first, so hang on.

(prev · index · next)
