# Coffee Space

## Langton’s Ant Universe

I find myself looking at Langton’s Ant again after a very long time. I love how something that appears to be simple ends up being quite a complicated problem. I have been meaning to return to it for quite some time, but from the perspective of measuring entropy.

A long story short… I got into a discussion about how I believe we live in a cyclic Universe. There were several criticisms, but the main ones were:

1. The Universe either doesn’t appear to have bounds, or these bounds are forever expanding. The argument is that it is not a closed system, and therefore energy conservation does not apply.
2. The Laws of Thermodynamics, specifically the Second Law on the irreversibility of natural processes, where entropy increases and the Universe becomes irreversibly cold. The idea being that the Universe can never return to its original state.

I disagree that the Universe is unbounded, although I cannot back that up. Under the assumption that it is bounded, I also disagree with applying the second law to the Universe as a whole, and instead look towards the first law of conservation of energy.

What is particularly interesting is that if conservation of energy applies and all parts of the system are reversible functions, I believe it should in theory return to its original state, no matter how complex it becomes.

My speculation is as follows:

For any closed, bounded system where energy is neither created nor destroyed, it will always return to its original state given a sufficiently long amount of time.

Showing this is extremely difficult, but we can at least begin to prove out parts of it through experimentation. By using Langton’s Ant, we should be able to show that no matter how complex the system becomes, it is capable of returning to the original state.

A full proof would likely have to show that, for a sufficiently complex closed system, every state is unique up until the point of repeat, and that once a repeat is found the system will definitely return to that state and remain locked in that cycle.
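The core of this can be sketched in a few lines of Python: a deterministic map on a finite state space must eventually revisit a state, and if the map is also invertible (reversible), the first revisited state must be the starting one, since no state can have two predecessors. The `steps_to_return` helper and the `rotate` map below are made-up stand-ins for any reversible dynamics:

```python
def steps_to_return(step, start) :
  # Iterate an invertible map on a finite set until we come back to the
  # exact starting state. Invertibility rules out a "rho-shaped" path
  # (a tail leading into a loop), so the first repeat is the start itself.
  state = step(start)
  count = 1
  while state != start :
    state = step(state)
    count += 1
  return count

# Toy reversible dynamics: add 5 modulo 12 (a rotation of a 12-state ring)
rotate = lambda s : (s + 5) % 12
print(steps_to_return(rotate, 0))  # → 12
```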

Note to self: From a security perspective there is probably a lot that can be said about hashes that return to their previous state, i.e. you shouldn’t see:

$\text{hash}(x) = \text{hash}(\text{hash}(\ldots\text{hash}(\text{hash}(x))))$

If this happens too soon, then only a smaller subset of hashes needs to be explored: you would see regular collisions, and cracking would become (relatively) easier.
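As a toy illustration of this, truncating a real hash down to a single byte forces exactly this kind of early cycle. The `weak_hash` and `find_cycle` helpers below are deliberately crippled stand-ins for demonstration, not a real attack:

```python
import hashlib

def weak_hash(b) :
  # Deliberately truncate SHA-256 to one byte: only 256 possible outputs
  return hashlib.sha256(b).digest()[:1]

def find_cycle(seed) :
  # Iterate hash(hash(...)) and record when each state was first seen
  seen = {}
  state = weak_hash(seed)
  step = 0
  while state not in seen :
    seen[state] = step
    state = weak_hash(state)
    step += 1
  # Returns (steps until first repeat, length of the cycle)
  return (step, step - seen[state])

# With only 256 outputs, a repeat is guaranteed within 257 iterations
print(find_cycle(b"x"))
```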

### Entropy

For this we are interested in measuring the entropy of the system.

$H_2(X) = -\sum_{x \in X} p(x) \log_2 p(x)$

We use $H_2()$ to indicate entropy measured in bits (base-2), $X$ is the string we calculate the entropy of, and $x$ ranges over the distinct bytes within the string $X$.

We use the code from Stack Overflow to calculate string entropy (which they claim to be reasonably fast):

```python
import math

def entropy(string):
  # Get the probability of each distinct char in the string
  prob = [ float(string.count(c)) / len(string) for c in dict.fromkeys(list(string)) ]
  # Calculate the entropy
  return - sum([ p * math.log(p) / math.log(2.0) for p in prob ])

def entropy_ideal(length):
  # Entropy of a string of this length where every character is distinct
  prob = 1.0 / length
  return -1.0 * length * prob * math.log(prob) / math.log(2.0)

tests = [ "a", "aa", "Hello world!", "mf7nclsa." ]
for t in tests :
  print("'" + t + "', entropy: " + str(entropy(t)) + ", ideal:" + str(entropy_ideal(len(t))))
```

```
'a', entropy: -0.0, ideal:-0.0
'aa', entropy: -0.0, ideal:1.0
'Hello world!', entropy: 3.0220552088742, ideal:3.5849625007211565
'mf7nclsa.', entropy: 3.169925001442312, ideal:3.1699250014423126
```


The code we stole works, awesome!

### Environment

We first create an environment of a given size:

```python
def env_create(size) :
  # x/y: agent position, r: agent rotation, s: world size, w: world patches
  x = size // 2
  y = x
  r = 0
  s = size
  w = " " * (s ** 2)
  return (x, y, r, s, w)

x, y, r, s, w = env_create(5)
```

A 5x5 world will do us nicely. The larger the world, the longer it takes to compute.

Some functions for getting and setting patches:

```python
def env_get(w, s, x, y) :
  # Read the patch at (x, y)
  return w[x + (y * s)]

def env_set(w, s, x, y, p) :
  # Strings are immutable, so build a new world with one patch changed
  return w[:x + (y * s)] + p + w[x + (y * s) + 1:]
```
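A quick sanity check of these helpers (re-declared here so the snippet stands alone): setting a patch returns a new world string of the same length, and reading the patch back gives the value we wrote:

```python
def env_get(w, s, x, y) :
  return w[x + (y * s)]

def env_set(w, s, x, y, p) :
  # Strings are immutable, so build a new world with one patch changed
  return w[:x + (y * s)] + p + w[x + (y * s) + 1:]

s = 5
w = " " * (s ** 2)
w = env_set(w, s, 1, 2, "#")
print(env_get(w, s, 1, 2))  # → #
print(len(w) == s ** 2)     # → True
```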

And now we want something that rotates the agent and advances it (the main part of the Langton’s Ant algorithm):

```python
def agent_rotate_advance(s, x, y, r, left = True) :
  # Rotate
  if left : r -= 1
  else : r += 1
  # Update rotation based on limits
  if r < 0 : r += 4
  if r >= 4 : r -= 4
  # Advance based on rotation
  if   r == 0 : y -= 1
  elif r == 1 : x += 1
  elif r == 2 : y += 1
  elif r == 3 : x -= 1
  # Update advance based on limits (wrap around the edges)
  if x < 0 : x += s
  if x >= s : x -= s
  if y < 0 : y += s
  if y >= s : y -= s
  return (s, x, y, r)
```

Note: Our simulation is wrapping! We could equally simulate some kind of bounce, but ultimately we are simulating a bounded Universe.
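As an aside, the bounds checks in `agent_rotate_advance` are equivalent to Python's modulo operator, which already wraps negative values for a positive modulus. A hypothetical shorter variant of the wrapping would be:

```python
s = 5

# Python's % returns a non-negative result for a positive modulus, so it
# implements toroidal wrapping in a single expression.
print((0 - 1) % s)  # stepping up from the top row → 4 (bottom row)
print((4 + 1) % s)  # stepping right from the right edge → 0 (left edge)
```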

Lastly, it would be good to actually visualise what on earth is going on!

```python
def env_graphic(x, y, r, s, w) :
  z = 16
  g = Svgg(width = s * z, height = s * z, bg = "#FFF")
  # Print the world
  for j in range(s) :
    for i in range(s) :
      bg = "#FFF"
      if env_get(w, s, i, j) != " " : bg = "#222"
      g.add("rect", {"x": i * z, "y": j * z, "width": z, "height": z, "fill": bg, "stroke": "#000"})
  # Print the agent on top
  dir = "?"
  if   r == 0 : dir = "^"
  elif r == 1 : dir = ">"
  elif r == 2 : dir = "v"
  elif r == 3 : dir = "<"
  g.add("text", {"x": (x + 0.5) * z, "y": (y + 0.5) * z, "dominant-baseline": "middle", "text-anchor": "middle", "fill": "#F00", "value": dir})
  print(g.to_uri())
  return

env_graphic(x, y, r, s, w)
entropies = [ entropy(w) ]
print("Entropy: " + str(entropies[-1]))
```

```
Entropy: -0.0
```


And we get a nice little graphic to show this works correctly. The environment starts with zero entropy.

Now we define a way to run each cycle of the simulation:

```python
def env_run_once(x, y, r, s, w) :
  p = env_get(w, s, x, y)
  if p == " " :
    # White patch: flip to black, turn right
    w = env_set(w, s, x, y, "#")
    s, x, y, r = agent_rotate_advance(s, x, y, r, False)
  else :
    # Black patch: flip to white, turn left
    w = env_set(w, s, x, y, " ")
    s, x, y, r = agent_rotate_advance(s, x, y, r, True)
  return (x, y, r, s, w)
```
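Worth noting: Langton’s Ant is genuinely reversible, which is exactly the property the earlier speculation relies on. Given any state we can walk the agent backwards and un-flip patches. Below is a self-contained sketch, where `step` compacts the same logic as `env_run_once` and `unstep` is a hypothetical inverse:

```python
def step(x, y, r, s, w) :
  # Forward Langton's Ant step (same logic as env_run_once, compacted)
  i = x + y * s
  if w[i] == " " :
    w = w[:i] + "#" + w[i + 1:]
    r = (r + 1) % 4  # turn right on white
  else :
    w = w[:i] + " " + w[i + 1:]
    r = (r - 1) % 4  # turn left on black
  if   r == 0 : y = (y - 1) % s
  elif r == 1 : x = (x + 1) % s
  elif r == 2 : y = (y + 1) % s
  else        : x = (x - 1) % s
  return (x, y, r, s, w)

def unstep(x, y, r, s, w) :
  # Inverse step: walk back against the heading, then undo the flip
  # and the turn based on the patch colour we find there
  if   r == 0 : y = (y + 1) % s
  elif r == 1 : x = (x - 1) % s
  elif r == 2 : y = (y - 1) % s
  else        : x = (x + 1) % s
  i = x + y * s
  if w[i] == "#" :
    w = w[:i] + " " + w[i + 1:]  # the patch was white before the step
    r = (r - 1) % 4
  else :
    w = w[:i] + "#" + w[i + 1:]  # the patch was black before the step
    r = (r + 1) % 4
  return (x, y, r, s, w)

state = (2, 2, 0, 5, " " * 25)
forward = state
for _ in range(100) : forward = step(*forward)
for _ in range(100) : forward = unstep(*forward)
print(forward == state)  # → True
```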

And we run for 8 cycles:

```python
i = 0
while i < 8 :
  x, y, r, s, w = env_run_once(x, y, r, s, w)
  env_graphic(x, y, r, s, w)
  entropies += [ entropy(w) ]
  print("[" + str(i) + "] Entropy: " + str(entropies[-1]))
  i += 1
```

```
[0] Entropy: 0.2422921890824148
[1] Entropy: 0.40217919020227283
[2] Entropy: 0.5293608652873644
[3] Entropy: 0.6343095546405662
[4] Entropy: 0.5293608652873644
[5] Entropy: 0.6343095546405662
[6] Entropy: 0.7219280948873623
[7] Entropy: 0.7950402793845224
```


Lovely. Now we want to actually plot the entropy over time:

```python
def display_entropy() :
  p = Plott(
    title = "Langton's Ant Entropy Over Time 5x5",
    x_title = "Time (ticks)",
    y_title = "Entropy",
    fg = "#FFF",
    bg = "#222"
  )
  x = list(range(len(entropies)))
  p.plot(x, entropies, "Entropy")
  print(p.to_uri())
  return

display_entropy()
```



And we can see it increasing as the Second Law suggests. Lastly we do a quick calculation of how many possible states the patches can be in:

```python
print("All possible states (patches only): " + str(2 ** (s ** 2)))
```

```
All possible states (patches only): 33554432
```


Running for 33 million steps with this not-so-efficient code is a lot. It would be super cool if we could do it faster…
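One plausible speed-up, sketched below as a hypothetical `run_ant`: store the world in a mutable `bytearray` instead of rebuilding a string every step, and inline the rotate/advance logic. Here it simply searches for the first tick at which the blank world recurs:

```python
def run_ant(size, max_steps) :
  # Flat bytearray grid: 0 = white patch, 1 = black patch
  grid = bytearray(size * size)
  x = y = size // 2
  r = 0  # 0 = up, 1 = right, 2 = down, 3 = left
  for tick in range(1, max_steps + 1) :
    i = x + y * size
    if grid[i] == 0 :
      grid[i] = 1
      r = (r + 1) % 4  # turn right on white
    else :
      grid[i] = 0
      r = (r - 1) % 4  # turn left on black
    # Advance with toroidal wrapping
    if   r == 0 : y = (y - 1) % size
    elif r == 1 : x = (x + 1) % size
    elif r == 2 : y = (y + 1) % size
    else        : x = (x - 1) % size
    if not any(grid) :
      # All patches back to white: the blank world has recurred
      return tick
  return None

print(run_ant(5, 5000))
```

On larger worlds you would also want to detect full-state repeats (including the agent), e.g. by storing `(x, y, r, bytes(grid))` tuples in a set.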

### Experiment

Now we run for some number of steps (continuing from the state we reached previously):

```python
while i < 2335 :
  x, y, r, s, w = env_run_once(x, y, r, s, w)
  entropies += [ entropy(w) ]
  i += 1
```

Now for these last steps, we want to see the environment and the entropy. Note how the number of black patches is decreasing and we are returning to the starting state (ignoring the position and rotation of the agent):

```python
while i < 2342 :
  x, y, r, s, w = env_run_once(x, y, r, s, w)
  env_graphic(x, y, r, s, w)
  entropies += [ entropy(w) ]
  print("[" + str(i) + "] Entropy: " + str(entropies[-1]))
  i += 1

display_entropy()
```

```
[2335] Entropy: 0.6343095546405662
[2336] Entropy: 0.5293608652873644
[2337] Entropy: 0.6343095546405662
[2338] Entropy: 0.5293608652873644
[2339] Entropy: 0.40217919020227283
[2340] Entropy: 0.2422921890824148
[2341] Entropy: -0.0
```


Pretty cool!

### Discussion

What is really super cool is that the entropy is actually symmetric over the run. Something interesting is happening there.

If you zoom into any part of the graph, it looks random and seems as though high entropy is maintained. Yet this simulation does in fact return to its original state. For a complex enough simulation you would struggle even more to see any kind of symmetry at all; you would simply see that entropy is maintained, seemingly forever. Bear in mind that the number of possible states absolutely explodes:

```python
for size in range(6, 16) :
  print("Possible states for a " + str(size) + "x" + str(size) + " world (patches only): " + str(2 ** (size ** 2)))
```

```
Possible states for a 6x6 world (patches only): 68719476736
Possible states for a 7x7 world (patches only): 562949953421312
Possible states for a 8x8 world (patches only): 18446744073709551616
Possible states for a 9x9 world (patches only): 2417851639229258349412352
Possible states for a 10x10 world (patches only): 1267650600228229401496703205376
Possible states for a 11x11 world (patches only): 2658455991569831745807614120560689152
Possible states for a 12x12 world (patches only): 22300745198530623141535718272648361505980416
Possible states for a 13x13 world (patches only): 748288838313422294120286634350736906063837462003712
Possible states for a 14x14 world (patches only): 100433627766186892221372630771322662657637687111424552206336
Possible states for a 15x15 world (patches only): 53919893334301279589334030174039261347274288845081144962207220498432
```


To put this into perspective, there are something like $10^{80}$ atoms in the visible Universe, and they each have more than two distinct states. Luckily, infinity is a pretty short time when nobody can live long enough to experience it. It’s also a hell of a lot shorter when you don’t need to explore every state in order to return to the original one.

There are some limitations of course…

1. We assume this simulation is also valid in a continuous environment. I would simulate it, but ultimately any simulation I can run would be digital in nature (floats can only be so accurate).
2. The states only change around one agent. Assuming we are not in a one-electron Universe, we would need to somehow have the entire environment being updated in parallel. We are currently limited by how computers work (but potentially quantum computers could handle it).
3. There isn’t really a solid concept of energy here, and it’s not clear if it’s created or destroyed.

Maybe in the future I will look to address these issues. It would probably be more like a gas explosion, but I would need to have a good think about precisely how this would be done.