The subject of Elementary Cellular Automata tickles my mind occasionally. These seemingly simple devices can spawn as-yet unexplained complexity, and the output of Rule 30 is especially interesting to me.

In previous articles I’ve described methods to optimise processing effort, profiled the effort of various optimisations, speculated on various patterns, and drawn inconclusive albeit interesting (to me at least) comparisons to logistic functions.

From this I know for a fact that you can skip rows. You don’t have to compute every single cell of every single row to figure out the value of an arbitrary cell on row n.

What I realised not long after figuring this out is that instead of using lookup tables to skip a number of rows, you can simply expand the radius or neighbourhood.

Traditionally, we want to figure out the next row: from row n we compute row n + 1, and to do this our radius is 1. To figure out row n + 2 directly from row n, our radius must be… 2.

A radius of 1 requires us to compare a neighbourhood of 3 cells — left, centre and right.

A radius of 2 requires us to compare a neighbourhood of 5 cells — 2 x left, centre, and 2 x right.

And so on.

A radius of 4 requires us to compare a neighbourhood of 9 cells — 4 x left, centre, and 4 x right.

With a radius of 4 (or 9-cell neighbourhood), there are 2⁹ = 512 possible neighbourhood patterns, so we would end up with 2⁵¹² rules.
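The counting above can be sketched in a few lines of Python (a minimal sketch; the helper name is my own):

```python
# Count the rule space for a given radius.
def rule_space(radius):
    neighbourhood = 2 * radius + 1   # cells examined: radius left + centre + radius right
    patterns = 2 ** neighbourhood    # distinct neighbourhood configurations
    rules = 2 ** patterns            # one output bit per pattern, so 2^patterns rules
    return neighbourhood, patterns, rules

print(rule_space(1))       # (3, 8, 256): the familiar 256 elementary rules
n, p, _ = rule_space(4)
print(n, p)                # 9 cells, 512 patterns, hence 2**512 possible rules
```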

Rule 1672702650582238412929299990695737076105523291763612823450679843418791795446849064772635979956834888647496876308913622902380656508433020627081483337588750 is a cellular automaton rule with a radius of 4 (cell neighbourhood of 9):

If you look closely, it generates the same output as Rule 30, provided you skip 4 rows at a time.
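One way to see why this works (a sketch; the function names are mine) is to compose Rule 30 with itself: feeding a 9-cell window through four Rule 30 steps yields exactly the cell 4 rows below its centre, which is what a radius-4 rule computes in a single step.

```python
RULE = 30

def step(cells):
    # One Rule 30 step; the row shrinks by one cell on each side.
    # Bit (4*left + 2*centre + right) of the rule number is the output.
    return [(RULE >> (4 * cells[i - 1] + 2 * cells[i] + cells[i + 1])) & 1
            for i in range(1, len(cells) - 1)]

def radius4_step(cells):
    # One "radius 4" step: each output cell is read straight from its
    # 9-cell neighbourhood, skipping 4 rows at once.
    out = []
    for i in range(len(cells) - 8):
        window = cells[i:i + 9]
        for _ in range(4):          # collapse the window with 4 Rule 30 steps
            window = step(window)
        out.append(window[0])
    return out

row = [0] * 8 + [1] + [0] * 8       # a single live cell, padded with context
four = row
for _ in range(4):
    four = step(four)               # 4 ordinary Rule 30 steps
print(radius4_step(row) == four)    # True: same row, reached in one jump
```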

My conjecture is that for every base rule and every number x of skipped rows, there is a radius-x rule that generates the same output as x steps of the base rule.

Also, the number 1672702650582238412929299990695737076105523291763612823450679843418791795446849064772635979956834888647496876308913622902380656508433020627081483337588750 is cleanly divisible by 30 with no remainder. So my other conjecture is that these broader-neighbourhood rules are always divisible by their elementary, radius-1 counterpart.
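Both observations can be checked mechanically (a sketch, assuming the standard Wolfram bit-ordering: bit k of the rule number is the output for the neighbourhood whose cells, read left to right, are the binary digits of k). At radius 1 the construction should give back 30, and at radius 4 the divisibility claim can be tested directly:

```python
RULE = 30

def rule30(l, c, r):
    return (RULE >> (4 * l + 2 * c + r)) & 1

def compose_rule_number(radius):
    # Build the rule number of the radius-`radius` rule equivalent to
    # `radius` consecutive Rule 30 steps.
    width = 2 * radius + 1
    number = 0
    for k in range(2 ** width):
        # Neighbourhood k, leftmost cell as the most significant bit.
        cells = [(k >> (width - 1 - i)) & 1 for i in range(width)]
        for _ in range(radius):                    # collapse the window
            cells = [rule30(cells[i - 1], cells[i], cells[i + 1])
                     for i in range(1, len(cells) - 1)]
        number |= cells[0] << k
    return number

print(compose_rule_number(1))   # 30, as a sanity check of the convention
big = compose_rule_number(4)
print(big % 30)                 # 0 if the divisibility conjecture holds here
```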

But that just makes sense right?

Of course it does. The rules themselves aren’t actually numbers. Rule 30 is so called because if you line up the outputs of the possible patterns in a certain order, they add up to 30. They’re not numbers per se, they’re just patterns arranged in a certain order; converting them from binary to decimal is just a convenience.
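That convenience is easy to make concrete (a small sketch): lay the eight three-cell patterns out from 111 down to 000, and the outputs read off as a binary numeral.

```python
# Rule 30's lookup table: the rule "number" is just the output column
# read as a binary numeral.
outputs = [(30 >> p) & 1 for p in range(8)]       # output for pattern value p
for p in reversed(range(8)):
    print(f"{p:03b} -> {outputs[p]}")             # 111 -> 0, 110 -> 0, ... 000 -> 0
bits = "".join(str(outputs[p]) for p in reversed(range(8)))
print(bits)            # 00011110
print(int(bits, 2))    # 30
```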

When you increase the radius and look at the rule number in binary, you end up with Rule 30’s bit pattern and its inverse repeated at the start and end of the binary sequence. In fact, if you split the binary representation into multiple rows you can see the same pattern repeated; observe:

Rule 30 with a radius of 1 is the familiar 00011110

But you can see the same 00011110 repeated at the start of each 8th of radius 3, radius 5, radius 7, and all other odd radii.

The even radii are equally interesting, as you can see the 00011110 repeated at the end of each 8th.

Meanwhile you see the first bit of each even 8th repeated across all even 8ths.

Clearly, while we give rules numbers, like 30, or 1672702650582238412929299990695737076105523291763612823450679843418791795446849064772635979956834888647496876308913622902380656508433020627081483337588750… these are not actually numbers in the usual sense. They’re just a convenient representation of a pattern.

You can see how rule 30’s equivalent for a radius of 4 plays out here: https://www.wolframalpha.com/input/?i=rule+1672702650582238412929299990695737076105523291763612823450679843418791795446849064772635979956834888647496876308913622902380656508433020627081483337588750+r%3D4

You can also see that not only do the radius-1 binary pattern and its inverse repeat across higher radii, but the first and last halves of an 8th of a rule’s binary digits bookend each 8th of its radius + 2.

Is this a thread worth pulling on?

Every time you increase the radius to x, you can calculate row n + x directly. This seems great. Perhaps if you figure out the magical pattern, you can derive a rule for an arbitrary radius through a series of binary operations. Then, with that rule in hand, you can skip however many rows you want.

However, every time you increase the radius, the memory needed to compute (and even just store) the rule number grows in a very sharply exponential fashion: a radius-r rule number takes 2^(2r+1) bits. Within less than 10 steps up in radius you already need more than a megabyte just to store the “rule number”.
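A rough sketch of that growth, assuming one bit of storage per neighbourhood pattern:

```python
# Bits needed to store a radius-r rule number: one output bit per
# neighbourhood pattern, i.e. 2**(2r + 1) bits.
def rule_bits(radius):
    return 2 ** (2 * radius + 1)

for r in (1, 4, 8, 11, 12):
    print(r, rule_bits(r) // 8, "bytes")
# Radius 11 already needs 2**23 bits = 1 MiB for the rule number alone.
```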

So as a means of finding a higher-order rule that more easily figures out row n + x (assuming you could even find an algorithm to generate higher-radius rules cheaply), this is definitely no improvement over simply calculating each row as you go: computing the rule and storing it would quickly require more resources than the universe itself contains.

However, it is interesting to see that even within the rule number itself (when viewed in binary), the same kinds of geometric patterns keep appearing.