Happy Tau Day! It’s funny. It feels like I’ve written a lot of posts about pi plus a few about its bigger sibling, tau. Yet the reality is that I’ve only ever written one Tau Day post, and that was back in 2014. (As far as celebrating Pi Day, I’ve only written three posts in eight years: 2015, 2016, & 2019.)
What I’m probably remembering is mentioning pi a lot here (which is vaguely ironic in that I won’t eat pie — mostly I don’t like cooked fruit, but there’s always been something about pie that didn’t appeal — something about baking blackbirds in a crust or something).
It’s true that I am fascinated by the number.
But I have talked about it a great deal, so I won’t be talking about it today (except in passing).
Instead, I thought I’d key off a fascinating property of pi, that it is thought to be a normal number, and introduce an artificial number that is also normal (except, in another sense, it’s anything but a normal number).
That artificial number is the Champernowne constant.
Mathematically, for a number to be normal means that, in its infinite sequence of digits, every possible finite block of digits appears with the frequency you’d expect from chance: each single digit a tenth of the time, each two-digit pair a hundredth of the time, and so on.
As a simple example, 4.0 isn’t normal, because its infinite sequence of digits is all zeros. Likewise one-third (0.333…) isn’t normal, because that infinite sequence of digits is all threes.
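As a rough illustration, the single-digit part of that test is easy to check with a few lines of Python (a sketch; `digit_frequencies` is just a name I made up):

```python
from collections import Counter

def digit_frequencies(digits):
    """Relative frequency of each decimal digit in a digit string."""
    counts = Counter(digits)
    return {d: counts.get(d, 0) / len(digits) for d in "0123456789"}

# One-third's expansion is all threes, so it fails even this weak test:
print(digit_frequencies("3" * 100)["3"])  # 1.0 (rather than the 0.1 a normal number needs)
```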
Pi (and a few other famous constants, like e) is thought to be normal: its digits pass every statistical test thrown at them, but no one has yet actually proved that it is.
In contrast, the Champernowne constant has been proved normal.
It’s an artificial number constructed by concatenating successive integers in the appropriate base.
For instance, the Champernowne constant in base ten, which is notated C10, is: 0.12345678910111213141516…
Since the progression is infinite, every integer, and thus every finite sequence of digits, definitely exists somewhere in the constant. In fact, we can even say where a given integer appears. (Can’t do that with pi!)
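In fact, a little arithmetic pins down the exact position: the one-digit integers use 9 digits, the two-digit integers use 90×2 more, and so on. Here’s a Python sketch (my own helper, not from any library) that computes where a given integer lands in C10:

```python
def champernowne_position(n):
    """1-based digit position (after the decimal point) where the
    integer n begins in the base-ten Champernowne constant."""
    d = len(str(n))
    # Digits used by all shorter integers: 9 one-digit, 90 two-digit, ...
    preceding = sum(9 * 10**(k - 1) * k for k in range(1, d))
    return preceding + (n - 10**(d - 1)) * d + 1

print(champernowne_position(12))  # 14 -- in "...910111213..." the 12 starts at digit 14
```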
The Champernowne constant can be constructed in other bases. For instance, in base two it’s notated C2, and is: 0.110111001011101111000…
I did a little color coding to make the bit groups stand out.
One thing that’s interesting to me about the Champernowne constant is that its definition is algorithmic.
Compare that with pi, whose definition is simply: the ratio of a circle’s circumference to its diameter. (Ironically, pi cannot be computed without an algorithm. Both pi and the Champernowne constant are infinite series.)
The algorithm for the Champernowne constant is very simple:
function champernowne(n, max):
    if max <= n:
        return string(n)
    return string(n) + champernowne(n+1, max)
(Recursion does all the work!) The string() function just converts a binary integer to its string representation.
To generate the Champernowne constant for integers up to 100, call the above function as: champernowne(1, 100)
I’ve played around a bit with pi, exploring its randomness and normality.
I did a bit of testing with the Champernowne constant, too, because, given the way it’s constructed, it seems like it would take a lot of digits for the normality to show up.
A quick (but not very precise) way to test that is to add up all the digits and divide by how many. That is, to take the mean (aka average) value.
The results didn’t surprise me too much:
| Digits | Pi Mean | Champ Mean |
The expectation is that the mean of a random set of digits should be 4.5.
Pi converges pretty quickly due to the random nature of its digits. But the Champernowne constant is only getting close around a million digits!
One interesting property of both pi and the Champernowne constant is that, while they are infinitely large objects, their definition is small.
In the case of the Champernowne constant, the definition is explicitly algorithmic. Even when specified as an infinite series (see the Wiki page), it still needs to be calculated.
When it comes to actually generating the digits of pi, that is also a calculated series that requires an algorithm. [See Calculated Math for an exploration of calculation versus evaluation.]
This brings us, briefly, to another calculated number, Chaitin’s constant, omega, due to Gregory Chaitin, who is considered a co-founder of algorithmic information theory (along with Andrey Kolmogorov, after whom the notion of Kolmogorov complexity is named).
Chaitin’s constant, omega, is (roughly speaking) the probability that a random program (a random set of bits) will halt on a given system. (Obviously, very few sets of random bits will even run.)
It has the interesting quality that no algorithm can ever compute its digits, because that would require solving the Turing Halting Problem.
So it’s a well-defined number that can’t be computed.
That brings me to my all-time favorite infinitely large mathematical object with a simple algorithmic definition: the Mandelbrot.
Which I’ve written about here plenty, so I won’t talk much about it today.
Today I’ll just show you two images I made:
[Click on either for a bigger version.]
These are, for me, pretty deep zooms. For the artists who regularly plumb the Mandelbrot depths, I’m still dabbling in the shallows.
But even so, the first image took a bit over 12 hours to generate, and the second image took a whopping 32 hours (31:54, actually). The program I use, UltraFractal, runs the CPU at 100% the whole time.
Part of what fascinates me is the scale involved. The entire Mandelbrot is contained within a circle with a 2.0 radius (on the complex plane). Zooming in gets into some seriously tiny territory.
(The Planck Length, the smallest distance we think has any meaning, is 1.616255×10^-35 meters. Both Mandelbrot images go below 10^-60. And, as I said, that’s really just the shallows.)
As an illustration, along the X axis, the first one runs from:
Notice how so many of the digits are the same. Only after 60 digits of decimal precision is there a difference. That’s the region of that first image. (The Y axis is similarly tiny.)
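That’s also why deep zooms need arbitrary-precision arithmetic. As a sketch (the coordinates here are made up, not the image’s actual ones), two numbers that agree for 60 decimal places are indistinguishable as ordinary 64-bit floats:

```python
from decimal import Decimal, getcontext

getcontext().prec = 80  # plenty of working precision

# Hypothetical coordinates agreeing to 60 decimal places:
a = Decimal("0." + "3" * 60 + "1")
b = Decimal("0." + "3" * 60 + "2")

print(float(a) == float(b))  # True: doubles keep only ~16 significant digits
print(a == b)                # False: Decimal preserves the difference
```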
In the center of that first image, you see a tiny black “mini-Mandelbrot” — which is what I zoomed in on for the second image (so there’s a few extra digits on that one).
As I’ve mentioned before, the Mandelbrot cannot be fully computed (around the edges), also due to the Turing Halting problem. But what can be computed is deeply fascinating (pun very much intended)!
So there ya go, some tasty numbers for Tau Day.
And remember, being it’s Tau Day, you get twice the pi!
Stay numerical, my friends!
June 28th, 2019 at 3:15 pm
In the Pi Day post in 2016 I wrote about how it’s possible, given a normal string of a given length, to determine the probability that a given ordered sub-sequence of a certain size appears.
See the linked post for the math, if interested. What’s new here is that I wrote a simple algorithm to scan my ten-million-digit pi sequence. It looks for all sub-sequences from “9999” to “999999”.
Here’s the start and end of its output:
The listing shows the sequence being scanned for, how many occurrences were found, and the offset (number of digits down the string) of the first occurrence.
What the above doesn’t show is that every single sequence was found (many times)! Every number from 9,999 to 99,999 is found in the first 10,000,000 digits of pi. (And, of course, all lower numbers would be there, too.)
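A scan like that is straightforward to sketch in Python (the digit string has to come from somewhere; here it’s a short stand-in rather than the real ten-million-digit file, and note that `str.count` counts non-overlapping matches, which is close enough for a rough census):

```python
def scan(digits, target):
    """Occurrences of the digit-string `target` in `digits` and the
    offset of the first occurrence (-1 if absent)."""
    return digits.count(target), digits.find(target)

# Stand-in for a long digit string (use a real pi-digit file in practice):
sample = "314159265358979323846264338327950288419716939937510"
print(scan(sample, "9"))      # (8, 5): eight nines, the first at offset 5
print(scan(sample, "26535"))  # (1, 6)
```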
June 29th, 2019 at 9:10 am
Can’t say I have much math to add here, except that Pi’s weirdness always makes me wonder if we’re seriously missing something about reality.
And now I’m craving pizza for some reason!
June 29th, 2019 at 11:13 am
Pi and e and every transcendental number, yeah, so weird. It really makes one wonder about that quote due to Leopold Kronecker: “God made the integers, all else is the work of man.” The bedeviling thing about that, though, is that, presumably, God made circles and therefore pi (and e is pretty “natural,” too).
Mathematician John Baez has an interesting paper, Struggles with the Continuum (also as an 8-post series on his blog), that puts a lot of this in stark relief. (But, of course, no one has any answers, just questions.)
Until I read that paper, I didn’t realize that even Newtonian mechanics has issues when real numbers are taken to represent reality (the first post goes through that).
I, and other serious people who know a lot more than I about this, continue to hold that time and space are smooth (and, naturally, other serious people don’t), but that belief does carry inherent paradoxes. Something like Planck Length needs to act as a limit on smallness (which it seems like it might) to resolve the paradoxes.
There is something vaguely ironic about using real numbers (complex numbers!) to do quantum calculations.