This image here of the Mandelbrot fractal might look like one of the uglier renderings you’ve seen, but it’s a thing of beauty to me. That’s because some code I wrote created it. Which, in itself, isn’t a deal (let alone a big one), but how that code works kind of is (at least for me).
The short version: the code implements special virtual math for calculating the Mandelbrot. That the image looks anything at all like it should shows the code works.
Yet according to that image, something wasn’t quite right.
See the jagged edges on the big blue bands of color around the Mandelbrot set (the round black parts and the “needle” sticking out to the left)? They shouldn’t be jagged.
Not unless every Mandelbrot rendering I’ve seen does something to smooth out the edges (and I don’t believe that).
I expected something that looks like the one below (done with Ultra Fractal 6):
There is a coloring algorithm that smooths the banding such that one band blends nicely into another — that’s common to the point of being the default.
Band smoothing results in something like this (also from UF6):
But those ragged edges — something is going on there.
The weird thing: What kind of error gives me ragged edges, but still generates a correct-looking Mandelbrot? A math error seems like it should result in worse damage to the image.
I had an idea what might be at fault. A known bug, but I didn’t see how it mattered here (turns out it didn’t).
Plan A was to fix that and try another render, but the code is calculation-intense — that 200×200 jagged image above took over two hours. If the bug wasn’t the problem, I’d have spent hours on a render and gained nothing.
I didn’t have that kind of patience.
Plan B was to investigate the reason for the jagged edges. Understanding the cause should point to the fix. Which it did.
The short version: The math code itself was fine. The jagged edges came from the Mandelbrot generation code.
After 12 hours of overnight calculation, here’s the 400×400 pixel result:
Which looks a lot better.
The scalloping of the edges — also an artifact of the generation code — I can live with because the alternative is horrifying.
At this point you might be wondering what this post is about and, more importantly, whether it is worth reading.
To be honest, it almost certainly is not.
This is me commemorating a small milestone regarding some code I’ve been working on. Make no mistake: What I’m doing is strictly Amateur Night at the Mandelbrot. I’m dabbling in something that interests me — building ships in bottles, so to speak.
This is not likely to be of much interest to others, so caveat lector.
[That said, I’m not going to geek out much on the code itself. I have a programming blog for that. I am going to geek out on the Mandelbrot a bit.]
Basic Background #1: The Mandelbrot is generated by doing a repeating calculation (the same calculation) on every point of the image. The calculation repeats until: [A] some max limit is reached; or [B] the calculation “escapes” — reaches a value greater than 2.
Points that escape are not in the Mandelbrot set; points that never do are. (Which makes the M-set impossible to fully calculate — some points that max out might eventually escape.)
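In Python (the language I used for this project), the core escape-time loop is only a few lines. This is a generic sketch with ordinary floats, not my arbitrary-precision version, and the name `escape_count` is mine:

```python
def escape_count(c, max_iters=1000):
    """Return the number of repeats before the orbit of c escapes,
    or max_iters if it never does (c is then presumed in the set)."""
    z = 0j
    for n in range(max_iters):
        z = z * z + c          # the repeated calculation
        if abs(z) > 2.0:       # escape: farther than 2 from the origin
            return n
    return max_iters           # maxed out -- treated as in the set
```

A point like c = 0 stays bounded forever and hits the max; points well outside the set escape after just a few repeats.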
Incidentally, that the same calculation is done on each pixel means the Mandelbrot is embarrassingly parallel — every pixel can be calculated simultaneously. My dream is using a GPU stack to create a highly parallelized Mandelbrot Machine.
Basic Background #2: Traditionally, points in the M-set are rendered black. That includes the main cardioid shape, the circular bays, and the threads and mini-brots seen in zooms.
The colorful parts are outside. A traditional method uses the number of calculations it took to escape. This number is then mapped to some color palette. The color of a pixel indicates how long it took to escape.
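As a sketch of that traditional scheme (the RGB values below are made-up placeholders, not any particular renderer’s palette): map the escape count into a small palette, cycling when counts exceed its length, and paint non-escaping points black:

```python
# A hypothetical eight-entry palette (RGB tuples); real palettes
# are usually longer and carefully designed.
PALETTE = [
    (66, 30, 15), (25, 7, 26), (9, 1, 47), (4, 4, 73),
    (0, 7, 100), (12, 44, 138), (24, 82, 177), (57, 125, 209),
]
BLACK = (0, 0, 0)

def color_for(count, max_iters=1000):
    """Map an escape count to an RGB tuple; in-set points are black."""
    if count >= max_iters:                 # never escaped: in the set
        return BLACK
    return PALETTE[count % len(PALETTE)]   # cycle the palette by count
```

Cycling with `%` is the simplest mapping; stretching or log-scaling the count before the lookup is where much of the palette-mapping fun comes in.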
Most of the fun of Mandelbrot renderings is in the coloring schemes, the palettes, and how palettes are mapped. A full exploration would require a college semester (at least).
Basic Background #3: This is geometry on a two-dimensional plane.
That a point “escapes” when the calculation goes above 2.0 refers to its distance from the center.
Each time the calculation repeats, it generates a new point (which is plugged back in for the next repeat).
If the new point is ever further from the center than a distance of 2.0 it “escapes” (it’s been shown that in such cases, the point always diverges to infinity).
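Tracing an orbit is the same loop, but recording each new point along the way. A sketch (the function name is mine):

```python
def orbit(c, max_iters=1000):
    """Return the list of successive orbit points of c, stopping
    once a point escapes (distance from the origin exceeds 2.0)."""
    points = []
    z = 0j
    for _ in range(max_iters):
        z = z * z + c        # plug the new point back in
        points.append(z)
        if abs(z) > 2.0:     # escaped -- the orbit diverges from here
            break
    return points

inside = orbit(-0.5 + 0.25j)   # in the main cardioid: never escapes
outside = orbit(0.5 + 0.5j)    # escapes after a few bounces
```

Plotting the `points` list (as line segments from one to the next) gives exactly the kind of orbit diagrams shown in the figures.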
Orbits figure 1 shows the “orbits” of a point inside the set. (The original point — the one being calculated — is the blue dot in the upper right. The red lines show how it moves after each calculation.)
As you see, the orbits converge on an attractor and never escape.
Orbits figure 2 shows a point that does escape after jumping back and forth (for quite a while) on the left side:
When last seen, the point was headed off for the far right. You can see how it starts jumping up and down, which leads to its escape.
(BTW: This particular point [-0.75,+0.001] is interesting and will be the subject of future posts.)
Basic Background #4: Those big bands, points far from the M-set, are regions where the number of calculation repeats is tiny.
The most distant regions have already escaped, so their count is zero (or one, if all points get at least one calculation to “prime the pump”). Slightly closer regions escape after one, two, or just a few more repeats.
Out in the “flats” the change from, say, two to three is really obvious, so we get those big bands. (Closer in, things get chaotic and fast-changing, and the bands aren’t so apparent.)
(Band smoothing is just a trick to eliminate the edges.)
Okay, so what happened with my code concerns the check to see if the point has escaped. It needs to be performed after every calculation.
It looks basically like this:
```
loop (max=1000) do
    calculate(point)
    if 2.0 < abs(point) then
        exit point(ESCAPE)
end loop
exit point(NOT_ESCAPE)
```
But that simple algorithm, especially the escape test, hides something more complicated.
Taking the absolute value of a point gives us its distance from the origin. The actual calculation being performed there is:

dist = √(x² + y²)

Which, of course, is just the Pythagorean formula. We want to know if dist is greater than 2.0.
But this adds two multiplications, an addition, and (worst) a square root. (Remember: this is virtual math I have to implement, and implementing a square root routine? No thank you very much!)
So the workaround is to cheat and leverage the fact that a distance greater than 2.0 is guaranteed if x² + y² is above 4.0 (’cause its square root is 2.0).
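Put another way: squaring both sides of dist > 2.0 gives x² + y² > 4.0, which drops the square root (though it still costs the two multiplications and the addition). A sketch with plain floats (function names mine):

```python
import math

def escaped_sqrt(x, y):
    """The straightforward test: costs a square root."""
    return math.sqrt(x * x + y * y) > 2.0

def escaped_squared(x, y):
    """Same test on the squared distance: no square root needed."""
    return x * x + y * y > 4.0
```

The two tests always agree, because square root is monotonic: for non-negative values, a > b exactly when √a > √b.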
The problem is that the obvious replacement…
```
if 4.0 < (point.x + point.y) then exit point(ESCAPE)
```
…results in the jagged edges. D’oh!
My Plan B involved using native math, which is adequate for rendering so long as we don’t zoom in too much (or at all). The native math is much (much) faster, so I could experiment.
First I confirmed that the usual approach (taking the absolute value) gave expected results:
So far, so good. Using the x+y approach, I got this:
So, yeah, that approach is the problem (but not the math itself; yay).
But now what? I really don’t want to have to do a distance calculation using my virtual math.
Ah, but if either x or y is above 2.0, that should work…
But the bands are a little close in.
It turns out much better if I test for x or y above 4.0:
And that I can live with. An acceptable compromise in band edges, balanced by not having to do a distance calculation.
(BTW: any “bailout” value above 2.0 works — higher values just shift the banding. That’s all that happened here.)
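A quick native-float experiment illustrates why the image survives the bug. If x + y > 4.0, then at least one coordinate exceeds 2.0, so the distance already exceeds 2.0 — the shortcut is a merely *sufficient* test that can only detect escape later than the true test, never earlier. The counts (and hence the bands) shift, but the set itself still renders. A sketch (function names mine):

```python
def count_with(c, escaped, max_iters=1000):
    """Iterate z = z*z + c, counting until the given escape test fires."""
    z = 0j
    for n in range(max_iters):
        z = z * z + c
        if escaped(z.real, z.imag):
            return n
    return max_iters

def true_test(x, y):
    return x * x + y * y > 4.0     # equivalent to distance > 2.0

def sum_test(x, y):
    return x + y > 4.0             # the buggy shortcut

for c in (0.5 + 0.5j, -1.2 + 0.3j, 0.3 - 0.6j, 1 + 1j):
    # The shortcut can only fire when distance > 2 already holds,
    # so it never detects escape earlier than the true test --
    # only later, which shifts (and distorts) the bands.
    assert count_with(c, true_test) <= count_with(c, sum_test)
```

The distortion (rather than a clean shift) comes from the asymmetry: the shortcut fires quickly when the orbit exits toward the upper right, but can miss exits in other directions for several repeats.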
This isn’t about my code (or even the Mandelbrot, per se). It’s another exploration of algorithms.
A point illustrated here is that subtle aspects of the code can lead to unexpected results in the generated reality. Code is very difficult — perhaps even impossible — to get 100% right.
Even if your code is right, your thinking can be wrong. Good code, wrong idea, it still amounts to a bug — an unexpected (and usually undesired) result.
Another point is the need sometimes to implement basic building blocks you then use to create a virtual reality (in this case, a rendering of the Mandelbrot set).
Here I’m implementing a limited set of operations on arbitrary precision real numbers. That’s the virtual math needed for the Mandelbrot reality.
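My actual implementation isn’t shown here, but one common way to build such virtual math is fixed-point arithmetic on Python’s unbounded integers: pick a number of binary fraction bits and represent each real as a scaled integer. A minimal sketch (the design and names are mine, not the code this post describes):

```python
FRAC_BITS = 160                 # binary fraction bits (as in the post)
ONE = 1 << FRAC_BITS            # fixed-point representation of 1.0

def to_fix(x):
    """Convert a float to fixed-point (carries only float precision,
    which is fine for a sketch)."""
    return int(x * ONE)

def from_fix(f):
    """Convert back to float (approximate, of course)."""
    return f / ONE

def fmul(a, b):
    """Multiply two fixed-point numbers; rescale by shifting."""
    return (a * b) >> FRAC_BITS

# Addition and subtraction are just integer + and - at the same scale.
```

Because Python integers grow as needed, the same code works at 160 bits or 8,192 bits; only the shift constant changes.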
The “arbitrary precision” is what drives the need. The native numbers of most computers are fixed precision. In many cases, the best precision is 53 binary bits, which amounts to 16 or so decimal digits.
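The rule of thumb behind those numbers: each binary bit is worth log₁₀ 2 ≈ 0.30103 decimal digits. A one-line check for the 53-bit case:

```python
import math

# Decimal digits carried by a 53-bit binary fraction (an IEEE double).
digits = 53 * math.log10(2)
print(round(digits, 2))   # → 15.95, i.e. "16 or so" decimal digits
```

The same arithmetic (rounded up) gives the digit counts quoted below for 160, 1024, and 8,192 bits.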
Calculating the deeper parts of the Mandelbrot (which is to say: nearly all of it) requires many hundreds of decimal digits of precision. That sort of thing isn’t available normally except in highly specialized packages (such as fractal generators).
Few things in the real world need that kind of monster precision. Only in the abstract world of math can it matter. (Which, once again, is an argument that there’s something not quite right, or at least damned weird, about the real numbers.)
It takes 12 hours to calculate a 400×400 image because I’m using 160 bits for binary fractions. That gives me 49 decimal digits.
I kept it low to make it even possible to do the calculation of all the points. My intended use for this code is studying what happens to specific points (like the one I mentioned above).
If, for example, I used 1024 binary fraction bits (the default mode), I’d get 309 decimal digits of precision (and very slow calculation times). The code ultimately is limited only by system limits, but for technical reasons I’ve throttled it at 8,192 bits — a whopping 2,467 decimal digits of precision.
It’s a project I’ve had in mind for a long time. Doing it in Python made it easy, but slow. A proof of concept. The next step is doing it in C++ or other low-level language. That’ll make it fast.
The final step is porting that to a GPU and running pixels in parallel!
Stay precise, my friends!
As a bonus, here’s a little movie that animates orbits for a series that follows the cardioid border at different distances (the Trace value — 1.00 would be right on the border). Note how complex the orbits are at close range. (More about this another time.)