Sideband #68: More Fraction Digits

The last Sideband discussed two algorithms for producing digit strings in any number base (or radix) for integer and fractional numeric values. There are some minor points I didn’t have room to explore in that post, hence this followup post. I’ll warn you now: I am going to get down in the mathematical weeds a bit.

If you had any interest in expressing numbers in different bases, or wondered how other bases do fractions, the first post covered that. This post discusses some details I want to document.

The big one concerns numeric precision and accuracy.

Accuracy is always relative to some abstract perfect value.

The accuracy of a measurement refers to how close the measurement numbers are to the underlying reality (the “ground truth” if you will). An estimate of how many jelly beans are in a jar, or how many people are in a crowd, has an accuracy relative to the true number.

Precision is the number of valid (that is: accurate) digits in a number’s expression.

Note that accuracy applies to the numeric value and has little to do with the actual digits in the number’s expression. Precision, however, strictly applies to the expression of the value.

§

A completely accurate number implies “full precision” — every digit of the number is accurate. But reducing precision, and therefore the accuracy, allows for shorter numbers.

For example, the speed of light is conveniently remembered as 300,000,000 meters per second. But the full-precision (fully accurate) number is 299,792,458.

The first one may not be completely accurate, but it has the advantage of a short form: 3e8 (which is a common way to write 3×10^8). It’s also easier to remember — two digits versus a string of nine.

Another example is the Planck Length, which is often written as 1.616255×10^-35 meters (with an uncertainty of 18 in the last two digits). An easy-to-remember form is 1.616e-35.

The first one has seven digits of precision, the second one has just four. All the zeros (34 of them) between the decimal point and these digits aren’t counted in this case. What matters is the number of digits expressing a value.

Likewise, the speed of light examples have one and nine digits of precision respectively. All the zeros after the 3 don’t matter.

(Of course, zeros are counted if embedded: 1.70035 has a precision of six digits. The point is how many digits are required to express the number.)

§

Why this matters is illustrated by the speed of light example. We can store (or remember) just one digit, or we can store nine. More precision means bigger numbers.

Imagine an eight-bit register or memory location. It can store any number from 0 to 255. That means it has — at best — three digits of precision. If there were also an eight-bit exponent, this two-byte setup could express the speed of light as 3e8.

But it could not express the more accurate 299,792,458 because that requires nine digits of precision.

Computers these days have at least 32-bit registers, and many have 64-bit ones. The former can count from 0 to 4,294,967,295, which allows nine digits of precision comfortably. The latter counts up to a whopping big 18,446,744,073,709,551,615 — 19 digits of precision.

That’s great for integers. Some systems even allow arbitrary-length integers, so integers can be as big as needed (subject to system sanity) with full precision.

§

When number formats must support fractions, things get complicated. The rational numbers are a whole other ballgame.

The two most common formats, single- and double-precision floating point, offer 24 and 53 bits of precision. That amounts to about 7 and 16 decimal digits of precision respectively.

We can calculate how many decimal digits result from a given number of bits with this formula: digits = log(2) × number-of-bits. The base 10 log of two is 0.301029+, so basically multiply the number of bits by three-tenths.

For my purposes (related to Mandelbrot calculations), these formats are seriously under-powered. Unusable, in fact. Mandelbrot calculations require hundreds, and sometimes thousands, of digits of precision.

We can reverse the formula above to determine how many bits are required. If we wanted, say, 500 digits of decimal precision, we divide that by log(2), roughly three-tenths: 500 ÷ log(2) ≈ 1661 bits (which requires a 208-byte number).
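
Here’s a quick sketch of both conversions in Python (the function names are mine, purely for illustration):

    import math

    LOG10_2 = math.log10(2)  # 0.301029995...

    def bits_to_digits(bits):
        # Approximate decimal digits of precision for a given bit count.
        return bits * LOG10_2

    def digits_to_bits(digits):
        # Bits required for a desired count of decimal digits (round up).
        return math.ceil(digits / LOG10_2)

    print(bits_to_digits(24))    # ~7.22  (single precision)
    print(bits_to_digits(53))    # ~15.95 (double precision)
    print(digits_to_bits(500))   # 1661 bits, or 208 bytes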

And 208-byte numbers are decidedly non-standard.

§ §

As discussed last time, fractional expressions are often infinite strings. In truth, all fractional expressions are infinite strings, but in some the repeating part is just zeros.

For example, our lowly friend, 0.1, actually has an infinite string of zeros following the 1. Even a number like 5.0 has an infinite string of zeros. We just ignore that (as we do leading zeros) because they don’t affect the precision or accuracy.

In fact, little old 0.1 (one-tenth) only looks so trim in our familiar base ten. I mentioned last time how it’s a repeating sequence in binary. It’s a repeating sequence in all bases except 10, 20, 30, and so on (bases that are multiples of ten).

Here are the digit strings for one-tenth in bases 2 through 16 (truncated to 32 places):

 2: 0.00011001100110011001100110011001
 3: 0.00220022002200220022002200220022
 4: 0.01212121212121212121212121212121
 5: 0.02222222222222222222222222222222
 6: 0.03333333333333333333333333333333
 7: 0.04620462046204620462046204620462
 8: 0.06314631463146314631463146314631
 9: 0.08080808080808080808080808080808
10: 0.10000000000000000000000000000000
11: 0.11111111111111111111111111111111
12: 0.12497249724972497249724972497249
13: 0.13b913b913b913b913b913b913b913b9
14: 0.15858585858585858585858585858585
15: 0.17777777777777777777777777777777
16: 0.19999999999999999999999999999999

Note that each line shows the digits of one-tenth in that base. You can see there is a repeating sequence in all cases. (In base ten, it’s the repeating zeros.)

This also illustrates how, as the radix increases, the digit string for one-tenth, read as if it were a decimal number, gets larger and larger. In base 20, one-tenth is written 0.2.
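
For the curious, here’s a rough sketch of how such a table can be generated, using only integer arithmetic on the fraction 1/10 (it anticipates the p/q algorithm described near the end of this post):

    DIGITS = "0123456789abcdef"

    def tenth_in_base(base, places=32):
        # Digit string for 1/10 in the given base, truncated to `places`.
        p, q = 1, 10
        digits = []
        for _ in range(places):
            p *= base
            digits.append(DIGITS[p // q])
            p %= q
        return "0." + "".join(digits)

    for base in range(2, 17):
        print(f"{base:2}: {tenth_in_base(base)}")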

§

Precision affects accuracy when it’s not full. To illustrate, I’ve converted our friend one-tenth into binary (base 2) at increasing levels of precision. Then I converted the binary back to a decimal string.

The following table shows how accurate the representation of 0.1 is at different levels of precision (last three truncated due to length):

 8: 0.09765625
16: 0.0999908447265625
24: 0.099999964237213134765625
32: 0.09999999986030161380767822265625
48: 0.09999999999999786837179271969944…
64: 0.09999999999999999996747393482543…
80: 0.09999999999999999999999950369163…

First, notice that the number of decimal digits matches the binary precision. For example, the last one has 80 decimal digits. There’s a reason for that.

Next, notice the sequence of 9s. They get longer as precision increases. Essentially, you can see the accuracy improve. I’ll get back to both these points.

Finally, notice that these values are all slightly less than 0.1 (although, by the time we’re at 32 bits of precision, the difference is tiny and gets even tinier as precision increases).

In all cases, increasing the binary value by the least significant bit — the smallest possible addition we could make — pushes the value above 0.1.

For instance, the binary for the eight-bit version is 0.00011001. If we increase that by 0.00000001, the smallest amount we can add, we get the binary value 0.00011010. The decimal value of that is: 0.10156250, which is greater than one-tenth.

Even the 80-bit version suffers from this. If we again add the smallest possible amount, the decimal value becomes too large. Because decimal 0.1 doesn’t have an exact binary equivalent, we’re forced to accept a binary representation that is either a bit too small or a bit too large.

All we can do is use enough precision to make the accuracy acceptable for a given purpose.
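
Here’s a sketch of that check in Python, using exact rational arithmetic (the function name is mine):

    from fractions import Fraction

    def tenth_truncated(bits):
        # 0.1 truncated to `bits` binary fraction bits, as an exact Fraction.
        numer = (1 << bits) // 10        # floor(0.1 * 2^bits)
        return Fraction(numer, 1 << bits)

    for bits in (8, 16, 24, 32, 48, 64, 80):
        low = tenth_truncated(bits)
        high = low + Fraction(1, 1 << bits)  # bump by one least-significant bit
        print(bits, low < Fraction(1, 10) < high)  # True every time

The truncated value always falls just below one-tenth, and adding a single bit always overshoots it.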

§

Regarding the link between binary precision (number of bits) and decimal precision (number of digits), if we look at the first eight binary fraction digits, we notice something striking:

1: 0.5
2: 0.25
3: 0.125
4: 0.0625
5: 0.03125
6: 0.015625
7: 0.0078125
8: 0.00390625

Each bit results in a decimal number one digit longer than the previous. In fact, this pattern persists for all fractional binary bits. A binary number with 8000 bits (1000 bytes) results in a decimal fraction with 8000 digits.
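
The reason: 2^-n equals 5^n/10^n, and powers of five never end in zero, so each extra bit adds exactly one decimal digit. A quick demonstration (a sketch, nothing more):

    # 2^-n in decimal always has exactly n fraction digits,
    # because 2^-n = 5^n / 10^n and 5^n never ends in a zero.
    for n in range(1, 9):
        digits = str(5 ** n)           # the significant digits of 2^-n
        print(n, "0." + digits.zfill(n))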

But the accuracy — the valid digits of precision — is just three-tenths of the total. That’s why we saw a string of 9s (valid digits) followed by a much longer string of “trash” digits.

That 8000-bit number, for example, has 2,408 valid digits of precision followed by 5,592 trash digits. (And if we added the smallest possible amount, a binary fraction with 7999 zeros followed by a 1, the resulting number would be too large.)

But 0.099999… for over 2400 digits of 9 is a pretty accurate version of 0.1 even with a lot of trash on the end.

§ §

Lastly, one might complain that, while the integer base conversion algorithm takes a numeric value as input, the fractional one takes a string. That implies the second one relies on a special “trick” the first doesn’t need.

That’s a fair criticism; in fact, it does. It uses the length of the number’s expression, which merges the worlds of strings and numbers. And it uses strings to do its work.

That bothered me, too, although using strings actually works for my purposes. Most of what I deal with involves input numbers, either typed or from a text file, so strings are what I’m dealing with.

But it did annoy my sense of symmetry, so here’s an algorithm that is much more like the integer one:

  1. Accept input in the form p/q — i.e. a rational fraction.
  2. If p is zero, we’re done.
  3. Multiply p by the desired base.
  4. If p < q, add a zero to the output queue.
  5. If q <= p, integer divide p by q for a quotient and remainder.
  6. Add the (integer) quotient to the output queue.
  7. The remainder becomes the new p.
  8. Go to step #2.

This algorithm takes numeric inputs, handles any fraction (it does require that p < q, in other words, a value below 1.0), and works in any base.
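
Here’s a minimal Python sketch of that algorithm (my own rendering, with an illustrative name and an added cutoff so repeating fractions terminate):

    def frac_digits(p, q, base, places=32):
        # Fraction digits of p/q (requires p < q) in `base`, truncated to `places`.
        digits = []
        while p and len(digits) < places:       # step 2, plus a length cutoff
            p *= base                           # step 3
            if p < q:                           # step 4
                digits.append(0)
            else:                               # steps 5-7
                digits.append(p // q)
                p %= q
        return digits

    print(frac_digits(1, 10, 2))    # [0, 0, 0, 1, 1, 0, 0, 1, ...]

Because p and q remain integers throughout, the expansion is exact for as many places as you care to generate.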

[Because “All your base…” you know the rest. 😀 ]

§ §

And that’s it for fractions in any base.

I hope you found it useful in your day-to-day life.

I know I have!

Stay basic, my friends!



3 responses to “Sideband #68: More Fraction Digits”

  • Wyrd Smythe

    For those interested, I will be posting some code on my programming blog, so stay tuned.

  • Wyrd Smythe

    One advantage of the p/q-input algorithm over the string-input one is that it can calculate exact any-base fractions of numeric values that don’t express exactly in base 10. For example, one-third.

    The binary expansion of one-third is 0.010101…

    The “01” pairs go on forever without change.

    But using the decimal string as input, the value must be rounded off, for instance: “0.333333333” — which (in hex) becomes:

    55555553e6d4575218c8

    The leading 5s have the right bit pattern, “0101”, but as you see, the 5s do not go on forever like they should. This number has only 80 bits of precision, and greater precision would just delay the point where the 5s turn to trash. This 80-bit value translated back to decimal is:

    0.333333332999999999999999221…

    Which is close to the input, but not exact. Compare with the 80-bit expansion of the precise version:

    0.33333333333333333333333305760…

    So it’s a bit more accurate to the true value. Note in both cases how the precision turns to trash at the same point: 24 decimal digits, which is log(2) × 80 bits ≈ 24 digits.
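
    Using the p/q sketch from the post body, the exact expansion falls out directly (again, frac_digits is my illustrative name):

        print(frac_digits(1, 3, 16, 20))   # all 5s; they never turn to trash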

  • Wyrd Smythe

    This is the second time recently that YouTube has recommended a video that perfectly matches something I wouldn’t have thought YT was aware I was interested in. But this one is exactly on point for this post.

    I only wish it had been recommended a week ago!

    I assume this is synchronicity, but it’s a little eerie how perfect some of these “random” videos are. Matches not to what I’ve been watching (although there are those, of course), but to other things I’m doing elsewhere.

    Like writing a post that vamps on accuracy and precision and then being handed a four-year-old video about exactly that. 😮
