There are many science-minded authors and working physicists who write popular science books. While there aren’t as many math-minded authors or working mathematicians writing popular math books, it’s not a null set. I’ve explored two such authors recently: mathematician **Steven Strogatz** and author **David Berlinski**.

Strogatz wrote *The Joy of X* (2012), which was based on his New York Times columns popularizing mathematics. I would call that a must-read for anyone with a general interest in mathematics. I just finished his most recent, *Infinite Powers* (2019), and liked it even more.

Berlinski, on the other hand, I wouldn’t grant space on my bookshelf.

It is, perhaps, an interesting illustration of my judgement, tastes, and biases that I found the two Berlinski books I tried so unpalatable but really took to Strogatz. In fact, I was unable to finish either Berlinski book.

A while back I tried *One, Two, Three: Absolutely Elementary Mathematics* (2011) because I’m interested in the foundations of mathematics, and the description sounded on-point:

In his latest foray into mathematics, David Berlinski takes on the simplest questions that can be asked: What is a number? How do addition, subtraction, multiplication, and division actually work? What are geometry and logic?

But I found his writing meandering and baroque. For me it made reading the book unpleasant; I wasn’t getting anything out of it. After a handful of chapters I gave up and returned the book (to the library).

More recently, because I’m brushing up on my calculus, I checked out an earlier book of his, *A Tour of the Calculus* (1995). I hate to judge a joint on just one meal, but I couldn’t finish this one, either. Same reason — flowery talk without substance.

Berlinski is a prolific author, lots of books and articles — even a few fiction books. Looking him up later, the first paragraph of his Wiki page dropped my jaw (emphasis mine):

David Berlinski (born 1942) is an American author and philosopher who has written books about mathematics and the history of science as well as fiction.

He is a senior fellow of the Discovery Institute’s Center for Science and Culture, a center dedicated to the promulgation of the pseudoscience of intelligent design.

Hey, wait… *what?*

Does this explain anything? Do I connect any dots here?

Yep. I do.

So much for David Berlinski.

**§ §**

Steven Strogatz, on the other hand, has a much cleaner and more engaging writing style.

He’s not nearly as prolific because (I assume) he’s busy doing actual mathematics and teaching at MIT and Cornell, but his books are (at least in my eyes) far more worthwhile.

For a general audience I absolutely recommend *The Joy of X* to anyone with an interest in what makes mathematics so useful (and so beautiful to those who love it).

Even those who don’t like math might find it an enjoyable journey as well as an eye-opener. It’s an easy read intended for those with little or no background in math.

His book about the calculus, *Infinite Powers*, is also intended for casual readers, although it does get into some deeper water while discussing derivatives. (But they are crucial in calculus because they are crucial in life. As a simple and canonical example, speed is the derivative of distance over time.)

What this book covers in wonderful easy detail is that (the) calculus is all about just two things, *derivatives* and *integrals*, and they are Yin-Yang brother and sister to each other.

Calculus was born from the need in science to deal with rates of change, whether it be the curve of a circle, the speed of a moving object, the spread of a virus, or the trajectory of a spacecraft. Everywhere we look, reality is filled with things that *change*.

**§**

Calculus also deals with — *and tames* — infinity. The earliest seeds of calculus come from ancients trying to calculate the area of a circle or the volume of a sphere.

They found the formula for the area of a circle through an ingenious method of cutting the circle into ever smaller “pizza pie” wedges and lining up the slices to form a rectangle for which the area is easy to calculate (it’s just width times height):

As the slices become “infinitely” small, the rearranged slices approach a perfect rectangle for which the area is, as any school kid knows, *“pi-r-squared”* (except any school kid also knows pie are round).
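A few lines of code can show the convergence numerically (a sketch of my own, not from the book; each wedge is treated as a thin triangle):

```python
import math

def wedge_area(radius, n_slices):
    # Cut the circle into n "pizza" wedges, treat each as a thin
    # isoceles triangle, and sum their areas. As n grows, the total
    # approaches the true circle area, pi * r^2.
    central_angle = 2 * math.pi / n_slices
    triangle_area = 0.5 * radius * radius * math.sin(central_angle)
    return n_slices * triangle_area

r = 1.0
for n in (6, 60, 600, 6000):
    print(n, wedge_area(r, n), math.pi * r * r)
```

With each tenfold increase in slices, the total closes in on *pi-r-squared* — the limit the ancients were reaching for.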

This idea of slicing something up into pieces that can be reassembled into a shape that makes the solution easy is the basis of *integrals* in calculus. While historically they were the first aspect of the Yin-Yang nature of calculus, today they’re the *second* topic taught.

The first taught these days is the other sibling, *derivatives*.

**§**

A problem early mathematicians wanted to solve was determining the *slope* of a curve in a graph.

Slope means exactly what you probably think it means. A ramp has a slope; so does a hill or a valley. Flat ground has zero slope, ramping up is positive slope, ramping down is negative slope. (Obviously it depends on direction. On a graph, slope is always measured going from left to right.)

Slope is important, because slope is the rate of change.

A straight ramp with a constant rise of 1 inch per run of 1 foot has a slope of **+1/12** (or **0.08333…**), which is a gentle positive slope. Because the ramp is straight, with a constant rise, the slope is the same everywhere — and easy to determine: it’s just rise over run.
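That constant slope is easy to verify in a couple of lines (a minimal sketch; the function names are mine):

```python
def line(x):
    # A straight ramp: a rise of 1 inch for every 12 inches of run.
    return x / 12.0

def slope_between(f, x0, x1):
    # Rise over run between two points on a curve.
    return (f(x1) - f(x0)) / (x1 - x0)

# For a straight line, any two points give the same slope:
print(slope_between(line, 0, 12))  # 0.08333...
print(slope_between(line, 5, 7))   # 0.08333...
```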

The problem is that nature tends not to be linear, so there aren’t many flat ramps. Most natural rates of change, from populations to falling rocks, vary their rate of change. Even a bathtub drains faster at first and slower as it empties.

This was a lot harder for early mathematicians to figure out, which is why historically it came second.

**§**

Consider the three curves shown in *Figure 2*.

They represent a car that begins standing still, picks up speed to a maximum, and then slows to a stop.

For all three curves, the horizontal axis represents time.

The red curve shows the distance the car covers. It starts flat at zero, has a rise to a steep slope that tapers off at the top to flat when the car stops. The steeper the curve the more distance the car covers in a given period of time — in other words, the steeper the curve, the faster the car.

The blue curve shows the car’s velocity (speed). It also starts and ends at zero because the car is standing still at the beginning and end. However, it peaks in the middle when the car is going at maximum speed and then drops as the car slows down.

The green curve shows the car’s acceleration, which is the *change* in speed. As the speed increases, so does the acceleration, but as the speed levels off at its peak, the acceleration drops to zero. When the car slows, the acceleration is negative, so the curve drops below zero.

One can imagine creating these curves with instruments that measure these properties of the car, but with calculus — specifically with derivatives — all we need is the first curve, the red distance curve. The other two derive from that.
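Here’s a sketch (my own, not from the book) of deriving the other two curves numerically from a made-up distance curve with the same shape — flat, then steep, then flat:

```python
# Hypothetical distance curve: the car starts at rest, speeds up,
# then slows to a stop (a "smoothstep" profile on t in [0, 1]).
def distance(t):
    return t * t * (3 - 2 * t)

def deriv(f, t, h=1e-6):
    # Numerical slope at a point: rise over run on a tiny interval.
    return (f(t + h) - f(t - h)) / (2 * h)

def velocity(t):
    return deriv(distance, t)      # slope of the distance curve

def acceleration(t):
    return deriv(velocity, t)      # slope of the velocity curve

print(velocity(0.0), velocity(0.5), velocity(1.0))  # ~0, ~1.5 (peak), ~0
print(acceleration(0.25), acceleration(0.75))       # positive, then negative
```

Only the distance function was given; the velocity and acceleration curves fall out of it, just as described above.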

**§**

Specifically, velocity is the derivative of distance over time, and the relationship is essentially the same as the flat ramp’s relationship with rise over run.

Put another way, velocity is the rate of change of distance relative to (constant) time. Put yet another way, velocity is the slope of the distance curve at any point along that curve.

If you look at it in that light, the blue curve is just the point-by-point *slope* of the red curve. Since the slope of the red curve is zero, then positive, and finally zero, that’s what the blue curve shows.

Likewise, acceleration is the rate of change of velocity, which makes it the rate of change of the rate of change of distance over time. In other words, it’s the *second derivative* of distance with respect to time (which is why its units involve time *squared*).

[This is why the acceleration of free fall on Earth is 9.8 m/s^{2} — meters per second, per second. Free fall (up to terminal velocity) is constant acceleration.]

Therefore, acceleration is the *slope* of the velocity curve (which, in turn, is the slope of the distance curve). Acceleration is the *derivative* of velocity (which is the derivative of distance).

Note how the blue curve starts at zero, has positive slope to a peak, then has negative slope back down to zero. The green curve is positive as the velocity rises, but negative when the velocity drops.

Also note that acceleration is zero only for an instant as it crosses from positive to negative. This means the velocity was never constant, but always either rising or falling.

**§**

While this historically took longer to figure out and codify, once it was, it turned out to be easy enough to work with (in myriad cases) that we teach it to teenagers. In many situations, derivatives are trivially easy.

[For example, the derivative of the function **x^{2}** is the function **2x**.]

More crucially, calculus took us from seeing curves as objects consisting of data points to functions that map inputs to outputs. Curves have slopes, which are complicated, but functions have derivatives, which are often simple.
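As a numeric check (my own sketch), a tiny rise-over-run approximation recovers **2x** from **x^{2}**:

```python
def f(x):
    return x * x   # the function x^2

def numeric_derivative(f, x, h=1e-6):
    # Approximate the slope at x: rise over run across a tiny interval.
    return (f(x + h) - f(x - h)) / (2 * h)

# The derivative of x^2 is 2x; compare at a few points:
for x in (0.0, 1.0, 3.0, -2.0):
    print(x, numeric_derivative(f, x), 2 * x)
```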

The magic and power of calculus comes from the discovery that derivatives and integrals are Yin-Yang siblings. They do the same thing, but in opposite directions. Unfortunately, exploring that would take another blog post.

[But for example, one integrates the function **2x** to get the function **x^{2}**. The thing about integration is that working backwards — determining the *anti*-derivative of a function — is generally a harder problem than determining the derivative.]
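The Yin-Yang relationship can be checked numerically (a sketch under my own naming): summing thin slices of the derivative **2x** recovers **x^{2}**:

```python
def integrate(f, a, b, n=100_000):
    # Riemann (midpoint) sum: slice [a, b] into thin strips
    # and add up their areas.
    width = (b - a) / n
    return sum(f(a + (i + 0.5) * width) for i in range(n)) * width

# Integrating the derivative 2x from 0 to b recovers x^2 at b:
for b in (1.0, 2.0, 3.0):
    print(b, integrate(lambda x: 2 * x, 0.0, b), b * b)
```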

**§ §**

Some random (high-level) notes from reading *Infinite Powers*:

¶ *Limits are simpler than their approximations.* Whoa; mind blown, but it’s true. One example is 1/3, which is the limit of 0.333… Another example is a circle, which is the limit of a polygon with an increasing number of sides.

Another example is digital music, which is an approximation of the (limit of the) analog signal.

It’s actually an insight into why numerical simulations are always more complicated than what they simulate — they’re approximations to some limit.
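The 1/3 example is easy to watch converge (a small sketch using exact fractions):

```python
from fractions import Fraction

# Each added digit of 0.333... is a messier approximation,
# yet the limit itself is the simple fraction 1/3.
limit = Fraction(1, 3)
approx = Fraction(0)
for k in range(1, 8):
    approx += Fraction(3, 10**k)   # 0.3, 0.33, 0.333, ...
    print(approx, "error:", limit - approx)
```

The approximations have ever-uglier denominators; the limit is just 1/3.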

¶ Calculus took curves from geometric objects to functions and reduced a world of apparently different problems into three: Getting the slope from a curve (derivatives); getting the curve from a slope (anti-derivatives); getting the area under a curve (integrals).

As it turns out, they’re all three sides of one coin. (Obviously a three-sided coin.)

¶ Quadrature explains why the *sin* and *cos* functions are 90° out of phase. Mind blown again. But it’ll take another post to explain.

The short form is that when a point on a wheel is at a given angle, its momentum is 90° relative to that angle. (This ties back to my *Fourier Circle* post.)

¶ Kind of related, the *sin* and *cos* functions are (up to a sign) derivatives of each other. What’s more, the double derivative of either is just -1 times that function. (The double derivative of the *sin* function is the *-sin* function.)

And double derivatives are important. As just seen, acceleration is the double derivative of distance over time. Nature has myriad double derivatives.
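Those relationships are easy to spot-check numerically (a sketch of my own; `deriv` is a simple central-difference slope):

```python
import math

def deriv(f, x, h=1e-6):
    # Numerical slope (central difference) at x.
    return (f(x + h) - f(x - h)) / (2 * h)

x = 0.7  # an arbitrary sample point

# The derivative of sin is cos (and of cos is -sin):
print(deriv(math.sin, x), math.cos(x))
print(deriv(math.cos, x), -math.sin(x))

# The double derivative of sin is -sin:
print(deriv(lambda t: deriv(math.sin, t), x), -math.sin(x))
```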

¶ Kudos to Strogatz for mentioning the importance of Katherine Johnson, Sophie Germain, and Mary Cartwright. (The last two were new to me.) They, along with Emmy Noether, are unsung mathematical giants.

**§**

I highly recommend *The Joy of X* to anyone with any interest in math. I also recommend *Infinite Powers* to anyone with a slightly deeper interest.

To be totally honest, I really wish everyone had the level of mathematical background and knowledge these books convey (the first one for sure).

I suspect it would be a far more sensible world if everyone did. Math teaches clear thinking; it really is as simple as that.

*Stay integral, my friends! Go forth and spread beauty and light.*

∇

November 20th, 2020 at 1:45 pm

This went long (sorry), and I didn’t even get to mention Jim Al-Khalili and his book *Paradox: The Nine Greatest Enigmas in Science*, which I quite enjoyed.

He touches on a number of “paradoxes” in physics — nearly all of which aren’t paradoxes at all. Each chapter touches on a different paradox.

Similar to Strogatz’s *The Joy of X*, I didn’t get a lot out of this — I was well acquainted with the material — but both were enjoyable reads.

November 20th, 2020 at 6:26 pm

I might have to think about The Joy of X at some point. I was reading Morris Kline’s book, which was fascinating at first, discussing how things like the Pythagorean theorem were worked out. Then, suddenly, at some point, it was just math, and more math. I went a chapter or two into that before bailing. Just randomly sampling pages in the Amazon preview, it seems like The Joy avoids that.

Did you read The Joy physically or electronically? I ask because the Kindle edition gets knocked in the Amazon reviews.

November 20th, 2020 at 7:30 pm

There isn’t a lot of math in either *The Joy of X* or *Infinite Powers*. The heaviest going might have to do with derivatives in the latter book.

I read both electronically (through Cloud Library), no complaints (although I do wish it was possible to expand diagrams).

November 20th, 2020 at 9:09 pm

Ok, thanks. It is often possible to expand diagrams in Kindle books, although that probably has to do with how good a job the editors did on the book. Just added Joy to my Kindle account. The trick will be getting around to reading it!

November 20th, 2020 at 11:54 pm

Between two library apps, Apple eBooks, and the Kindle app, I have four different readers, not to mention the differences in how ebooks are put together.

Sometimes I can enlarge a diagram or image, but usually not.

As I recall, I breezed through *The Joy of X* pretty quickly. It’s based on his newspaper articles, so it’s pretty easy reading.

November 21st, 2020 at 10:35 am

Now, if only I weren’t incrementally losing workable time. Out of my conscious 17 hours per day, less and less of them are operationally available. So much time is now devoted to learning this damnable Rust that to think I can read /anything/ is laughable. I would not have thought that at nearly 60yo I’d have to start learning a whole new “thing”.

So, keep posting your findings, (I’m sure you will), and I’ll dribble in your discoveries as I may.

November 21st, 2020 at 12:35 pm

Heh, yeah, so much one could learn, but so little time in life. I’m trying to learn quantum mechanics, but I think I started too late in life (I’m *over* the 60-year mark). I’m finding it a real challenge.

(FWIW: Learning is so important, I made “Learn to learn” my first Principle of Programming!)

November 23rd, 2020 at 9:22 am

I’m currently reading *The Information* by James Gleick. Last July I read and reviewed his 2016 book, *Time Travel: A History*, and found it “ambling, rambling, and meandering.”

I’m finding the same thing here. I’m page flipping through large sections — most of the second chapter, in fact (which was about a narrow aspect of the development of language that went on and on, page after interminable page).

There’s nothing wrong with these books other than they bore me silly, and that’s just a matter of my personal taste. Just not into an ambling, rambling, meandering trip through history.

I think that’s the real problem. Gleick is a historian, and I’ve never cared much about how things got to be what they are. I care about what things are *now*. (I find that more than complicated enough for me!)

November 28th, 2020 at 12:32 pm

OMG! I’m finally starting to understand the Schrödinger equation!

(Emphasize *starting* to… 😮 )