Operators represent the *observables* of a quantum system. All measurable properties are represented mathematically by an operator.

But they’re a bit difficult to explain with plain words.

There *are* words — precise, exact words that work great if one speaks the lingo. (And of course there’s math.) The trick is finding the larger mass of simpler words that can do almost as good a job.

We started with the notion of a **vector space** — where we see every point in the space as the tip of an arrow that has its tail at the origin. To that we added the notion of *transformations* that map those vectors to other vectors in the space.

Very simply put, an **operator** is a mathematical function (something that takes one or more values and returns a value) that accomplishes a given transformation on the space.

Typically, a transforming or mapping function is called an *operator* if the space in question represents the states of some physical system. The transformation *operates* on the system states.

So an *operator* is a function (or something we see as a function) that takes a state, changes it, and returns a new state. (Implicitly, it transforms *all* the vectors in the space.)

**§ §**

In the previous post in this series, I introduced a number of basic transforms applicable to a 2D Euclidean space. This time I’ll explore those again focusing on implementing them with operators.

The first one was the trivial *Null* transform that left the vectors unchanged. I used a simple form of notation to express this:

Last time I used **z** as the input because I was using the complex plane as the vector space, and **z** is the traditional symbol for a complex number.

As written above, we have transformation **T** which takes a vector and returns it unchanged.

We could just leave it at that, but it doesn’t give us a general case for transforms. The definition works for this transform, but not for any other type. However, *because* the transforms of interest here are *linear* transforms, we can use the matrices of linear algebra to implement them.

Doing this requires we express our vectors as column vectors — as matrices with one column and as many rows as there are dimensions. The complex plane has two dimensions, so our column vectors have two rows:

In 3D space they would have three. Quantum mechanics often deals with spaces with a vast number, even an infinite number, of dimensions, so conceptually these vectors can have many rows.

Now we need some square matrix to represent the *Null* transformation operator. We need a matrix that, when multiplied against the input vector, gives us that same vector. We need the matrix equivalent of multiplying by one, and that’s the **identity matrix**:

This important matrix exists in as many dimensions as required and consists of all zeros except for ones along the **main diagonal**. Multiplying this matrix with any other matrix just gives that other matrix.

Now we can write our *Null* transform like this:

Interested readers are urged to try this. Pick some random values for the **x** and **y** components of a vector and multiply it by the identity matrix.
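For anyone who wants to play along in code, here’s a minimal sketch in plain Python. (The `matvec` helper and the sample values are mine, not from the post.)

```python
def matvec(M, v):
    # multiply a 2x2 matrix by a column vector (written as a plain list)
    return [M[0][0]*v[0] + M[0][1]*v[1],
            M[1][0]*v[0] + M[1][1]*v[1]]

I = [[1, 0],
     [0, 1]]        # the identity matrix

v = [3.0, -2.5]     # any random [x, y] values work here

# The Null transform: multiplying by the identity returns the same vector
assert matvec(I, v) == v
```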

Now we have a general case for our transforms:

Any operator (transform) can be represented by a matrix **M**.

**§**

The *Zero* transform, which reduced all vectors to the zero vector, I notated as:

We can now define it as:

Again the reader is encouraged to try these, but it should be pretty obvious the result has to be [0, 0].
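As a quick numeric check (same plain-Python setup as before; the helper name is mine):

```python
def matvec(M, v):
    # multiply a 2x2 matrix by a column vector (as a plain list)
    return [M[0][0]*v[0] + M[0][1]*v[1],
            M[1][0]*v[0] + M[1][1]*v[1]]

Z = [[0, 0],
     [0, 0]]        # the Zero transform's matrix

# Any input vector collapses to the zero vector
assert matvec(Z, [3.0, -2.5]) == [0, 0]
```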

Note that the original simple definition doesn’t return a vector, but the scalar zero value. I originally defined it as T(z)=[0,0] to make the result a vector but decided to stick to a simpler notation. Our matrix versions always return a column vector.

(There is also the fact that, in that notation, [0, 0] reads as a row vector, while here we’re dealing with column vectors. I didn’t want to muddy those waters.)

**§**

There was also the *Real* transform, which collapsed the 2D space into a 1D space along the X-axis (the real number line). I notated it as:

Which, again, returns a real number, not a vector. I could have written it as T(z)=[Re(z), 0], but that seemed too confusing. (It would have been slightly more correct, though.)

We define the matrix version like this:

Readers definitely should try their hand with this one.

I didn’t show you an *Imag* version that collapses the space to a 1D line on the imaginary axis (the Y-axis). That would have been notated as T(z)=[0, Im(z)], which I thought was even more confusing.

It has a matrix implementation like this:

(I did show a similar transform when I scaled the respective axes. See below.)

You may get a sense of what’s going on by comparing these to the identity matrix. Each uses one column of that matrix, which preserves one axis, while using zeros in the other column, which (as seen in the *Zero* transform) collapses the other axis.
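In code (the matrix names `REAL` and `IMAG` are mine), the one-preserved-column pattern looks like this:

```python
def matvec(M, v):
    return [M[0][0]*v[0] + M[0][1]*v[1],
            M[1][0]*v[0] + M[1][1]*v[1]]

REAL = [[1, 0],     # first column from the identity: preserves x
        [0, 0]]     # zero column: collapses y

IMAG = [[0, 0],     # zero column: collapses x
        [0, 1]]     # second column from the identity: preserves y

v = [3.0, 4.0]
assert matvec(REAL, v) == [3.0, 0.0]   # collapsed onto the real axis
assert matvec(IMAG, v) == [0.0, 4.0]   # collapsed onto the imaginary axis
```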

**§**

Speaking of scaling, the next transform was scaling the space — which can either enlarge or contract the vectors.

The notation was:

The simplified notation conceals what is actually a two-step process. As written, the transform takes both a vector, **v**, and a scaling factor.

That’s redundant and unnecessary. The more accurate way to notate it might be:

Which defines the transform to scale to a certain factor — it’s built into the definition of the transform, not a required parameter. Note that the free parameter, **x**, is not defined here but carried through to the transform, which will take a parameter to resolve the missing value.

[In more detail: The function *Scale*, given a scaling factor, returns a new function that scales to that factor. The new function requires (and of course returns) a vector, as all transforms do.]

The matrix version is:

This shrinks the entire space by **0.5**. To expand by, say, **1.5**:

Note that both reflect the identity matrix, but put the scaling factor where the ones went.
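The two-step process described above can be sketched as a function that *returns* the matrix for a given factor. (A plain-Python sketch; the function name is mine.)

```python
def matvec(M, v):
    return [M[0][0]*v[0] + M[0][1]*v[1],
            M[1][0]*v[0] + M[1][1]*v[1]]

def scale(s):
    # step one: build the transform (matrix) for scaling factor s;
    # it's the identity matrix with s where the ones went
    return [[s, 0],
            [0, s]]

# step two: apply the transform to a vector
assert matvec(scale(0.5), [4.0, 2.0]) == [2.0, 1.0]   # shrink
assert matvec(scale(1.5), [4.0, 2.0]) == [6.0, 3.0]   # expand
```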

**§**

Combining the axis collapse and scaling resulted in two transforms I showed last time, *ScaleX* and *ScaleY*. These didn’t collapse their axis, just scaled it.

And:

I didn’t show a simple notation for these because there really isn’t one that’s illustrative. The best might be something like (for *ScaleY*):

Which, even worse than the *Real* transform, seemed too confusing for the post. Representing these as matrix transforms, however, is easy.

We define *ScaleY(0.25)* as:

In order to scale the vertical Y-axis by (in this case) 0.25 (as shown in the previous post).

We likewise define *ScaleX(0.25)* as:

They’re like the overall scaling matrix, but only for one axis. The other column reflects the identity matrix and preserves the axis.
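A quick sketch of the single-axis versions (again, the function names are mine, not canonical):

```python
def matvec(M, v):
    return [M[0][0]*v[0] + M[0][1]*v[1],
            M[1][0]*v[0] + M[1][1]*v[1]]

def scale_x(s):
    return [[s, 0],
            [0, 1]]   # second column matches the identity: Y-axis preserved

def scale_y(s):
    return [[1, 0],   # first column matches the identity: X-axis preserved
            [0, s]]

v = [4.0, 4.0]
assert matvec(scale_y(0.25), v) == [4.0, 1.0]
assert matvec(scale_x(0.25), v) == [1.0, 4.0]
```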

**§**

The transform matrices (operators!) so far have all been variations on the identity matrix. Because of that, these transforms preserved the orthogonality (right-angles) of the space.

That *isn’t* true of what I called the *Lorentz* transform. That operation shifted the space on the diagonal:

And, again, there isn’t a useful easy notation for it (even more so than with previous examples). But as before, the matrix version is easy:

By easy I mean such a matrix is defined as:

Where **v** (in this one case in this post) is the velocity, a scalar, rather than the input vector.

Doing the math gives us the transform:

Note that **v** is once again the input vector here.

**§**

Another transform I showed was a rotation:

As with the scaling and Lorentz transforms, we effectively need a function that gives us the matrix we need for a given rotation. (Note that such a “function” can just be knowing how to define the matrix.)

We define a rotation matrix like this:

Where *theta* (θ) is the angle we want to rotate the space. The rotation shown above is 30° so, doing the math, the matrix transform is:

Note that we can scale the rotation by multiplying those matrix numbers by a scaling constant.
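The rotation matrix, and the 30° example, can be checked numerically. (A sketch using the standard counter-clockwise rotation matrix; the helper names are mine.)

```python
import math

def matvec(M, v):
    return [M[0][0]*v[0] + M[0][1]*v[1],
            M[1][0]*v[0] + M[1][1]*v[1]]

def rotation(theta):
    # standard counter-clockwise rotation matrix for angle theta (radians)
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s],
            [s,  c]]

# Rotating [1, 0] by 30° should land on [cos 30°, sin 30°]
x, y = matvec(rotation(math.radians(30)), [1.0, 0.0])
assert abs(x - math.sqrt(3) / 2) < 1e-12   # cos 30° ≈ 0.866
assert abs(y - 0.5) < 1e-12                # sin 30° = 0.5
```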

I covered matrix-based rotations extensively in two previous posts: **Matrix Rotation** and **Matrix Magic**.

**§**

The final transform is a shear transform:

A shear, as with the Lorentz transform, doesn’t preserve right angles. (But, as with all linear transforms, it does preserve straight lines.)

Shear transforms come in too many varieties for a canonical form. The one above looks like this:

You can make sense of this by using what the **Matrix Rotation** and **Matrix Magic** posts explained about the *i-hat* and *j-hat* column vectors.
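Since there’s no canonical form, here’s one common variety as a sketch, a horizontal shear (the specific matrix shown in the post may differ; this one and its parameter `k` are my choice for illustration):

```python
def matvec(M, v):
    return [M[0][0]*v[0] + M[0][1]*v[1],
            M[1][0]*v[0] + M[1][1]*v[1]]

def shear_x(k):
    # horizontal shear: x' = x + k*y, y' = y
    # i-hat stays at [1, 0]; j-hat tilts over to [k, 1]
    return [[1, k],
            [0, 1]]

assert matvec(shear_x(1.0), [2.0, 3.0]) == [5.0, 3.0]   # x shifted by y
```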

I’ll also again point you to Grant Sanderson’s Linear Algebra playlist on his outstanding **3Blue1Brown** YouTube channel.

**§ §**

I’ll revisit the use of matrix operators when this series gets more into quantum computing because “logic gates” in QC are defined as matrices. (They’re also important with regard to quantum spin.)

Next time I’ll get into eigenvectors and eigenvalues.

Stay operating, my friends! Go forth and spread beauty and light.

∇

As is usually the case when talking about the mind and consciousness, considerable speculation *is* involved — there remain so many unknowns. A big one involves the notion of free will.

I just read an article that seems to support an idea I have about that.

The article, which I saw in **Salon** (dot com), is **Most brain activity is “background noise” — and that’s upending our understanding of consciousness**, by Thomas Nail, Professor of Philosophy at the University of Denver.

The article starts off with why it can be so hard to answer the question, *“What are you thinking about right now?”*:

95 percent of your brain’s activity is entirely unconscious. Of the remaining 5 percent of brain activity, only around half is intentionally directed. The vast majority of what goes on in our heads is unknown and unintentional. Neuroscientists call these activities “spontaneous fluctuations,” because they are unpredictable and seemingly unconnected to any specific behavior.

It goes on to compare our minds to ships tossed at sea, and I think we’ve all experienced being at sea mentally, our thoughts seemingly directed every which way.

Nail brings up the question of why our brain, which is just 2% of our body’s mass, uses a whopping 20% of our energy to run a system that produces so much noise.

That seems generally contrary to how evolution works. (Although evolution is certainly not without its excesses and outright mistakes. In some cases, traits selected for sexual desirability put a strain on the animal itself. Think of the giant bills, tails, and plumes, of some birds, for instance.)

Still, it doesn’t seem all that likely a noisy brain would be accidental or a byproduct of an evolutionary misstep. The alternative is that it serves an important purpose.

**§**

Let me interrupt here to mention an idea I’ve had for a while now about how our brains might be one of the only non-deterministic physical systems that exist.

In part it comes from their sheer complexity, but a big part of my speculation has been that our minds are filled with background thoughts and mental noise, and that our consciousness is able to sift through and select from that in ways that defy physical determinism.

Ways that seem random physically, but which are driven by our consciousness selecting amid the noise of equally available thoughts.

I’ve spent a lot of time watching my mind as I decide what to have for dinner. Especially in cases where I’ve decided soup sounds good, what causes me to select clam chowder over minestrone or lentil or any of the other varieties I keep in my pantry? (I like soup.)

Determinism claims there is some chain of physicality that inevitably leads to the soup I select. A vexing issue about free will versus determinism is why it *feels* like I’m making a free choice, especially when I go back and forth trying to pick a soup.

The question is, if reality could be rewound to that moment, could I have chosen a different soup? It *feels* like that’s possible.

But is it? Is the sense we have of free will an illusion? If it *isn’t*, what’s the mechanism that provides for it?

**§**

Back to the Salon article. Nail writes:

Many brain studies of consciousness still look only at brain activity that responds to external stimuli and triggers a mental state. The rest of the “noise” is “averaged out” of the data.

Then he endears himself to me a bit by mentioning computationalism, a topic I’ve spent a lot of blog posts trying to unpack.

Nail continues:

This is still the prevailing approach in most contemporary neuroscience, and yields a “computational” input-output model of consciousness. In this neuroscientific model, so-called “information” transfers from our senses to our brains.

Then came the part that made me smile:

Yet the pioneering French neuroscientist Stanislas Dehaene considers this view “deeply wrong.” “Spontaneous activity is one of the most frequently overlooked features” of consciousness, he writes. Unlike engineers who design digital transistors with discrete voltages for 0s and 1s to resist background noise, neurons in the brain work differently. Neurons amplify the noise and even use it to help generate novel solutions to complex problems. In part, this is why the neuronal architecture of our brains has a branching fractal geometry and not a linear one. The vast majority of our brain activity proceeds divergently, creating many possible associations and not convergently into just one.

Which, yeah, is very much what I’ve been thinking about the brain. One argument I’ve made repeatedly is that, no, neurons are **not** like logic gates. They’re much more like signal processors, and rather noisy ones at that.

[Note: The article links to Dehaene’s book. I linked to his Wiki page.]

**§**

Nail writes that multiple scientists are addressing the issue of spontaneous fluctuations in a new field *“known as the ‘neuroscience of spontaneous thought.’ Several critical studies in this area have shown that cognitive flux, or ‘spontaneous fluctuation,’ is not secondary to but rather fundamental for consciousness.”*

Apparently the *“frequency and distribution of this flux can even accurately predict whether someone is conscious or unconscious.”*

Nail suggests this view could be a game-changer for theories of mind.

As a music lover, I liked the analogies to our minds acting like a jazz band:

Our “metastable minds” are emergent properties of lower frequency fluctuations that conjoin into “nested hierarchies” with higher frequency fluctuations. Neuroscientists call this process “cross-frequency coupling.” It works a lot like syncopation in music. At the lowest frequencies, the drums lay down a beat. In-between these beats, the bass plays a rhythm, and in-between the notes of that rhythm, the guitar plays a melody. The song is a sound-wave made of sound-waves.

I like his stream analogy, too:

In most theories, consciousness is “mission control” perturbed by background noises. But consciousness functions more like an eddy in a river in this new model. Just as whirling patterns emerge from turbulent waters, our stream of conscious thoughts and feelings arise from the torrent of spontaneous brain fluctuations.

He actually gets quite poetic about it:

Our brains respond to these frequencies with their own spontaneous fluctuations. They play between the waves with melodies that make up our thoughts and feelings. Like a jazz trio, the world, body, and brain have their own spontaneous fluctuations that are the basis of the creative improvisation we call existence. The world, body, and brain entrain with one another like interlocking eddies floating down a stream.

At the end of the article Nail explores the idea of this cognitive flux and mental health, which is a topic of extra concern in these COVID-stressed times.

If the recent theories of these fluctuations are correct and consciousness emerges noisily from the bottom-up, this suggests a different mental health treatment model. Over time, spontaneous brain activity can become entrained and coupled into negative perceptions and rigid mental habits that constrain the lower frequencies. Higher frequency brain activity can act as a “filter” on our incoming perceptions and feelings about ourselves and the world. In particular, recent research on cognitive flux shows that depressive and anxious rumination occurs at some of the highest levels of nested activity in a region scientists recently named the “default mode network.”

Which ties into the idea that one can work oneself into depression by constantly playing the same mental grooves. Over time, the brain seems to lock into this mode to the point that, traditionally, anti-depressant drugs are necessary to pull it back to normal.

**§ §**

I liked this so much I couldn’t help quoting so much of it. The article is much longer, though; I haven’t spilled all the beans. It’s good reading for those interested in theories of mind.

Obviously it’s speculative, but it does sound well-grounded to me.

And, just maybe, it provides that mechanism for free will.

Stay mentally noisy, my friends! Go forth and spread beauty and light.

∇


Excellent and thorough tutorials exist for those interested in digging into either topic, but (as with matrix math) I thought a high-altitude flyover might be helpful in pointing out important concepts.

The irony, as it turns out, is that trig is actually pretty easy!

At least the basics are. The thing to keep in mind about trigonometry is that it’s just triangles. And right-angle triangles at that.

[I’m reminded of an old *calm down and carry on* saying among computer geeks: “Calm down, it’s just ones and zeros.” So for trig: Calm down, it’s just triangles.]

But the *other* thing about trig (again, as with matrix math) is that it’s often taught by teachers without a fundamental understanding or who even share that vague fear and loathing. Math, in general, often suffers from rote teaching by teachers who learned it the same way.

If I may be forgiven for one last comparison to matrix math, in the defense of teachers these may not be the easiest subjects to *teach*. As with any subject, there are gestalts that provide a “big picture” view. Without those, one has little else to resort to than those rote rules and procedures which, at best, are boring and, at worst, are utterly opaque and off-putting.

And, to repeat, most people just don’t run into trigonometry very often in life. Who among us has *ever* needed to calculate the height of a flagpole using trigonometry? It does seem to fall under the frequent student complaint: *Why must I learn this? What’s it good for?*

The intent here is to try to provide those basic gestalts — a basic understanding of trigonometry on a fundamental level. And, perhaps, to demonstrate what it’s good for (besides figuring out how tall a flagpole is).

**§**

As mentioned, it’s all just right-angle triangles.

*Figure 1* shows the canonical trigonometry right-angle triangle. It stands for all triangles with the following two properties:

• One of the three corners has a 90° angle.

• The long side is *set to* a length of 1 (no matter how long it actually is).

Such triangles can be tall and skinny, or squat and broad, or (as in *Figure 1*) somewhere in the middle.

A primary characteristic of triangles in flat space is that the sum of their three angles is always 180°. (In fact, a test of a space to see if it’s flat is to see if this is true. If it isn’t, the space isn’t flat.) Since a trig triangle has a 90° angle, the other two angles must sum to 90°.

Effectively that means we only care about one of the angles (which we call *theta*, θ), since the other angle is always 90-θ. For example, if *theta* is 30° then the other angle is 90-30=60°.

Setting the long side (the *hypotenuse*) to 1 makes the math easier. For real-world triangles with real-world lengths, we use the actual value of the hypotenuse as a scaling factor. For example, if the hypotenuse is actually 4.5, then, as you’ll see, we scale our various trig values for the sides by that much.

A key point is that the 90° angle, along with locking the hypotenuse to 1, constrains the **a** and **b** sides. If side **a** is long, side **b** must be short.

Conversely, if side **b** is long, side **a** must be short.

Note that, as shown in *Figure 2*, the orientation of the triangle isn’t important, only that it has a 90° angle.

Note also that in *Figure 2* I kept the hypotenuses the same length, even though that length (per the background grid) is obviously not 1. In fact the actual length (per the grid) is 4.1231… (it’s the square root of 17).

Which brings us to another important aspect of right-angle triangles and trigonometry: *the Pythagorean theorem*. It says the hypotenuse is the square root of the sum of the squares of the sides. Which always sounds torturous expressed in words. It’s *much* simpler in numbers:

Since we’re setting the hypotenuse to 1, we have:

If we take the square root of both sides:

Which, incidentally, is the equality that defines a circle (the unit circle, but as with the triangle, it can be scaled to any real-world size).

And now the locking of the side lengths is obvious, because:

Making one side longer makes the other side shorter, and vice versa. This proportional locking is fundamental to trigonometry — it’s *why* it works.
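The proportional locking is easy to verify numerically. For any angle, the two sides (with the hypotenuse set to 1) are the sine and cosine, and their squares always sum to 1:

```python
import math

# The Pythagorean lock: a² + b² = 1 for every angle
for degrees in (10, 30, 45, 60, 85):
    a = math.sin(math.radians(degrees))   # "opposite" side
    b = math.cos(math.radians(degrees))   # "adjacent" side
    assert abs(a*a + b*b - 1.0) < 1e-12
```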

**§**

So trig boils down to one angle and one side of a right-angle triangle. How easy is that?

Not only are the two sides locked, so is the angle, *theta*, and this is where the trigonometry kicks in. Given that angle, and the length of one side, we can determine all the sides and angles of a right-angle triangle.

We can also determine the angle by knowing the lengths of at least two of the sides.

Essentially, given two pieces of information about the triangle, we can determine *all* the information about it. *That’s* the value of trig.

**§ §**

The trick is to not let words like **sine** and **cosine** scare you. They’re just names for simple ratios.

[That said, as with most math topics, one can get deep into the weeds where things get very involved and there’s lots to learn. We won’t need any of that here.]

To explain, consider *Figure 3*.

To be canonical, we need to switch from calling them side **a** and side **b** to calling them the *opposite* and *adjacent* sides (relative to the angle *theta*).

The *sine* and *cosine* are nothing more than the proportions of the sides given some angle, *theta* (θ). They are functions that take an angle and return the length of the side.

Note that the length returned assumes the hypotenuse is set to 1. As mentioned above, when that’s not true, the actual hypotenuse length becomes a scaling factor we multiply the returned sine or cosine value by to get the actual side length.

Again, all *sine* and *cosine* do is, given an angle, give us the length of a side assuming the hypotenuse is set to 1. That’s their whole deal.

In particular, the *sine* gives the length of the side *opposite* the angle, and the *cosine* gives the length of the side *next* to (“co”) the angle (see *Figure 1*).
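Putting the two ideas together, the functions return side lengths for a hypotenuse of 1, and a real-world hypotenuse acts as a scaling factor. Here’s a sketch (the 4.5 hypotenuse is my example value, echoing the one mentioned earlier):

```python
import math

theta = math.radians(30)
hyp = 4.5    # a real-world hypotenuse; it acts as the scaling factor

opposite = math.sin(theta) * hyp    # sin 30° = 0.5, so 0.5 × 4.5 = 2.25
adjacent = math.cos(theta) * hyp

assert abs(opposite - 2.25) < 1e-12
# The scaled sides still form the stated hypotenuse:
assert abs(math.hypot(opposite, adjacent) - hyp) < 1e-12
```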

**§**

You may have noticed that, while *sine* has its own Wiki page, *cosine* (and all other trig functions) link to the trigonometric functions page (or the inverse trigonometric functions page for ones like *arctangent*).

That’s because *sine* is the fundamental function. The *cosine*, due to how the sides are proportionally locked, is just *sine’s* mirror image. In fact, by simply switching to the other angle, the *sine* and *cosine* swap.

If you’ve ever perused a trig table, you’ve noted both contain the same numbers in reversed order. (Actually, not reversed but 90° out of phase with each other.)

[For the more mathematically inclined, the *cosine* is the derivative of the *sine* and vice versa, modulo some sign changes.]

At root, the sine is nothing more than the length of the opposite side divided by the length of the hypotenuse. Cosine is the length of the *adjacent* side divided by the hypotenuse. At heart, they’re just fractions that, because of the proportional locking, are tied to the angle. (See the Wiki trig functions page.)

A sine wave is what we get if we feed the sine function a progression of increasing angles:

The above animation nicely demonstrates how a sine wave comes from circular motion. In fact, the A.C. electricity that comes into our home follows a sine wave *because* generators turn in circles.

The animation also illustrates how the *sine* and *cosine* are 90° out of phase.

**§**

That’s pretty much the deal. The rest is just elaboration, details, and building on these basic tools.

Another big trig function is *tangent*, which is just the opposite side’s length divided by the adjacent side’s length. That ends up making the *tangent* the *slope* of the hypotenuse, which is why it isn’t defined for 90° (or 270°) — the slope is infinite on a vertical line.

[To be precise, the slope isn’t defined for a vertical line because slope is defined as **Δy**/**Δx**, and on a vertical line **Δx** is zero, and division by zero is undefined.]

The other common trig function is *arctangent*, which is the inverse function to *tangent*. It takes the slope and returns the angle. (The inverse functions take a length and return the angle, whereas the normal functions take an angle and return a length.)

*Arctangent* is handy when you know the size of the triangle and want to figure out the angle. For example, the green triangle in Figure 2 has lengths of x=1.0 and y=4.0 and, thus, the slope of the hypotenuse is **4.0** (**y**/**x** = 4.0/1.0).

My calculator says the arctangent of 4 is **75.9+** which means the angle (in the lower left) is nearly 76°. We can check this by using the angle and a known side:

Remember this assumes the hypotenuse is 1.0, but (per the background grid), that triangle’s actual hypotenuse is:

So that’s our scaling factor:

That vertical side is 4.0, so (given the rounding) exactly right.
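The whole worked example fits in a few lines of Python, which is a good way to follow along:

```python
import math

# The green triangle: x = 1.0, y = 4.0, so the slope is 4.0
angle = math.atan(4.0)
assert abs(math.degrees(angle) - 75.96) < 0.01   # nearly 76°

# The actual hypotenuse is the square root of 17 (≈ 4.1231)...
hyp = math.sqrt(17)
# ...so scaling the sine by that factor recovers the vertical side:
assert abs(math.sin(angle) * hyp - 4.0) < 1e-9
```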

I cannot stress enough that the only way to really learn these things is to work with them. Just reading about them won’t do it.

[As a side note to programmers, the common **atan2** function provided by many code libraries is the two-argument form of *arctangent*: it takes the **y** and **x** lengths separately, which lets it return an angle in the correct quadrant.]
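The quadrant point is easy to demonstrate, since two opposite points can have the same slope:

```python
import math

# atan can't distinguish quadrant I from quadrant III (both slopes are +1):
assert math.atan(-1.0 / -1.0) == math.atan(1.0 / 1.0)

# atan2 takes y and x separately, so it can tell them apart:
assert abs(math.degrees(math.atan2( 1.0,  1.0)) -    45.0) < 1e-9
assert abs(math.degrees(math.atan2(-1.0, -1.0)) - (-135.0)) < 1e-9
```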

**§ §**

This was a hard post to write. I’ve had a note about a *“Trig Is So Easy”* post for years, but I’ve been working with trig so long it’s hard to judge what needs to be explained and what is obvious.

And it just now occurs to me to wonder if that’s why Rachel Maddow often seems to over-explain things on her MSNBC show. She’s so smart and well-read that she misjudges what her viewers need explained and what is well-known to them? (One time she spent about five minutes explaining a “dead man switch.” I kind of assume anyone smart enough to watch Maddow in the first place already knows what that is, but maybe that’s my own blind spot speaking.)

Well, if that’s what’s going on there, I can sure relate. It ain’t easy!

Stay trigonometric, my friends! Go forth and spread beauty and light.

∇


I’ll mention the technique I use when doing **matrix multiplication** by hand. It’s a simple way of writing it out that I find helps me keep things straight. It also makes it obvious if two matrices are compatible for multiplying (not all are).

One thing to keep in mind: It’s all just adding and multiplying!

Let’s start with the basics: A matrix is a rectangular bundle of numbers. Being rectangular, it has rows and columns, the number of each being the main characteristic of a matrix. Rows are always listed first, so a 3×1 matrix has three rows and one column, while a 2×2 matrix has two of each.

A matrix with the same number of rows and columns (such as a 2×2 matrix) is a *square matrix*, which is special.

Two other important special matrix types are column vectors, which have multiple rows, but just one column (their size is **n**×1); and row vectors, which have multiple columns, but just one row (size 1×**n**).

The idea of number bundles isn’t new; the vectors just mentioned are bundles of numbers. Even a complex number is a number bundle; it has a real part and an imaginary part. (That it has two parts is what let us treat it as a 2D vector.)

Matrices have many uses in mathematics. They are a *type* of number, so, as with all numbers, there are operations on matrices that create new matrices or return other types of numeric values. For instance, a matrix can have an *inner product*, which is an operation that returns a single numeric value.

Specifically, we can add and multiply matrices, but there are some constraints.

**§**

**Matrix addition** requires that both matrices be the same size; that is, they must have the same number of rows and columns. Then, just as with vector addition, matrix addition is just a member-wise add that results in a sum matrix (also of the same size).

This generalizes to any number of rows and columns.

Matrix addition is both associative…

(**A**+**B**)+**C** = **A** + (**B** + **C**)

…and commutative…

**A**+**B** = **B**+**A**
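Both laws are easy to check in code. (A plain-Python sketch; the `madd` helper and sample matrices are mine.)

```python
def madd(A, B):
    # member-wise addition; A and B must be the same size
    return [[a + b for a, b in zip(row_a, row_b)]
            for row_a, row_b in zip(A, B)]

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
C = [[9, 1], [2, 3]]

assert madd(A, B) == madd(B, A)                      # commutative
assert madd(madd(A, B), C) == madd(A, madd(B, C))    # associative
```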

**§**

**Matrix multiplication** is (notoriously) a little more complicated.

While, like addition, it is associative, it is **not** commutative:

**A**×**B** ≠ **B**×**A**

The order of multiplication matters and is a factor in the constraint. The rule is that the *column count* of the matrix *on the left* must match the *row count* of the matrix *on the right*.

The by-hand technique highlights this:

The idea is simply to write the second (right-hand) matrix above, thus leaving a space to do the calculation. It lines things up nicely, and, if you want, you can move the second column of the upper matrix more to the right to place it over the calculations that use that column (I usually don’t bother).

Notice how this also helps illustrate why the column count of the left-hand matrix must match the row count of the right-hand one.
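In code, the row-count/column-count rule shows up directly in the multiplication loop. (A plain-Python sketch; the `matmul` helper is mine.)

```python
def matmul(A, B):
    # the column count of A (left) must match the row count of B (right)
    assert len(A[0]) == len(B), "incompatible sizes"
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))]
            for i in range(len(A))]

A = [[1, 2], [3, 4]]
B = [[0, 1], [1, 0]]

assert matmul(A, B) == [[2, 1], [4, 3]]
assert matmul(B, A) == [[3, 4], [1, 2]]
assert matmul(A, B) != matmul(B, A)    # order matters!
```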

We can make the order of operation more obvious by converting the square matrix to a row or column vector that itself *contains* vectors:

Each of the components is a vector with two elements. The arrow over the name indicates a vector. Now we can visualize the multiplication like this:

On the left side, the column and row vectors, each containing vectors. On the right side, the same thing in bra-ket notation (which I’ll explore in more detail in another post).

In either case, what we’re doing is taking the *inner (or dot) product* of the component vectors. The inner product is a multi-dimensional form of multiplication (of taking a *scalar* product of two numbers), so we’re doing the same multiplication as shown in the first version, but these versions better illustrate which rows and columns to combine.

**§**

Following are some examples of multiplying matrices of different sizes.

[1×1] *times* [1×1]

The simplest possible matrix (hardly a matrix at all) is a 1×1 matrix.

The result is a 1×1 matrix, and the single value is the same as we’d get multiplying two scalars together:

But note that a 1×1 matrix is *not* a scalar. (The difference becomes apparent in the next two cases.)

[1×1] *times* [1×2]

Similar to the first case, the 1×1 matrix *acts* as a scalar, and the result is the same 1×2 matrix (row vector) we’d get multiplying the 1×2 matrix by a scalar.

This effectively is the same as:

But note that, *unlike the scalar multiplication*, the matrix multiplication cannot be reversed because [1×2][1×1] is an illegal operation. The number of columns in the first matrix doesn’t match the number of rows in the second.

So a 1×1 matrix is not (always) the same as a scalar!

[2×1] *times* [1×1]

Here’s the legal version of putting the “scalar” matrix second. In this case, the single column of the 2×1 matrix (a column vector) matches the single row of the 1×1 matrix:

This is the same as:

However, as in the second case, this operation cannot be reversed (due to the column/row mismatch), whereas with the scalar operation it can.

[1×2] *times* [2×1] (*inner product*)

Multiplying a row vector by a column vector results in a 1×1 matrix usually treated as a scalar:

This operation is known as the *inner (or dot) product* of the matrices. It generalizes to row and column vectors of larger sizes (so long as they match). When taken as an inner product, the result is always considered a scalar value.

[2×1] *times* [1×2] (*outer product*)

Multiplying a column vector by a row vector results in a matrix with as many rows and columns as the vectors (in this case, a 2×2 matrix):

This operation is known as the *outer product* of the matrices. It generalizes to column and row vectors of larger sizes (so long as they match). Note this operation always creates a square matrix.
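Both products drop out of the same general multiplication rule. (A sketch reusing a hand-rolled `matmul`; the vector values are my examples.)

```python
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))]
            for i in range(len(A))]

row = [[1, 2]]         # a 1x2 row vector
col = [[3], [4]]       # a 2x1 column vector

# Inner product: [1x2][2x1] -> 1x1 "scalar": 1*3 + 2*4 = 11
assert matmul(row, col) == [[11]]

# Outer product: [2x1][1x2] -> always a square matrix
assert matmul(col, row) == [[3, 6], [4, 8]]
```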

[2×2] *times* [2×2]

Multiplying two (same-sized) square matrices results in a new matrix of the same size (in this case, 2×2).

(Multiplying square matrices is what many think of as “matrix multiplication” but as the examples above show, it’s not the only form.)

[2×2] *times* [2×1]

One of the more important examples is multiplying a square matrix times a column vector. In quantum mechanics, square matrices represent operators and column vectors represent quantum states. The output of such an operation is a new column vector, a new quantum state:

Note that you cannot multiply such an operator by a row vector (a [2×2] times a [1×2]) because the column and row counts don’t match.
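Here's a sketch of an operator acting on a state. The post doesn't name a specific operator, so the choice of Pauli-X (the quantum "NOT" matrix, a standard textbook example) is mine:

```python
# A 2x2 matrix operating on a 2x1 column vector (state).
# Pauli-X ("NOT") is a standard QM example; the specific choice is mine.
X = [[0, 1],
     [1, 0]]
state = [1, 0]   # the column vector |0>
new_state = [sum(X[i][k] * state[k] for k in range(2)) for i in range(2)]
print(new_state)  # [0, 1] -- the column vector |1>
```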

**§ §**

**Square matrices** are special because they have properties other matrices lack.

An easy one is the **trace**, which is defined only for square matrices. The trace is simply the sum of the values along the main diagonal.

The **main diagonal** starts at the upper left and extends down to the lower right. The **identity matrix** is a square matrix with ones along the main diagonal and zeros everywhere else.

The identity matrix has the property that using it to multiply another matrix gives that matrix.

It’s the matrix equivalent of multiplying a regular number by one, the *multiplicative identity*.
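Both the trace and the identity property can be checked in a few lines (a sketch; example numbers are mine):

```python
# Trace, and the identity-matrix property.
A = [[1, 2],
     [3, 4]]
print(A[0][0] + A[1][1])   # 5 -- the trace: sum of the main diagonal

I = [[1, 0],
     [0, 1]]               # ones on the main diagonal, zeros elsewhere
IA = [[sum(I[i][k] * A[k][j] for k in range(2)) for j in range(2)]
      for i in range(2)]
print(IA == A)             # True -- multiplying by I gives back A
```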

Multiplying a regular number, **n**, by its inverse, 1/**n**, gives one. Likewise, multiplying a matrix by its **inverse** gives the identity matrix.

I’ll note that determining a matrix’s inverse is, in many cases, non-trivial. It’s not just a matter of 1/**n** as with scalar values. (It’s not that hard, just takes a bit of figuring.)
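For the 2×2 case there's a well-known closed form: the inverse of [[a, b], [c, d]] is (1/det)·[[d, −b], [−c, a]], where det = ad − bc. A quick sketch (example matrix is mine):

```python
# The 2x2 inverse via the closed-form rule:
# inv([[a, b], [c, d]]) = (1/det) * [[d, -b], [-c, a]], det = ad - bc.
A = [[2, 1],
     [1, 1]]
det = A[0][0]*A[1][1] - A[0][1]*A[1][0]   # 1 for this matrix
inv = [[ A[1][1]/det, -A[0][1]/det],
       [-A[1][0]/det,  A[0][0]/det]]
# A matrix times its inverse gives the identity:
prod = [[sum(A[i][k] * inv[k][j] for k in range(2)) for j in range(2)]
        for i in range(2)]
print(prod)  # [[1.0, 0.0], [0.0, 1.0]]
```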

**§**

The **matrix determinant** is, among other things, the scaling factor of the linear transformation the matrix represents.

Given some rectangular area before the transformation, the matrix determinant is how much that area shrinks or grows in the transform. If the determinant is 1, the scale of the space doesn’t change. If the determinant is negative, the transformation somehow inverts (“flips”) the space.

Calculating the determinant for a 2×2 matrix is easy:

The formula for a 3×3 matrix isn’t hard, just long: (**aei**)+(**bfg**)+(**cdh**)−(**ceg**)−(**bdi**)−(**afh**).
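Assuming the standard formulas (2×2: ad−bc; 3×3: aei+bfg+cdh−ceg−bdi−afh for the matrix [[a,b,c],[d,e,f],[g,h,i]]), here's a quick check in Python (example numbers are mine):

```python
# Determinant sketches for the 2x2 and 3x3 formulas.
# 2x2 case for [[a, b], [c, d]]: ad - bc
a, b, c, d = 3, 1, 2, 4
det2 = a*d - b*c
print(det2)   # 10 -- areas scale by a factor of 10 under this transform

# 3x3 case for [[a, b, c], [d, e, f], [g, h, i]]:
a, b, c, d, e, f, g, h, i = 1, 2, 3, 4, 5, 6, 7, 8, 10
det3 = a*e*i + b*f*g + c*d*h - c*e*g - b*d*i - a*f*h
print(det3)   # -3 -- negative: this transform flips the space
```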

**§**

The vectors in a square matrix have *dot products* with each other (two sets, actually, depending on whether we consider it holding row vectors or column vectors, as we did above). The matrix is **orthogonal** if those dot products are zero (because its vectors are mutually orthogonal).

Taking it one step further, if the vectors are *normalized* and orthogonal, the matrix is **orthonormal**.
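A rotation matrix is the classic orthonormal example; its columns are orthogonal and of unit length. A sketch (the angle is an arbitrary choice of mine):

```python
import math

# Check that a rotation matrix's columns are orthonormal.
t = math.pi / 3
R = [[math.cos(t), -math.sin(t)],
     [math.sin(t),  math.cos(t)]]
col0 = [R[0][0], R[1][0]]
col1 = [R[0][1], R[1][1]]
dot = col0[0]*col1[0] + col0[1]*col1[1]
norm0 = col0[0]**2 + col0[1]**2
print(abs(dot) < 1e-12)        # True -- columns are orthogonal
print(abs(norm0 - 1) < 1e-12)  # True -- and normalized (unit length)
```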

**§**

Of great importance in quantum mechanics, a square matrix is **Hermitian** if it’s equal to its conjugate transpose.

The **transpose** of a matrix (which can be done to any matrix) is a flip along the main diagonal. Square matrices remain square, of course, but in other matrices the row and column counts switch places. In particular, a transpose converts a column vector to a row vector, and vice versa.

The **conjugate transpose** of a matrix is, first, a transpose, and then taking the complex conjugate of each member. (For a matrix with only real values, the conjugation changes nothing, so it’s the same as the plain transpose.)

[This is important in quantum mechanics, especially with regard to *bra-ket notation*, where a *ket* is a column vector, and a *bra* is a row vector. In particular, the *bra* 〈**a**| is the conjugate transpose of the *ket* |**a**〉.]

Lastly (also important in QM), a square matrix is **unitary** if its conjugate transpose is also its inverse.

In quantum mechanics, Hermitian matrices represent observables, and unitary matrices describe how states evolve (that the evolution is unitary is why QM is said to preserve information).
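Both properties can be checked directly with Python's built-in complex numbers. Pauli-Y is a standard example matrix that happens to be both Hermitian and unitary (the specific example is mine, not from the post):

```python
# Pauli-Y: a standard matrix that is both Hermitian and unitary.
Y = [[0, -1j],
     [1j, 0]]
# Conjugate transpose: flip across the main diagonal, then conjugate.
Y_dag = [[Y[j][i].conjugate() for j in range(2)] for i in range(2)]
print(Y_dag == Y)   # True -- Hermitian: equal to its conjugate transpose

prod = [[sum(Y[i][k] * Y_dag[k][j] for k in range(2)) for j in range(2)]
        for i in range(2)]
print(prod == [[1, 0], [0, 1]])  # True -- unitary: Y times Y-dagger is I
```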

**§ §**

This post is just a high-altitude flyover to introduce the various aspects of matrix math. Interested parties will obviously have to explore this in more detail.

One point I’ll make about reading (or watching) math topics such as this: It’s not enough to just read or watch. One has to have pen and paper at hand, and one has to do the work. It’s the only way the topic will begin to really make sense. Doing it for yourself is also the only way to really acquire the necessary skill.

Good luck, and I’m always here to help.

Stay in the matrix, my friends! Go forth and spread beauty and light.

∇


The next mile marker in the journey is the idea of a **transformation** of that space using linear algebra.

This first post introduces the idea of (*linear*) **transformations**.

I mentioned in the Introduction that, despite the unfamiliar name, linear transformations — using **linear algebra** — are just a cool kind of geometry.

As with many forms of geometry, linear algebra generalizes to any number of dimensions, but (as with many forms of geometry) we’ll explore it in two dimensions to build the necessary intuitions.

So we will assume a 2D Euclidean space that we’ll view as a vector space. (See previous post for details.)

Every point in the space has an associated arrow (a **vector**) with its tail at the origin and its tip at the point. These vectors give each point a magnitude and a direction.

Additionally, we multiply the Y-axis by the imaginary unit, **i**, to create the 2D complex plane.

The third form, the exponential, is especially helpful in wave mechanics in general and in quantum mechanics in particular. We’ll explore that in more detail in future posts. For now we start simply…

**§ §**

A **transformation** in this space is a function (or something we treat as one) that takes a vector from the space and returns a new vector. We say a transformation *maps* an input vector to an output vector.

The simplest transformation is to do nothing at all. For now, we’ll call this the *Null* transformation. It maps all vectors onto themselves. (Think of it as multiplying by one.) We’ll use notation that looks like this:

This defines transformation **T**, which takes complex number **z** and returns it unchanged.
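Treating complex numbers as our 2D vectors, the *Null* transform is a one-line Python function (a sketch; the function name is mine):

```python
# The Null transform as a function on complex numbers.
def T_null(z: complex) -> complex:
    return z   # every vector maps onto itself

print(T_null(3 + 4j))   # (3+4j)
```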

A transform map need *not* be bijective; it can be non-injective and/or non-surjective. The domain (the input) is all vectors, but the codomain (the output) need not be complete or unique.

Put less mathematically, the map takes every vector in the space as input, but it’s allowed to map different vectors to a single output vector. It’s also allowed to leave “voids” in the output — vectors no input will map to.

**§**

Here’s a transformation (we’ll call it *Real*) that converts all vectors to real numbers by taking their real value (and ignoring their imaginary value):

Where *T*(**z**) returns just the real part of **z**.

The result of the transformation is the one-dimensional real number line; the 2D complex plane collapses to a 1D line. All vectors “lie down” on the X-axis.

This, as mentioned, leaves the 2D space unmapped, and many vectors from that space end up mapped to the same vector.

Notice that the vectors already aligned with the X-axis don’t move. This fact, that some vectors don’t move under some transformation, becomes hugely important later under the heading of *eigenvectors*.

In Figure 2a (the “before” image), all five vectors shown end up, after transformation, as the bottom vector that’s already lying directly on the X-axis. They all map to the same result vector.

That bottom vector doesn’t change and ends up as itself. Therefore, that bottom vector is an *eigenvector* of the *Real* transformation because it doesn’t change under that transformation. (There’s a bit more to it than that, but that’ll do for now. I have a post coming later about *eigenvectors* and *eigenvalues*.)
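The *Real* transform is just as short in code, and it shows both behaviors at once: many inputs collapsing to one output, and X-axis vectors staying put (a sketch; the function name is mine):

```python
# The Real transform as a function on complex numbers.
def T_real(z: complex) -> complex:
    return complex(z.real, 0)   # keep the real part, drop the imaginary

print(T_real(3 + 4j))   # (3+0j)
print(T_real(3 - 7j))   # (3+0j) -- different inputs, same output
print(T_real(3 + 0j))   # (3+0j) -- already on the X-axis: an eigenvector
```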

**§**

We can also imagine a transformation (call it *Zero*) that collapses all vectors to the zero vector:

This (rather useless) operation reduces the entire space to a zero-D space — a single point.

(Normally, transformations are not quite so reductive. These trivial examples serve to illustrate the notation and introduce the basic idea of a transform.)

**§ §**

In linear algebra, there are two important rules with regard to the allowed types of transformation:

• Firstly, the origin must remain in place; it never transforms. The zero vector remains the zero vector under transformation.

• Secondly, the transform must be linear, meaning it must preserve straight lines. Note that grid lines do **not** have to remain *orthogonal* (at right angles) or in any particular orientation — just straight.

**§ §**

The *Null*, *Real*, and *Zero* transformations all obey these rules. The *Real* transformation collapses the vertical axis, and the *Zero* transform collapses both, but the grid lines remain straight in the limit.

Speaking of which, if we think of the *Zero* transform as taking place over time, we see the vectors all shrink proportionally until they’re all zero. During this, the origin remains centered, and the axes remain straight. This implies another kind of transformation, *Scale*, that shrinks or expands all vectors proportionally by a *factor*:

The transform takes a scaling factor, **sf**, that determines how much to shrink (**sf** < 1) or expand (**sf** > 1) the vectors.

Note that, in reality, the “after” image should be covered with a half-sized grid, but in these examples I only transform the *visible* part of the “before” space to help illustrate the transform. Just remember that the transform includes the entire space.
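The *Scale* transform, too, is a one-liner on complex numbers (a sketch; the function name is mine):

```python
# The Scale transform: multiply every vector by the factor sf.
def T_scale(z: complex, sf: float) -> complex:
    return sf * z   # shrink (sf < 1) or expand (sf > 1)

print(T_scale(2 + 4j, 0.5))   # (1+2j)
print(T_scale(0j, 0.5))       # 0j -- the origin never moves
```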

**§**

Another legitimate transformation is a rotation, call it *Rotate*, that takes an angle, θ (*theta*), that specifies how much to rotate the space.

*Rotate* is always around the origin (so the origin is preserved), and rotation also preserves straight lines.

(Again, I’m only transforming the visible part of the “before” space. In reality the entire space is transformed.)

A rotation transform can include a scale transform, so:

Transform **T** takes a complex number, **z**, along with an angle, θ, and a scaling factor, **sf**.

As we’ll explore in the next post, there are various ways to accomplish these transforms, but all of them can be represented by square matrices (which is how I’ve done all the images you see here). To give you a taste of what such a matrix looks like, here’s the matrix for the *RotateScale* transform:
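One way to build such a matrix in code is the standard rotation-with-scale form (a sketch; the function name and test values are mine):

```python
import math

# The standard rotation-with-scale 2x2 matrix.
def rotate_scale(theta, sf):
    return [[sf * math.cos(theta), -sf * math.sin(theta)],
            [sf * math.sin(theta),  sf * math.cos(theta)]]

M = rotate_scale(math.pi / 2, 2.0)   # rotate 90 degrees, double lengths
v = [1.0, 0.0]                       # a unit vector along the X-axis
w = [M[0][0]*v[0] + M[0][1]*v[1],
     M[1][0]*v[0] + M[1][1]*v[1]]
# w is (approximately) [0.0, 2.0]: rotated onto the Y-axis and doubled.
print(abs(w[0]) < 1e-12, abs(w[1] - 2.0) < 1e-12)  # True True
```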

I won’t, in this series, explain exactly how this works because I’ve already covered that ground. See the Matrix Rotation and Matrix Magic posts.

**§**

We’ve seen transforms collapse axes, scale, or rotate (or do nothing). There are two additional transforms that preserve the straightness of axes.

First, note that above we had one transform that scaled the whole space, and another that collapsed one axis. The collapse is actually a scaling of that axis to zero. We can also scale one axis to some scaling factor:

We can shrink or expand along any axis. The example above scales the Y-axis; the one below scales the X-axis:

We can also scale on a diagonal:

In fact, this is the same transformation we see under a Lorentz transformation — the transformation we see of a frame moving relative to us. The example above represents the frame shift due to moving at 0.5 **c**.

**§**

The last transformation I’ll show you is a shear:

Note that, unlike the Lorentz transform, which rotates both axes, a shear only rotates one axis (in this case the Y-axis). The other axis just “slides” (or shears, hence the name).

It may not mean much right now, but the last two transforms (as far as I know) are pretty strictly matrix transforms. That’s why **Shear(M)** takes a matrix — it’s the only way to easily specify the transform.
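A shear matrix has the form [[1, s], [0, 1]], which maps (x, y) to (x + s·y, y). A sketch (the shear amount and helper name are mine) showing how only the Y-axis tilts:

```python
# A shear matrix: [[1, s], [0, 1]] maps (x, y) to (x + s*y, y).
s = 1.0
M = [[1.0, s],
     [0.0, 1.0]]

def apply(M, v):
    return [M[0][0]*v[0] + M[0][1]*v[1],
            M[1][0]*v[0] + M[1][1]*v[1]]

print(apply(M, [0.0, 1.0]))   # [1.0, 1.0] -- the Y-axis tilts...
print(apply(M, [1.0, 0.0]))   # [1.0, 0.0] -- ...the X-axis just slides
```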

To repeat an important point, a linear transformation preserves the straightness of the axes, but (as in the last two cases) need **not** preserve angles. The prior transforms all maintained the 90° orientation of the axes, but the last two obviously don’t.

**§ §**

The main point is that a transformation implicitly applies to all vectors in the space. The red grid in the examples stands in for all the vectors in the space.

In fact, we can use transforms to smoothly transform a continuous space:

Note that I arranged to have the pixels that were shifted off to the left be wrapped around to the right so we can see them — that is **not** part of the transform. To show it properly would require enlarging the after space to the left.

**§ §**

That’s enough for this time. Next time we’ll get into these transforms as **operators** on the space and talk about how they’re implemented.

I *highly* recommend, in general, Grant Sanderson’s **3Blue1Brown** YouTube channel for learning about mathematics in a fun visual way. It’s one of the best math channels around.

In particular I recommend his **Linear Algebra playlist**, which introduces linear algebra in fairly short videos with *outstanding* animations and explanations. I’m deeply indebted to the channel and this series for helping me understand this stuff at a much deeper, more fundamental, level.

There are many other free resources available. Even Wikipedia has a lot, although it’s not always the best resource for beginners. It is pretty great once one gets rolling, though.

Stay transformed, my friends! Go forth and spread beauty and light.

∇

So I was delighted to see that, not only do Minnesotans have good sensibilities when it comes to voting politically, we also have good sensibilities when selecting our favorite Valentine’s Day candy.

See full map below. ION: Saw a cool new SF movie last night!

I saw the map in an article in **The Takeout** (dot com), an online magazine for people who love food and dining out (and I do love both).

The article is, **Map of each state’s top Valentine’s Day candy leaves us worried about California**, written by Marnie Shure. The map she uses comes

Ain’t technology something!

Without further ado, here it is (click for big):

Ms Shure’s article was concerned about California’s favorite apparently being candy necklaces. My suspicion is the Californians think of them as Puka shell necklaces, which are required wearing in some sectors of the state.

Some comedian long ago (it might have been Richard Pryor) said that after God created the United States (of America), he grabbed the whole thing by Florida and gave it a good shake to get things going. But that made all the loose nuts and bolts roll down into California, and they’ve been there ever since.

I lived in Los Angeles for almost 20 years, and can attest to the truth of this. California, especially Southern California, is *weird*.

(Really, all three giant boundary states, California, Texas, and Florida, have something notably weird about them. California, of course, is especially weird, and it’s possible it has something to do with having multiple MLB teams in one state. Florida and Texas each have two, but only California has *five*.)

Anyway, Puka candy necklaces aside, I was struck by my state’s excellent taste, not just in candy, but in *chocolate*. Dove choc is great choc!

*Go Minnesota!*

**§**

One caveat. As Ms Shure points out, *searching* for candy isn’t the same as *buying* candy, and probably few need to search for those ubiquitous generally tasteless chalky candy hearts. Every store on the planet is overflowing with them at this time of year.

That, the author suggests, may explain why only four states seem to favor that traditional, if childish, Valentine’s Day candy.

Since she loves the Sour Punch Hearts (never heard of them), she also rejoices that 45 of the 50 states go for chocolate of some kind.

**§**

I found the regionality a bit interesting.

Look at the concentration of western states who are into M&Ms (a very decent choice; they melt in your mouth, not in your hand).

There’s a similar concentration to the east for chocolate strawberries. I wonder if the latter one is due to an eastern firm that specializes in them?

On some level it almost makes sense to me that Alaska and Hawaii would also be into the Dove chocolate, but Utah, too? Okay…

I am a little surprised Hershey chocolate doesn’t make a stronger appearance. I’ve recently realized they make some of the best chocolate I’ve ever tasted. I wonder if many, as I once did, think of them as a minor player, an also-ran compared to Dove and the fancier brands.

But honestly, I’ll take Hershey chocolate any day. Great stuff!

**§ §**

In Other News: Until last night, the above was all I intended to post today. I saw that article about the candy last week and have been saving it.

But last night, on Netflix, I watched **Space Sweepers**, a new Korean SF space romp I’d heard about, and I really enjoyed it.

It’s perhaps halfway between *Firefly* and *Star Wars* with maybe a dash of *The Expanse* thrown in. The protagonists are a crew of four (one is a robot) on their ship Victory. They’re independent space trash cleaners — they track down and capture errant space junk.

It’s a competitive business, and an early scene features them literally ripping a prize away from other sweepers. Who come off as deadly enemies, but apparently it’s all in the family, so to speak.

It all takes place in 2092, when the Earth is dying in its own wastes. Humanity has begun fleeing into orbit (those who can *afford* it), and there is also a project to terraform Mars using a genetically altered “super tree.” A powerful genetically altered visionary is behind all this. It will surprise no one that he’s the main villain of the piece.

This isn’t just a swashbuckling space opera, it’s also incredibly sweet and touching. The crew, while salvaging some space junk, discover what appears to be a very young child who was protected from whatever wrecked the spaceship they’re salvaging by being stashed in a cargo hold surrounded with crash balloons.

But it turns out this is apparently “Dorothy,” supposedly a killer robot with a hydrogen bomb inside her. Oops. The crew is terrified of her (which makes for some light comedy), but realize this could be their golden ticket.

As with all such rag-tag crews, they live on the edge of existence, constantly in debt and never able to get ahead. But “Dorothy” — if they can just sell her to the right people — is worth millions.

Unless, of course, things aren’t as advertised and they all come to love this special child.

Good action and space CGI, some humor, and a lot of sweetness. If you like space opera, this is definitely worth seeing!

**§ §**

Stay chocolate, my friends! Go forth and spread beauty and light.

∇

Then, this January, the post got 257 views — 161 in the first three days. After being largely ignored for a year-and-a-half, something made the post go mildly viral. No one commented, so I have no idea how or why the post got so much traffic.

I have a thought it might have to do with the title.

Given the current COVID-19 crisis, I’m wondering if the title, *Quarantine*, is the attraction. Based only on headlines I saw, there was apparently a big uptick in interest in pandemic-themed movies such as *Outbreak* (1995).

Unfortunately, if that’s the case, the book isn’t about *viral* outbreaks at all.

Quoting from my post:

When the story begins, Earth has been cut off from the stars since 2034 by The Bubble, which suddenly appeared out beyond Pluto (at about twice the distance to the ninth planet).

The Bubble, which has all the characteristics of a black hole event horizon, hides the stars and locks humanity inside. No one knows the how, why, or who, of The Bubble.

The quarantine in question is imposed by unknown aliens or forces and has nothing to do with viruses or disease. In fact, the story is an interesting riff on the Many-Worlds Interpretation of quantum physics.

So if it was a pandemic tale people were searching for, it would have been disappointing to discover this wasn’t that. That might account for the lack of comments or Likes. (Maybe I should be glad that WordPress doesn’t have a Dislike or Thumbs Down button.)

**§**

Being a data geek (and a data lover), I couldn’t resist visualizing the data:

At first I thought it would be a classic ring decay, but it’s turned out to have a fairly long tail. (It even got one view today.)

It’s not reflected in the graph, but there was zero activity on any of the days prior to the big bump.

Maybe I’m easily amused, but it’s been interesting watching the spike and fade.

**§**

It occurs to me my theory about the attraction being the impression given by the title…

[I have the impeachment trial playing on another screen, and every time Rep. Jamie Raskin (D-MD) gets up to speak, I have to stop and listen. He is so compelling and impressive!]

Where was I… Right.

It occurs to me my theory about the attraction being the impression given by the title might be wrong. A spike like this must be due to someone posting a link, which implies they know what the book is about. Surely no one would post a link based on just a book title?

Although I guess there really is no knowing when it comes to people. They do the strangest things.

So it ends up being a little mystery.

**§ §**

Speaking of quarantine and pandemics, I pulled an updated (as of today) copy of the COVID dataset I wrote about last week. The data is starting to look a little better.

To begin, here’s a look at the cases reported per day:

As you can see, there’s a sharp drop in reported cases, despite our increasing tests, so that seems very good news. (The red curve is the unsmoothed daily data; the blue curve is the smoothed data.)

Even the death rate is starting to trend down:

Except for today — let’s hope that’s not a trend. I wonder if it has to do with New York under-reporting nursing home deaths. Would they have tacked those onto today’s numbers, maybe?

You might find this chart interesting:

It’s a calculation of deaths divided by cases, so it shows the death rate relative to the infection rate.

One thing that’s interesting is that, in general, the world is getting better at not letting people die from COVID. (France kinda lost it there in early April.)

It’s also interesting that the USA has done much better throughout than either Great Britain or France, but not as well as Germany.

Fortunately, these days all four are getting a handle on treating the disease. (I haven’t looked at many other countries. Those I did mostly look like France and Great Britain.)

**§ §**

And on that note, I’m going back to the Impeachment coverage. Today has been pretty interesting, and we haven’t had to listen to too much infuriating lying bullshit from Twitler’s sad excuses for attorneys.

The House Managers are in closing arguments now. I wish Pence would have spoken up, but they did get some good testimony read into the record. Maybe, just maybe, the pressure is getting strong enough on the Republicans that we might secure a conviction.

One can hope. If we don’t, we may as well toss out the whole Impeachment clause of the Constitution, because if these aren’t Impeachable offenses, high crimes and dereliction of duty and oath of office, then nothing ever will be.

It may not be apparent, but this is an inflection point in our republic. Pray that the Republicans have some shred of honor and duty left.

[Update later that day: Nope. Except for seven GOP Senators, the rest are spineless cowards who shirked their oath and duty. Our Republic continues to tremble on the brink of the precipice. Be afraid. Be very afraid.]

Stay quarantined, my friends! Go forth and spread beauty and light.

∇

Therefore I’m offering up a lightly edited political piece I’ve had sitting in my folder of potential posts… since 2012. Which makes it both outdated and yet oddly still relevant. It’s a short piece, originally intended to be a Brain Bubble, but I’m just going to throw it out there as a regular post.

It’s a rumination on the differences between Left and Right in politics.

I can’t help but open with an old (*old*) “Little Johnny” joke: The teacher asked Little Johnny to use the word “politics” in a sentence, and Little Johnny replies: *“Our parrot swallowed a watch, and now Polly ticks!”*

I never could resist a pun.

Anyway, for whatever it’s worth, this [*with some edits*]:

**§ §**

One last political Brain Bubble, and then I’m going to go back to ignoring politics again…

Liberal, Left, Democrat. Conservative, Right, Republican. These are ideas used to describe the two dominant political sides of American politics. Are there any objective criteria with which to measure and judge the two sides, or is it strictly a matter of opinion? Are both views *coherent* given certain assumptions about how civilized life should be?

In the interest of full disclosure, I’ll mention that I think the two sides — as they exist now in American politics — both suck, but that the Left sucks a bit less. I want nothing to do with either side (and have renounced my membership in the Ds), but I want far less to do with the Republicans. (I plan to vote Libertarian this fall.)

[*Actually, I voted for Obama again despite, at the time, some misgivings about his political effectiveness. In retrospect, it’s hard to believe how much I miss that guy. Or even that era of mild misgivings about how well government was doing. It does seem a lifetime ago now.*]

To talk about this at all requires defining exactly what is meant by “Left” (or “Liberal”) and “Right” (or “Conservative”):

To me, the biggest defining feature of the Left is the idea that government is a solution to many problems, and that a larger government can serve the people and define the republic. On the other side, the Right famously sees the government **as** the problem and wants the smallest government possible. The Right, they say, is big on the rights of the individual.

[*And yet that seems one of the biggest lies. They’re pretty big on the rights of rich white male individuals, but the rest of us, not so much.*]

The second key defining feature to my eyes is the idea of public spending.

The Left is seen (often rightly) as spendthrift (which, oddly, means the opposite of what you’d think), whereas the Right is seen as financially more conservative (and that’s a key source of the name). I’m a fan of being fiscally conservative, but I want bridges that don’t fall down and excellent highways and universal high-speed internet.

I also want a strong education system. (I really, *really* want a strong education system. If only we had one.)

The Left, under the rubric “Liberal,” is often called “Progressive” (and it’s funny how the Right can make that sound like an insult). The Right, as “Conservatives,” are apparently against progress (and interestingly, accusing someone of being “conservative” in many contexts implies they aren’t with it).

It does seem, at least sometimes, that the Right longs for the “old fashioned” America. (Well, in some ways, who doesn’t long for a simpler time. The thing is, it wasn’t so simple for everyone.)

Presumably that means the days when children didn’t pass through metal detectors in school and you could leave your door unlocked safely. But for others, that means a time of ignorance and severe gender and racial inequality. It can seem like the “good old days” were simpler and safer, but were they really? You don’t have to dig too deep to find the problems of “man’s inhumanity to man” in any era.

But in all fairness, it is probably the case that a society can be more *stable* [*and likely static*] when roles are more tightly constrained. And it is possibly also true that families with at least one stay-at-home parent are better for the kids. Traditional conservative values aren’t directly *wrong* about that, but they ignore other realities.

I once heard Bill O’Reilly [*remember him?*] say that the Left was the more humane side, and I think that’s a true point. The Right has a laissez-faire approach [*supposedly*] based on individual rights and the “sink or swim” principle (in fact, laissez-faire means “leave it alone”).

The Left has a stronger social responsibility and obligations point of view.

Which point of view is better depends on your point of view. Do civilized societies have an obligation towards the weak and disadvantaged? That’s something not commonly found in the animal kingdom.

Does advanced intelligence and society imply obligations to your fellow beings? Does success imply obligation to the society that provided the platform for that success? Or is it every being for themselves?

[*A note I’ve had for a long time looking for a post: “Both the discussion of morality and the idea of it are utterly foreign to the animal kingdom. Humans choose a path. We talk about morality. A lot. Based on a metaphysical belief that what we do matters in some greater context. A basic principle: One helps if one can. A bottom line: Humans can think about morality, so we are obligated to do so.”*]

[[*Yes, I do know that some animals evidence behaviors we might interpret as altruistic. But I very much doubt it’s due to any moral analysis, let alone discussion about morality.*]]

Consider a simple case of industrial food preparation.

The Left “Big Government” view is that regulation is a necessity to ensure the safety of consumers. The Right “Small Government” view is that the marketplace will have the same effect as regulation. The idea being that companies that occasionally poison their customers will fail and vanish.

I see two problems with that: Firstly, that the bad company will eventually fail is scant comfort to those poisoned. Secondly, perhaps more importantly, what prevents a company that’s been “run out” of one market from setting up the same ill practice in another? In a country with 300 million people, there’s a lot of market space.

Surely the Wall Street and Banking issues of the past few years have shown us we can’t trust business to *ever* have the interests of their customers at heart?

Religious Conservatives sometimes oppose the idea of Evolution, and it’s always struck me as odd that they tend to be Republicans: a party that prefers the evolutionary principle. Meanwhile, Progressive Liberals oppose that principle by strongly supporting the weak. (No one ever said politics makes much sense.)

Another odd contradiction: I mentioned above that when roles are more constrained, society is probably more stable. The Progressive viewpoint favors role-breaking individual expression, but the Conservative viewpoint favors restricting roles more, which seems at odds with the “individual first” point of view.

(If the Right really favors the individual over government, why aren’t they the ones pushing for gay marriage?)

[*One of the dumber arguments the Right ever came up with (at least back then) was this idea that gay marriage threatens the institution of marriage. But no one ever says how. My theory is that Republicans are all secretly deeply closeted gay and they would all abandon their womenfolk if gay was okay to them. It’s the only explanation that makes sense.*]

[[*I take that back. That modern Republicans, in general, are fucking idiots is a much better explanation and almost certainly correct.*]]

Perhaps a pithy way to define the two sides is that the Left believes in “ours” while the Right believes in “mine.”

**§ §**

Liking that last line is what kept this piece in my queue for so many years. I think it kinda sums it up. Mine versus ours. We’re really feeling that difference with the Biden administration. (Oh, what a blessing it is.)

Of course, knowing *wrong* from right is a whole other matter, and one the very ironically named Right seems to be having a lot of trouble with these days.

Ain’t life strange?

Now I gotta go bundle up warmly and shovel some snow. (It’s up to -1. Hooray?)

Stay centered, my friends! Go forth and spread beauty and light.

∇

The novel impressed me so much I bought the series of essays Huxley published almost 30 years later, **Brave New World Revisited** (1959). So far, I’ve only read the first five (so many distractions these days), but the apparent prescience continues to astound and astonish me.

I qualify that with “apparent” because it’s actually as old as humanity.

Once I finish his essays I will likely revisit this (I’m really quite taken with it), but two essays I read last night, after two days of the historical second Impeachment proceedings, and after (despite January 6th) continuing utter lying bullshit on the part of Republicans, had me getting up from my chair and having to walk off my amazement and despair.

I said, way back in 2016, I saw eerie parallels to pre-WWII Germany, and that sense has only grown in the past five years. (Frankly, I don’t know why every intelligent person isn’t terrified. I sure am. The events of the last few months, especially, should frighten all right-thinking people.)

To be very clear, when I refer to right-thinking people, I mostly mean sane, sober, rational, and literate. I mean people free from what Huxley refers to as *“herd-poisoning”* (what a great term).

Further, any “standards” I ever mention here come from humanity’s 2000+ year history of normative literature, art, philosophy, and thought. I think it is arguably true our species becomes more moral through history (in slow and fitful steps), but we are ever prone to demagogues.

There is excellent reason that the best epithet for P45 is “Twitler” — the parallels really ought to make your skin crawl and the hairs on the back of your neck raise.

**§**

But don’t take my word for it. Let me share some of Huxley’s words.

Starting with chapter IV, **Propaganda in a Democratic Society**…

Huxley begins quoting Thomas Jefferson:

“The doctrines of Europe,” Jefferson wrote, “were that men in numerous associations cannot be restrained within the limit of order and justice, except by forces physical and moral wielded over them by authorities independent of their will… we (the founders of the new American democracy) believe that man was a rational animal, endowed by nature with rights, and with an innate sense of justice, and that he could be restrained from wrong, and protected in right, by moderate powers, confided to persons of his own choice and held to their duties by dependence on his own will.” To post-Freudian ears, this kind of language seems touchingly quaint and ingenuous. Human beings are a good deal less rational and innately just than the optimists of the eighteenth century supposed.

Indeed, and recent years have well-illustrated the weakness of that thinking. Democracy has been called the “least worst” form of government, and I think that’s probably true. The balance between personal freedom and sovereignty versus the security and service provided by the state is a tricky one. The term “herding cats” applies more to humans than cats.

As Huxley goes on to point out, many modern inhumane atrocities have been done by supposedly rational evolved humans. (And what could be more atrocious — less *American* — than beating someone to death with an American flag pole?)

The power to respond to reason and truth exists in all of us. But so, unfortunately, does the tendency to respond to unreason and falsehood — particularly in those cases where the falsehood evokes some enjoyable emotion, or where the appeal to unreason strikes some answering chord in the primitive, subhuman depths of our being.

That bit, written back in 1959, in light of January 6, 2021, rather dropped my jaw. He perfectly describes that mob.

Huxley points out there are two kinds of propaganda: *“rational propaganda in favor of action that is consonant with the enlightened self-interest of those who make it and those to whom it is addressed, and non-rational propaganda that is not consonant with anybody’s enlightened self-interest, but is dictated by, and appeals to, passion.”*

Compare, for instance, anti-smoking, “buckle up,” “don’t text and drive,” or “don’t drink and drive” propaganda as examples of the first case. And anything Twitler ever said as the second.

In the second essay (chapter V, which I’ll get to) Huxley gets more into one of the keys that made Hitler so successful: the power of mass communication and media. One can’t help but think of Twitler and his use of Twitter.

Mass communication, in a word, is neither good nor bad; it is simply a force and, like any other force, it can be used either well or ill. Used in one way, the press, the radio and the cinema are indispensable to the survival of democracy. Used in another way, they are among the most powerful weapons in the dictator’s armory.

He goes on to point out how mass communication has shifted from myriad small newspapers and radio stations to conglomerate giants owned by Big Money. This obviously undercuts the power of media for democracy, but, he says, it’s still better than the state-controlled media seen in totalitarian societies.

Huxley, of course, had no notion of the internet or how that would change the equation. It has put a certain amount of power back in the hands of people, but in our chaotic society that has been a two-edged sword. It’s allowed fringe groups consisting of insane people to have a lot more social power than they deserve.

In regard to propaganda the early advocates of universal literacy and a free press envisaged only two possibilities: the propaganda might be true, or it might be false. They did not foresee what in fact has happened, above all in our Western capitalist democracies — the development of a vast mass communications industry, concerned in the main neither with the true nor the false, but with the unreal, the more or less totally irrelevant. In a word, they failed to take into account **man’s almost infinite appetite for distractions**.

Hoe! Lee! Cow! (Bold emphasis mine.) That was one of the bits I had to walk off.

How many times have I ranted here about the empty fantasy bullshit people gorge on? The idiotic video games. The infantile underpants movies (superhero movies). The soap-opera bullshit about stars and famous people. The multiverse and other science fantasies. All the meaningless ephemera people besot themselves with.

Huxley is so much a man after my own heart. He gets it. (And why *don’t* so many of you? Open your eyes!)

One last quote from this chapter:

Only the vigilant can maintain their liberties, and only those who are constantly and intelligently on the spot can hope to govern themselves effectively by democratic procedures. A society, most of whose members spend a great part of their time, not on the spot, not here and now and in the calculable future, but somewhere else, in the irrelevant other worlds of sport and soap opera, of mythology and metaphysical fantasy, will find it hard to resist the encroachments of those who would manipulate and control it.

I had to walk that one off, too.

I mean,… wow. Just,… wow.

**§ §**

I’m reminded (yet again) of what Leon Wieseltier said, back in 2014, when he was on *The Colbert Report* (see this post):

A democratic society, an open society, places an extraordinary intellectual responsibility on ordinary men and women, because we are governed by what we think, we are governed by our opinions. So the content of our opinions, and the quality of our opinions, and the quality of the formation of our opinions, basically determines the character of our society.

And, indeed, the character of our society is pretty shitty these days.

Later in the interview Wieseltier says:

And that means that in a democratic society, in an open society, a thoughtless citizen of a democracy is a delinquent citizen of a democracy.

We’ve certainly seen the effect of that delinquency.

I’ve been seeing some newsfeed articles about what the Netflix Top Ten says about our cultural tastes (they are, generally speaking, in the toilet).

We may not exactly live in Huxley’s Brave New World, but, in terms of our besotted distracted vastly ignorant populace, we aren’t that far. Sadly, it isn’t a government that’s done it so much as a slippery slope we gleefully sledded down.

**§ §**

In chapter V, **Propaganda Under a Dictatorship**, Huxley explores what made Hitler (and our modern day Twitler) so effective.

Firstly, Hitler understood the masses. He didn’t think highly of them, but he understood how they operated:

The first principle from which he started was a value judgment: the masses are utterly contemptible. They are incapable of abstract thinking and uninterested in any fact outside the circle of their immediate experience.

Hitler (and Twitler) appealed to the *“members of the lower middle classes who had been ruined by the inflation of 1923, and then ruined all over again by the depression of 1929.”* They were a fertile field for a demagogue.

Here’s another bit I had to walk off:

To make them more masslike, more homogeneously subhuman, he assembled them, by the thousands and the tens of thousands, in vast halls and arenas, where individuals could lose their personal identity, even their elementary humanity, and be merged with the crowd.

Ever seen footage of a Twitler rally? The only thing missing is the nazi salute. (And, of course, any nazi sympathizers are clearly strong supporters of the head nazi.)

[No, I will not capitalize that word. I don’t like even using it. I’ve long thought the damn nazis should be consigned to the dusty books of our misbegotten history and *forgotten* as historical aberrations.]

In a word, a man in a crowd behaves as though he had swallowed a large dose of some powerful intoxicant. He is a victim of what I have called “herd-poisoning.” Like alcohol, herd-poisoning is an active, extraverted drug. The crowd-intoxicated individual escapes from responsibility, intelligence and morality into a kind of frantic, animal mindlessness.

Video clips from January 6 demonstrated this very clearly.

With regard to Twitler in particular, but Republican tactics in general:

Opponents should not be argued with; they should be attacked, shouted down, or, if they become too much of a nuisance, liquidated. The morally squeamish intellectual may be shocked by this kind of thing. But the masses are always convinced that “right is on the side of the active aggressor.”

How familiar does all this sound?

**§ §**

Is there a fix to this mess? In reality, probably not; we’ll muddle through in mediocrity as we usually do. Unless things get even more out of hand, and our values, which have eroded like a sand castle at high tide, completely collapse.

We hopefully have avoided the precipice this time, but it was close. A murderous mob invaded the Capitol with desecration, destruction, and death on their minds and in their hearts. *And the damn Republicans apparently are fine with this.*

As Huxley points out:

Unlike the masses, intellectuals have a taste for rationality and an interest in facts. Their critical habit of mind makes them resistant to the kind of propaganda that works so well on the majority.

So the fix, obviously, and as I’ve long said, is to pull your besotted head out of that ephemeral trivial *at least once in a while* and exercise your flabby out-of-shape brains.

Stay intellectual, my friends! Go forth and spread beauty and light.

∇

Where quantum mechanics takes place is a challenging ontological issue, but the way we compute it is another matter. The *math* takes place in a *complex vector space* known as *Hilbert space*.

Mathematically, a *quantum state* is a **vector** in Hilbert space.

I’ve written several times about parameter spaces (and there’s an old post on vectors). The thing about these spaces, especially vector spaces, is that the idea is very generic with many applications. An X-Y graph is a simple example of such a space, and — as demonstrated by the beer and ice cream spaces — we can define a variety of useful spaces.

Physics is about physical space (and what’s in it), but many important concepts involve *abstract* spaces (and what’s in them). A good grasp of physics (let alone quantum mechanics) requires a good grasp of these abstract spaces.

The topic is too large (and too well covered elsewhere) for me to try to teach it. This overview only highlights important concepts you’ll need in quantum mechanics.

**§ §**

As with physical space, an *abstract space* has the notion of a location within the space. We call each possible location a ** point** and identify it with a unique **coordinate**.

A primary characteristic of such a space is *dimensionality* — the number of independent components necessary in a coordinate to uniquely define a point in the space. Mathematical spaces can have as many **dimensions** as desired, including infinitely many.

A *Euclidean space* defines **orthogonal** axes, one per dimension of the space. The coordinates for a point are the orthogonal (right angle) projections of the point onto the axes.

Note however that, in quantum mechanics (and in math in general), the concept of *orthogonality* generalizes beyond the intuition of *right angle* to the more important concept of **linear independence** (another term is *degrees of freedom*).

**§**

Given a coordinate space with points, we derive the notion of a **distance** between two points.

Consider a simple space, the one-dimensional real number line. In this space, each point is a real number, and its coordinate is that number. (The point is right on top of its “shadow”, so to speak.) The space has only one dimension, so the coordinate has only one component.

The distance between two points on the line is their difference — one number minus the other.

But hang on, there’s a wrinkle. Let’s say one point is **12** and the other is **5**. The difference going one way is **+7**, but going the other way is **-7**. Is that really what we want when we speak of “distance” between points?

In some cases, yes, the sign tells us which way we’re going, but as a general concept we don’t refer to distance with negative values — there’s no such thing as a negative distance. (Except abstractly as a debit; *“miles to go before I sleep.”*)

What we can do is take the square root of the square:

distance = √( (a − b)² ) = |a − b|

Then we always get a positive number.

This is the foundation of the Pythagorean notion of distance in a Euclidean space.

We square the distances projected by the points on each axis, add those squares together, and then take the square root of the sum. It’s the same thing we just did with the one-dimensional number line, but extended to multiple dimensions.

Note that the squaring means it doesn’t matter in which order we subtract one from the other. Any negative value we get gets squared away, so we only sum positive distances.

In two dimensions, of course, this reduces to the famous Pythagorean theorem from high school geometry, but it applies to any number of dimensions.

For instance, in three dimensions:

distance = √( (x_{1}−x_{2})² + (y_{1}−y_{2})² + (z_{1}−z_{2})² )

Where (*x*,*y*,*z*)_{1} and (*x*,*y*,*z*)_{2} are any two points in 3D space.

You may not have thought of the Pythagorean theorem as a measure of distance between points, but that’s what it really is, a generalization of distance in a coordinate space. That’s why its form shows up in other contexts; standard deviation, for instance.
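The generalized distance is easy to sketch in a few lines of Python (a quick sanity check; Python 3.8+ also provides `math.dist`, which does the same thing):

```python
import math

def distance(p, q):
    """Euclidean distance between two points of any (equal) dimension."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

# One dimension: just the (absolute) difference.
print(distance((12,), (5,)))           # 7.0

# Three dimensions; the squaring means subtraction order doesn't matter.
print(distance((1, 2, 3), (4, 6, 3)))  # 5.0
print(distance((4, 6, 3), (1, 2, 3)))  # 5.0
```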

**§ §**

A **vector space** is a coordinate space where we imagine an “arrow” (a **vector**) pointing from the origin to each point.

While a coordinate space is an infinite density of points, a vector space is an infinite bristle of vectors all pointing out of the center, one vector per point.

(There are other types of vector spaces where the arrows don’t all start at the origin, but we’re not concerned with those here.)

Along with the idea of a distance between two points, a vector space allows some additional concepts:

Firstly, the arrow implies a distance from the origin to the point — a *length* to the arrow. We use the distance equation which, since the origin’s coordinates are all zero, simplifies to:

|v| = √( x² + y² + … )

We call this distance the **magnitude** or **norm** of the vector.

Note that there is a **zero vector** (all zeros), which has zero magnitude.
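A minimal sketch of the magnitude calculation; `math.hypot` (which accepts any number of components in Python 3.8+) computes exactly this square root of summed squares:

```python
import math

def magnitude(v):
    """Norm of a vector: the distance from the origin to the vector's tip."""
    return math.hypot(*v)  # sqrt(x**2 + y**2 + ...)

print(magnitude((3.0, 4.0)))  # 5.0 (the classic 3-4-5 triangle)
print(magnitude((0.0, 0.0)))  # 0.0 -- the zero vector has zero magnitude
```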

Secondly, the arrows in a vector space imply an *angle* between any two vectors or between any vector and any axis (an axis is an implicit vector).

Vectors and angles allow an alternate coordinate system, called polar coordinates, that identifies points by their angle and magnitude:

As a 2D vector space, the *complex plane* sees every complex number as a vector. Each has a *magnitude* and an *angle* (or **argument**) relative to the positive X-axis. Therefore the complex plane has a kind of built-in polar mapping.

Among other things this means:

For any complex number **z** with magnitude *η* and angle *θ*:

z = η (cos *θ* + *i* sin *θ*) = η e^{iθ}

These identities are extremely important in quantum mechanics (and in wave mechanics in general). [Especially the exponential form (right-hand side above). See Circular Math and Fourier Geometry for details.]

To get *η* and *θ* from some complex number **z** = a + b*i*:

η = |z| = √( a² + b² ), θ = atan2(b, a)

[One tip about quantum math: learn the Greek alphabet. I use *eta* here because the η looks like an ‘n’ so it’s often used for the ‘normalizing’ constant. *Theta* is very commonly used for an angle. There is also that *eta* and *theta* are alphabetically adjacent, *and* similar sounding, so they make a cute couple.]
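These identities are easy to verify with Python’s `cmath` module; `cmath.polar` returns exactly the (η, θ) pair described above:

```python
import cmath
import math

z = 1.0 + 1.0j               # a sample complex number

eta, theta = cmath.polar(z)  # magnitude and angle (argument)
print(eta, theta)            # √2 and π/4 (45°) for this z

# Euler: z = η(cos θ + i sin θ) = η e^(iθ) -- both forms recover z.
assert cmath.isclose(z, eta * (math.cos(theta) + 1j * math.sin(theta)))
assert cmath.isclose(z, eta * cmath.exp(1j * theta))
```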

**§**

The notion of an angle between two vectors lets us define the idea of an **inner product** (also known as a **dot product**).

Algebraically, we take the two vectors, multiply the respective components of their coordinates together, and then sum those products.

For example, in Figure 3:

**v** = (1.0, 1.5), **u** = (3.0, 1.0)

So their inner product is:

**v·u** = (1.0 × 3.0) + (1.5 × 1.0) = 3.0 + 1.5 = **4.5**

Note that the inner product of a vector with itself is the norm squared:

**v·v** = (1.0 × 1.0) + (1.5 × 1.5) = 1.0 + 2.25 = **3.25** = |*v*|^{2}

**u·u** = (3.0 × 3.0) + (1.0 × 1.0) = 9.0 + 1.0 = **10.0** = |*u*|^{2}

Because the inner product of a vector with itself is **x^{2}+y^{2}**, the same sum of squares that sits under the square root in the norm.

**§**

Geometrically, an inner product is an orthogonal projection of one vector onto the other. The inner product is the length of the projection times the length of the other vector.

In Figure 3, **p** is the projection of vector **v** onto vector **u**:

**v·u** = |*v*| |*u*| cos *θ*

Where *θ* is the angle between the vectors. The inner product is important enough that I’ll return to it in the future.
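A small sketch tying the algebraic and geometric views together, using the Figure 3 vectors:

```python
import math

def dot(v, u):
    """Algebraic inner product: multiply matching components, sum the products."""
    return sum(a * b for a, b in zip(v, u))

v = (1.0, 1.5)
u = (3.0, 1.0)

print(dot(v, u))  # 4.5

# A vector dotted with itself gives the norm squared.
print(dot(v, v))  # 3.25
print(dot(u, u))  # 10.0

# The geometric form agrees: v·u = |v| |u| cos(θ).
norm_v = math.sqrt(dot(v, v))
norm_u = math.sqrt(dot(u, u))
theta = math.atan2(1.5, 1.0) - math.atan2(1.0, 3.0)  # angle between v and u
assert math.isclose(dot(v, u), norm_v * norm_u * math.cos(theta))
```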

**§ §**

The intent here is that two-dimensional spaces offer some intuition about spaces with higher dimension. In some cases, a space can have an *infinite* number of dimensions. A very large number isn’t uncommon.

The need for many dimensions is hinted at by a phrase I used above: *“degrees of freedom.”* Each degree is a dimension, and here’s where things start to get a bit involved.

In three-dimensional space — our physical space — every particle has a 3D location coordinate and a 3D momentum or energy vector. Therefore, *every* particle has six degrees of freedom and requires six dimensions to specify. A two-particle system requires 12 dimensions since the particles are independent.

In classical mechanics, there is no quantum uncertainty, and the position vector, **r**, and the momentum vector, **p**, exactly specify the state of a particle.

In quantum mechanics, uncertainty changes the equation (literally, from Newton’s to Schrödinger’s), but in three dimensions we still need six numbers for two vectors. In this case we deal with position and *energy potential*.

To reduce the complexity, and because the math is essentially the same, most quantum mechanics instruction begins with one particle and one dimension. Physical dimensions interact, so it’s not as simple as just doing the math three times, but the principles are the same.

**§ §**

Next time: linear transforms and operators (and matrices, oh my). That’s looking like two posts, one for transforms, one for operators.

Down the road: quantum states, eigenvectors and eigenvalues, the Bloch sphere, and spin. As I mentioned in the Introduction, the ultimate goal is exploring the Bell’s Inequality experiments.

These posts will build on previous ones, so be sure to ask about anything you don’t fully understand — it may be important later.

Stay vectored, my friends! Go forth and spread beauty and light.

∇

This past week, courtesy of online library books, I finally did, and I do regret to report that I found the series rather underwhelming. I ended up skimming through the last half of the last book just to find out how it all turned out.

I think the biggest issue for me was lack of action. There was a ton of narration, explanation, internal monologue, and talking, but there wasn’t much action.

The trilogy consists of **The Collapsing Empire** (2017), **The Consuming Fire** (2018), and **The Last Emperox** (2020).

I recently learned the term *castle opera*, which strikes me as the perfect term. In the past I’ve used *court drama* or *court intrigue*, but castle opera hits the nail squarely. (For one thing, *court drama* could refer to court *room* drama, so it’s ambiguous.)

In castle opera, as one might imagine, the storytelling generally takes place at the level of royalty — kings and queens; princes and princesses; lords and ladies. All that goes with monarchy. A frequent part of such stories is the dirty double-dealing pretty much everyone is involved in.

The intersection between speculative fantasy and castle opera is large. Such fantasies often have a Medieval setting, so of course also castles and royalty. *Game of Thrones* is a modern example.

It’s a bit less common in standard science fiction stories perhaps because kings seem a little old-fashioned compared to spaceships and aliens.

There are exceptions, of course. *Dune* (the novel) is one notable example. There is also the *Foreigner* series, by C.J. Cherryh (I’ve enjoyed the first nine of what is now a 21-book series). Both are spacefaring castle opera (complete with plenty of dirty double-dealing).

[One might think *Star Wars*, but that series is far more Medieval fantasy than space opera. I see it as a child’s fairytale.]

**§**

Thing is, I’ve never been a fan of castle opera. *Dune* is a classic for good reason — it’s an outstanding story. (I re-read it about once a decade.) C.J. Cherryh is known for the quality of her writing, the depth of her characters, and her very interesting aliens, all of which elevate the *Foreigner* series (and her work in general).

There is also that Cherryh’s central character in *Foreigner*, Bren Cameron, isn’t himself royalty, but a working Everyman we come to like and care about. His day job happens to be dealing with alien royalty, including their king. (I rank that series among my favorites.)

A problem with castle opera is that it’s hard to relate to royalty, so the reader needs that hook into caring. We need a Bren Cameron.

Scalzi offers **Count Marce Claremont**, far more mathematician than Count, and not someone I found hugely engaging. Or really even there. He could have been replaced by any number of other narrative options. He’s also half of the tepid and boring yet central love sub-story.

(I suspect it’s hard for someone who isn’t a scientist to write a good scientist character. The best scientist characters come from authors who know their science. A tip to authors: Don’t try to write above yourself. It shows.)

The other characters are all royalty, a key one being the Emperox of known human space, **Grayland II**, formerly lowly **Cardenia Wu**. (Emperox is a genderless alternative to Emperor. Credit Scalzi for being strong on gender equality.)

Another key character is **Lady Kiva** — she’s young, very horny, and even more foul-mouthed than Avasarala from *The Expanse*. Marce starts off with Lady Kiva, but the relationship with Cardenia is the other side of the love story I mentioned.

That said, there is a certain sameness to Scalzi’s characters. The narration and dialog all have the same sardonic oh-so-clever tone. I found, after a while, that it got grating. *Really* grating.

**§**

The premise is that known human space in the far future consists of 48 star systems connected by the Flow — an “otherspace” that allows travel between star systems in reasonable time.

The concept isn’t new (*Babylon 5* used it, for example). Nor is the idea that otherspace only connects certain points. The 48 systems are united *because* they’re connected by the Flow.

What Scalzi has done (by his own admission) is make it a climate change metaphor. The Flow — which has been assumed an eternal natural resource — is about to collapse and isolate all 48 worlds. (There is no FTL in this reality.)

Adding to the *they should have known better* factor, the ancient history of the Interdependence includes the loss of the Flow path to Earth as well as to a more recent system. The possibility of the Flow changing was always known. (Also part of the metaphor: commercial and government interests not wanting to disrupt business.)

The problem is they depend on the Flow, not just for their economy but for their very existence. The Interdependence was deliberately engineered to make all 48 systems, as the name says, interdependent. The threat to civilization comes from the notion that no system can support itself.

**§**

Which I’m not sure I buy. If one has technology, energy, and an entire star system, what more is needed?

According to the story, the Merchant houses each hold exclusive rights to important commodities. One house does grapes and wine, another all types of citrus, yet another does grain. The idea is that licensing rights are sold and seedstock provided, but the genetics are engineered so the crops fail after a certain number of generations. This ensures licensees uphold quality… and payments.

Another wrinkle to the picture is that 47 of the star systems have no habitable planets. (Or apparently anywhere near habitable.) All of humanity (with one exception) lives in habitats, either underground or in space.

Part of the Interdependence is the need to keep the habitats running, especially those in orbit. (Although being stuck underground on a tidally locked planet with searing heat on one side and existential cold on the other is just as bad.)

The exception is the livable planet End, so named because it’s the furthest and most isolated in the Flow. An assumption of the story is that anyone who doesn’t get to End before the Flow collapses is toast.

The thing is, there’s a fix here that immediately suggests itself, and it’s a fix that does ultimately get used. (Share the manufacturing secrets and genetic code and end the monopolies because they’re ending anyway.)

**§**

Because of its remoteness, End is the dumping ground for some of the Empire’s undesirables. One is **Lord Claremont** who, by all appearances, must have offended someone, because he’s banished to End as the Imperial Tax Collector.

In reality he’s a Flow physicist the previous Emperox stashed on End to keep him out of the way while Lord Claremont gathered data and studied the Flow. His son, Count Claremont, has followed in his footsteps and become his dad’s partner.

They have determined the Flow is about to collapse (for the foreseeable future), and the first system to go will be End. Lord Claremont sends his son to Hub to inform the Emperox. The ship the Count takes is one of the last before the path to Hub collapses. The path back to End will remain open for quite some time yet, so there is hope he — along with the rest of humanity — can return.

To make things exciting, one of the Merchant families has schemed to take over End so they can rule, and they’ve blockaded the path back to End.

**§ §**

Sounds like an exciting setup to a rollicking space yarn. Yet somehow it’s all a lot of talk and internal monologue and pages of narration and explanation. At one point I skimmed through 12 pages of internal monologue. It ended up being sand in the story gears.

At one point, there was some excitement, and I thought the story was going to get into gear, but the next chapter featured narration and internal monologue, and the story bogged down again.

I was completely disengaged the last half of the last book, which should be the part where it all comes together and gets extra exciting. Instead I just wanted it to be over.

And I can’t say the ending much impressed me. So much for the great love affair.

Everyone is very nice and clever (except when the story needs them to miss a beat), the villains are quite lurid and villainous. It all works out nicely in the end, and everyone is happy except the villains.

I think the bottom line is the book just didn’t work for me.

**§**

In the afterword, Scalzi confesses he was distracted by world events while trying to write the book. I think we can all relate, but it does seem to have affected the storytelling.

Or maybe Scalzi’s quippy style just didn’t fit the topic (for me anyway, your mileage may vary). That style pulled me out of the narration repeatedly. It got grating and annoying.

There is the idea of transparent writing, where the writer tries hard to not be noticed. Scalzi has a definite noticeable style that worked very well in the other two books I read, but made this story seem trite and childish.

Stay flowing, my friends! Go forth and spread beauty and light.

∇

With all that more or less behind us, I have time to be wowed by interesting (and depressing) information about the insidious infection infesting the country and the world. I mention both because I became intrigued by the difference between them.

It all started when I noticed the COVID-19 graphic on CNN.

What struck me was the first-glance apparent disparity between the numbers for the USA and the numbers for the world. With an incompetent and self-serving administration, I knew we hadn’t dealt with the pandemic very well, but that first glance seemed to make us look especially bad.

I made a mental note to try to check out the numbers at some point and see how case and death rates in the USA compared to those worldwide.

The idea occasionally crossed my mind again, and at one point I wasn’t in the middle of something, so I tried a quick search. I kind of assumed COVID-19 data would be easy to find, and it was.

I downloaded a dataset of stats partitioned by country and date, wrote a little Python code, and created some charts…
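The normalization step is simple enough to sketch. This is not my actual script, just a stand-in with made-up round numbers shaped like the OWID data (the column names `location`, `total_cases`, and `population` are my recollection of the dataset’s headers; check the CSV before relying on them):

```python
import pandas as pd

# Stand-in rows shaped like the OWID dataset; the real file is the
# country-by-date CSV from the Our World In Data site.
df = pd.DataFrame({
    "location":    ["United States", "World"],
    "total_cases": [25_000_000, 100_000_000],
    "population":  [330_000_000, 7_800_000_000],
})

# Raw counts mislead; normalize to a percentage of each population.
df["cases_pct"] = 100.0 * df["total_cases"] / df["population"]

print(df[["location", "cases_pct"]])
# The USA row lands near 7.6%, the world row near 1.3% -- the gap
# the raw numbers hide.
```

With the full per-date data, a `df.plot()` call on the percentage column produces charts like the ones described above.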

**§**

Looking at COVID-19 cases and comparing raw numbers worldwide with the USA shows we have a fair chunk of the cases, but the rest of the world *seems* to have more than we do. Here are the raw numbers:

But raw numbers are misleading. The USA has a very large population. Still, it has to give one pause to consider that over 25 million people in the USA have been diagnosed with the disease.

Look at what happens with the same data charted as a *percentage of population*:

Now we’re way ahead of the curve (which isn’t good). And just consider the implication that **8%** of our population (that 25+ million) has been diagnosed with COVID.

The picture is about the same looking at fatalities. First, the raw numbers:

It’s worth mentioning again that more people have died in *less than* a year’s time due to COVID-19 (**439,463** as of this dataset) than died during all six years of WWII (**419,400**).

Here’s the death rate as a percentage of the population:

And we’re still leading the world. Yay?

It’s sobering to realize that **0.1** percent of our population has died due to this disease.

**§**

However, we’re neither alone nor leading the pack when we compare to other countries, at least in terms of percentages.

In terms of raw numbers, *“USA, we’re Number One!”*

Number one by quite a stretch.

[*Top ten on the left*: **United States**, India, Brazil, United Kingdom, Russia, France, Spain, Italy, Turkey, Germany.]

[*Top ten on the right*: **United States**, Brazil, Mexico, India, United Kingdom, Italy, France, Russia, Spain, Iran.]

We do better when ranked by percentage of the total population, but look at the company we’re keeping (nice people, surely, but hardly world powers or even just power hitters):

But we’re still a contender given there are some 190 countries in the dataset.

[*Top ten on the left*: Andorra, Montenegro, Czechia, San Marino, Luxembourg, Slovenia, **United States**, Panama, Israel, Portugal.]

[*Top ten on the right*: San Marino, Belgium, Slovenia, United Kingdom, Czechia, Italy, Bosnia and Herzegovina, North Macedonia, Liechtenstein, **United States**. *Interesting that the UK is actually leading us here.*]

The embarrassing thing is that, for a country of our status, power, ability, wealth, experience, science, and technology,… this really should have gone a lot better.

Not that there’s any mystery about why it didn’t.

Here’s a look at how fatalities are shared among the 16 countries with the largest death counts:

That’s an unfortunately big slice. Almost exactly one quarter.

[*In clockwise order from 12 o’clock*: United States, Brazil, Mexico, India, United Kingdom, Italy, France, Russia, Spain, Iran, Germany, Colombia, Argentina, South Africa, Peru, Poland.]

**§**

I’ll end by adding a few more countries to the percentage charts.

First the cases:

Then the deaths:

We’re not entirely out front by ourselves, but we sure have put our foot in it.

**§ §**

I don’t know if you remember, but four years ago people used to wonder what might happen if some sort of major national disaster occurred. Would that administration rise to the occasion?

Well, I guess that question kinda got answered, didn’t it.

And after all this, the outright insurrection just four weeks ago, those *may-they-be-eternally-damned* Republicans still peddle their bullshit. There are times when I hope COVID wins.

**§**

My thanks to everyone at the **Our World In Data** site (**OWID**). It’s a pretty awesome site with a lot of tools and a lot of datasets. At some point I might look into doing some USA states charts. The COVID section of the site is really amazing.

It has a lot of interactive charts and goes far beyond just COVID data. It’s worth checking out!

Stay masked, my friends! Go forth and spread beauty and light.

∇
