When I was in high school, bras were of great interest to me — mostly in regards to trying to remove them from my girlfriends. That was my errant youth and it slightly tickles my sense of the absurd that they’ve once again become a topic of interest, although in this case it’s a whole other kind of bra.
Understanding them is one of the many important steps to climb.
The notation uses the vertical bar (|) and the angle brackets (〈 and 〉) to construct |kets〉 and 〈bras|.
The two canonical kets, for example, are |0〉 and |1〉, which we would refer to verbally as ket zero and ket one, respectively.
Part of the utility of the notation comes from the ability to put anything one wants into a ket. For example, recalling the infamous cat, we might write |alive〉 and |dead〉.
Some authors even use the cat icons: |😺〉 and |😿〉 (which I fear risks not being rendered correctly on all systems; they are, respectively, the happy and sad cat icons for those with systems that didn’t know what to do with the Unicode).
The point is that kets can be evocative as well as mathematical. They provide a convenient way to talk casually about quantum states.
When they are mathematical, a ket is a column vector representing a quantum state vector. In a two-level system, the canonical |0〉 and |1〉 states are defined:
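For reference, the standard definitions (the same ones used in the examples further down) are:

```latex
|0\rangle = \begin{bmatrix} 1 \\ 0 \end{bmatrix},
\qquad
|1\rangle = \begin{bmatrix} 0 \\ 1 \end{bmatrix}
```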
Why vertical column vectors rather than the more familiar horizontal row vectors? Because when we apply an operator to a vector to get another vector, we’re multiplying the vector by a square matrix, and that only works with column vectors:
It’s not a legitimate operation to multiply a square matrix by a row vector:
That’s because matrix multiplication requires that the column count of the left matrix match the row count of the right matrix. In a two-level system, the operator matrix has two rows and two columns, but a row vector has only one row. A column vector has two rows, though, so the multiplication works.
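If you want to see the shape rule in action, here’s a quick sketch in Python with numpy (my choice for illustration; it isn’t part of these posts’ LaTeX toolchain). I use the Pauli-X (“NOT”) gate as a stand-in operator:

```python
import numpy as np

# A 2x2 operator; here the Pauli-X ("NOT") gate as a stand-in example.
X = np.array([[0, 1],
              [1, 0]])

ket = np.array([[1],
                [0]])        # column vector |0>, shape (2, 1)
row = np.array([[1, 0]])     # row vector, shape (1, 2)

print(X @ ket)               # (2,2) @ (2,1) -> (2,1): works, gives |1>

try:
    X @ row                  # (2,2) @ (1,2): column count 2 != row count 1
except ValueError:
    print("multiplying a square matrix by a row vector fails")
```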
So a ket is a column vector representing a quantum state. The complementary bra is a row vector, but it’s a little more than just that.
Given some ket |a〉, the bra 〈a| is its complex conjugate transpose. Recall that the transpose of a matrix is a flip along its main diagonal, and that the conjugate of a complex number reverses the sign of the “imaginary” part.
For a two-level quantum system:
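In symbols, for a ket with components x and y:

```latex
|a\rangle = \begin{bmatrix} x \\ y \end{bmatrix}
\quad\Longrightarrow\quad
\langle a| = \begin{bmatrix} x^* & y^* \end{bmatrix}
```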
Where x* and y* are the complex conjugates of x and y.
Note that the definitions of both kets and bras are a little more involved mathematically (see the Wiki page), but the above will get one through most of the basic situations. (For instance, it’s enough for everything in this series.)
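For a numerical sanity check, here’s the conjugate transpose in numpy (again, just an illustrative sketch, with made-up component values):

```python
import numpy as np

a = np.array([[2 + 1j],
              [3 - 4j]])     # a ket |a> with complex components

bra = a.conj().T             # the bra <a|: complex conjugate transpose
print(bra)                   # a row vector with conjugated components
```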
So what can we do with kets and bras?
Two of the most common operations are reflected in the canonical representation of a two-level superposition:
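That canonical form is usually written like this (the particular Greek letters vary by author):

```latex
|\psi\rangle = \alpha|0\rangle + \beta|1\rangle,
\qquad |\alpha|^2 + |\beta|^2 = 1
```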
Firstly, we can add kets, each a quantum state, to create a new state that is a superposition. This isn’t limited to only two:
For as many as we need. We just add the column vectors. (Which does require that each has the same number of rows, but that is normally the case with multiple states of a given system.)
Secondly, as shown in both examples above, we can multiply a ket by a numeric value (which can be real or complex):
We multiply each component of the column vector by the numeric value. (Using eta (η) as the numeric value is fairly common. It looks a bit like an n, which can stand for normalization constant.)
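Both operations, adding kets and scaling by a number, are just column-vector arithmetic, which a numpy sketch makes concrete (illustration only, not the post’s own notation):

```python
import numpy as np

ket0 = np.array([[1], [0]])
ket1 = np.array([[0], [1]])

eta = 1 / np.sqrt(2)             # normalization constant
psi = eta * ket0 + eta * ket1    # equal superposition of |0> and |1>
print(psi)                       # each component is 1/sqrt(2)
```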
Another common operation is to take the inner product of two vectors. We do this by converting one of them to a bra:
Note that the result of this operation is a single numeric value, not a vector. (That value can be complex if the vector components are complex.)
One can think of an inner product as the product (i.e. the multiplication) of two multi-dimensional numbers. When such numbers have just one component (making them essentially ordinary numbers), then the inner product reduces to simple multiplication:
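Here’s the inner product numerically, using numpy as an illustration (the component values are made up). Note that numpy’s `vdot` conjugates its first argument, which matches turning the first ket into a bra:

```python
import numpy as np

a = np.array([2 + 1j, 3])
b = np.array([1, 4 - 2j])

# np.vdot conjugates its first argument, matching <a|b>
inner = np.vdot(a, b)
print(inner)                 # a single (complex) number, not a vector
```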
Note the alternate way of writing the inner product operation: 〈a,b〉. The general form 〈·,·〉 is often used to denote the inner product itself, and a vector space equipped with one is called an inner product space.
Another common notation is to use the dot operator: a·b (using the “middle dot” symbol, not the period), because when dealing with vectors, the inner product is sometimes called the dot product.
It’s also sometimes called the scalar product of two vectors — referencing the notion of a product operation and a single numerical result.
More formally the inner product is, in part, the projection of one vector onto the other. This is especially helpful in determining if two vectors are orthogonal to each other — if they are, their inner product is zero.
For instance, the canonical |0〉 and |1〉 states are orthogonal:
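Working it out with the standard column vectors:

```latex
\langle 0|1\rangle = \begin{bmatrix} 1 & 0 \end{bmatrix}
\begin{bmatrix} 0 \\ 1 \end{bmatrix} = (1)(0) + (0)(1) = 0
```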
The inner product of a vector with itself gives the length (or magnitude) squared of the vector. Given some vector v=(2,3):
Which is the same as the Pythagorean length:
So the square root of the inner product of a vector with itself is its length. If the vector is normalized, the length (and its square) are 1.
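The v=(2,3) example, sketched in numpy for anyone who wants to check the arithmetic:

```python
import numpy as np

v = np.array([2, 3])

length_sq = np.vdot(v, v)      # <v|v> = 2*2 + 3*3 = 13
length = np.sqrt(length_sq)    # Pythagorean length, sqrt(13)
print(length_sq, length)

v_hat = v / length             # normalized copy of v
print(np.vdot(v_hat, v_hat))   # 1.0 (up to floating-point error)
```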
The term inner product raises an obvious question: Is there an outer product? There is, and it’s written as a ket followed by a bra: |a〉〈b|.
Unlike the inner product, which returns a single numeric value, the outer product returns a square matrix. That means we can use the notation as a convenient way to define operators.
For example, the outer product |0〉〈0| is:
And the outer product of |1〉〈1| is:
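Working both out with the standard column vectors:

```latex
|0\rangle\langle 0| = \begin{bmatrix} 1 \\ 0 \end{bmatrix}
\begin{bmatrix} 1 & 0 \end{bmatrix}
= \begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix},
\qquad
|1\rangle\langle 1| = \begin{bmatrix} 0 \\ 1 \end{bmatrix}
\begin{bmatrix} 0 & 1 \end{bmatrix}
= \begin{bmatrix} 0 & 0 \\ 0 & 1 \end{bmatrix}
```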
We don’t have to use the same vectors, we are free to mix and match. For example:
We can also use other values. We’re not restricted to the |0〉 and |1〉 kets.
We can combine outer products to create more interesting matrices. For example, if we add |0〉〈0|+|1〉〈1| we get the identity matrix:
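In numpy (an illustrative sketch, as before), the outer product of a column vector and a row vector is just matrix multiplication, and the two sum to the identity:

```python
import numpy as np

ket0 = np.array([[1], [0]])
ket1 = np.array([[0], [1]])
bra0 = ket0.conj().T
bra1 = ket1.conj().T

identity = ket0 @ bra0 + ket1 @ bra1   # |0><0| + |1><1|
print(identity)                        # the 2x2 identity matrix
```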
We can multiply either or both by a number to create something different. For example:
Which is the Z-axis spin operator (see: Quantum Spin).
I wrote it with an explicit -1 to illustrate multiplying one of the outer products by a constant, but a cleaner way (and the usual way) is to subtract the second outer product from the first, which gives us the necessary minus value:
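Sketched in numpy, the subtraction gives the familiar Z matrix (assuming, as the text does, the Pauli-matrix form of the operator):

```python
import numpy as np

ket0 = np.array([[1], [0]])
ket1 = np.array([[0], [1]])

Z = ket0 @ ket0.conj().T - ket1 @ ket1.conj().T   # |0><0| - |1><1|
print(Z)                                          # [[1, 0], [0, -1]]
```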
While we’re at it, here is how we can construct the X-axis spin operator:
And here how we can construct the Y-axis spin operator:
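Both constructions can be checked numerically. This numpy sketch assumes the standard Pauli-matrix conventions for X and Y (mixing the kets for X, and adding factors of ±i for Y):

```python
import numpy as np

ket0 = np.array([[1], [0]], dtype=complex)
ket1 = np.array([[0], [1]], dtype=complex)

X = ket0 @ ket1.conj().T + ket1 @ ket0.conj().T            # |0><1| + |1><0|
Y = 1j * ket1 @ ket0.conj().T - 1j * ket0 @ ket1.conj().T  # i|1><0| - i|0><1|
print(X)   # [[0, 1], [1, 0]]
print(Y)   # [[0, -i], [i, 0]]
```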
Pay attention to the various combinations and minus signs!
As you can see, a large part of the value of bra-ket notation is due to the ability to so easily represent operations like inner and outer product as well as adding and multiplying by constants.
One thing we cannot do with two kets is multiply them. Writing |a〉|b〉 is not |a〉 times |b〉, because we cannot multiply two column vectors. For both, the column count is one and the row count is two, so the shapes never line up. (Lacking a plus sign, nor is it their sum.)
The notation is sometimes used simply to indicate a pair of unrelated quantum states (say two non-entangled particles) in a given system, but it is more often used to describe entangled particles.
In that case, we use the tensor product, which gives us a new column vector whose row count is the product of the two inputs’ row counts (four rows, for two two-level kets):
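In numpy the tensor (Kronecker) product is `np.kron`; a quick illustrative sketch:

```python
import numpy as np

ket0 = np.array([[1], [0]])
ket1 = np.array([[0], [1]])

pair = np.kron(ket0, ket1)   # tensor product |0>|1>: a 4-row column vector
print(pair)                  # [[0], [1], [0], [0]]
```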
I’ll return to this when I post about entanglement.
As a final note, the angle brackets used in kets and bras are not the less-than and greater-than symbols (‘<’ and ‘>’).
In HTML, the angle brackets are ⟨ and ⟩, the “left angle” and “right angle” symbols.
In the LaTeX system (which is how all the math is implemented in these posts) they are \langle and \rangle.
Their Unicode code points are 10216 and 10217, respectively, or 27E8 and 27E9 in hex (the mathematical left and right angle brackets), so they can also be included in HTML using numerical character references. WordPress insists on converting these, but the generic form starts with &# (for a decimal value) or &#x (for a hex value), then has the numeric value followed by a semi-colon.
For purists, the vertical bar (|), which is an ordinary ASCII character, also has a dedicated Unicode code point, the light vertical bar: 10072 (2758 hex).
Just don’t use the greater-than and less-than symbols. It looks wrong and it is wrong.
Stay bra-ket-ed, my friends! Go forth and spread beauty and light.