Exponents

After addition, multiplication (serial addition), subtraction (addition in reverse), and division (inverse multiplication), comes exponentiation (serial multiplication).

Most of us learned about the basic x² stuff in school — it just means x-times-x. Likewise, x³ just means x-times-x-times-x. Serial multiplication. No problem.

But sometimes it’s x⁻² or x¹⸍², and it’s hard to see how those work as x-times-x-etc. As it turns out, they, and much more, can be understood based on a single axiom:

\displaystyle{x}^{n}={x_1}\times{x_2}\times\ldots\times{x_n}

This is the rule behind the basic understanding you already have. The exponent says how many times to multiply a value times itself. Put in formal math terms, it looks like this:

\displaystyle{x}^{n}=\prod^{n}_{i=1}{x}_{i}

Which says the same thing. From this axiom we can very easily derive the first theorem:

\displaystyle{x}^{1}={x}

Any number to the power of one is just that number because there is only one instance of it in the multiply chain.
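
If you like seeing that in code, here’s a quick Python sketch of the axiom (the values of x and n are arbitrary):

import math

x, n = 3, 4
print(math.prod([x] * n))   # 81 -- x multiplied together n times (Python 3.8+)
print(x ** n)               # 81 -- the built-in exponent agrees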


We can also derive something so important that it’s often presented as a second axiom (but here is treated as a derived theorem):

\displaystyle{x}^{a+b}={x^a}\times{x^b}

Because:

\displaystyle{x}^{n}={x}^{a+b}=\left({x_1}\cdot{x_2}\cdot\ldots{x_a}\right)\times\left({x_1}\cdot{x_2}\cdot\ldots{x_b}\right)

The multiply chain can be broken into two parts, one with a instances and one with b instances (where a+b=n). Those two parts are obviously multiplied together, so we derive the second theorem:

\displaystyle{x}^{a+b}={x^a}\times{x^b}

It’s perfectly possible to go the other way: treat this and the first theorem above as the axioms and derive the original axiom (along with everything else). Doing it that way requires two axioms because the first theorem is needed as the seed:

\displaystyle{x}^{1}={x}\\[0.2em]{x}^{2}={x}^{1+1}={x}\times{x}\\[0.2em]{x}^{3}={x}^{1+2}={x}\times({x}\times{x})\\[0.2em]{x}^{4}={x}^{1+3}={x}\times({x}\times{x}\times{x})\\[0.2em]\vdots

This generalizes to the basic axiom we started with above (making it a derived theorem).
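
A quick numerical spot-check of the second theorem (arbitrary integer values):

x, a, b = 2, 3, 4
print(x ** (a + b))      # 128
print(x ** a * x ** b)   # 128 -- splitting the exponent splits the multiply chain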


We derive another key equality by noting that (by addition and the first theorem):

\displaystyle{x}^{1+0}={x}^{1}={x}

And therefore (using the second theorem):

\displaystyle{x}^{1+0}={x}^{1}\times{x}^{0}={x}

If we divide the last two terms by x¹ (which, per the first theorem, is just x)…

\displaystyle\frac{{x}^{1}\times{x}^{0}}{{x}^{1}}=\frac{x}{{x}^{1}}

…we end up with the very important third theorem:

\displaystyle{x}^{0}={1}

Any number raised to the power of zero is just one. The exception is 0⁰, where two rules collide: for all non-zero exponents n, 0ⁿ=0, but according to the third theorem, 0⁰=1. So, in some math circles, 0⁰ is considered undefined (like division by zero).

Alternatively, we can accept that 0ⁿ is a weird function of n that acts somewhat like the Dirac delta function. It returns zero for all n except zero, where it returns one. The justification comes from viewing any multiplication as starting with the multiplicative identity (one) and then applying successive multiplications. Then 3×42 is viewed as first taking one to three (1×3) and then taking three to 126 (3×42). This seems redundant with regular numbers but makes more sense where multiplication is abstracted to other types of numbers.
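
That identity-first view is easy to sketch in Python (a toy implementation, not how real math libraries do it):

def power(x, n):
    result = 1               # start from the multiplicative identity...
    for _ in range(n):       # ...and multiply by x, n times
        result *= x
    return result

print(power(3, 0))   # 1 -- the empty chain leaves the identity untouched
print(power(0, 5))   # 0
print(power(0, 0))   # 1 -- so 0**0 falls out as one under this view
print(0 ** 0)        # 1 -- Python's built-in operator agrees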

Note that 0ⁿ can be made to trigger at values other than zero, just like the Dirac delta:

\displaystyle{f}(x)={0}^{(n-x)}

Which returns zero for all x except where n−x=0 (for some chosen n). Strictly, that holds for x ≤ n; a negative exponent on zero would mean dividing by zero. But, of course, this requires an algebra where 0⁰=1. (The Windows 10 calculator, for one, does.)
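
Here’s that behavior in a quick Python loop (stopping at x = n for the reason just noted):

n = 5
for x in range(n + 1):
    print(x, 0 ** (n - x))   # prints 0 for every x except x = n, where it prints 1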


The theorem for x⁰ lets us derive a rule that might be surprising. We start by noting that:

\displaystyle{x}^{(n-n)}={x^0}={1}

By subtraction and the third theorem. If we re-express this as:

\displaystyle{x}^{\left(n+(-n)\right)}={x^0}={1}

Then we can invoke the second theorem to say:

\displaystyle{x}^{\left(n+(-n)\right)}={x^n}\times{x^{-n}}={1}

If we divide the last two terms by xⁿ (similar to how we did it just above), we have the fourth theorem:

\displaystyle{x}^{-n}=\frac{1}{x^n}

Which says that negative exponents are the inverses of positive ones. A negative exponent just means “one-over” the positive version.
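
A quick check of the fourth theorem (x = 2.0 chosen so the values come out exact):

x = 2.0
print(x ** -3)      # 0.125
print(1 / x ** 3)   # 0.125 -- a negative exponent is one-over the positive one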


We can also derive what happens with non-integer exponents. We’ll start with a simple example by first noting that:

\displaystyle{x}^{(\frac{1}{2}+\frac{1}{2})}={x}^{1}={x}

By addition and the first theorem. As we’ve done above, we can invoke the second theorem to give us:

\displaystyle{x}^{(\frac{1}{2}+\frac{1}{2})}={x}^{\frac{1}{2}}\times{x}^{\frac{1}{2}}=\left({x}^{\frac{1}{2}}\right)^2={x}

And if we take the square root of the last two terms, we get a special case of the fifth theorem:

\displaystyle{x}^{\frac{1}{2}}=\sqrt{x}

We can make this more general by starting with:

\displaystyle{x}^{(\frac{1}{n}+\frac{1}{n}+\cdots+\frac{1}{n})}={x}^{1}={x}

Where the fraction 1/n is repeated n times, and then, using the second theorem:

\displaystyle{x}^{(\frac{1}{n}+\frac{1}{n}+\cdots+\frac{1}{n})}=\left({x}^{\frac{1}{n}}\right)^n={x}

Using the same logic as above (taking the nth root of the last two terms), we get the general case of the fifth theorem:

\displaystyle{x}^{\frac{1}{n}}=\sqrt[n]{x}

So fractional exponents (with a numerator of one) give us roots.
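
Numerically, in Python (bearing in mind the usual floating-point rounding):

import math

x = 5.0
print(math.isclose(x ** 0.5, math.sqrt(x)))   # True -- x^(1/2) is the square root
print(27 ** (1 / 3))                          # ~3.0 -- the cube root, give or take rounding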


Finally, consider recursive exponents such as:

\displaystyle\left({x}^{a}\right)^{b}

Note that, by the initial axiom:

\displaystyle\left({x}^{a}\right)^{b}=(x^a)_{1}\times(x^a)_{2}\times\ldots(x^a)_{b}

And each xᵃ expands to:

\displaystyle{x}^{a}={x}_{1}\times{x}_{2}\ldots{x}_{a}

So, we have x multiplied by itself a times, and that whole product multiplied by itself b times, which comes to a times b total instances of x. This gives us the sixth theorem:

\displaystyle\left({x}^{a}\right)^{b}={x}^{({a}\times{b})}={x}^{({b}\times{a})}=\left({x}^{b}\right)^{a}

Note that a and b commute here because multiplication commutes. Recursive exponents just multiply. This works nicely in reverse. For instance:

\displaystyle{x}^{\frac{3}{5}}\!={x}^{(\frac{1}{5}\times{3})}\!=\!\left({x}^{\frac{1}{5}}\right)^{3}\!=\!\left(\sqrt[5]{x}\right)^3\!=\!\left({x}^{3}\right)^{\frac{1}{5}}\!=\!\sqrt[5]{x^3}

Which can be very helpful with fractional exponents that have numerators other than one.
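
And a spot-check of that last chain in Python (x is arbitrary):

import math

x = 7.0
a = (x ** (1 / 5)) ** 3    # the fifth root of x, cubed
b = (x ** 3) ** (1 / 5)    # the fifth root of x cubed
print(math.isclose(a, b), math.isclose(a, x ** 0.6))   # True True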


Amazing the theorems we can derive from that initial axiom, eh?


With regard to that last example, we have the general case:

\displaystyle{x}^{p/q}={x}^{\textrm{int}(p/q)+\textrm{rem}(p/q)}

Where the int function returns the integer part of the division, and the rem function returns the remaining fractional part. Using the second theorem, we get:

\displaystyle{x}^{p/q}={x}^{\textrm{int}(p/q)}\times{x}^{\textrm{rem}(p/q)}

Which we can simplify to:

\displaystyle{x}^{p/q}={x}^{n}\times{x}^{r},\;\;{n}\in\mathbb{Z},{r}\in\mathbb{R}

Which means any fractional exponent can be split into an integer part and a remaining fraction. Note that n can be zero, and 0 ≤ r < 1. See the chart below for an idea of how that plays out.
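
In Python, math.modf does that int/rem split for us (a sketch with an arbitrary fraction):

import math

x, p, q = 2.0, 7, 3
r, n = math.modf(p / q)    # 7/3 splits into n = 2.0 and r = 0.333...
print(math.isclose(x ** (p / q), (x ** n) * (x ** r)))   # True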


In some cases, the fractional part is the sum of one or more unit fractions, which makes the power a product of roots. Then the exponent, r, is the sum of a finite set of rational numbers of the form 1/q. In the simplest case, the fraction is 1/2, which gives the square root of x. [See the fifth theorem above.]

But we could also have:

\displaystyle{x}^{3/4}={x}^{1/2}\!\cdot{x}^{1/4}=\sqrt[2]{x}\cdot\sqrt[4]{x}

Which is a simple example with just two roots. Other fractions can be more involved, requiring lots of roots. The general form is:

\displaystyle{x}^{r}={x}^{\frac{a}{2}}\cdot{x}^{\frac{b}{3}}\cdot{x}^{\frac{c}{4}}\cdot{x}^{\frac{d}{5}}\cdot{x}^{\frac{e}{6}}\cdots

Where a,b,c,d,e,… ∈ {0, 1}. A coefficient is one if the corresponding root is included, otherwise zero. An irrational exponent generally requires an infinite sum, but a rational exponent needs only a finite number of terms from the above series.
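
The classic greedy method finds such a decomposition for any rational exponent between zero and one. A Python sketch (it produces one valid decomposition; others exist):

from fractions import Fraction
import math

def unit_fractions(r):
    # Split 0 < r < 1 into distinct unit fractions, greedily.
    parts = []
    while r > 0:
        q = -(-r.denominator // r.numerator)   # ceiling of 1/r
        parts.append(Fraction(1, q))
        r -= Fraction(1, q)
    return parts

r = Fraction(3, 4)
print(unit_fractions(r))    # [Fraction(1, 2), Fraction(1, 4)] -- i.e. 1/2 + 1/4

x = 5.0
roots = math.prod(x ** float(p) for p in unit_fractions(r))
print(math.isclose(roots, x ** float(r)))   # True -- the product of roots matches x^r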


Here’s a little table that illustrates the above theorems and shows a progression from very small values to very large:

This Equals Because
x⁻³ 1/(x⋅x⋅x) 4th theorem
x⁻² 1/(x⋅x) 4th theorem
x⁻¹ 1/(x) 4th theorem
x⁻¹⸍² 1/(²√x) 4th+5th theorem
x⁻¹⸍³ 1/(³√x) 4th+5th theorem
x⁻¹⸍⁴ 1/(⁴√x) 4th+5th theorem
x⁰ 1 3rd theorem
x⁺¹⸍⁴ ⁴√x 5th theorem
x⁺¹⸍³ ³√x 5th theorem
x⁺¹⸍² ²√x 5th theorem
x⁺³⸍⁴ ²√x⋅⁴√x x⁺¹⸍²⋅x⁺¹⸍⁴
x⁺¹ x 1st theorem
x⁺³⸍² x⋅²√x x⁺¹⋅x⁺¹⸍²
x⁺² x⋅x initial axiom
x⁺⁵⸍² x⋅x⋅²√x x⁺²⋅x⁺¹⸍²
x⁺³ x⋅x⋅x initial axiom

The actual values, of course, depend on the value of x.
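
To see the progression with actual numbers, here’s a tiny Python loop that evaluates the table for x = 2:

exponents = [-3, -2, -1, -1/2, -1/3, -1/4, 0, 1/4, 1/3, 1/2, 3/4, 1, 3/2, 2, 5/2, 3]
for e in exponents:
    print(f"2^{e} = {2 ** e}")   # climbs from 0.125 up through 8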


The Log Function

The log function is the opposite of exponentiation. Rather than getting a result by applying an exponent to some base number, the log function starts with the result and returns the exponent necessary to get that result (given some base number):

\displaystyle\textsf{given: }{x}^{a}={N},\;\;\textsf{then: }\log_{x}(N)={a}

Where x is the base number (typically 2, 10, or e), a is the exponent, and N is the result. Which gives the following identity:

\displaystyle{x}^{a}={x}^{\log_{x}(N)}={N}

We see the utility of the log function when we combine the above with the second theorem of exponents:

\displaystyle{A}\times{B}={x}^{\log_{x}(A)}\times{x}^{\log_{x}(B)}={x}^{\log_{x}(A)+\log_{x}(B)}

Logs let us calculate products by calculating (much easier) sums. They’re how slide rules work. [See Abacus and Slide Rule]
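
Here’s that products-by-sums trick in Python (any base works; base 10 keeps it familiar):

import math

A, B = 6.0, 7.0
total = math.log10(A) + math.log10(B)     # add the logs (what a slide rule does)
print(math.isclose(10 ** total, A * B))   # True -- exponentiating the sum gives the product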


The log function allows easily changing the base number in exponentiation. The identity between base numbers A and B is:

\displaystyle{A}^{x}={B}^{{x}\times\log_{B}(A)}

This is useful if you want to convert to a more familiar or compatible form. For example, suppose you wanted to compare 2⁷⁰ and 10²⁴. The conversion between them is:

\displaystyle{2}^{x}={10}^{{x}\times\log_{10}(2)}={10}^{{x}\times{0.30103}\ldots}\\[0.5em]{10}^{x}={2}^{{x}\times\log_{2}(10)}={2}^{{x}\times{3.32193}\ldots}

So, 2⁷⁰ ≈ 10²¹ and 10²⁴ ≈ 2⁷⁹.
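
You can verify those conversions numerically in Python:

import math

print(70 * math.log10(2))   # ~21.07, so 2**70 is a bit more than 10**21
print(24 * math.log2(10))   # ~79.73, so 10**24 sits between 2**79 and 2**80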

Another example would be converting between the familiar base-10 log and the natural log, which is based on the number e. That conversion is:

\displaystyle{e}^{x}={10}^{{x}\times\log_{10}(e)}={10}^{{x}\times{0.43429}\ldots}\\[0.5em]{10}^{x}={e}^{{x}\times\ln(10)}={e}^{{x}\times{2.30258}\ldots}

Note that the log(base) term in the exponent is just a constant. Note also that, in each of the pairs above, the respective constants are reciprocals of each other. That is:

\displaystyle\log_{a}(b) = \frac{1}{\log_{b}(a)}

Which means only one constant is necessary to convert either way between two bases. When the log used for the constant matches the target base, multiply the exponent by the constant. When it matches the source base, divide the exponent by it. For example, suppose you take the base-10 log of 2.0, which is ∼0.30103. The conversions then are:

\displaystyle{2}^{x}={10}^{{x}\times\log_{10}(2)}={10}^{{x}\times{0.30103}\ldots}\\[0.5em]{10}^{x}={2}^{{x}\div\log_{10}(2)}={2}^{{x}\div{0.30103}\ldots}

In the first case, the constant’s base ten matches the target base (literally 10), so the original exponent is multiplied by the constant. In the second case, the constant’s base ten matches the original base, so the original exponent is divided by the constant. Another interesting example is:

\displaystyle{x}^{n}={e}^{{n}\times\ln(x)}\\[0.2em]{n}^{x}={e}^{{x}\times\ln(n)}\\[0.2em]{x}^{x}={e}^{{x}\times\ln(x)}

The bottom line is that Aˣ and Bʸ can always be equated by using the log function.
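
To close the loop, here’s a Python sketch that exercises both directions of the conversion with a single constant, plus the e-based identity from the last display:

import math

c = math.log10(2)                              # one constant converts both ways
print(math.isclose(2 ** 10, 10 ** (10 * c)))   # True -- multiply: constant's base matches the target
print(math.isclose(10 ** 3, 2 ** (3 / c)))     # True -- divide: constant's base matches the source

x, n = 3.0, 5
print(math.exp(n * math.log(x)))               # ~243.0 -- x**n computed as e^(n * ln x)
print(x ** n)                                  # 243.0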


Lastly, note that you cannot take the log of zero because there is no exponent a that satisfies:

\displaystyle{x}^{a}={0}

For any x (with the possible exception of zero, but see below). This means the reverse operation, trying to find the exponent a by taking the log of zero:

\displaystyle\log_{x}{0}={a}

Is undefined.

The minor fly in the ointment is 0ᵃ, which does return zero, but zero isn’t a valid base because the result is zero for any non-zero a. Which implies the log of zero, base zero, could be any value of a. Or all the values of a. There’s no one result, and functions have to return a definite result.

(There’s a slightly bigger fly regarding 0⁰, which some systems treat as 1, but technically it’s an undefined number.)


