Exponents

After addition, multiplication (serial addition), subtraction (addition in reverse), and division (inverse multiplication), comes exponentiation (serial multiplication). Most of us learned about the basic x² stuff in school — it just means x-times-x. Likewise, x³ just means x-times-x-times-x. Serial multiplication. No problem.

But sometimes it’s x⁻² or x¹⸍², and it’s hard to see how those work as x-times-x-etc. As it turns out, they, and much more, can be understood based on a single axiom:

\displaystyle{x}^{n}={x_1}\times{x_2}\times\ldots\times{x_n}

Which is the rule behind the basic understanding you already have: the exponent says how many instances of a value to multiply together. Put in formal math terms, it looks like this:

\displaystyle\prod^{n}_{i=1}{x}_{i}

From this axiom we can very easily derive the first theorem:

\displaystyle{x}^{1}={x}

Any number to the power of one is just that number because there is only one instance of it in the multiply chain.
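The axiom is easy to act out in code. A minimal sketch; the `pow_chain` name is made up for illustration, not a standard function:

```python
from functools import reduce

def pow_chain(x, n):
    # x**n as a literal chain of n multiplications (n a positive integer),
    # starting from the multiplicative identity.
    return reduce(lambda acc, _: acc * x, range(n), 1)

print(pow_chain(2, 10))  # 1024
print(pow_chain(5, 1))   # 5 -- the first theorem: one instance in the chain
```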


We can also derive something so important that it’s often presented as a second axiom (but here is treated as a derived theorem):

\displaystyle{x}^{a+b}={x^a}\times{x^b}

Because:

\displaystyle{x}^{n}={x}^{a+b}=\left({x_1}\cdot{x_2}\cdot\ldots\cdot{x_a}\right)\times\left({x_1}\cdot{x_2}\cdot\ldots\cdot{x_b}\right)

The multiply chain can be broken into two parts consisting of a instances and b instances (where a+b=n). Those two parts are obviously multiplied together, so we derive the second theorem:

\displaystyle{x}^{a+b}={x^a}\times{x^b}

It’s perfectly possible to treat this and the first theorem above as the axioms and derive the axiom shown above (along with everything else). Doing it this way requires defining two axioms because we need the first theorem above as the seed:

\displaystyle{x}^{1}={x}\\[0.2em]{x}^{2}={x}^{1+1}={x}\times{x}\\[0.2em]{x}^{3}={x}^{1+2}={x}\times({x}\times{x})\\[0.2em]{x}^{4}={x}^{1+3}={x}\times({x}\times{x}\times{x})\\[0.2em]\vdots

This generalizes to the basic axiom we started with above (making it a derived theorem).
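A quick numeric check of the second theorem, using nothing beyond Python’s built-in `**` operator:

```python
x, a, b = 3, 4, 5
lhs = x ** (a + b)
rhs = x ** a * x ** b
print(lhs, rhs)  # 19683 19683 -- the chains split and recombine exactly
```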


We derive another key equality by noting that (by addition and the first theorem):

\displaystyle{x}^{1+0}={x}^{1}={x}

And therefore (using the second theorem):

\displaystyle{x}^{1+0}={x}^{1}\times{x}^{0}={x}

If we divide the last two terms by x¹ (which, per the first theorem, is just x)…

\displaystyle\frac{{x}^{1}\times{x}^{0}}{{x}^{1}}=\frac{x}{{x}^{1}}

…we end up with the very important third theorem:

\displaystyle{x}^{0}={1}

Any number raised to the power of zero is just one. The exception is 0⁰, because analysis leads to contradictions: for all non-zero exponents n, 0ⁿ=0, but according to the third theorem, 0⁰=1. So, in some math circles, 0⁰ is considered undefined (like division by zero).

Alternately, we can accept that 0ⁿ is a weird function that acts somewhat like the Dirac Delta function. It returns zero for all n except zero, where it returns one. The justification comes from viewing any multiplication as starting with the multiplicative identity (one) and then applying successive multiplications. Then 3×42 is viewed as first taking one to three (1×3) and then taking three to 126 (3×42). This seems redundant with regular numbers but makes more sense where multiplication is abstracted to other types of numbers.

Note that 0ⁿ can be made to trigger on other values of n, just like the Dirac Delta:

\displaystyle{f}(x)={0}^{(n-x)}

Which returns zero for all x except x=n (where the exponent n−x is zero). But, of course, this requires an algebra where 0⁰=1. (The Windows 10 calculator, for one, agrees.)
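Python happens to be an algebra where 0⁰=1, so the delta-like behavior can be sketched directly. The `delta` helper is hypothetical, and note that Python raises an error for zero to a negative power, so the sketch guards that case:

```python
def delta(x, n=0):
    # Returns 1 when x == n, else 0, riding on the 0**0 == 1 convention.
    # Guard: 0 to a negative power is a division by zero in Python.
    return 0 ** (n - x) if x <= n else 0

print(0 ** 0)                             # 1 -- Python's convention
print([delta(x, n=3) for x in range(6)])  # [0, 0, 0, 1, 0, 0]
```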


The theorem for x⁰ lets us derive a rule that might be surprising. We start by noting that:

\displaystyle{x}^{(n-n)}={x^0}={1}

By subtraction and the third theorem. If we re-express this as:

\displaystyle{x}^{\left(n+(-n)\right)}={x^0}={1}

Then we then can invoke the second theorem to say:

\displaystyle{x}^{\left(n+(-n)\right)}={x^n}\times{x^{-n}}={1}

If we divide the last two terms by xⁿ (similar to how we did it just above), we have the fourth theorem:

\displaystyle{x}^{-n}=\frac{1}{x^n}

Which says that negative exponents are the inverses of positive ones. A negative exponent just means “one-over” the positive version.
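The fourth theorem, checked numerically (floating point, so `math.isclose` rather than exact equality):

```python
import math

x, n = 2.0, 3
print(x ** -n)  # 0.125 -- "one-over" the positive version
assert math.isclose(x ** -n, 1 / x ** n)
```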


We can also derive what happens with non-integer exponents. We’ll start with a simple example by first noting that:

\displaystyle{x}^{(\frac{1}{2}+\frac{1}{2})}={x}^{1}={x}

By addition and the first theorem. As we’ve done above, we can invoke the second theorem to give us:

\displaystyle{x}^{(\frac{1}{2}+\frac{1}{2})}={x}^{\frac{1}{2}}\times{x}^{\frac{1}{2}}=\left({x}^{\frac{1}{2}}\right)^2={x}

And if we take the square root of the last two terms, we get a special case of the fifth theorem:

\displaystyle{x}^{\frac{1}{2}}=\sqrt{x}

We can make this more general by starting with:

\displaystyle{x}^{(\frac{1}{n}+\frac{1}{n}+\cdots+\frac{1}{n})}={x}^{1}={x}

Where the fraction 1/n is repeated n times, and then, using the second theorem:

\displaystyle{x}^{(\frac{1}{n}+\frac{1}{n}+\cdots+\frac{1}{n})}=\left({x}^{\frac{1}{n}}\right)^n={x}

Using the same logic as above (taking the nth root of last two terms) we get the general case of the fifth theorem:

\displaystyle{x}^{\frac{1}{n}}=\sqrt[n]{x}

So fractional exponents (with a numerator of one) give us roots.
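The fifth theorem in code: fractional exponents with a numerator of one really do compute roots (again using `math.isclose` for floating point):

```python
import math

x = 81.0
print(x ** 0.5)      # 9.0 -- the square root
print(x ** (1 / 4))  # ~3.0 -- the fourth root (81 == 3**4)
assert math.isclose(x ** 0.5, math.sqrt(x))
assert math.isclose(x ** (1 / 4), 3.0)
```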


Finally, recursive exponents such as:

\displaystyle\left({x}^{a}\right)^{b}

Note that, by the initial axiom:

\displaystyle\left({x}^{a}\right)^{b}=(x^a)_{1}\times(x^a)_{2}\times\ldots\times(x^a)_{b}

And each xᵃ expands to:

\displaystyle{x}^{a}={x}_{1}\times{x}_{2}\times\ldots\times{x}_{a}

So, we have x times itself a times, times itself b times, which is just a times b in terms of total instances of x. This gives us the sixth theorem:

\displaystyle\left({x}^{a}\right)^{b}={x}^{({a}\times{b})}={x}^{({b}\times{a})}=\left({x}^{b}\right)^{a}

Note that a and b commute here because multiplication commutes. Recursive exponents just multiply. This works nicely in reverse. For instance:

\displaystyle{x}^{\frac{3}{5}}\!={x}^{(\frac{1}{5}\times{3})}\!=\!\left({x}^{\frac{1}{5}}\right)^{3}\!=\!\left(\sqrt[5]{x}\right)^3\!=\!\left({x}^{3}\right)^{\frac{1}{5}}\!=\!\sqrt[5]{x^3}

Which can be very helpful with fractional exponents that have numerators other than one.
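The sixth theorem and its fractional-exponent payoff, verified numerically:

```python
import math

x = 2.0
assert math.isclose((x ** 3) ** 4, x ** 12)  # recursive exponents multiply
# x**(3/5) computed three equivalent ways:
a = x ** (3 / 5)
assert math.isclose(a, (x ** (1 / 5)) ** 3)  # cube of the fifth root
assert math.isclose(a, (x ** 3) ** (1 / 5))  # fifth root of the cube
print(a)  # ~1.5157
```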


Amazing the theorems we can derive from that initial axiom, eh?


With regard to that last example, we have the general case:

\displaystyle{x}^{p/q}={x}^{\textrm{int}(p/q)+\textrm{rem}(p/q)}

Where the int function returns the integer part of the division, and the rem function returns the fractional part left over. Using the second theorem, we get:

\displaystyle{x}^{p/q}={x}^{\textrm{int}(p/q)}\times{x}^{\textrm{rem}(p/q)}

Which we can simplify to:

\displaystyle{x}^{p/q}={x}^{n}\times{x}^{r},\;\;{n}\in\mathbb{Z},{r}\in\mathbb{R}

Which means any fractional exponent can be divided into an integer and a remaining fraction. Note that n can be zero, and 0 ≤ r < 1. See the chart below for an idea of how that plays out.
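The split falls out of Python’s built-in `divmod` almost directly. A sketch, assuming integer p and q:

```python
import math

x, p, q = 2.0, 7, 2    # exponent 7/2 = 3 + 1/2
n, rem = divmod(p, q)  # integer part 3, remainder 1, so r = rem/q = 1/2
assert math.isclose(x ** (p / q), x ** n * x ** (rem / q))
print(x ** n, x ** (rem / q))  # 8.0 and ~1.4142 -- their product is 2**3.5
```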


In some cases, the fractional exponent, r, is a sum of one or more unit fractions of the form 1/q, each of which contributes a root. In the simplest case, the fraction is 1/2, which gives the square root of x. [See the fifth theorem above.]

But we could also have:

\displaystyle{x}^{3/4}={x}^{1/2}\!\cdot{x}^{1/4}=\sqrt[2]{x}\cdot\sqrt[4]{x}

Which is a simple example with just two roots. Other fractions can be more involved, requiring lots of roots. The general form is:

\displaystyle{x}^{r}={x}^{\frac{a}{2}}\cdot{x}^{\frac{b}{3}}\cdot{x}^{\frac{c}{4}}\cdot{x}^{\frac{d}{5}}\cdot{x}^{\frac{e}{6}}\cdots

Where a,b,c,d,e,… ∈ {0, 1}. A coefficient is one if its root should be included in the exponent sum, otherwise zero. A real exponent generally requires an infinite sum, but a rational exponent is a finite sum of terms from the above series.


Lastly, here are some useful equalities. Just take them on faith for now. They’re all based on this general one:

\displaystyle{A}^{x}={B}^{{x}\cdot\log_B(A)}

This can be useful if you have a large number in exponential form and want to convert it to a more familiar form (or just a form compatible with some other exponential number). For example, we can convert between the natural logarithm and the more familiar base ten:

\displaystyle{e}^{x}={10}^{{x}\cdot\log_{10}(e)}={10}^{{x}\cdot{0.43429}\ldots}\\[0.5em]{10}^{x}={e}^{{x}\cdot\ln(10)}={e}^{{x}\cdot{2.30258}\ldots}

It’s also useful for translating large powers of two into (or from) base ten:

\displaystyle{2}^{x}={10}^{{x}\cdot\log_{10}(2)}={10}^{{x}\cdot{0.30103}\ldots}\\[0.5em]{10}^{x}={2}^{{x}\cdot\log_{2}(10)}={2}^{{x}\cdot{3.32193}\ldots}

Note that the log(base) term in the exponent is just a constant. Note also that, in the two pairs above, the constants are reciprocals of each other.
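Those conversions are easy to verify with the standard `math` module:

```python
import math

x = 20
assert math.isclose(2 ** x, 10 ** (x * math.log10(2)))
assert math.isclose(math.e ** x, 10 ** (x * math.log10(math.e)))
# The two constants in each pair really are reciprocals:
assert math.isclose(math.log10(2) * math.log2(10), 1.0)
assert math.isclose(math.log10(math.e) * math.log(10), 1.0)
```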


Basically, log is a function that reverses exponentiation:

\displaystyle\textsf{given: }{x}^{a}={N},\;\;\textsf{then: }\log_{x}{N}={a}

Which means:

\displaystyle{x}^{\log_{x}{N}}={N}

We see the utility of the log function when we combine the above with the second theorem:

\displaystyle{x}^{\log_{x}\!{A}+\log_{x}\!{B}}={x}^{\log_{x}\!{A}}\times{x}^{\log_{x}\!{B}}={A}\times{B}

Logs let us do products by doing (much easier) sums. They’re how slide rules work.
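The slide-rule trick in one line of Python: add the logs, then exponentiate the sum:

```python
import math

A, B = 42.0, 1.7
product_via_logs = 10 ** (math.log10(A) + math.log10(B))
print(product_via_logs)  # ~71.4, i.e. 42 * 1.7
assert math.isclose(product_via_logs, A * B)
```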


Here’s a little table that illustrates the above theorems and shows a progression from very small values to very large:

This       Equals       Because
x⁻³        1/(x⋅x⋅x)    4th theorem
x⁻²        1/(x⋅x)      4th theorem
x⁻¹        1/(x)        4th theorem
x⁻¹⸍²      1/(²√x)      4th+5th theorems
x⁻¹⸍³      1/(³√x)      4th+5th theorems
x⁻¹⸍⁴      1/(⁴√x)      4th+5th theorems
x⁰         1            3rd theorem
x⁺¹⸍⁴      ⁴√x          5th theorem
x⁺¹⸍³      ³√x          5th theorem
x⁺¹⸍²      ²√x          5th theorem
x⁺³⸍⁴      ²√x⋅⁴√x      x⁺¹⸍²⋅x⁺¹⸍⁴
x⁺¹        x            1st theorem
x⁺³⸍²      x⋅²√x        x⁺¹⋅x⁺¹⸍²
x⁺²        x⋅x          initial axiom
x⁺⁵⸍²      x⋅x⋅²√x      x⁺²⋅x⁺¹⸍²
x⁺³        x⋅x⋅x        initial axiom

The actual values, of course, depend on the value of x.


The Exponential Function

An examination of exponents wouldn’t be complete without a few words about the exponential function:

\displaystyle\exp(x)={e}^{x}

It’s formally defined:

\displaystyle\exp(x)=\sum_{n=0}^{\infty}\frac{\;\;x^n}{n!}

Which expands to the conceptually infinite series:

\displaystyle\exp(x)=\frac{x^0}{0!}+\frac{x^1}{1!}+\frac{x^2}{2!}+\frac{x^3}{3!}+\frac{x^4}{4!}\cdots

Or, evaluating what we can:

\displaystyle\exp(x)=1+x+\frac{x^2}{2}+\frac{x^3}{6}+\frac{x^4}{24}\cdots

What’s great about this definition is that for x we can plug in anything with a multiplication operation — complex numbers, even matrices. Each appearance of x here has a positive integer power, so each appearance is just x times itself some number of times (including zero and one). This makes the exponential function widely applicable.
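The series is also easy to compute directly. A minimal sketch that builds each term from the previous one (avoiding big factorials); the `exp_series` name is made up for illustration:

```python
import math

def exp_series(x, terms=30):
    # Partial sum of x**n / n! for n = 0 .. terms-1.
    total, term = 0.0, 1.0   # term starts at x**0 / 0! == 1
    for n in range(terms):
        total += term
        term *= x / (n + 1)  # turns x**n/n! into x**(n+1)/(n+1)!
    return total

assert math.isclose(exp_series(1.0), math.e)  # exp(1) == e
assert math.isclose(exp_series(2.5), math.exp(2.5))
```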

It’s common in complex math where it’s one side of Euler’s Formula:

\displaystyle{e}^{i\theta}=\cos(\theta)+{i}\sin(\theta)

This is the source of Euler’s Identity, which has been called the most beautiful equation in math:

\displaystyle{e}^{i\pi}+1=0

See the Beautiful Math post for an overview. Also see Circular Math and Sideband #70: The exp Function for more on the exponential function.
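Python’s standard `cmath` module can confirm Euler’s Identity to within floating-point error:

```python
import cmath
import math

z = cmath.exp(1j * math.pi)  # e**(i*pi)
print(z)                     # ~(-1 + 1.2e-16j)
assert math.isclose(z.real, -1.0)
assert abs(z.imag) < 1e-12   # zero, up to rounding in pi
assert abs(z + 1) < 1e-12    # e**(i*pi) + 1 == 0
```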


It’s worth digging into the mechanics of taking the derivative of the exponential function. It begins with the basic fact that the derivative of the exponential function is the exponential function:

\displaystyle\frac{d}{dx}\,{e}^{x}={e}^{x}

The slope of eˣ at x is always just eˣ. Note this is true when the exponent is just x. (We’ll sort of see why below.) If the exponent is an expression containing x, then the expression is a function of x, and the chain rule applies.

The chain rule is:

\displaystyle\frac{d}{dx}\,f(g(x))={f'}(g(x))\cdot{g'}(x)

If a function, f, depends on a sub-function, g, multiply the derivative of the outer function (evaluated at the inner function) by the derivative of the inner function.

To see how this works, we can apply the chain rule to the basic exponential function (which we know derives to itself). We’ll treat the plain x in the exponent as a function, g(x), that just returns x.

Applying the chain rule and setting u=g(x) to simplify the exponential:

\displaystyle\frac{d}{dx}\,{e}^{g(x)}=\frac{d}{du}\,{e}^{u}\,\cdot\,\frac{d}{dx}\,g(x)={e}^{u}\,\cdot\,\frac{d}{dx}\,{x}

Since the derivative of x is just 1, we end up with, as expected, the original exponential. Rearranged a bit, this is a basic formula for deriving the exponential function:

\displaystyle\frac{d}{dx}\,{e}^{u}=u'\;{e}^{u}

Where u is some expression containing (at least one occurrence of) x. Just take the derivative of that expression and multiply it to the exponential.

With that understanding, let’s try this:

\displaystyle\frac{d}{dx}\,{e}^{2x}=\frac{d}{dx}\,\left[{2x}\right]{e}^{2x}={2}\cdot{e}^{2x}

Because the derivative of 2x is just 2.

Here’s one with a square of x, such as appears in the Gaussian exponential function:

\displaystyle\frac{d}{dx}\,{e}^{x^2}=\frac{d}{dx}\,\left[{x}^{2}\right]{e}^{x^2}={2x}\cdot{e}^{x^2}

Because the derivative of x² is 2x.
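Both results can be checked with a central-difference approximation of the derivative, a standard numerical sketch (the `numderiv` helper is made up for illustration):

```python
import math

def numderiv(f, x, h=1e-6):
    # Central-difference approximation of f'(x).
    return (f(x + h) - f(x - h)) / (2 * h)

x = 0.7
# d/dx e**(2x) == 2 * e**(2x)
assert math.isclose(numderiv(lambda t: math.exp(2 * t), x),
                    2 * math.exp(2 * x), rel_tol=1e-6)
# d/dx e**(x**2) == 2x * e**(x**2)
assert math.isclose(numderiv(lambda t: math.exp(t * t), x),
                    2 * x * math.exp(x * x), rel_tol=1e-6)
```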

If there is a constant in front of the exponential, it just gets multiplied by the derivative of the exponent:

\displaystyle\frac{d}{dx}\,{ae}^{x^2}=\frac{d}{dx}\,\left[{x}^{2}\right]{ae}^{x^2}={2ax}\cdot{e}^{x^2}

We can break this down as an instance of the product rule, which is:

\displaystyle\frac{d}{dx}\,f(x)\,g(x)=f(x)g'(x)+f'(x)g(x)

But since a is a constant, it derives to zero, so:

\displaystyle\frac{d}{dx}{ae}^{x^2}\!=\!\left[{a}\cdot\frac{d}{dx}{e}^{x^2}\right]\!\!+\!\!\left[\frac{d}{dx}{a}\cdot{e}^{x^2}\right]\!=\!\left[2ax\cdot{e}^{x^2}\right]\!\!+\!\!\left[0\cdot{e}^{x^2}\right] 

Which gives us the result shown above.

Finally, note that, since a constant derives to zero, an exponential function with a constant exponent also derives to zero:

\displaystyle\frac{d}{dx}\,{e}^{k}=\frac{d}{dx}\,\left[{k}\right]{e}^{k}={0e}^{k}=0

Hopefully this helps make it clear how to derive more involved exponential functions!


(more to come).

\displaystyle{x}=\Omega

