Chapter 02
Exponents and Exponential Functions: The Math of Growth and Activation
Exponentiation is repeated multiplication of the same base; an exponential function fixes the base and uses the exponent as the variable. Both appear throughout deep learning, especially in activation-function and loss design.
What are exponents and exponential functions?
An exponent counts how many times a number (the base) is multiplied by itself: $2^3 = 2 \times 2 \times 2 = 8$. As in the fact that folding a piece of paper 42 times would reach the Moon, repeated multiplication (not repeated addition) makes values grow explosively (exponential growth).
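The paper-folding claim is plain arithmetic with `**`; a quick sketch, assuming a sheet 0.1 mm thick:

```python
# Folding doubles the thickness each time: after n folds, thickness = t0 * 2**n.
t0_mm = 0.1                                 # starting thickness in millimeters (assumption)
thickness_km = t0_mm * 2**42 / 1_000_000    # convert mm -> km
print(round(thickness_km))                  # ~439,805 km, past the Moon's ~384,400 km
```

Forty-two doublings turn a tenth of a millimeter into hundreds of thousands of kilometers, which is exactly the "explosive" behavior the text describes.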
An exponential function puts that repeated power in a variable: $y = a^x$. In polynomials the variable sits in the base ($x^2$); in exponentials it sits in the exponent ($2^x$). That means growth proportional to current size. If $a > 1$, the value shoots up as $x$ increases (exponential growth); if $0 < a < 1$, it quickly approaches 0 as $x$ increases (exponential decay). Radioactive half-life and discounting in interest calculations are examples of exponential decay.
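The two regimes are easy to see side by side; a minimal sketch using base 2 for growth and base 0.5 for decay (both chosen only as illustrations):

```python
# Exponential growth (base > 1) vs. exponential decay (0 < base < 1).
for x in range(6):
    print(x, 2**x, 0.5**x)   # 2**x doubles each step; 0.5**x halves toward 0
```

At every step the growth column doubles and the decay column halves: the change is proportional to the current value, which is the defining property of exponentials.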
The natural constant $e$ (about 2.718…) is the most important base in math and AI. The function $e^x$ is the only function (up to a constant multiple) that stays unchanged when differentiated: $\frac{d}{dx} e^x = e^x$. That property dramatically simplifies the calculus inside deep learning.
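The self-derivative property can be checked numerically with a central difference; $x = 1.0$ is an arbitrary test point chosen for the sketch:

```python
import math

# Numerically verify d/dx e^x = e^x at one point using a central difference.
x, h = 1.0, 1e-6
numeric = (math.exp(x + h) - math.exp(x - h)) / (2 * h)
print(abs(numeric - math.exp(x)) < 1e-6)   # the slope matches e^x itself
```

The numeric slope and the function value agree to well below the tolerance, which is what "unchanged when differentiated" means in practice.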
In AI, exponentials are the building blocks of activation functions. Linear computation ($Wx + b$) alone cannot solve complex problems; exponentials are used to bend the signal (nonlinearity) or to squeeze values smoothly between 0 and 1.
The output is always positive. The graph of $y = e^x$ lies entirely above the $x$-axis: $e^x > 0$ for every real $x$. A model cannot say "the probability is -50%," so exponentials are essential whenever outputs must be positive (e.g. probabilities or positive scores).
They amplify small differences. Inputs 1 and 2 differ by only 1, but $10^1 = 10$ and $10^2 = 100$ differ by 90. AI uses this to sharply separate similar data and classify with confidence.
Efficient differentiation: backpropagation is a long chain of derivatives. The exponential keeps its shape when differentiated ($\frac{d}{dx} e^x = e^x$) or stays in a simple closed form, which is crucial for fast, stable training.
Used in the softmax function. When AI chooses one answer out of 1000 images, it applies $e^{z}$ to each score $z$ and normalizes: $\mathrm{softmax}(z_i) = e^{z_i} / \sum_j e^{z_j}$. Slightly higher scores get much larger values and lower ones shrink toward 0, so the model can say "this is the answer" with 99% confidence.
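Softmax fits in a few lines of plain Python; the scores `[5.0, 1.0, 0.5]` below are made-up illustration values:

```python
import math

# Minimal softmax sketch: exponentiate each score, then normalize to sum to 1.
# Subtracting the max first is the standard trick for numerical stability.
def softmax(scores):
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

probs = softmax([5.0, 1.0, 0.5])
print([round(p, 3) for p in probs])   # -> [0.971, 0.018, 0.011]
```

A raw-score gap of only a few points becomes near-certainty after exponentiation, which is exactly the amplification effect described above.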
The sigmoid function $\sigma(x) = \frac{1}{1 + e^{-x}}$ squeezes any input into $(0, 1)$. The output never exceeds 1 or goes below 0, so the neuron acts like a smooth on/off switch.
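A minimal sigmoid sketch, following the formula directly:

```python
import math

# Sigmoid squeezes any real input into (0, 1): sigma(x) = 1 / (1 + e^(-x)).
def sigmoid(x):
    return 1 / (1 + math.exp(-x))

print(sigmoid(0))     # 0.5, the midpoint of the switch
print(sigmoid(10))    # close to 1 ("on")
print(sigmoid(-10))   # close to 0 ("off")
```

Large positive inputs saturate near 1 and large negative inputs near 0, but the output never actually reaches either bound.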
| Expression | Value |
|---|---|
| $2^0$ | 1 |
| $2^1$ | 2 |
| $2^2$ | 4 |
| $2^3$ | 8 |
| $2^4$ | 16 |
| $3^2$ | 9 |
| $3^3$ | 27 |
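These values can be checked with Python's `**` operator; a quick sketch, assuming the table lists small powers of 2 and 3:

```python
# Print each power alongside its value.
for base, exp in [(2, 0), (2, 1), (2, 2), (2, 3), (2, 4), (3, 2), (3, 3)]:
    print(f"{base}^{exp} = {base**exp}")
```

Note that any nonzero base raised to the power 0 equals 1, which is why the table starts at 1.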
[Interactive visual: shows the value of a power as its base and exponent change, to see at a glance how the two relate.]
Problem types and how to solve them
| Type | Description | How to get the answer |
|---|---|---|
| Find value | Compute a given power, e.g. $2^3$. | Multiply the base by itself exponent-many times. E.g. $2^3 = 2 \times 2 \times 2 = 8$. |
| Find exponent | Solve $a^x = $ (given value) for $x$. | Ask "how many times do we multiply the base to reach this value?" That count is the answer. E.g. $2^3 = 8$, so $x = 3$. |
| Compare | Given two powers, which is larger: (1) or (2)? | Compute each, then compare. If (1) is larger enter 1; if (2) is larger, enter 2. |
| Product, same base | Simplify $a^m \times a^n$. | Add the exponents. (Rule: $a^m \times a^n = a^{m+n}$) |
| Quotient, same base | Simplify $a^m \div a^n$ ($a \neq 0$). | Subtract the exponents. (Rule: $a^m \div a^n = a^{m-n}$) |
| Power of a power | Simplify $(a^m)^n$. | Multiply the exponents. (Rule: $(a^m)^n = a^{mn}$) |
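The three rules in the table can be checked with concrete numbers; $a = 2$, $m = 3$, $n = 4$ are arbitrary choices for the sketch:

```python
# Verify the exponent rules with small integers.
a, m, n = 2, 3, 4
assert a**m * a**n == a**(m + n)     # product: add exponents
assert a**m / a**n == a**(m - n)     # quotient: subtract exponents (a != 0)
assert (a**m)**n == a**(m * n)       # power of a power: multiply exponents
print("all three rules hold")
```

Each rule is just bookkeeping for how many copies of the base get multiplied, so the checks pass for any nonzero base.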