Chapter 03

Logarithm

A logarithm answers the question 'how many times do we multiply the base to get this number?' It is the inverse of exponentiation, and in deep learning it appears alongside exponentials in loss functions and probability expressions.


The logarithm is the inverse of the exponential. y = \log_2 x means 2^y = x. Below are the graphs of y = \log_2 x and its inverse y = 2^x.

[Graph: y = \log_2 x passes through (1, 0); its inverse y = 2^x passes through (0, 1).]

Example: \log_2 1 = 0, \log_2 2 = 1, \log_2 4 = 2, \log_2 8 = 3 (when 2^y = x, y is \log_2 x)

Purple: y = \log_2 x, Teal: y = 2^x

What is the logarithm?

The logarithm is the inverse of exponentiation. When a^x = b, we write \log_a b = x: "a raised to what power gives b?" Here a is the base, b is the argument, and the log value is the exponent (often written as x).
Example: 2^3 = 8, so \log_2 8 = 3. \log_{10} 100 = 2 (since 10^2 = 100). When the base is e, we write the natural log \ln, used often in deep learning and statistics.
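The values above can be checked with Python's standard math module, which provides \log_2 as math.log2, \log_{10} as math.log10, and \ln as math.log. A minimal sketch (the particular numbers are just the examples from the text):

```python
import math

# Logarithm as the inverse of exponentiation: if a^x = b, then log_a(b) = x.
assert math.log2(8) == 3        # 2^3 = 8
assert math.log10(100) == 2     # 10^2 = 100

# Natural log (base e), common in deep learning and statistics:
x = 5.0
assert math.isclose(math.log(math.exp(x)), x)   # ln undoes e^x

# An arbitrary base via math.log(b, a), computed as log(b)/log(a):
assert math.isclose(math.log(9, 3), 2)          # 3^2 = 9
```

Note that math.isclose is used for the base-e and arbitrary-base cases, since those go through floating-point division and may not be bit-exact.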
Log of a product and of a quotient: \log_a(b \cdot c) = \log_a b + \log_a c (a product becomes a sum in log space), and \log_a(b/c) = \log_a b - \log_a c (a quotient becomes a difference). In AI, multiplying probabilities uses this form frequently.
In AI, loss functions (e.g. cross-entropy) and probability expressions use \log so that products become sums, making computation and differentiation easier. Why use \log? When many probabilities are multiplied together, the result becomes too small to represent; taking logs turns the product into a sum, so computation stays numerically stable and gradient descent is easier.
In deep learning, loss functions often apply \log to probabilities to measure 'how wrong' a prediction is. Knowing logarithms helps you see why \log appears there.
In AI, \log is used to put probabilities or scores on a log scale. Cross-entropy loss uses the negative log probability of the correct class, so as the model's prediction for that class approaches 1, the loss goes to zero. The log-sum form \log(p_1 \cdot p_2) = \log p_1 + \log p_2 appears often in loss and probability expressions.
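The stability argument above can be seen directly in code. A minimal sketch, with made-up probabilities: a product of 100 small probabilities underflows to 0.0 in double precision, while the equivalent sum of logs is an ordinary number, and the negative log of the correct-class probability behaves like a cross-entropy loss term.

```python
import math

# Multiplying many small probabilities underflows to 0.0 in double precision,
# while summing their logs stays well-behaved. The probabilities here are
# hypothetical, chosen just for illustration.
probs = [1e-4] * 100          # 100 hypothetical independent probabilities

product = 1.0
for p in probs:
    product *= p
# product is now 0.0: the true value 1e-400 is below the smallest double.

log_sum = sum(math.log(p) for p in probs)  # log(p1*p2*...) = log p1 + log p2 + ...
# log_sum ≈ -921.03, a perfectly representable number.

# Cross-entropy for a single sample: negative log probability of the true class.
p_correct = 0.9
loss = -math.log(p_correct)   # ≈ 0.105; goes to 0 as p_correct → 1
```

This is why deep learning frameworks work with log-probabilities (and losses like negative log-likelihood) rather than raw products of probabilities.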
Example      Value
\log_2 8     3 (2^3 = 8)
\log_2 4     2
\log_3 9     2
The log value is an integer only when the argument is an integer power of the base.
Operations frequently used with logarithms (often in AI loss and probability):
Operation        Formula                                    Note
Log sum          \log_a b + \log_a c = \log_a(b \cdot c)    product → sum
Log difference   \log_a b - \log_a c = \log_a(b/c)          quotient → difference
Power            \log_a(b^n) = n \cdot \log_a b             exponent out front
Example          Calculation
Log sum          \log_2 2 + \log_2 4 = 1 + 2 = 3
Log difference   \log_2 8 - \log_2 2 = 3 - 1 = 2
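The three rules in the table above can be verified numerically, a quick sketch using base-2 logs and the same numbers as the worked examples:

```python
import math

# Log sum: product → sum.  log2(2) + log2(4) = 1 + 2 = 3 = log2(8)
assert math.log2(2) + math.log2(4) == math.log2(2 * 4)

# Log difference: quotient → difference.  log2(8) - log2(2) = 3 - 1 = 2 = log2(4)
assert math.log2(8) - math.log2(2) == math.log2(8 / 2)

# Power rule: exponent comes out front.  log2(8^2) = 2 * log2(8) = 6
assert math.isclose(math.log2(8 ** 2), 2 * math.log2(8))
```

These equalities happen to be exact here because the arguments are powers of 2; for general arguments, compare with math.isclose rather than ==.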
In the problems below, find log values, arguments, log sums, or log differences.