Chapter 13
Summary: A map of AI at a glance
Everything you learned in Ch01–Ch12 fits into a single neural network diagram.
Deep learning diagram by chapter
As you complete each chapter, the diagram below fills in. This is the structure so far.
Chapter 01 Vector dot product: Finding similarity between data
Left X1,X2,X3 and right Y1,Y2,Y3 are connected by lines. Each right node is the dot product of the left with weights.
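A minimal NumPy sketch of that picture (the input and weight values here are made up for illustration):

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0])    # input nodes X1, X2, X3
w = np.array([0.5, -1.0, 2.0])   # weights on the connecting lines

y = np.dot(x, w)  # 1*0.5 + 2*(-1.0) + 3*2.0 = 4.5
```

The larger the dot product, the more the input "points in the same direction" as the weight vector, which is why it measures similarity.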
Chapter 02 Matrix multiplication: The magic of computing at once
Left is one row of matrix A; right Y1–Y3 are dot products with columns of B. Together they form the matrix product A·B.
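The same idea in code, with illustrative numbers: one matrix multiplication computes all three dot products at once.

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0]])       # one row of inputs
B = np.array([[0.5,  1.0, 0.0],
              [-1.0, 0.0, 1.0],
              [2.0,  1.0, 1.0]])      # each column is one weight vector

Y = A @ B  # Y[0, j] is the dot product of the row with column j of B
```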
Chapter 03 Linear layer: Weights that decide importance
This block is a linear layer: the whole input is transformed to the next layer in one step as Y = W·X + b.
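As a sketch, with made-up weights and bias, the whole layer is one line of NumPy:

```python
import numpy as np

W = np.array([[0.5, -1.0, 2.0],
              [1.0,  0.0, 1.0],
              [0.0,  1.0, 1.0]])  # 3x3 weight matrix
b = np.array([0.1, 0.2, 0.3])    # one bias per output node
x = np.array([1.0, 2.0, 3.0])

y = W @ x + b  # the entire linear layer in one step
```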
Chapter 04 Activation function: Adding judgment to AI
Representative activation functions make output Y change nonlinearly with input X.
Node values pass through a nonlinearity such as ReLU or σ, and the last-layer values Y1, Y2, Y3 come out of that step.
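The two functions named above can be sketched directly:

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)          # negative signals are cut to 0

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))    # squashes any value into (0, 1)

out = relu(np.array([-2.0, 0.0, 3.0]))  # [0., 0., 3.]
mid = sigmoid(0.0)                      # 0.5
```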
Chapter 05 Artificial neuron: A unit that gathers information and sends signals
Inside the dashed circle is one artificial neuron. Input (X) times weights (w·x+b), then ReLU, gives output (Y).
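That dashed circle, written out as a function (the numbers below are illustrative):

```python
import numpy as np

def neuron(x, w, b):
    """One artificial neuron: weighted sum, bias, then ReLU."""
    z = np.dot(w, x) + b
    return max(z, 0.0)

y = neuron(np.array([1.0, 2.0]), np.array([0.5, -1.0]), 0.2)
# relu(0.5 - 2.0 + 0.2) = relu(-1.3) = 0.0 -- this neuron stays silent
```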
Chapter 06 Batch processing: Learning together in one go
So when we merge inputs into one table, output Y also comes out as one table at once.
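A sketch of batching, with illustrative weights: three input rows go in as one table and three output rows come out in a single matrix multiplication.

```python
import numpy as np

W = np.array([[0.5, -1.0],
              [1.0,  1.0]])
b = np.array([0.1, 0.2])

X = np.array([[1.0, 2.0],    # sample 1
              [3.0, 4.0],    # sample 2
              [5.0, 6.0]])   # sample 3 -- three inputs in one table

Y = X @ W.T + b              # three outputs in one table, one matmul
```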
Chapter 07 Weight connections: The countless chains that build intelligence
In the diagram, circles are values and each line between layers is one weight (w). Multiply the inputs by their weights, sum them, then add the bias (b) to get the next layer Y.
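Following the lines one by one in plain Python (made-up numbers) makes the "countless chains" explicit — each `w[j][i]` is one line in the diagram:

```python
x = [1.0, 2.0, 3.0]
w = [[0.5, -1.0, 2.0],
     [1.0,  0.0, 1.0]]
b = [0.1, 0.2]

y = []
for j in range(len(w)):          # one output node at a time
    total = b[j]                 # start from the bias
    for i in range(len(x)):      # one connecting line at a time
        total += w[j][i] * x[i]
    y.append(total)
```

The matrix form Y = W·X + b from Chapter 03 is exactly this double loop, done at once.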
Chapter 08 Hidden layer: The invisible depth of thought
We only see input (X) and output (Y). The layer in between is used only inside the network, so it’s the hidden layer.
Values flow input → hidden → output. The hidden layer is an internal representation we don’t see.
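A sketch of that flow with illustrative weights — note that `h` exists only between the two layers:

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

x = np.array([1.0, -1.0])                 # visible input
W1 = np.array([[1.0, 2.0], [0.5, -0.5]])  # input -> hidden
b1 = np.array([0.0, 0.1])
W2 = np.array([[1.0, -1.0]])              # hidden -> output
b2 = np.array([0.5])

h = relu(W1 @ x + b1)  # hidden layer: used only inside the network
y = W2 @ h + b2        # visible output
```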
Chapter 09 Deep network: The power to solve more complex problems
Deep = many hidden layers (middle steps). The “deep” in deep learning is this depth.
More steps mean a deeper network. Deeper networks can learn more refined patterns.
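Depth is just how many times the layer step repeats. A sketch with random weights (the sizes and count are arbitrary):

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

rng = np.random.default_rng(0)
layers = [rng.standard_normal((4, 4)) * 0.5 for _ in range(5)]  # 5 hidden steps

x = np.ones(4)
for W in layers:      # depth = how many times this loop runs
    x = relu(W @ x)   # each pass is one more layer of "thought"
```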
Chapter 10 Width and neurons: Finding more features at once
The number of neurons in one layer is the width. Wider layers can handle more features at once.
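Width shows up in code as the number of rows in the weight matrix (the sizes below are arbitrary):

```python
import numpy as np

x = np.ones(3)

W_narrow = np.zeros((4, 3))   # 4 neurons in the layer -> width 4
W_wide = np.zeros((64, 3))    # 64 neurons -> width 64, more features at once

h_narrow = W_narrow @ x       # shape (4,)
h_wide = W_wide @ x           # shape (64,)
```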
Chapter 11 Softmax: Turning results into confidence
The example uses powers of 3 instead of e so the arithmetic is easy: 3^3 = 27, 3^1 = 3, and 3^0 = 1 sum to 31, so the confidence in the top class is 27/31 ≈ 0.87.
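A softmax is just "raise each score to a power, then divide by the total." This sketch keeps the base-3 arithmetic of the figure for easy checking (the scores 3, 1, 0 are illustrative; real softmax uses base e):

```python
import numpy as np

def softmax(scores, base=np.e):
    powered = base ** np.asarray(scores, dtype=float)
    return powered / powered.sum()  # fractions of the total = confidences

p = softmax([3, 1, 0], base=3)  # powers: 27, 3, 1 -> total 31
# p[0] = 27/31, about 0.87
```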
Chapter 12 Gradient and backpropagation: Learning from mistakes
Gradients flow backward through the network, output Y → hidden H → input X, reversing the forward pass.
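A minimal sketch of that backward flow through one neuron with a squared-error loss (all numbers are made up for illustration):

```python
x, w, b = 2.0, 0.5, 0.1
target = 2.0

# forward pass: X -> Y
z = w * x + b                # z = 1.1
y = max(z, 0.0)              # ReLU, so y = 1.1
loss = (y - target) ** 2     # how wrong we were

# backward pass: Y -> X, applying the chain rule step by step
dloss_dy = 2 * (y - target)           # gradient at the output
dy_dz = 1.0 if z > 0 else 0.0         # ReLU passes gradient only where z > 0
dloss_dw = dloss_dy * dy_dz * x       # gradient for the weight
dloss_db = dloss_dy * dy_dz           # gradient for the bias
```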
Summary
The diagram below collects everything from Ch01–Ch12 into one network: input X → hidden layers (A, B, C, D) → output Y, with weights (W), activation (ReLU, etc.), batch, and gradient (∇) shown.
Real training repeats forward pass (compute output) → loss → backward pass (gradients) → update weights. After this course you can follow that flow in the math.
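That loop can be sketched end to end. This is a deliberately tiny model (one linear layer, made-up data) so every step of forward → loss → backward → update is visible:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((1, 2)) * 0.1
b = np.zeros(1)

X = np.array([[1.0, 2.0], [2.0, 1.0]])  # toy batch of inputs
T = np.array([[1.0], [0.0]])            # toy targets
lr = 0.1                                # learning rate

for step in range(200):
    Y = X @ W.T + b                     # forward pass: compute output
    loss = ((Y - T) ** 2).mean()        # loss: how wrong we are
    grad_Y = 2 * (Y - T) / len(X)       # backward pass: gradient at Y...
    grad_W = grad_Y.T @ X               # ...then at the weights
    grad_b = grad_Y.sum(axis=0)
    W -= lr * grad_W                    # update weights
    b -= lr * grad_b
```

After a couple hundred repetitions the loss is essentially zero: the network has learned the toy data, and every line above is something covered in Ch01–Ch12.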