Chapter 00

What is Deep Learning?

What deep learning is and what you learn in Ch01–Ch12 at a glance.

Deep learning diagram by chapter

As you complete each chapter, the diagram below fills in. This is the structure so far.

What you learn in Ch01–Ch12

  • Chapter 01
    Vector Dot Product

The most basic operation: it combines two vectors into a single scalar that reflects their magnitudes and how aligned their directions are.

  • Chapter 02
    Matrix Multiplication

    The product of two matrices is a new matrix whose entries are dot products of rows of the first and columns of the second.

  • Chapter 03
    Linear Layer (Weights and Bias)

    A layer that multiplies the input by a weight matrix and adds a bias vector.

  • Chapter 04
    Activation (Nonlinear)

    A function that makes a neuron's output nonlinear.

  • Chapter 05
    Artificial Neuron (Weighted Sum and Activation)

    A unit that computes a weighted sum of inputs and applies an activation function.

  • Chapter 06
    Batch (Compute All at Once)

    A group of samples processed together in one forward pass.

  • Chapter 07
    Weight Connections

The weighted links between neurons in adjacent layers.

  • Chapter 08
    Hidden Layers (Invisible Layers)

    Layers between the input and output layers.

  • Chapter 09
    Depth (Deep Network)

    Having many hidden layers; the 'deep' in deep learning.

  • Chapter 10
    Width (Number of Neurons per Layer)

    Having many neurons in a single layer.

  • Chapter 11
    Softmax (Turn into Probabilities)

    A function that turns a vector into a probability distribution (values in [0,1], sum 1).

  • Chapter 12
    Gradient (Backpropagation)

    The direction and rate of change of the loss with respect to parameters.
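The concepts in Ch01–11 chain together into a single forward pass. As a rough sketch (assuming NumPy, which this course may or may not use), it could look like this:

```python
import numpy as np

# Ch01: dot product of two vectors -> one scalar
v = np.array([1.0, 2.0])
w = np.array([3.0, 4.0])
print(v @ w)  # 1*3 + 2*4 = 11.0

# Ch02-03: linear layer = matrix multiplication + bias
X = np.array([[1.0, 2.0],
              [3.0, 4.0]])        # Ch06: a batch of 2 samples, processed at once
W = np.array([[0.5, -0.5, 1.0],
              [1.0,  0.5, 0.0]])  # Ch07: weight connections into 3 neurons
b = np.array([0.1, 0.1, 0.1])
Z = X @ W + b                     # each entry is a row-by-column dot product

# Ch04: nonlinear activation (here ReLU: negatives become 0)
A = np.maximum(Z, 0.0)

# Ch11: softmax turns each row into a probability distribution
E = np.exp(A - A.max(axis=1, keepdims=True))  # subtract row max for stability
P = E / E.sum(axis=1, keepdims=True)
print(P.sum(axis=1))  # each row sums to 1
```

Stacking more `X @ W + b` plus activation steps gives hidden layers (Ch08), depth (Ch09), and width (Ch10); the gradient (Ch12) is what adjusts each `W` and `b` during learning.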

What is Deep Learning?

Deep learning is a computer learning patterns from huge numbers of examples on its own. Instead of humans writing every rule, we give it data and it finds "for this input, this output" by itself. The structure used for this is an artificial neural network: small units modeled on brain neurons, stacked in many layers. That stacking is what makes it deep learning.

Why does deep learning matter? Many technologies around us run on it. ChatGPT, Gemini, conversational and text-generating AI, Tesla-style self-driving (recognizing lanes, pedestrians, and traffic signs from cameras), Netflix and YouTube recommendations, translation, and face recognition all share the same idea: turn the input into numbers, repeat multiplication and addition layer by layer, and get a result (classification, prediction, generation). In practice, machine learning and deep learning research is a dominant theme across many industries (IT, healthcare, finance, manufacturing) and in academia.

What if we just use off-the-shelf high-performance models? With vibe coding, you can wire up an existing high-performance model, or even assemble a machine learning or deep learning model, without much background. But to use such a model well, to modify it, or to build on it, you need the foundations covered in these chapters (dot product, matrix multiplication, gradients, and so on). That is why working through the chapters matters.

What one layer does is multiply incoming numbers by weights and add them, then pass the result to the next layer. With many layers, simple information gradually turns into bigger features like "edges," "eyes and nose," "dog vs. cat." Learning is showing correct examples and adjusting the weights little by little to get closer to the right answer. Gradient tells you "what to change and by how much"—you'll see it in Ch12.
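The "adjust the weights little by little" idea can be shown with a deliberately tiny example. This is an illustrative sketch, not the course's code: a single weight learning y = 2x by following its gradient.

```python
# A single "neuron" learning y = 2*x by gradient descent (illustrative sketch).
x, y_true = 3.0, 6.0   # one example: input 3, correct answer 6
w = 0.0                # the weight we adjust little by little
lr = 0.05              # learning rate: how big each adjustment is

for _ in range(100):
    y_pred = w * x                    # multiply input by weight (one "layer")
    loss = (y_pred - y_true) ** 2     # how wrong we are (squared error)
    grad = 2 * (y_pred - y_true) * x  # Ch12: gradient of the loss w.r.t. w
    w -= lr * grad                    # nudge w toward the right answer

print(round(w, 3))  # close to 2.0
```

Each pass through the loop is exactly "what to change and by how much": the gradient gives the direction, the learning rate scales the step.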

How does this course teach it? One layer is just multiply and add, repeated. You learn step by step: Ch01 dot product → Ch02 matrix multiplication → Ch03–05 linear layer, activation, artificial neuron → Ch06–10 batch, connections, hidden layers, depth, width → Ch11–12 softmax and gradient.

Check the roadmap below to see what each chapter covers. If you follow from Ch01, you'll be able to understand the math inside systems like ChatGPT and self-driving cars.