Ch.11
Regularization: Beyond Rote Memorization
Regularization is the key technique that keeps machine learning models from becoming rote memorizers that simply recall answers from the workbook. A model that fits its training data too tightly flounders when faced with slightly different new problems; this is overfitting. Regularization adds a penalty (cost) to the data error so the model is not rewarded for becoming overly complex or contorted. In this way the model prunes unnecessary twigs, learns only the essential patterns, and generalizes well in the real world.
Instead of minimizing data error alone, we add a penalty for model complexity, so the model generalizes instead of memorizing.
Regularization: loss + λ·penalty to reduce overfitting and improve generalization.
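The loss + λ·penalty idea can be sketched in a few lines. This is a minimal illustration, not a full training loop: `regularized_loss` is a hypothetical helper combining mean-squared data error with an L2 (sum-of-squared-weights) penalty, and the weight vectors are made-up examples.

```python
import numpy as np

def regularized_loss(w, X, y, lam):
    """Data error (MSE) plus lam times an L2 penalty on the weights."""
    pred = X @ w
    data_error = np.mean((pred - y) ** 2)
    penalty = np.sum(w ** 2)  # larger weights -> larger penalty
    return data_error + lam * penalty

# Toy data generated by a simple true model: only the first feature matters.
rng = np.random.default_rng(0)
X = rng.normal(size=(20, 3))
y = X @ np.array([1.0, 0.0, 0.0])

w_simple = np.array([1.0, 0.0, 0.0])   # essential pattern only
w_complex = np.array([1.0, 5.0, -5.0]) # needlessly large extra weights

# With lam > 0, the simpler model scores a lower total objective.
lam = 0.1
print(regularized_loss(w_simple, X, y, lam) < regularized_loss(w_complex, X, y, lam))  # True
```

Increasing λ pushes the optimum toward smaller weights (a simpler model); setting λ = 0 recovers plain loss minimization, where nothing discourages memorization.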