Ch.01
Vectors and Vector Space: Magnitude and Direction Beyond Scalars
[Chapter diagram: a vector as direction + length; the baseline vector $\mathbf{u}$ and its scalar multiple $k\mathbf{u}$ point in the same direction, with the length multiplied by $k$.]
A vector is both a bundle of numbers and an object that encodes magnitude and direction at once. In machine learning each sample becomes a feature vector; in deep learning embeddings and weights are vectors. This chapter builds the shared language of vectors in $\mathbb{R}^n$ and prepares you for Ch.02 Dot Product.
Vectors and Vector Space: Magnitude and Direction Together
What is a vector? An ordered list of numbers and, geometrically, an arrow with magnitude and direction. When a function has several real inputs, packing them into one vector keeps notation clean.
Navigation apps say “3 km east, 4 km north”—direction and distance together. On the plane that is one arrow—a 2D vector. In components $\mathbf{v} = (3, 4)$; its length is $\sqrt{3^2 + 4^2} = 5$.
More formally, $\mathbb{R}^n$ consists of real vectors with $n$ components. Addition is componentwise; scalar multiplication multiplies each component by a real number. The zero vector $\mathbf{0}$ has all zeros. The Euclidean norm is $\lVert \mathbf{v} \rVert = \sqrt{v_1^2 + \cdots + v_n^2}$; exercises often use $\lVert \mathbf{v} \rVert^2$ so the answer stays an integer.
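The componentwise operations and the squared norm above can be sketched in a few lines of plain Python (a minimal illustration; the function names are ours, not the text's):

```python
# Vectors as Python lists of numbers; operations are componentwise.

def vec_add(u, v):
    """Componentwise sum; defined only when both vectors have the same n."""
    assert len(u) == len(v), "vectors must live in the same R^n"
    return [ui + vi for ui, vi in zip(u, v)]

def scalar_mul(k, v):
    """Multiply every component by the real number k."""
    return [k * vi for vi in v]

def sq_norm(v):
    """Squared Euclidean norm v1^2 + ... + vn^2 (an integer for integer inputs)."""
    return sum(vi * vi for vi in v)

print(vec_add([3, 4], [1, -2]))  # -> [4, 2]
print(scalar_mul(2, [3, 4]))     # -> [6, 8]
print(sq_norm([3, 4]))           # -> 25
```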
In supervised learning, features form a vector $\mathbf{x} = (x_1, \ldots, x_n)$ and a linear model uses a weight vector $\mathbf{w}$. Deep networks stack dot products and matrices; this chapter is the first step. In Ch.10 Hessian you will read second derivatives (curvature) on the same vector space.
In sum, vectors unify the geometric (direction, magnitude) and algebraic (components) views; $\mathbb{R}^n$ is the space of all $n$-dimensional real vectors. Addition and scalar multiplication are componentwise; inner products, matrices, and derivatives build on this. Ch.02 turns “how similar” into a number.
The calculus notions of “functions and continuity” carry over as the habit of packing many inputs into one vector. ML features, distances, and classification—and DL dot products and matrix multiplies—all rest on vector language.
“Add only in the same dimension”; “scalar multiply hits every component the same way”—that is vector space structure. Mastering it reduces confusion later for independence, basis, rank, and eigenvalues.
Feature vector: one table row (height, weight, …) as $\mathbf{x} \in \mathbb{R}^n$; preprocessing, normalization, and distances are vector operations. kNN / clustering often use norms of differences $\lVert \mathbf{x} - \mathbf{y} \rVert$.
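As a sketch of that idea, the distance kNN compares between two feature rows is just a norm of a difference (the feature values below are invented for illustration):

```python
# Squared Euclidean distance between two feature vectors.
# Sample values (height cm, weight kg) are made up for this example.

def sq_distance(x, y):
    return sum((xi - yi) ** 2 for xi, yi in zip(x, y))

alice = [170.0, 65.0]
bob   = [173.0, 61.0]
print(sq_distance(alice, bob))  # -> 25.0 (3^2 + 4^2)
```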
Deep learning: a neuron computes the dot product of input and weight vectors (next chapter), adds a bias, and applies an activation. Embeddings are vectors in a “meaning space.” Vector = the minimal bundle of numbers AI reads.
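The neuron described above can be sketched directly; ReLU is one common activation choice, and the weights and bias here are invented for illustration:

```python
def dot(u, v):
    """Dot product: multiply matching components and add."""
    return sum(ui * vi for ui, vi in zip(u, v))

def relu(z):
    """One common activation; the chapter does not fix a particular one."""
    return max(0.0, z)

x = [1.0, 2.0]   # input feature vector (made up)
w = [0.5, -1.0]  # weight vector (made up)
b = 2.0          # bias
print(relu(dot(w, x) + b))  # relu(-1.5 + 2.0) -> 0.5
```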
The table summarizes formulas and symbols; the item-by-item notes below explain each definition. Worked examples walk through all 10 problem types once.
| Formula | Meaning |
|---|---|
| $\mathbf{v} = (v_1, \ldots, v_n)$ | Vector; $v_i$ = $i$-th component. |
| $\mathbb{R}^n$ | $n$-dimensional real vector space (all real $n$-tuples). |
| $\lVert \mathbf{v} \rVert^2 = v_1^2 + \cdots + v_n^2$ | Squared Euclidean norm (integer in exercises). |
| $\mathbf{u} \cdot \mathbf{v} = u_1 v_1 + \cdots + u_n v_n$ | Dot product (next chapter in depth). |
| $\mathbf{u} + \mathbf{v} = (u_1 + v_1, \ldots, u_n + v_n)$ | Componentwise sum. |
| $k\mathbf{v} = (k v_1, \ldots, k v_n)$ | Scalar multiple: multiply each component by $k$. |
| $\dim \mathbb{R}^n = n$ | Dimension = $n$. |
| $u_1 v_2 - u_2 v_1$ (2D) | Signed parallelogram area; $= 0$ ⟺ parallel. |
Notes on each row
① The list is ordered: $v_i$ is “the number in slot $i$.” Permuting entries gives a different vector. In the plane we often write $(x, y)$ for the coordinates.
② The set of all vectors with exactly $n$ real components. Addition and scalar multiplication never change the number of components, so results stay inside the same space (closed under addition and under multiplication by a scalar).
③ Square each component, then add. It equals the square of the Euclidean length $\lVert \mathbf{v} \rVert$, so it behaves like a squared distance along the axes. Our drills often ask for the square only so the answer stays an integer.
④ Multiply matching indices and add; in 2D that is $u_1 v_1 + u_2 v_2$. The result is always a scalar (one number). When it is $0$, nonzero vectors are orthogonal; the next chapter links this to angles and projections.
⑤ Defined only when dimensions match (same $n$ for both). Rule: $(\mathbf{u} + \mathbf{v})_i = u_i + v_i$. Think of subtraction as $\mathbf{u} + (-1)\mathbf{v}$.
⑥ Multiply every component by $k$. If $k < 0$, the direction flips; if $k \neq 0$, the vector stays on the same line through the origin but its length scales by $|k|$. If $k = 0$, you get the zero vector $\mathbf{0}$.
⑦ Intuitively, the number of independent directions that span the space is $n$; the standard basis $\mathbf{e}_1, \ldots, \mathbf{e}_n$ has exactly $n$ vectors.
⑧ (2D) $u_1 v_2 - u_2 v_1$ is the signed area of the parallelogram built from the two vectors from the origin (positive for counterclockwise order). If one vector is a scalar multiple of the other, they lie on one line, the area is $0$, and the expression is $0$.
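Note ⑧ can be checked with a tiny sketch (the helper name is ours, not from the text):

```python
def cross2d(u, v):
    """Signed parallelogram area in 2D: u1*v2 - u2*v1."""
    return u[0] * v[1] - u[1] * v[0]

print(cross2d([1, 2], [3, 6]))  # -> 0: (3, 6) = 3*(1, 2), so the vectors are parallel
print(cross2d([1, 0], [0, 1]))  # -> 1: unit square, counterclockwise order
```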
Worked examples
Example 1 — Definition true/false
Problem: Enter 1 if true, 0 if false: “The Euclidean norm can be negative.”
Solution: Norms satisfy $\lVert \mathbf{v} \rVert \ge 0$ → the statement is false → 0.
Example 2 — Multiple choice (option number)
Problem: What is the dimension of $\mathbb{R}^5$?
① 4
② 5
③ 6
Solution: $\dim \mathbb{R}^5 = 5$ → dimension 5 → option ② → enter 2.
Example 3 — Squared norm in $\mathbb{R}^2$
Problem: For $\mathbf{v} = (3, 4)$, find $\lVert \mathbf{v} \rVert^2$.
Solution: $3^2 + 4^2 = 9 + 16 = 25$ → 25.
Example 4 — Dot product
Problem: $\mathbf{u} = (1, 2)$, $\mathbf{v} = (3, -1)$. Find $\mathbf{u} \cdot \mathbf{v}$.
Solution: $1 \cdot 3 + 2 \cdot (-1) = 3 - 2 = 1$ → 1.
Example 5 — Component of a sum
Problem: $\mathbf{u} = (1, 2)$, $\mathbf{v} = (2, 5)$. Find $(\mathbf{u} + \mathbf{v})_1$.
Solution: Add the first components: $1 + 2 = 3$ → 3.
Example 6 — Component of a scalar multiple
Problem: $k = 4$, $\mathbf{v} = (2, 3)$. Find $(k\mathbf{v})_1$.
Solution: $4 \cdot 2 = 8$ → 8.
Example 7 — Dimension of $\mathbb{R}^4$
Problem: Dimension of $\mathbb{R}^4$?
Solution: $\dim \mathbb{R}^4 = 4$ → 4.
Example 8 — Number of components
Problem: How many components does a vector in $\mathbb{R}^6$ have?
Solution: 6 components.
Example 9 — $u_1 v_2 - u_2 v_1$ in 2D
Problem: $\mathbf{u} = (1, 2)$, $\mathbf{v} = (2, 2)$. Find $u_1 v_2 - u_2 v_1$.
Solution: $1 \cdot 2 - 2 \cdot 2 = -2$ → -2 (a leading “−” is shown for negative answers).
Example 10 — Difference of squared norms
Problem: $\mathbf{u} = (2, 1)$, $\mathbf{v} = (1, 0)$. Find $\lVert \mathbf{u} \rVert^2 - \lVert \mathbf{v} \rVert^2$.
Solution: $\lVert \mathbf{u} \rVert^2 = 5$, $\lVert \mathbf{v} \rVert^2 = 1$ → difference 4.
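The arithmetic patterns in these examples can be rechecked in code; the vectors below are illustrative choices consistent with the stated answers, not necessarily the originals:

```python
def dot(u, v):
    """Dot product: multiply matching components and add."""
    return sum(ui * vi for ui, vi in zip(u, v))

def sq_norm(v):
    """Squared Euclidean norm as a dot product of v with itself."""
    return dot(v, v)

assert sq_norm([3, 4]) == 25              # squared-norm pattern (Example 3)
assert dot([1, 2], [3, -1]) == 1          # dot-product pattern (Example 4)
u, v = [1, 2], [2, 2]
assert u[0] * v[1] - u[1] * v[0] == -2    # u1*v2 - u2*v1 pattern (Example 9)
print("all checks pass")
```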