Ch.01

Vectors and Vector Space: Magnitude and Direction Beyond Scalars

[Diagram: "Vector = direction + length" shows u, v, and the sum u + v on the x-y plane; "Same direction, length × k" shows k·u alongside the baseline u, on the same line through the origin.]
A vector is both a bundle of numbers and an object that encodes magnitude and direction at once. In machine learning each sample becomes a feature vector x; in deep learning embeddings and weights are vectors. This chapter builds the shared language of vectors in R^n and prepares you for Ch.02 Dot Product.

Vectors and Vector Space: Magnitude and Direction Together

What is a vector? An ordered list v = (v_1, …, v_n) and, geometrically, an arrow with magnitude and direction. When a function has several real inputs, packing them into one vector keeps notation clean.
Navigation apps say "3 km east, 4 km north": direction and distance together. On the plane that is one arrow, a 2D vector. In components, (3, 4); its length is √(3² + 4²) = 5.
More formally, R^n consists of real vectors with n components. Addition is componentwise; scalar multiplication multiplies each component by a real number. The zero vector 0 has all components zero. The Euclidean norm is ‖v‖ = √(Σ_i v_i²); exercises often use the squared norm ‖v‖² = Σ_i v_i², which stays an integer for integer components.
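The componentwise rules above can be sketched in a few lines of plain Python, so each definition is explicit (no libraries; the function names are illustrative, not standard):

```python
# Minimal sketch of the R^n operations defined above, using plain lists.
import math

def add(u, v):
    # Defined only when dimensions match: (u1 + v1, ..., un + vn)
    assert len(u) == len(v), "vectors must live in the same R^n"
    return [ui + vi for ui, vi in zip(u, v)]

def scale(k, v):
    # Scalar multiplication hits every component the same way
    return [k * vi for vi in v]

def norm_sq(v):
    # Squared Euclidean norm: stays an integer for integer components
    return sum(vi * vi for vi in v)

def norm(v):
    # Euclidean norm: square root of the squared norm
    return math.sqrt(norm_sq(v))

print(norm_sq([3, 4]))        # 25
print(norm([3, 4]))           # 5.0
print(add([2, 5], [1, -3]))   # [3, 2]
print(scale(4, [2, 3]))       # [8, 12]
```

Note that `add` refuses mismatched dimensions, mirroring the rule that u + v is only defined inside one R^n.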
In supervised learning, features are x ∈ R^d and linear weights are w ∈ R^d. Deep networks stack dot products and matrices; this chapter is the first step. In Ch.10 Hessian you will read second derivatives (curvature) on the same vector space.
In sum, vectors unify the geometric view (direction, magnitude) and the algebraic view (components); R^n is the space of all n-dimensional real vectors. Addition and scalar multiplication are componentwise; inner products, matrices, and derivatives build on this. Ch.02 turns "how similar" into a number.
Calculus's "functions and continuity" becomes, here, the habit of packing many inputs into one vector. ML features, distances, and classification, and DL dot products and matrix multiplies, all rest on the language of vectors.
"Add only within the same dimension"; "scalar multiplication hits every component the same way": that is vector-space structure. Mastering it reduces confusion later with independence, basis, rank, and eigenvalues.
Feature vector: one table row (height, weight, …) becomes x; preprocessing, normalization, and distance computations are vector operations. kNN and clustering often use norms of differences.
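As an illustration of "norms of differences," here is a minimal Euclidean distance between two feature rows; the (height, weight) values are made up for the example:

```python
# Distance between feature vectors as the norm of their difference:
# ||x - y|| = sqrt(sum_i (x_i - y_i)^2), the quantity kNN and clustering use.
import math

def distance(x, y):
    return math.sqrt(sum((xi - yi) ** 2 for xi, yi in zip(x, y)))

a = [170.0, 65.0]  # one (height, weight) row; illustrative values
b = [173.0, 69.0]  # another row
print(distance(a, b))  # 5.0 (the difference (3, 4) has length 5)
```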
Deep learning: a neuron computes the dot product of its input and weight vectors (next chapter), adds a bias, then applies an activation. Embeddings are vectors in a "meaning space." A vector is the minimal bundle of numbers that AI reads.
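That neuron computation can be sketched directly; ReLU is used here as the activation, which is an assumed choice for the example rather than something fixed by this chapter:

```python
# One neuron: activation(dot(input, weights) + bias).
def dot(u, v):
    # Multiply matching components and add; the result is a scalar
    return sum(ui * vi for ui, vi in zip(u, v))

def relu(z):
    # ReLU activation: pass positives through, clamp negatives to 0
    return max(0.0, z)

x = [1.0, 2.0]   # input vector
w = [3.0, -1.0]  # weight vector
b = 0.5          # bias
print(relu(dot(x, w) + b))  # 1.5
```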
The table summarizes formulas and symbols; the item-by-item notes below explain each definition. Worked examples walk through all 10 problem types once.
Formula | Meaning
v = (v_1, …, v_n) | v = vector; v_i = i-th component.
R^n | n-dimensional real vector space (all real n-tuples).
‖v‖² = Σ_i v_i² | Squared Euclidean norm (integer in exercises).
u·v = Σ_i u_i v_i | Dot product (next chapter in depth).
u + v | Componentwise sum.
k·v | Scalar multiple: multiply each component by k.
dim(R^n) = n | Dimension = n.
u_x v_y − u_y v_x (2D) | Signed parallelogram area; 0 ⟺ parallel.
Notes on each row
① v = (v_1, …, v_n). The list is ordered: v_i is "the number in slot i." Permuting entries gives a different vector. In the plane we often write (v_x, v_y) for the x, y coordinates.
② R^n. The set of all vectors with exactly n real components. Addition and scalar multiplication never change the number of components, so results stay inside the same space (closed under + and scalar multiplication).
③ ‖v‖² = Σ_i v_i². Square each component, then add. It equals the square of the Euclidean length ‖v‖ = √(Σ_i v_i²), so it behaves like a squared distance along the axes. Our drills often ask for the square only so the answer stays an integer.
④ u·v = Σ_i u_i v_i. Multiply matching indices and add; in 2D that is u_x v_x + u_y v_y. The result is always a scalar (one number). When it is 0 (and both vectors are nonzero), the vectors are orthogonal; the next chapter links this to angles and projections.
⑤ u + v. Defined only when dimensions match (the same n for R^n). Rule: (u_1 + v_1, …, u_n + v_n). Think of subtraction as u − v = u + (−1)v.
⑥ k·v. Multiply every component by k. The vector stays on the same line through the origin and its length scales by |k|; if k < 0, the direction flips. If k = 0, you get the zero vector 0.
⑦ dim(R^n) = n. Intuitively, the number of independent directions that span the space is n; the standard basis e_1, …, e_n has exactly n vectors.
⑧ u_x v_y − u_y v_x (2D). The signed area of the parallelogram built from the two vectors at the origin (positive for counterclockwise order). If one vector is a scalar multiple of the other, they lie on one line, the area is 0, and the expression is 0.
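The 2D expression in ⑧ and its parallel test can be checked directly; a minimal sketch:

```python
# Signed parallelogram area in 2D: u_x*v_y - u_y*v_x.
# It is 0 exactly when one vector is a scalar multiple of the other.
def cross2(u, v):
    return u[0] * v[1] - u[1] * v[0]

print(cross2([1, 2], [3, 4]))  # -2 (clockwise order gives a negative sign)
print(cross2([1, 2], [2, 4]))  # 0  (parallel: v = 2u)
```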

Worked examples

Example 1 — Definition true/false
Problem: Enter 1 if true, 0 if false: "The Euclidean norm ‖v‖ can be negative."
Solution: Norms satisfy ‖v‖ ≥ 0, so the statement is false → 0.

Example 2 — Multiple choice (option number)
Problem: What is the dimension of R^5?
① 4
② 5
③ 6
Solution: dim(R^n) = n, so the dimension is 5 → option ② → enter 2.

Example 3 — Squared norm in R²
Problem: For v = (3, 4), find ‖v‖².
Solution: v_x² + v_y² = 9 + 16 = 25 → 25.

Example 4 — Dot product
Problem: u = (1, 2), v = (3, −1). Find u·v.
Solution: 1·3 + 2·(−1) = 1 → 1.

Example 5 — Component of a sum
Problem: u = (2, 5), v = (1, −3). Find (u + v)_x.
Solution: Add the x-components: 2 + 1 = 3 → 3.

Example 6 — Component of a scalar multiple
Problem: u = (2, 3), k = 4. Find (4u)_x.
Solution: (ku)_x = k·u_x = 4·2 = 8 → 8.

Example 7 — Dimension of R^n
Problem: What is the dimension of R^4?
Solution: dim(R^4) = 4 → 4.

Example 8 — Number of components
Problem: How many components does a vector in R^6 have?
Solution: 6 components → 6.

Example 9 — u_x v_y − u_y v_x in 2D
Problem: u = (1, 2), v = (3, 4). Find u_x v_y − u_y v_x.
Solution: 1·4 − 2·3 = −2 → -2 (a leading "−" is entered for negative answers).

Example 10 — ‖u‖² − ‖v‖²
Problem: u = (2, 1), v = (1, 0). Find ‖u‖² − ‖v‖².
Solution: ‖u‖² = 5, ‖v‖² = 1 → difference 4.
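The numeric examples above can be verified with a short check script in plain Python, reusing the definitions from this chapter:

```python
# Check the arithmetic in the worked examples.
def dot(u, v):
    # Dot product: multiply matching components and add (Example 4)
    return sum(a * b for a, b in zip(u, v))

def norm_sq(v):
    # Squared norm as the dot product of a vector with itself (Example 3)
    return dot(v, v)

assert norm_sq([3, 4]) == 25                    # Example 3
assert dot([1, 2], [3, -1]) == 1                # Example 4
assert 2 + 1 == 3                               # Example 5: x-component of the sum
assert 4 * 2 == 8                               # Example 6: x-component of 4u
assert 1 * 4 - 2 * 3 == -2                      # Example 9: signed area
assert norm_sq([2, 1]) - norm_sq([1, 0]) == 4   # Example 10
print("all examples check out")
```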

Problems

Read the instructions below, find the answer (an integer), and enter it in the blank (?).
For v = (1, 4), what is ‖v‖² (integer)?