Interactive visualizations and explorations for linear algebra: from first principles to matrix decompositions
Welcome to your interactive journey into linear algebra! Let's start with the absolute basics and build up from there.
A matrix is just a rectangular grid of numbers. Think of it like a table or spreadsheet:
[ 1  2 ]
[ 3  4 ]
This is a 2×2 matrix (2 rows, 2 columns).
We read it as "2 by 2".
Matrix A = [ a  b ]
           [ c  d ]
Entry notation:
• A₁₁ = a (row 1, col 1)
• A₁₂ = b (row 1, col 2)
• A₂₁ = c (row 2, col 1)
• A₂₂ = d (row 2, col 2)
Here's the magic: Matrices transform vectors into new vectors. Imagine a matrix as a machine:
The matrix "transforms" the input into something new!
Let's see how a matrix transforms a vector (point):
Matrix [ 2  0 ]  ×  Input  [ 3 ]
       [ 0  3 ]     Vector [ 2 ]

Result = (2×3 + 0×2, 0×3 + 3×2) = [ 6 ]  ← Output Vector
                                  [ 6 ]
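If you want to check this kind of calculation on a computer, here is a minimal sketch using Python with NumPy (the tool choice is ours, not something this page depends on):

    import numpy as np

    # The matrix scales x by 2 and y by 3, so [3, 2] should become [6, 6].
    A = np.array([[2, 0],
                  [0, 3]])
    v = np.array([3, 2])

    print(A @ v)   # [6 6]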
Rotate, scale, and move 3D game objects. Every rotation in a video game uses matrices!
Instagram filters? Matrices! They transform pixel colors to create effects like blur, sharpen, and edge detection.
Neural networks are built from matrices. Every AI model you use (ChatGPT, image recognition) runs on matrix math!
Organize and analyze large datasets. Matrices help find patterns in millions of data points (like Netflix recommendations).
Model electrical circuits, structural forces, and quantum mechanics. Engineers use matrices to predict how systems behave.
Encrypt secret messages! Some encryption methods multiply your message by a secret matrix to scramble it.
Good news: Adding matrices is like adding regular numbers; just do it element-by-element!
[ 1  2 ]   [ 5  6 ]   [  6   8 ]
[ 3  4 ] + [ 7  8 ] = [ 10  12 ]
Each element adds: (1+5=6, 2+6=8, 3+7=10, 4+8=12)
Warning: Matrix multiplication is NOT element-by-element! It's more like "mix and combine".
Row 1 of A = [1, 2] Column 1 of B = [5, 7] Result = (1×5) + (2×7) = 5 + 14 = 19
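Here is the same pair of operations in NumPy, using the matrices A and B from above, so you can see the element-by-element sum next to the row-times-column product (again, NumPy is just our illustration tool):

    import numpy as np

    A = np.array([[1, 2],
                  [3, 4]])
    B = np.array([[5, 6],
                  [7, 8]])

    print(A + B)   # element-by-element:  [[ 6  8] [10 12]]
    print(A @ B)   # row-times-column:    [[19 22] [43 50]]
    # Top-left entry of A @ B: (1×5) + (2×7) = 19, matching the calculation above.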
The transpose of a matrix flips rows into columns (and columns into rows).
A = [ 1  2  3 ]        Aᵀ = [ 1  4 ]
    [ 4  5  6 ]             [ 2  5 ]
                            [ 3  6 ]
Notice: Rows became columns! The first row [1,2,3] became the first column.
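In NumPy the transpose is written A.T, so you can replay this example in a couple of lines (a small sketch, nothing more):

    import numpy as np

    A = np.array([[1, 2, 3],
                  [4, 5, 6]])

    print(A.T)     # [[1 4] [2 5] [3 6]]   (rows became columns)
    print(A.T.T)   # transposing twice gives back the original matrix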
The determinant tells you whether a matrix is invertible and measures how much the transformation scales area. For 2×2 matrices:
det [ a  b ] = ad - bc
    [ c  d ]
Matrix: [ 2  3 ]
        [ 1  4 ]

det = (2×4) - (3×1) = 8 - 3 = 5
Find det(A) for:
A = [ 3  -2 ]
    [ 1   4 ]
det(A) = (3×4) - (-2×1) = 12 - (-2) = 12 + 2 = 14
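If you would like to double-check both determinants numerically, NumPy's np.linalg.det does it directly (expect tiny floating-point rounding, e.g. 5.000000000000001):

    import numpy as np

    print(np.linalg.det(np.array([[2, 3], [1, 4]])))    # ≈ 5
    print(np.linalg.det(np.array([[3, -2], [1, 4]])))   # ≈ 14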
Now try these operations yourself! Load presets or enter your own matrices.
Just like numbers have special cases (0, 1, π), matrices have "celebrities": special types that show up everywhere!
What is it? Ones on the diagonal, zeros everywhere else.
I = [ 1  0  0 ]
    [ 0  1  0 ]
    [ 0  0  1 ]

A × I = A (just like 5 × 1 = 5)
What is it? Rotates vectors by angle θ without stretching.
[ cos(θ)  -sin(θ) ]
[ sin(θ)   cos(θ) ]
cos(45°) ≈ 0.707, sin(45°) ≈ 0.707
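As a quick numerical sketch (our own, using NumPy), you can build the 45° rotation matrix and confirm that it turns a vector without changing its length:

    import numpy as np

    theta = np.radians(45)
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])

    v = np.array([1.0, 0.0])
    print(R @ v)                                     # ≈ [0.707 0.707]
    print(np.linalg.norm(v), np.linalg.norm(R @ v))  # both 1.0: length is preserved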
What is it? Only diagonal entries are non-zero.
[ 2  0  0 ]
[ 0  5  0 ]
[ 0  0  3 ]

(x, y, z) → (2x, 5y, 3z)
What is it? Equals its own transpose: A = Aᵀ (mirror across diagonal).
[ 4  1  2 ]
[ 1  3  5 ]   ← Notice the symmetry across the diagonal!
[ 2  5  6 ]
What is it? Preserves lengths and angles. QᵀQ = I
[ 0.8  -0.6   0 ]
[ 0.6   0.8   0 ]
[ 0     0     1 ]
What is it? All entries are zero (the "0" of matrices).
[ 0  0  0 ]
[ 0  0  0 ]

A × 0 = 0
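Before playing with the presets, here is a short NumPy sketch that checks several of these defining properties at once (the specific symmetric and orthogonal matrices are the examples shown above):

    import numpy as np

    I = np.eye(3)                            # identity matrix
    A = np.array([[4., 1., 2.],
                  [1., 3., 5.],
                  [2., 5., 6.]])             # the symmetric example above
    Q = np.array([[0.8, -0.6, 0.],
                  [0.6,  0.8, 0.],
                  [0.,   0.,  1.]])          # the orthogonal example above

    print(np.allclose(A @ I, A))                   # True: A × I = A
    print(np.allclose(A, A.T))                     # True: symmetric means A = Aᵀ
    print(np.allclose(Q.T @ Q, np.eye(3)))         # True: orthogonal means QᵀQ = I
    print(np.allclose(A @ np.zeros((3, 3)), 0))    # True: A × 0 = 0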
Use the presets below to explore each special matrix type.
A 2×2 matrix takes every point (x, y) in the plane and moves it to a new location. Think of it like molding clay!
Matrix times vector equals new vector
[ a  b ] [ x ]   [ ax + by ]
[ c  d ] [ y ] = [ cx + dy ]
The matrix "mixes" x and y coordinates to create the new position!
Rotation:    [ cos(θ)  -sin(θ) ]
             [ sin(θ)   cos(θ) ]

Scaling:     [ sx   0 ]
             [ 0   sy ]

Shear:       [ 1  k ]   ← Horizontal shear
             [ 0  1 ]

Reflection:  [ 1   0 ]
             [ 0  -1 ]  ← Flip y-coordinate
Try transforming the unit square! Enter matrix values or use presets.
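If you prefer to experiment outside the interactive widget, here is a rough NumPy sketch that pushes the four corners of the unit square through each preset (the shear amount and scale factors below are just example values we picked):

    import numpy as np

    corners = np.array([[0, 1, 1, 0],    # x-coordinates of (0,0), (1,0), (1,1), (0,1)
                        [0, 0, 1, 1]])   # y-coordinates

    theta = np.radians(45)
    presets = {
        "rotation":   np.array([[np.cos(theta), -np.sin(theta)],
                                [np.sin(theta),  np.cos(theta)]]),
        "scaling":    np.array([[2.0, 0.0], [0.0, 0.5]]),   # sx = 2, sy = 0.5
        "shear":      np.array([[1.0, 0.5], [0.0, 1.0]]),   # k = 0.5
        "reflection": np.array([[1.0, 0.0], [0.0, -1.0]]),
    }

    for name, M in presets.items():
        print(name)
        print(M @ corners)   # each column is where one corner lands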
Remember solving systems by hand? Painful! Matrices make it systematic and scalable to 100s of variables.
2x + 3y = 8
 x -  y = 1
[ 2   3 ] [ x ]   [ 8 ]
[ 1  -1 ] [ y ] = [ 1 ]
    A       x       b
A = coefficients, x = unknowns, b = right-hand sides
If A is invertible (det ≠ 0), the solution is: x = A⁻¹b
Step 1: Find det(A) = (2)(-1) - (3)(1) = -2 - 3 = -5 ✓ (non-zero!)
Step 2: Find A⁻¹ (use the formula for 2×2):

A⁻¹ = (1/det) [  d  -b ] = (1/-5) [ -1  -3 ] = [ 0.2   0.6 ]
              [ -c   a ]          [ -1   2 ]   [ 0.2  -0.4 ]
Step 3: Multiply x = A⁻¹b:

[ x ]   [ 0.2   0.6 ] [ 8 ]   [ (0.2×8) + (0.6×1)  ]   [ 2.2 ]
[ y ] = [ 0.2  -0.4 ] [ 1 ] = [ (0.2×8) + (-0.4×1) ] = [ 1.2 ]
Answer: x = 2.2, y = 1.2
✓ Check: 2(2.2) + 3(1.2) = 4.4 + 3.6 = 8.0 ✓ | 2.2 - 1.2 = 1.0 ✓
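In practice you would let the computer do steps 1-3. A minimal NumPy version of the same system is below; np.linalg.solve gives the same answer as x = A⁻¹b without explicitly forming the inverse:

    import numpy as np

    A = np.array([[2.0,  3.0],
                  [1.0, -1.0]])
    b = np.array([8.0, 1.0])

    x = np.linalg.solve(A, b)
    print(x)       # [2.2 1.2]
    print(A @ x)   # [8. 1.]  (reproduces the right-hand side, so the check passes)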
When: Matrix A is invertible (full rank)
Means: Exactly ONE solution exists
Solve: x = A⁻¹b
Example: det = 2 → Unique solution!
[ 2  3 ]
[ 0  1 ]
When: Equations are dependent (one is a multiple of another)
Means: Infinite solutions (a line/plane of solutions)
Example: 2x + 4y = 6 and x + 2y = 3 (same line!)
Matrix: det = 0 → Infinite solutions
[ 2  4 ]
[ 1  2 ]
When: Equations are contradictory
Means: No solution exists (parallel lines never meet!)
Example: 2x + 4y = 6 and 2x + 4y = 10 (impossible!)
Same matrix but different b → No solution
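One systematic way to tell the three cases apart is the rank test: compare the rank of A with the rank of the augmented matrix [A | b]. Here is a small sketch of that test (the classify function is our own helper, not part of any library):

    import numpy as np

    def classify(A, b):
        aug = np.column_stack([A, b])              # the augmented matrix [A | b]
        rank_A = np.linalg.matrix_rank(A)
        rank_aug = np.linalg.matrix_rank(aug)
        if rank_A < rank_aug:
            return "no solution"
        if rank_A == A.shape[1]:
            return "exactly one solution"
        return "infinitely many solutions"

    A = np.array([[2.0, 4.0],
                  [1.0, 2.0]])
    print(classify(A, np.array([6.0, 3.0])))    # infinitely many solutions
    print(classify(A, np.array([6.0, 10.0])))   # no solution (same A, different b)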
Solve your own linear systems! Enter the number of equations and let the matrix solver work its magic.
Write this system as Ax = b:
3x + 2y = 7
 x - 4y = -5
A = [ 3   2 ]     x = [ x ]     b = [  7 ]
    [ 1  -4 ]         [ y ]         [ -5 ]
Solve using matrix inverse:
[ 1  2 ] [ x ]   [  5 ]
[ 3  4 ] [ y ] = [ 11 ]
det(A) = (1×4) - (2×3) = 4 - 6 = -2 ✓ (non-zero)
A⁻¹ = (1/-2) [  4  -2 ] = [ -2     1   ]
             [ -3   1 ]   [ 1.5  -0.5  ]
x = A⁻¹b = [ -2     1   ] [  5 ]   [ (-2×5) + (1×11)     ]   [ 1 ]
           [ 1.5  -0.5  ] [ 11 ] = [ (1.5×5) + (-0.5×11) ] = [ 2 ]
Answer: x = 1, y = 2
How many solutions does each system have?
System A: [ 2  6 ]      System B: [ 1  2 ]
          [ 1  3 ]                [ 3  5 ]
System A: det = (2×3) - (6×1) = 6 - 6 = 0 → Either infinite solutions or no solution (need to check consistency)
System B: det = (1×5) - (2×3) = 5 - 6 = -1 ≠ 0 → Unique solution exists! ✓
Most matrices rotate AND stretch vectors. But there are special directions where the matrix ONLY stretches!
"Matrix A just scales vector v by ฮป โ no rotation!"
Matrix A = [ 3  1 ]     Candidate vector v = [ 1 ]     Eigenvalue λ = 4
           [ 0  4 ]                          [ 0 ]
Let's verify Av = λv:
Av = [ 3  1 ] [ 1 ]   [ 3×1 + 1×0 ]   [ 3 ]
     [ 0  4 ] [ 0 ] = [ 0×1 + 4×0 ] = [ 0 ]
↑ This should equal 4v
λv = 4 × [ 1 ]   [ 4 ]
         [ 0 ] = [ 0 ]
✗ They DON'T match! So v = [1, 0] is not an eigenvector for λ = 4 (in fact Av = 3v, so its eigenvalue is 3, not 4).
Try v = [ 1 ]
        [ 1 ]
Av = [ 3  1 ] [ 1 ]   [ 4 ]              [ 4 ]
     [ 0  4 ] [ 1 ] = [ 4 ]   and   4v = [ 4 ]   ✓ Match! This IS an eigenvector!
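NumPy can check both candidates for us, and np.linalg.eig finds all the eigenvalues and eigenvectors at once (a sketch using our tool of choice):

    import numpy as np

    A = np.array([[3.0, 1.0],
                  [0.0, 4.0]])

    print(A @ np.array([1.0, 0.0]))   # [3. 0.] = 3 × [1, 0]  (eigenvalue 3, not 4)
    print(A @ np.array([1.0, 1.0]))   # [4. 4.] = 4 × [1, 1]  (eigenvalue 4)

    values, vectors = np.linalg.eig(A)
    print(values)                     # the eigenvalues: 3 and 4
    print(vectors)                    # columns are unit-length eigenvectors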
Step 1: Set up the characteristic equation: det(A - λI) = 0
Why? We need Av = λv, which rearranges to (A - λI)v = 0. This only has non-trivial solutions when det(A - λI) = 0.
Step 2: Calculate the determinant (gives a polynomial)
Example: A = [ 4  2 ]
             [ 1  3 ]
A - λI = [ 4-λ    2  ]
         [  1   3-λ  ]
det(A - λI) = (4-λ)(3-λ) - (2)(1)
            = 12 - 4λ - 3λ + λ² - 2
            = λ² - 7λ + 10
Step 3: Solve for λ (find roots)
λ² - 7λ + 10 = 0
(λ - 5)(λ - 2) = 0
Eigenvalues: λ₁ = 5, λ₂ = 2

Step 4: Find eigenvectors (plug λ back in)
For λ = 5: (A - 5I)v = [ -1   2 ] v = 0   →   Eigenvector v₁ = [ 2 ]
                       [  1  -2 ]                              [ 1 ]
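You can confirm the hand calculation with np.linalg.eig (a quick sketch; note that NumPy returns eigenvectors scaled to length 1, so [2, 1] shows up rescaled):

    import numpy as np

    A = np.array([[4.0, 2.0],
                  [1.0, 3.0]])

    values, vectors = np.linalg.eig(A)
    print(values)                          # 5 and 2 (order may vary)

    i = np.argmax(values)                  # pick the eigenvalue 5
    print(vectors[:, i] / vectors[1, i])   # rescaled eigenvector: [2. 1.]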
Generate a random square matrix to explore its eigenvalues and eigenvectors.
Every matrix can be broken down into three simple pieces!
Any transformation = Rotate → Stretch → Rotate!
Keep only the TOP k singular values → throw away the rest → get a close approximation with less data!
Original 1000×1000 image matrix = 1,000,000 numbers
After SVD: Keep the top 50 singular values → storage needed: 50 × (1000 + 1000 + 1) ≈ 100,000 numbers
Compression ratio: 10x smaller! Only 10% of the original size
Result: The image looks almost identical but the file is 90% smaller!
This is the same idea behind JPEG compression! (JPEG uses the DCT, a cousin of SVD, with a few extra tricks.)
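Here is a rough sketch of the compression idea in NumPy, writing the decomposition as A = U Σ Vᵀ (standard SVD notation; the matrix below is random noise, which compresses far worse than a real image, but the bookkeeping is the same):

    import numpy as np

    rng = np.random.default_rng(0)
    M = rng.standard_normal((100, 100))          # stand-in for an image matrix

    U, s, Vt = np.linalg.svd(M, full_matrices=False)

    k = 10                                       # keep only the top k singular values
    M_approx = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

    print(M.size)                                # 10,000 numbers originally
    print(k * (M.shape[0] + M.shape[1] + 1))     # 2,010 numbers after truncation
    print(np.linalg.norm(M - M_approx) / np.linalg.norm(M))   # relative error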
Principal Component Analysis uses SVD to find the "most important directions" in your data.
Generate a random matrix or enter one to see its Singular Value Decomposition.
Generate a random dataset (rows=samples, cols=features) and find its principal components.
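If you want to see what the PCA widget is doing under the hood, here is a minimal PCA-via-SVD sketch (the dataset is synthetic and the variable names are ours): center the data, take the SVD, and read the principal directions off the rows of Vᵀ.

    import numpy as np

    rng = np.random.default_rng(1)
    stretch = np.diag([3.0, 1.0, 0.2])           # make one direction dominate
    X = rng.standard_normal((200, 3)) @ stretch  # rows = samples, cols = features

    X_centered = X - X.mean(axis=0)              # PCA always starts by centering
    U, s, Vt = np.linalg.svd(X_centered, full_matrices=False)

    print(Vt)                        # rows = principal directions, most important first
    print(s**2 / (len(X) - 1))       # variance captured along each direction
    X_2d = X_centered @ Vt[:2].T     # project onto the top 2 principal components
    print(X_2d.shape)                # (200, 2)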