Interactive visualizations and explorations for linear algebra: from first principles to matrix decompositions
Welcome to your interactive journey into linear algebra! Let's start with the absolute basics and build up from there.
Think of it like a table or spreadsheet:
[ 1  2 ]
[ 3  4 ]

This is a 2×2 matrix (2 rows, 2 columns).
We read it as "2 by 2".
Matrices can be any size! Here's a 3×3 matrix:

[ 5  8  3 ]
[ 2  9  7 ]
[ 6  1  4 ]

This is a 3×3 matrix (3 rows, 3 columns).
In the real world, matrices store actual data. Each row might represent a person, and each column a different feature:
| Person | Age | Income ($) | Health Score | Zipcode | Population Density |
|---|---|---|---|---|---|
| Person 1 | 25 | 45,000 | 85 | 10001 | 12,500 |
| Person 2 | 42 | 78,000 | 92 | 90210 | 8,200 |
| Person 3 | 67 | 95,000 | 78 | 60601 | 15,800 |
| Person 4 | 31 | 62,000 | 88 | 77001 | 9,100 |
As a matrix: 4 people × 5 features = 4×5 matrix

[ 25  45000  85  10001  12500 ]
[ 42  78000  92  90210   8200 ]
[ 67  95000  78  60601  15800 ]
[ 31  62000  88  77001   9100 ]
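If you want to follow along in code, here's a minimal NumPy sketch of the same table as a matrix (NumPy is an assumption; the page itself is interactive):

```python
import numpy as np

# The table above as a 4x5 matrix: rows = people, columns = features
data = np.array([
    [25, 45000, 85, 10001, 12500],
    [42, 78000, 92, 90210,  8200],
    [67, 95000, 78, 60601, 15800],
    [31, 62000, 88, 77001,  9100],
])

print(data.shape)   # (4, 5): 4 people x 5 features
print(data[1, 1])   # Person 2's income: 78000
```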
Rotate, scale, and move 3D game objects. Every rotation in a video game uses matrices!
Instagram filters? Matrices! They transform pixel colors to create effects like blur, sharpen, and edge detection.
Neural networks are built from matrices. Every AI model you use (ChatGPT, image recognition) runs on matrix math!
Organize and analyze large datasets. Matrices help find patterns in millions of data points (like Netflix recommendations).
Model electrical circuits, structural forces, and quantum mechanics. Engineers use matrices to predict how systems behave.
Encrypt secret messages! Some encryption methods multiply your message by a secret matrix to scramble it.
Good news: Adding matrices is like adding regular numbers. Just do it element-by-element!
[ 1  2 ]   [ 5  6 ]   [  6   8 ]
[ 3  4 ] + [ 7  8 ] = [ 10  12 ]

Each element adds: 1+5=6, 2+6=8, 3+7=10, 4+8=12.
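The same addition in a quick NumPy sketch:

```python
import numpy as np

A = np.array([[1, 2], [3, 4]])
B = np.array([[5, 6], [7, 8]])

# Element-by-element addition, exactly as shown above
C = A + B
print(C)  # [[ 6  8]
          #  [10 12]]
```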
[ 1  2 ]     [  5   6   7 ]
[ 3  4 ]  +  [  8   9  10 ]   ← Sizes don't match (2×2 vs 3×3), so this sum is undefined!
             [ 11  12  13 ]
Warning: Matrix multiplication is NOT element-by-element! It's more like "mix and combine".
[ 1  2  3 ]   [  7   8 ]   [  58   64 ]
[ 4  5  6 ] × [  9  10 ] = [ 139  154 ]
              [ 11  12 ]

Each entry is a row of the first matrix dotted with a column of the second: 58 = 1×7 + 2×9 + 3×11.
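Here's the same product as a NumPy sketch (the `@` operator is matrix multiplication, not element-by-element):

```python
import numpy as np

A = np.array([[1, 2, 3], [4, 5, 6]])       # 2x3
B = np.array([[7, 8], [9, 10], [11, 12]])  # 3x2

# Rows of A "mix and combine" with columns of B
C = A @ B
print(C)  # [[ 58  64]
          #  [139 154]]
```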
The transpose of a matrix flips rows into columns (and columns into rows). In real life, you might need to transpose data when your features are stored as rows but your algorithm expects them as columns (or vice versa).
A =
[ 1  2  3 ]
[ 4  5  6 ]

Aᵀ =
[ 1  4 ]
[ 2  5 ]
[ 3  6 ]
Notice: Rows became columns! The first row [1,2,3] became the first column.
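In NumPy the transpose is just the `.T` attribute, a quick sketch:

```python
import numpy as np

A = np.array([[1, 2, 3], [4, 5, 6]])  # 2x3

print(A.T)        # rows became columns: 3x2
print(A.T.shape)  # (3, 2)
```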
The determinant tells you whether a matrix is invertible (meaning the transformation preserves information, so the original input can be uniquely recovered), and it also measures how the transformation scales area. For a 2×2 matrix

[ a  b ]
[ c  d ]

the determinant is det = ad - bc.
Matrix:
[ 2  3 ]
[ 1  4 ]

det = (2×4) - (3×1) = 8 - 3 = 5

This means the transformation scales area by a factor of 5.
[ 2  0 ]   [ 1 ]   [ 2 ]
[ 0  3 ] × [ 1 ] = [ 3 ]

det = (2×3) - (0×0) = 6 → stretches but keeps orientation
[ 1   0 ]   [ 1 ]   [  1 ]
[ 0  -1 ] × [ 1 ] = [ -1 ]

det = (1×-1) - (0×0) = -1 → inverts orientation (mirror)
[ 1  2 ]   [ 1 ]   [ 3 ]
[ 2  4 ] × [ 1 ] = [ 6 ]

det = (1×4) - (2×2) = 0 → not invertible, collapses to a line
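You can check all three determinants with `np.linalg.det`, a quick sketch:

```python
import numpy as np

stretch  = np.array([[2, 0], [0, 3]])    # det 6: stretches, keeps orientation
mirror   = np.array([[1, 0], [0, -1]])   # det -1: flips orientation
collapse = np.array([[1, 2], [2, 4]])    # det 0: collapses to a line

dets = [np.linalg.det(M) for M in (stretch, mirror, collapse)]
print(dets)  # approximately [6.0, -1.0, 0.0] (up to floating-point error)
```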
Find det(A) for:
A =
[ 3  -2 ]
[ 1   4 ]

det(A) = (3×4) - (-2×1) = 12 - (-2) = 12 + 2 = 14
Calculate det(B) and predict how it transforms the area of a 2×2 square:
B =
[ 0.5   0  ]
[  0   0.5 ]

det(B) = (0.5×0.5) - (0×0) = 0.25

Area scales by a factor of 0.25 (shrinks to 1/4 the original area).
What is it? Rotates vectors by angle θ without stretching.

[ cos(θ)  -sin(θ) ]
[ sin(θ)   cos(θ) ]

cos(45°) ≈ 0.707, sin(45°) ≈ 0.707
Blue shows the starting vector. Orange shows the rotated vector after applying the rotation matrix.
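A minimal NumPy sketch of the 45° rotation (the starting vector [1, 0] is my own choice for illustration):

```python
import numpy as np

theta = np.radians(45)
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

v = np.array([1.0, 0.0])          # starting vector
rotated = R @ v
print(rotated)                    # approx [0.707 0.707]
print(np.linalg.norm(rotated))    # length stays 1.0: no stretching
```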
What is it? Only diagonal entries are non-zero.
[ 2  0  0 ]
[ 0  5  0 ]
[ 0  0  3 ]

(x, y, z) → (2x, 5y, 3z)
The light square is the original shape. The green rectangle shows how a diagonal matrix stretches x and y independently.
What is it? Equals its own transpose: A = Aᵀ (mirror across the diagonal).

[ 4  1  2 ]
[ 1  3  5 ]   ← Notice the symmetry across the diagonal!
[ 2  5  6 ]
What is it? Preserves lengths and angles. QᵀQ = I

[ 0.8  -0.6  0 ]
[ 0.6   0.8  0 ]
[ 0     0    1 ]

Blue: basis vectors. Purple: after applying Q (same lengths, same 90° angle).
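You can verify the orthogonality condition QᵀQ = I directly, a quick sketch:

```python
import numpy as np

Q = np.array([[0.8, -0.6, 0.0],
              [0.6,  0.8, 0.0],
              [0.0,  0.0, 1.0]])

# Orthogonal check: Q^T Q should be the identity matrix
print(np.allclose(Q.T @ Q, np.eye(3)))  # True
```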
A 2ร2 matrix takes every point (x, y) in the plane and moves it to a new location. Think of it like molding clay!
Matrix times vector equals new vector
[ a  b ] [ x ]   [ ax + by ]
[ c  d ] [ y ] = [ cx + dy ]
The matrix "mixes" x and y coordinates to create the new position!
[ cos(θ)  -sin(θ) ]   ← Rotation by θ
[ sin(θ)   cos(θ) ]

[ sx   0 ]   ← Scaling (stretch x by sx, y by sy)
[ 0   sy ]

[ 1  k ]   ← Horizontal shear
[ 0  1 ]

[ 1   0 ]   ← Reflection (flip y-coordinate)
[ 0  -1 ]
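As a sketch, here's what a shear and a reflection do to the corners of the unit square (the corner list and the shear amount k = 0.5 are my own choices for illustration):

```python
import numpy as np

# Unit-square corners as columns: (0,0), (1,0), (1,1), (0,1)
square = np.array([[0, 0], [1, 0], [1, 1], [0, 1]]).T

shear   = np.array([[1, 0.5], [0, 1]])   # horizontal shear with k = 0.5
reflect = np.array([[1, 0], [0, -1]])    # flip y-coordinate

sheared   = shear @ square
reflected = reflect @ square
print(sheared)    # top edge slides right: x' = x + 0.5*y
print(reflected)  # y-coordinates change sign
```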
What happens? Click generate to create an invertible 2ร2 matrix, send a vector through it, then use the inverse to recover the original vector.
Blue shows the original vector v, orange shows y = Mv, and green shows the recovered vector M⁻¹y.
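The same round trip in a quick NumPy sketch (the matrix M and vector v here are my own invertible example):

```python
import numpy as np

M = np.array([[2.0, 1.0], [1.0, 3.0]])  # an invertible example matrix
v = np.array([1.0, 2.0])

y = M @ v                          # transform
recovered = np.linalg.inv(M) @ y   # undo with the inverse
print(np.allclose(recovered, v))   # True: we get the original back
```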
Try transforming the unit square! Enter matrix values or use presets.
Remember solving systems by hand? Painful! Matrices make it systematic and scalable to 100s of variables.
2x + 3y = 8
x - y = 1

[ 2   3 ] [ x ]   [ 8 ]
[ 1  -1 ] [ y ] = [ 1 ]

    A      x       b

A = coefficients, x = unknowns, b = right-hand sides
If A is invertible (det ≠ 0), the solution is: x = A⁻¹b
Step 1: Find det(A) = (2)(-1) - (3)(1) = -2 - 3 = -5 ✓ (non-zero!)

Step 2: Find A⁻¹ (use the formula for 2×2):

A⁻¹ = (1/det) × [  d  -b ] = (1/-5) × [ -1  -3 ] = [ 0.2   0.6 ]
                [ -c   a ]            [ -1   2 ]   [ 0.2  -0.4 ]
Step 3: Multiply x = A⁻¹b:

[ x ]   [ 0.2   0.6 ] [ 8 ]   [ (0.2×8) + (0.6×1)  ]   [ 2.2 ]
[ y ] = [ 0.2  -0.4 ] [ 1 ] = [ (0.2×8) + (-0.4×1) ] = [ 1.2 ]

Answer: x = 2.2, y = 1.2
✓ Check: 2(2.2) + 3(1.2) = 4.4 + 3.6 = 8.0 ✓ | 2.2 - 1.2 = 1.0 ✓
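In practice you'd let NumPy do all three steps at once. A quick sketch (`np.linalg.solve` is generally preferred over computing the inverse explicitly):

```python
import numpy as np

A = np.array([[2.0, 3.0], [1.0, -1.0]])
b = np.array([8.0, 1.0])

x = np.linalg.solve(A, b)  # solves Ax = b directly
print(x)                   # [2.2 1.2]
```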
When: Matrix A is invertible (full rank)
Means: Exactly ONE solution exists
Solve: x = A⁻¹b

Example: det = 2 → Unique solution!

[ 2  3 ]
[ 0  1 ]
When: Equations are dependent (one is a multiple of another)
Means: Infinite solutions (a line/plane of solutions)
Example: 2x + 4y = 6 and x + 2y = 3 (same line!)
Matrix: det = 0 → Infinite solutions

[ 2  4 ]
[ 1  2 ]
When: Equations are contradictory
Means: No solution exists (parallel lines never meet!)
Example: 2x + 4y = 6 and 2x + 4y = 10 (impossible!)
Same matrix but different b → No solution
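A sketch of how to tell the three cases apart numerically, comparing the rank of A with the rank of the augmented matrix [A | b] (the Rouché–Capelli test; the helper name `count_solutions` is made up for illustration):

```python
import numpy as np

def count_solutions(A, b):
    # Compare rank(A) with rank of the augmented matrix [A | b]
    rA = np.linalg.matrix_rank(A)
    rAb = np.linalg.matrix_rank(np.column_stack([A, b]))
    if rA < rAb:
        return "none"        # contradictory equations
    if rA < A.shape[1]:
        return "infinite"    # consistent but rank-deficient
    return "unique"

A = np.array([[2.0, 4.0], [1.0, 2.0]])            # dependent rows: det = 0
print(count_solutions(A, np.array([6.0, 3.0])))   # infinite (same line)
print(count_solutions(A, np.array([6.0, 10.0])))  # none (contradictory)
print(count_solutions(np.array([[2.0, 3.0], [0.0, 1.0]]),
                      np.array([1.0, 1.0])))      # unique
```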
Solve your own linear systems! Enter the number of equations and let the matrix solver work its magic.
Write this system as Ax = b:
3x + 2y = 7
x - 4y = -5
A =          x =       b =
[ 3   2 ]    [ x ]     [  7 ]
[ 1  -4 ]    [ y ]     [ -5 ]
Solve using matrix inverse:
[ 1  2 ] [ x ]   [  5 ]
[ 3  4 ] [ y ] = [ 11 ]

det(A) = (1×4) - (2×3) = 4 - 6 = -2 ✓ (non-zero)

A⁻¹ = (1/-2) × [  4  -2 ] = [ -2    1   ]
               [ -3   1 ]   [ 1.5  -0.5 ]

[ x ]   [ -2    1   ] [  5 ]   [ (-2×5) + (1×11)     ]   [ 1 ]
[ y ] = [ 1.5  -0.5 ] [ 11 ] = [ (1.5×5) + (-0.5×11) ] = [ 2 ]

Answer: x = 1, y = 2
How many solutions does each system have?
System A:      System B:
[ 2  6 ]       [ 1  2 ]
[ 1  3 ]       [ 3  5 ]

System A: det = (2×3) - (6×1) = 6 - 6 = 0 → Either infinite solutions or no solution (need to check consistency).
System B: det = (1×5) - (2×3) = 5 - 6 = -1 ≠ 0 → Unique solution exists! ✓
Most matrices rotate AND stretch vectors. But there are special directions where the matrix ONLY stretches!
"Matrix A just scales vector v by λ, with no rotation!"
Matrix A:      Candidate eigenvector v:      Claimed eigenvalue λ = 4

[ 3  1 ]       [ 1 ]
[ 0  4 ]       [ 0 ]
Let's verify Av = λv:

Av = [ 3  1 ] [ 1 ]   [ 3×1 + 1×0 ]   [ 3 ]
     [ 0  4 ] [ 0 ] = [ 0×1 + 4×0 ] = [ 0 ]

λv = 4 × [ 1 ]   [ 4 ]
         [ 0 ] = [ 0 ]

They DON'T match! So v = [1, 0] is NOT an eigenvector for λ = 4 (in fact Av = 3v, so [1, 0] is an eigenvector with eigenvalue 3, not 4).
Try v =
[ 1 ]
[ 1 ]

Av = [ 3  1 ] [ 1 ]   [ 4 ]            [ 4 ]
     [ 0  4 ] [ 1 ] = [ 4 ]   and 4v = [ 4 ]   → Match! This IS an eigenvector!
Step 1: Set up the characteristic equation

det(A - λI) = 0

Why? We need Av = λv, which rearranges to (A - λI)v = 0. This only has non-trivial solutions when det(A - λI) = 0.

Step 2: Calculate the determinant (gives a polynomial)
Example: A =
[ 4  2 ]
[ 1  3 ]

A - λI =
[ 4-λ    2  ]
[  1   3-λ  ]

det(A - λI) = (4-λ)(3-λ) - (2)(1)
            = 12 - 4λ - 3λ + λ² - 2
            = λ² - 7λ + 10
Step 3: Solve for λ (find the roots)

λ² - 7λ + 10 = 0
(λ - 5)(λ - 2) = 0
Eigenvalues: λ₁ = 5, λ₂ = 2

Step 4: Find eigenvectors (plug each λ back in)
For λ = 5: solve (A - 5I)v = 0:

A - 5I = [ -1   2 ]   →   Eigenvector v₁ = [ 2 ]
         [  1  -2 ]                        [ 1 ]
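A quick NumPy sketch that finds both eigenvalues of this matrix and verifies Av = λv for each eigenvector:

```python
import numpy as np

A = np.array([[4.0, 2.0], [1.0, 3.0]])

eigvals, eigvecs = np.linalg.eig(A)
print(sorted(eigvals))  # [2.0, 5.0]

# Each column of eigvecs satisfies A v = lambda v
for lam, v in zip(eigvals, eigvecs.T):
    print(np.allclose(A @ v, lam * v))  # True for each pair
```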
Generate a random square matrix to explore its eigenvalues and eigenvectors.
Every matrix can be broken down into three simple pieces!
Any transformation = Rotate → Stretch → Rotate!
Keep only the TOP k singular values, throw away the rest, and get a close approximation with less data!
Original 1000×1000 image matrix = 1,000,000 numbers.
After SVD: keep the top 50 singular values → storage needed: 50 × (1000 + 1000 + 1) ≈ 100,000 numbers.
Compression ratio: 10× smaller! Only 10% of the original size.
Result: the image looks almost identical but the file is 90% smaller!
This is exactly what JPEG compression does! (with a twist using DCT, a cousin of SVD)
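Here's a sketch of the keep-top-k idea on a small stand-in "image" (a toy 100×100 matrix that is truly rank 5, so k = 5 recovers it almost exactly; real images need larger k):

```python
import numpy as np

rng = np.random.default_rng(0)
# Stand-in for an image: a 100x100 matrix of exact rank 5 (toy data)
img = rng.random((100, 5)) @ rng.random((5, 100))

U, s, Vt = np.linalg.svd(img, full_matrices=False)

k = 5                                            # keep only the top k singular values
approx = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

stored = k * (img.shape[0] + img.shape[1] + 1)   # numbers kept: k*(m + n + 1)
print(stored)                                    # 1005 instead of 10000
print(np.allclose(img, approx))                  # True: rank-5 data is recovered
```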
Principal Component Analysis uses SVD to find the "most important directions" in your data.
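As a sketch, PCA via SVD is just: center the data, take the SVD, and read the principal directions off the rows of Vᵀ (the toy dataset here is my own random example):

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3))        # 200 samples x 3 features (toy data)

Xc = X - X.mean(axis=0)              # PCA step 1: center each feature
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)

components = Vt                      # rows = principal directions
explained_variance = s**2 / (len(X) - 1)
print(components.shape)              # (3, 3)
# Singular values come sorted, so the most important direction is first
print(np.all(np.diff(explained_variance) <= 0))  # True
```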
Generate a random matrix or enter one to see its Singular Value Decomposition.
Generate a random dataset (rows=samples, cols=features) and find its principal components.