Eigenvalues & Diagonalization

Discover the intrinsic structure of matrices through eigenanalysis.

Eigenvalues & Eigenvectors

Av = λv
A is a square matrix, v ≠ 0 is the eigenvector, λ is the eigenvalue

An eigenvector of A is a nonzero vector whose direction is preserved (or reversed) under the transformation A — only its length scales by the factor λ. This captures the "natural axes" of the transformation.
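The defining relation can be checked numerically. A minimal sketch (the matrix and eigenpair below are illustrative choices, not from the text):

```python
import numpy as np

# A simple diagonal matrix: its natural axes are the coordinate axes.
A = np.array([[2.0, 0.0],
              [0.0, 5.0]])
v = np.array([0.0, 1.0])   # eigenvector along the second axis
lam = 5.0                  # its eigenvalue

# Applying A only scales v by lam; the direction is preserved.
assert np.allclose(A @ v, lam * v)
```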

The Characteristic Equation

det(A − λI) = 0
This yields a polynomial of degree n in λ

Example: 2×2 Matrix

A = [[3, 1], [0, 2]]

det(A − λI) = (3 − λ)(2 − λ) − 0 = λ² − 5λ + 6 = 0

Eigenvalues: λ₁ = 3, λ₂ = 2 — found by solving a quadratic.

For λ = 3: (A − 3I)v = 0 → v₁ = (1, 0). For λ = 2: v₂ = (−1, 1).
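The worked example can be reproduced with NumPy's eigensolver, which returns unit-norm eigenvectors as columns (the ordering of the eigenvalues is not guaranteed, so the check below sorts them):

```python
import numpy as np

# The 2x2 example from the text.
A = np.array([[3.0, 1.0],
              [0.0, 2.0]])

eigvals, eigvecs = np.linalg.eig(A)

# Eigenvalues match the roots of the characteristic polynomial.
assert np.allclose(sorted(eigvals), [2.0, 3.0])

# Each column of eigvecs satisfies A v = lambda v.
for lam, v in zip(eigvals, eigvecs.T):
    assert np.allclose(A @ v, lam * v)
```

Note that NumPy normalizes the eigenvectors, so the column for λ = 2 is a scalar multiple of (−1, 1) rather than that exact vector.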

Diagonalization

A = PDP⁻¹
P = [v₁ | v₂ | … | vₙ] (eigenvectors as columns)
D = diag(λ₁, λ₂, …, λₙ)

A matrix is diagonalizable if it has n linearly independent eigenvectors. Diagonalization makes powers cheap: Aᵏ = PDᵏP⁻¹, where Dᵏ just raises each diagonal entry to the kth power. This makes computing matrix powers and matrix exponentials efficient.
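A sketch of the power trick, reusing the 2×2 example from above:

```python
import numpy as np

# Diagonalize A, then compute A^k as P D^k P^{-1}.
A = np.array([[3.0, 1.0],
              [0.0, 2.0]])

eigvals, P = np.linalg.eig(A)          # columns of P are eigenvectors
k = 5

# D^k only requires exponentiating the diagonal entries.
A_k = P @ np.diag(eigvals**k) @ np.linalg.inv(P)

# Agrees with repeated matrix multiplication.
assert np.allclose(A_k, np.linalg.matrix_power(A, k))
```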

For symmetric matrices, the Spectral Theorem guarantees real eigenvalues and an orthonormal basis of eigenvectors: A = QDQᵀ with Q orthogonal.
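A sketch of the spectral decomposition using NumPy's symmetric eigensolver (the symmetric matrix below is an illustrative choice):

```python
import numpy as np

# For a symmetric matrix, np.linalg.eigh returns real eigenvalues and
# an orthogonal matrix Q of eigenvectors, so A = Q D Q^T.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])           # symmetric example matrix

eigvals, Q = np.linalg.eigh(A)
D = np.diag(eigvals)

assert np.allclose(Q @ Q.T, np.eye(2))   # Q is orthogonal
assert np.allclose(Q @ D @ Q.T, A)       # spectral decomposition
```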

Beyond Diagonalization

When a matrix isn't diagonalizable, the Jordan Normal Form provides the closest alternative. For real-world computation, the Singular Value Decomposition (A = UΣVᵀ) is more broadly applicable: it exists for every matrix, including rectangular ones.
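A sketch of the SVD on a rectangular matrix, where eigendecomposition does not apply (the matrix is an illustrative choice):

```python
import numpy as np

# The SVD A = U Sigma V^T exists for any matrix, square or not.
A = np.array([[1.0, 0.0, 2.0],
              [0.0, 3.0, 0.0]])      # 2x3, not square

U, s, Vt = np.linalg.svd(A, full_matrices=False)
Sigma = np.diag(s)                   # singular values, decreasing order

# The factors multiply back to A.
assert np.allclose(U @ Sigma @ Vt, A)
```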