Av = λv
A is a square matrix, v ≠ 0 is an eigenvector, and λ is the corresponding eigenvalue
An eigenvector of A is a nonzero vector whose direction is preserved (or reversed) under the transformation A — only its length scales by the factor λ. This captures the "natural axes" of the transformation.
The Characteristic Equation
det(A − λI) = 0
This yields a polynomial of degree n in λ whose roots are the eigenvalues of A.
Example: for A = [[3, 1], [0, 2]], det(A − λI) = (3 − λ)(2 − λ) = 0, giving λ = 3 and λ = 2. For λ = 3: (A − 3I)v = 0 → v₁ = (1, 0). For λ = 2: v₂ = (−1, 1).
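The eigenpairs above can be checked numerically. A minimal NumPy sketch, assuming the example matrix A = [[3, 1], [0, 2]] (which is consistent with the eigenpairs given):

```python
import numpy as np

# Example matrix consistent with the worked eigenpairs above:
# det(A - lam*I) = (3 - lam)(2 - lam) = 0.
A = np.array([[3.0, 1.0],
              [0.0, 2.0]])

eigvals, eigvecs = np.linalg.eig(A)  # eigenvectors are the COLUMNS of eigvecs

# Verify Av = lambda*v for each eigenpair.
for lam, v in zip(eigvals, eigvecs.T):
    assert np.allclose(A @ v, lam * v)
```

Note that `np.linalg.eig` returns unit-length eigenvectors, so v₂ appears as a scaled version of (−1, 1); eigenvectors are only defined up to a nonzero scalar.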
Diagonalization
A = PDP⁻¹
P = [v₁ | v₂ | … | vₙ] (eigenvectors as columns)
D = diag(λ₁, λ₂, …, λₙ)
A matrix is diagonalizable if it has n linearly independent eigenvectors. Diagonal matrices are easy to work with: Aᵏ = PDᵏP⁻¹, where Dᵏ simply raises each diagonal entry to the kth power. This makes computing matrix powers and matrix exponentials efficient.
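The power identity can be demonstrated directly. A short sketch, reusing the (assumed) 2×2 example matrix from above:

```python
import numpy as np

A = np.array([[3.0, 1.0],
              [0.0, 2.0]])  # illustrative matrix, same as the worked example

eigvals, P = np.linalg.eig(A)   # P: eigenvectors as columns
k = 5
Dk = np.diag(eigvals ** k)      # D^k: raise each diagonal entry to the kth power

# A^k = P D^k P^{-1}
Ak = P @ Dk @ np.linalg.inv(P)
assert np.allclose(Ak, np.linalg.matrix_power(A, k))
```

The payoff: computing Dᵏ costs n scalar exponentiations instead of k matrix multiplications.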
For symmetric matrices, the Spectral Theorem guarantees real eigenvalues and an orthonormal set of eigenvectors, so A = QDQᵀ with Q orthogonal.
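A minimal check of the Spectral Theorem, using an arbitrary symmetric 2×2 matrix (the matrix itself is just an illustration, not from the notes):

```python
import numpy as np

# A symmetric example matrix; eigh is NumPy's solver for symmetric/Hermitian
# matrices and returns real eigenvalues with orthonormal eigenvectors.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

eigvals, Q = np.linalg.eigh(A)

assert np.allclose(Q @ Q.T, np.eye(2))             # Q is orthogonal
assert np.allclose(Q @ np.diag(eigvals) @ Q.T, A)  # A = Q D Q^T
```

Because Q⁻¹ = Qᵀ, no matrix inversion is needed when working with symmetric matrices.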
Applications
Principal Component Analysis (PCA): Eigenvalues of the covariance matrix reveal the most important directions in data
Differential equations: Systems x′ = Ax have solutions involving e^(λt)·v — see first-order DEs
Google PageRank: The dominant eigenvector of a link matrix ranks web pages
Vibration analysis: Eigenvalues give natural frequencies — connects to wave applications
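The PCA application above can be sketched in a few lines: eigenvectors of the covariance matrix point along the directions of greatest variance. The synthetic data below is illustrative only (stretched 3× along the x-axis so the principal axis is known in advance):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: standard normal samples stretched along x (std 3) vs y (std 0.5).
data = rng.normal(size=(500, 2)) @ np.diag([3.0, 0.5])

cov = np.cov(data, rowvar=False)        # 2x2 sample covariance matrix
eigvals, eigvecs = np.linalg.eigh(cov)  # eigh sorts eigenvalues ascending

principal_axis = eigvecs[:, -1]         # eigenvector of the largest eigenvalue
# The recovered axis should point (up to sign) along (1, 0).
```

Projecting the data onto the top few eigenvectors is exactly the dimensionality reduction step of PCA.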
When a matrix isn't diagonalizable, the Jordan Normal Form provides the closest alternative. For real-world computation, the Singular Value Decomposition (A = UΣVᵀ) is the most powerful factorization — it always exists and handles rectangular matrices too.
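A quick demonstration of the SVD's generality, on an arbitrary 2×3 matrix chosen for illustration:

```python
import numpy as np

# A rectangular matrix: not diagonalizable in the eigen sense, but it
# still has an SVD, A = U Sigma V^T.
A = np.array([[1.0, 0.0, 2.0],
              [0.0, 3.0, 0.0]])

U, s, Vt = np.linalg.svd(A, full_matrices=False)

# Reconstruct A from its factors.
assert np.allclose(U @ np.diag(s) @ Vt, A)

# Singular values are square roots of the eigenvalues of A A^T,
# which ties the SVD back to the eigenvalue machinery above.
assert np.allclose(s**2, np.sort(np.linalg.eigvalsh(A @ A.T))[::-1])
```

The second assertion makes the connection explicit: the SVD of A is the eigendecomposition of the symmetric matrices AAᵀ and AᵀA.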