Eigenvalues & Diagonalization

Discover the intrinsic structure of matrices through eigenanalysis.

Eigenvalues & Eigenvectors

Av = λv
A is a square matrix, v ≠ 0 is the eigenvector, λ is the eigenvalue

An eigenvector of A is a nonzero vector whose direction is preserved (or reversed) under the transformation A — only its length scales by the factor λ. This captures the "natural axes" of the transformation.
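A quick numerical illustration of the defining equation, using a small symmetric matrix as an arbitrary example (NumPy here is just for demonstration):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
v = np.array([1.0, 1.0])   # an eigenvector of A: its direction is preserved
Av = A @ v                 # applying A only scales v, by the factor λ = 3
assert np.allclose(Av, 3 * v)
```

The vector (1, 1) comes out as (3, 3): same direction, three times the length.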

The Characteristic Equation

det(A − λI) = 0
This yields the characteristic polynomial: a polynomial of degree n in λ whose roots are the eigenvalues of A.
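A minimal sketch of forming and solving the characteristic polynomial numerically (the 2×2 matrix is an arbitrary example):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
coeffs = np.poly(A)       # coefficients of det(A - λI), highest power first
roots = np.roots(coeffs)  # the roots of the polynomial are the eigenvalues
```

Here `np.poly(A)` returns [1, −4, 3], i.e. λ² − 4λ + 3, whose roots are 3 and 1.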

Example: 2×2 Matrix

A = [[3, 1], [0, 2]]


det(A − λI) = (3 − λ)(2 − λ) − 0 = λ² − 5λ + 6 = 0


Eigenvalues: λ₁ = 3, λ₂ = 2 — found by solving the quadratic (for a triangular matrix like this one, they are simply the diagonal entries).

For λ = 3: (A − 3I)v = 0 → v₁ = (1, 0). For λ = 2: v₂ = (−1, 1).
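The hand computation above can be checked numerically with `np.linalg.eig`, which returns the eigenvalues and normalized eigenvectors (as columns):

```python
import numpy as np

A = np.array([[3.0, 1.0],
              [0.0, 2.0]])
lams, V = np.linalg.eig(A)     # eigenvalues and eigenvectors (columns of V)
# each column v of V satisfies A v = λ v for its eigenvalue λ
assert np.allclose(sorted(lams), [2.0, 3.0])
assert np.allclose(A @ V, V * lams)
```

Note the returned eigenvectors are unit-length, so (−1, 1) appears scaled by 1/√2 (possibly sign-flipped); the direction is what matters.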

Diagonalization

A = PDP⁻¹
P = [v₁ | v₂ | … | vₙ] (eigenvectors as columns)
D = diag(λ₁, λ₂, …, λₙ)

A matrix is diagonalizable if it has n linearly independent eigenvectors. Diagonalization makes matrix powers cheap: Aᵏ = PDᵏP⁻¹, where Dᵏ just raises each diagonal entry to the kth power. This makes computing matrix powers and matrix exponentials efficient.
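A sketch of this shortcut, comparing PDᵏP⁻¹ against direct repeated multiplication (using the 2×2 matrix from the example above):

```python
import numpy as np

A = np.array([[3.0, 1.0],
              [0.0, 2.0]])
lams, P = np.linalg.eig(A)

# A^10 via diagonalization: only the diagonal entries are raised to the power
A10 = P @ np.diag(lams**10) @ np.linalg.inv(P)

# matches ten repeated matrix multiplications
assert np.allclose(A10, np.linalg.matrix_power(A, 10))
```

For large k, exponentiating n scalars is far cheaper than k full matrix products.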

For symmetric matrices, the Spectral Theorem guarantees real eigenvalues and orthogonal eigenvectors: A = QDQᵀ, where Q is an orthogonal matrix (Q⁻¹ = Qᵀ).
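For a small symmetric matrix, `np.linalg.eigh` (NumPy's routine for symmetric/Hermitian matrices) illustrates the theorem:

```python
import numpy as np

S = np.array([[2.0, 1.0],
              [1.0, 2.0]])        # symmetric
lams, Q = np.linalg.eigh(S)       # real eigenvalues, orthonormal eigenvectors

assert np.allclose(Q @ np.diag(lams) @ Q.T, S)   # S = Q D Qᵀ
assert np.allclose(Q.T @ Q, np.eye(2))           # Q is orthogonal
```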

Applications

  • Principal Component Analysis (PCA): Eigenvalues of the covariance matrix reveal the most important directions in data
  • Differential equations: Systems x' = Ax have solutions of the form e^(λt)·v — see first-order DEs
  • Google PageRank: The dominant eigenvector of a link matrix ranks web pages
  • Vibration analysis: Eigenvalues give natural frequencies — connects to wave applications
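The PageRank idea above can be sketched with power iteration on a toy link matrix; the 3-page graph here is a made-up example, not real link data:

```python
import numpy as np

# Toy 3-page web: column j holds the out-link probabilities of page j,
# so each column sums to 1 (a column-stochastic matrix).
L = np.array([[0.0, 0.0, 1.0],
              [0.5, 0.0, 0.0],
              [0.5, 1.0, 0.0]])

r = np.ones(3) / 3            # start from a uniform ranking
for _ in range(100):          # power iteration: converges to the
    r = L @ r                 # dominant eigenvector (eigenvalue 1)
    r /= r.sum()

# r now holds the PageRank scores; it satisfies L r = r
assert np.allclose(L @ r, r)
```

Repeatedly applying L washes out all eigenvector components except the dominant one, which is why this simple loop converges.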

When a matrix isn't diagonalizable, the Jordan Normal Form provides the closest alternative. For real-world computation, the Singular Value Decomposition (A = UΣVᵀ) is the most powerful factorization — it always exists and handles rectangular matrices too.
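As a quick illustration of the SVD's robustness, NumPy computes it directly even for a rectangular matrix, where no eigendecomposition exists (the 2×3 matrix below is an arbitrary example):

```python
import numpy as np

A = np.array([[3.0, 1.0, 0.0],
              [0.0, 2.0, 1.0]])   # 2x3: not square, so no eigenvalues

U, s, Vt = np.linalg.svd(A, full_matrices=False)  # A = U Σ Vᵀ
# U and Vᵀ have orthonormal rows/columns; s holds the singular values
assert np.allclose(U @ np.diag(s) @ Vt, A)
```

The singular values of A are the square roots of the eigenvalues of AᵀA, which ties the SVD back to the eigenanalysis developed above.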