
Eigenvalues & Diagonalization

Discover the intrinsic structure of matrices through eigenanalysis.

Eigenvalues & Eigenvectors

Av = λv
A is a square matrix, v ≠ 0 is the eigenvector, λ is the eigenvalue

An eigenvector of A is a nonzero vector whose direction is preserved (or reversed) under the transformation A — only its length scales by the factor λ. This captures the "natural axes" of the transformation.

The Characteristic Equation

det(A − λI) = 0
For an n×n matrix, this yields a polynomial of degree n in λ

Example: 2×2 Matrix

A = [[3, 1], [0, 2]]

det(A − λI) = (3 − λ)(2 − λ) − 0 = λ² − 5λ + 6 = 0

Eigenvalues: λ₁ = 3, λ₂ = 2 — found by solving a quadratic.

For λ = 3: (A − 3I)v = 0 → v₁ = (1, 0). For λ = 2: v₂ = (−1, 1).
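The worked example above can be checked numerically. The sketch below assumes numpy is available; note that `np.linalg.eig` may return the eigenvalues in any order and scales each eigenvector to unit length, so it won't literally return (1, 0) and (−1, 1), only multiples of them.

```python
import numpy as np

A = np.array([[3.0, 1.0],
              [0.0, 2.0]])

# eig returns the eigenvalues and unit-norm eigenvectors (as columns)
eigvals, eigvecs = np.linalg.eig(A)
assert np.allclose(np.sort(eigvals), [2.0, 3.0])

# each column v_i satisfies the defining equation A v = λ v
for i in range(2):
    assert np.allclose(A @ eigvecs[:, i], eigvals[i] * eigvecs[:, i])
```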

Diagonalization

A = PDP⁻¹
P = [v₁ | v₂ | … | vₙ] (eigenvectors as columns)
D = diag(λ₁, λ₂, …, λₙ)

A matrix is diagonalizable if it has n linearly independent eigenvectors. Diagonalization makes the matrix easy to work with: Aᵏ = PDᵏP⁻¹, where Dᵏ just raises each diagonal entry to the kth power. This makes computing matrix powers and matrix exponentials efficient.
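As a quick illustration of the Aᵏ = PDᵏP⁻¹ identity, the sketch below (assuming numpy) computes A⁵ for the example matrix by powering only the diagonal, then compares against direct repeated multiplication:

```python
import numpy as np

A = np.array([[3.0, 1.0],
              [0.0, 2.0]])

eigvals, P = np.linalg.eig(A)      # columns of P are the eigenvectors
D5 = np.diag(eigvals ** 5)         # powering D = powering its diagonal entries

A5 = P @ D5 @ np.linalg.inv(P)     # A^5 = P D^5 P^{-1}
assert np.allclose(A5, np.linalg.matrix_power(A, 5))
```

The same trick underlies fast computation of matrix exponentials: e^A = Pe^D P⁻¹, with e^D obtained by exponentiating each diagonal entry.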


For symmetric matrices, the Spectral Theorem guarantees real eigenvalues and orthogonal eigenvectors: A = QDQᵀ.
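For symmetric input, `np.linalg.eigh` exploits the Spectral Theorem directly; the sketch below (with a small symmetric matrix chosen for illustration) verifies both the factorization A = QDQᵀ and the orthogonality of Q:

```python
import numpy as np

S = np.array([[2.0, 1.0],
              [1.0, 2.0]])       # symmetric, eigenvalues 1 and 3

w, Q = np.linalg.eigh(S)         # real eigenvalues, orthonormal eigenvectors
assert np.allclose(np.sort(w), [1.0, 3.0])
assert np.allclose(Q @ np.diag(w) @ Q.T, S)   # S = Q D Q^T
assert np.allclose(Q.T @ Q, np.eye(2))        # Q is orthogonal
```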

Applications

  • Principal Component Analysis (PCA): Eigenvalues of the covariance matrix reveal the most important directions in data
  • Differential equations: Systems x' = Ax have solutions of the form e^(λt)·v — see first-order DEs
  • Google PageRank: The dominant eigenvector of a link matrix ranks web pages
  • Vibration analysis: Eigenvalues give natural frequencies — connects to wave applications
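The PageRank idea above can be sketched with power iteration: repeatedly applying the link matrix to any starting vector converges to the dominant eigenvector. The 3-page link matrix below is a made-up toy example, not real PageRank data (the real algorithm also adds a damping factor):

```python
import numpy as np

# Hypothetical column-stochastic link matrix for a 3-page web:
# column j holds the probability of following a link from page j
M = np.array([[0.0, 0.0, 1.0],
              [0.5, 0.0, 0.0],
              [0.5, 1.0, 0.0]])

v = np.ones(3) / 3               # start from the uniform distribution
for _ in range(100):
    v = M @ v                    # power iteration
    v /= np.abs(v).sum()         # renormalize to a probability vector

# v converges to the dominant eigenvector (eigenvalue 1 for stochastic M)
assert np.allclose(M @ v, v)
```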

When a matrix isn't diagonalizable, the Jordan Normal Form provides the closest alternative. For real-world computation, the Singular Value Decomposition (A = UΣVᵀ) is the most powerful factorization — it always exists and handles rectangular matrices too.
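Unlike eigendecomposition, the SVD applies even to rectangular matrices. A minimal sketch (assuming numpy, with an arbitrary 2×3 example matrix):

```python
import numpy as np

# A rectangular matrix has no eigendecomposition, but always has an SVD
B = np.array([[1.0, 0.0, 2.0],
              [0.0, 3.0, 0.0]])

U, s, Vt = np.linalg.svd(B, full_matrices=False)  # A = U Σ V^T
assert np.allclose(U @ np.diag(s) @ Vt, B)
assert np.all(s[:-1] >= s[1:])   # singular values come sorted, descending
```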