Eigenvalues & Diagonalization

Discover the intrinsic structure of matrices through eigenanalysis.

Eigenvalues & Eigenvectors

Av = λv
A is a square matrix, v ≠ 0 is the eigenvector, λ is the eigenvalue

An eigenvector of A is a nonzero vector whose direction is preserved (or reversed, when λ < 0) by the transformation A; only its length is scaled, by the factor λ. Eigenvectors capture the "natural axes" of the transformation.
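A quick numeric illustration of this (a sketch assuming numpy, using the matrix from the worked example below): an eigenvector keeps its direction under A, while a generic vector does not.

```python
import numpy as np

A = np.array([[3.0, 1.0],
              [0.0, 2.0]])   # the matrix from the worked example below

v = np.array([1.0, 0.0])     # an eigenvector of A
w = np.array([1.0, 1.0])     # not an eigenvector

# v is only scaled (by lambda = 3); its direction is unchanged
assert np.allclose(A @ v, 3.0 * v)

# A @ w = (4, 2) is not parallel to w: the 2D "cross product" is nonzero
assert (A @ w)[0] * w[1] - (A @ w)[1] * w[0] != 0.0
```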


The Characteristic Equation

det(A − λI) = 0
This yields a polynomial of degree n in λ

Example: 2×2 Matrix

A = [[3, 1], [0, 2]]

det(A − λI) = (3 − λ)(2 − λ) − 0 = λ² − 5λ + 6 = 0

Eigenvalues: λ₁ = 3, λ₂ = 2 — found by solving a quadratic.
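The quadratic can be sanity-checked numerically (a sketch, assuming numpy is available): np.poly builds the characteristic polynomial of a square matrix, and np.roots solves it.

```python
import numpy as np

A = np.array([[3.0, 1.0],
              [0.0, 2.0]])

# Coefficients of the characteristic polynomial, highest degree first:
# lambda^2 - 5*lambda + 6  ->  [1., -5., 6.]
coeffs = np.poly(A)

# Its roots are the eigenvalues: 3 and 2
eigvals = np.roots(coeffs)
```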

For λ = 3: (A − 3I)v = 0 gives v₁ = (1, 0). For λ = 2: A − 2I = [[1, 1], [0, 0]], so v₂ = (−1, 1).
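In practice eigenpairs are computed directly rather than by hand. A minimal check of the example above (assuming numpy; note that np.linalg.eig normalizes eigenvectors to unit length, so they may differ from the hand-derived ones by a scalar factor):

```python
import numpy as np

A = np.array([[3.0, 1.0],
              [0.0, 2.0]])

# Eigenvalues, and unit-length eigenvectors as the columns of eigvecs
eigvals, eigvecs = np.linalg.eig(A)

# Each column satisfies A v = lambda v, up to floating-point error
for lam, v in zip(eigvals, eigvecs.T):
    assert np.allclose(A @ v, lam * v)
```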

Diagonalization

A = PDP⁻¹
P = [v₁ | v₂ | … | vₙ] (eigenvectors as columns)
D = diag(λ₁, λ₂, …, λₙ)

A matrix is diagonalizable if it has n linearly independent eigenvectors. Diagonalization makes powers cheap: Aᵏ = PDᵏP⁻¹, where Dᵏ just raises each diagonal entry to the kth power. This makes computing matrix powers and matrix exponentials efficient.
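The power trick can be sketched as follows (assuming numpy, reusing the 2×2 example from above):

```python
import numpy as np

A = np.array([[3.0, 1.0],
              [0.0, 2.0]])

eigvals, P = np.linalg.eig(A)      # columns of P are eigenvectors
D5 = np.diag(eigvals ** 5)         # powering D = powering each diagonal entry

A5 = P @ D5 @ np.linalg.inv(P)     # A^5 = P D^5 P^-1
assert np.allclose(A5, np.linalg.matrix_power(A, 5))
```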


For symmetric matrices, the Spectral Theorem guarantees real eigenvalues and orthogonal eigenvectors: A = QDQᵀ.
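A minimal numeric check of the Spectral Theorem, using an assumed 2×2 symmetric matrix (np.linalg.eigh is the routine designed for symmetric/Hermitian input):

```python
import numpy as np

# An assumed symmetric example matrix
S = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# eigh returns real eigenvalues (ascending) and orthonormal eigenvectors
eigvals, Q = np.linalg.eigh(S)

assert np.allclose(Q @ Q.T, np.eye(2))             # Q is orthogonal
assert np.allclose(Q @ np.diag(eigvals) @ Q.T, S)  # S = Q D Q^T
```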

Applications

  • Principal Component Analysis (PCA): Eigenvalues of the covariance matrix reveal the most important directions in data
  • Differential equations: Systems x' = Ax have solutions of the form e^(λt)·v — see first-order DEs
  • Google PageRank: The dominant eigenvector of a link matrix ranks web pages
  • Vibration analysis: Eigenvalues give natural frequencies — connects to wave applications
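The PCA idea in the first bullet can be sketched in a few lines (assuming numpy; the toy data and the stretch direction (1, 1) are assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy data (an assumption): 200 points stretched mostly along (1, 1)
X = rng.normal(size=(200, 2)) @ np.array([[2.0, 2.0],
                                          [0.5, -0.5]])

cov = np.cov(X, rowvar=False)            # 2x2 covariance matrix
eigvals, eigvecs = np.linalg.eigh(cov)   # eigh: eigenvalues ascending

# The eigenvector with the largest eigenvalue is the first principal
# component: here it should point roughly along (1, 1), up to sign
pc1 = eigvecs[:, -1]
```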
When a matrix isn't diagonalizable, the Jordan Normal Form provides the closest alternative. For real-world computation, the Singular Value Decomposition (A = UΣVᵀ) is the most powerful factorization — it always exists and handles rectangular matrices too.
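A short sketch of the SVD's generality (assuming numpy, with an assumed 2×3 example): unlike eigendecomposition, it applies to rectangular matrices and always reconstructs A exactly.

```python
import numpy as np

# A rectangular matrix: no eigendecomposition, but the SVD always exists
A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])

U, s, Vt = np.linalg.svd(A, full_matrices=False)

# Reconstruct A = U Sigma V^T
assert np.allclose(U @ np.diag(s) @ Vt, A)
```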