In This Lesson
- Eigenvalues & Eigenvectors
- The Characteristic Equation
- Diagonalization
- Applications

Eigenvalues & Eigenvectors
Av = λv
A is a square matrix, v ≠ 0 is the eigenvector, λ is the eigenvalue
An eigenvector of A is a nonzero vector whose direction is preserved (or reversed) under the transformation A — only its length scales by the factor λ. This captures the "natural axes" of the transformation.
The Characteristic Equation

The eigenvalues of A are the roots of the characteristic equation det(A − λI) = 0.

Example: 2×2 Matrix A = [[3, 1], [0, 2]]
det(A − λI) = (3 − λ)(2 − λ) − 0 = λ² − 5λ + 6 = 0
Eigenvalues: λ₁ = 3, λ₂ = 2 — found by solving the quadratic.
For λ = 3: (A − 3I)v = 0 → v₁ = (1, 0). For λ = 2: v₂ = (−1, 1).
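The hand computation above can be checked numerically. A minimal sketch with NumPy (not part of the lesson): `np.linalg.eig` returns the eigenvalues and unit-normalized eigenvectors, so the vectors may differ from the hand-computed ones by a scalar factor.

```python
import numpy as np

A = np.array([[3.0, 1.0],
              [0.0, 2.0]])

eigvals, eigvecs = np.linalg.eig(A)   # eigenvectors come back as columns
assert np.allclose(sorted(eigvals), [2.0, 3.0])

# Check Av = λv for every eigenpair; (1, 0) and (−1, 1) reappear here
# up to normalization.
for lam, v in zip(eigvals, eigvecs.T):
    assert np.allclose(A @ v, lam * v)
```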
Diagonalization
A = PDP⁻¹
P = [v₁ | v₂ | … | vₙ] (eigenvectors as columns)
D = diag(λ₁, λ₂, …, λₙ)
A matrix is diagonalizable if it has n linearly independent eigenvectors. Diagonal matrices are easy to work with: Aᵏ = PDᵏP⁻¹ where Dᵏ just raises each diagonal entry to the kth power. This makes computing matrix powers and exponential functions of matrices efficient.
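The Aᵏ = PDᵏP⁻¹ identity can be sketched in NumPy (an illustration, not part of the lesson), reusing the 2×2 example matrix from earlier:

```python
import numpy as np

A = np.array([[3.0, 1.0],
              [0.0, 2.0]])

eigvals, P = np.linalg.eig(A)        # columns of P are eigenvectors
Pinv = np.linalg.inv(P)

k = 5
# A^k = P D^k P^-1, where D^k just raises each diagonal entry to the k-th power
Ak = P @ np.diag(eigvals ** k) @ Pinv

assert np.allclose(Ak, np.linalg.matrix_power(A, k))
```

The direct approach multiplies matrices k − 1 times; with the eigendecomposition in hand, only the diagonal entries need to be powered.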
For symmetric matrices, the Spectral Theorem guarantees real eigenvalues and orthogonal eigenvectors: A = QDQᵀ.
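The Spectral Theorem can be verified on a small symmetric example (a sketch with an assumed 2×2 matrix, not from the lesson); NumPy's `eigh` is specialized for symmetric/Hermitian input:

```python
import numpy as np

S = np.array([[2.0, 1.0],
              [1.0, 2.0]])           # a symmetric matrix

w, Q = np.linalg.eigh(S)             # real eigenvalues, orthonormal eigenvectors
assert np.allclose(Q @ np.diag(w) @ Q.T, S)    # A = Q D Qᵀ
assert np.allclose(Q.T @ Q, np.eye(2))         # Q is orthogonal: QᵀQ = I
```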
Applications
- Principal Component Analysis (PCA): Eigenvalues of the covariance matrix reveal the most important directions in data
- Differential equations: Systems x' = Ax have solutions involving e^(λt)·v — see first-order DEs
- Google PageRank: The dominant eigenvector of a link matrix ranks web pages
- Vibration analysis: Eigenvalues give natural frequencies — connects to wave applications
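The PageRank idea can be sketched with power iteration on a hypothetical 3-page web (the link matrix below is an invented toy example, not Google's): repeatedly multiplying by the link matrix converges to its dominant eigenvector.

```python
import numpy as np

# Toy column-stochastic link matrix for a hypothetical 3-page web:
# column j gives the fraction of page j's links pointing at each page.
M = np.array([[0.0, 0.0, 1.0],
              [0.5, 0.0, 0.0],
              [0.5, 1.0, 0.0]])

v = np.ones(3) / 3          # start from a uniform rank vector
for _ in range(100):        # power iteration: repeated multiplication
    v = M @ v               # converges to the dominant eigenvector
    v /= v.sum()            # renormalize so the ranks sum to 1

# The fixed point satisfies Mv = v (dominant eigenvalue 1)
print(v)                    # ≈ [0.4, 0.2, 0.4]
```

Pages 1 and 3 end up with the highest rank because they receive the most link weight; the rank vector is exactly the dominant eigenvector the bullet above describes.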
When a matrix isn't diagonalizable, the Jordan Normal Form provides the closest alternative. For real-world computation, the Singular Value Decomposition (A = UΣVᵀ) is the most powerful factorization — it always exists and handles rectangular matrices too.
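The "always exists, handles rectangular matrices" claim can be illustrated with a small sketch (the 2×3 matrix is an arbitrary example):

```python
import numpy as np

# Eigendecomposition needs a square matrix; the SVD does not.
B = np.array([[1.0, 0.0, 2.0],
              [0.0, 3.0, 0.0]])      # 2×3 rectangular matrix

U, s, Vt = np.linalg.svd(B, full_matrices=False)
assert np.allclose(U @ np.diag(s) @ Vt, B)     # B = U Σ Vᵀ
assert np.all(s >= 0)                          # singular values are nonnegative
```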