
Eigenvalues & Diagonalization

Discover the intrinsic structure of matrices through eigenanalysis.


Eigenvalues & Eigenvectors

Av = λv
A is a square matrix, v ≠ 0 is the eigenvector, λ is the eigenvalue

An eigenvector of A is a nonzero vector whose direction is preserved (or reversed) under the transformation A — only its length scales by the factor λ. This captures the "natural axes" of the transformation.
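
The defining property Av = λv is easy to check numerically. A minimal sketch using NumPy (the matrix values here are illustrative, not from the text):

```python
import numpy as np

# Hypothetical example matrix; any square matrix works here
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# np.linalg.eig returns eigenvalues and unit-length eigenvectors (as columns)
eigenvalues, eigenvectors = np.linalg.eig(A)

# Each eigenpair satisfies Av = λv: the direction is preserved, only scaled
for lam, v in zip(eigenvalues, eigenvectors.T):
    assert np.allclose(A @ v, lam * v)
```

Note that NumPy normalizes eigenvectors to unit length; any nonzero scalar multiple of an eigenvector is equally valid.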


The Characteristic Equation

det(A − λI) = 0
This yields a polynomial of degree n in λ

Example: 2×2 Matrix

A = [[3, 1], [0, 2]]


det(A − λI) = (3 − λ)(2 − λ) − 0 = λ² − 5λ + 6 = 0

Eigenvalues: λ₁ = 3, λ₂ = 2, found by factoring the quadratic as (λ − 3)(λ − 2) = 0.

For λ = 3: (A − 3I)v = 0 → v₁ = (1, 0). For λ = 2: v₂ = (−1, 1).
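
The worked example can be verified with NumPy (a sketch; `np.poly` returns the characteristic polynomial's coefficients):

```python
import numpy as np

A = np.array([[3.0, 1.0],
              [0.0, 2.0]])

# Characteristic polynomial det(λI − A) = λ² − 5λ + 6
coeffs = np.poly(A)
assert np.allclose(coeffs, [1.0, -5.0, 6.0])

# NumPy makes no ordering promise for eigenvalues, so sort before comparing
eigenvalues, eigenvectors = np.linalg.eig(A)
assert np.allclose(sorted(eigenvalues), [2.0, 3.0])

# The eigenvectors from the text solve (A − λI)v = 0
assert np.allclose((A - 3 * np.eye(2)) @ np.array([1.0, 0.0]), 0.0)
assert np.allclose((A - 2 * np.eye(2)) @ np.array([-1.0, 1.0]), 0.0)
```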

Diagonalization

A = PDP⁻¹
P = [v₁ | v₂ | … | vₙ] (eigenvectors as columns)
D = diag(λ₁, λ₂, …, λₙ)

A matrix is diagonalizable if and only if it has n linearly independent eigenvectors. Diagonalization makes powers cheap: Aᵏ = PDᵏP⁻¹, where Dᵏ simply raises each diagonal entry to the kth power. The same idea gives efficient matrix exponentials, since e^A = Pe^D P⁻¹.
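
The power identity can be checked directly, reusing the 2×2 matrix from the example above (a sketch, assuming distinct eigenvalues so that P is invertible):

```python
import numpy as np

A = np.array([[3.0, 1.0],
              [0.0, 2.0]])

# Columns of P are eigenvectors; D holds the eigenvalues on its diagonal
eigenvalues, P = np.linalg.eig(A)

k = 5
# Dᵏ just raises each diagonal entry to the kth power
Dk = np.diag(eigenvalues ** k)
Ak = P @ Dk @ np.linalg.inv(P)

# Agrees with repeated multiplication
assert np.allclose(Ak, np.linalg.matrix_power(A, k))
```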

For symmetric matrices, the Spectral Theorem guarantees real eigenvalues and orthogonal eigenvectors: A = QDQᵀ.
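For the symmetric case, NumPy provides `np.linalg.eigh`, which exploits symmetry and returns real eigenvalues with orthonormal eigenvectors. A sketch with illustrative values:

```python
import numpy as np

# A symmetric matrix (hypothetical values)
A = np.array([[4.0, 1.0],
              [1.0, 3.0]])

# eigh is specialized for symmetric/Hermitian matrices
eigenvalues, Q = np.linalg.eigh(A)

# Q is orthogonal (QQᵀ = I), so Q⁻¹ = Qᵀ
assert np.allclose(Q @ Q.T, np.eye(2))

# Spectral Theorem: A = QDQᵀ
assert np.allclose(Q @ np.diag(eigenvalues) @ Q.T, A)
```

Prefer `eigh` over `eig` whenever the matrix is known to be symmetric: it is faster and guarantees real output.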

Applications

  • Principal Component Analysis (PCA): Eigenvalues of the covariance matrix reveal the most important directions in data
  • Differential equations: Systems x′ = Ax have solutions of the form e^(λt)·v (see first-order DEs)
  • Google PageRank: The dominant eigenvector of a link matrix ranks web pages
  • Vibration analysis: Eigenvalues give natural frequencies — connects to wave applications
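
The PCA application above can be sketched in a few lines: the eigenvectors of the covariance matrix are the principal axes, and the largest eigenvalue marks the direction of greatest variance (synthetic data, illustrative scaling):

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic 2-D data, deliberately stretched along the x-axis
X = rng.normal(size=(200, 2)) @ np.array([[3.0, 0.0],
                                          [0.0, 0.5]])

# Covariance matrix of the mean-centered data
Xc = X - X.mean(axis=0)
C = Xc.T @ Xc / (len(Xc) - 1)

# Covariance matrices are symmetric, so eigh applies;
# the eigenvector with the largest eigenvalue is the first principal component
eigenvalues, eigenvectors = np.linalg.eigh(C)
principal_axis = eigenvectors[:, np.argmax(eigenvalues)]
```

Here the recovered principal axis points (up to sign) along the direction the data was stretched.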
When a matrix isn't diagonalizable, the Jordan Normal Form provides the closest alternative. For real-world computation, the Singular Value Decomposition (A = UΣVᵀ) is the most powerful factorization — it always exists and handles rectangular matrices too.
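
Unlike eigendecomposition, the SVD applies to rectangular matrices directly. A minimal sketch with illustrative values:

```python
import numpy as np

# A rectangular matrix: eigendecomposition doesn't apply, but SVD always exists
A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])

# Reduced SVD: U (2×2), singular values s, Vᵀ (2×3)
U, s, Vt = np.linalg.svd(A, full_matrices=False)

# Reconstruct A = UΣVᵀ
assert np.allclose(U @ np.diag(s) @ Vt, A)

# Singular values are nonnegative and sorted in descending order
assert np.all(s >= 0) and np.all(s[:-1] >= s[1:])
```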