Vectors & Matrices

The fundamental building blocks of linear algebra and modern computation.

Vectors

A vector is an ordered list of numbers. Geometrically, a vector in ℝⁿ has a magnitude and a direction; it can represent a point, a displacement, or abstract data.

Vector: v = (v₁, v₂, …, vₙ)
Magnitude: ‖v‖ = √(v₁² + v₂² + … + vₙ²)
Dot product: u·v = u₁v₁ + u₂v₂ + … + uₙvₙ
Angle between vectors: cos θ = (u·v)/(‖u‖·‖v‖)

The magnitude formula generalizes the distance formula from coordinate geometry to n dimensions. The dot product links algebra to trigonometry through the angle between the two vectors.
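The formulas above can be checked numerically. A minimal sketch using NumPy, with two example vectors chosen for illustration (not taken from the text):

```python
import numpy as np

# Illustrative vectors in R^3 (hypothetical values).
u = np.array([1.0, 2.0, 2.0])
v = np.array([2.0, 0.0, 1.0])

# Magnitude: ||u|| = sqrt(u1^2 + u2^2 + u3^2).
norm_u = np.linalg.norm(u)           # sqrt(1 + 4 + 4) = 3.0

# Dot product: u.v = u1*v1 + u2*v2 + u3*v3.
dot = np.dot(u, v)                   # 2 + 0 + 2 = 4.0

# Angle via cos(theta) = (u.v) / (||u|| ||v||).
cos_theta = dot / (norm_u * np.linalg.norm(v))
theta = np.arccos(cos_theta)
```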

Matrices

A matrix is a rectangular array of numbers. An m×n matrix has m rows and n columns.

Identity matrix: I (1s on diagonal, 0s elsewhere)
Transpose: (Aᵀ)ᵢⱼ = Aⱼᵢ
Symmetric: A = Aᵀ

Special matrices: diagonal, upper/lower triangular, symmetric, orthogonal (QᵀQ = I). The identity matrix I is the multiplicative identity for matrices: AI = IA = A, just as 1·x = x for numbers.
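These definitions are easy to verify directly. A short sketch with illustrative matrices (the rotation matrix Q is one standard example of an orthogonal matrix, not from the text):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])

# Identity: 1s on the diagonal, 0s elsewhere; AI = A.
I = np.eye(2)
AI = A @ I

# Transpose swaps rows and columns: (A^T)_ij = A_ji.
At = A.T

# A + A^T is always symmetric: S = S^T.
S = A + At

# A 2D rotation is orthogonal: Q^T Q = I.
angle = np.pi / 4
Q = np.array([[np.cos(angle), -np.sin(angle)],
              [np.sin(angle),  np.cos(angle)]])
QtQ = Q.T @ Q
```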

Matrix Operations

  1. Addition: (A + B)ᵢⱼ = Aᵢⱼ + Bᵢⱼ (same dimensions required)
  2. Scalar multiplication: (cA)ᵢⱼ = c·Aᵢⱼ
  3. Matrix multiplication: (AB)ᵢⱼ = Σₖ Aᵢₖ·Bₖⱼ (inner dimensions must match)
  4. Inverse: AA⁻¹ = A⁻¹A = I (only for square, non-singular matrices)
Matrix multiplication is not commutative: AB ≠ BA in general. However, it is associative: (AB)C = A(BC). This algebraic structure is fundamental to linear transformations.
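Both properties can be demonstrated concretely. A minimal sketch with small example matrices chosen for illustration:

```python
import numpy as np

A = np.array([[1, 2], [3, 4]])
B = np.array([[0, 1], [1, 0]])
C = np.array([[2, 0], [0, 2]])

# Not commutative: AB and BA generally differ.
AB = A @ B          # [[2, 1], [4, 3]]
BA = B @ A          # [[3, 4], [1, 2]]

# Associative: (AB)C equals A(BC).
left = (A @ B) @ C
right = A @ (B @ C)
```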

Solving Linear Systems

Linear systems Ax = b can be solved via:

  • Gaussian elimination: Row reduce to echelon form
  • Matrix inverse: x = A⁻¹b (when A is invertible)
  • Cramer's rule: xᵢ = det(Aᵢ)/det(A)

These methods generalize two- and three-variable elimination to any number of variables. In regression analysis, the least-squares coefficients solve the normal equations XᵀXβ = Xᵀy, giving β = (XᵀX)⁻¹Xᵀy when XᵀX is invertible.
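The approaches above can be sketched with NumPy; in practice `np.linalg.solve` (which performs Gaussian elimination via LU factorization) is preferred over forming the inverse explicitly. The system and data values below are illustrative:

```python
import numpy as np

# A small invertible system Ax = b (illustrative values).
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
b = np.array([3.0, 5.0])

# Gaussian elimination (LU) under the hood.
x = np.linalg.solve(A, b)

# Same answer via x = A^{-1} b, valid since A is invertible.
x_inv = np.linalg.inv(A) @ b

# Least squares: solve the normal equations X^T X beta = X^T y.
# Points (0,1), (1,2), (2,3) lie exactly on y = 1 + x.
X = np.array([[1.0, 0.0], [1.0, 1.0], [1.0, 2.0]])
y = np.array([1.0, 2.0, 3.0])
beta = np.linalg.solve(X.T @ X, X.T @ y)
```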


Determinants

2×2, A = [a b; c d]: det(A) = ad − bc
3×3: cofactor expansion along any row/column
Properties: det(AB) = det(A)·det(B), det(Aᵀ) = det(A)

The determinant encodes whether a matrix is invertible (det ≠ 0), the scaling factor of the associated transformation, and the signed volume of the parallelepiped formed by column vectors.
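Each of these properties can be confirmed numerically. A brief sketch with illustrative 2×2 matrices:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
B = np.array([[2.0, 0.0],
              [1.0, 1.0]])

# 2x2 formula: det = ad - bc = 1*4 - 2*3 = -2.
det_A = np.linalg.det(A)

# Multiplicative: det(AB) = det(A) * det(B).
det_AB = np.linalg.det(A @ B)

# Transpose-invariant: det(A^T) = det(A).
det_At = np.linalg.det(A.T)

# det != 0 here, so A is invertible; |det| = 2 is the area
# scaling factor of the transformation A applies in 2D.
```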