Vectors & Matrices

The fundamental building blocks of linear algebra and modern computation.

Vectors


A vector is an ordered list of numbers representing magnitude and direction. In ℝⁿ, vectors can represent points, displacements, or abstract data.

Vector: v = (v₁, v₂, …, vₙ)
Magnitude: ‖v‖ = √(v₁² + v₂² + … + vₙ²)
Dot product: u·v = u₁v₁ + u₂v₂ + … + uₙvₙ
cos θ = (u·v)/(‖u‖·‖v‖)

The magnitude formula generalizes the distance formula from coordinate geometry to n dimensions. The dot product links algebra to geometry: through the cosine formula above, it determines the angle between two vectors, and in particular u·v = 0 exactly when u and v are perpendicular.
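The three formulas above can be sketched in plain Python (no external libraries); the function names here are illustrative, not from any particular library:

```python
import math

def dot(u, v):
    # u·v = u₁v₁ + u₂v₂ + … + uₙvₙ
    return sum(a * b for a, b in zip(u, v))

def magnitude(v):
    # ‖v‖ = √(v·v), the generalized distance formula
    return math.sqrt(dot(v, v))

def angle(u, v):
    # θ recovered from cos θ = (u·v)/(‖u‖·‖v‖)
    return math.acos(dot(u, v) / (magnitude(u) * magnitude(v)))

print(magnitude((3, 4)))        # 5.0 — the classic 3-4-5 triangle
print(dot((3, 4), (4, 3)))      # 24
print(angle((1, 0), (0, 1)))    # ≈ 1.5708 (π/2): perpendicular vectors
```

Note that the perpendicular case returns π/2 because the dot product of (1, 0) and (0, 1) is zero.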

Matrices

A matrix is a rectangular array of numbers. An m×n matrix has m rows and n columns.

Identity matrix: I (1s on diagonal, 0s elsewhere)
Transpose: (Aᵀ)ᵢⱼ = Aⱼᵢ
Symmetric: A = Aᵀ

Special matrices: diagonal, upper/lower triangular, symmetric, orthogonal (QᵀQ = I). The identity matrix I is the multiplicative identity for matrices: AI = IA = A, just as 1·x = x for numbers.
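A minimal sketch of the transpose and identity definitions, using nested lists for matrices (helper names are illustrative):

```python
def transpose(A):
    # (Aᵀ)ᵢⱼ = Aⱼᵢ — zip(*A) pairs up the columns of A
    return [list(col) for col in zip(*A)]

def identity(n):
    # 1s on the diagonal (i == j), 0s elsewhere
    return [[1 if i == j else 0 for j in range(n)] for i in range(n)]

A = [[1, 2], [2, 3]]
print(transpose(A) == A)   # True — A is symmetric (A = Aᵀ)
print(identity(3))         # [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
```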

Matrix Operations

  1. Addition: (A + B)ᵢⱼ = Aᵢⱼ + Bᵢⱼ (same dimensions required)
  2. Scalar multiplication: (cA)ᵢⱼ = c·Aᵢⱼ
  3. Matrix multiplication: (AB)ᵢⱼ = Σₖ Aᵢₖ·Bₖⱼ (inner dimensions must match)
  4. Inverse: AA⁻¹ = A⁻¹A = I (only for square, non-singular matrices)

Matrix multiplication is not commutative: AB ≠ BA in general. However, it is associative: (AB)C = A(BC). This structure matters because matrix multiplication represents composition of linear transformations.
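The multiplication formula and the failure of commutativity can be demonstrated directly (a hand-rolled sketch, not a library routine):

```python
def matmul(A, B):
    # (AB)ᵢⱼ = Σₖ Aᵢₖ·Bₖⱼ — inner dimensions must match: cols(A) == rows(B)
    n, m, p = len(A), len(B), len(B[0])
    assert len(A[0]) == m, "inner dimensions must match"
    return [[sum(A[i][k] * B[k][j] for k in range(m))
             for j in range(p)] for i in range(n)]

A = [[1, 2], [0, 1]]
B = [[1, 0], [3, 1]]
print(matmul(A, B))   # [[7, 2], [3, 1]]
print(matmul(B, A))   # [[1, 2], [3, 7]] — AB ≠ BA
```

Swapping the operands changes the result, which is exactly the non-commutativity the text describes.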

Solving Linear Systems

Linear systems Ax = b can be solved via:

  • Gaussian elimination: Row reduce to echelon form
  • Matrix inverse: x = A⁻¹b (when A is invertible)
  • Cramer's rule: xᵢ = det(Aᵢ)/det(A)

These generalize the familiar elimination methods for two or three equations to any number of variables. In regression analysis, for example, the least-squares coefficients solve the normal equations, giving β̂ = (XᵀX)⁻¹Xᵀy.
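Gaussian elimination, the first method above, can be sketched in plain Python; this version uses partial pivoting (swapping in the row with the largest entry in the pivot column) for numerical stability, an assumption beyond the text's bare description:

```python
def solve(A, b):
    # Solve Ax = b by row-reducing the augmented matrix [A | b]
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        # partial pivoting: bring the largest entry in this column to the top
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        # eliminate entries below the pivot
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    # back-substitution from the last row upward
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

print(solve([[2, 1], [1, 3]], [5, 10]))   # [1.0, 3.0]
```

Here 2x + y = 5 and x + 3y = 10 give x = 1, y = 3, which the elimination recovers.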

Determinants

2×2: for A = [[a, b], [c, d]], det(A) = ad − bc
3×3: cofactor expansion along any row/column
Properties: det(AB) = det(A)·det(B), det(Aᵀ) = det(A)

The determinant encodes whether a matrix is invertible (det ≠ 0), the scaling factor of the associated transformation, and the signed volume of the parallelepiped formed by column vectors.
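Cofactor expansion along the first row, applied recursively, handles any square size and reproduces the 2×2 formula as its base step (a sketch for exposition; practical code would use LU factorization instead, since this is O(n!)):

```python
def det(A):
    # recursive cofactor expansion along the first row
    n = len(A)
    if n == 1:
        return A[0][0]
    total = 0
    for j in range(n):
        # minor: delete row 0 and column j
        minor = [row[:j] + row[j + 1:] for row in A[1:]]
        total += (-1) ** j * A[0][j] * det(minor)
    return total

A = [[1, 2], [3, 4]]
B = [[2, 0], [0, 3]]
print(det(A))   # -2, i.e. ad − bc = 1·4 − 2·3
AB = [[2, 6], [6, 12]]                 # the product AB, computed by hand
print(det(AB) == det(A) * det(B))      # True — the property det(AB) = det(A)·det(B)
```

The 2×2 result being nonzero (−2) tells us A is invertible, and its absolute value is the area-scaling factor of the transformation A represents.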