In This Lesson: Vectors · Matrices · Matrix Operations · Solving Linear Systems · Determinants

Vectors

A vector is an ordered list of numbers representing magnitude and direction. In ℝⁿ, vectors can represent points, displacements, or abstract data.
Vector: v = (v₁, v₂, …, vₙ)
Magnitude: ‖v‖ = √(v₁² + v₂² + … + vₙ²)
Dot product: u·v = u₁v₁ + u₂v₂ + … + uₙvₙ
cos θ = (u·v)/(‖u‖·‖v‖)
The magnitude formula generalizes the distance formula from coordinate geometry. The dot product connects to trigonometric functions via the angle between vectors.
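The definitions above can be sketched in plain Python (no external libraries; the function names are illustrative):

```python
import math

def dot(u, v):
    # Dot product: u·v = u₁v₁ + u₂v₂ + … + uₙvₙ
    return sum(a * b for a, b in zip(u, v))

def magnitude(v):
    # ‖v‖ = √(v·v), the generalized distance formula
    return math.sqrt(dot(v, v))

def angle(u, v):
    # θ from cos θ = (u·v)/(‖u‖·‖v‖), in radians
    return math.acos(dot(u, v) / (magnitude(u) * magnitude(v)))

print(magnitude((3, 4)))        # 5.0
print(dot((3, 4), (4, 3)))      # 24
print(angle((1, 0), (0, 1)))    # π/2: perpendicular vectors
```

Note that a dot product of zero corresponds to cos θ = 0, i.e. perpendicular (orthogonal) vectors.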
Matrices

A matrix is a rectangular array of numbers. An m×n matrix has m rows and n columns.
Identity matrix: I (1s on diagonal, 0s elsewhere)
Transpose: (Aᵀ)ᵢⱼ = Aⱼᵢ
Symmetric: A = Aᵀ
Special matrices: diagonal, upper/lower triangular, symmetric, orthogonal (QᵀQ = I). The identity matrix I acts like 1 in matrix multiplication, just as 1 is the multiplicative identity for real numbers.
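Representing a matrix as a list of rows, the transpose, identity, and symmetry test can be sketched as (a minimal illustration, not a library API):

```python
def transpose(A):
    # (Aᵀ)ᵢⱼ = Aⱼᵢ: rows become columns
    return [list(row) for row in zip(*A)]

def identity(n):
    # n×n identity: 1s on the diagonal, 0s elsewhere
    return [[1 if i == j else 0 for j in range(n)] for i in range(n)]

def is_symmetric(A):
    # A is symmetric iff A = Aᵀ
    return A == transpose(A)

print(transpose([[1, 2, 3],
                 [4, 5, 6]]))   # [[1, 4], [2, 5], [3, 6]]
print(is_symmetric([[1, 2],
                    [2, 5]]))   # True
```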
Matrix Operations

Addition: (A + B)ᵢⱼ = Aᵢⱼ + Bᵢⱼ (same dimensions required)
Scalar multiplication: (cA)ᵢⱼ = c·Aᵢⱼ
Matrix multiplication: (AB)ᵢⱼ = Σₖ Aᵢₖ·Bₖⱼ (inner dimensions must match)
Inverse: AA⁻¹ = A⁻¹A = I (only for square, non-singular matrices)
Matrix multiplication is not commutative: AB ≠ BA in general. However, it is associative: (AB)C = A(BC). This algebraic structure is fundamental to linear transformations.
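A short sketch of the multiplication rule, verifying both properties on concrete 2×2 matrices (plain Python, illustrative names):

```python
def matmul(A, B):
    # (AB)ᵢⱼ = Σₖ Aᵢₖ·Bₖⱼ; inner dimensions must match
    assert len(A[0]) == len(B), "inner dimensions must match"
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))]
            for i in range(len(A))]

A = [[1, 2], [3, 4]]
B = [[0, 1], [1, 0]]
C = [[2, 0], [0, 2]]

print(matmul(A, B))                  # [[2, 1], [4, 3]]
print(matmul(B, A))                  # [[3, 4], [1, 2]] — AB ≠ BA
print(matmul(matmul(A, B), C) ==
      matmul(A, matmul(B, C)))       # True — (AB)C = A(BC)
```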
Solving Linear Systems Linear systems Ax = b can be solved via:
Gaussian elimination: row reduce the augmented matrix [A | b] to echelon form, then back-substitute
Matrix inverse: x = A⁻¹b (when A is invertible)
Cramer's rule: xᵢ = det(Aᵢ)/det(A), where Aᵢ is A with its i-th column replaced by b
These generalize the familiar methods for solving small systems of equations to any number of variables. In regression analysis, the normal equations give the least-squares coefficients β = (XᵀX)⁻¹Xᵀy.
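Gaussian elimination with back substitution can be sketched as follows (plain Python with partial pivoting for numerical stability; a teaching sketch, not production code):

```python
def solve(A, b):
    # Solve Ax = b by Gaussian elimination on the augmented matrix [A | b]
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]   # augmented copy
    for col in range(n):
        # Partial pivoting: swap in the row with the largest entry in this column
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        # Eliminate entries below the pivot to reach echelon form
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    # Back substitution from the last row upward
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

# 2x + y = 3, x + 3y = 4 has the solution x = 1, y = 1
print(solve([[2.0, 1.0], [1.0, 3.0]], [3.0, 4.0]))   # [1.0, 1.0]
```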
Determinants
2×2: det(A) = ad − bc
3×3: cofactor expansion along any row/column
Properties: det(AB) = det(A)·det(B), det(Aᵀ) = det(A)
The determinant encodes whether a matrix is invertible (det ≠ 0), the scaling factor of the associated transformation, and the signed volume of the parallelepiped formed by the column vectors.
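The cofactor expansion can be sketched recursively (expanding along the first row; fine for small matrices, though O(n!) and impractical beyond that):

```python
def det(A):
    # Cofactor expansion along the first row:
    # det(A) = Σⱼ (−1)ʲ · A₀ⱼ · det(minor of A₀ⱼ)
    n = len(A)
    if n == 1:
        return A[0][0]
    total = 0
    for j in range(n):
        # Minor: delete row 0 and column j
        minor = [row[:j] + row[j + 1:] for row in A[1:]]
        total += (-1) ** j * A[0][j] * det(minor)
    return total

print(det([[1, 2], [3, 4]]))   # -2, matching ad − bc
```

For a 2×2 matrix the recursion bottoms out immediately and reproduces ad − bc, and the property det(AB) = det(A)·det(B) can be checked numerically on examples.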