In This Lesson
Point & Interval Estimation
Hypothesis Testing
Type I & II Errors
Regression Analysis
ANOVA & Chi-Square Tests

Point & Interval Estimation
Confidence interval for the mean:
x̄ ± z*(σ/√n) (known σ)
x̄ ± t*(s/√n) (unknown σ, use t-distribution)
A 95% CI means: if we repeated the sampling many times, about 95% of the intervals would contain the true parameter. This frequentist interpretation connects to probability theory. The margin of error shrinks as n grows, because the standard error σ/√n falls with the square root of the sample size.
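The known-σ interval above can be computed directly. This is a minimal sketch; the function name and the sample numbers (x̄ = 515, σ = 60, n = 36, reused from the z-test example later in the lesson) are illustrative choices, not part of any standard API.

```python
import math

# 95% confidence interval for the mean with known sigma (z* = 1.96).
def ci_mean_known_sigma(xbar, sigma, n, z_star=1.96):
    margin = z_star * sigma / math.sqrt(n)   # z* · σ/√n
    return (xbar - margin, xbar + margin)

lo, hi = ci_mean_known_sigma(515, 60, 36)    # margin = 1.96 * 60/6 = 19.6
print(lo, hi)
```

With unknown σ, the same shape applies but with s in place of σ and a t* critical value from the t-distribution with n − 1 degrees of freedom.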
Hypothesis Testing

The framework:
1. State hypotheses: H₀ (null) vs. Hₐ (alternative)
2. Choose α: significance level (usually 0.05)
3. Compute the test statistic: z = (x̄ − μ₀)/(σ/√n)
4. Find the p-value: P(observing data this extreme | H₀ is true)
5. Decision: if p-value < α, reject H₀

Example: One-sample z-test
Claim: μ = 500. Sample: n = 36, x̄ = 515, σ = 60
z = (515 − 500)/(60/√36) = 15/10 = 1.5
p-value ≈ 0.134 > 0.05 → fail to reject H₀
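The example above can be reproduced with the standard library's normal distribution; this sketch assumes a two-sided test, which matches the p-value of about 0.134 quoted in the lesson.

```python
import math
from statistics import NormalDist

# One-sample two-sided z-test with the example's numbers.
mu0, xbar, sigma, n = 500, 515, 60, 36
z = (xbar - mu0) / (sigma / math.sqrt(n))      # 15/10 = 1.5
p_value = 2 * (1 - NormalDist().cdf(abs(z)))   # ≈ 0.134
print(z, p_value)
```

Since 0.134 > 0.05, we fail to reject H₀, in agreement with the worked example.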
Type I & II Errors

Type I (α): Rejecting H₀ when it's true (false positive)
Type II (β): Failing to reject H₀ when it's false (false negative)
Power = 1 − β: Probability of correctly rejecting a false H₀

Increasing sample size increases power without inflating α. These trade-offs are fundamental to experimental design.
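The claim that larger n raises power at fixed α can be checked numerically. This is a sketch under an assumed scenario (μ₀ = 500, true μ = 515, σ = 60, two-sided α = 0.05); the `power` function is a hypothetical helper, not from the lesson.

```python
import math
from statistics import NormalDist

# Power of a two-sided z-test when the true mean is mu1 (assumed scenario).
def power(mu0, mu1, sigma, n, alpha=0.05):
    z_crit = NormalDist().inv_cdf(1 - alpha / 2)   # 1.96 for alpha = 0.05
    delta = (mu1 - mu0) / (sigma / math.sqrt(n))   # true effect in SE units
    nd = NormalDist()
    # Reject if |Z| > z_crit; under mu1, Z is shifted by delta.
    return nd.cdf(-z_crit + delta) + nd.cdf(-z_crit - delta)

for n in (36, 100, 200):
    print(n, round(power(500, 515, 60, n), 3))     # power rises with n
```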
Regression Analysis
Simple linear regression: ŷ = b₀ + b₁x
b₁ = Σ(xᵢ − x̄)(yᵢ − ȳ)/Σ(xᵢ − x̄)²
b₀ = ȳ − b₁x̄
R² = 1 − SS_res/SS_tot
Regression finds the line of best fit by calculus optimization: minimizing the sum of squared residuals (least squares). For multiple predictors, matrix algebra gives the solution: b = (XᵀX)⁻¹Xᵀy.
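The slope, intercept, and R² formulas above translate directly into code. This is a minimal sketch on a small made-up dataset (the x and y values are illustrative, not from the lesson).

```python
# Simple linear regression from the closed-form formulas (no libraries).
xs = [1, 2, 3, 4, 5]
ys = [2.0, 4.1, 5.9, 8.2, 9.8]   # roughly y = 2x, with small noise

n = len(xs)
xbar = sum(xs) / n
ybar = sum(ys) / n

# b1 = Σ(xi − x̄)(yi − ȳ) / Σ(xi − x̄)²,  b0 = ȳ − b1·x̄
b1 = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / \
     sum((x - xbar) ** 2 for x in xs)
b0 = ybar - b1 * xbar

# R² = 1 − SS_res/SS_tot
ss_res = sum((y - (b0 + b1 * x)) ** 2 for x, y in zip(xs, ys))
ss_tot = sum((y - ybar) ** 2 for y in ys)
r2 = 1 - ss_res / ss_tot
print(b1, b0, r2)
```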
ANOVA & Chi-Square Tests

ANOVA (Analysis of Variance) tests whether means differ across groups; it generalizes the t-test. The F-statistic = MS_between / MS_within. Chi-Square tests independence in contingency tables and goodness-of-fit for categorical data.
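The F-statistic definition can be computed by hand for a one-way layout. This sketch uses made-up groups purely for illustration.

```python
# One-way ANOVA F-statistic from sums of squares (made-up data).
groups = [[4.0, 5.0, 6.0], [7.0, 8.0, 9.0], [5.0, 6.0, 7.0]]

k = len(groups)
n_total = sum(len(g) for g in groups)
grand_mean = sum(sum(g) for g in groups) / n_total

# Between-group: weighted squared deviations of group means from grand mean.
ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
# Within-group: squared deviations of observations from their group mean.
ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)

ms_between = ss_between / (k - 1)        # df = k − 1
ms_within = ss_within / (n_total - k)    # df = N − k
f_stat = ms_between / ms_within
print(f_stat)
```

A large F (relative to the F-distribution with k − 1 and N − k degrees of freedom) is evidence that at least one group mean differs.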
Modern statistics increasingly uses computational methods: bootstrapping, permutation tests, and Bayesian approaches. These still rely on the probability and descriptive foundations covered earlier, but add computational power to handle complex real-world data.
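As one concrete taste of these computational methods, here is a percentile-bootstrap confidence interval for a mean. The dataset, resample count, and percentile approach are all assumptions for illustration, not prescriptions from this lesson.

```python
import random

# Percentile bootstrap: resample with replacement, take the middle 95%
# of the resampled means as an approximate 95% CI (made-up data).
random.seed(0)
data = [2.1, 2.5, 3.0, 2.8, 3.6, 2.2, 2.9, 3.1, 2.7, 3.3]

boot_means = []
for _ in range(5000):
    resample = [random.choice(data) for _ in data]
    boot_means.append(sum(resample) / len(resample))

boot_means.sort()
lo = boot_means[int(0.025 * len(boot_means))]
hi = boot_means[int(0.975 * len(boot_means))]
print(lo, hi)
```

Unlike the z- and t-intervals earlier, this requires no normality assumption; the trade-off is computation in place of a closed-form formula.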