Make evidence-based conclusions about populations using sample data.
A 95% CI means: if we repeated the sampling many times, about 95% of the intervals would contain the true parameter. This frequentist interpretation connects to probability theory. The margin of error shrinks in proportion to 1/√n as n grows, reflecting the limiting behavior of the estimator.
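The 1/√n shrinkage can be seen directly in a minimal sketch (standard library only; the sample values x̄ = 515 and σ = 60 are illustrative assumptions, not from a real dataset):

```python
import math

def confidence_interval(xbar, sigma, n, z=1.96):
    """95% CI for a population mean with known sigma (z-critical ≈ 1.96)."""
    margin = z * sigma / math.sqrt(n)   # margin of error shrinks like 1/sqrt(n)
    return xbar - margin, xbar + margin

# Hypothetical sample: mean 515, known sigma 60
for n in (36, 144, 576):                # quadrupling n halves the margin
    lo, hi = confidence_interval(515, 60, n)
    print(f"n={n:4d}: ({lo:.2f}, {hi:.2f}), margin = {(hi - lo) / 2:.2f}")
```

Each fourfold increase in n halves the interval's half-width, which is exactly the 1/√n behavior described above.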
The framework:
Claim (H₀): μ = 500, against the two-sided alternative H₁: μ ≠ 500. Sample: n = 36, x̄ = 515, σ = 60
z = (515 − 500)/(60/√36) = 15/10 = 1.5
p-value ≈ 0.134 > 0.05 → fail to reject H₀
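The worked example above can be checked numerically with only the standard library, using the error function for the normal CDF:

```python
import math

def phi(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

# Worked example from the text: H0: mu = 500, n = 36, xbar = 515, sigma = 60
z = (515 - 500) / (60 / math.sqrt(36))   # = 15 / 10 = 1.5
p_value = 2 * (1 - phi(z))               # two-tailed p-value
print(z, round(p_value, 3))              # 1.5, ~0.134 -> fail to reject H0
```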
Increasing the sample size increases power without inflating α; the trade-offs among α, power, effect size, and n are fundamental to experimental design.
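A sketch of the power-versus-n trade-off, using the standard formula for the power of a two-sided z-test at α = 0.05 (the true shift of 15 with σ = 60 is an assumed effect size for illustration):

```python
import math

def phi(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def power_two_sided_z(delta, sigma, n, z_crit=1.959964):
    """Power of a two-sided z-test at alpha = 0.05 when the true mean
    differs from the null by delta."""
    shift = delta * math.sqrt(n) / sigma
    return phi(shift - z_crit) + phi(-shift - z_crit)

# Assumed effect: true mean 515 vs. H0 mean 500, sigma = 60
for n in (36, 100, 225):
    print(n, round(power_two_sided_z(15, 60, n), 3))
```

Power climbs toward 1 as n grows while α stays fixed at 0.05, which is the trade-off the text describes.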
Regression finds the line of best fit using calculus optimization (minimizing the sum of squared residuals). For multiple predictors, matrix algebra gives the solution: b = (XᵀX)⁻¹Xᵀy.
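The normal-equations solution b = (XᵀX)⁻¹Xᵀy can be demonstrated with a small NumPy sketch (the toy data are assumed; in practice `np.linalg.solve` is preferred over forming the explicit inverse):

```python
import numpy as np

# Normal-equations solution b = (X^T X)^{-1} X^T y, via solve() for stability.
# Toy data (assumed): y = 2 + 3*x1 - 1*x2, with a column of ones for the intercept.
x1 = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
x2 = np.array([1.0, 0.0, 1.0, 0.0, 1.0])
y = 2 + 3 * x1 - 1 * x2

X = np.column_stack([np.ones_like(x1), x1, x2])   # design matrix with intercept
b = np.linalg.solve(X.T @ X, X.T @ y)             # solves (X^T X) b = X^T y
print(b)                                          # -> approximately [2, 3, -1]
```

Because the toy response lies exactly in the column space of X, the recovered coefficients match the generating values, which makes the correctness of the matrix formula easy to verify.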
ANOVA (Analysis of Variance) tests whether means differ across groups — it generalizes the t-test. The F-statistic = MS_between / MS_within. Chi-Square tests independence in contingency tables and goodness-of-fit for categorical data.
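The F-statistic can be computed straight from its definition with no libraries (the three groups below are illustrative, not from the text):

```python
# One-way ANOVA F-statistic computed from its definition: MS_between / MS_within.
groups = [[1, 2, 3], [2, 3, 4], [5, 6, 7]]

k = len(groups)                               # number of groups
N = sum(len(g) for g in groups)               # total observations
grand_mean = sum(sum(g) for g in groups) / N

ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
ss_within = sum((x - sum(g) / len(g)) ** 2 for g in groups for x in g)

ms_between = ss_between / (k - 1)             # df_between = k - 1
ms_within = ss_within / (N - k)               # df_within = N - k
F = ms_between / ms_within
print(F)                                      # -> 13.0 for these groups
```

A large F (here, between-group variability 13 times the within-group variability) is evidence that at least one group mean differs.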