In This Lesson
- Point & Interval Estimation
- Hypothesis Testing
- Type I & II Errors
- Regression Analysis
- ANOVA & Chi-Square Tests

Point & Interval Estimation
Confidence interval for the mean:
x̄ ± z*(σ/√n) (known σ)
x̄ ± t*(s/√n) (unknown σ, use t-distribution)
A 95% CI means: if we repeated the sampling many times, about 95% of the resulting intervals would contain the true parameter. This frequentist interpretation connects to probability theory. The margin of error shrinks as n grows, reflecting the limit behavior of estimation.
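As a minimal sketch, the z-interval formula above can be computed directly. The sample values below (x̄ = 515, σ = 60, n = 36) are taken from the z-test example later in this lesson; z* = 1.96 is the standard 95% critical value.

```python
import math

def z_confidence_interval(xbar, sigma, n, z_star=1.96):
    """CI for the mean with known sigma: xbar ± z* · sigma/√n."""
    moe = z_star * sigma / math.sqrt(n)  # margin of error
    return (xbar - moe, xbar + moe)

lo, hi = z_confidence_interval(xbar=515, sigma=60, n=36)
# moe = 1.96 * 60/6 = 19.6 → interval (495.4, 534.6)
```

Swapping z* for the appropriate t critical value (and σ for s) gives the t-interval on the line above.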
Hypothesis Testing

The framework:
1. State hypotheses: H₀ (null) vs. Hₐ (alternative)
2. Choose α: significance level (usually 0.05)
3. Compute the test statistic: z = (x̄ − μ₀)/(σ/√n)
4. Find the p-value: P(observing data this extreme | H₀ is true)
5. Decision: if p-value < α, reject H₀

Example: One-sample z-test
Claim: μ = 500. Sample: n = 36, x̄ = 515, σ = 60
z = (515 − 500)/(60/√36) = 15/10 = 1.5
p-value ≈ 0.134 > 0.05 → fail to reject H₀
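The worked example can be reproduced with a short script. As a sketch, the standard normal CDF is built from math.erf so nothing beyond the standard library is needed:

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def one_sample_z_test(xbar, mu0, sigma, n):
    """Two-tailed one-sample z-test; returns (z, p_value)."""
    z = (xbar - mu0) / (sigma / math.sqrt(n))
    p = 2 * (1 - norm_cdf(abs(z)))
    return z, p

z, p = one_sample_z_test(xbar=515, mu0=500, sigma=60, n=36)
# z = 1.5, p ≈ 0.134 → fail to reject H0 at α = 0.05
```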
Type I & II Errors

- Type I error (α): rejecting H₀ when it is true (false positive)
- Type II error (β): failing to reject H₀ when it is false (false negative)
- Power = 1 − β: the probability of correctly rejecting a false H₀

Increasing sample size increases power without inflating α. These trade-offs are fundamental to experimental design.
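To see the sample-size effect concretely, here is a rough power calculation for a two-tailed z-test. The alternative mean μ₁ = 515 is a hypothetical choice (echoing the earlier example), and the negligible contribution of the far rejection tail is ignored:

```python
import math

def norm_cdf(x):
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def z_test_power(mu0, mu1, sigma, n, z_crit=1.96):
    """Approximate power of a two-tailed z-test at α = 0.05.

    shift = standardized distance between H0 and the true mean;
    the far-tail term is dropped for simplicity.
    """
    shift = abs(mu1 - mu0) / (sigma / math.sqrt(n))
    return norm_cdf(shift - z_crit)

# Power grows with n, while α stays fixed at 0.05:
for n in (36, 100, 400):
    print(n, round(z_test_power(500, 515, 60, n), 3))
```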
Regression Analysis
Simple linear regression: ŷ = b₀ + b₁x
b₁ = Σ(xᵢ − x̄)(yᵢ − ȳ)/Σ(xᵢ − x̄)²
b₀ = ȳ − b₁x̄
R² = 1 − SS_res/SS_tot
Regression finds the line of best fit by minimizing the sum of squared residuals (a calculus optimization). For multiple predictors, matrix algebra gives the closed-form solution: b = (XᵀX)⁻¹Xᵀy.
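The closed-form formulas for b₁, b₀, and R² above translate directly into code. A minimal sketch, using hypothetical data that lies exactly on y = 2x + 1:

```python
def fit_line(xs, ys):
    """OLS slope and intercept from the closed-form formulas."""
    n = len(xs)
    xbar = sum(xs) / n
    ybar = sum(ys) / n
    b1 = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
          / sum((x - xbar) ** 2 for x in xs))
    b0 = ybar - b1 * xbar
    return b0, b1

def r_squared(xs, ys, b0, b1):
    """R² = 1 − SS_res/SS_tot for the fitted line."""
    ybar = sum(ys) / len(ys)
    ss_res = sum((y - (b0 + b1 * x)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - ybar) ** 2 for y in ys)
    return 1 - ss_res / ss_tot

b0, b1 = fit_line([1, 2, 3, 4], [3, 5, 7, 9])
# b1 = 2.0, b0 = 1.0; R² = 1.0 for a perfect fit
```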
ANOVA & Chi-Square Tests

ANOVA (Analysis of Variance) tests whether means differ across groups; it generalizes the two-sample t-test to three or more groups. The F-statistic is MS_between / MS_within. Chi-square tests assess independence in contingency tables and goodness-of-fit for categorical data.
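The F-statistic can be computed from scratch as a sketch; the three groups below are hypothetical, with the third deliberately shifted so the between-group variance dominates:

```python
def one_way_anova_f(groups):
    """F = MS_between / MS_within for a list of sample groups."""
    k = len(groups)                              # number of groups
    n = sum(len(g) for g in groups)              # total observations
    grand = sum(sum(g) for g in groups) / n      # grand mean
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2
                     for g in groups)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g)
                    for g in groups)
    ms_between = ss_between / (k - 1)
    ms_within = ss_within / (n - k)
    return ms_between / ms_within

f_stat = one_way_anova_f([[1, 2, 3], [2, 3, 4], [10, 11, 12]])
# F ≈ 73: the shifted third group makes MS_between dwarf MS_within
```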
Modern statistics increasingly uses computational methods: bootstrapping, permutation tests, and Bayesian approaches. These still rely on the probability and descriptive foundations covered earlier, but add computational power to handle complex real-world data.
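As one illustration of these computational methods, a percentile bootstrap CI for the mean resamples the data instead of appealing to a normal formula. A minimal sketch with made-up data:

```python
import random

def bootstrap_ci(data, n_resamples=10_000, alpha=0.05, seed=0):
    """Percentile bootstrap CI for the mean.

    Resamples the data with replacement, then reads the CI off the
    empirical distribution of resampled means.
    """
    rng = random.Random(seed)  # seeded for reproducibility
    means = sorted(
        sum(rng.choices(data, k=len(data))) / len(data)
        for _ in range(n_resamples)
    )
    lo = means[int(n_resamples * (alpha / 2))]
    hi = means[int(n_resamples * (1 - alpha / 2))]
    return lo, hi

data = list(range(1, 21))          # hypothetical sample, mean 10.5
lo, hi = bootstrap_ci(data)        # 95% CI bracketing the sample mean
```

Unlike the z/t intervals earlier, this makes no normality assumption about the sampling distribution.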