6 Estimation & hypothesis testing

Concepts

  • The Central Limit Theorem in a nutshell: if multiple samples of the same size were drawn randomly and independently from a population, the sampling distribution of the means of those samples would be approximately normal, regardless of the shape of the underlying population distribution, as long as the sample size is large enough (see the simulation sketch after this list). This result forms the basis of most statistical inference, and hence of most statistical analysis you will come across. You will often hear the term normal approximation; it refers to inference based on the Central Limit Theorem.
  • The sampling distribution is a probability distribution; it is a theoretical construct, mathematically derived under the assumption of independent and random sampling. Most statistical analyses use a sampling distribution to make inferences, so for the inference to be valid we also need to assume that the units of observation in our sample are independent and arose from a random sampling process.
  • The mean and standard deviation (standard error) of the sampling distribution are calculated differently for different point estimates (you will see two examples in this section: the sample mean and the sample proportion), but the basic form of a confidence interval and the basic form of a test statistic for a hypothesis test are the same.
  • The confidence in a confidence interval is pre-determined by us: confidence is in the method, not in the result. Most confidence intervals take this form: estimate plus or minus a chosen number of standard errors. The chosen number is called the confidence coefficient and is selected to give the desired confidence level (a worked sketch follows this list).
  • A test statistic from a hypothesis test measures how many standard errors the observed point estimate lies from the value expected under the null hypothesis. If the observed value is too many standard errors from the expected value, we have evidence against the null hypothesis (falsification).
  • Almost all test statistics take this form:

        (sample statistic - null hypothesis value of parameter) / standard error

  • Confidence intervals and hypothesis tests are directly linked. Confidence intervals can be used to check the plausibility of claims about the parameter: if someone claims the parameter is equal to 62, and 62 is not within your confidence interval, then that claim is suspect. Here is another distinction between what a p-value and a confidence interval tell us: a p-value answers the question "How surprising is my sample?", while a confidence interval answers the question "What values of the population parameter would cause me not to be surprised by the sample?" (Both calculations are illustrated in the sketches after this list.)
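
To make the Central Limit Theorem concrete, here is a minimal simulation sketch in Python. The skewed (exponential) population, the sample size of 50, and the number of replicate samples are illustrative assumptions, not values from the course; the point is only that sample means from a clearly non-normal population pile up in an approximately normal shape.

    import numpy as np

    rng = np.random.default_rng(1)

    # Population: heavily right-skewed (exponential), so clearly non-normal.
    population = rng.exponential(scale=10, size=100_000)

    # Draw many independent random samples of the same size; record each mean.
    n = 50
    sample_means = np.array(
        [rng.choice(population, size=n).mean() for _ in range(5_000)])

    # CLT prediction: the sampling distribution is centred on the population
    # mean with standard error sd / sqrt(n), and is approximately normal.
    print(population.mean(), sample_means.mean())              # close
    print(population.std() / np.sqrt(n), sample_means.std())   # close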
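
The common confidence-interval form (estimate plus or minus confidence coefficient times standard error) can be sketched for both point estimates mentioned above. The sample values, the 42-out-of-120 counts, and the 95% level are illustrative assumptions:

    import numpy as np
    from scipy import stats

    # 95% confidence: the coefficient is the z value with 2.5% in each tail.
    z = stats.norm.ppf(0.975)   # approximately 1.96

    # Confidence interval for a mean (illustrative data).
    x = np.array([61, 64, 59, 62, 65, 63, 60, 66, 61, 62])
    se_mean = x.std(ddof=1) / np.sqrt(len(x))
    print(x.mean() - z * se_mean, x.mean() + z * se_mean)

    # Confidence interval for a proportion (illustrative: 42 successes in 120).
    p_hat, n = 42 / 120, 120
    se_prop = np.sqrt(p_hat * (1 - p_hat) / n)
    print(p_hat - z * se_prop, p_hat + z * se_prop)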
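
Finally, a sketch of the test-statistic form and its link to the confidence interval, using the normal approximation. The data are the same illustrative values as above, and the null value of 62 echoes the claim in the last bullet:

    import numpy as np
    from scipy import stats

    x = np.array([61, 64, 59, 62, 65, 63, 60, 66, 61, 62])
    null_value = 62

    # Test statistic: (sample statistic - null value) / standard error.
    se = x.std(ddof=1) / np.sqrt(len(x))
    z = (x.mean() - null_value) / se
    p_value = 2 * stats.norm.sf(abs(z))   # two-sided normal approximation
    print(z, p_value)

    # Duality: the null value sits inside the 95% CI exactly when p > 0.05.
    ci = (x.mean() - 1.96 * se, x.mean() + 1.96 * se)
    print(ci, ci[0] <= null_value <= ci[1], p_value > 0.05)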

Connections with other material

  • Most of the inference for simple comparisons covered in the next theme (Simple comparisons) uses the Central Limit Theorem and so shares the same principles and logic; only the point estimates and standard errors differ.
  • Inference for regression models is largely based on the normal distribution.
  • In the exploratory analysis theme, we also saw how the normal probability distribution can be used to work out reference ranges for individual observations in a sample. The application of the normal distribution is the same, except that here we estimate the sampling distribution of a point estimate (sample statistic), whereas a reference range estimates the distribution of individual values in the population (see the short sketch below).
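
A short sketch of that contrast, using the same illustrative sample as above: the reference range uses the standard deviation and describes where individual observations lie, while the confidence interval uses the standard error (sd divided by the square root of n) and describes plausible values of the population mean.

    import numpy as np

    x = np.array([61, 64, 59, 62, 65, 63, 60, 66, 61, 62])
    mean, sd, n = x.mean(), x.std(ddof=1), len(x)

    # Reference range: where roughly 95% of individual values are expected.
    print(mean - 1.96 * sd, mean + 1.96 * sd)

    # Confidence interval: plausible values for the population mean.
    print(mean - 1.96 * sd / np.sqrt(n), mean + 1.96 * sd / np.sqrt(n))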