21  Sampling Distributions

Why is it okay if a random sample doesn’t match the population distribution? That is, why is it possible, and not all too worrisome, if we collect a random sample but it doesn’t necessarily look the way we’d expect the population to look? It is important to note here that if we have a sample that looks different from what we’d expect of our population and we collected it non-randomly, that is a problem. The important part here is that the sample was intended to be random.

Say we are able to take an infinite number of random samples. For each of those samples, we calculate its mean. What we should expect, if the samples are truly random, is that most of them, on average, will have a mean similar to that of the population. Yes, some samples may have a mean quite a bit different from the population’s. But we know that, on average, we should do okay at producing a sample whose mean is similar to that of the population. This idea is referred to as the central limit theorem.

We can’t control which of these samples we get, unfortunately. So, in science, we often rely on the idea that, on average, we should be right.
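
Here is a minimal sketch of that idea in Python (assuming NumPy is available; the population, its parameters, and the sample sizes below are made up purely for illustration): draw many random samples from a skewed population, take each sample’s mean, and compare the average of those means to the population mean.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical population: heavily skewed, so any single sample can easily look "off".
population = rng.exponential(scale=2.0, size=100_000)

# Draw many random samples of size 100 (without replacement) and keep each sample's mean.
sample_means = [
    rng.choice(population, size=100, replace=False).mean()
    for _ in range(5_000)
]

print(f"Population mean:         {population.mean():.3f}")
print(f"Average of sample means: {np.mean(sample_means):.3f}")
# Any single sample mean can be off, but the average across samples is close.
```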

Let’s visualize this process.

In the previous exercise, I took one sample (which is what I usually do in practice). But I could, in principle, take a massive, even infinite, number of samples. Let’s visualize what it might look like if I were able to take a very large number of samples, calculate the mean of each one, and then plot the distribution of those means. This distribution is called the sampling distribution. According to the central limit theorem, I should expect the average random sample’s mean to approximate the mean of the population.

Note
  • Population distribution (theoretical): the distribution of a given variable in our population.
  • Sample distribution (real): the distribution of a given variable in a single sample (random or non-random).
  • Sampling distribution (theoretical): the distribution of a statistic (here, the mean) calculated across many samples, in theory an infinite number of them. If the samples are random, the average sample mean approximates the mean of the population distribution.

21.1 Random samples

Let me take multiple samples. For each sample, I am going to take the mean of the variable of interest for that sample. I am then going to plot each of those calculated means. This will give me my sampling distribution.

Let’s see what these sampling distributions look like at different sample sizes and when I take more samples.
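
Below is a rough sketch, in Python, of how sampling distributions like the ones in Figure 21.1 could be simulated. The population shapes match the panel titles, but the specific parameters, sample sizes, and number of samples are my own assumptions, not necessarily the ones used for the figure.

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)

# Assumed populations, one per panel; the real parameters may differ.
populations = {
    "Normal":   rng.normal(loc=5, scale=2, size=100_000),
    "Poisson":  rng.poisson(lam=3, size=100_000),
    "Binomial": rng.binomial(n=10, p=0.3, size=100_000),
}

def sampling_distribution(pop, n, n_samples, rng):
    """Means of `n_samples` random samples (with replacement) of size `n` drawn from `pop`."""
    return [rng.choice(pop, size=n).mean() for _ in range(n_samples)]

fig, axes = plt.subplots(1, 3, figsize=(12, 3))
for ax, (name, pop) in zip(axes, populations.items()):
    means = sampling_distribution(pop, n=50, n_samples=2_000, rng=rng)
    ax.hist(means, bins=40)
    ax.axvline(pop.mean(), color="red")  # population mean, for reference
    ax.set_title(name)
plt.tight_layout()
plt.show()
```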

Figure 21.1: Random sample distributions. Panels: (a) Normal distribution, (b) Poisson distribution, (c) Binomial distribution.

You may have noticed a couple of things:

  1. The more samples I take, the closer the average sample mean gets to the population mean.
  2. The larger each sample is, the closer the average sample mean is to the population mean, compared to when my samples are smaller.
  3. All of my sampling distributions look more and more like a normal distribution, regardless of the population distribution for that variable.

What gives with my third observation? The central limit theorem does not say that I am recreating my population by collecting more samples. It just tells me that the average sample will have a mean close to the mean of my population for that particular variable. The sampling and population distributions are different things. I could go into the math as to why the two distributions do not match in shape and spread, but all that really matters here is that their central tendencies approximate one another.
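
As a quick numeric check of that point (same Python/NumPy setup as the earlier sketches, again with an assumed population): the sampling distribution is centered near the population mean, but its spread is much narrower, roughly the population standard deviation divided by the square root of the sample size.

```python
import numpy as np

rng = np.random.default_rng(7)

# Assumed skewed population, for illustration only.
population = rng.exponential(scale=2.0, size=100_000)

n = 100  # size of each random sample
sample_means = [rng.choice(population, size=n).mean() for _ in range(5_000)]

print(f"Population mean / SD:      {population.mean():.3f} / {population.std():.3f}")
print(f"Sampling dist. mean / SD:  {np.mean(sample_means):.3f} / {np.std(sample_means):.3f}")
print(f"Population SD / sqrt(n):   {population.std() / np.sqrt(n):.3f}")
```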

What this demonstrates is that if I grab one random sample, I should expect, on average, that its mean will match my population distribution’s mean. This reaffirms the idea that random samples are extremely useful for understanding our population with only a subset of it. It gives me confidence that I do not have to collect data on my entire population.

What does this all look like when I take non-random samples? Does the central limit theorem fix the problems coming from any single non-random sample that I’ve collected (like the one from the previous exercise)? A bit of a preview: no, it doesn’t fix your problems.
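
To make that concrete, here is a hedged sketch of one way a non-random sample could arise; the actual selection mechanism in the previous exercise may well be different. Suppose units with larger values are much more likely to be selected. Then no matter how many samples I take, the average sample mean stays above the population mean.

```python
import numpy as np

rng = np.random.default_rng(3)

# Assumed population, for illustration only.
population = rng.normal(loc=5, scale=2, size=100_000)

# Hypothetical non-random selection: units with larger values are far more
# likely to end up in the sample.
weights = np.exp(population)
weights /= weights.sum()

biased_means = [
    rng.choice(population, size=100, p=weights).mean() for _ in range(2_000)
]

print(f"Population mean:                {population.mean():.3f}")
print(f"Average non-random sample mean: {np.mean(biased_means):.3f}")  # well above the population mean
```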

Figure 21.2: Non-random sample distributions. Panels: (a) Normal distribution, (b) Poisson distribution, (c) Binomial distribution.

As we can see, these look off: no matter how many non-random samples I take, the average sample mean does not approximate the population mean.