20  Sample Distributions

The population is often very hard to get complete data on. (That is why, in the previous exercise, I said that we rarely have data on our entire population.) Therefore, we often rely on what are called samples for our statistical analyses. These samples are ideally a randomly selected subset of our population. For example, when we poll Black Americans’ attitudes, we usually aren’t able to ask every person who identifies as Black in the United States. Instead, we randomly select people who are part of our population and ask them our questions. In practice, we often collect one sample, but we can certainly do this multiple times if we have the resources.

Why do we care that our sample is a randomly selected subset? As you will see later in the term, we care because we do not want to introduce what is referred to as systematic error (or systematic bias) into our sample. We want to reduce systematic error in how we collect our data because it makes our results less accurate (more biased). Systematic error reduces accuracy because it makes the sample we’ve collected look different from our population, on average.

How is a sample randomly selected? There is no single best way to do this; it is actually a large area of research. One way to get a random sample from our population is to be extremely careful not to select only the observations that are convenient to collect data on. For example, if we are interested in examining interstate conflicts, our population would be every country in the world. But suppose it would be too costly to get data on all interstate wars that every country engaged in, so we decide to pick a subset of countries. A random subset would mean making a list of countries and then randomly choosing a certain number (our sample size) of countries from that list. A non-random subset would mean excluding some countries from our list before we select our sample, perhaps because they are reclusive and heavily restrict access to information about themselves. While this is a simple example, the random subset was chosen completely at random from our population, without any systematic conditions or limitations. The non-random subset imposed a pre-condition on which countries we could choose before we even drew a sample. That pre-condition introduces systematic error: our understanding of interstate conflict becomes biased because we only have information on countries that share information about themselves.
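The contrast between the two selection procedures can be sketched in a few lines of Python. This is a minimal illustration, not part of the chapter's own materials: the country list and the set of "reclusive" countries are made up purely for demonstration.

```python
import random

# Hypothetical population of countries (illustrative, not exhaustive).
population = ["United States", "Russia", "China", "Brazil", "North Korea",
              "Eritrea", "Turkmenistan", "France", "India", "Nigeria"]

random.seed(42)  # for reproducibility

# Random subset: every country on the list has an equal chance of selection.
random_sample = random.sample(population, k=4)

# Non-random subset: reclusive countries are dropped *before* sampling,
# so they have zero chance of selection. That is the systematic pre-condition.
reclusive = {"North Korea", "Eritrea", "Turkmenistan"}
restricted_list = [c for c in population if c not in reclusive]
nonrandom_sample = random.sample(restricted_list, k=4)

print(random_sample)     # any country can appear
print(nonrandom_sample)  # can never contain a reclusive country
```

Even though the second draw still uses `random.sample`, the pre-filtered list means the resulting sample can never represent the excluded countries, no matter how many times we draw.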

Another source of systematic error in a sample depends not just on how we select observations from our population, but also on how many. Going back to the interstate conflict example, say that we make a list of all the countries in the world to construct our sample, and that we are committed to randomly selecting countries from that list. But suppose we limit our sample to only one country. Do you think there might be some problems there? Say that we select a country like the United States and that represents our entire sample. Do we think that looking only at interstate conflicts the United States has been involved in will give us a good understanding of all countries and their interstate conflicts? Probably not. So say instead we increase our sample size to two countries. We still randomly select these two countries and end up with Russia and the United States. Do we think these two countries will give us an accurate understanding of the population at large? No, probably not. The lesson here is that the fewer observations you have in your sample (the smaller your sample size), the more influence each individual observation has on your analysis.
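The sample-size point can be demonstrated numerically. The sketch below uses a made-up population of conflict counts (the numbers are invented for illustration) and shows how the sample mean bounces around the population mean when the sample is small.

```python
import random
import statistics

random.seed(1)

# Hypothetical population: each of 200 countries' count of interstate
# conflicts (made-up numbers purely for illustration).
population = [random.randint(0, 30) for _ in range(200)]
pop_mean = statistics.mean(population)

# The smaller the sample, the more each single observation can pull
# the sample mean away from the population mean.
for n in (1, 2, 10, 50):
    sample = random.sample(population, k=n)
    print(f"n={n:2d}  sample mean={statistics.mean(sample):5.1f}  "
          f"population mean={pop_mean:.1f}")
```

With n = 1, the "sample mean" is just whatever single country happened to be drawn; by n = 50, no single country can move the mean very far.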

Let’s visualize these problems. To do so, I am going to draw my samples from the exact same data that I used to visualize the population distributions.

Let’s draw a single sample randomly and non-randomly and see how they differ. At the same time, you can change the sample size.

20.1 Random sample

[Figure 20.1: Random sample distributions. Panels: (a) Normal distribution, (b) Poisson distribution, (c) Binomial distribution.]

20.2 Non-random samples

[Figure 20.2: Grabbing a non-random sample. Panels: (a) Normal distribution, (b) Binomial distribution, (c) Poisson distribution.]

We see a couple of lessons here:

  1. The random samples look much more like the population distribution than the non-random samples.
  2. Even with random sampling, a smaller sample looks less like the original population distribution than a larger sample drawn the same way.
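Both lessons can be checked numerically. The sketch below generates its own normal "population" (a stand-in for the data behind the figures, not the chapter's actual data) and compares a random sample against a non-random one that keeps only "convenient" above-average observations.

```python
import random
import statistics

random.seed(2024)

# Stand-in population: 10,000 draws from a normal distribution
# with mean 0 and standard deviation 1.
population = [random.gauss(0, 1) for _ in range(10_000)]

def summarize(values):
    """Return (mean, standard deviation), rounded for display."""
    return round(statistics.mean(values), 2), round(statistics.stdev(values), 2)

# Random sample: resembles the population on average.
random_sample = random.sample(population, k=500)

# Non-random sample: only observations above zero are "convenient"
# to collect, a systematic restriction like excluding reclusive countries.
nonrandom_sample = [x for x in population if x > 0][:500]

print(summarize(population))        # close to (0, 1)
print(summarize(random_sample))     # close to (0, 1)
print(summarize(nonrandom_sample))  # mean shifted well above 0
```

The random sample's summary statistics track the population's, while the non-random sample's mean is pushed upward because an entire half of the population could never be selected.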

So what makes a good sample? 1. It should ideally be random (this is often not the case and this is why our statistical tools in the social sciences get much more complex) 2. You should ideally have a pretty large sample. How large? It is extremely dependent on the context. But often a good rule of thumb is somewhere above 20 observations (n = 20).

If I am collecting a random sample, what do I do if my sample doesn’t look exactly like what I’d expect my population distribution to look like? That is actually okay! A random sample will usually look at least somewhat different from the population distribution. Why, then, is it still valid to use that sample? Do I just keep randomly collecting samples until one looks like my population? NO!
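A quick numerical sketch of this point, again with a made-up population rather than the chapter's data: every random sample's mean differs from the population mean, yet the samples cluster around it rather than drifting systematically away.

```python
import random
import statistics

random.seed(7)

# Stand-in population: 5,000 draws from a normal distribution
# with mean 50 and standard deviation 10.
population = [random.gauss(50, 10) for _ in range(5_000)]

# Draw 100 independent random samples of size 30. No single sample
# mean matches the population mean exactly, but none is biased.
sample_means = [statistics.mean(random.sample(population, k=30))
                for _ in range(100)]

print(round(statistics.mean(population), 1))                     # near 50
print(round(min(sample_means), 1), round(max(sample_means), 1))  # a visible spread
print(round(statistics.mean(sample_means), 1))                   # near 50
```

Each individual sample "looks wrong" in some way, but collectively they are centered on the truth; that is exactly what makes any one of them usable.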

The next exercise is designed to help you visualize why any single random sample you collect does not need to look like what you’d expect from your population in order to be a useful sample.