Standard Deviation vs. Standard Error: Don’t Get Lost in the Statistical Maze!

Prajwal Srinivas
2 min read · Sep 7, 2022


Standard deviation and standard error are terms we come across regularly in the statistics world, yet they are often misinterpreted. Let's work through the differences between the two and how each is used; hopefully, by the end of this article, you will understand the intuition behind both terms.

Let's get the commonalities between the two out of the way first. Both statistics measure variability, and a high value of either indicates a large spread. That is where the similarities end.

Now to highlight the key differentiators between the two:

Standard deviation describes the variability of the individual data points within a particular dataset, while standard error describes the spread of a sample statistic (such as the mean) across repeated samples drawn from the same population.

We can understand this better with the help of an example. Consider the IQs of the people in a particular country. If we were to plot these data points, they would be approximately normally distributed. Now, if we drew a random sample of 10 people from this population and computed their standard deviation, it would tell us how spread out the individual IQs of those 10 people are around the sample mean.
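Here is a minimal sketch of that first step. It assumes the conventional IQ scale (population mean 100, standard deviation 15) and simply simulates 10 made-up IQ scores before computing their sample mean and sample standard deviation:

```python
import numpy as np

# Simulate the IQs of a hypothetical random sample of 10 people,
# assuming the conventional IQ scale: mean 100, standard deviation 15.
rng = np.random.default_rng(seed=42)
sample = rng.normal(loc=100, scale=15, size=10)

sample_mean = sample.mean()
sample_sd = sample.std(ddof=1)   # ddof=1 -> sample standard deviation (divides by n - 1)

print(f"Sample mean:               {sample_mean:.1f}")
print(f"Sample standard deviation: {sample_sd:.1f}")
```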

Now imagine that instead of drawing one random sample, we drew 10,000 random samples (each of size 10) and calculated the mean of each sample (refer to Parameter and Statistic in Statistics | by Jeeva Selvaraju | Medium). If we then plotted those 10,000 sample means (the sample statistic), they would also be normally distributed (refer to What exactly is Central Limit Theorem? | by Gaurav Arora | Medium), and the spread of that distribution of sample means is what we call the standard error.
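If you want to see this concretely, here is a short simulation sketch under the same assumed Normal(100, 15) population: it draws 10,000 samples of size 10, records each sample's mean, and measures the spread of those means.

```python
import numpy as np

rng = np.random.default_rng(seed=0)
population_mean, population_sd = 100, 15     # assumed IQ-like population
n_samples, sample_size = 10_000, 10

# Draw 10,000 samples of size 10 and record each sample's mean
samples = rng.normal(population_mean, population_sd, size=(n_samples, sample_size))
sample_means = samples.mean(axis=1)

# The spread of these 10,000 sample means is the standard error of the mean
empirical_se = sample_means.std(ddof=1)
print(f"Empirical standard error of the mean: {empirical_se:.2f}")
```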

Credits: https://cdn.scribbr.com/wp-content/uploads/2020/12/standard-error-2-768x596.png

To summarize: standard deviation measures variation within a sample, assessing how far individual data points fall from the sample mean, while standard error measures variability between samples and is used to gauge how far a sample statistic is likely to fall from the population parameter.

Also, one interesting thing to note: as the sample size increases, the standard deviation doesn't tend to change, while the standard error tends to decrease. Can you think of why this happens? That's a little something to ponder as we come to the end of this article (you can also check the claim empirically with the short simulation below). I hope this provided a simple explanation of a much heard-of topic. Happy Learning!
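As a minimal sketch, again assuming the same Normal(100, 15) population as above, the loop below computes the standard deviation of one sample and the standard error of the mean for a few increasing sample sizes, so you can watch one stay roughly flat while the other shrinks. (The reasoning behind it is left for you to work out.)

```python
import numpy as np

rng = np.random.default_rng(seed=1)
population_mean, population_sd = 100, 15     # assumed IQ-like population

for n in (10, 100, 1_000):
    # One sample of size n: its standard deviation estimates the population spread
    sample = rng.normal(population_mean, population_sd, size=n)
    sd = sample.std(ddof=1)

    # 5,000 samples of size n: the spread of their means is the standard error
    means = rng.normal(population_mean, population_sd, size=(5_000, n)).mean(axis=1)
    se = means.std(ddof=1)

    print(f"n = {n:>5}   sample SD ≈ {sd:5.2f}   standard error ≈ {se:5.2f}")
```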


Written by Prajwal Srinivas

Master’s of Data Analytics Engineering Student @ Northeastern University | Ex- HSBC | Ex-TCS