Alec Gehlot
2 min read · Dec 22, 2021


Photo by Matthew Henry on Unsplash

Imagine two small towns, each with only one hundred people. Town A has ninety-nine people earning £80,000 a year, and one super wealthy person who struck oil on her property, earning £5,000,000 a year. Town B has fifty people earning £100,000 a year and fifty people earning £140,000.

The average (mean) income of Town A is £129,200 and the average income of Town B is £120,000. Although Town A has the higher average income, in ninety-nine out of one hundred cases an individual selected at random from Town B will have a higher income than an individual selected from Town A [1].
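The arithmetic is easy to verify directly. A quick sketch in Python, using the incomes from the example, computes both means and the fraction of random pairings in which the Town B resident out-earns the Town A resident:

```python
# Illustrative sketch of the two-town example from the article.
town_a = [80_000] * 99 + [5_000_000]    # 99 ordinary earners plus one outlier
town_b = [100_000] * 50 + [140_000] * 50

mean_a = sum(town_a) / len(town_a)      # 129_200.0
mean_b = sum(town_b) / len(town_b)      # 120_000.0

# Fraction of all (A, B) pairings in which the Town B resident earns more:
# every B income beats £80,000 and loses to £5,000,000, so 99 out of 100.
p_b_wins = sum(b > a for a in town_a for b in town_b) / (len(town_a) * len(town_b))
```

The single £5,000,000 income is enough to push Town A's mean above Town B's, even though almost everyone in Town A earns less than anyone in Town B.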

The mistake we make is thinking that if you select someone at random from the group with the higher average, that individual is likely to have a higher income. If someone in the sample performed extremely well (perhaps Warren Buffett is included in your sample of investors), then the average return on investment is likely to overestimate what you can expect to get.

However, the opposite also applies: if someone in the sample has performed very badly, then the average will underestimate what you can expect to get. It is therefore important to examine how individual values influence the average, for example by looking at how the data is distributed, because we may be fooling ourselves more often than we think. All it takes is one extreme result in a sample for the average to deceive us.
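One simple distribution check is to compare the mean with the median, which a single extreme value barely moves. A sketch using Python's standard statistics module on the Town A incomes:

```python
import statistics

# Town A from the example: 99 people on £80,000 and one on £5,000,000.
town_a = [80_000] * 99 + [5_000_000]

# The outlier drags the mean well above what a typical resident earns,
# while the median still reflects the ordinary income.
print(statistics.mean(town_a))    # 129200
print(statistics.median(town_a))  # 80000
```

When the mean and median diverge this sharply, the average is being driven by a handful of extreme values rather than by the typical case.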

[1] Credit goes to Daniel Levitin for this example.
