Simpson’s paradox is a phenomenon in probability and statistics, in which a trend appears in different groups of data, but disappears or reverses when these groups are combined.
You need to be very careful when calculating averages or pooling data from different groups. It is always better to check whether the pooled data tells the same story as the non-aggregated data. If the stories differ, Simpson’s paradox is likely at work: a lurking variable may be reversing the apparent relationship between the explanatory and target variables.
Let us understand Simpson’s paradox with the help of an example:
In 1973, a court case was filed against the University of California, Berkeley, alleging gender bias in graduate admissions. Here, we will use synthetic data to explain what really happened.
Let’s assume the combined data for admissions in all departments is as follows:
If you observe the data carefully, you’ll see that 52% of the male applicants were admitted, while only 43% of the women were admitted to the university. Clearly, the admissions favoured the men, and the women were not given their due. However, the case is not as simple as it appears from this information alone. Let’s now assume that there are two different categories of departments — ‘Hard’ (hard to get into) and ‘Easy’.
Let’s divide the combined data into these categories and see what happens:
Do you see any gender bias here? In the ‘Easy’ department, 62% of the men and 80% of the women got admission. Likewise, in the ‘Hard’ department, 26% of the men and 27% of the women got admission. Is there any bias here? Yes, there is. But, interestingly, the bias is not in favour of the men; it favours the women! Yet if you combine this data, an altogether different story emerges: a bias favouring the men becomes apparent. In statistics, this phenomenon is known as ‘Simpson’s paradox.’ But why does this paradox occur?
Simpson’s paradox occurs when the effect of the explanatory variable on the target variable changes direction once you account for a lurking variable. In the example above, the lurking variable is the ‘department.’ The combined percentages imply that most of the men applied to the ‘Easy’ department, while most of the women applied to the ‘Hard’ one, so a far larger share of the women faced the low ‘Hard’ admission rate and were rejected. When the data is pooled, this shows up as an apparent bias towards male admissions, even though no such bias exists within either department.
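As a concrete sketch, the percentages quoted above can be reproduced with hypothetical application counts. The counts below are illustrative assumptions chosen to match the stated percentages; they are not the real Berkeley figures:

```python
# Hypothetical (applied, admitted) counts chosen to match the percentages
# quoted in the text above -- assumed numbers, not the real Berkeley data.
data = {
    ("Easy", "Men"):   (720, 446),   # ~62% admitted
    ("Easy", "Women"): (320, 256),   # 80% admitted
    ("Hard", "Men"):   (280, 73),    # ~26% admitted
    ("Hard", "Women"): (740, 200),   # ~27% admitted
}

def rate(pairs):
    """Pooled admission rate over a list of (applied, admitted) pairs."""
    applied = sum(a for a, _ in pairs)
    admitted = sum(ad for _, ad in pairs)
    return admitted / applied

# Within each department, the women's admission rate is the higher one.
for dept in ("Easy", "Hard"):
    men = rate([data[(dept, "Men")]])
    women = rate([data[(dept, "Women")]])
    print(f"{dept}: men {men:.0%}, women {women:.0%}")

# Pooled over both departments, the direction reverses: men look favoured.
men_all = rate([v for (d, g), v in data.items() if g == "Men"])
women_all = rate([v for (d, g), v in data.items() if g == "Women"])
print(f"Combined: men {men_all:.0%}, women {women_all:.0%}")
```

Because most men apply to the ‘Easy’ department and most women to the ‘Hard’ one, the pooled comparison mixes two very different baseline admission rates, producing the reversal.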
Now suppose you were a statistician for the Indian government inspecting a fighter plane that returned from the 1962 war with China. Looking at the bullet holes in the aircraft’s surface, what would you recommend? Would you recommend strengthening the areas hit by bullets?
The following is an excerpt from a StackExchange answer:
“During World War II, Abraham Wald was a statistician for the U.S. government. He looked at the bombers that returned from missions and analysed the pattern of the bullet ‘wounds’ on the planes. He recommended that the Navy reinforce areas where the planes had no damage.
Why? We have selective effects at work. This sample suggests that damage inflicted on the observed areas could be withstood. Either the plane was never hit in the untouched areas — an unlikely proposition — or strikes to those parts were lethal. We care about the planes that went down, not just those that returned. Those that fell likely suffered an attack in a place that was untouched on those that survived.”
In statistics, things are not as they appear on the surface. You need to be skeptical and look beyond the obvious during analysis. Maybe it’s time to read ‘Think Like a Freak’ or ‘How to Think Like Sherlock’. Let us know if you already have, and what you thought of them!
What is the impact of Simpson’s paradox on Data Analytics?
Simpson’s paradox demonstrates the necessity of understanding the data and its limits. As the world moves towards datasets gathered in extremely short spans of time, it reminds us of the importance of critical thinking when dealing with data, and of looking for hidden biases and variables. If the data is not stratified deeply enough, Simpson’s paradox may lurk in it: aggregating too much keeps the variance small but introduces bias from hidden variables, while disaggregating too much reduces that bias but leaves too little data in each subgroup to identify the underlying pattern, so the variance increases. In this sense, Simpson’s paradox can be viewed as an instance of the bias-variance trade-off.
What causes Simpson’s Paradox?
It happens because the subgroups being pooled are unequally represented: the relationship between the variables differs across subgroups, and aggregation weights those subgroups unevenly. This can result from the relationships between the variables or from the way the data has been partitioned into subgroups. A famous example is the 1973 graduate-admissions data at UC Berkeley. Looked at overall, the data suggested that men were more likely to be admitted than women, but when each department was examined individually, the opposite was true.
Is it possible to avoid Simpson’s Paradox?
The answer is yes. To avoid erroneous conclusions, it’s usually a good idea to check whether an association found in the aggregated dataset holds up in its subsets, especially if some groups in the data aren’t equally represented. Another option is to weight the samples by the sizes of the subgroups. Statistical analysis tools, however, are just that: tools to assist you in organising and analysing the data you’ve collected. They can’t give you any information about data that wasn’t collected or analysed. As a result, involving a multifunctional team, particularly subject matter experts and practitioners, is critical.
In a well-designed experiment or survey, Simpson’s paradox is unlikely to be an issue. You can identify potential hidden variables ahead of time and control them effectively by eliminating them, holding them constant for all groups, or including them in the study. Randomization goes a long way toward limiting the effects of any hidden variable that might have been overlooked.