What is P-Hacking & How To Avoid It in 2024?

Last updated: 27th Aug, 2023

Statistical analysis is an essential part of data science. One of the most important concepts in statistics is hypothesis testing with p-values. Interpreting a p-value can be tricky, and you might be doing it wrong. Beware of p-hacking!

By the end of this tutorial, you will understand:

  • P-Values
  • How to reject or fail to reject a null hypothesis
  • What is P-Hacking and how to avoid it
  • What is Statistical Power

Learn data science online from the world’s top universities. Earn Executive PG Programs, Advanced Certificate Programs, or Masters Programs to fast-track your career.

Let’s dive right in!

What are P-Values?

P-values measure how compatible your sample data are with the null hypothesis. Formally, the p-value is the probability of obtaining a result at least as extreme as the one observed, assuming the null hypothesis is true.

While performing statistical tests, a threshold value, or alpha, needs to be set before starting the test. A common value is 0.05, which can be thought of as the false-positive rate you are willing to tolerate. You then compare the p-value from your test against this alpha.

Therefore, if the p-value is less than alpha, the observed result would be unlikely to occur by chance alone if the null hypothesis were true, so we call it statistically significant. If our p-value comes out at, say, 0.04, we reject the null hypothesis.

A low p-value suggests that your sample provides enough evidence to reject the null hypothesis for the population. If you get a p-value below 0.05 in our case, you reject the null hypothesis. In other words, the effect seen in your sample is unlikely to have occurred by pure chance, and the experiment appears to have had a significant effect.
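
To make the decision rule concrete, here is a minimal Python sketch (not from the original article; the recovery-time data are synthetic) that runs a two-sample t-test and compares the resulting p-value against a pre-chosen alpha:

```python
# Minimal illustration: compare a t-test p-value against a pre-chosen alpha.
# The data are made up for this sketch.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
alpha = 0.05  # significance level, fixed before running the test

control = rng.normal(loc=14, scale=3, size=50)  # hypothetical recovery times (days), placebo
treated = rng.normal(loc=12, scale=3, size=50)  # hypothetical recovery times (days), vaccine

t_stat, p_value = stats.ttest_ind(treated, control)

print(f"p-value = {p_value:.4f}")
if p_value < alpha:
    print("Reject the null hypothesis: the difference is statistically significant.")
else:
    print("Fail to reject the null hypothesis.")
```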

So what can go wrong?

Although we say that any p-value below alpha gives us the liberty to reject the null hypothesis, we might be making a mistake if our experiment itself is not showing the right picture. In other words, the result might be a false positive.

Best Practices to Avoid P-Hacking

As we explore p-hacking techniques, it becomes clear how easily one can stray into these practices, whether inadvertently or deliberately. This highlights the importance of proper statistical training and an unwavering commitment to scientific integrity. The primary goal should be to present the data as they are, avoiding any inclination to shape them according to our preferences.

P-hacking can silently undermine the very core of scientific research. However, there is no need to worry: by adhering to certain best practices, you can ensure you stay on the correct path:

Develop a Clear Research Plan

Before conducting any research, develop a comprehensive and well-structured plan encompassing your hypotheses, data collection strategies, and analysis procedures. This roadmap safeguards against the tempting path of p-hacking, where one may resort to trial-and-error techniques, manipulating variables and experimenting with different analyses until significant results are obtained. By adhering to a predetermined plan, you uphold the integrity of your research and avoid unintentional bias or manipulation that could compromise the validity of your findings.

Pre-Register Your Studies

Before initiating the study, make your research plan known publicly. By doing so, you considerably reduce the temptation to deviate from your original goal in light of preliminary results. This open approach also signals to other researchers that your work can be taken more seriously, since it shows your dedication to impartial and unbiased study. Pre-register your investigations on a public registry that documents and publishes your research plan, ensuring more accountability and legitimacy in the scientific community.

Transparent Reporting

Embrace honesty as your most helpful ally in research by keeping track of all your efforts, including the unsuccessful ones. This dedication to openness necessitates the establishment of comparison groups in advance and delivering a thorough report containing all relevant variables, circumstances, data exclusions, tests, and measurements. By doing this, you can ensure that your study is transparent and that your findings are trustworthy, helping you build confidence in the scientific community.

Education and Training

The prevalence of “p-hacked” research frequently results from ignorance of the dangers rather than deliberate bad intentions. It is essential to understand statistical concepts and be conscious of the risks associated with p-hacking to protect against such practices. Continuous learning should be part of every researcher’s toolset, since it improves their capacity to conduct solid research, and understanding statistics is essential to achieving this goal.

Understanding that any choice made during statistical analysis might impact the outcomes is critical. P-hacking may not necessarily be an intentional act of dishonesty, but it typically results from a lack of statistical knowledge.

We can ensure the reliability of our research and the validity of our conclusions by following these recommended practices. Avoiding p-hacking is essential for maintaining the integrity of the overall scientific method and obtaining reliable results. Adopting these principles strengthens research’s position as a reliable source of information and insight and helps keep research authentic.

What is P-Hacking?

You must be wondering: what is p-hacking? We say that we have p-hacked when we incorrectly exploit statistical analysis and falsely conclude that we can reject the null hypothesis. Let’s understand this in detail.

# Hack 1

Consider that we have 5 candidate coronavirus vaccines and need to check which one has an actual impact on patients’ recovery time. So let’s say we do a hypothesis test for each of the 5 vaccines, one by one, and set alpha to 0.05. Hence, if the p-value for any vaccine comes out below that, we say we can reject the null hypothesis. Or can we?

Example 1

Say, Vaccine A gives a P-Value of 0.2, Vaccine B gives 0.058, Vaccine C gives 0.4, Vaccine D gives 0.02, Vaccine E gives 0.07.

Now, from the above results, a naive deduction would be that Vaccine D is the one that significantly reduces recovery time and can be used as the coronavirus vaccine. But can we really say that just yet? No. If we do, we might be p-hacking, as this could be a false positive.

Example 2

Okay, let’s take it another way. Consider that we have a Vaccine X and we know for sure that it is useless and has no effect on recovery time. Still, we carry out 10 hypothesis tests on different random samples, each with alpha set to 0.05. Say we get the following p-values in our 10 tests: 0.8, 0.7, 0.78, 0.65, 0.03, 0.1, 0.4, 0.09, 0.6, 0.75. If we took these tests at face value, the one with a surprisingly low p-value of 0.03 would have made us reject the null hypothesis, but in reality the vaccine has no effect; that result is a false positive.

So what do we see from the above examples? In essence, when we set alpha = 0.05 we accept a 5% false-positive rate, which means that about 5% of tests on a true null hypothesis will still come out significant purely by chance, as above.
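
A small simulation makes this concrete. The sketch below (purely illustrative; the data are synthetic) repeatedly tests two groups drawn from the same distribution, so the null hypothesis is true every time, yet roughly 5% of the tests still come out “significant”:

```python
# Simulate many tests where the null hypothesis is true (no real effect)
# and count how often the p-value still falls below alpha.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
alpha = 0.05
n_tests = 1000

false_positives = 0
for _ in range(n_tests):
    # Both groups come from the same distribution, so any "significant"
    # difference is a false positive.
    group_a = rng.normal(loc=14, scale=3, size=40)
    group_b = rng.normal(loc=14, scale=3, size=40)
    _, p = stats.ttest_ind(group_a, group_b)
    if p < alpha:
        false_positives += 1

print(f"{false_positives} of {n_tests} tests were 'significant' by chance "
      f"(about {false_positives / n_tests:.1%}, close to alpha = {alpha}).")
```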

Multiple Testing Problem

One way to tackle this would be to increase the number of tests: the more tests you run, the more confidently you can check whether most of them lead to rejecting the null. But more tests also mean more false positives (5% of the total tests in our case): 5 out of 100, 50 out of 1,000, or 500 out of 10,000! This is also called the Multiple Testing Problem.

False Discovery Rate

One way to tackle the above problem is to adjust all the p-values using a procedure that controls the False Discovery Rate (FDR). An FDR adjustment mathematically inflates the p-values, so p-values that incorrectly came out low may end up adjusted to values higher than 0.05.
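
As a hedged illustration (the Benjamini-Hochberg procedure is one common FDR method; the article does not name a specific one), here is how such an adjustment could be done with statsmodels, using the illustrative vaccine p-values from Example 1:

```python
# Adjust p-values for the false discovery rate (Benjamini-Hochberg).
# The input p-values are the illustrative numbers from Example 1.
from statsmodels.stats.multitest import multipletests

p_values = [0.2, 0.058, 0.4, 0.02, 0.07]  # Vaccines A-E

reject, p_adjusted, _, _ = multipletests(p_values, alpha=0.05, method="fdr_bh")

for name, raw, adj, rej in zip("ABCDE", p_values, p_adjusted, reject):
    print(f"Vaccine {name}: raw p = {raw:.3f}, adjusted p = {adj:.3f}, reject null: {rej}")
```

With these inputs, none of the adjusted p-values stays below 0.05, which matches the warning above that Vaccine D’s low raw p-value may simply be a false positive.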

# Hack 2

Now consider the case from Example 1 where Vaccine B gave a p-value of 0.058. Wouldn’t you be tempted to add some more data and retest to see if the p-value decreases? Say you add a few more data points, and the p-value for Vaccine B comes down to 0.048. Is this legitimate? No, you’d again be p-hacking. We cannot add or change data to suit our tests later; the exact sample size needs to be decided before performing the tests by doing a power analysis.

A power analysis tells us the sample size we need so that the test has a high probability of correctly rejecting the null hypothesis when it really is false, without getting fooled.
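
As a rough sketch (the effect size of 0.5 and the 80% power target are assumed values for illustration, not from the article), a power analysis could look like this with statsmodels:

```python
# Solve for the per-group sample size needed to detect an assumed effect
# size with 80% power at alpha = 0.05 in a two-sample t-test.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
sample_size = analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.8)

print(f"Required sample size per group: about {sample_size:.0f}")
```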

# Hack 3

One more mistake you should not make is changing the alpha after you have run the experiment. Once you see a p-value of 0.058, you might think: what if my alpha were 0.06?

But you cannot change alpha once your experiment starts; it must be fixed in advance.

Impact Of P-Hacking in Data Science and Machine Learning Projects

P-hacking harms research studies, frequently without the researcher’s knowledge. Data dredging can have several well-known negative impacts in the fields of data science and machine learning, including:

  • The generation of false positives, which compromises the accuracy of the findings.
  • Deception of other researchers and distortion of research findings.
  • An increase in the analysis’s biases.
  • Significant resource waste, notably in terms of labour.
  • Improper model training, which reduces accuracy and validity.
  • Forcing researchers to retract their findings from publications.
  • A reduction in funding for additional research projects.

Before you go

Hypothesis testing and p-values are a tricky subject and need to be carefully understood before drawing any conclusions. Statistical power and power analysis are an important part of this and need to be kept in mind before starting the tests.

If you are curious to learn about data science, check out IIIT-B & upGrad’s PG Diploma in Data Science which is created for working professionals and offers 10+ case studies & projects, practical hands-on workshops, mentorship with industry experts, 1-on-1 with industry mentors, 400+ hours of learning and job assistance with top firms.

Rohit Sharma, Blog Author

Rohit Sharma is the Program Director for the UpGrad-IIIT Bangalore PG Diploma in Data Analytics Program.

Frequently Asked Questions (FAQs)

1. What do you understand by P-Hacking?

P-hacking, or data dredging, is the misuse of data analysis techniques to find patterns in data that appear significant but are not. It harms a study because the seemingly significant patterns it produces are spurious, which can drastically increase the number of false positives.

P-hacking cannot be prevented completely, but some practices can greatly reduce it and help you avoid the trap.

2. What should I keep in mind to avoid p-hacking?

You can use some safe practices to minimise the instances of p-hacking. First, make a detailed plan of the tests you will carry out and register it on an online registry. Then, make sure the complete test runs to the end and do not stop it midway, even if the desired p-value has already been reached.

Apart from these measures, you can also start with a high-quality data set to reduce the chance of error. All these safeguards will help you avoid data dredging to a great extent.

3. What is the False Discovery Rate?

The False Discovery Rate (FDR) is one of the more advanced approaches to the problems around p-hacking. It adjusts the p-value of each test so that the expected proportion of false discoveries among the significant results is controlled. Because it is less conservative than alternatives such as the Bonferroni correction, it retains more power to detect genuinely significant results.

These adjusted p-values are also known as q-values. There are other versions of the FDR approach, such as the optimised FDR approach.
