**Introduction**

Probability has long been an important aspect of data science and plays a pivotal role in the daily work of data analysts and data scientists. Its core concepts are a must-know for anyone in the field: the statistical methods used for making predictions are built on probability and statistics, which makes probability a crucial part of the data science domain.

Probability quantifies how likely a certain event is to occur under a given set of assumptions, i.e. it indicates the likelihood of an event occurring. To represent the different possible values that a random variable can take, together with their likelihoods, we use a probability distribution.

A random variable assigns a numerical value to each possible outcome of a random experiment. To illustrate, if a die is rolled, the possible outcomes are the values 1 through 6, and these become the values of the random variable.

A probability distribution can be of two types: discrete or continuous. Discrete distributions are for variables that take only a countable number of values within a range, while continuous distributions are for variables that can take infinitely many values within a range. In this article, we will explore discrete distributions and then the probability mass function.

**Discrete Distribution**

A discrete distribution represents the probabilities of the different outcomes of a discrete random variable. In simple terms, it is all the probabilities of the random variable's outcomes put together, which lets us see the pattern across those outcomes.

To create a probability distribution for a random variable, we need the outcomes of the random variable along with their associated probabilities; from these we can construct its probability distribution.

Some common types of discrete distributions are:

- Binomial distribution: each trial has only two possible outcomes (yes or no, success or failure, etc.), and the variable counts the successes across a fixed number of independent trials. Example: tossing a coin.
- Bernoulli distribution: a special case of the binomial distribution in which the number of trials is exactly 1.
- Poisson distribution: gives the probability of an event occurring a certain number of times in a fixed period of time. Example: the number of times a movie is streamed on a Saturday night.
- Discrete uniform distribution: every outcome of the random variable has the same probability. Example: rolling a fair die (all six faces are equally likely to show up).
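As a rough sketch of the distributions listed above, their probability mass functions can be written with only Python's standard library (the function names here are our own, not from any particular library):

```python
from math import comb, exp, factorial

def binomial_pmf(k, n, p):
    # P(X = k): probability of k successes in n independent trials,
    # each succeeding with probability p
    return comb(n, k) * p**k * (1 - p)**(n - k)

def poisson_pmf(k, lam):
    # P(X = k): probability of k events in a period with average rate lam
    return exp(-lam) * lam**k / factorial(k)

def uniform_pmf(k, n):
    # P(X = k): each of n equally likely outcomes has the same probability
    return 1 / n

# A fair coin tossed once (Bernoulli = binomial with n = 1)
print(binomial_pmf(1, 1, 0.5))   # 0.5
# Probability that a fair die shows any one particular face
print(uniform_pmf(3, 6))         # ≈ 0.1667
```

Each function returns P(X = k) for a single value k; summing it over all possible k gives 1, as a valid PMF must.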

To calculate the probability that a random variable takes a particular value within its range, the probability mass function (PMF) is used. The formula for the probability mass function differs from distribution to distribution.

For better clarity on the probability mass function, let us walk through an example. Suppose we want to find which batting position in a cricket team has the highest probability of scoring a century, given some related data. Since a team has 11 playing positions, the random variable takes values from 1 to 11.

The probability mass function, also called the discrete density function, gives the probability of scoring a century at each position, i.e. P(X = 1), P(X = 2), …, P(X = 11). Once all of these probabilities have been computed, we have the probability distribution of that random variable.

The general formula for the probability mass function is:

**P_X(x_k) = P(X = x_k), for k = 1, 2, …, n**

where,

X = the discrete random variable,

x_k = a possible value of the random variable,

P_X(x_k) = the probability that X takes the value x_k.
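As an illustration of this formula, an empirical PMF can be estimated from observed data by counting how often each value of the random variable occurs. The data below is purely hypothetical:

```python
from collections import Counter

# Hypothetical data: batting position of the scorer for each recorded century
centuries = [1, 2, 2, 3, 1, 4, 2, 5, 3, 2]

counts = Counter(centuries)
n = len(centuries)

# Empirical PMF: P(X = x_k) = (number of times x_k occurred) / (total observations)
pmf = {position: counts[position] / n for position in sorted(counts)}
print(pmf)   # {1: 0.2, 2: 0.4, 3: 0.2, 4: 0.1, 5: 0.1}
```

Here position 2 has the highest estimated probability of scoring a century, and the probabilities of all positions sum to 1.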

Many people confuse the probability mass function (PMF) with the probability density function (PDF). To clear this up: the probability mass function applies to discrete random variables, i.e. variables that take a countable number of values within a range.

The probability density function, in contrast, applies to continuous random variables, i.e. variables that can take infinitely many values within a range. The probability mass function also helps in calculating summary statistics of a discrete distribution, such as its mean and variance.
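For instance, here is a minimal sketch of computing the mean and variance of a discrete distribution directly from its PMF, using a fair six-sided die as the example:

```python
# pmf maps each value x_k to P(X = x_k); here, a fair six-sided die
pmf = {x: 1/6 for x in range(1, 7)}

# Mean: E[X] = sum of x_k * P(X = x_k)
mean = sum(x * p for x, p in pmf.items())

# Variance: E[(X - mean)^2] = sum of (x_k - mean)^2 * P(X = x_k)
variance = sum((x - mean) ** 2 * p for x, p in pmf.items())

print(mean)      # ≈ 3.5
print(variance)  # ≈ 2.9167 (i.e. 35/12)
```

The same two formulas work for any discrete distribution once its PMF is known.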


**Properties of Probability Mass Function**

- The probabilities of all possible values of the random variable must sum to 1. [∑ P_X(x_k) = 1]
- Every probability must be non-negative. [P_X(x_k) ≥ 0]
- Consequently, the probability of each outcome lies between 0 and 1. [0 ≤ P_X(x_k) ≤ 1]
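These properties can be checked programmatically; the helper below is an illustrative sketch, not a function from any library:

```python
def is_valid_pmf(pmf, tol=1e-9):
    # Property: every probability is non-negative
    # (together with the sum condition, this also bounds each one by 1)
    if any(p < 0 for p in pmf.values()):
        return False
    # Property: the probabilities sum to 1 (within floating-point tolerance)
    return abs(sum(pmf.values()) - 1) < tol

print(is_valid_pmf({0: 0.25, 1: 0.5, 2: 0.25}))  # True
print(is_valid_pmf({0: 0.7, 1: 0.7}))            # False (sums to 1.4)
```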

**Conclusion**

Concepts from probability theory, such as the probability mass function, are very useful in the data science domain. They may not appear in every aspect of a data science project, or even in every project, but that does not diminish the importance of probability theory in this field.

Applications of probability theory have produced great results not only in data science but in other areas of industry as well, since probability supports interesting insights and decision-making, which always makes it worth learning.

This article provided an overview of the importance of probability in data science and introduced basic concepts such as the probability distribution and the probability mass function. It focused mainly on discrete random variables, since the probability mass function applies to them. The terminology for continuous variables differs, but the overall ideas remain similar to those explained here.

### How is a discrete probability distribution different from a continuous probability distribution?

A discrete probability distribution, or simply a discrete distribution, gives the probabilities of a discrete random variable. For example, if we toss a coin twice, the random variable X denoting the total number of heads can only take the values {0, 1, 2}, not arbitrary values.

The Bernoulli, binomial, and hypergeometric distributions are some examples of discrete probability distributions.

On the other hand, a continuous probability distribution gives the probabilities of a random variable that can take any value within a range. For example, a random variable X denoting the height of a city's citizens could take any value, such as 161.2 or 150.9.

The normal, Student’s t, and chi-square distributions are some examples of continuous distributions.
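The two-coin-toss example above can be verified with a short sketch (a binomial distribution with n = 2 trials and p = 0.5):

```python
from math import comb

# PMF of X = number of heads in two fair coin tosses
pmf = {k: comb(2, k) * 0.5**k * 0.5**(2 - k) for k in range(3)}
print(pmf)   # {0: 0.25, 1: 0.5, 2: 0.25}
```

X takes exactly the values {0, 1, 2}, as stated, and nothing in between.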

### What is the hypergeometric distribution?

The hypergeometric distribution is a discrete distribution that models the number of successes in a fixed number of trials drawn without replacement. It is useful whenever we need the probability of an outcome when sampled items are not returned to the population.

Say we have a bag full of red and green balls and want the probability of picking a given number of green balls in 5 draws, where each drawn ball is not returned to the bag. This is an apt example of the hypergeometric distribution.
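A minimal sketch of the hypergeometric PMF using `math.comb` (the bag contents below are hypothetical numbers chosen for illustration):

```python
from math import comb

def hypergeom_pmf(k, N, K, n):
    # P(X = k): probability of exactly k successes when drawing n items
    # without replacement from a population of N items containing K successes
    return comb(K, k) * comb(N - K, n - k) / comb(N, n)

# Hypothetical bag: 12 balls, of which 7 are green; draw 5 without replacement.
# Probability that exactly 2 of the drawn balls are green:
print(hypergeom_pmf(2, 12, 7, 5))   # ≈ 0.2652
```

Because the balls are not replaced, each draw changes the composition of the bag, which is exactly what the without-replacement counting in the formula captures.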

### What is the importance of probability in Data Science?

As data science is all about studying data, probability plays a key role here. The following reasons describe how probability is an indispensable part of data science:

1. It helps analysts and researchers make predictions from data sets. These estimated results are the foundation for further analysis of the data.

2. Probability is also used while developing algorithms used in machine learning models. It helps in analysing the data sets used for training the models.

3. It allows you to quantify data and derive summary results such as the mean, variance, and distribution of the data.

4. All the results achieved using probability eventually summarize the data. This summary also helps identify outliers in the data sets.