Are you someone who wishes to make a successful career in Machine Learning? If so, great for you!

But first, you must prepare yourself for the ice-breaker – the ML interview.

Since the process of preparing for an interview can be overwhelming, we’ve decided to step in – here’s a curated list of 15 most commonly asked questions in Machine Learning interviews!

**What’s the difference between Deep Learning and Machine Learning?**

Machine Learning involves the application of advanced algorithms to parse data, uncover the hidden patterns within it, learn from those patterns, and apply the learned insights to make informed business decisions. Deep Learning is a subset of Machine Learning that uses Artificial Neural Networks inspired by the neural structure of the human brain. Deep Learning is widely used in feature detection.

**Define – Precision and Recall.**

Precision, or Positive Predictive Value, measures the number of true positives claimed by a model as a proportion of the total number of positives it claims.

Recall, or True Positive Rate, measures the number of true positives claimed by a model as a proportion of the actual number of positives present throughout the data.
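As a sketch, both metrics can be computed directly from confusion-matrix counts in plain Python (no particular library assumed):

```python
def precision_recall(y_true, y_pred):
    """Precision = TP / (TP + FP); Recall = TP / (TP + FN). Labels are 0/1."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

# Example: 2 true positives, 1 false positive, 1 false negative.
p, r = precision_recall([1, 1, 0, 1, 0], [1, 1, 1, 0, 0])
# p = 2/3, r = 2/3
```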

**Explain the terms ‘bias’ and ‘variance’.**

During the training process, the expected error of a learning algorithm is generally decomposed into two parts – bias and variance. ‘Bias’ is error caused by overly simple assumptions in the learning algorithm, while ‘variance’ is error caused by the algorithm being too complex for the data it analyses. Bias measures how close the average classifier produced by the learning algorithm is to the target function, and variance measures how much the algorithm’s predictions vary across different training data sets.

**How does a ROC curve function?**

The ROC or Receiver Operating Characteristic curve is a plot of the true positive rate against the false positive rate at varying thresholds. It is a fundamental tool for diagnostic test evaluation and is often used as a representation of the trade-off between the sensitivity of the model (true positives) and the probability of triggering false alarms (false positives).

- The curve depicts the trade-off between sensitivity and specificity – if the sensitivity increases, the specificity will decrease.
- If the curve hugs the left-hand axis and the top of the ROC space, the test is usually more accurate. However, if the curve comes closer to the 45-degree diagonal of the ROC space, the test is less accurate or reliable.
- The slope of the tangent line at a cutpoint indicates the Likelihood Ratio (LR) for that particular value of the test.
- The area under the curve measures the test accuracy.
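A minimal sketch of how the curve is traced and the area computed, in plain Python (libraries such as scikit-learn provide `roc_curve` and `roc_auc_score` for real use):

```python
def roc_points(y_true, scores):
    """(FPR, TPR) pairs obtained by sweeping the decision threshold."""
    pos = sum(y_true)
    neg = len(y_true) - pos
    points = [(0.0, 0.0)]
    for thresh in sorted(set(scores), reverse=True):
        preds = [1 if s >= thresh else 0 for s in scores]
        tp = sum(1 for t, p in zip(y_true, preds) if t == 1 and p == 1)
        fp = sum(1 for t, p in zip(y_true, preds) if t == 0 and p == 1)
        points.append((fp / neg, tp / pos))
    return points

def auc(points):
    """Trapezoidal area under the (FPR, TPR) curve."""
    return sum((x2 - x1) * (y1 + y2) / 2
               for (x1, y1), (x2, y2) in zip(points, points[1:]))

# A scorer that ranks all positives above all negatives has AUC = 1.
pts = roc_points([0, 0, 1, 1], [0.1, 0.2, 0.8, 0.9])
# auc(pts) → 1.0
```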

**Explain the difference between Type 1 and Type 2 errors.**

Type 1 error is a false positive error that ‘claims’ that an incident has occurred when, in fact, nothing has occurred. The best example of a false positive error is a false fire alarm – the alarm starts ringing when there’s no fire. Contrary to this, a Type 2 error is a false negative error that ‘claims’ nothing has occurred when something has definitely happened. It would be a Type 2 error to tell a pregnant lady that she isn’t carrying a baby.

**Why is Bayes referred to as “Naive Bayes?”**

Naive Bayes is referred to as “naive” because, although it has many practical applications, it rests on an assumption that rarely holds in real-life data – that all the features in a data set are independent and equally important. In the Naive Bayes approach, the conditional probability is computed as the pure product of the probabilities of the individual features, thereby implying their complete independence. Unfortunately, this assumption is almost never fulfilled in a real-world scenario.
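A toy sketch of that product-of-probabilities computation, with hypothetical counts (plain Python; the score is proportional to the posterior, so normalization is skipped):

```python
def naive_bayes_score(prior, likelihoods):
    """P(class | features) is proportional to P(class) * product of
    P(feature_i | class), treating each feature as independent given the class."""
    score = prior
    for p in likelihoods:
        score *= p
    return score

# Hypothetical spam filter: P(spam) = 0.4, with per-word likelihoods.
spam = naive_bayes_score(0.4, [0.8, 0.7])  # approx. 0.224
ham = naive_bayes_score(0.6, [0.1, 0.2])   # approx. 0.012
predicted = "spam" if spam > ham else "ham"
```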

**What is meant by the term ‘Overfitting’? Can you avoid it? If so, how?**

Usually, during the training process, a model is fed large amounts of data. In the course of the process, the model starts learning even from the inaccurate information and noise present in the sample data set. This negatively influences the performance of the model on new data – the model cannot accurately classify instances other than those of the training set. This is known as Overfitting.

Yes, it is possible to avoid Overfitting. Here’s how:

- Gather more data (from disparate sources) to train the model with different samples.
- Apply ensembling methods (for example, Random Forest) that use the bagging approach to reduce the variance of the predictions by combining the results of multiple Decision Trees trained on different subsets of the data set.
- Make sure to use cross-validation techniques.
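The cross-validation idea can be sketched as a plain k-fold split (scikit-learn's `KFold` and `cross_val_score` do this in practice; `train_and_score` here is a hypothetical stand-in for fitting and evaluating a model):

```python
def kfold_indices(n, k):
    """Split range(n) into k contiguous folds of near-equal size."""
    fold_sizes = [n // k + (1 if i < n % k else 0) for i in range(k)]
    folds, start = [], 0
    for size in fold_sizes:
        folds.append(list(range(start, start + size)))
        start += size
    return folds

def cross_validate(n_samples, k, train_and_score):
    """Average the score over k train/validation splits."""
    folds = kfold_indices(n_samples, k)
    scores = []
    for i, val_idx in enumerate(folds):
        train_idx = [j for f, fold in enumerate(folds) if f != i for j in fold]
        scores.append(train_and_score(train_idx, val_idx))
    return sum(scores) / k
```

Because every sample is used for validation exactly once, a model that only memorizes its training fold scores poorly, which is what makes cross-validation a good Overfitting check.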

**Name the two methods used for calibration in Supervised Learning.**

The two calibration methods in Supervised Learning are – Platt Calibration and Isotonic Regression. Both these methods are specifically designed for binary classification.
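Platt Calibration fits a sigmoid to the classifier's scores, while Isotonic Regression fits a non-decreasing step function; scikit-learn exposes both through `CalibratedClassifierCV` (`method='sigmoid'` or `'isotonic'`). The core of isotonic regression – the Pool Adjacent Violators step – can be sketched in plain Python:

```python
def pava(y):
    """Pool Adjacent Violators: least-squares non-decreasing fit to y.
    Applied to 0/1 labels sorted by classifier score, the fitted values
    serve as calibrated probabilities."""
    means, weights = [], []
    for v in y:
        means.append(float(v))
        weights.append(1)
        # Merge adjacent blocks while they violate monotonicity.
        while len(means) > 1 and means[-2] > means[-1]:
            m2, w2 = means.pop(), weights.pop()
            m1, w1 = means.pop(), weights.pop()
            means.append((m1 * w1 + m2 * w2) / (w1 + w2))
            weights.append(w1 + w2)
    out = []
    for m, w in zip(means, weights):
        out.extend([m] * w)
    return out

# Labels ordered by increasing score; the 1, 0 violation is pooled to 0.5.
pava([0, 1, 0, 1, 1])  # → [0.0, 0.5, 0.5, 1.0, 1.0]
```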

**Why do you prune a Decision Tree?**

Decision Trees need to be pruned to get rid of the branches with weak predictive ability. This helps to reduce the complexity of the Decision Tree model and optimize its predictive accuracy. Pruning can be done either top-down or bottom-up. Reduced error pruning, cost-complexity pruning, error complexity pruning, and minimum error pruning are some of the most used Decision Tree pruning methods.
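A minimal sketch of reduced error pruning on a hand-built tree (the dict structure here is hypothetical; scikit-learn instead offers cost-complexity pruning via the `ccp_alpha` parameter of `DecisionTreeClassifier`). For brevity this sketch scores each candidate subtree on the whole validation set; a full implementation routes only the examples that reach that node:

```python
def predict(node, x):
    if "label" in node:  # leaf node
        return node["label"]
    branch = "left" if x[node["feature"]] == 0 else "right"
    return predict(node[branch], x)

def error(node, data):
    return sum(predict(node, x) != y for x, y in data)

def prune(node, data):
    """Bottom-up: replace a subtree with its majority-class leaf
    whenever that does not increase validation error."""
    if "label" in node:
        return node
    node["left"] = prune(node["left"], data)
    node["right"] = prune(node["right"], data)
    leaf = {"label": node["majority"]}
    if error(leaf, data) <= error(node, data):
        return leaf
    return node

tree = {
    "feature": 0, "majority": 1,
    "left": {"label": 0},
    "right": {
        "feature": 1, "majority": 1,
        "left": {"label": 1},
        "right": {"label": 0},  # a branch fitted to noise
    },
}
validation = [((0, 0), 0), ((1, 0), 1), ((1, 1), 1)]
tree = prune(tree, validation)
# The noisy right subtree collapses to the majority leaf {"label": 1}.
```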

**What is meant by F1 score?**

In simple terms, the F1 score is a measure of a model’s performance – the harmonic mean of the Precision and Recall of a model, with results near 1 being the best and those near 0 being the worst. The F1 score can be used in classification tests that do not place importance on true negatives.
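As a sketch, the harmonic mean in plain Python:

```python
def f1_score(precision, recall):
    """F1 = 2 * P * R / (P + R): the harmonic mean of precision and recall."""
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

f1_score(1.0, 1.0)  # → 1.0 (perfect)
f1_score(0.5, 0.5)  # → 0.5
```

Unlike the arithmetic mean, the harmonic mean punishes imbalance: a model with high precision but near-zero recall still gets an F1 near zero.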

**Differentiate between a Generative and Discriminative algorithm.**

While a Generative algorithm models how the data in each category is generated, a Discriminative algorithm learns the distinction between the different categories of data. When it comes to classification tasks, discriminative models typically outperform generative models.

**What is Ensemble Learning?**

Ensemble Learning uses a combination of learning algorithms to optimize the predictive performance of models. In this method, multiple models such as classifiers or experts are strategically generated and combined to prevent Overfitting. It is mostly used to enhance the prediction, classification, function approximation, and overall performance of a model.
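The combination step can be sketched as a simple majority vote over several models' predictions (bagging-style ensembles such as Random Forest do something similar internally; the classifier outputs below are made up for illustration):

```python
from collections import Counter

def majority_vote(model_predictions):
    """model_predictions: one list of labels per model, all the same length.
    Returns the per-sample majority label."""
    return [Counter(votes).most_common(1)[0][0]
            for votes in zip(*model_predictions)]

# Three hypothetical classifiers; each disagrees somewhere, yet the
# vote recovers the consensus per sample.
majority_vote([[1, 0, 1],
               [1, 1, 1],
               [0, 0, 1]])  # → [1, 0, 1]
```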

**Define ‘Kernel Trick’.**

The Kernel Trick involves the use of kernel functions that can operate in a high-dimensional, implicit feature space without ever computing the coordinates of points within that space explicitly. Kernel functions compute the inner products between the images of all pairs of data points in the feature space. This procedure is computationally cheaper than the explicit computation of the coordinates, and is what is known as the Kernel Trick.
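A sketch of the idea with the degree-2 polynomial kernel: k(x, z) = (x·z)² equals the inner product of the explicit quadratic feature maps, but never constructs them:

```python
import math

def explicit_map(x):
    """Explicit degree-2 feature map for 2-d input: (x1^2, x2^2, sqrt(2)*x1*x2)."""
    return [x[0] ** 2, x[1] ** 2, math.sqrt(2) * x[0] * x[1]]

def poly_kernel(x, z):
    """k(x, z) = (x . z)^2 — the same inner product, with no explicit mapping."""
    return (x[0] * z[0] + x[1] * z[1]) ** 2

x, z = [1.0, 2.0], [3.0, 1.0]
explicit = sum(a * b for a, b in zip(explicit_map(x), explicit_map(z)))
implicit = poly_kernel(x, z)
# Both give 25 (up to floating-point rounding); the kernel version
# never builds the 3-d feature vectors.
```

For higher polynomial degrees or the RBF kernel the explicit feature space grows huge or infinite, which is exactly when the trick pays off.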

**How should you handle missing or corrupted data in a dataset?**

To handle missing/corrupted data in a dataset, you can either drop the affected rows and columns or replace the missing values with other values. The Pandas library has handy methods for this – isnull() to find the missing values, dropna() to drop the rows/columns containing them, and fillna() to replace them with a substitute value.
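A small sketch of those Pandas calls on a made-up frame (column names and fill values are illustrative):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"age": [25, np.nan, 31],
                   "city": ["NY", "LA", None]})

mask = df.isnull()       # True wherever a value is missing
dropped = df.dropna()    # drop every row containing any missing value
filled = df.fillna({"age": df["age"].mean(),  # impute with the column mean
                    "city": "unknown"})       # or a placeholder
```

Dropping is the simpler option but discards information; imputing with fillna() keeps all rows at the cost of introducing an assumption about the missing values.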

**What is a Hash Table?**

A Hash Table is a data structure that creates an associative array, wherein a key is mapped to specific values by using a hash function. Hash tables are mostly used in database indexing.
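Python's built-in dict is itself a hash table; a toy sketch of the mechanism, using separate chaining to resolve collisions:

```python
class HashTable:
    """Toy hash table: hash(key) picks a bucket; collisions share a bucket."""

    def __init__(self, n_buckets=8):
        self.buckets = [[] for _ in range(n_buckets)]

    def _bucket(self, key):
        return self.buckets[hash(key) % len(self.buckets)]

    def put(self, key, value):
        bucket = self._bucket(key)
        for i, (k, _) in enumerate(bucket):
            if k == key:               # overwrite an existing key
                bucket[i] = (key, value)
                return
        bucket.append((key, value))

    def get(self, key):
        for k, v in self._bucket(key):
            if k == key:
                return v
        raise KeyError(key)

table = HashTable()
table.put("user:1", "Ada")
table.put("user:1", "Grace")  # same key overwrites
table.get("user:1")           # → "Grace"
```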

This list of questions is only meant to introduce you to the basics of Machine Learning, and frankly, these fifteen questions are just a drop in the sea. Machine Learning is advancing as we speak, and hence, with time, new concepts will emerge. The key to nailing your ML interviews, thus, lies in harbouring a constant urge to learn and upskill. So, get started and scour the Internet, read journals, join online communities, attend ML conferences and seminars – there are so many ways to learn.