10 Basic Machine Learning Interview Questions
Explain the difference between supervised and unsupervised machine learning.
In supervised machine learning, we train the model on labeled data, where every example comes with a known target, for example, predicting stock market prices from historical records. In unsupervised machine learning, we have no labels; the algorithm finds structure in the data on its own, for example, grouping customers during market segmentation.
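The distinction shows up directly in code. A minimal sketch, assuming scikit-learn is available: the same feature matrix is used with labels for a supervised model and without them for an unsupervised one.

```python
from collections import Counter

from sklearn.cluster import KMeans
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)

# Supervised: the model learns from labeled pairs (X, y).
clf = LogisticRegression(max_iter=1000).fit(X, y)
print("supervised accuracy:", clf.score(X, y))

# Unsupervised: only X is given; the model groups similar points itself.
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
print("cluster sizes:", sorted(Counter(km.labels_).values()))
```

Note that the supervised model is scored against known answers, while the clustering result can only be inspected, since there are no labels to compare against.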
Explain the difference between KNN and K-Means clustering.
K-Nearest Neighbours (KNN) is a supervised machine learning algorithm: we provide labeled data, and the model classifies a new point by taking a majority vote among the labels of its k nearest points. K-Means clustering, on the other hand, is an unsupervised machine learning algorithm: we provide unlabeled data, and it partitions the points into k clusters by repeatedly assigning each point to its nearest centroid and recomputing each centroid as the mean of the points assigned to it.
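A toy sketch of the two assignment rules using only the standard library (the helper names `knn_predict` and `kmeans_assign` are made up for illustration): KNN votes among labeled neighbours, while K-Means measures distance to unlabeled cluster centroids.

```python
import math
from collections import Counter

def knn_predict(labeled_points, query, k=3):
    """Supervised: majority vote among the k labeled points nearest to the query."""
    nearest = sorted(labeled_points, key=lambda p: math.dist(p[0], query))[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]

def kmeans_assign(centroids, point):
    """Unsupervised: one K-Means step, assigning a point to its nearest centroid."""
    return min(range(len(centroids)), key=lambda i: math.dist(centroids[i], point))

data = [((0, 0), "a"), ((0, 1), "a"), ((1, 0), "a"), ((5, 5), "b"), ((5, 6), "b")]
print(knn_predict(data, (0.5, 0.5)))                  # -> "a"
print(kmeans_assign([(0.3, 0.3), (5, 5.5)], (4, 5)))  # -> 1 (index of nearest centroid)
```

In a full K-Means run, `kmeans_assign` would be alternated with a centroid-update step until the assignments stop changing.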
What is the difference between classification and regression?
Classification produces discrete results: it assigns data to specific categories, for example classifying e-mails into spam and non-spam. Regression is used when the target is continuous, for example predicting the price of a stock at a certain point in time.
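As a sketch (assuming scikit-learn; the tiny dataset is invented for illustration), the same tree-based model family handles both tasks, differing only in the type of target it predicts:

```python
from sklearn.tree import DecisionTreeClassifier, DecisionTreeRegressor

X = [[1.0], [2.0], [3.0], [4.0]]
y_class = ["spam", "spam", "ham", "ham"]  # discrete categories -> classification
y_reg = [10.0, 20.0, 30.0, 40.0]          # continuous values   -> regression

clf = DecisionTreeClassifier(random_state=0).fit(X, y_class)
reg = DecisionTreeRegressor(random_state=0).fit(X, y_reg)

print(clf.predict([[1.5]]))  # a category label
print(reg.predict([[1.5]]))  # a real number
```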
How to ensure that your model is not overfitting?
Keep the design of the model simple, and reduce noise by considering fewer variables and parameters. Cross-validation techniques such as k-fold cross-validation help keep overfitting under control. Regularization techniques such as LASSO help avoid overfitting by penalizing large model coefficients, shrinking the least informative ones toward (or exactly to) zero.
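The last two points can be sketched with scikit-learn (an assumed setup; the diabetes dataset is just a convenient built-in): k-fold cross-validation estimates generalization on held-out folds, and LASSO's L1 penalty zeroes out weak coefficients.

```python
from sklearn.datasets import load_diabetes
from sklearn.linear_model import Lasso
from sklearn.model_selection import cross_val_score

X, y = load_diabetes(return_X_y=True)
lasso = Lasso(alpha=1.0)

# k-fold cross-validation (k=5): five train/validation rounds.
scores = cross_val_score(lasso, X, y, cv=5)
print("mean CV score:", scores.mean())

# The L1 penalty drives some coefficients to exactly zero,
# effectively removing those features from the model.
lasso.fit(X, y)
print("zeroed coefficients:", sum(c == 0.0 for c in lasso.coef_))
```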
What are the different sets in which we divide any dataset for Machine Learning?
For any ML application, we divide our dataset into three segments: the training set, the validation set, and the test set. The training set is used to fit the model, the validation set is used for hyperparameter tuning, and the test set is used to measure how well the final model performs on unseen data.
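A common way to produce the three splits (assuming scikit-learn; the 60/20/20 ratio is only illustrative) is to call `train_test_split` twice:

```python
import numpy as np
from sklearn.model_selection import train_test_split

X = np.arange(100).reshape(50, 2)
y = np.arange(50)

# First split off 40% of the data, then cut that part into validation and test.
X_train, X_rest, y_train, y_rest = train_test_split(X, y, test_size=0.4, random_state=0)
X_val, X_test, y_val, y_test = train_test_split(X_rest, y_rest, test_size=0.5, random_state=0)

print(len(X_train), len(X_val), len(X_test))  # 30 10 10
```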
What is the main advantage of Naive Bayes?
A Naive Bayes classifier converges much faster than other models such as logistic regression. As a result, we need less training data in the case of a Naive Bayes classifier.
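A sketch of that data efficiency (assuming scikit-learn): a Gaussian Naive Bayes classifier trained on only 20% of the iris dataset still generalizes well to the rest.

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB

X, y = load_iris(return_X_y=True)

# Train on just 20% of the data (stratified so every class is represented).
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, train_size=0.2, stratify=y, random_state=0
)
nb = GaussianNB().fit(X_tr, y_tr)
print("test accuracy:", nb.score(X_te, y_te))
```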
Explain Ensemble learning.
In ensemble learning, many base models (classifiers or regressors) are generated and combined so that together they give better results than any single one. It works best when the component models are accurate and make independent errors. There are sequential ensemble methods (such as boosting) as well as parallel ones (such as bagging).
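A parallel ensemble can be sketched with scikit-learn's `VotingClassifier` (the choice of base models here is arbitrary): three dissimilar classifiers each cast one vote per prediction.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)

# Three dissimilar base models; the ensemble takes a majority vote.
ensemble = VotingClassifier([
    ("lr", LogisticRegression(max_iter=5000)),
    ("nb", GaussianNB()),
    ("dt", DecisionTreeClassifier(random_state=0)),
])
ensemble.fit(X, y)
print("training accuracy:", ensemble.score(X, y))
```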
Explain Dimensionality Reduction in machine learning.
Dimensionality reduction is the process of reducing the number of dimensions of a dataset by reducing the number of features. It matters because, as we move into higher dimensions, the data points start becoming nearly equidistant from each other, which hurts the performance of ML algorithms that use Euclidean distance as the similarity function. This is known as the Curse of Dimensionality. Also, it is difficult to visualize data beyond three dimensions.
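PCA is a standard dimensionality-reduction technique; a sketch with scikit-learn (the digits dataset is just a convenient 64-dimensional example):

```python
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA

X, _ = load_digits(return_X_y=True)  # 8x8 images -> 64 features each
pca = PCA(n_components=2)
X_2d = pca.fit_transform(X)          # project 64 dimensions down to 2

print(X.shape, "->", X_2d.shape)
print("variance explained:", pca.explained_variance_ratio_.sum())
```

The two retained components can then be plotted, which is exactly the visualization that the original 64 dimensions do not allow.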
What should you do when your model is suffering from low bias and high variance?
When a model suffers from low bias and high variance, it is essentially overfitting: its accuracy on the training set is much higher than its accuracy on the test set. In such a situation, regularization techniques can be applied, or the model can be simplified, for example by reducing the number of features in the dataset.
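One way to see the effect of simplifying a model (a sketch assuming scikit-learn; the depth limit of 3 is arbitrary): an unconstrained decision tree memorizes the training set, while a shallow one trades a little training accuracy for a smaller train/test gap.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

deep = DecisionTreeClassifier(random_state=0).fit(X_tr, y_tr)                  # low bias, high variance
shallow = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_tr, y_tr)  # constrained

print("deep:    train", deep.score(X_tr, y_tr), "test", deep.score(X_te, y_te))
print("shallow: train", shallow.score(X_tr, y_tr), "test", shallow.score(X_te, y_te))
```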
Explain the differences between random forest and gradient boosting algorithms.
Random forest uses bagging techniques whereas GBM uses boosting techniques. Random forests mainly reduce variance, by averaging many independently trained trees, while gradient boosting mainly reduces bias, by training trees sequentially so that each one corrects the errors of the ones before it.
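Both are tree ensembles in scikit-learn (an assumed setup; the hyperparameters below are defaults, not tuned), and the two classes mirror the bagging/boosting split directly:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = load_breast_cancer(return_X_y=True)

rf = RandomForestClassifier(n_estimators=100, random_state=0)       # bagging: independent trees, averaged
gbm = GradientBoostingClassifier(n_estimators=100, random_state=0)  # boosting: sequential trees, additive

rf_score = cross_val_score(rf, X, y, cv=3).mean()
gbm_score = cross_val_score(gbm, X, y, cv=3).mean()
print("random forest:", rf_score)
print("gradient boosting:", gbm_score)
```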