What is multicollinearity?
Data Science | by Sunny Srinidhi | August 8, 2018 (updated January 30, 2020)

Multicollinearity is a term we often come across when we're working with multiple regression models. But do we actually know what it means?
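As a quick illustration of the idea (a minimal sketch, not from the post itself, assuming statsmodels is installed): when one feature is nearly a linear function of another, its variance inflation factor blows up.

```python
import numpy as np
from statsmodels.stats.outliers_influence import variance_inflation_factor

rng = np.random.default_rng(0)
x1 = rng.normal(size=100)
x2 = 2 * x1 + rng.normal(scale=0.1, size=100)  # x2 is almost a linear function of x1
x3 = rng.normal(size=100)                      # x3 is independent of the others
X = np.column_stack([x1, x2, x3])

# A VIF well above roughly 5-10 is conventionally read as multicollinearity.
for i in range(X.shape[1]):
    print(f"VIF of feature {i}: {variance_inflation_factor(X, i):.1f}")
```

Here x1 and x2 get very large VIFs while x3 stays near 1, which is exactly the symptom multicollinearity produces in a multiple regression model.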
Overfitting and Underfitting models in Machine Learning
Data Science | by Sunny Srinidhi | August 2, 2018

In most of our posts about machine learning, we've talked about overfitting and underfitting. But most of us don't yet know what those two terms mean. What does it actually mean when a model is overfit, or underfit? Why are they considered bad? And how do they affect the accuracy of our model's predictions? These are some of the basic, but important, questions we need to ask and get answers to. So let's discuss these two today. The datasets we use for training and testing our models play a huge role in how efficient our models are. It's equally important to understand the data we're working with. The quantity and the quality of the data also matter, obviously. When the data...
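To make the two terms concrete (a hedged sketch, not code from the post, assuming standard scikit-learn APIs): fitting polynomials of increasing degree to noisy data shows both failure modes. A low training score signals underfitting; a large gap between training and test scores signals overfitting.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(42)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X).ravel() + rng.normal(scale=0.3, size=200)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for degree in (1, 4, 15):  # roughly: underfit, reasonable, overfit
    poly = PolynomialFeatures(degree)
    Xtr = poly.fit_transform(X_train)
    Xte = poly.transform(X_test)
    model = LinearRegression().fit(Xtr, y_train)
    # R^2 on training vs. test data; watch the gap grow with degree.
    print(degree, model.score(Xtr, y_train), model.score(Xte, y_test))
```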
Different types of Validations in Machine Learning (Cross Validation)
Data Science | by Sunny Srinidhi | August 1, 2018

Now that we know what feature selection is and how to do it, let's move our focus to validating the efficiency of our model. This is known as validation or cross validation, depending on which validation method you're using. But before that, let's try to understand why we need to validate our models.

Validation, or Evaluation of Residuals

Once you are done fitting your model to your training data, and you've also tested it with your test data, you can't just assume that it's going to work well on data it has not seen before. In other words, you can't be sure that the model will have the desired accuracy and variance in your production environment. You need...
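The most common form is k-fold cross validation. A minimal sketch (illustrative, not taken from the post) using scikit-learn's cross_val_score:

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = load_iris(return_X_y=True)

# 5-fold cross validation: train on 4 folds, validate on the held-out 5th,
# then rotate so every fold serves as the validation set once.
scores = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5)
print(scores)                        # one accuracy per fold
print(scores.mean(), scores.std())  # a stabler estimate than a single split
```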
Different methods of feature selection
Data Science | by Sunny Srinidhi | July 31, 2018 (updated November 6, 2019)

In our previous post, we discussed what feature selection is and why we need it. In this post, we're going to look at the different methods used in feature selection. There are three main categories of feature selection methods: Filter Methods, Wrapper Methods, and Embedded Methods. We'll look at each of them individually.

Filter Methods

Filter methods are learning-algorithm-agnostic, which means they can be employed no matter which learning algorithm you're using. They're generally used as data pre-processors. In filter methods, each individual feature in the dataset is scored on its correlation with the dependent variable. A variety of statistical tests can be used to calculate this correlation score. Based on this score, it is decided whether to...
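A minimal filter-method sketch (illustrative, not from the post) using scikit-learn's SelectKBest, which scores each feature independently against the target with a statistical test and keeps the top k:

```python
from sklearn.datasets import load_iris
from sklearn.feature_selection import SelectKBest, f_classif

X, y = load_iris(return_X_y=True)

# Score each feature against the target with an ANOVA F-test,
# then keep only the two highest-scoring features.
selector = SelectKBest(score_func=f_classif, k=2)
X_selected = selector.fit_transform(X, y)

print(selector.scores_)   # per-feature scores
print(X_selected.shape)   # (150, 2): two features survive
```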
What is Feature Selection and why do we need it in Machine Learning?
Data Science | by Sunny Srinidhi | July 31, 2018 (updated November 11, 2019)

If you've come across a dataset in your machine learning endeavors which has more than one feature, you'd also have heard of a concept called Feature Selection. Today, we're going to find out what it is and why we need it. When a dataset has too many features, it would not be ideal to include all of them in our machine learning model. Some features may be irrelevant to the dependent variable. For example, if you are going to predict how much it would cost to crush a car, and the features you're given are:

- the dimensions of the car
- if the car will be delivered to the crusher or the company has to go pick it up
- if the car...
Linear Regression in Python using SciKit Learn
Data Science | by Sunny Srinidhi | July 30, 2018

Today we'll be looking at a simple Linear Regression example in Python, and as always, we'll be using the SciKit Learn library. If you haven't yet looked into my posts about data pre-processing, which is required before you can fit a model, check out how you can encode your data to make sure it doesn't contain any text, and then how you can handle missing data in your dataset. After that, you have to make sure all your features are in the same range for the model, so that one feature doesn't dominate the whole output; for this, you need feature scaling. Finally, split your data into training and testing sets. Once you're done with all that, you're ready to start your...
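A minimal end-to-end sketch of what such an example typically looks like (assumed toy data, not the post's dataset):

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

# Toy data: y is roughly 3x + 4 plus some noise.
rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(100, 1))
y = 3 * X.ravel() + 4 + rng.normal(size=100)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

model = LinearRegression().fit(X_train, y_train)
print(model.coef_, model.intercept_)  # close to 3 and 4
print(model.score(X_test, y_test))    # R^2 on unseen data
```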
Why do we need feature scaling in Machine Learning and how to do it using SciKit Learn?
Data Science | by Sunny Srinidhi | July 27, 2018 (updated November 5, 2019)

When you're working with a learning model, it is important to scale the features to a range centered around zero. This is done so that the variances of the features are in the same range. If a feature's variance is orders of magnitude larger than the variance of other features, that particular feature might dominate the others in the dataset, which is not something we want happening in our model. The aim here is to achieve a Gaussian distribution with zero mean and unit variance. There are many ways of doing this; the two most popular are standardisation and normalisation. No matter which method you choose, the SciKit Learn library provides a class to easily scale our data. We can use the StandardScaler...
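A minimal StandardScaler sketch (toy numbers assumed for illustration): each column has its mean subtracted and is divided by its standard deviation, giving zero mean and unit variance per feature.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

# Two features on wildly different scales: salary vs. years of experience.
X = np.array([[30000.0, 1.0],
              [52000.0, 4.0],
              [90000.0, 9.0]])

scaler = StandardScaler()
X_scaled = scaler.fit_transform(X)  # per column: subtract mean, divide by std

print(X_scaled.mean(axis=0))  # ~0 for each feature
print(X_scaled.std(axis=0))   # ~1 for each feature
```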
How to split your dataset to train and test datasets using SciKit Learn
Data Science | by Sunny Srinidhi | July 27, 2018 (updated November 5, 2019)

When you're working on a model and want to train it, you obviously have a dataset. But after training, we have to test the model on some test dataset. For this, you'll need a dataset which is different from the training set you used earlier. But it might not always be possible to have that much data during the development phase. In such cases, the obvious solution is to split the dataset you have into two sets, one for training and the other for testing; and you do this before you start training your model. But the question is, how do you split the data? You can't possibly split the dataset into two manually. And you also have to make sure you split...
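This is what scikit-learn's train_test_split does in one call; a minimal sketch with assumed toy data:

```python
from sklearn.model_selection import train_test_split

X = list(range(10))
y = [i % 2 for i in X]

# 80/20 split, shuffled; random_state makes the split reproducible.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)

print(X_train, X_test)  # 8 training samples, 2 test samples
```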
Handle missing data in your training dataset with SciKit Imputer
Data Science | by Sunny Srinidhi | July 27, 2018 (updated November 5, 2019)

More often than not, you'll encounter a dataset in your data science projects where you'll have missing data in at least one column. In some cases, you can just ignore that row by taking it out of the dataset. But that won't always be the case. Sometimes, that row will be crucial for training, maybe because the dataset itself is very small and you can't afford to lose any rows, or maybe it holds some important data, or for some other reason. When this is the case, a very important question to answer is: how do you fill in the blanks? There are many approaches to solving this problem, and one of them is using SciKit's Imputer class. If you're...
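Note that the post predates a rename: current scikit-learn releases replace the old preprocessing Imputer with sklearn.impute.SimpleImputer. A minimal sketch of the same idea (toy data assumed):

```python
import numpy as np
from sklearn.impute import SimpleImputer  # replaces sklearn.preprocessing.Imputer

X = np.array([[1.0, 2.0],
              [np.nan, 3.0],
              [7.0, np.nan]])

# Fill each missing value with the mean of its column.
imputer = SimpleImputer(strategy="mean")
print(imputer.fit_transform(X))
# [[1.  2. ]
#  [4.  3. ]
#  [7.  2.5]]
```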
Label Encoder vs. One Hot Encoder in Machine Learning
Data Science, Tech | by Sunny Srinidhi | July 27, 2018 (updated November 6, 2019)

Update: SciKit has a new class called the ColumnTransformer which has replaced LabelEncoding. You can check out this updated post about ColumnTransformer to know more. If you're new to Machine Learning, you might get confused between these two: Label Encoder and One Hot Encoder. These two encoders are part of the SciKit Learn library in Python, and they are used to convert categorical data, or text data, into numbers, which our predictive models can better understand. Today, let's understand the difference between the two with a simple example.

Label Encoding

To begin with, you can find the SciKit Learn documentation for Label Encoder here. Now, let's consider the following data: In this example, the first column is the country column, which is all...
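A minimal side-by-side sketch of the two encoders (toy country data assumed, not the post's table):

```python
import numpy as np
from sklearn.preprocessing import LabelEncoder, OneHotEncoder

countries = np.array([["France"], ["Spain"], ["Germany"], ["Spain"]])

# Label encoding: one integer per category. This implicitly imposes an
# ordering (France < Germany < Spain) that isn't real.
labels = LabelEncoder().fit_transform(countries.ravel())
print(labels)  # [0 2 1 2]

# One-hot encoding: one binary column per category, so no false ordering.
# (On scikit-learn < 1.2, use sparse=False instead of sparse_output=False.)
onehot = OneHotEncoder(sparse_output=False).fit_transform(countries)
print(onehot)
```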