Linear Regression in Python using SciKit Learn
Data Science by Sunny Srinidhi - July 30, 2018

Today we'll be looking at a simple Linear Regression example in Python, and as always, we'll be using the SciKit Learn library. If you haven't yet looked into my posts about data pre-processing, which is required before you can fit a model, check out how you can encode your data to make sure it doesn't contain any text, and then how you can handle missing data in your dataset. After that, you have to make sure all your features are in the same range for the model, so that one feature does not dominate the whole output; for this, you need feature scaling. Finally, split your data into training and testing sets. Once you're done with all that, you're ready to start your …
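The excerpt cuts off before the model-fitting step, so as a rough sketch of what fitting a linear regression with SciKit Learn typically looks like (the toy data and variable names below are assumptions for illustration, not taken from the full post):

import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

# Toy dataset: one feature with a roughly linear relationship to the target
X = np.arange(20).reshape(-1, 1)                  # feature matrix, shape (20, 1)
y = 3.0 * X.ravel() + 5.0 + np.random.randn(20)   # target with a little noise

# Split into training and test sets before fitting
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

# Fit the model on the training data and predict on the test data
regressor = LinearRegression()
regressor.fit(X_train, y_train)
y_pred = regressor.predict(X_test)

print(regressor.coef_, regressor.intercept_)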
Why do we need feature scaling in Machine Learning and how to do it using SciKit Learn?
Data Science by Sunny Srinidhi - July 27, 2018 (updated November 5, 2019)

When you're working with a learning model, it is important to scale the features to a range that is centered around zero. This is done so that the variances of the features are in the same range. If a feature's variance is orders of magnitude larger than the variance of other features, that particular feature might dominate the other features in the dataset, which is not something we want happening in our model. The aim here is to achieve a Gaussian distribution with zero mean and unit variance. There are many ways of doing this; the two most popular are standardisation and normalisation. No matter which method you choose, the SciKit Learn library provides a class to easily scale our data. We can use the StandardScaler …
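For context, a minimal sketch of standardisation with StandardScaler; the numbers here are made up purely to show two features on very different scales:

import numpy as np
from sklearn.preprocessing import StandardScaler

# Two features on very different scales (e.g. age in years, salary in dollars)
X = np.array([[25, 40000],
              [32, 85000],
              [47, 120000],
              [51, 62000]], dtype=float)

# Fit the scaler on the data, then transform it to zero mean and unit variance
scaler = StandardScaler()
X_scaled = scaler.fit_transform(X)

print(X_scaled.mean(axis=0))  # approximately 0 for each column
print(X_scaled.std(axis=0))   # approximately 1 for each column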
How to split your dataset to train and test datasets using SciKit Learn
Data Science by Sunny Srinidhi - July 27, 2018 (updated November 5, 2019)

When you're working on a model and want to train it, you obviously have a dataset. But after training, we have to test the model on some test dataset. For this, you'll need a dataset which is different from the training set you used earlier. But it might not always be possible to have that much data during the development phase. In such cases, the obvious solution is to split the dataset you have into two sets, one for training and the other for testing; and you do this before you start training your model. But the question is, how do you split the data? You can't possibly split the dataset into two manually. And you also have to make sure you split …
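The splitting the excerpt describes is what SciKit Learn's train_test_split helper does; a minimal sketch with assumed toy data:

import numpy as np
from sklearn.model_selection import train_test_split

# Toy feature matrix and labels, assumed for illustration
X = np.arange(10).reshape(-1, 1)
y = np.array([0, 1, 0, 1, 0, 1, 0, 1, 0, 1])

# Hold out 20% of the rows for testing; random_state makes the split reproducible
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)

print(X_train.shape, X_test.shape)  # (8, 1) (2, 1)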
Handle missing data in your training dataset with SciKit Imputer
Data Science by Sunny Srinidhi - July 27, 2018 (updated November 5, 2019)

More often than not, you'll encounter a dataset in your data science projects where you'll have missing data in at least one column. In some cases, you can just ignore that row by taking it out of the dataset. But that won't always be the case. Sometimes, that row could be crucial for the training, maybe because the dataset itself is very small and you can't afford to lose any rows, or maybe it holds some important data, or for some other reason. When this is the case, a very important question to answer is: how do you fill in the blanks? There are many approaches to solving this problem, and one of them is using SciKit's Imputer class. If you're …
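Note that the Imputer class referenced here was the scikit-learn API at the time the post was written; in current versions the equivalent is SimpleImputer from sklearn.impute. A minimal sketch of mean imputation with made-up data (not the dataset used in the full post):

import numpy as np
from sklearn.impute import SimpleImputer

# A small feature matrix with missing values (np.nan) in some cells
X = np.array([[1.0, 2.0],
              [np.nan, 3.0],
              [7.0, np.nan],
              [4.0, 6.0]])

# Replace each missing value with the mean of its column
imputer = SimpleImputer(missing_values=np.nan, strategy="mean")
X_filled = imputer.fit_transform(X)

print(X_filled)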