
Which Feature Scaling Technique To Use: Standardization vs Normalization

Last Updated on July 5, 2022 by Editorial Team

Author(s): Gowtham S R

Originally published on Towards AI, the world's leading AI and technology news and media company.

Is feature scaling mandatory? When should you use standardization, and when normalization? What happens to the distribution of the data? What is the effect on outliers? Will the model's accuracy increase?


When we begin the data science journey, we encounter several confusing questions about feature scaling.


The above questions are frequently asked in interviews too. I will try to answer them in this blog with suitable examples, using sklearn's StandardScaler and MinMaxScaler.

Let us consider a dataset in which Age and Estimated Salary are the input features, and we have to predict whether the product is Purchased (the output label) or not.

Take a look at the first 5 rows of our data.

The first 5 rows of the dataset

What is StandardScaler?

StandardScaler, also known as Z-score normalization, is a feature scaling technique that transforms each feature by subtracting the mean and dividing by the standard deviation. The resulting data will have a mean of 0 and a standard deviation of 1.

Formula to calculate the Z-score: z = (x − μ) / σ

Now that we have seen the formula for standard scaling, let us look at how it can be applied to our dataset.

First, we shall divide our data into train and test sets and apply a standard scaler.
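The original code screenshots are not reproduced here, but the step can be sketched roughly as follows. Since the article's dataset is not included, this sketch generates synthetic Age and Estimated Salary columns as stand-ins; the fit-on-train, transform-on-test pattern is the key point.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for the article's dataset: Age and Estimated Salary.
rng = np.random.default_rng(42)
X = np.column_stack([
    rng.integers(18, 60, size=400),         # Age
    rng.integers(15_000, 150_000, size=400) # Estimated Salary
]).astype(float)
y = rng.integers(0, 2, size=400)            # Purchased (0/1)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

scaler = StandardScaler()
X_train_scaled = scaler.fit_transform(X_train)  # learn mean/std on train only
X_test_scaled = scaler.transform(X_test)        # reuse train statistics on test

print(X_train_scaled.mean(axis=0).round(6))  # approximately [0, 0]
print(X_train_scaled.std(axis=0).round(6))   # approximately [1, 1]
```

Note that the scaler is fitted only on the training set; applying the same train-derived statistics to the test set avoids leaking test information into preprocessing.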

Description of the dataset:

Note that the describe method applied to the X_train_scaled data shows that the mean is 0 and the standard deviation is 1 after applying the standard scaler.

Effect on the distribution of data:

Effect of standard scaling on the distribution of the data

From the above scatter plots and KDE plots, we can see that the distribution of the data remains the same even after applying StandardScaler; only the scale changes.
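This can also be checked numerically: standardization is a linear transform, so shape statistics such as skewness are unchanged. A minimal sketch, using a synthetic right-skewed age distribution as a stand-in for the article's data:

```python
import numpy as np
from scipy.stats import skew
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
ages = rng.gamma(shape=2.0, scale=10.0, size=1000) + 18  # right-skewed ages

scaled = StandardScaler().fit_transform(ages.reshape(-1, 1)).ravel()

# Subtracting the mean and dividing by the std shifts and rescales the data,
# but does not change its shape, so skewness is identical before and after.
print(round(skew(ages), 6), round(skew(scaled), 6))
```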

How do different machine learning models perform before and after scaling?

Training various models
The behavior of different machine learning algorithms before and after scaling

In the above examples, the accuracy of logistic regression and KNN increased significantly after scaling, but there was no effect on accuracy when a decision tree or random forest was used.
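The comparison in the (not reproduced) code screenshots can be sketched roughly as below. This uses a synthetic dataset with one feature deliberately put on a much larger scale, not the article's dataset, so the exact accuracies will differ; the pattern to note is that the tree's score is essentially unaffected by scaling, since trees split on per-feature thresholds.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=600, n_features=2, n_informative=2,
                           n_redundant=0, random_state=0)
X = X * np.array([1.0, 1000.0])  # put the two features on very different scales

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

models = {
    "logreg": LogisticRegression(max_iter=1000),
    "knn": KNeighborsClassifier(),
    "tree": DecisionTreeClassifier(random_state=0),
}

results = {}
for name, model in models.items():
    raw = model.fit(X_tr, y_tr).score(X_te, y_te)
    # Same model wrapped in a pipeline that standardizes first.
    scaled = make_pipeline(StandardScaler(), model).fit(X_tr, y_tr).score(X_te, y_te)
    results[name] = (raw, scaled)
    print(f"{name}: raw={raw:.3f} scaled={scaled:.3f}")
```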

Effect on Outliers

The above plots show that the outliers in our data remain outliers even after applying standard scaling. So, as data scientists, it is our responsibility to handle them.
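A small sketch of why this happens, on made-up salary values (not the article's data): because the transform is linear, the relative position of every point is preserved, so an extreme value stays extreme.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

# Five ordinary salaries plus one extreme outlier.
salaries = np.array([30_000, 35_000, 40_000, 45_000, 50_000, 1_000_000], dtype=float)
scaled = StandardScaler().fit_transform(salaries.reshape(-1, 1)).ravel()

# The outlier is still the largest value after scaling: standardization
# shifts and rescales, but preserves the ordering of the points.
print(scaled.round(2))
print(np.argmax(salaries) == np.argmax(scaled))  # True
```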

What is MinMaxScaler?

MinMaxScaling (a commonly used normalization technique) is a feature scaling technique that transforms features by subtracting the minimum value of the data and dividing by the range (maximum minus minimum). The resulting values lie between 0 and 1.

Formula for MinMaxScaling: x′ = (x − min(x)) / (max(x) − min(x))

Now that we have seen the formula for min-max scaling, let us look at how it can be applied to our dataset.
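As with the standard scaler above, the code screenshots are not reproduced here, so this is a sketch on synthetic Age and Estimated Salary stand-ins; the pattern mirrors the StandardScaler example, with MinMaxScaler swapped in.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler

# Synthetic stand-in for the article's Age and Estimated Salary columns.
rng = np.random.default_rng(1)
X = np.column_stack([
    rng.integers(18, 60, size=400),
    rng.integers(15_000, 150_000, size=400),
]).astype(float)

X_train, X_test = train_test_split(X, test_size=0.25, random_state=0)

scaler = MinMaxScaler()
X_train_scaled = scaler.fit_transform(X_train)  # learn min/max on train only
X_test_scaled = scaler.transform(X_test)        # reuse train min/max on test

print(X_train_scaled.min(axis=0))  # [0. 0.]
print(X_train_scaled.max(axis=0))  # [1. 1.]
```

Note that because the min and max come from the training set, test-set values outside the training range can fall slightly below 0 or above 1.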

Description of the dataset:

Describing the data before min-max scaling

Note that the minimum value of both input features, Age and Estimated Salary, has become 0 and the maximum value has become 1 after applying min-max scaling.

Effect on the distribution of the data:

Scatter plots before and after min-max scaling

From the above scatter plots and KDE plots, we can see that the distribution of the data remains the same even after applying MinMaxScaler; only the scale changes.

Effect of MinMaxScaler on different Machine Learning algorithms:

Effect on accuracy

In the above examples, the accuracy of logistic regression and KNN increased significantly after scaling, but there was no effect on accuracy when a decision tree or random forest was used.

Effect on outliers:

As shown above, min-max scaling has no effect on outliers either: they remain outliers after the transformation.

Observations:

  • The resulting data after standardization will have a mean of 0 and a standard deviation of 1, whereas the resulting data after min-max scaling will have a minimum value of 0 and a maximum value of 1 (here the mean and standard deviation can be anything).
  • The scatter plots and distplots above show that there is no change in the distribution of the data before and after applying the standard scaler or min-max scaler; only the scale changes.
  • Feature scaling has to be performed when applying algorithms that calculate distances (e.g., KNN, K-means) or use gradient descent (e.g., linear and logistic regression, neural networks).
  • Scaling has no effect when we use tree-based algorithms like decision trees or random forests.
  • In the above examples, the accuracy of logistic regression and KNN increased significantly after scaling, but there was no effect on accuracy when a decision tree or random forest was used.
  • Outliers in the dataset will still remain outliers even after applying feature scaling; as data scientists, it is our responsibility to handle them.
  • There is no hard rule for which technique to use; we need to try both standardization and normalization and decide based on the results.

If you have any confusion about the confusion matrix, precision, and recall, then please read the blog below.

Confusion Matrix

If you want to know how to analyze IPL data, please read the blog below.

Virat Kohli vs KL Rahul

Please visit the following GitHub links to get the full code.

Connect with me on LinkedIn

To get the data science book, click here.



Which Feature Scaling Technique To Use- Standardization vs Normalization was originally published in Towards AI on Medium, where people are continuing the conversation by highlighting and responding to this story.


Published via Towards AI
