
Principal Component Analysis (PCA) with Python Examples — Tutorial
An in-depth tutorial on principal component analysis (PCA) with mathematics and Python coding examples
Last updated: January 8, 2021
Author(s): Saniya Parveez, Roberto Iriondo
This tutorial’s code is available on GitHub, with a full implementation on Google Colab.
Table of Contents
- Introduction
- Curse of Dimensionality
- Dimensionality Reduction
- Correlation and its Measurement
- Feature Selection
- Feature Extraction
- Linear Feature Extraction
- Principal Component Analysis (PCA)
- Math behind PCA
- How does PCA work?
- Applications of PCA
- Implementation of PCA with Python
- Conclusion
Introduction
When implementing machine learning algorithms, adding more features does not always improve performance. Beyond a certain point, increasing the number of features can actually degrade classification accuracy, a phenomenon known as the curse of dimensionality. Hence, we apply dimensionality reduction to improve classification accuracy by selecting an optimal set of lower-dimensional features.
Principal component analysis (PCA) is essential for data science, machine learning, data visualization, statistics, and other quantitative fields.
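To make the idea of dimensionality reduction concrete before the full tutorial, here is a minimal sketch of PCA using only NumPy. The data here is synthetic and purely illustrative (five correlated features generated from a two-dimensional source), not from the tutorial's dataset:

```python
import numpy as np

# Synthetic toy data (hypothetical): 100 samples with 5 correlated
# features built from a hidden 2-dimensional source plus small noise.
rng = np.random.default_rng(0)
base = rng.normal(size=(100, 2))
X = base @ rng.normal(size=(2, 5)) + 0.01 * rng.normal(size=(100, 5))

# PCA via the eigendecomposition of the covariance matrix.
X_centered = X - X.mean(axis=0)
cov = np.cov(X_centered, rowvar=False)
eigenvalues, eigenvectors = np.linalg.eigh(cov)

# Sort components by explained variance, largest first.
order = np.argsort(eigenvalues)[::-1]
eigenvalues, eigenvectors = eigenvalues[order], eigenvectors[:, order]

# Project onto the top 2 principal components: 5 features -> 2.
X_reduced = X_centered @ eigenvectors[:, :2]
print(X_reduced.shape)  # (100, 2)
print(eigenvalues[:2].sum() / eigenvalues.sum())  # fraction of variance kept
```

Because the data is essentially two-dimensional, the first two components capture nearly all of the variance, which is exactly the situation where reducing dimensionality loses little information.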