The Curse Of Dimensionality in KNN Classifiers
Last Updated on December 21, 2023 by Editorial Team
Author(s): Tim Cvetko
Originally published on Towards AI.
Exploring the troublesome effect of "high dimensionality" on nearest-neighbor algorithms
Source: https://scipy-lectures.org/packages/scikit-learn/auto_examples/plot_iris_knn.html
In this article, we'll explore the effect of the curse of dimensionality on the KNN algorithm, starting with a brief overview of how KNN works and building toward a proper intuition for the curse itself.
Who is this useful for? Those acquainted with machine learning and classification algorithms, and anyone getting there.
How advanced is this post? This post is primarily intended for more experienced engineers.
Prerequisites: I'll briefly cover the KNN algorithm in this article, but you can refer to the following article for more information on the subject.
KNN: K Nearest Neighbour is one of the fundamental algorithms to start Machine Learning (towardsdatascience.com).
Before we get into the curse of dimensionality, I want to go over the KNN algorithm briefly. In its most basic sense, the KNN algorithm bundles similar items together and literally finds the "nearest neighbors."
Here's how it works: given a dataset with labeled points, when you want to classify a new data point, KNN identifies the K nearest points in the feature space. The class or value assigned to the new point is then determined by a majority vote (for classification) or an average (for regression) over these K neighbors. The "nearest" points are found with a distance metric, most commonly Euclidean distance.
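To make the voting procedure concrete, here is a minimal sketch using scikit-learn's KNeighborsClassifier on the Iris dataset (the same dataset pictured in the figure above). The value k=5 and the train/test split are illustrative assumptions, not values taken from the original article.

```python
# A minimal sketch of KNN classification on the Iris dataset.
# k=5 and the 75/25 split are illustrative choices.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42
)

# For each new point, the classifier finds the k nearest training points
# (Euclidean distance by default) and takes a majority vote of their labels.
knn = KNeighborsClassifier(n_neighbors=5)
knn.fit(X_train, y_train)

print("Test accuracy:", knn.score(X_test, y_test))
```

Note that k is the key hyperparameter here: small values make the decision boundary noisy, while large values smooth it out at the cost of blurring class boundaries.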
Published via Towards AI