WORLD OF CLASSIFICATION IN MACHINE LEARNING
Last Updated on January 6, 2023 by Editorial Team
Author(s): Data Science meets Cyber Security
SUPERVISED MACHINE LEARNING - PART 1
1. CLASSIFICATION:

Classification is the act of categorizing something, as the name implies. Putting it more analytically, classification is the process of categorizing data into classes to gain a better understanding of it. Classification is a type of supervised learning method which can be applied to both structured and unstructured data.
So what we are really trying to do is use classification to predict outcomes for given data points based on the probability of which category they will fall into!
Honestly, the only question now is: how can we classify data more precisely so that we can understand it better?
Let's take a simple example: ONLINE DATING!
According to 2022 studies, there are over 8,000 dating apps and sites available worldwide, with 323 million users. Isn't it huge? These applications promise their 323 million users the right life partner based on their common traits, and the users expect a perfect future partner to start a family, have children, live happily, and have fun.

First and foremost, all of these dating apps use an amazing combination of artificial intelligence and machine learning to generate personalized matches, but how does the app know what common traits those matches share? The answer is most likely classification.
If you are one of those 323 million people, you are already familiar with dating apps, but for those who aren't, here is a good example:
Imagine a user interface with a stack of people: you swipe right if you like the person on the screen and left if you don't. Psychologically, a curious human mind won't stop after swiping on just one person, so while you are in this process of swiping:
- Suppose you are swiping right on profiles that mention "THE OFFICE" as their favorite show.
- The application will then classify people from the stack who have "THE OFFICE" as their favorite show (one of many traits).
- And, within a few seconds, you will see mostly profiles that mention "THE OFFICE" as their favorite show.
So machine learning is classifying your recommendations based on the traits it believes you prefer, even if in reality those traits may not matter much! This is how classification works; it can be based on a variety of characteristics, and the above was just a general example to provide context.
TYPES OF CLASSIFICATION TECHNIQUES WHICH ARE USED IN MACHINE LEARNING:
LOGISTIC REGRESSION:
Let's look at some of the problems where logistic regression can be used to find solutions:
- Increasing the reach, followers, likes, and comments on Instagram
- To predict the future stock price movement.
- To predict if a patient will get diabetes or not.
- To classify a mail as spam or non-spam.
LET'S TAKE A LOOK AT THE CASE STUDIES:
CASE STUDY 1: Suppose that, based on income level, I want to predict or classify whether a person is going to buy my product or not.

A LITTLE DESCRIPTION TO UNDERSTAND THINGS BETTER:
- The left graph depicts the people who would buy the product, labeled as 1.
- The people who will not buy the product because of their income are represented as 0 in the right graph.
- Now, we can see in the right graph that there is a line drawn on purchase, which we can think of as a threshold value.
- The threshold simply means that people on one side of the line have a low income and cannot afford the product, whereas people on the other side have a higher income and can afford the product (a minimal sketch of this idea follows below).
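Here is a minimal sketch of this case study in Python, assuming a tiny, made-up set of income figures and purchase labels (the numbers are illustrative only, not from the article):

```python
# A minimal sketch of Case Study 1: predicting purchase (1) vs. no purchase (0)
# from income (in thousands). All numbers below are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

incomes = np.array([[15], [22], [28], [35], [48], [55], [62], [80]])  # feature: income
bought  = np.array([  0,    0,    0,    0,    1,    1,    1,    1])   # label: bought or not

model = LogisticRegression()
model.fit(incomes, bought)

# Estimated probability that a person with a 40k income buys the product
print(model.predict_proba([[40]])[0][1])
```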
CASE STUDY 2: We want to plot a graph of the average number of times people have shopped per month and how much money they have spent on each purchase:

A LITTLE DESCRIPTION TO UNDERSTAND THINGS BETTER:
- We can see that linear regression is incapable of distinguishing between high-value and low-value customers.
- Linear regression output values can fall anywhere in the range (-∞, ∞), whereas the actual values in this case (i.e., binary classification) are limited to 0 and 1.
- This is insufficient for such classification tasks; we require a function that outputs values between 0 and 1.
- This is enabled by a sigmoid, or logistic, function, hence the name LOGISTIC REGRESSION (see the sketch below).
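For reference, here is a minimal sketch of the sigmoid function itself, showing how it squashes any real-valued input into the (0, 1) range:

```python
# The sigmoid (logistic) function maps any value from (-inf, inf) into (0, 1),
# which is what we need for binary classification.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for z in [-10, -1, 0, 1, 10]:
    print(z, round(float(sigmoid(z)), 4))  # close to 0 for large negative z, 0.5 at 0, close to 1 for large positive z
```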
NAIVE BAYES CLASSIFIER:
Let's have a look at some classification problems with multiple classes:
- Given an article, predict which section of the newspaper (i.e., current news, international, arts, sports, fashion, etc.) it should be published in.
- Given a photo of a car number plate, identify which country it belongs to.
- Given an audio clip of a song, identify the genre of the song.
- Given an email, predict whether the email is fraudulent or not.
MATHEMATICALLY SPEAKING:
PROBLEM:
Given certain evidence X, what is the probability that it belongs to class Yi, i.e., P(Yi | X)?
SOLUTION:
Naive Bayes makes predictions, P(Yi | X), using Bayes' theorem after estimating the joint probability distribution of X and Y, i.e., P(X and Y).
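Concretely, Bayes' theorem gives P(Yi | X) = P(X | Yi) · P(Yi) / P(X). Below is a minimal sketch using scikit-learn's MultinomialNB on a tiny, made-up set of article snippets labeled by newspaper section (the texts and labels are invented for illustration):

```python
# A minimal sketch of a Naive Bayes text classifier for the "newspaper section"
# example. The snippets and labels below are purely illustrative.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

texts  = ["the team won the final match",
          "the striker scored a late goal",
          "the gallery opened a new exhibition",
          "the painting sold at the auction"]
labels = ["Sports", "Sports", "Arts", "Arts"]

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(texts)      # evidence X: word counts
clf = MultinomialNB().fit(X, labels)     # estimates P(X | Yi) and P(Yi)

new_article = vectorizer.transform(["a goal in the last minute of the match"])
print(clf.predict(new_article))          # -> ['Sports'], the class with the highest P(Yi | X)
```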

K-NEAREST NEIGHBORS (KNN CLASSIFIER):
To better understand what the KNN algorithm does, consider the following real-world applications:
1. KNN is a beautiful algorithm used in recommendation systems.
2. KNN can measure the similarity between two documents when each document is represented as a vector.
This reminds me again of dating apps, which use recommendation engines to analyze profiles, user likes, dislikes, and behaviors and provide recommendations to find a perfect match for them.

Take TINDER as an example: Tinder employs VecTec, a hybrid machine learning and artificial intelligence algorithm that helps generate personalized recommendations. Tinder users are classified as Swipes and Swipers, according to Tinder's chief scientist Steve Liu.
That is, every swipe a user makes is recorded in an embedding vector and treated as one of the user's many traits (like favorite series, food, educational background, hobbies, activities, vacation destinations, and many others).
When the recommendation algorithm detects similarity between two embedding vectors (two users with similar traits), it recommends them to each other. (IT'S DESCRIBED AS A PERFECT MATCH!)
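As a rough, hypothetical illustration of the idea (not Tinder's actual VecTec algorithm), here is how two invented trait vectors could be compared with cosine similarity:

```python
# A hypothetical sketch of comparing two users' trait vectors with cosine
# similarity. The trait encoding below is invented for illustration only.
import numpy as np

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Each position encodes an invented trait, e.g. [likes The Office, likes hiking,
# likes cooking, likes travel], with values built up from past swipes.
user_a = np.array([0.9, 0.1, 0.7, 0.8])
user_b = np.array([0.8, 0.2, 0.6, 0.9])
user_c = np.array([0.1, 0.9, 0.2, 0.1])

print(cosine_similarity(user_a, user_b))  # high similarity -> likely recommendation
print(cosine_similarity(user_a, user_c))  # low similarity -> unlikely recommendation
```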
- K-Nearest Neighbors is one of the most basic forms of instance-based learning.
AT TRAINING TIME:
- Save the training examples.
AT PREDICTION TIME:
- Find the 'k' training examples (x1, y1), …, (xk, yk) that are closest to the test example x, and predict the most frequent class among those yi.
There is a great example from the internet where the author explains the KNN algorithm in the most basic way, i.e., if it walks like a duck and quacks like a duck, then it's probably a duck.
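Putting the two steps together, here is a minimal sketch using scikit-learn's KNeighborsClassifier on made-up "duck-like" points:

```python
# A minimal sketch of KNN: store the training examples, then classify a new
# point by the majority class among its k nearest neighbors. Points are invented.
from sklearn.neighbors import KNeighborsClassifier

X_train = [[1, 1], [1, 2], [2, 1],    # "duck-like" examples
           [8, 8], [8, 9], [9, 8]]    # "goose-like" examples
y_train = ["duck", "duck", "duck", "goose", "goose", "goose"]

knn = KNeighborsClassifier(n_neighbors=3)
knn.fit(X_train, y_train)             # "training" is essentially just storing the examples

print(knn.predict([[1.5, 1.5]]))      # -> ['duck']: it walks and quacks like its 3 nearest neighbors
```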

KNN IS FURTHER DIVIDED INTO 3 TYPES:

DECISION TREES:
Decision trees are a game-changing algorithm in the world of prediction and classification. A decision tree is a tree-like flowchart in which each internal node represents a test on an attribute, each branch represents the test's outcome, and each leaf node holds a class label.

DECISION TREE TERMINOLOGY TO UNDERSTAND THINGS BETTER:
ROOT NODE:
→ The decision tree begins at the root node. It represents the entire dataset, which is then split into two or more homogeneous sets.
In our example, the root node is the 2 people.
LEAF NODE:
→ Leaf nodes are the tree's final output nodes; the tree cannot be split any further after a leaf node is reached.
In our example, the leaf nodes end in a match or no match.
SPLITTING:
→ In splitting, we divide the root node into further sub-nodes, i.e., classifying the root node.
In our example, the sub-nodes after splitting are characteristics like gender, animals, travel, sport, culture, etc.
SUB-TREE:
→ A tree created by splitting another tree.
In our example, the sub-trees are sexual preference, allergy to animals, or education.
PRUNING:
→ Pruning is the removal of undesirable branches from a tree.
PARENT/CHILD NODES:
→ The root node of the tree is called the parent node, and the other nodes are called the child nodes.
In our example, the 2 people are considered the root node, and the other sub-nodes are considered the child nodes.
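Tying the terminology together, here is a minimal sketch of a decision tree trained on an invented match/no-match dataset (the features and labels below are made up for illustration):

```python
# A minimal sketch of a decision tree on an invented "match / no match" dataset.
# Invented binary features: [same favorite show, likes animals, likes to travel].
from sklearn.tree import DecisionTreeClassifier, export_text

X = [[1, 1, 1], [1, 0, 1], [0, 1, 0], [0, 0, 0], [1, 1, 0], [0, 0, 1]]
y = ["match", "match", "no match", "no match", "match", "no match"]

tree = DecisionTreeClassifier(max_depth=2, random_state=0)
tree.fit(X, y)

# Print the learned splits: internal nodes test a feature, leaves hold a class label
print(export_text(tree, feature_names=["same_show", "likes_animals", "likes_travel"]))
```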

SUPPORT VECTOR MACHINES:
The SVM algorithm's goal is to find the best line or decision boundary for categorizing n-dimensional space so that we can easily place new data points in the correct category in the future. This best decision boundary is called a hyperplane.
SVM chooses the extreme points that help define the hyperplane. These extreme cases are referred to as support vectors, and the algorithm is therefore known as the Support Vector Machine.
LET'S TAKE THE SVM PARAMETER C:
- It controls the training error.
- It is used to prevent overfitting.
- Let's play with C; a small sketch follows below.
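Here is a minimal sketch of playing with C using scikit-learn's SVC on a toy dataset generated with make_classification (the dataset and the chosen C values are illustrative assumptions):

```python
# A minimal sketch of varying C: smaller C tolerates more training errors
# (wider margin, less overfitting), larger C penalizes training errors more
# heavily and can overfit.
from sklearn.datasets import make_classification
from sklearn.svm import SVC

X, y = make_classification(n_samples=200, n_features=2, n_redundant=0,
                           n_clusters_per_class=1, flip_y=0.1, random_state=0)

for C in [0.01, 1, 100]:
    clf = SVC(kernel="linear", C=C).fit(X, y)
    print(f"C={C:>6}: training accuracy = {clf.score(X, y):.3f}, "
          f"support vectors = {clf.n_support_.sum()}")
```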

METHODS TO EVALUATE CLASSIFICATION MODEL PERFORMANCE:

CONFUSION MATRIX METHOD:
LET'S UNDERSTAND IT THROUGH AN INTERESTING ANALOGY
Before going deep into how the confusion matrix works, let's start with the definition:
The confusion matrix helps us determine the performance of a classification model on given test data. The name comes from the fact that it makes it easy to see where the model is confusing two classes.

AN EXAMPLE TO MAKE IT QUICK AND EASY:
ASSUME,
X = the test data of ladies who have come for the checkup.
P = the set of ladies whose test is positive, i.e., they are pregnant.
NP = the set of ladies whose test is negative, i.e., they are not pregnant.
Let x be a lady from the given set of test data X.
CASE 1: How to calculate how many ladies have POSITIVE results, i.e., P:
P = { x ∈ X : x is pregnant }
CASE 2: How to calculate how many ladies have NEGATIVE results, i.e., NP:
NP = { x ∈ X : x is not pregnant }
POSSIBILITIES OF THE ABOVE CASE STUDIES:

1. A LADY WHO IS PREGNANT AND HER TEST IS ALSO POSITIVE.
Lady "A" is in set "X", and she tested positive for pregnancy and is pregnant → this is what we call a TRUE POSITIVE.
2. A LADY WHO IS NOT PREGNANT AND HER TEST IS ALSO NEGATIVE.
Lady "A" is in set "X", and she tested NEGATIVE for pregnancy and is NOT pregnant → this is what we call a TRUE NEGATIVE.
3. A LADY WHO IS PREGNANT, BUT HER TEST IS NEGATIVE.
Lady "A" is in set "X", and she tested negative for pregnancy, but she is pregnant → this is what we call a FALSE NEGATIVE.
4. A LADY WHO IS NOT PREGNANT, BUT SHE TESTS POSITIVE.
Lady "A" is in set "X", and she tested positive for pregnancy, but she is NOT pregnant → this is what we call a FALSE POSITIVE.
NOW, THIS IS WHERE THE CONFUSION MATRIX ENTERS:
A confusion matrix summarizes exactly these four outcomes for a classification algorithm.
The benefit of a confusion matrix is that it helps you understand your classification model: it shows exactly what the results are and whether they are accurate, and it also helps you find the errors the model is making.
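Here is a minimal sketch of building a confusion matrix for the pregnancy-test analogy with scikit-learn, using made-up actual and predicted labels:

```python
# A minimal sketch of a confusion matrix for the pregnancy-test analogy,
# with invented labels (1 = pregnant, 0 = not pregnant).
from sklearn.metrics import confusion_matrix

actual    = [1, 1, 1, 0, 0, 0, 1, 0]   # who is really pregnant
predicted = [1, 1, 0, 0, 0, 1, 1, 0]   # what the test reported

# Rows = actual class, columns = predicted class:
# [[true negatives, false positives],
#  [false negatives, true positives]]
print(confusion_matrix(actual, predicted))
```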
PRECISION AND RECALL METHOD:
Let's take a simple example to understand this method. Trust me, it's super easy and exciting.
CASE STUDY 1: Assume there are two types of malware, which are classified as Spyware and Adware. Now, we've created a model that can detect malware in a variety of business software. To do so, we must examine the predictions of our machine learning models.
MODEL 1: TRUE POSITIVE = 80, TRUE NEGATIVE = 30, FALSE POSITIVE = 0, FALSE NEGATIVE = 20
MODEL 2: TRUE POSITIVE = 90, TRUE NEGATIVE = 10, FALSE POSITIVE = 30, FALSE NEGATIVE = 0
As we can see, the false positive count in Model 1 is zero, which matters because we don't want our model to detect the wrong type of malware and cause confusion between the two groups of malware. Model 1 therefore has the higher precision value, so let's start there.
PRECISION = TRUE POSITIVE / (TRUE POSITIVE + FALSE POSITIVE)
Moving on, in an extreme cyber war we want to detect malware as soon as possible while keeping the groups apart, and Model 2 has 0 false negatives. This means it misses no malware at all: even in situations where the model does not need to separate the two groups perfectly, it detects everything, so we can put an end to the cyber war as soon as possible. This is measured by the RECALL method.
RECALL = TRUE POSITIVE / (TRUE POSITIVE + FALSE NEGATIVE)
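As a quick check, here is a small sketch that computes precision and recall for the two models using the counts given above:

```python
# Precision and recall for the two malware-detection models above.
def precision(tp, fp):
    return tp / (tp + fp)

def recall(tp, fn):
    return tp / (tp + fn)

# MODEL 1: TP = 80, TN = 30, FP = 0, FN = 20
print(precision(80, 0), recall(80, 20))   # -> precision 1.0, recall 0.8
# MODEL 2: TP = 90, TN = 10, FP = 30, FN = 0
print(precision(90, 30), recall(90, 0))   # -> precision 0.75, recall 1.0
```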
F1 SCORE:
Assume you've started a paper company, and it's making less money at first because it's new. However, you already have a large amount of paper and need a proper place to store that paper, as well as an office where you can hire a sales team to increase your sales. We don't know how many days, weeks, or months it will take to complete the sales. So how do we predict the deadline?

We need to create a model with a high F1 score, which is calculated from the recall and precision values, to predict that for us (a small sketch of the calculation follows below).
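Here is a minimal sketch of the F1 calculation, the harmonic mean of precision and recall, applied to the two models from the previous section:

```python
# F1 score: the harmonic mean of precision and recall.
def f1_score(precision, recall):
    return 2 * precision * recall / (precision + recall)

print(f1_score(1.0, 0.8))    # Model 1 -> ~0.889
print(f1_score(0.75, 1.0))   # Model 2 -> ~0.857
```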

THE HIGHER THE F1 SCORE, THE BETTER THE MODEL.
FOLLOW US FOR MORE FUN-TO-LEARN DATA SCIENCE BLOGS AND ARTICLES:
LINKEDIN: https://www.linkedin.com/company/dsmcs/
INSTAGRAM: https://www.instagram.com/datasciencemeetscybersecurity/?hl=en
GITHUB: https://github.com/Vidhi1290
TWITTER: https://twitter.com/VidhiWaghela
MEDIUM: https://medium.com/@datasciencemeetscybersecurity-
WEBSITE: https://www.datasciencemeetscybersecurity.com/
- Team Data Science meets Cyber Security ❤️