WORLD OF CLASSIFICATION IN MACHINE LEARNING
Last Updated on January 6, 2023 by Editorial Team
Author(s): Data Science meets Cyber Security
SUPERVISED MACHINE LEARNING - PART 1
1. CLASSIFICATION:
Classification is the act of categorizing something, as the name implies. Put more analytically, classification is the process of categorizing data into classes to gain a better understanding of it. Classification is a supervised learning method that can be applied to both structured and unstructured data.
So what exactly are we trying to do? We use classification to predict the outcome for given data points based on the likelihood of the category they will fall into!
Honestly, the only question now is: how can we classify data more precisely in order to understand it better?
Let's take a simple example: ONLINE DATING!
According to 2022 studies, there are over 8,000 dating apps and sites available worldwide, with 323 million users. Isn't that huge? These applications promise their 323 million users the right life partner based on common traits, and the users expect a perfect future partner to start a family with, have children, live happily, and have fun.
First and foremost, all of these dating apps use an amazing combination of artificial intelligence and machine learning to generate personalized matches, but how does the app know what common traits those matches share? The answer is most likely classification.
If you are one of the 323 million people, you are familiar with dating apps, but for those who aren't, here is a good example:
For instance, imagine a user interface with a stack of profiles: you swipe right if you like the person on the screen and left if you don't. Psychologically, a curious human mind won't stop after swiping on just one person, so while you're in this process of swiping:
- Consider you're swiping right on profiles that mention "THE OFFICE" as their favorite show.
- The application will then classify the people from the stack who have "THE OFFICE" as their favorite show (one of many traits).
- Within a few seconds, you will mostly see profiles that mention "THE OFFICE" as their favorite show.
So, machine learning classifies your recommendations based on the traits it believes you prefer, even if, in reality, that trait hardly matters! This is how classification works; it can be based on a variety of characteristics, and the above was just a general example to provide context.
TYPES OF CLASSIFICATION TECHNIQUES WHICH ARE USED IN MACHINE LEARNING:
LOGISTIC REGRESSION:
Let's look at some of the problems where logistic regression can be used to find solutions:
- To predict the reach, followers, likes, and comments a post will get on Instagram.
- To predict future stock price movement.
- To predict if a patient will get diabetes or not.
- To classify a mail as spam or non-spam.
LET'S TAKE A LOOK AT THE CASE STUDIES:
CASE STUDY 1: Suppose, based on income levels, I want to predict or classify whether a person is going to buy my product or not.
A LITTLE DESCRIPTION TO UNDERSTAND THINGS BETTER:
- The left graph depicts the people who would buy the product, labeled as 1.
- The right graph depicts the people who will not buy the product because of their income, labeled as 0.
- In the right graph, there is a line drawn on purchase, which we can think of as a threshold value.
- The threshold simply means that people inside the line have a low income and cannot afford the product, whereas people outside the line have a higher income and can afford it.
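This threshold idea can be sketched in a few lines of plain Python (the income figures and the cutoff of 50 are made-up numbers for illustration):

```python
# Hypothetical incomes (in thousands) and a hypothetical threshold of 50
incomes = [20, 35, 48, 55, 70, 90]
threshold = 50

# 1 = can afford / will buy the product, 0 = will not buy
labels = [1 if income > threshold else 0 for income in incomes]
print(labels)  # → [0, 0, 0, 1, 1, 1]
```

Everyone below the threshold is classified as 0, everyone above it as 1.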
CASE STUDY 2: We want to plot a graph of the average number of times people shop per month against how much money they spend on each purchase:
A LITTLE DESCRIPTION TO UNDERSTAND THINGS BETTER:
- We can see that linear regression is incapable of distinguishing between high-value and low-value customers.
- Linear regression outputs values in the range (-∞, ∞), whereas the actual values (i.e., binary classification) in this case are limited to 0 and 1.
- That is insufficient for such classification tasks; we need a function that can output values between 0 and 1.
- This is enabled by the sigmoid, or logistic, function, hence the name LOGISTIC REGRESSION.
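A minimal sketch of the sigmoid function that makes this possible (assuming NumPy is available):

```python
import numpy as np

def sigmoid(z):
    """Squash any real number into the (0, 1) range."""
    return 1.0 / (1.0 + np.exp(-z))

# A linear model can output any value in (-inf, inf)...
z = np.array([-10.0, 0.0, 10.0])
# ...but the sigmoid maps it to a probability between 0 and 1.
print(sigmoid(z))  # ≈ [0.0000454, 0.5, 0.9999546]
```

Logistic regression applies this squashing to a linear combination of the inputs, so its output can be read as the probability of the positive class.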
NAIVE BAYES CLASSIFIER:
Let's have a look at some of the classification problems with multiple classes:
- Given an article, predicting which section of the newspaper (current news, international, arts, sports, fashion, etc.) it should be published in.
- Given a photo of a car's number plate, identifying which country it belongs to.
- Given an audio clip of a song, identifying the genre of the song.
- Given an email, predicting whether the email is fraudulent or not.
MATHEMATICALLY SPEAKING:
PROBLEM:
Given certain evidence X, what is the probability that it comes from class Yi, i.e., P(Yi|X)?
SOLUTION:
Naive Bayes makes its predictions, P(Yi|X), using Bayes' theorem after estimating the joint probability distribution of X and Y, i.e., P(X and Y).
K-NEAREST NEIGHBOR (KNN CLASSIFIER)
To better understand what the KNN algorithm does, consider the following real-world application:
1. KNN is a beautiful algorithm used in recommendation systems.
2. KNN can search for similarities between two documents, each represented as a vector.
This reminds me again of dating apps that use recommendation engines to analyze profiles, user likes, dislikes, and behaviors, and provide recommendations to find a perfect match.
Take TINDER as an example: Tinder employs VecTec, a hybrid machine learning and artificial intelligence algorithm that helps generate personalized recommendations. Tinder users are classified as Swipes and Swipers, according to Tinder's chief scientist Steve Liu.
That is, every swipe a user makes is marked on an embedded vector and treated as one of the user's many traits (like favorite series, food, educational background, hobbies, activities, vacation destinations, and many others).
When the recommendation algorithm detects similarity between the two built-in vectors (two users with similar traits), it will recommend them to each other. (IT'S DESCRIBED AS A PERFECT MATCH!)
- K-Nearest Neighbors is one of the most basic forms of instance learning.
AT TRAINING TIME:
- Saving the training examples.
AT PREDICTION TIME:
- Find the k training examples (x1, y1), ..., (xk, yk) that are closest to the test example x, and predict the most frequent class among those yi.
I got this amazing example from the internet, where the author explains the KNN algorithm in the most basic way: if it walks like a duck and quacks like a duck, then it's probably a duck.
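The store-then-vote procedure above can be sketched in a few lines (a toy example with made-up 2D points and k = 3):

```python
import numpy as np
from collections import Counter

def knn_predict(X_train, y_train, x, k=3):
    """Find the k stored examples closest to x and
    return the most frequent class among their labels."""
    dists = np.linalg.norm(X_train - x, axis=1)  # distance to every stored example
    nearest = np.argsort(dists)[:k]              # indices of the k closest
    return Counter(y_train[nearest]).most_common(1)[0][0]

# "Training" is just saving the examples.
X_train = np.array([[1.0, 1.0], [1.2, 0.8], [5.0, 5.0], [5.2, 4.8]])
y_train = np.array(["duck", "duck", "goose", "goose"])

# Walks and quacks close to the ducks, so it's probably a duck.
print(knn_predict(X_train, y_train, np.array([1.1, 0.9])))  # → duck
```

No model is fitted at all; every prediction is a fresh lookup over the saved examples, which is why KNN is called instance (or lazy) learning.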
KNN IS FURTHER DIVIDED INTO 3 TYPES:
DECISION TREES:
Decision trees are a game-changing algorithm in the world of prediction and classification. A decision tree is a tree-like flowchart in which each internal node represents a test on an attribute, each branch represents the test's outcome, and each leaf node holds a class label.
DECISION TREE TERMINOLOGY TO UNDERSTAND THINGS BETTER:
ROOT NODE:
→ The decision tree begins at the root node. It represents the entire dataset, which is then split into two or more homogeneous sets.
In our example, the root node is the 2 people.
LEAF NODE:
→ Leaf nodes are the tree's final output nodes; the tree cannot be split further once a leaf node is reached.
In our example, the leaf nodes end in a match or no match.
SPLITTING:
→ In splitting, we divide the root node into further sub-nodes, i.e., classifying the root node.
In our example, the splitting sub-nodes are characteristics like gender, animals, travel, sport, culture, etc.
SUB-TREE:
→ A tree created by splitting another tree.
In our example, the sub-trees are sexual preference, allergic to animals, or education.
PRUNING:
→ Pruning is the removal of undesirable branches from a tree.
PARENT/CHILD NODES:
→ The root node of the tree is called the parent node, and the other nodes are called the child nodes.
In our example, the 2 people are considered the root node, and the other sub-nodes are considered the child nodes.
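A small sketch of these ideas using scikit-learn (assuming it is installed; the binary "traits" for pairs of users are invented to echo the dating example):

```python
from sklearn.tree import DecisionTreeClassifier, export_text

# Hypothetical traits for pairs of users: [same_show, likes_animals, likes_travel]
X = [[1, 1, 1], [1, 0, 1], [0, 1, 0], [0, 0, 0], [1, 1, 0], [0, 0, 1]]
y = [1, 1, 0, 0, 1, 0]  # 1 = match (leaf node), 0 = no match (leaf node)

tree = DecisionTreeClassifier(max_depth=2, random_state=0)
tree.fit(X, y)

# The printout shows the root node's test on an attribute, a branch
# for each outcome, and the leaf nodes holding the class labels.
print(export_text(tree, feature_names=["same_show", "likes_animals", "likes_travel"]))
```

In this toy data, sharing a favorite show perfectly predicts a match, so the tree's root node tests that single attribute and its two branches end directly in the match/no-match leaves.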
SUPPORT VECTOR MACHINES:
The SVM algorithm's goal is to find the best line or decision boundary for dividing n-dimensional space into categories, so that we can easily place new data points in the correct category in the future. This best decision boundary is called a hyperplane.
SVM chooses the extreme points that help define this hyperplane. These extreme cases are referred to as support vectors, and the algorithm is hence known as the Support Vector Machine.
LET'S TAKE THE SVM PARAMETER C:
- It controls the training error.
- It is used to prevent overfitting.
- Let's play with C.
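One way to play with C, sketched with scikit-learn's `SVC` on toy data (assuming scikit-learn is installed; the blob data is generated for illustration):

```python
from sklearn.svm import SVC
from sklearn.datasets import make_blobs

# Two toy clusters of points
X, y = make_blobs(n_samples=60, centers=2, random_state=0)

# A small C tolerates more training error (wider margin, guards against
# overfitting); a large C penalizes every mistake (tighter, riskier fit).
for C in (0.01, 100.0):
    clf = SVC(kernel="linear", C=C).fit(X, y)
    print(f"C={C}: {clf.n_support_.sum()} support vectors")
```

Comparing the two printouts shows how loosening C changes which points end up as support vectors defining the boundary.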
METHODS TO CALCULATE THE CLASSIFICATION MODEL PERFORMANCE:
CONFUSION MATRIX METHOD:
LET'S UNDERSTAND IT THROUGH AN INTERESTING ANALOGY
Before going deep into how the confusion matrix works, let's start with the definition:
The confusion matrix helps us determine the performance of a classification model on given test data. The name comes from the fact that it makes it easy to see where the model is confusing two classes.
AN EXAMPLE TO MAKE IT QUICK AND EASY:
ASSUME,
X = The test data of the ladies who have come for the checkup.
P = The set of ladies whose test is positive, i.e., they are pregnant.
NP = The set of ladies whose test is negative, i.e., they are not pregnant.
Let x be a lady from the given set of test data X.
CASE 1: How do we describe the ladies whose results are POSITIVE, i.e., P?
P = { x ∈ X : x is pregnant }
CASE 2: How do we describe the ladies whose results are NEGATIVE, i.e., NP?
NP = { x ∈ X : x is not pregnant }
POSSIBILITIES OF THE ABOVE CASEΒ STUDIES:
1] A LADY WHO IS PREGNANT, AND HER TEST IS ALSO POSITIVE.
Lady "A" is in set "X"; she tested positive for pregnancy and is pregnant → this is what we call a TRUE POSITIVE.
2] A LADY WHO IS NOT PREGNANT, AND HER TEST IS ALSO NEGATIVE.
Lady "A" is in set "X"; she tested negative for pregnancy and is not pregnant → this is what we call a TRUE NEGATIVE.
3] A LADY WHO IS PREGNANT, BUT HER TEST IS NEGATIVE.
Lady "A" is in set "X"; she tested negative for pregnancy, but she is pregnant → this is what we call a FALSE NEGATIVE.
4] A LADY WHO IS NOT PREGNANT, BUT HER TEST IS POSITIVE.
Lady "A" is in set "X"; she tested positive for pregnancy, but she is not pregnant → this is what we call a FALSE POSITIVE.
NOW, THIS IS THE SITUATION WHERE THE CONFUSION MATRIX ENTERS:
A confusion matrix organizes and analyzes exactly this situation for a classification algorithm.
The benefit of a confusion matrix is that it helps you understand your classification model: you can see exactly what the results are and whether they are accurate. In addition, it helps you find the errors the model is making.
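The four cases above drop straight out of scikit-learn's `confusion_matrix` (assuming scikit-learn is installed; the true/predicted labels below are made up to cover all four cases):

```python
from sklearn.metrics import confusion_matrix

# 1 = pregnant, 0 = not pregnant
y_true = [1, 1, 1, 0, 0, 0, 1, 0]  # the actual situation
y_pred = [1, 1, 0, 0, 0, 1, 1, 0]  # what the test (the model) said

# scikit-learn's convention: rows are the actual class, columns the predicted
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print(f"TP={tp} TN={tn} FP={fp} FN={fn}")  # → TP=3 TN=3 FP=1 FN=1
```

The off-diagonal entries (FP and FN) are exactly the errors the model is making, which is what the matrix makes easy to see.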
PRECISION AND RECALLΒ METHOD:
Let's take a simple example to understand this method. Trust me, it's super easy and exciting.
CASE STUDY 1: Assume there are two types of malware, classified as spyware and adware. We've created a model that can detect malware in a variety of business software. To evaluate it, we must examine the predictions of our machine learning models.
MODEL 1: TRUE POSITIVES = 80, TRUE NEGATIVES = 30, FALSE POSITIVES = 0, FALSE NEGATIVES = 20
MODEL 2: TRUE POSITIVES = 90, TRUE NEGATIVES = 10, FALSE POSITIVES = 30, FALSE NEGATIVES = 0
As we can see, the false positive count in model 1 is zero: we don't want our model to detect the wrong type of malware and cause confusion between the two groups. Model 1 therefore has the higher precision value, so let's start there.
PRECISION = TRUE POSITIVES / (TRUE POSITIVES + FALSE POSITIVES)
Moving on: in an extreme cyber war, we want to detect malware as soon as possible, even at the cost of keeping its groups apart. Model 2 has 0 false negatives, meaning it misses no malware at all; it may not categorize a sample into the right group, but it detects it, so we can put an end to the cyber war as soon as possible. This sensitivity is what the RECALL metric measures.
RECALL = TRUE POSITIVES / (TRUE POSITIVES + FALSE NEGATIVES)
F1 SCORE:
Assume you've started a paper company, and it's making less money at first because it's new. However, you already have a large amount of paper, and you need a proper place to store it as well as an office where you can hire a sales team to increase your sales. We don't know how many days, weeks, or months the sales will take, so how do we predict the deadline?
We need to create a model with a higher F1 score, which is calculated from the precision and recall values (their harmonic mean), to predict that for us.
THE HIGHER THE F1 SCORE, THE BETTER THE MODEL.
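Plugging the two malware models' numbers from the case study into these formulas, as a minimal sketch in plain Python:

```python
def precision(tp, fp):
    return tp / (tp + fp)

def recall(tp, fn):
    return tp / (tp + fn)

def f1(p, r):
    # Harmonic mean of precision and recall
    return 2 * p * r / (p + r)

# Model 1 from the case study: TP=80, TN=30, FP=0, FN=20
p1, r1 = precision(80, 0), recall(80, 20)
# Model 2 from the case study: TP=90, TN=10, FP=30, FN=0
p2, r2 = precision(90, 30), recall(90, 0)

print(f"Model 1: precision={p1:.2f} recall={r1:.2f} F1={f1(p1, r1):.2f}")
print(f"Model 2: precision={p2:.2f} recall={r2:.2f} F1={f1(p2, r2):.2f}")
```

Model 1 scores precision 1.00 with recall 0.80, while model 2 scores precision 0.75 with recall 1.00; the F1 score balances the two into a single number for comparing the models.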
FOLLOW US FOR MORE FUN-TO-LEARN DATA SCIENCE BLOGS AND ARTICLES:
LINKEDIN: https://www.linkedin.com/company/dsmcs/
INSTAGRAM: https://www.instagram.com/datasciencemeetscybersecurity/?hl=en
GITHUB: https://github.com/Vidhi1290
TWITTER: https://twitter.com/VidhiWaghela
MEDIUM: https://medium.com/@datasciencemeetscybersecurity-
WEBSITE: https://www.datasciencemeetscybersecurity.com/
- Team Data Science meets Cyber Security