All About Logistic Regression
Last Updated on March 24, 2022 by Editorial Team
Author(s): Akash Dawari
Originally published on Towards AI, the world's leading AI and technology news and media company.
In this article, we will explore Logistic Regression and try to answer the following questions:
- What is Logistic Regression?
- Why not Linear Regression?
- Why Logistic Regression?
- When Logistic Regression?
- How Logistic Regression?
What is Logistic Regression?
Logistic Regression is a supervised machine learning algorithm used for classification problems, where we have to assign the dependent variable to one of two or more categories or classes using the independent variables.
For example:
Consider the above table, where the dependent variable is "Species" and the independent variable is "petal width". Species has two categories: "0" denotes the Setosa iris flower and "1" denotes the Versicolor iris flower.
Logistic Regression will give the probability of the respective category using the petal width of the flower.
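As a minimal sketch of this idea, using hypothetical, hand-picked coefficients rather than values fitted to the real iris data:

```python
import math

# Hypothetical coefficients for illustration only; real values
# would come from training on the iris data.
w, b = 9.0, -7.0  # weight for petal width, intercept

def predict_species_probability(petal_width):
    """Return P(Versicolor) for a given petal width in cm."""
    z = w * petal_width + b
    return 1.0 / (1.0 + math.exp(-z))

# A narrow petal (Setosa-like) gets a low probability,
# a wide petal (Versicolor-like) a high one.
p_narrow = predict_species_probability(0.2)
p_wide = predict_species_probability(1.4)
```

With these coefficients, a 0.2 cm petal maps to a probability near 0 (Setosa) and a 1.4 cm petal to a probability near 1 (Versicolor).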
Why not Linear Regression?
There are two main reasons:
- Linear regression deals with continuous or quantitative values, whereas in classification problems we deal with discrete or probabilistic values.
- If we somehow fit a linear regression line to the dataset as follows:
If we take a threshold of 0.5, the linear line seems to do a good job in this case. But what happens if we introduce some outliers into the dataset?
You can see how much the line's slope shifts from its original position. We cannot afford a model that gives different answers whenever new data points are introduced. Another point to notice is that the outcome is not confined to the range we want: the model can predict probabilities outside the range of 0 to 1.
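This second shortcoming is easy to demonstrate numerically. The sketch below fits ordinary least squares to illustrative binary labels and shows the predicted "probability" escaping the 0 to 1 range:

```python
import numpy as np

# Binary labels fitted with plain linear regression (least squares).
# Petal widths and labels are illustrative values, not the real iris data.
x = np.array([0.2, 0.3, 0.4, 1.3, 1.4, 1.5])
y = np.array([0, 0, 0, 1, 1, 1])

# np.polyfit returns [slope, intercept] for a degree-1 fit.
slope, intercept = np.polyfit(x, y, 1)

# For a wide-enough petal, the line predicts a value above 1,
# which makes no sense as a probability.
prediction = slope * 2.5 + intercept
```

The fitted line also goes below 0 for very narrow petals, so its output simply cannot be read as a probability.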
To know more about Linear Regression, click on the link below:
Why Logistic Regression?
- As discussed in the question above, linear regression fluctuates when new data points are introduced. Logistic regression solves this problem by introducing an additional function that bends the best-fit straight line into the curve shown below.
- Logistic regression is easy to understand and implement, and very efficient to train.
- Logistic regression works well on small datasets and is very fast at classifying unknown records.
- The core concept of logistic regression is also used in artificial neural networks.
When Logistic Regression?
Logistic regression is appropriate when the following requirements are satisfied:
- The dependent variable must be categorical data, either binary classes or ordinal classes.
- Observations of the independent variables must not be repeated or matched, because logistic regression is sensitive to overfitting.
- There should be no multicollinearity among the independent variables.
- Logistic regression typically requires a large sample size.
NOTE: The points above are also referred to as the "assumptions of logistic regression".
How Logistic Regression?
There are two approaches to understanding logistic regression.
- Probabilistic Intuition
- Geometric Intuition
In this article, we will follow the probabilistic intuition, as the geometric intuition will be covered again in SVM (Support Vector Machine).
As discussed above, logistic regression uses a function to squash or bend the linear line. That function can be any suitable mathematical function, such as tanh, sigmoid, ReLU, etc.
Sigmoidal Function
Logistic regression mostly uses the sigmoid function because:
- The sigmoid function returns an outcome in the range of 0 to 1, which is well suited for logistic regression, as we are predicting the probability of the outcome.
- The sigmoid function's derivative is easy to calculate compared to other functions, which helps during gradient descent.
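A minimal implementation of the sigmoid and its derivative:

```python
import math

def sigmoid(z):
    # sigma(z) = 1 / (1 + e^(-z)): squashes any real z into (0, 1).
    return 1.0 / (1.0 + math.exp(-z))

def sigmoid_derivative(z):
    # The derivative has the convenient closed form sigma(z) * (1 - sigma(z)),
    # which is why it is cheap to compute during gradient descent.
    s = sigmoid(z)
    return s * (1.0 - s)
```

Note that sigmoid(0) = 0.5, which is why 0.5 is the natural classification threshold.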
Let's look at the relation between the probability of the outcome and the linear equation, to better understand the connection between linear regression and logistic regression:
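In standard notation, the sigmoid output p and the linear equation are tied together through the log-odds: solving p = σ(wᵀx + b) for the linear part gives

```latex
p = \sigma(w^{T}x + b) = \frac{1}{1 + e^{-(w^{T}x + b)}}
\quad\Longrightarrow\quad
\log\!\left(\frac{p}{1-p}\right) = w^{T}x + b
```

so logistic regression is, in effect, linear regression on the log-odds of the outcome.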
Now, like most supervised machine learning algorithms, logistic regression has three main steps: a hypothesis, a cost function, and an optimization technique.
Hypothesis: Here we assume a mathematical relationship between the dependent and independent variables. In logistic regression, we use the sigmoid function to establish this relationship.
Cost function: This function is used to measure the error of our predicted values. In logistic regression, we use the log loss function.
Optimization technique: In this step, we try to reduce the error using mathematical techniques. In logistic regression, we use gradient descent to do that.
where the derivative of the log loss function is:
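In standard notation, the log loss and its derivative with respect to each weight are:

```latex
J(w) = -\frac{1}{m}\sum_{i=1}^{m}\Big[y_i \log h_w(x_i) + (1-y_i)\log\big(1-h_w(x_i)\big)\Big],
\qquad h_w(x) = \sigma(w^{T}x)

\frac{\partial J}{\partial w_j} = \frac{1}{m}\sum_{i=1}^{m}\big(h_w(x_i) - y_i\big)\,x_{ij}
```

Conveniently, the gradient has the same form as in linear regression, with the sigmoid-transformed prediction in place of the linear one.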
Coding Example of Logistic Regression Using Python from scratch:
Step 1: Preparing the dataset
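A minimal sketch of this step, using illustrative petal-width values rather than the full iris dataset:

```python
import numpy as np

# Stand-in for the iris example from earlier: petal width (cm) as the
# single independent variable, species as the target (0 = Setosa,
# 1 = Versicolor). Values are illustrative, not the real dataset.
X = np.array([[0.2], [0.2], [0.4], [1.3], [1.4], [1.5]])
y = np.array([0, 0, 0, 1, 1, 1])

# Add a column of ones so the intercept can be learned as just another weight.
X_b = np.hstack([np.ones((X.shape[0], 1)), X])
```

Folding the intercept into the weight vector this way keeps the gradient descent code in the next step uniform.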
Step 2: Creating a function that performs Gradient Descent and returns the value of weights associated with each independent variable.
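A sketch of such a function, assuming the feature matrix already includes a leading column of ones for the intercept:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gradient_descent(X, y, lr=0.1, epochs=5000):
    """Fit logistic-regression weights by minimising the log loss.

    Returns one weight per column of X (the first being the intercept).
    """
    m, n = X.shape
    w = np.zeros(n)
    for _ in range(epochs):
        preds = sigmoid(X @ w)         # current probability estimates
        grad = X.T @ (preds - y) / m   # gradient of the log loss
        w -= lr * grad                 # step against the gradient
    return w

# Illustrative data: intercept column + petal width, labels 0/1.
X_b = np.array([[1, 0.2], [1, 0.2], [1, 0.4], [1, 1.3], [1, 1.4], [1, 1.5]])
y = np.array([0, 0, 0, 1, 1, 1])
weights = gradient_descent(X_b, y)
```

The learning rate and epoch count here are arbitrary choices that happen to work for this tiny separable dataset; in practice they would need tuning.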
Step 3: Creating a function that predicts the outcome using the weights.
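A sketch of the prediction step, with hypothetical weights for illustration:

```python
import numpy as np

def predict(X, weights, threshold=0.5):
    """Turn learned weights into class predictions.

    Probabilities at or above the threshold map to class 1, the rest to 0.
    """
    probs = 1.0 / (1.0 + np.exp(-(X @ weights)))
    return (probs >= threshold).astype(int)

# Hypothetical weights (intercept, petal-width coefficient) for illustration;
# real values would come from the gradient descent step.
X_b = np.array([[1, 0.2], [1, 1.4]])
w = np.array([-7.0, 9.0])
labels = predict(X_b, w)
```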
Step 4: Call the functions and predict the outcomes.
How do we know the score or accuracy of the prediction?
In classification problems, we use the confusion matrix, accuracy score, precision, recall, or F1 score to measure the accuracy of the model.
Confusion Matrix:
where,
True Positive (TP): A result that was predicted as positive by the classification model and is actually positive.
True Negative (TN): A result that was predicted as negative by the classification model and is actually negative.
False Positive (FP): A result that was predicted as positive by the classification model but is actually negative.
False Negative (FN): A result that was predicted as negative by the classification model but is actually positive.
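These four counts can be computed directly from the true and predicted labels; the labels below are illustrative:

```python
def confusion_counts(y_true, y_pred):
    """Count TP, TN, FP, FN for binary labels (1 = positive class)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return tp, tn, fp, fn

# Illustrative labels: one false negative (3rd pair), one false positive (5th).
y_true = [1, 0, 1, 1, 0, 0]
y_pred = [1, 0, 0, 1, 1, 0]
tp, tn, fp, fn = confusion_counts(y_true, y_pred)
```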
The credibility of the model is based on how many correct predictions it makes.
Accuracy score: The total number of correct classifications divided by the total number of classifications.
Precision: A measure of, among all the positive predictions, how many were actually positive.
Recall: A measure of, out of the total number of actual positives, how many were correctly predicted by the model.
F1 Score: Defined as the harmonic mean of precision and recall.
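All four metrics follow directly from the confusion-matrix counts; the counts below are illustrative:

```python
def scores(tp, tn, fp, fn):
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    precision = tp / (tp + fp)  # of all predicted positives, how many were right
    recall = tp / (tp + fn)     # of all actual positives, how many were found
    f1 = 2 * precision * recall / (precision + recall)  # harmonic mean
    return accuracy, precision, recall, f1

# Illustrative counts (e.g. from a confusion matrix like the one above).
accuracy, precision, recall, f1 = scores(tp=2, tn=2, fp=1, fn=1)
```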
Link to the notebook used in this article:
Like and share if you find this article helpful. Also, follow me on Medium for more content related to machine learning and deep learning.