Maximizing Machine Learning: How Calibration Can Enhance Performance
Author(s): Cornellius Yudha Wijaya
Originally published on Towards AI.
The not-so-often-discussed method for improving our machine learning models.
Image by Author
Many machine learning models output a probability for each class. For example, a churn classifier would assign a probability to the "churn" class and the "not churn" class (say, 90% for "churn" and 10% for "not churn").
Generally, the probability is converted into a discrete prediction, and we then evaluate the model using standard metrics taught in class, such as accuracy, precision, and recall. These metrics are based on discrete outputs such as 0 or 1.
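As a minimal sketch of that workflow, the snippet below thresholds predicted probabilities at 0.5 and scores the resulting labels with scikit-learn. The labels and probabilities are synthetic, purely for illustration.

```python
import numpy as np
from sklearn.metrics import accuracy_score, precision_score, recall_score

# Synthetic actual churn labels and model-predicted probabilities.
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])
y_prob = np.array([0.9, 0.2, 0.8, 0.6, 0.3, 0.7, 0.4, 0.1, 0.55, 0.6])

# Convert probabilities into discrete 0/1 predictions at a 0.5 threshold.
y_pred = (y_prob >= 0.5).astype(int)

print("accuracy:", accuracy_score(y_true, y_pred))
print("precision:", precision_score(y_true, y_pred))
print("recall:", recall_score(y_true, y_pred))
```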
But how do we measure the quality of the predicted probabilities themselves? And how trustworthy are they? To answer these questions, we can use calibration techniques, which tell us how much we can trust our model's predicted probabilities.
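One common way to score the probabilities directly, rather than the thresholded labels, is the Brier score: the mean squared error between the predicted probabilities and the actual outcomes. A minimal sketch with scikit-learn, using the same kind of synthetic data as above:

```python
import numpy as np
from sklearn.metrics import brier_score_loss

# Synthetic labels and predicted probabilities (illustrative only).
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])
y_prob = np.array([0.9, 0.2, 0.8, 0.6, 0.3, 0.7, 0.4, 0.1, 0.55, 0.6])

# Brier score = mean((y_prob - y_true)^2). Lower is better; 0 is perfect.
print("Brier score:", brier_score_loss(y_true, y_prob))
```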
Calibration in machine learning refers to adjusting a model's probability output so that it matches the actual observed outcomes. What does it mean to match the actual output?
Let's say we have a classification model that predicts churn with a 70% probability for each prediction; it "should" be correct 7 out of 10 times. The model is well-calibrated if we take data for 10 such customers and find that roughly 7 of them churned.
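We can check this empirically with scikit-learn's calibration_curve, which bins the predictions by probability and compares each bin's mean predicted probability with the observed fraction of positives. The sketch below uses a hypothetical model whose outcomes are drawn to be calibrated by construction, so the two columns should track each other closely:

```python
import numpy as np
from sklearn.calibration import calibration_curve

rng = np.random.default_rng(42)
y_prob = rng.uniform(0, 1, 1000)                          # hypothetical model probabilities
y_true = (rng.uniform(0, 1, 1000) < y_prob).astype(int)   # outcomes drawn so P(y=1) = y_prob

# frac_pos: observed fraction of positives per bin;
# mean_pred: mean predicted probability per bin.
frac_pos, mean_pred = calibration_curve(y_true, y_prob, n_bins=10)
for mp, fp in zip(mean_pred, frac_pos):
    print(f"predicted ~{mp:.2f} -> observed {fp:.2f}")
```

For a well-calibrated model, the printed pairs line up along the diagonal; large gaps between the predicted and observed columns indicate over- or under-confident probabilities.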
However, the probability output of many classification models is not calibrated out of the box. Often,…