
EarlyStopping and LiveLossPlot Callbacks in TensorFlow, Keras, and Python

Author(s): Rashida Nasrin Sucky

Originally published on Towards AI.

How to Reduce Model Training Time and Prevent Overfitting with EarlyStopping, and Plot Losses and Metrics Live During Training
Photo by Pierre Bamin on Unsplash

The Keras library has several callback functions that make model training more efficient. One of them is EarlyStopping, which I love to use. It saves significant time and computation cost. As the name suggests, it stops training early if you have set the model to train for more epochs than necessary.

Choosing the right number of epochs up front can be tricky. It may take some trial and error to find out how many epochs the model actually needs to converge without overfitting.

EarlyStopping is very useful in this case. You can set as many epochs as you want; once the monitored metric stops improving, training stops automatically. We will work through a complete example to see how it works.
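As a minimal sketch of the idea, the snippet below trains a tiny regression model with a generous epoch budget and lets EarlyStopping cut training short. The toy data, patience value, and model are illustrative, not the article's example:

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras.callbacks import EarlyStopping

# Toy data: learn y = 2x with a little noise (illustrative only).
rng = np.random.default_rng(42)
X = rng.random((200, 1)).astype("float32")
y = 2 * X + 0.05 * rng.standard_normal((200, 1)).astype("float32")

model = tf.keras.Sequential([
    tf.keras.Input(shape=(1,)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")

# Stop when validation loss has not improved for 5 consecutive epochs,
# and roll back to the weights from the best epoch.
early_stop = EarlyStopping(monitor="val_loss", patience=5,
                           restore_best_weights=True)

history = model.fit(X, y, validation_split=0.2, epochs=200,
                    callbacks=[early_stop], verbose=0)
print("Trained for", len(history.history["loss"]), "of 200 epochs")
```

Even though `epochs=200`, the fit usually ends much earlier because the callback watches `val_loss` and gives up once it plateaus for `patience` epochs.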

In this tutorial, I will also touch on another handy callback that comes from the livelossplot package (a separate library, not part of Keras itself). It plots the loss and evaluation metrics live as the model trains.

I used a Google Colab notebook for this exercise. You can use any other platform of your choice. First, I needed to install livelossplot by running the following line:

!pip…


Published via Towards AI
