How To Train Your BERT Model 5X Faster Than In Colab
Last Updated on June 14, 2022 by Editorial Team
Author(s): Chris Marrie
Originally published on Towards AI.
Example notebook
For this article, we will use a notebook called TransferLearning, created using code from a post by Alvin Chen.
We will build a sentiment classifier with a pre-trained BERT model using the Hugging Face transformers library.
Here are the files needed to download and run the notebook yourself:
Transfer learning refresher
Transfer learning is a method that can be used to accelerate the development of new machine learning models. It works by leveraging the knowledge gained from training a model on one dataset and applying it to another related dataset.
For example, if a data scientist wants to build a model to identify dogs in pictures, they could start from a model that was trained in a previous project to identify cats. This can be especially useful when data is limited or when there is a need to build a model quickly.
BERT refresher
BERT is a Transformer-based model for natural language processing that was proposed in 2018 and open-sourced by Google.
The model is pre-trained on a large corpus of text, such as Wikipedia, and can then be easily fine-tuned for downstream tasks such as question answering, text classification, and named entity recognition.
Baseline performance
Since fine-tuning the BERT model is by far the most computationally intense part of this example, we will only focus on the training cell (cell #18).
In our last article, we used a laptop with 8 CPU cores and 32 GB of RAM to get a baseline performance. This experiment would likely run successfully on the same laptop, but it would take impractically long, so instead we will use Google Colab. Colab offers free GPU access and is a common workspace for deep learning.
However, one thing to remember is that you are unable to predict or select resources: the type of GPU and amount of RAM will vary for each session.
When running this experiment, we happened to be given a T4 GPU (not bad!) with 16 GB of GPU RAM and 12 GB of regular RAM. The BERT training cell took 17 minutes and 17 seconds.
Before: 17 minutes and 17 seconds
Our enhancements
1. Use mixed-precision
Mixed-precision is a technique used to improve the performance of machine learning models. It uses lower-precision data types (such as 16-bit floats) for most operations while keeping higher-precision data types (such as 32-bit floats) where numerical stability matters, during both training and inference.
The benefits of mixed-precision include reduced training time and reduced memory usage, typically with little to no loss in accuracy.
Note: Not all hardware supports mixed-precision, and older GPU models often do not.
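To get an intuition for the memory side of this, here is a minimal sketch using NumPy (illustrative only; it is not part of the notebook, which uses TensorFlow's mixed-precision API):

```python
import numpy as np

# One million parameters stored at two different precisions
params32 = np.ones(1_000_000, dtype=np.float32)
params16 = params32.astype(np.float16)

print(params32.nbytes)  # 4000000 bytes
print(params16.nbytes)  # 2000000 bytes -- half the memory
```

Halving the bytes per value is also what lets modern GPUs with Tensor Cores process more values per cycle, which is where the speedup comes from.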
Because we are using a newer T4 (uncommon as Colab typically allocates older GPUs), we can take advantage of mixed-precision.
To do so, we added the following lines of code:
from tensorflow.keras import mixed_precision

# Run most ops in float16 while keeping float32 where needed for stability
mixed_precision.set_global_policy("mixed_float16")
We ran the notebook again, and the training cell completed in ~8 minutes.
2. Leverage a newerΒ GPU
The second enhancement we made was leveraging a newer, better GPU model. We used a V100 GPU with 32 GB of GPU RAM and 112 GB of regular RAM, costing $3.12/hour.
This time, the BERT training cell completed in 3 minutes and 5 seconds, more than a 5x speedup from the original experiment, and it only cost 18 cents!
After: 3 minutes and 5 seconds
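The speedup and cost claims are easy to sanity-check with a bit of arithmetic (the raw compute works out to roughly 16 cents; the 18-cent figure presumably rounds up or includes a little session overhead):

```python
baseline_s = 17 * 60 + 17   # 17 min 17 s on the free Colab T4
v100_s = 3 * 60 + 5         # 3 min 5 s on the paid V100
rate_per_hour = 3.12        # V100 price quoted above, in USD

speedup = baseline_s / v100_s
cost = v100_s / 3600 * rate_per_hour

print(f"{speedup:.1f}x")    # 5.6x
print(f"${cost:.2f}")       # $0.16 of raw compute
```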
Key takeaways
So, what should be remembered going forward?
- Mixed-precision has many compelling benefits, such as reduced processing time and reduced memory usage, but can only be utilized if you have modern GPU hardware. Keep this in mind when using cloud services that offer free compute resources: you might only have access to low-end GPUs. There are tradeoffs!
- Using powerful resources for short periods of time can be cheap. The cognitive friction of a paid vs. free service might be more than the actual dollars you'll spend. You might be able to unlock significant performance gains for under a dollar.
- Examples like this one often stick to a small scale to facilitate reproducibility, but the techniques to improve performance are often generalizable. Try them on larger-scale projects!
If you found this article helpful, we have written others like it!
You might enjoy:
Please also feel free to leave claps below so other people on Medium see it and so we know to produce more.
About theΒ author
I mostly write about practical data science and machine learning workflow challenges and walk through ways to solve them.
You can follow me on Medium or LinkedIn.