How To Train a Seq2Seq Summarization Model Using “BERT” as Both Encoder and Decoder (BERT2BERT)
Last Updated on July 18, 2023 by Editorial Team
Author(s): Ala Alam Falaki
Originally published on Towards AI.
BERT is a well-known and powerful pre-trained “encoder” model. Let’s see how we can use it as a “decoder” to form an encoder-decoder architecture.
The Transformer architecture consists of two main building blocks, encoder and decoder components, which are stacked on top of each other to form a seq2seq model. (You can read more about the architecture in my previous story.) It is generally hard to train a transformer-based model from scratch, since doing so requires both a large dataset and substantial GPU memory. This is why numerous pre-trained models, each created with a different objective in mind, are available to build on.
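Hugging Face's `transformers` library exposes this encoder-decoder pairing through the `EncoderDecoderModel` class, which is how a BERT2BERT model is typically assembled. The sketch below is a minimal illustration, not the article's exact training code: it wires two randomly initialized, deliberately tiny BERT configs together (so it runs without downloading weights) and performs one seq2seq forward pass; a real summarizer would instead warm-start both sides from a checkpoint, e.g. `EncoderDecoderModel.from_encoder_decoder_pretrained("bert-base-uncased", "bert-base-uncased")`.

```python
import torch
from transformers import BertConfig, EncoderDecoderConfig, EncoderDecoderModel

# Tiny illustrative configs (hypothetical sizes, chosen only to keep the
# example light); a real BERT2BERT model would load pre-trained weights.
enc_cfg = BertConfig(vocab_size=1000, hidden_size=64,
                     num_hidden_layers=2, num_attention_heads=2,
                     intermediate_size=128)
dec_cfg = BertConfig(vocab_size=1000, hidden_size=64,
                     num_hidden_layers=2, num_attention_heads=2,
                     intermediate_size=128,
                     is_decoder=True,            # turn BERT into a decoder:
                     add_cross_attention=True)   # causal masking + cross-attention

model = EncoderDecoderModel(
    config=EncoderDecoderConfig.from_encoder_decoder_configs(enc_cfg, dec_cfg))

# One seq2seq step: "article" tokens go to the encoder,
# "summary" tokens go to the decoder and serve as labels.
src = torch.randint(0, 1000, (1, 16))  # stand-in for a tokenized article
tgt = torch.randint(0, 1000, (1, 8))   # stand-in for a tokenized summary
out = model(input_ids=src, decoder_input_ids=tgt, labels=tgt)

print(out.loss)          # cross-entropy loss to backpropagate
print(out.logits.shape)  # (batch, summary_length, vocab_size)
```

The key detail is the pair of decoder flags: `is_decoder=True` gives BERT a causal attention mask, and `add_cross_attention=True` inserts the cross-attention layers that let it attend to the encoder's output, which is exactly what lets a pre-trained encoder-only model act as a decoder.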
Firstly, the encoder…