From Garbage In to Gold Out: Understanding Denoising Autoencoders
Author(s): Anay Dongre Originally published on Towards AI. A denoising autoencoder (DAE) is a type of autoencoder neural network architecture that is trained to reconstruct the original input from a corrupted or noisy version of it. Don’t confuse my drawing skills with …
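The training objective mentioned in this teaser can be sketched in a few lines: corrupt the input, encode/decode the corrupted copy, and penalize the difference from the *clean* input. The toy below is a minimal illustrative sketch, not code from the article; the linear encoder/decoder, layer sizes, noise scale, and learning rate are all arbitrary assumptions chosen to keep the example self-contained.

```python
import numpy as np

# Minimal denoising-autoencoder sketch (illustrative toy, not the article's code).
# A linear encoder/decoder pair is trained to reconstruct clean inputs X from
# noise-corrupted copies X_noisy.

rng = np.random.default_rng(0)
n, d, h = 256, 8, 4                      # samples, input dim, bottleneck dim (arbitrary)
X = rng.normal(size=(n, d))              # "clean" toy data

W_enc = rng.normal(scale=0.1, size=(d, h))
W_dec = rng.normal(scale=0.1, size=(h, d))
lr = 0.01

def recon_loss(X_clean, X_in):
    # Mean squared error between the reconstruction of X_in and the clean data.
    return float(np.mean((X_in @ W_enc @ W_dec - X_clean) ** 2))

X_noisy = X + rng.normal(scale=0.3, size=X.shape)   # corruption step
before = recon_loss(X, X_noisy)

for _ in range(500):
    Z = X_noisy @ W_enc                  # encode the corrupted input
    X_hat = Z @ W_dec                    # decode back to input space
    err = X_hat - X                      # compare against the CLEAN input
    W_dec -= lr * (Z.T @ err) / n        # gradient steps on the MSE objective
    W_enc -= lr * (X_noisy.T @ (err @ W_dec.T)) / n

after = recon_loss(X, X_noisy)
print(before, after)                     # reconstruction error should drop
```

The key detail, as the teaser notes, is that the loss compares the reconstruction against the original (uncorrupted) input, which is what forces the model to remove noise rather than merely copy its input.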
CompressedBART: Fine-Tuning for Summarization through Latent Space Compression (Paper Review/Described)
Author(s): Ala Alam Falaki Originally published on Towards AI. Paper title: A Robust Approach to Fine-tune Pre-trained Transformer-based Models for Text Summarization through Latent Space Compression. “Can we compress a pre-trained encoder while keeping its language generation abilities?” This is the main question …