Understanding BERT
Author(s): Shweta Baranwal Originally published on Towards AI. BERT (Bidirectional Encoder Representations from Transformers) is a language representation model introduced in a research paper from Google AI Language. Unlike previous NLP architectures, BERT is conceptually …
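As a quick, hedged illustration (not from the article itself), here is a minimal sketch of loading pre-trained BERT with the Hugging Face transformers library to obtain contextual token embeddings:

```python
# Minimal sketch: contextual embeddings from pre-trained BERT,
# assuming the Hugging Face `transformers` library (not the article's own code).
import torch
from transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased")

inputs = tokenizer("BERT reads text bidirectionally.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# One contextual vector per WordPiece token: (batch, seq_len, 768)
print(outputs.last_hidden_state.shape)
```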
Multi-lingual Language Model Fine-tuning
Author(s): Edward Ma Originally published on Towards AI. The Problem of Low-resource Languages. English is one of the richest-resourced languages in natural language processing. Many state-of-the-art NLP models support English natively. To tackle multi-lingual …
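To make the setup concrete, here is a hedged sketch of fine-tuning the multilingual bert-base-multilingual-cased checkpoint for classification with Hugging Face transformers; the texts and labels are made-up placeholders, and the article's own approach may differ:

```python
# Hedged sketch: fine-tuning multilingual BERT for classification on a
# low-resource language. Texts and labels are hypothetical placeholders.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-multilingual-cased", num_labels=2
)

batch = tokenizer(["नमस्ते दुनिया", "यह बुरा है"], padding=True, return_tensors="pt")
labels = torch.tensor([1, 0])  # hypothetical sentiment labels

loss = model(**batch, labels=labels).loss  # cross-entropy over the two classes
loss.backward()  # one step of an ordinary fine-tuning loop would follow
```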
A Lite BERT for Reducing Inference Time
Author(s): Edward Ma Originally published on Towards AI. BERT (Devlin et al., 2018) achieved many state-of-the-art results in 2018. However, it is not easy to use BERT in production even …
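For a sense of scale, a small sketch (assuming the Hugging Face transformers library, not necessarily what the article uses): ALBERT shares parameters across layers and factorizes the embedding matrix, so its checkpoint is far smaller than BERT-base's roughly 110M parameters:

```python
# Hedged sketch: counting ALBERT's parameters next to BERT-base's ~110M.
from transformers import AlbertModel

model = AlbertModel.from_pretrained("albert-base-v2")
n_params = sum(p.numel() for p in model.parameters())
print(f"ALBERT-base parameters: {n_params / 1e6:.0f}M")  # roughly 12M
```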
Adversarial Attacks in Textual Deep Neural Networks
Author(s): Edward Ma Originally published on Towards AI. What is an adversarial attack? Adversarial examples aim to cause a target model to make a mistake in its predictions. An attack can be either intended or unintended to cause …
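As a toy illustration of the idea (not taken from the article), a character-swap perturbation is one of the simplest ways to craft a textual adversarial example; perturb below is a hypothetical helper:

```python
# Toy adversarial perturbation: swap adjacent characters inside random words
# so the text stays readable to humans but shifts the model's token stream.
import random

def perturb(sentence: str, rate: float = 0.5, seed: int = 0) -> str:
    rng = random.Random(seed)
    words = sentence.split()
    for i, w in enumerate(words):
        if len(w) > 3 and rng.random() < rate:
            j = rng.randrange(1, len(w) - 2)
            words[i] = w[:j] + w[j + 1] + w[j] + w[j + 2:]
    return " ".join(words)

print(perturb("the movie was surprisingly good"))  # e.g. "the mvoie was surprisignly good"
```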
A Robustly Optimized BERT Pretraining Approach
Author(s): Edward Ma Originally published on Towards AI. What is BERT? BERT (Devlin et al., 2018) is a method of pre-training language representations, meaning that we train a general-purpose “language understanding” model on a large text corpus (like Wikipedia), and …
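A minimal sketch of loading RoBERTa through Hugging Face transformers (an assumption about tooling, not the paper's training code); the architecture matches BERT, while the pre-training recipe changes (more data, dynamic masking, no next-sentence prediction):

```python
# Hedged sketch: RoBERTa keeps BERT's architecture under a different
# pre-training recipe, so usage mirrors BERT's.
from transformers import RobertaTokenizer, RobertaModel

tokenizer = RobertaTokenizer.from_pretrained("roberta-base")
model = RobertaModel.from_pretrained("roberta-base")

inputs = tokenizer("RoBERTa is a robustly optimized BERT.", return_tensors="pt")
hidden = model(**inputs).last_hidden_state  # (1, seq_len, 768)
```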
Text Mining in Python: Steps and Examples
Author(s): Dhilip Subramanian Originally published on Towards AI. Today, one measure of people’s success is how well they communicate and share information with others. That’s where language comes into the picture. However, there are many languages in the world. Each …
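A short sketch of the typical first steps such a walkthrough covers, assuming NLTK (tokenization, stopword removal, stemming); the article's exact pipeline may differ:

```python
# Hedged sketch of common text-mining preprocessing steps with NLTK.
import nltk
from nltk.corpus import stopwords
from nltk.stem import PorterStemmer
from nltk.tokenize import word_tokenize

nltk.download("punkt", quiet=True)
nltk.download("stopwords", quiet=True)

text = "Communication is one way people share information with each other."
stop = set(stopwords.words("english"))
tokens = [t for t in word_tokenize(text.lower()) if t.isalpha() and t not in stop]
stems = [PorterStemmer().stem(t) for t in tokens]
print(stems)  # ['commun', 'one', 'way', 'peopl', 'share', 'inform']
```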
Address Limitation of RNN in NLP Problems by Using Transformer-XL
Author(s): Edward Ma Originally published on Towards AI. Limitations of recurrent neural networks. A recurrent neural network (RNN) offers a way to learn from a sequence of inputs. The drawback is that it is difficult to optimize due …
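To sketch Transformer-XL's segment-level recurrence conceptually (an illustration, not the paper's code): hidden states from each segment are cached and, with gradients stopped, reused as extra context for the next segment. transformer_layer below is a hypothetical stand-in for a full Transformer-XL layer:

```python
# Conceptual sketch of segment-level recurrence: cache each segment's hidden
# states and attend over them while processing the next segment.
import torch

def forward_with_memory(segments, transformer_layer, mem=None):
    outputs = []
    for seg in segments:                          # seg: (seq_len, batch, d_model)
        context = seg if mem is None else torch.cat([mem, seg], dim=0)
        h = transformer_layer(seg, context)       # queries from seg; keys/values from context
        mem = h.detach()                          # stop gradients into the previous segment
        outputs.append(h)
    return outputs
```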
Analyzing the Mood of Chat Messages with Google Cloud’s Natural Language API
Author(s): Thomas Kraehe Originally published on Towards AI. Using Google Cloud’s machine learning as a service to analyze the mood of chat messages. With the help of NLP services like the Natural …
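A minimal sketch of the kind of call the article builds on, assuming the google-cloud-language Python client and configured application credentials:

```python
# Hedged sketch: sentiment analysis with Google Cloud's Natural Language API.
from google.cloud import language_v1

client = language_v1.LanguageServiceClient()
document = language_v1.Document(
    content="I really enjoyed chatting with you today!",
    type_=language_v1.Document.Type.PLAIN_TEXT,
)
sentiment = client.analyze_sentiment(request={"document": document}).document_sentiment
print(f"score={sentiment.score:.2f}, magnitude={sentiment.magnitude:.2f}")
```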
Unified Language Model Pre-training for Natural Language Understanding and Generation
Author(s): Edward Ma Originally published on Towards AI. Using UNILM to tackle natural language understanding (NLU) and natural language generation (NLG). Recent state-of-the-art pre-trained NLP models also use a language model to learn contextualized text representations. …
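UNILM's central trick is a single Transformer trained under three self-attention masks (bidirectional, left-to-right, and sequence-to-sequence). A toy sketch of those masks, with illustrative sizes not taken from the paper:

```python
# Toy self-attention masks in UNILM's style: entry (i, j) = 1 means
# position j is visible to position i.
import torch

n_src, n_tgt = 3, 2
n = n_src + n_tgt

bidirectional = torch.ones(n, n)              # NLU-style: every token sees all tokens
left_to_right = torch.tril(torch.ones(n, n))  # LM-style: each token sees only its past

seq2seq = torch.zeros(n, n)
seq2seq[:, :n_src] = 1                                          # all positions see the source
seq2seq[n_src:, n_src:] = torch.tril(torch.ones(n_tgt, n_tgt))  # the target stays causal
print(seq2seq)
```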
Cross-lingual Language Model
Author(s): Edward Ma Originally published on Towards AI. Discussing XLMs and unsupervised cross-lingual word embeddings from multilingual neural language models. Pre-trained models have been shown to improve downstream tasks. Lample and Conneau propose two new …
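A hedged sketch of querying one of the released XLM checkpoints through Hugging Face transformers (tooling assumed, not part of the article); xlm-mlm-enfr-1024 is a masked-LM model trained on English and French:

```python
# Hedged sketch: encoding French text with a cross-lingual XLM checkpoint.
import torch
from transformers import XLMTokenizer, XLMModel

tokenizer = XLMTokenizer.from_pretrained("xlm-mlm-enfr-1024")
model = XLMModel.from_pretrained("xlm-mlm-enfr-1024")

inputs = tokenizer("Bonjour le monde", return_tensors="pt")
# `langs` selects the language embedding added to each token (XLM-specific).
langs = torch.full_like(inputs["input_ids"], tokenizer.lang2id["fr"])
hidden = model(**inputs, langs=langs).last_hidden_state  # (1, seq_len, 1024)
```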