
Back Translation in Text Augmentation by nlpaug

Last Updated on August 29, 2020 by Editorial Team

Author(s): Edward Ma

Natural Language Processing

Data augmentation for NLP: generate synthetic data by back-translation in 4 lines of code

Photo by Edward Ma on Unsplash

English is one of the languages with plenty of training data for machine translation, while some languages do not have enough data to train a translation model. Sennrich et al. used the back-translation method to generate more training data and improve translation model performance.

Suppose we want to train a model that translates English (the source language) → Cantonese (the target language), but there is not enough parallel training data for Cantonese. Back-translation means translating target-language text back into the source language and mixing the original source sentences with the back-translated sentences to train the model. In this way, the amount of training data from the source language to the target language can be increased.
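
To make the idea concrete, here is a minimal Python sketch of the back-translation loop. The function translate_target_to_source is a placeholder for any trained target-to-source translation model, not a real API; how you mix the synthetic pairs with real parallel data is up to you.

# Minimal sketch of back-translation augmentation (illustrative only).
# translate_target_to_source is a placeholder for a trained target->source model.
def back_translate(monolingual_target_sentences, translate_target_to_source):
    """Create synthetic (source, target) pairs from target-language-only text."""
    synthetic_pairs = []
    for target_sentence in monolingual_target_sentences:
        synthetic_source = translate_target_to_source(target_sentence)
        synthetic_pairs.append((synthetic_source, target_sentence))
    return synthetic_pairs

# The final source->target model is then trained on
# real_parallel_pairs + back_translate(target_monolingual_corpus, target_to_source_model)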

In the previous story, the back-translation method was mentioned as a way to generate synthetic data for NLP tasks, so that we can have more data for model training, especially for low-resource NLP tasks and languages.

This story covers how the Facebook AI Research (FAIR) team trained a model for translation and how we can leverage their pre-trained models to generate more training data for our own models. By leveraging subword models, large-scale back-translation, and model ensembling, Ng et al. (2019) won the WMT'19 news translation task. They worked on two language pairs and four translation directions: English ↔ German (EN ↔ DE) and English ↔ Russian (EN ↔ RU). They demonstrated how to use back-translation to boost model performance. After that, I will show how we can write a few lines of code to generate synthetic data by using back-translation. Here are some details about data processing, data augmentation, and the translation model.

Data Processing

Subword

In the earlier stages of NLP, word-level and character-level tokens were used to train models. In state-of-the-art NLP models, subwords (units in between the word and character levels) are the standard choice at the tokenization stage. For example, “translation” may be represented as “trans” and “lation” because of occurrence frequency. You may have a look at 3 different subword algorithms from here. Ng et al. picked byte pair encoding (BPE) with 32K and 24K split operations for EN↔DE and EN↔RU tokenization, respectively.
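
As a toy illustration of how a word gets split into subwords, the sketch below greedily matches the longest piece from a small, hand-picked vocabulary; real BPE learns its merge table from corpus statistics rather than using a fixed list like this.

# Toy subword segmentation: greedily take the longest known piece at each position.
# The vocabulary is hand-picked for illustration; BPE would learn it from data.
SUBWORD_VOCAB = {'trans', 'lation', 'la', 'tion'} | set('abcdefghijklmnopqrstuvwxyz')

def segment(word, vocab=SUBWORD_VOCAB):
    pieces, i = [], 0
    while i < len(word):
        for j in range(len(word), i, -1):   # longest match first
            if word[i:j] in vocab:
                pieces.append(word[i:j])
                i = j
                break
        else:                               # unknown character: emit it as-is
            pieces.append(word[i])
            i += 1
    return pieces

print(segment('translation'))  # ['trans', 'lation']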

Data Filtering

To make sure only sentence pairs in the correct languages are kept, Ng et al. use langid (Lui et al., 2012) to filter out invalid data. langid is a language identification tool that tells you which language a piece of text belongs to.

If a sentence contains more than 250 tokens, or the length ratio between source and target exceeds 1.5, the pair is excluded from model training. I suspect that such pairs may introduce too much noise into the model.
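
A rough sketch of these sentence-pair filters might look as follows, assuming the langid package; the 250-token and 1.5-ratio thresholds are the ones reported by Ng et al. (2019), and whitespace tokenization is a simplification.

import langid

MAX_TOKENS = 250
MAX_LENGTH_RATIO = 1.5

def keep_pair(source, target, src_lang='en', tgt_lang='de'):
    # Drop pairs whose detected language is not the expected one.
    if langid.classify(source)[0] != src_lang or langid.classify(target)[0] != tgt_lang:
        return False
    src_tokens, tgt_tokens = source.split(), target.split()
    # Drop overly long sentences.
    if len(src_tokens) > MAX_TOKENS or len(tgt_tokens) > MAX_TOKENS:
        return False
    # Drop pairs whose source/target length ratio is too unbalanced.
    ratio = max(len(src_tokens), len(tgt_tokens)) / max(1, min(len(src_tokens), len(tgt_tokens)))
    return ratio <= MAX_LENGTH_RATIO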

The data size for different filtering methods (Ng et al., 2019)

The third filter targets monolingual data. To keep only high-quality monolingual data, Ng et al. adopt the Moore-Lewis method (2010) to remove noisy data from the larger corpus. In short, Moore and Lewis score text by the difference between the score of a language model trained on the source (in-domain) data and that of a language model trained on the larger corpus. After selecting a high-quality corpus, the back-translation model is used to generate pairs of training data for the translation model.
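
The selection step can be sketched as below; in_domain_lm and general_lm are placeholders for two language models exposing a per-word cross_entropy method, an assumed interface for illustration rather than a concrete library API.

# Moore-Lewis selection sketch: keep sentences whose in-domain cross-entropy is
# low relative to their cross-entropy under the general-domain language model.
def moore_lewis_score(sentence, in_domain_lm, general_lm):
    # Lower scores look more like the in-domain data and less like the generic pool.
    return in_domain_lm.cross_entropy(sentence) - general_lm.cross_entropy(sentence)

def select_monolingual(corpus, in_domain_lm, general_lm, keep_fraction=0.5):
    ranked = sorted(corpus, key=lambda s: moore_lewis_score(s, in_domain_lm, general_lm))
    return ranked[:int(len(ranked) * keep_fraction)]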

Data Augmentation

Back-Translation

After filtering low-quality data from the larger monolingual corpus, the data is ready for training an intermediate target-to-source model. From the experiments, an ensemble of models trained on the back-translated data performs better than a single model.

Comparison of a single model and an ensemble over SacreBLEU for English to Russian (Ng et al., 2019)

Translation Model

As usual, the Transformer architecture (Vaswani et al., 2017) is adopted in FAIRSEQ. The Transformer leverages multiple attention networks to compute representations. For more information about the Transformer architecture, you may visit this article.

Transformer architecture (Radford et al., 2018)
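
As a quick reminder of the computation at the heart of those attention networks, here is a minimal NumPy sketch of scaled dot-product attention, softmax(QK^T / sqrt(d_k)) V, from Vaswani et al. (2017).

import numpy as np

def scaled_dot_product_attention(Q, K, V):
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                    # query-key similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)     # softmax over keys
    return weights @ V                                 # weighted sum of values

Q = np.random.rand(4, 8)   # 4 query positions, model dimension 8
K = np.random.rand(6, 8)   # 6 key/value positions
V = np.random.rand(6, 8)
print(scaled_dot_product_attention(Q, K, V).shape)     # (4, 8)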

Another technique is leveraging multiple trained models to form an ensemble for prediction.
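
One common way to combine them, sketched below with a placeholder next_token_probs interface (an assumption for illustration, not fairseq's actual API), is to average the next-token probability distributions of the individual models at each decoding step.

import numpy as np

def ensemble_next_token_probs(models, prefix):
    # Average the next-token distributions of several independently trained models,
    # then renormalize; decoding picks tokens from this averaged distribution.
    probs = np.mean([m.next_token_probs(prefix) for m in models], axis=0)
    return probs / probs.sum()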

Fine-tuning

After training on the filtered and back-translated data, Ng et al. fine-tune the model on the previous years' datasets such as newstest2012 and newstest2013.

Generating synthetic data by Back Translation

nlpaug provides an easy way to generate synthetic data with 4 lines of code.

Behind the scenes, nlpaug leverages pre-trained models from fairseq (released by Facebook AI Research) to perform the translation twice. Taking the following as an example, it first translates the source input (English) to German. After that, the translated text (German, the output of the first model) is fed to a second model, which translates it back to English. Here is the code:

import nlpaug.augmenter.word as naw

text = 'The quick brown fox jumped over the lazy dog'

# Translate EN -> DE with the first model, then DE -> EN with the second one
back_translation_aug = naw.BackTranslationAug(
    from_model_name='transformer.wmt19.en-de',
    to_model_name='transformer.wmt19.de-en')

augmented_text = back_translation_aug.augment(text)
print(augmented_text)

Taking the above example, the sentence can be changed from “The quick brown fox jumped over the lazy dog” to “The speedy brown fox jumped over the lazy dog”.
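
For reference, roughly the same round trip can be sketched directly with fairseq's pre-trained WMT'19 models via torch.hub (argument names follow the fairseq documentation; nlpaug's internals may differ):

import torch

# Load the two directions of the WMT'19 transformer released by FAIR.
en2de = torch.hub.load('pytorch/fairseq', 'transformer.wmt19.en-de',
                       checkpoint_file='model1.pt', tokenizer='moses', bpe='fastbpe')
de2en = torch.hub.load('pytorch/fairseq', 'transformer.wmt19.de-en',
                       checkpoint_file='model1.pt', tokenizer='moses', bpe='fastbpe')

german = en2de.translate('The quick brown fox jumped over the lazy dog')
print(de2en.translate(german))  # back-translated English paraphrase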

Extension Reading

About Me

I am a data scientist in the Bay Area, focusing on the state of the art in data science and artificial intelligence, especially NLP and platform-related areas. You can reach me on Medium, LinkedIn, or GitHub.

References

R. Sennrich, B. Haddow and A. Birch. Improving Neural Machine Translation Models with Monolingual Data. 2016
N. Ng, K. Yee, A. Baevski, M. Ott, M. Auli and S. Edunov. Facebook FAIR's WMT19 News Translation Task Submission. 2019
M. Lui and T. Baldwin. langid.py: An Off-the-shelf Language Identification Tool. 2012
R. C. Moore and W. Lewis. Intelligent Selection of Language Model Training Data. 2010
A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, Ł. Kaiser and I. Polosukhin. Attention Is All You Need. 2017
A. Radford, K. Narasimhan, T. Salimans and I. Sutskever. Improving Language Understanding by Generative Pre-Training. 2018


Back Translation in Text Augmentation by nlpaug was originally published in Towards AI (Multidisciplinary Science Journal) on Medium, where people are continuing the conversation by highlighting and responding to this story.
