
Multi-lingual Language Model Fine-tuning


Author(s): Edward Ma

Originally published on Towards AI.

The Problem of Low-resource Languages


Photo by Chloe Evans on Unsplash

English is one of the most resource-rich languages in the natural language processing field, and many state-of-the-art NLP models support English natively. To tackle multilingual downstream problems, cross-lingual language models (XLM) and other solutions have been proposed.
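As a concrete illustration of that approach, a cross-lingual model pre-trained on many languages can be loaded and then fine-tuned on a downstream task in a non-English language. The minimal sketch below assumes the Hugging Face transformers library and the publicly available xlm-roberta-base checkpoint; the checkpoint and the binary-classification setup are illustrative choices, not part of the original article.

```python
# Sketch: load a pre-trained cross-lingual model for a downstream task.
# xlm-roberta-base is one example of a publicly available XLM-style model.
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
model = AutoModelForSequenceClassification.from_pretrained(
    "xlm-roberta-base", num_labels=2  # e.g., binary sentiment
)

# The same tokenizer and encoder handle non-English input directly.
inputs = tokenizer("Ce film était vraiment excellent.", return_tensors="pt")
logits = model(**inputs).logits  # classification head is untrained: fine-tune before use
```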

However, a challenge remains when the target language has very limited training data. Eisenschlos et al. proposed MultiFiT to enable effective model training in the target language.

MultiFiT (Eisenschlos et al., 2019) aims to address the low-resource language problem. Its neural network architecture is based on Universal Language Model Fine-tuning (ULMFiT) (Howard and Ruder, 2018) and the quasi-recurrent neural network (QRNN) (Bradbury et al., 2017).
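To make the QRNN building block concrete, here is a minimal sketch of a single QRNN layer in PyTorch: a causal convolution computes candidate and gate activations for all timesteps in parallel, and a cheap elementwise "fo-pooling" recurrence then combines them across time. This is an illustrative reimplementation of the idea from Bradbury et al. (2017), not the code used in MultiFiT.

```python
import torch
import torch.nn as nn

class QRNNLayer(nn.Module):
    """Minimal QRNN layer sketch: parallel convolution + fo-pooling."""

    def __init__(self, input_size: int, hidden_size: int, kernel_size: int = 2):
        super().__init__()
        self.hidden_size = hidden_size
        # One convolution produces all three gate pre-activations at once;
        # padding of (kernel_size - 1) lets us trim the output to stay causal.
        self.conv = nn.Conv1d(input_size, 3 * hidden_size, kernel_size,
                              padding=kernel_size - 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, input_size)
        seq_len = x.size(1)
        # Conv1d expects (batch, channels, time); trim right padding for causality.
        gates = self.conv(x.transpose(1, 2))[:, :, :seq_len]
        z, f, o = gates.chunk(3, dim=1)
        z, f, o = torch.tanh(z), torch.sigmoid(f), torch.sigmoid(o)
        # fo-pooling: the only sequential step, using elementwise ops only.
        c = torch.zeros(x.size(0), self.hidden_size, device=x.device)
        outputs = []
        for t in range(seq_len):
            c = f[:, :, t] * c + (1 - f[:, :, t]) * z[:, :, t]
            outputs.append(o[:, :, t] * c)
        return torch.stack(outputs, dim=1)  # (batch, seq_len, hidden_size)
```

Because the heavy matrix computations sit in the convolution, which runs across all timesteps at once, a QRNN avoids the strictly sequential hidden-state multiplications of an LSTM, which is what makes it attractive for the fast per-language fine-tuning that MultiFiT targets.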


