Attention Is All You Need: How the Transformer Architecture in NLP Started
Last Updated on September 2, 2024 by Editorial Team
Author(s): Surya Maddula
Originally published on Towards AI.
Original Paper: Attention is all you need.
AI-Generated Image
This was THE paper that introduced the Transformer architecture to NLP. This transformative concept led to the rise of LLMs and solved the problem of producing contextualized word embeddings!
Let's trace the journey that led up to the statement above.
I was researching Embedding Models, and some of the material I came across talked about Word Vector Embeddings.
Vector embeddings map real-world entities, such as a word, sentence, or image, into vector representations or points in some vector space.
Points that are closer to each other in a vector space are semantically similar; that is, they convey comparable meanings or concepts.
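To make that concrete, here is a minimal sketch of how closeness in a vector space is usually measured with cosine similarity. The vectors are made up purely for illustration; real embeddings have hundreds of dimensions and come from a trained model.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine of the angle between two vectors: close to 1.0 means same direction."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy, hand-made "embeddings" purely for illustration.
cat = np.array([0.8, 0.1, 0.6, 0.2])
dog = np.array([0.7, 0.2, 0.5, 0.3])
car = np.array([0.1, 0.9, 0.0, 0.8])

print(cosine_similarity(cat, dog))  # high  -> semantically close
print(cosine_similarity(cat, car))  # lower -> semantically distant
```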
A word embedding model, such as Word2Vec or GloVe, gives you an embedding vector for each word that captures its semantic meaning.
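As a rough illustration (my sketch, not code from the original post), pretrained GloVe vectors can be loaded through gensim's downloader and queried for nearest neighbours; "glove-wiki-gigaword-50" is assumed here as one of the bundled 50-dimensional GloVe releases.

```python
# Minimal sketch: load pretrained GloVe vectors and inspect them with gensim.
import gensim.downloader as api

glove = api.load("glove-wiki-gigaword-50")   # 50-dimensional GloVe vectors

print(glove["king"][:5])                     # first few dimensions of one embedding
print(glove.most_similar("king", topn=3))    # semantically closest words
```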
However, the problem with word embedding models is that they don't really understand the context.
For example:

The bark of the ancient oak tree was thick and rough, providing shelter for various insects.
The dog's bark echoed through the quiet neighborhood, alerting everyone to the approaching mailman.
Word embedding models like GloVe won't be able to separate these two senses of "bark": the word gets the same static vector in both sentences.
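A quick way to see the limitation (a sketch of mine using Hugging Face transformers, not code from the original post): a contextual model such as BERT produces a different vector for each occurrence of "bark", because each vector reflects the surrounding sentence, whereas a static embedding table would return one identical vector for both.

```python
import torch
from transformers import AutoTokenizer, AutoModel

sentences = [
    "The bark of the ancient oak tree was thick and rough.",
    "The dog's bark echoed through the quiet neighborhood.",
]

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

def bark_vector(sentence: str) -> torch.Tensor:
    """Return the contextual hidden state for the token 'bark' in the sentence."""
    inputs = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state[0]         # (seq_len, 768)
    tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
    return hidden[tokens.index("bark")]

v_tree, v_dog = (bark_vector(s) for s in sentences)

# The two vectors differ because each reflects its own context;
# a static GloVe lookup would give the same vector for both.
sim = torch.cosine_similarity(v_tree, v_dog, dim=0)
print(f"cosine similarity between the two 'bark' vectors: {sim.item():.2f}")
```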
Published via Towards AI