Explaining Transformers as Simple as Possible through a Small Language Model
Author(s): Alex Punnen
Originally published on Towards AI.
And understanding Vector Transformations and Vectorizations
I have read countless articles and watched many videos about Transformer networks over the past few years. Most of them were very good, yet I still struggled to understand the Transformer architecture, even though the main intuition behind it (context-sensitive embeddings) was easier to grasp. While giving a presentation, I tried a different and more effective way of explaining it. This article is based on that talk, and I hope it works as well in writing.
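To make that intuition concrete before we start, here is a minimal toy sketch of my own (not from the article's notebooks): a static embedding table gives "bank" one fixed vector everywhere, while a context-sensitive embedding mixes in the vectors of the surrounding words, so the same word ends up with different vectors in different sentences. The fixed 50/50 mixing weights are an illustrative assumption; a real Transformer learns them via attention.

```python
import numpy as np

rng = np.random.default_rng(0)
vocab = ["river", "bank", "money", "deposit"]
# Static embedding: one fixed vector per word, regardless of context.
static = {w: rng.normal(size=4) for w in vocab}

def contextual(sentence, word):
    # Naively mix the word's own vector with the average of its neighbours.
    # Real Transformers learn these mixing weights (attention) instead.
    neighbours = [static[w] for w in sentence if w != word]
    return 0.5 * static[word] + 0.5 * np.mean(neighbours, axis=0)

v1 = contextual(["river", "bank"], "bank")
v2 = contextual(["money", "bank", "deposit"], "bank")
print(np.allclose(v1, v2))  # False: same word, different context, different vector
```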
"What I cannot build, I do not understand." – Richard Feynman
I also remember that when I was learning about Convolutional Neural Networks, I did not fully understand them until I built one from scratch. So I have built a few notebooks that you can run in Colab, and highlights from them are presented here without clutter, because I feel that without engaging with this complexity it is not possible to understand Transformers in depth.
If you are unclear about vectors in the ML context, please read this brief article before diving in.
"Everything should be made as simple as possible, but not simpler." – Albert Einstein
Before we talk about Transformers and jump into the complexity of Keys, Queries, and Values, Self-Attention, and Multi-Head Attention, which…
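As a preview of those ideas, here is a minimal sketch of single-head self-attention in NumPy. The dimensions, random weight matrices, and variable names are my own illustrative assumptions, not the article's notebook code: each token's query is compared against every key, the softmax of those scores weights the values, and each token comes out as a context-sensitive mix of the whole sequence.

```python
import numpy as np

rng = np.random.default_rng(42)
seq_len, d_model, d_k = 3, 8, 4          # 3 tokens, model dim 8, head dim 4

X = rng.normal(size=(seq_len, d_model))  # token embeddings (random stand-ins)
W_q = rng.normal(size=(d_model, d_k))    # projections; learned in a real model
W_k = rng.normal(size=(d_model, d_k))
W_v = rng.normal(size=(d_model, d_k))

Q, K, V = X @ W_q, X @ W_k, X @ W_v      # queries, keys, values

scores = Q @ K.T / np.sqrt(d_k)               # scaled dot-product similarities
scores -= scores.max(axis=-1, keepdims=True)  # numerical stability for softmax
weights = np.exp(scores) / np.exp(scores).sum(axis=-1, keepdims=True)
out = weights @ V                        # each row: context-weighted mix of values

print(out.shape)  # (3, 4): one context-sensitive vector per token
```

Multi-head attention simply runs several such heads in parallel with their own projection matrices and concatenates the results; the article's notebooks build up to that step by step.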