NLP News Cypher | 02.02.20
Last Updated on July 27, 2023 by Editorial Team
Author(s): Ricky Costa
Originally published on Towards AI.
Weekly Newsletter: Natural Language Processing (NLP) News and Research
The Die Is Cast
Today is 02.02.20, the first global palindrome day in 909 years. 👀
How was your week?
Well, if you live anywhere near the NLP universe, you've probably stumbled on the NLP database. If you haven't, you should!
Next, I want to give a shout-out to two database contributors from the past week: Kiril Gashteovski and Chandra Sekhar. Thank You! Thus far, we have amassed 239 NLP datasets.
If you see a dataset missing, or have an edit request, please contact us on the database's web page.
This Week:
BERTs Lingua Franca
Deep Learning Boot Camp
Meena is Perplexing
The Conscious Mind
A Token of Appreciation
S&P Global NLP White Papers
Deployment Headaches
Dataset of the Week: QA-SRL Bank
BERTs Lingua Franca
On Twitter, Sebastian Ruder shared just how many international BERT models we already have! Then Hugging Face shared some more. In total, there are a lot of country flags on display! Great to see for the international community!
Hugging Face:
Me:
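If you want to kick the tires on any of these, they all load the same way through the transformers library. A minimal sketch, using multilingual BERT as a stand-in (swap in any language-specific model name from the hub):

```python
from transformers import AutoModel, AutoTokenizer

# Multilingual BERT shown as a stand-in; any of the language-specific
# BERTs on the Hugging Face hub load with the same two calls.
model_name = "bert-base-multilingual-cased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)

print(tokenizer.tokenize("¡Hola, mundo!"))
```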
Deep Learning Boot Camp
Beyond the footsteps of the next killer robot and Lex Fridman's dark suits, and way beyond the deepest reaches of MIT, there lies a 1-week deep learning boot camp. And it's on YouTube:
Meena is Perplexing
Google created a chatbot with a single training objective: minimize perplexity. Apparently, its quality is amazingly good. Reading Meena's conversations, it seems to be doing a great job at something that is very difficult for most chit-chat dialogue systems: memory. To pull this off, they used 1 encoder block and 13 decoder blocks; the encoder captures the conversation's context, and the decoders formulate a higher-quality response. Here's how the bot stacks up against the competition:
I asked Google Brain's Thang Luong if it will be open-sourced. Apparently, they are being cautious about its release, similarly to how OpenAI handled its own GPT-2 release:
Blog:
Towards a Conversational Agent that Can Chat About…Anything
Modern conversational agents (chatbots) tend to be highly specialized – they perform well as long as users don't stray…
ai.googleblog.com
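As a refresher on the objective itself: perplexity is just the exponential of the average per-token cross-entropy, so minimizing one minimizes the other. A toy sketch (the numbers are made up):

```python
import math

# Hypothetical per-token negative log-likelihoods (in nats) from a language model.
token_nlls = [2.1, 1.7, 3.0, 0.9]

# Perplexity = exp(average cross-entropy); lower means the model is less "surprised".
avg_nll = sum(token_nlls) / len(token_nlls)
print(f"perplexity = {math.exp(avg_nll):.2f}")
```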
The Conscious Mind
Circa seven years ago, in lower Manhattan, I randomly ran into David Chalmers outside of a movie theater (this was during his leather-jacket, thrash-metal-hair phase). As we exited the establishment, I told him how much I enjoyed his book "The Conscious Mind." I followed this up with a neuroscience joke. He smirked.
Anyway, here's Chalmers on the Fridman podcast:
A Token of Appreciation
It seems that every time I read a FloydHub article, the prerequisites are hot cocoa and a fireplace. In a recent article, they illustrate the various kinds of tokenizers and how they differ in functionality. Here are the tokenizers discussed, with a quick comparison sketch after the list (and make a s'more):
Subword Tokenization
Byte Pair Encoding (BPE)
Unigram Subword Tokenization
WordPiece
SentencePiece
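And here's the comparison sketch promised above: WordPiece (via BERT) and SentencePiece (via XLNet) tokenizing the same string with the transformers library. The exact output tokens depend on each model's learned vocabulary.

```python
from transformers import BertTokenizer, XLNetTokenizer

# WordPiece (BERT) marks word-internal pieces with '##';
# SentencePiece (XLNet) marks word starts with '▁'.
wordpiece = BertTokenizer.from_pretrained("bert-base-uncased")
sentencepiece = XLNetTokenizer.from_pretrained("xlnet-base-cased")

text = "Tokenization is unbelievably fun"
print(wordpiece.tokenize(text))
print(sentencepiece.tokenize(text))
```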
Tokenizers: How machines read
The world of Deep Learning (DL) Natural Language Processing (NLP) is evolving at a rapid pace. We tried to capture some…
blog.floydhub.com
S&P Global NLP White Papers
Market research firm S&P Global released several white papers on the use of NLP in finance. They also share use cases and code, which is rare for private industry. Anyway, it's always good to keep up on the business side of things.
Part I:
Part II:
Part III:
Deployment Headaches
If you want to deploy your model, this article will help. Caleb Kaiser from Cortex shows the common pitfalls of deploying a large transformer model while requiring it to work at scale.
Too big to deploy: How GPT-2 is breaking production
A look at the bottleneck around deploying massive models to production
towardsdatascience.com
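One of the pitfalls boils down to simple arithmetic: the weights alone set a floor on memory per replica, before you count activations or concurrent requests. A rough back-of-the-envelope sketch, assuming the transformers library:

```python
from transformers import GPT2LMHeadModel

# Count parameters and estimate the fp32 weight footprint per replica.
model = GPT2LMHeadModel.from_pretrained("gpt2")  # the smallest GPT-2; larger variants multiply this
n_params = sum(p.numel() for p in model.parameters())

print(f"{n_params / 1e6:.0f}M parameters")
print(f"~{n_params * 4 / 1e9:.2f} GB of weights at fp32 (before activations)")
```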
Dataset of the Week: QA-SRL Bank
What is it?
It's a question-answering dataset for semantic role labeling: each verb in a sentence is paired with natural-language questions whose answers mark its arguments.
Sample:
QA-SRL | Browse Data
browse.qasrl.org
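If you can't click through, here's an illustrative (made-up, not from the dataset) annotation in the QA-SRL style:

```python
# Illustrative only: not an actual QA-SRL Bank entry.
sample = {
    "sentence": "The committee approved the budget on Friday.",
    "verb": "approved",
    "qa_pairs": [
        {"question": "Who approved something?", "answer": "The committee"},
        {"question": "What did someone approve?", "answer": "the budget"},
        {"question": "When did someone approve something?", "answer": "on Friday"},
    ],
}
```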
Where is it?
uwnlp/qasrl-bank
This repository is the reference point for QA-SRL Bank 2.0, the dataset described in the paper Large-Scale QA-SRL…
github.com
Every Sunday we do a weekly round-up of NLP news and code drops from researchers around the world.
If you enjoyed this article, help us out and share with friends or social media!
For complete coverage, follow our Twitter: @Quantum_Stat
Join thousands of data leaders on the AI newsletter. Join over 80,000 subscribers and keep up to date with the latest developments in AI. From research to projects and ideas. If you are building an AI startup, an AI-related product, or a service, we invite you to consider becoming a sponsor.
Published via Towards AI