NLP News Cypher | 07.26.20
Author(s): Quantum Stat
NATURAL LANGUAGE PROCESSING (NLP) WEEKLY NEWSLETTER
Primus
The Liber Primus is unsolved to this day: a book of 58 pages written in runes, whose bewildering encryption continues to haunt hacker gunslingers around the globe who choose to communicate and study its contents only via IRC (internet chat relays).
The cryptic book arrived on the internet in the mid-2010s via the now wildly popular but mysterious internet group 3301. While the group's identity remains hidden, it is speculated they are a remnant of the cypherpunk activist movement (born somewhere out of Berkeley in the 80s). At least that is the most plausible explanation given to us by one of the few known hackers who made it inside the clandestine group, Marcus Wanner. But who knows…
3301's Cicada project started with a random 4chan post in 2012, leading many thrill seekers, with a cult-like following, on a puzzle hunt that encompassed everything from steganography to cryptography. While most of their puzzles were eventually solved, the very last one, the Liber Primus, is still (mostly) encrypted. The last known comms from 3301 came in April 2017 via a Pastebin post. It reads:
Message from 3301/Cicada – Pastebin.com
FYI, there's a standard PGP (Pretty Good Privacy) key for all 3301 posts. If you see a 3301 online post without their PGP signature, don't trust it (plenty of troll accounts to be found).
For a Summary/Timeline:
Visit Nox's YouTube channel if you are interested in understanding how the Cicada puzzles that preceded the Liber Primus were cracked.
Meanwhile, back at the ranch…
I managed to get a training script for adapters working (the modular add-ons discussed in last week's blog). The script works on the GLUE datasets. I'll keep everyone updated as new events unfold regarding the AdapterHub. Very excited about this new framework; once again, thanks to Jonas for nudging me in the right direction.
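If you want a feel for what such a script involves, here is a minimal sketch of adapter training with the adapter-transformers library behind AdapterHub. The method names follow its docs as I understand them and may differ across versions, so treat it as an outline rather than my actual script.

```python
# Minimal sketch of adapter training with the adapter-transformers library behind
# AdapterHub. Method names are my assumption from its docs and may differ across
# versions -- an outline, not a drop-in implementation.
import torch
from transformers import AutoTokenizer, AutoModelWithHeads

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelWithHeads.from_pretrained("bert-base-uncased")

# Add a task adapter plus a classification head for a GLUE-style task (SST-2 here).
model.add_adapter("sst-2")
model.add_classification_head("sst-2", num_labels=2)

# Freeze the pretrained weights so only the small adapter modules get gradients.
model.train_adapter("sst-2")
model.set_active_adapters("sst-2")

# Only the adapter (and head) parameters remain trainable.
optimizer = torch.optim.AdamW(
    (p for p in model.parameters() if p.requires_grad), lr=1e-4
)

# From here the loop is the usual Transformers pattern: tokenize a GLUE batch,
# forward pass, loss.backward(), optimizer.step().
```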
Stay Frosty!
This Week
SimpleTOD
TurboTransformers
NLP & Audio Pretrained Models
NERtwork
AllenNLP Library Step-by-Step
Search Engining is Hard Bruh
Dataset of the Week: ODSQA
SimpleTOD
Task-oriented dialogue systems, especially those chatbots we all dream of one day building, have traditionally been built as a standard modular pipeline (similar to what you find in the RASA framework). However, Salesforce Research recently released SimpleTOD, a unidirectional language model that attempts to solve all the sub-tasks in an end-to-end manner. It was built with Transformers and trained on the MultiWOZ dataset.
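To make the "one model, one sequence" idea concrete, here is a toy sketch using vanilla GPT-2 from Hugging Face. The delimiter tokens are made up purely for illustration; they are not SimpleTOD's actual token scheme, training data, or checkpoint.

```python
# Toy illustration of the end-to-end idea behind SimpleTOD: a single causal LM
# generates belief state, actions, and response as one flat text sequence.
# Vanilla GPT-2 and made-up delimiters are used here for illustration only.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

# Dialogue context concatenated into one string, as an end-to-end model would see it.
context = "<|user|> i need a cheap italian restaurant in the centre <|belief|>"

inputs = tokenizer(context, return_tensors="pt")
output_ids = model.generate(
    **inputs,
    max_length=inputs["input_ids"].shape[1] + 40,
    do_sample=False,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(output_ids[0]))
```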
Blog:
SimpleTOD: A Simple Language Model for Task-Oriented Dialogue
Paper:
GitHub:
TurboTransformers
TurboTransformers, a recent transformer runtime library for inference, came to my attention. It optimizes what everyone wants in production: lower latency. They claim:
It brings 1.88x acceleration to the WeChat FAQ service, 2.11x acceleration to the public cloud sentiment analysis service, and 13.6x acceleration to the QQ recommendation system.
The selling point is that it supports variable-length input sequences without preprocessing, which reduces computation overhead.
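If you want to sanity-check latency on your own hardware, here is a rough baseline timing loop with stock PyTorch BERT on variable-length inputs. The TurboTransformers conversion at the end is commented out and hypothetical, so verify the call against their repo before using it.

```python
# Rough latency check on variable-length inputs with a stock PyTorch BERT baseline.
# The turbo_transformers conversion below is an assumption about the project's API
# and may not match it exactly -- check the repo's README.
import time
import torch
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased").eval()

sentences = [
    "short query",
    "a somewhat longer question about latency in a production serving setup",
]

with torch.no_grad():
    for text in sentences:
        inputs = tokenizer(text, return_tensors="pt")
        start = time.perf_counter()
        model(**inputs)
        print(f"len={inputs['input_ids'].shape[1]:>3} tokens  "
              f"latency={1000 * (time.perf_counter() - start):.1f} ms")

# Hypothetical drop-in conversion (verify against the TurboTransformers docs):
# import turbo_transformers
# turbo_model = turbo_transformers.BertModel.from_torch(model)
```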
GitHub:
NLP & Audio Pretrained Models
A nice collection of pretrained model libraries found on GitHub. These two repos cover NLP and speech modeling. Conveniently, the models are indexed by framework and include a brief description.
NLP
balavenkatesh3322/NLP-pretrained-model
Speech/Audio
balavenkatesh3322/audio-pretrained-model
NERtwork
Awesome new shell/Python script that graphs a network of co-occurring entities from plain text!
It combines Stanford's NER for entity extraction, OpenRefine for data normalization (e.g., "B. Obama" and "Barack" are the same entity), and NetworkX for graph creation.
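For flavor, here is a simplified sketch of the same idea in Python. It is not the NERtwork script itself: spaCy stands in for Stanford NER, and a tiny alias table stands in for OpenRefine-style normalization.

```python
# Simplified take on the NERtwork idea: extract named entities per document and
# graph their co-occurrence with NetworkX. spaCy replaces Stanford NER here, and
# the manual alias table replaces OpenRefine-style normalization.
from itertools import combinations
import networkx as nx
import spacy

nlp = spacy.load("en_core_web_sm")  # assumes the small English model is installed

aliases = {"B. Obama": "Barack Obama", "Barack": "Barack Obama"}  # toy normalization

docs = [
    "B. Obama met Angela Merkel in Berlin.",
    "Barack later spoke with Angela Merkel by phone.",
]

graph = nx.Graph()
for text in docs:
    ents = {aliases.get(e.text, e.text) for e in nlp(text).ents}
    for a, b in combinations(sorted(ents), 2):
        weight = graph.get_edge_data(a, b, {}).get("weight", 0) + 1
        graph.add_edge(a, b, weight=weight)

print(graph.edges(data=True))
```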
Blog: http://brandontlocke.com/2020/07/22/announcing-nertwork.html
GitHub (Profile photo of the week):
AllenNLP Library Step-by-Step
The best step-by-step guide to AllenNLP's library to date. Lengthy, but worthwhile, with code pasted along the way. The demo walks through building and training an NER LSTM model.
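If you just want the shape of the model before diving into the guide, here is a bare-bones BiLSTM tagger in plain PyTorch; the tutorial builds the same idea with AllenNLP's dataset readers, vocabulary, and trainer abstractions on top.

```python
# Bare-bones BiLSTM tagger in plain PyTorch, just to show the shape of the model the
# AllenNLP tutorial builds with far better abstractions (datasets, vocab, trainer).
import torch
import torch.nn as nn

class BiLSTMTagger(nn.Module):
    def __init__(self, vocab_size, num_tags, embed_dim=64, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True, bidirectional=True)
        self.proj = nn.Linear(2 * hidden_dim, num_tags)

    def forward(self, token_ids):
        hidden, _ = self.lstm(self.embed(token_ids))
        return self.proj(hidden)  # (batch, seq_len, num_tags) tag logits

# Toy forward/backward pass with random data.
model = BiLSTMTagger(vocab_size=1000, num_tags=9)  # e.g., 9 BIO tags
tokens = torch.randint(0, 1000, (2, 12))           # batch of 2 sentences, 12 tokens
gold = torch.randint(0, 9, (2, 12))

logits = model(tokens)
loss = nn.CrossEntropyLoss()(logits.view(-1, 9), gold.view(-1))
loss.backward()
print(loss.item())
```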
Blog:
Search Engining is Hard Bruh
A research scientist from AI2 discusses the hardships of building the Semantic Scholar search engine, which currently indexes 190M scientific papers.
It uses a two-stage architecture: sparse retrieval via Elasticsearch, followed by an ML re-ranking model.
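Here is a toy illustration of that retrieve-then-rerank pattern. The scoring functions are deliberately crude stand-ins, not Semantic Scholar's actual Elasticsearch queries or learned ranking model.

```python
# Toy version of the retrieve-then-rerank pattern: a cheap sparse retrieval pass
# narrows the candidate set, then a costlier scorer reorders the short list.
# Real systems use Elasticsearch/BM25 and a learned ranker; these are stand-ins.
papers = [
    "A survey of transformer architectures for language modeling",
    "Protein folding with deep learning",
    "Efficient transformers: sparse attention for long documents",
]

def sparse_retrieve(query, docs, k=2):
    """First stage: crude term-overlap score, standing in for BM25/Elasticsearch."""
    q_terms = set(query.lower().split())
    scored = [(len(q_terms & set(d.lower().split())), d) for d in docs]
    return [d for score, d in sorted(scored, reverse=True)[:k] if score > 0]

def rerank(query, candidates):
    """Second stage: stand-in for the ML ranker (here, it just favors shorter titles)."""
    return sorted(candidates, key=len)

candidates = sparse_retrieve("sparse transformers", papers)
print(rerank("sparse transformers", candidates))
```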
The blog goes in-depth into the challenges they faced while building the search engine, such as data complexity and evaluation problems. It offers far more detail than I can do justice to in this post, so give it a read if you are interested in search.
Building a Better Search Engine for Semantic Scholar
Dataset of the Week: ODSQA
What is it?
ODSQA is a Chinese dataset for extractive spoken question answering. It contains 3,654 question-answer pairs.
Paper: https://arxiv.org/pdf/1808.02280.pdf
Where is it?
Every Sunday we do a weekly round-up of NLP news and code drops from researchers around the world.
If you enjoyed this article, help us out and share with friends!
For complete coverage, follow our Twitter: @Quantum_Stat