
The NLP Cypher | 05.09.21

Last Updated on May 11, 2021 by Editorial Team

Author(s): Quantum Stat

Saturn as seen from Mimas | Chesley Bonestell

NATURAL LANGUAGE PROCESSING (NLP) WEEKLY NEWSLETTER

Lost Tales

I mostly know dark.fail as an onion site with a great collection of URLs for parasailing tor-land (aka the darknet). To be honest, I didn’t even know dark.fail had a clearnet site. And very recently, its clearnet mirror was phished for a total of 4–5 days. 👀

Apparently a threat actor presented a fake court order to dark.fail’s domain registrar and, in return, obtained access to dark.fail’s hosting and rerouted traffic to the bad actor’s mirrored web page. The mirror phished the site’s URLs with the intention of fooling people into thinking they were buying products on the dark markets, when instead the bad actor(s) were pocketing their bitcoin. This has caused a big uproar in the hacking community given dark.fail’s popularity. 🥶

The anonymous owner of dark.fail appeared on a hacker podcast this past weekend to discuss the hijacking, speaking via text-to-speech software to protect their voice identity. You can watch/listen here:

https://medium.com/media/a9c9f8844f55519b5f891ed693e2eb24/href

And in other news…

ICLR Residuals…

Galkin’s Knowledge Graph Review from ICLR

Couldn’t have a conference without getting a Galkin knowledge graph review!

TOC:

  1. Reasoning in Knowledge Graphs: Simpler than you thought
  2. Temporal Logics and KGs
  3. NLP Perspective: PMI & Relations, Entity Linking
  4. Complex Question Answering: More Modalities
  5. Lookback

Knowledge Graphs @ ICLR 2021

The NLP Index Update

Since last week, we’ve added ~750 new repos to the index, and we’ve included GitHub stars and the programming language for each repo.

In addition, we added nearly 1,000 introductory videos for select assets. Thank you to Amit Chaudhary for the data! 🐱‍👤

Check it out here:

The NLP Index

A Commonsense Knowledge Base Construction

Check out how the Max Planck Institute for Informatics is building commonsense knowledge bases.

This paper introduces 3 systems:

Quasimodo: “an open-source commonsense knowledge base designed to get relevant properties about entities.” site

Dice: “a reasoning framework for deriving refined and expressive commonsense knowledge from existing CSK collections.” site

Ascent: “a pipeline for automatically collecting, extracting and consolidating commonsense knowledge (CSK) from the web.” site
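
All three systems face the same core step: consolidating noisy (subject, predicate, object) assertions extracted from many sources. A minimal, purely illustrative sketch of that consolidation, normalizing triples and ranking them by extraction support (the `consolidate` helper below is made up, not code from any of the three systems):

```python
from collections import Counter

def consolidate(triples):
    """Consolidate noisy commonsense assertions by normalizing
    (subject, predicate, object) triples and counting how many
    raw extractions support each one."""
    counts = Counter(
        (s.strip().lower(), p.strip().lower(), o.strip().lower())
        for s, p, o in triples
    )
    # Keep triples seen more than once, ranked by support.
    return sorted(
        ((t, n) for t, n in counts.items() if n > 1),
        key=lambda x: -x[1],
    )

raw = [
    ("Elephant", "is capable of", "remembering"),
    ("elephant", "is capable of", "remembering "),
    ("elephant", "lives in", "savanna"),
]
print(consolidate(raw))
# → [(('elephant', 'is capable of', 'remembering'), 2)]
```

The real pipelines score assertions with far richer signals (salience, typicality, source quality), but the normalize-then-aggregate shape is the same.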

A Large Netflix Dataset

“This dataset combines data sources from Netflix, Rotten Tomatoes, IMDb, posters, box office information, trailers on YouTube, and more using a variety of APIs.” Netflix doesn’t have its own API, so the devs just went nuclear on triangulating Netflix’s data via other sources. 🙉
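
Conceptually, the “triangulation” boils down to joining records from many sources on a shared key. A minimal sketch, assuming a normalized title as the join key (the field names and `join_on_title` helper below are hypothetical, not the dataset’s actual schema):

```python
def join_on_title(*sources):
    """Merge records from several data sources keyed on a shared
    'title' field, mimicking triangulation of Netflix metadata
    through third-party APIs. Later sources fill in new fields."""
    merged = {}
    for source in sources:
        for record in source:
            # Normalize the title so the same film joins across sources.
            key = record["title"].strip().lower()
            merged.setdefault(key, {}).update(record)
    return merged

netflix = [{"title": "Okja", "genre": "Drama"}]
rotten_tomatoes = [{"title": "Okja", "tomatometer": 86}]
imdb = [{"title": "okja ", "imdb_rating": 7.3}]

combined = join_on_title(netflix, rotten_tomatoes, imdb)
# combined["okja"] now carries genre, tomatometer, and imdb_rating.
```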

Last updated April 2021 according to authors.

Latest Netflix data with 26+ joined attributes

Awesome Self-Supervised Learning

Index for all things Self-Supervised Learning across different domains such as vision, NLP, graphs and more.

jason718/awesome-self-supervised-learning

For an intuitive intro into self-supervised learning, check out Sergey Ivanov’s blog:

GML In-Depth: three forms of self-supervised learning
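
One common flavor of self-supervision in NLP is masked prediction: hide pieces of the input and train a model to recover them, so the labels come for free from the raw text. A minimal, library-free sketch of building such training pairs (`make_masked_pairs` is an illustrative helper, not from any library):

```python
import random

MASK = "[MASK]"

def make_masked_pairs(tokens, mask_prob=0.15, seed=0):
    """Create (input, target) pairs for a masked-prediction pretext
    task: each selected position is replaced by MASK in the input,
    and the original token at that position becomes its target."""
    rng = random.Random(seed)
    inputs, targets = list(tokens), {}
    for i in range(len(tokens)):
        if rng.random() < mask_prob:
            inputs[i] = MASK
            targets[i] = tokens[i]
    return inputs, targets

tokens = "self supervised learning needs no labels".split()
inputs, targets = make_masked_pairs(tokens)
```

No human annotation is involved anywhere: the corpus itself supplies both inputs and targets, which is the whole point of the self-supervised setup.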

Repo Cypher 👨‍💻

A collection of recently released repos that caught our 👁

SUPERB Benchmark for Speech

A collection of benchmarking resources to evaluate the capability of a universal shared representation for speech processing. SUPERB consists of the following:

  1. A benchmark of ten speech processing tasks built on established public datasets;
  2. A benchmark toolkit designed to evaluate and analyze pretrained model performance on various downstream tasks, following the conventional evaluation protocols of the speech community;
  3. A public leaderboard for submissions and performance tracking on the benchmark.
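
The setup can be sketched as a harness that keeps one upstream representation frozen and loops it over many downstream tasks. This is a purely illustrative toy, not the actual s3prl toolkit API (`run_benchmark`, the task dicts, and the toy upstream are all made up):

```python
def run_benchmark(upstream, tasks):
    """Evaluate one frozen upstream representation across several
    downstream tasks, SUPERB-style. 'upstream' maps raw input to
    features; each task supplies its data and a scoring routine."""
    leaderboard = {}
    for name, task in tasks.items():
        # The upstream model is shared and frozen across all tasks.
        features = [upstream(x) for x in task["data"]]
        leaderboard[name] = task["score"](features)
    return leaderboard

# Toy upstream: the "features" are just utterance lengths.
upstream = len
tasks = {
    "speaker_id": {"data": ["aa", "bbbb"], "score": lambda f: sum(f) / len(f)},
    "keyword_spotting": {"data": ["cc"], "score": max},
}
print(run_benchmark(upstream, tasks))
# → {'speaker_id': 3.0, 'keyword_spotting': 2}
```

The design point is that only the lightweight per-task heads change; the shared representation is scored by how well it serves all ten tasks at once.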

SUPERB: Speech processing Universal PERformance Benchmark

Associated repo:

s3prl/s3prl

Connected Papers 📈

Explainable Text VQA

A dataset containing ground truth visual and multi-reference textual explanations that can be leveraged during both training and evaluation.

Dataset not officially out yet, but keep track of this repo for updates.

amzn/explainable-text-vqa

Connected Papers 📈

Rare Disease Identification

Using ontologies and weak supervision to identify rare diseases from clinical notes.
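
The weak-supervision idea can be sketched as labeling functions that vote on a candidate mention: one backed by an ontology lookup, one by a lexical cue. Everything below (the toy ontology, the combination rule, the helper names) is hypothetical, not the repo’s actual pipeline:

```python
# Tiny ontology of surface forms -> disease identifier (illustrative).
ONTOLOGY = {
    "fabry disease": "ORPHA:324",
    "pompe disease": "ORPHA:365",
}

def lf_exact_match(mention):
    """Labeling function 1: exact lookup in the ontology."""
    return ONTOLOGY.get(mention.lower())

def lf_suffix_cue(mention):
    """Labeling function 2: weak lexical cue ('... disease')."""
    return "RARE_DISEASE?" if mention.lower().endswith("disease") else None

def weak_label(mention, lfs=(lf_exact_match, lf_suffix_cue)):
    """Combine labeling-function votes: an ontology hit wins;
    otherwise fall back to the weak cue, else abstain (None)."""
    votes = [lf(mention) for lf in lfs]
    for v in votes:
        if v and v.startswith("ORPHA:"):
            return v
    return next((v for v in votes if v), None)

print(weak_label("Fabry disease"))   # → ORPHA:324
print(weak_label("heart disease"))   # → RARE_DISEASE?
print(weak_label("headache"))        # → None
```

The noisy labels produced this way can then train a supervised mention classifier without any hand annotation of the clinical notes.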

acadTags/Rare-disease-identification

Connected Papers 📈

The Carleton Benchmark Suite (CBench)

A benchmarking framework for evaluating question answering systems over knowledge graphs.

aorogat/CBench

Connected Papers 📈

AMR Parser with Action-Pointer Transformer

Abstract Meaning Representation (AMR) parsing is a sentence-to-graph prediction task where target nodes are not explicitly aligned to sentence tokens.

The authors used a transformer with an action-pointer mechanism that handles the generation of arbitrary graph constructs.
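
In a transition-based setup, the parser emits a sequence of actions that incrementally build the graph. A toy sketch of applying such a sequence (the NODE/EDGE/SHIFT action names are illustrative, not the repo’s exact action inventory):

```python
def apply_actions(tokens, actions):
    """Build a small graph from a transition sequence, in the spirit
    of transition-based AMR parsing. Actions (illustrative names):
      ("NODE", concept)    create a node, aligned to the current token
      ("EDGE", label, tgt) add an edge from the newest node to node tgt
      ("SHIFT",)           advance the token cursor
    """
    nodes, edges, cursor = [], [], 0
    for action in actions:
        if action[0] == "NODE":
            nodes.append({"id": len(nodes), "concept": action[1],
                          "token": tokens[cursor]})
        elif action[0] == "EDGE":
            edges.append({"label": action[1],
                          "src": len(nodes) - 1, "tgt": action[2]})
        elif action[0] == "SHIFT":
            cursor += 1
    return nodes, edges

tokens = ["the", "boy", "runs"]
actions = [("SHIFT",), ("NODE", "boy"), ("SHIFT",),
           ("NODE", "run-01"), ("EDGE", ":ARG0", 0)]
nodes, edges = apply_actions(tokens, actions)
# Yields the graph: run-01 --:ARG0--> boy
```

Note how nodes are created against token positions rather than being explicitly aligned in the data, which is the alignment problem the paper’s pointer mechanism addresses.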

IBM/transition-amr-parser

Connected Papers 📈

ADAM

ADAM is a demonstration of “grounded language acquisition,” which is to say learning (some amount of) language from observing how language is used in concrete situations, like infants (presumably) do. 👀

This work is under DARPA’s Grounded Artificial Intelligence Language Acquisition (GAILA) program. 🛸👽

isi-vista/adam

Connected Papers 📈

Knover | Knowledge Grounded Dialogue Generation

Knover is a toolkit for knowledge grounded dialogue generation based on PaddlePaddle. It allows researchers and developers to carry out efficient training and inference of large-scale dialogue generation models.

PaddlePaddle/Knover

Connected Papers 📈

Dataset of the Week: Ascent

What is it?

A pipeline for automatically collecting, extracting and consolidating commonsense knowledge (CSK) from the web.

Where is it?

AscentKB

Every Sunday we do a weekly round-up of NLP news and code drops from researchers around the world.

For complete coverage, follow our Twitter: @Quantum_Stat

Quantum Stat


The NLP Cypher | 05.09.21 was originally published in Towards AI on Medium, where people are continuing the conversation by highlighting and responding to this story.

