
The NLP Cypher | 03.28.21


Author(s): Quantum Stat

Couple Gazing at the Moon | Friedrich

NATURAL LANGUAGE PROCESSING (NLP) WEEKLY NEWSLETTER

A Step Forward in Open-Sourcing GPT-3

In the 02.21.21 newsletter, we highlighted EleutherAI's ambitions for building an open-sourced version of the uber-large GPT-3 175B-param model. This week, they released two versions, at 1.3B and 2.7B params, as a stepping stone towards paradise. Here's how the current GPT models stack up.

"The release includes:

The full modeling code, written in Mesh TensorFlow and designed to be run on TPUs.

Trained model weights.

Optimizer states, which allow you to continue training the model from where EleutherAI left off.

A Google Colab notebook that shows you how to use the code base to train, fine-tune, and sample from a model."

Their notebook requires a Google storage bucket to access the data, since TPUs can't read from local file systems. You can set up a free trial fairly easily; they provide a link in the notebook.

Colab:

Google Colaboratory

Code:

EleutherAI/gpt-neo
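
If you'd rather skip the TPU and bucket setup, the checkpoints also made their way to the Hugging Face Hub. A minimal sampling sketch, assuming the EleutherAI/gpt-neo-1.3B model ID and a transformers version with GPT-Neo support:

```python
# Minimal sketch: sample from GPT-Neo via transformers (assumes the
# "EleutherAI/gpt-neo-1.3B" checkpoint on the Hugging Face Hub).
from transformers import pipeline

generator = pipeline("text-generation", model="EleutherAI/gpt-neo-1.3B")
out = generator("EleutherAI is building", max_length=40, do_sample=True, temperature=0.9)
print(out[0]["generated_text"])
```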

Hacker Side Note:

Earlier this year, EleutherAI apparently suffered a DDoS attack. Connor Leahy, a co-founder, tweeted a visualization of the abnormal traffic: a bunch of HTTP 403s hitting the Pile dataset. If you would like to donate to their cause (and towards secured hosting), go here: SITE.

Visualization of abnormal traffic

Millions of Translated Sentences in 188 Languages

Whoa, that's a lot of translated corpora. The Helsinki-NLP collection is a menacing pile of monolingual data that includes:

"translations of Wikipedia, WikiSource, WikiBooks, WikiNews and WikiQuote (if available for the source language we translate from)"

Helsinki-NLP/Tatoeba-Challenge
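
Helsinki-NLP also publishes its OPUS-MT translation models on the Hugging Face Hub; a quick sketch (the en-fr model ID below is our pick of one example among hundreds):

```python
# Sketch: translate with one of Helsinki-NLP's OPUS-MT models
# ("Helsinki-NLP/opus-mt-en-fr" is one example model ID of many).
from transformers import pipeline

translator = pipeline("translation", model="Helsinki-NLP/opus-mt-en-fr")
print(translator("That is a lot of translated corpora.")[0]["translation_text"])
```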

Backprop AI | Fine-tune and Deploy ML Models

The library can fine-tune models with one line of code (a rough usage sketch follows the repo link below).

Features:

  • Conversational question answering in English
  • Text classification in 100+ languages
  • Image classification
  • Text vectorization in 50+ languages
  • Image vectorization
  • Summarization in English
  • Emotion detection in English
  • Text generation

backprop-ai/backprop
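
A rough usage sketch of that one-liner style; the class name and call signature here are our assumption from memory of the README, not verified against the repo, so double-check before copying:

```python
# Hypothetical sketch of backprop's one-liner QA; the QA class name and
# its signature are assumptions — check the repo's README for the real API.
import backprop

qa = backprop.QA()
answer = qa("Where does Sally live?", "Sally lives in London.")
print(answer)  # expected: "London"
```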

TorchSort

Mr. Koker took Google's differentiable sorting and ranking library and converted it to PyTorch (faster than the original, even). Areas where ranking is used: Spearman's rank correlation coefficient, top-k accuracy, and normalized discounted cumulative gain (NDCG).
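
A sketch of the headline use case, a differentiable Spearman correlation built from soft ranks (mirroring the pattern in the repo's README):

```python
import torch
import torchsort

def spearmanr(pred, target, **kw):
    # Soft ranks are differentiable, so the Pearson correlation of the
    # ranks (i.e., Spearman correlation) can be used directly as a loss.
    pred = torchsort.soft_rank(pred, **kw)
    target = torchsort.soft_rank(target, **kw)
    pred = pred - pred.mean()
    pred = pred / pred.norm()
    target = target - target.mean()
    target = target / target.norm()
    return (pred * target).sum()

pred = torch.tensor([[1., 2., 3., 4., 5.]], requires_grad=True)
target = torch.tensor([[5., 6., 7., 8., 7.]])
spearmanr(pred, target).backward()  # gradients flow through the ranking
```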

Benchmarks:

teddykoker/torchsort

Dive Into Graphs (DIG) | A New Graph Library

This new library spans four research areas: graph generation, self-supervised learning on graphs, explainability of GNNs, and deep learning on 3D graphs.

divelab/DIG

Awesome Newspapers | A Possible Data Source?

A curated list of online newspapers covering 79 languages and 7,102 sources. The data hasn't been scraped; the repo just indexes the sources.

divkakwani/awesome-newspapers

State of Search | The DeepSet Connection

deepset walks you down memory lane in open-domain QA/search: beginning with the two-stage Retriever-Reader setup (ha! remember reading the Chen et al. 2017 paper?), then RAG and generated responses (as opposed to extractive ones), then summarization (using Pegasus), and finally their latest "Pipelines" initiative. They also discuss a future initiative to use a query classifier to decide which type of retrieval their software should run (dense vs. sparse). This is really cool because it shows their interest in using hierarchy in AI decision-making by using… AI.

Haystack: The State of Search in 2021
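
For flavor, here's roughly what that two-stage Retriever-Reader pattern looks like in Haystack. The module paths follow the 0.x layout from around this time and have since been reorganized, so treat this as a sketch rather than gospel:

```python
# Sketch of Haystack's Retriever-Reader pattern (0.x-era module paths,
# which have since changed; the reader checkpoint is deepset's public one).
from haystack.document_store.memory import InMemoryDocumentStore
from haystack.retriever.sparse import TfidfRetriever
from haystack.reader.farm import FARMReader
from haystack.pipeline import ExtractiveQAPipeline

store = InMemoryDocumentStore()
store.write_documents([{"text": "Haystack is maintained by deepset.", "meta": {}}])

retriever = TfidfRetriever(document_store=store)                        # stage 1: retrieve
reader = FARMReader(model_name_or_path="deepset/roberta-base-squad2")   # stage 2: read
pipe = ExtractiveQAPipeline(reader=reader, retriever=retriever)

prediction = pipe.run(query="Who maintains Haystack?", top_k_retriever=3, top_k_reader=1)
print(prediction["answers"])
```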

Stanford's Ode to Peeps in the Intelligence Community to Adopt AI

An interesting white paper from Stanford giving advice, and a warning, to the US intelligence community about adopting AI and keeping up with a fast-moving field in order to stay competitive. They also recommend an open-source intelligence agency.

"…one Stanford study reported that a machine learning algorithm could count trucks transiting from China to North Korea on hundreds of satellite images 225 times faster than an experienced human imagery analyst – with the same accuracy."

HAI_USIntelligence_FINAL.pdf

Matrix Multiplication | Reaching N²

What will it take to multiply a pair of n-by-n matrices in only n² steps?

FYI, matrix multiplication is the engine of all deep neural networks. The latest improvement "shaves about one-hundred-thousandth off the exponent of the previous best mark." Take that, Elon!

Matrix Multiplication Inches Closer to Mythic Goal
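
For intuition on how the exponent drops below 3 at all: Strassen's classic trick multiplies 2×2 blocks with 7 multiplications instead of 8, giving O(n^log₂7) ≈ O(n^2.807). The record-setting algorithms use far more elaborate machinery, but a toy Strassen makes the idea concrete:

```python
# Toy Strassen multiplication (n must be a power of two): 7 recursive
# multiplications per block split instead of 8 => O(n^2.807) overall.
import numpy as np

def strassen(A, B):
    n = A.shape[0]
    if n == 1:
        return A * B
    k = n // 2
    A11, A12, A21, A22 = A[:k, :k], A[:k, k:], A[k:, :k], A[k:, k:]
    B11, B12, B21, B22 = B[:k, :k], B[:k, k:], B[k:, :k], B[k:, k:]
    M1 = strassen(A11 + A22, B11 + B22)
    M2 = strassen(A21 + A22, B11)
    M3 = strassen(A11, B12 - B22)
    M4 = strassen(A22, B21 - B11)
    M5 = strassen(A11 + A12, B22)
    M6 = strassen(A21 - A11, B11 + B12)
    M7 = strassen(A12 - A22, B21 + B22)
    top = np.hstack([M1 + M4 - M5 + M7, M3 + M5])
    bottom = np.hstack([M2 + M4, M1 - M2 + M3 + M6])
    return np.vstack([top, bottom])

A, B = np.random.rand(4, 4), np.random.rand(4, 4)
assert np.allclose(strassen(A, B), A @ B)
```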

The Repo Cypher

A collection of recently released repos that caught our eye.

GENRE (Generative Entity Retrieval)

GENRE uses a sequence-to-sequence approach to entity retrieval (e.g., linking), based on a fine-tuned BART architecture. Includes Fairseq and Hugging Face support.

facebookresearch/GENRE

Connected Papers

Shadow GNN

A library for graph representation learning. It currently supports six different architectures: GCN, GraphSAGE, GAT, GIN, JK-Net, and SGC.

facebookresearch/shaDow_GNN

Connected Papers

Unicorn on Rainbow | A Commonsense Reasoning Benchmark

Rainbow brings together six pre-existing commonsense reasoning benchmarks: aNLI, Cosmos QA, HellaSWAG, Physical IQa, Social IQa, and WinoGrande. These benchmarks span both social and physical common sense.

allenai/rainbow

Connected Papers

TAPAS [Extended Capabilities]

This recent paper describes extending Google's TAPAS table parser to open-domain QA!

google-research/tapas

Connected Papers
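
TAPAS also ships in the transformers library, so here's a hedged sketch of plain (closed-domain) table QA; the open-domain extension in the paper adds retrieval on top of this, and the WTQ checkpoint name below is the commonly used one:

```python
# Sketch: table QA with TAPAS via transformers (TapasTokenizer /
# TapasForQuestionAnswering shipped in transformers 4.x).
import pandas as pd
from transformers import TapasTokenizer, TapasForQuestionAnswering

name = "google/tapas-base-finetuned-wtq"
tokenizer = TapasTokenizer.from_pretrained(name)
model = TapasForQuestionAnswering.from_pretrained(name)

# TAPAS expects the table as a DataFrame of strings.
table = pd.DataFrame({"City": ["Paris", "Berlin"], "Population": ["2.1M", "3.6M"]})
inputs = tokenizer(table=table, queries=["Which city has 3.6M people?"],
                   padding="max_length", return_tensors="pt")
outputs = model(**inputs)
# Decode predicted cell coordinates (and aggregation) from the logits.
coords, _ = tokenizer.convert_logits_to_predictions(
    inputs, outputs.logits.detach(), outputs.logits_aggregation.detach())
```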

MMT-Retrieval: Image Retrieval and more using Multimodal Transformers

Library for using pre-trained multi-modal transformers like OSCAR, UNITER/VILLA, or M3P (multilingual!) for image search and more.

UKPLab/MMT-Retrieval

Connected Papers

AdaptSum: Towards Low-Resource Domain Adaptation for Abstractive Summarization

The first benchmark to simulate the low-resource domain adaptation setting for abstractive summarization systems, combining existing datasets across six diverse domains: dialog, email, movie review, debate, social media, and science. For each domain, the number of training samples is reduced to a small quantity to create a low-resource scenario.


Connected Papers

CoCoA

CoCoA is a dialogue framework providing tools for data collection through a text-based chat interface and model development in PyTorch (largely based on OpenNMT).

stanfordnlp/cocoa

Connected Papers

Dataset of the Week: MasakhaNER

What is it?

A collection of 10 NER datasets for select African languages: Amharic, Hausa, Igbo, Kinyarwanda, Luganda, Luo, Naija Pidgin, Swahili, Wolof, and Yoruba. The repo also contains model training scripts.

Where is it?

masakhane-io/masakhane-ner
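
The data is also mirrored on the Hugging Face Hub; the "masakhaner" dataset ID and language-code configs below are our assumption of the mirror's layout, so verify before relying on it:

```python
# Sketch: load the Yoruba split of MasakhaNER via the datasets library
# (assumes a "masakhaner" mirror on the Hugging Face Hub with language
# codes such as "yor" as configs).
from datasets import load_dataset

yoruba = load_dataset("masakhaner", "yor")
print(yoruba["train"][0])  # CoNLL-style tokens + NER tags
```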

Every Sunday we do a weekly round-up of NLP news and code drops from researchers around the world.

For complete coverage, follow our Twitter: @Quantum_Stat

Quantum Stat

