
The NLP Cypher | 11.29.20

Last Updated on July 24, 2023 by Editorial Team

Author(s): Ricky Costa

Originally published on Towards AI.

found it on @vboykis’s twitter

NATURAL LANGUAGE PROCESSING (NLP) WEEKLY NEWSLETTER

The NLP Cypher | 11.29.20

Hand of God

Hey, welcome back, just returned from the holidays. And Happy Thanksgiving to those celebrating. It’s been a slow week given the holiday break, so the newsletter will be a bit shorter than usual, but that doesn’t mean we can’t discuss alien monoliths…

If you haven’t heard, in a national park in Utah, an unknown monolith was discovered. At the moment, no one knows where it came from.

👽

And it didn’t take long for someone to loot it 😭.

👽

Software Updates

TF

Release TensorFlow 2.4.0-rc3 · tensorflow/tensorflow

tf.distribute introduces experimental support for asynchronous training of Keras models via the…

github.com
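For a sense of what the new API looks like, here is a rough sketch of asynchronous parameter-server training in TF 2.4; the cluster configuration and the toy model are assumptions for illustration, not taken from the release notes.

```python
import tensorflow as tf

# Hedged sketch of TF 2.4's experimental asynchronous training API.
# Assumes a TF_CONFIG environment variable describing the chief, worker,
# and parameter-server (ps) tasks is already set on each machine.
cluster_resolver = tf.distribute.cluster_resolver.TFConfigClusterResolver()
strategy = tf.distribute.experimental.ParameterServerStrategy(cluster_resolver)

# The coordinator (run on the chief) dispatches training steps to the
# workers asynchronously.
coordinator = tf.distribute.experimental.coordinator.ClusterCoordinator(strategy)

with strategy.scope():
    # Variables created here are sharded across the parameter servers.
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(1),
    ])
```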

&

You can now parallelize models in the Transformers library!
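If this is the naive model-parallelism API added for GPT-2 and T5 around this time, usage looks roughly like the sketch below; the two visible GPUs and the device map are assumptions, not something stated in the announcement.

```python
from transformers import GPT2LMHeadModel, GPT2Tokenizer

# Hedged sketch: split gpt2-xl's 48 transformer blocks across two GPUs.
# The device map is hypothetical; adjust it to your hardware.
model = GPT2LMHeadModel.from_pretrained("gpt2-xl")
device_map = {0: list(range(0, 24)), 1: list(range(24, 48))}
model.parallelize(device_map)

tokenizer = GPT2Tokenizer.from_pretrained("gpt2-xl")
# Inputs go to the first device, where the embeddings live.
inputs = tokenizer("The monolith in Utah", return_tensors="pt").to("cuda:0")
outputs = model.generate(**inputs, max_length=30)
print(tokenizer.decode(outputs[0]))

model.deparallelize()  # move everything back to CPU when done
```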

Oh and by the way, earlier this week we added 50 new datasets to the Big Bad NLP Database: highlights include the IndoNLU benchmark and several datasets from EMNLP. Thank you to Ulrich Schäfer and Neea Rusch for contributing!

P.S. If you enjoy today’s article, don’t hesitate to give a 👏👏! Thank you!

GNN Book

Hey, want an awesome introduction to graph neural networks? Found this pre-publication version of William Hamilton’s “Graph Representation Learning” book.

It is very well written and illustrates this burgeoning topic in machine learning with elegant simplicity.

ToC

Part I: Node Embeddings

Part II: Graph Neural Networks

Part III: Generative Graph Models

Graph Representation Learning Book

The field of graph representation learning has grown at an incredible (and sometimes unwieldy) pace over the past seven…

www.cs.mcgill.ca

PDF Graph Representation Learning

Language Explanations

Can language help us to train models better?

“In the same way that we might take an input x, and extract features (e.g. the presence of certain words) to train a model, we can use explanations to provide additional features.”

In a new blog post from Stanford AI, they discuss why it’s so hard to teach models knowledge via language, along with possible solutions from an NLP perspective (i.e., their ExpBERT paper from earlier this year) and a computer vision perspective (i.e., their visual perceptions paper).

Learning from Language Explanations

Imagine you're a machine learning practitioner and you want to solve some classification problem, like classifying…

ai.stanford.edu

ExpBERT’s GitHub discussed in the blog:

MurtyShikhar/ExpBERT

This repository contains code, scripts, data and checkpoints for running experiments in the following paper: Shikhar…

github.com

DataLoader PyTorch

Interesting blog post from Paperspace discussing the DataLoader class in PyTorch. It covers this handy class whether you are working with preexisting datasets or your own custom dataset of numerical or text data (a minimal custom-dataset sketch follows the link below). ToC:

  • Working on Datasets
  • Data Loading in PyTorch
  • Looking at the MNIST Dataset in-Depth
  • Transforms and Rescaling the Data
  • Creating Custom Datasets in PyTorch

Blog:

Complete Guide to the DataLoader Class in PyTorch | Paperspace Blog

In this post, we'll deal with one of the most challenging problems in the fields of Machine Learning and Deep Learning…

blog.paperspace.com
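As a rough illustration of the pattern the guide walks through, here is a minimal custom text Dataset plus a DataLoader with a padding collate function; the class name, vocabulary, and toy data are made up for this sketch, not taken from the Paperspace post.

```python
import torch
from torch.utils.data import Dataset, DataLoader

class SentimentDataset(Dataset):
    """Tiny custom dataset: raw strings plus integer labels."""
    def __init__(self, texts, labels, vocab):
        self.texts, self.labels, self.vocab = texts, labels, vocab

    def __len__(self):
        return len(self.texts)

    def __getitem__(self, idx):
        # Encode tokens as vocabulary ids; unknown words map to 0.
        ids = [self.vocab.get(tok, 0) for tok in self.texts[idx].split()]
        return torch.tensor(ids), torch.tensor(self.labels[idx])

def collate(batch):
    # Pad variable-length sequences so they stack into one tensor.
    seqs, labels = zip(*batch)
    padded = torch.nn.utils.rnn.pad_sequence(list(seqs), batch_first=True)
    return padded, torch.stack(labels)

texts = ["a great movie", "not worth watching"]
vocab = {"a": 1, "great": 2, "movie": 3, "not": 4, "worth": 5, "watching": 6}
loader = DataLoader(SentimentDataset(texts, [1, 0], vocab),
                    batch_size=2, shuffle=True, collate_fn=collate)

for batch_ids, batch_labels in loader:
    print(batch_ids.shape, batch_labels)
```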

Repo Cypher 👨‍💻

A collection of recently released repos that caught our 👁

Neural Acoustic

A library for modeling English speech data with varied accents using Transformers.

Bartelds/neural-acoustic-distance

Code associated with the paper: Neural Representations for Modeling Variation in English Speech. git clone…

github.com

The Speech Accent Archive

RELVM

Repo used for training a latent variable generative model on pairs of entities and contexts (i.e. sentences) in which the entities occur. Their model can be used to perform both mention-level and pair-level classification.

BenevolentAI/RELVM

This repository contains the code accompanying the paper "Learning Informative Representations of Biomedical Relations…

github.com

Paper

GLGE Benchmark

A new natural language generation (NLG) benchmark composed of 8 language generation tasks, including Abstractive Text Summarization (CNN/DailyMail, Gigaword, XSUM, MSNews), Answer-aware Question Generation (SQuAD 1.1, MSQG), Conversational Question Answering (CoQA), and Personalizing Dialogue (Personachat).

microsoft/glge

This repository contains information about the general language generation evaluation benchmark GLGE, which is composed…

github.com

In addition, Microsoft highlights a new pre-trained language model called ProphetNet, used in sequence-to-sequence learning with a novel self-supervised objective called future n-gram prediction.

microsoft/ProphetNet

This repo provides the code for reproducing the experiments in ProphetNet: Predicting Future N-gram for…

github.com
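Beyond the research repo, ProphetNet is also available through the Transformers library; here is a hedged sketch of generating a summary with it, assuming the CNN/DailyMail fine-tuned checkpoint is published under the name used below.

```python
from transformers import ProphetNetForConditionalGeneration, ProphetNetTokenizer

# Assumed checkpoint name; swap in whichever ProphetNet checkpoint you use.
ckpt = "microsoft/prophetnet-large-uncased-cnndm"
tokenizer = ProphetNetTokenizer.from_pretrained(ckpt)
model = ProphetNetForConditionalGeneration.from_pretrained(ckpt)

article = "Replace this with the article you want to summarize."
inputs = tokenizer(article, return_tensors="pt", truncation=True, max_length=512)
summary_ids = model.generate(inputs.input_ids, num_beams=4,
                             max_length=60, early_stopping=True)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```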

OpenTQA

OpenTQA is an open framework for the textbook question answering task. Textbook Question Answering (TQA) requires answering a diagram or non-diagram question given a large multi-modal context consisting of abundant essays and diagrams.

keep-smile-001/opentqa

OPENTQA is an open framework for textbook question answering.

github.com

Dataset of the Week: Question Answering for Artificial Intelligence (QuAIL)

What is it?

QuAIL contains 15K multiple-choice questions in texts 300–350 tokens long across 4 domains (news, user stories, fiction, blogs).

Sample

Where is it?

text-machine-lab/quail

This repository contains the main and challenge data for QuAIL reading comprehension dataset. QuAIL contains 15K…

github.com
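A quick way to poke at the data, assuming QuAIL is also mirrored on the Hugging Face Hub under the "quail" identifier (the field names printed below are a best guess and may differ):

```python
from datasets import load_dataset

# Assumes the dataset is available on the Hub as "quail".
quail = load_dataset("quail")
example = quail["train"][0]

print(example["context"][:200])   # the 300-350 token passage
print(example["question"])        # the multiple-choice question
print(example["answers"])         # the candidate answers
```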


Every Sunday we do a weekly round-up of NLP news and code drops from researchers around the world.

For complete coverage, follow our Twitter: @Quantum_Stat

Quantum Stat

Join over 80,000 data leaders and subscribers to the AI newsletter and keep up to date with the latest developments in AI, from research to projects and ideas. If you are building an AI startup, an AI-related product, or a service, we invite you to consider becoming a sponsor.

Published via Towards AI
