
This AI newsletter is all you need (#36)

Last Updated on March 4, 2023 by Editorial Team

Author(s): Towards AI Editorial Team

Originally published on Towards AI.

What happened this week in AI by Louis

This week we were pleased to note an acceleration in progress toward open-source alternatives to ChatGPT, as well as signs of increased flexibility in access to these models. While many major tech companies are building their own alternatives to ChatGPT, we are particularly excited to see open-source efforts that can make next-generation LLMs more accessible, flexible, and affordable for the machine learning community. We are seeing progress in the release of more open-source foundation LLMs, the build-out of open-source datasets and workflows, and new innovations that reduce the cost of fine-tuning these models with human feedback.

The strategic partnership between Hugging Face and Amazon Web Services (AWS) looks like a positive step in this direction and should increase the availability of open-source datasets and models hosted on Hugging Face. We were also pleased to see the release of Meta’s LLaMA, a collection of four foundation models ranging from 7B to 65B parameters. Another example is LAION’s crowdsourced annotation effort for its OpenAssistant ChatGPT replication project. CarperAI has also developed open-source RLHF workflows, ranging from human annotation with CHEESE to RLHF training with trlX. We are also seeing new approaches that reduce the human feedback requirements, and hence the barriers to entry, for developing a ChatGPT-like product, such as Anthropic’s constitutional AI approach (behind its Claude model), which requires minimal human labels.

We hope these open-source models and the competition they create can also put pressure on OpenAI to keep costs affordable and to increase the flexibility of interaction with its models. We see signs of this in a leaked glimpse of the new OpenAI Foundry product, a platform for running OpenAI models on dedicated capacity. This platform will also allow more robust fine-tuning options for its latest models. Also of note in this leak is that future models (GPT-4?) look set to offer a 32,000-token maximum context length (up from 4,000 today), which will likely unlock many new capabilities.

Hottest News

1. OpenAI Foundry will let customers buy dedicated compute to run GPT-3 and its other models

OpenAI is launching a new developer platform that lets customers run the company’s newer machine learning models, such as GPT-3.5, on dedicated capacity. OpenAI describes the forthcoming offering, called Foundry, as “designed for cutting-edge customers running larger workloads.”

2. What ChatGPT And Generative AI Mean For Your Business?

Generative AI could potentially become a powerful tool for businesses, providing a new basis for competitive advantage. Enterprises should consider experimenting with generative AI by identifying existing processes that can be enhanced with this technology.

3. What are ‘robot rights,’ and should AI chatbots have them?

This article features an interview with Professor David Gunkel, discussing the issue of what rights robots, including AI chatbots, should have. The discussion centers on the concept of robot rights, including the background and articulation of rights for AI.

4. Hugging Face and AWS partner to make AI more accessible

The strategic partnership between Hugging Face and Amazon Web Services (AWS) is expected to make AI open and accessible to everyone. Together, the two companies aim to accelerate the availability of next-generation machine learning models by making them more accessible, efficient, and affordable for the machine learning community.

5. How AI Can Help Create and Optimize Drugs To Treat Opioid Addiction

The use of artificial intelligence for drug discovery has shown promise in the development of potential treatments for opioid addiction. Preclinical studies suggest that blocking kappa-opioid receptors may be an effective approach to treating opioid dependence. AI can be used to design and optimize new drug candidates that block kappa-opioid receptor activity, making the drug discovery process more cost-effective and efficient.

Five 5-minute reads/videos to keep you learning

1. Text-to-Image Diffusion Models: A Guide for Non-Technical Readers

The guide provides a simple explanation of text-to-image models and how they use diffusion to create images from natural language. It also introduces various tools for controlling and improving image generation, including ControlNet, ControlNet Pose, and ControlNet LoRA.

2. The technology behind GitHub’s new code search

This post provides a high-level explanation of the inner workings of GitHub’s new code search and offers a glimpse into the system architecture and technical underpinnings of the product. It also discusses how the search functionality allows users to find, read, and navigate code more efficiently.

3. MIT course on Introduction to Data-Centric AI

This is a practical course on Data-Centric AI, focusing on the impactful aspects of real-world ML applications. The class covers algorithms for finding and fixing common issues in ML data, as well as constructing better datasets, with a concentration on data used in supervised learning tasks such as classification.

4. Lessons learned while using ChatGPT in education

This guide shares an experience of using AI in education to complete tasks such as generating ideas, producing written material, creating apps, and generating images. The article details what worked, what had to change, and the overall process of using AI in education.

5. Writing Essays With AI: A Guide

This guide explores the use of AI as a creative tool for essay writing. It discusses how to incorporate AI into writing practices by leveraging it to organize thoughts, capture a voice, summarize complex ideas, assist with idea generation, and evaluate writing quality.

Papers & Repositories

1. LLaMA: A repository for Open and Efficient Foundation Language Models

A collection of foundation language models from Meta, ranging from 7B to 65B parameters.

2. Aligning Text-to-Image Models using Human Feedback

This paper proposes a fine-tuning method for aligning text-to-image models with human preferences: collect human feedback on model outputs, train a reward function that predicts that feedback, and then fine-tune the model by maximizing the reward-weighted likelihood.
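
To make the recipe concrete, here is a minimal PyTorch-style sketch of the reward-weighted likelihood objective; the `model.negative_log_likelihood` and `reward_model` interfaces are hypothetical stand-ins for illustration, not the paper’s actual code.

```python
import torch

def reward_weighted_loss(model, reward_model, images, prompts):
    """Reward-weighted likelihood objective for one batch (simplified sketch).

    `model.negative_log_likelihood` and `reward_model` are hypothetical interfaces
    standing in for a text-to-image model and a learned reward predictor.
    """
    # Score each (image, prompt) pair with the reward function trained on human feedback.
    with torch.no_grad():
        rewards = reward_model(images, prompts)            # shape: (batch,)

    # Per-example negative log-likelihood of each image given its prompt.
    nll = model.negative_log_likelihood(images, prompts)   # shape: (batch,)

    # Maximizing reward-weighted likelihood is the same as minimizing reward-weighted NLL.
    return (rewards * nll).mean()
```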

3. The Wisdom of Hindsight Makes Language Models Better Instruction Followers

This paper considers an alternative to RLHF: convert feedback into instructions by relabeling the original instruction in hindsight, then train the model for better alignment in a purely supervised manner.
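
As a rough illustration of the hindsight relabeling idea (a simplified sketch, not the paper’s exact procedure), the snippet below rewrites the instruction so that the output the model actually produced becomes a valid supervised training target.

```python
def hindsight_relabel(instruction, output, is_correct):
    """Turn (instruction, model output, feedback) into a supervised training pair.

    A simplified sketch: the relabeling template is a hypothetical example,
    not the paper's exact prompt format.
    """
    if is_correct:
        new_instruction = instruction
    else:
        # Relabel in hindsight: rewrite the instruction so that the output the
        # model actually produced would have been a correct response to it.
        new_instruction = f"Give a wrong answer to the following task: {instruction}"
    return {"instruction": new_instruction, "target": output}


# The relabeled pairs are then used for ordinary supervised fine-tuning,
# with no separate reward model or RL optimization loop as in RLHF.
pair = hindsight_relabel("What is 7 * 8?", "54", is_correct=False)
```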

4. Zero-Shot Information Extraction via Chatting with ChatGPT

This work aims to investigate whether strong information extraction (IE) models can be created by directly prompting large language models (LLMs). Specifically, it transforms the zero-shot IE task into a multi-turn question-answering problem using a two-stage framework called ChatIE, which leverages ChatGPT for entity-relation triple extraction, named entity recognition, and event extraction.
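
The sketch below illustrates the two-stage, multi-turn pattern for relation extraction under our own assumptions; the `chat` helper and the prompt wording are placeholders, not ChatIE’s actual templates.

```python
def chat(messages):
    """Placeholder for a ChatGPT-style chat API call; plug in your own client here."""
    raise NotImplementedError

def two_stage_relation_extraction(sentence, relation_types):
    # Stage 1: ask which relation types occur in the sentence at all.
    history = [{
        "role": "user",
        "content": f"Sentence: {sentence}\nWhich of these relation types appear? {relation_types}",
    }]
    detected = chat(history)
    history.append({"role": "assistant", "content": detected})

    # Stage 2: for each type, ask a follow-up turn for the concrete (head, relation, tail) triples.
    triples = []
    for rel in relation_types:  # in practice, only the types detected in stage 1 are queried
        history.append({
            "role": "user",
            "content": f"List every (head, {rel}, tail) triple in the sentence.",
        })
        answer = chat(history)
        history.append({"role": "assistant", "content": answer})
        triples.append(answer)
    return triples
```

The same two-turn pattern carries over to named entity recognition and event extraction by swapping the type inventory and the stage-2 question.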

5. How Good Are GPT Models at Machine Translation? A Comprehensive Evaluation

The paper offers a thorough evaluation of GPT models for machine translation. It covers various aspects, including the quality of different GPT models compared to state-of-the-art research and commercial systems, the impact of prompting strategies, the resilience to domain shifts, and document-level translation.

Enjoy these papers and news summaries? Get a daily recap in your inbox!

The Learn AI Together Community section!

Upcoming Community Events

The Learn AI Together Discord community hosts weekly AI seminars to help the community learn from industry experts, ask questions, and get a deeper insight into the latest research in AI. Join us for free, interactive video sessions hosted live on Discord weekly by attending our upcoming events.

1. Multimedia Processing Networks (#8)

This week’s session in the (free) nine-part Neural Networks Architectures series led by Pablo Duboue (DrDub) focuses on Multimedia Processing Networks. During this session, he will explore visual question answering (VQA), neural module networks (NMNs), hierarchical co-attention, DALL-E, Imagen, and Stable Diffusion. Find the link to the seminar here or add it to your calendar here.

Date & Time: 28th February, 11 pm EST

Add our Google calendar to see all our free AI events!

Meme of the week!

Meme shared by friedliver#0614

Featured Community post from the Discord

Carl#1372 has launched a text-to-image AI art generator that enables users to combine different styles or create customized ones to produce unique artwork. It is a free tool designed for novice users to experiment with art. You can check it out here and support a fellow community member. Join the conversation and share your feedback in the thread here.

AI poll of the week!

Join the discussion on Discord.

TAI Curated section

Article of the week

BioGPT: The ChatGPT of Life Sciences by Dr. Mandar Karhade, MD, PhD

The Microsoft team created an exceptional language model called BioGPT, designed to answer medical questions. The researchers claim that BioGPT is as good as a human expert at answering these queries. The article delves into how the Microsoft team trained BioGPT to reach this level of performance.

Our must-read articles

easy-explain: Explainable AI for images by Stavros Theocharis

Unlocking the Power of Recurrent Neural Networks: A Beginner’s Guide by Gaurav Nair

If you want to publish with Towards AI, check our guidelines and sign up. We will publish your work to our network if it meets our editorial policies and standards.

Job offers

Artificial Intelligence Engineer @Plain Concepts (Remote)

Senior Data Analyst — User Trust @BukuWarung (Remote)

Senior Data Scientist @Monzo (Remote)

AI/ML Wireless Communications Engineer @Anduril Industries (Costa Mesa, CA, USA)

Interested in sharing a job opportunity here? Contact [email protected].

If you are preparing for your next machine learning interview, don’t hesitate to check out our leading interview preparation website, Confetti!

http://ws.towardsai.net/confetti-ai



Join thousands of data leaders and over 80,000 subscribers on the AI newsletter to keep up to date with the latest developments in AI, from research to projects and ideas. If you are building an AI startup, an AI-related product, or a service, we invite you to consider becoming a sponsor.

Published via Towards AI
