This AI newsletter is all you need #58
Last Updated on August 2, 2023 by Editorial Team
Author(s): Towards AI Editorial Team
Originally published on Towards AI.
What happened this week in AI by Louie
This week we were excited to see two new developments in AI outside the realm of NLP. Meta AI has released a demo of its Open Catalyst simulator, an application that uses AI to predict catalyst material reactivity with remarkable speed, outperforming existing methods by nearly 1,000 times. The team believes this technology could significantly accelerate the discovery of cost-effective materials. While we think materials science is a more complex subject for machine learning to tackle than proteins (and AlphaFold), we see a lot of potential for similar models to help researchers screen for potentially interesting materials in this space. In the future, we also expect to see models released to aid the search for superconductors, a topic in the spotlight this week thanks to a potential breakthrough!
In another exciting announcement, Google has introduced the Robotics Transformer 2 (RT-2) model, which it describes as the world's first vision-language-action (VLA) model. The network is trained on text and images from the web, allowing it to directly produce robotic actions as output with only a small amount of robot training data. The model exhibits a remarkable ability to comprehend complex commands, such as "throw away the trash." It achieves this through complex reasoning: for instance, recognizing that a banana peel becomes trash once the banana is eaten and grasping the concept of discarding it, even without explicit training on that specific task. In trials, RT-2 demonstrated a significant improvement on unseen scenarios, achieving twice the effectiveness of its previous version.
Amid the constant stream of exciting news in NLP, it's refreshing to witness two equally thrilling advancements in other AI applications this week, specifically in robotics and materials science. We are pleased to see the recent breakthroughs and wave of investments in NLP begin to accelerate progress in other areas.
– Louie Peters – Towards AI Co-founder and CEO
Hottest News
1. Stability AI Releases Stable Beluga 1 and Stable Beluga 2
Stability AI and its CarperAI lab have released Stable Beluga 1 and its successor, Stable Beluga 2 (formerly codenamed FreeWilly). Stable Beluga 1 leverages the original LLaMA 65B foundation model and has been fine-tuned using supervised fine-tuning (SFT) techniques. Similarly, Stable Beluga 2 leverages the LLaMA 2 70B foundation model. Both models are publicly available under a non-commercial license.
2. Stability AI Announces Stable Diffusion XL 1.0
Stability AI has announced the release of Stable Diffusion XL (SDXL) 1.0, the latest and most advanced version of its flagship text-to-image suite of models. SDXL is an open-access image model whose ensemble pipeline totals a staggering 6.6 billion parameters, and it demonstrates significant improvements in color, contrast, lighting, and shadow.
3. Stack Overflow announces OverflowAI
Stack Overflow is integrating generative AI into its platform with OverflowAI. This includes semantic search and personalized results using a vector database. Additionally, they are enhancing search capabilities across different platforms and introducing an enterprise knowledge ingestion feature for Stack Overflow for Teams.
4. Opentensor and Cerebras Announce the Bittensor Language Model (BTLM)
The Opentensor Foundation and Cerebras have announced the Bittensor Language Model (BTLM), a new state-of-the-art 3-billion-parameter language model that achieves breakthrough accuracy across a dozen AI benchmarks. It operates efficiently on mobile and edge devices with limited RAM, reducing the need for centralized cloud infrastructure.
5. OpenAI Scuttles AI-Written Text Detector Over "Low Rate of Accuracy"
OpenAI has decided to retire its AI classifier due to its low accuracy rate in detecting AI-generated text. The rapid progress of large language models has made it challenging to reliably identify features or patterns that distinguish AI-generated text from human writing.
Five 5-minute reads/videos to keep you learning
1. The History of Open-Source LLMs (Part Two)
This article is the second part of a three-part series on the history of open-source LLMs. It covers the early days of open-source LLMs, the current revolution in building better base models, and current and future trends in open-source LLMs.
2. Building Generative AI Applications with Gradio
Hugging Face and DeepLearning.ai have launched a new short course on building generative AI applications with Gradio. The course focuses on creating user-friendly apps using open-source language models, with projects ranging from text summarization to image analysis and image generation.
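To give a flavor of what such an app looks like, here is a minimal sketch of a Gradio summarization demo (our own illustration, not course material). It assumes the gradio and transformers packages are installed; the checkpoint name is just an example, and any summarization model on the Hugging Face Hub would work. Running the script starts a local web UI where you can paste text and get a summary back.

```python
# Minimal sketch of a Gradio summarization app, in the spirit of the course.
import gradio as gr
from transformers import pipeline

# The checkpoint is illustrative; swap in any summarization model you like.
summarizer = pipeline("summarization", model="sshleifer/distilbart-cnn-12-6")

def summarize(text: str) -> str:
    # Return the generated summary for the input text.
    return summarizer(text, max_length=130, min_length=30)[0]["summary_text"]

demo = gr.Interface(
    fn=summarize,
    inputs=gr.Textbox(lines=10, label="Text to summarize"),
    outputs=gr.Textbox(label="Summary"),
    title="Text Summarizer",
)

if __name__ == "__main__":
    demo.launch()
```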
3. Build an AI Chart Generator That Adapts to Any Dataset Type, in Only 50 Lines
This tutorial walks through developing an automated chart generator. Following it, developers can build an AI chart generator using GPT-3.5 or GPT-4 with LangChain in just 50 lines of code.
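As a rough sketch of the general approach (not the article's exact code), the snippet below asks an OpenAI chat model, via a mid-2023 LangChain release, to write matplotlib plotting code for a given DataFrame and then executes it. The CSV path is a hypothetical placeholder, an OPENAI_API_KEY is assumed to be set in the environment, and model-generated code should be sandboxed in real use.

```python
# Rough sketch of the chart-generator idea (not the article's exact code).
# Assumes a mid-2023 LangChain release, pandas/matplotlib installed, and
# OPENAI_API_KEY set in the environment.
import pandas as pd
from langchain.chat_models import ChatOpenAI
from langchain.prompts import ChatPromptTemplate
from langchain.chains import LLMChain

PROMPT = ChatPromptTemplate.from_template(
    "You are a data-visualization assistant. Given a pandas DataFrame named "
    "`df` with columns {columns} and these sample rows:\n{sample}\n"
    "Write matplotlib code that produces the most informative chart for this "
    "data. Return only runnable Python code with no explanations."
)

def generate_chart(csv_path: str) -> None:
    df = pd.read_csv(csv_path)  # load any dataset
    chain = LLMChain(llm=ChatOpenAI(model_name="gpt-3.5-turbo", temperature=0),
                     prompt=PROMPT)
    code = chain.run(columns=list(df.columns), sample=df.head().to_string())
    # Run the generated plotting code against df (illustration only --
    # execute untrusted code in a sandbox in real applications).
    exec(code, {"df": df})

generate_chart("sales.csv")  # hypothetical CSV path
```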
4. Building a Web Research Agent
This article explores the development of web research agents. The approach uses an LLM to generate search queries, execute searches, scrape pages, index documents, and find the most relevant results for each query.
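The skeleton below sketches that loop in Python (our own illustration, not the article's code). The run_search and scrape functions are hypothetical stubs to be backed by a real search API and scraper, and the OpenAI call assumes the 2023-era openai package (pre-1.0) with an OPENAI_API_KEY in the environment.

```python
# Skeleton of a web-research-agent loop: generate queries, search, scrape,
# then (in the full approach) index and retrieve the most relevant passages.
import openai

def generate_queries(question: str, n: int = 3) -> list[str]:
    # Ask the LLM to propose n web-search queries for the question.
    resp = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user",
                   "content": f"Write {n} web search queries, one per line, "
                              f"to research: {question}"}],
    )
    lines = resp.choices[0].message.content.splitlines()
    return [q.strip("- ").strip() for q in lines if q.strip()]

def run_search(query: str) -> list[str]:
    # Hypothetical stub: return result URLs from your search API of choice.
    return []

def scrape(url: str) -> str:
    # Hypothetical stub: fetch the page and return its visible text.
    return ""

def research(question: str) -> list[str]:
    documents = []
    for query in generate_queries(question):
        for url in run_search(query):
            documents.append(scrape(url))
    # In the full approach, these documents would be chunked, embedded, and
    # stored in a vector index, then queried for the most relevant results.
    return documents
```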
5. Creating an Automated Meeting Minutes Generator With Whisper and GPT-4
This guide explores the development of a meeting minutes generation tool that leverages Whisper and GPT-4 to efficiently summarize discussions, extract important details, and analyze sentiments.
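A minimal sketch of that pipeline might look like the following (our simplified version, not the guide's exact code). It assumes the 2023-era openai package (pre-1.0), an OPENAI_API_KEY in the environment, and a placeholder audio file path.

```python
# Minimal Whisper + GPT-4 meeting-minutes sketch (simplified illustration).
import openai

def transcribe(audio_path: str) -> str:
    # Send the audio file to the Whisper API and return the raw transcript.
    with open(audio_path, "rb") as audio_file:
        return openai.Audio.transcribe("whisper-1", audio_file)["text"]

def summarize_minutes(transcript: str) -> str:
    # Ask GPT-4 for structured minutes: summary, decisions, actions, sentiment.
    resp = openai.ChatCompletion.create(
        model="gpt-4",
        messages=[
            {"role": "system",
             "content": "You write meeting minutes: a short summary, key "
                        "decisions, action items, and overall sentiment."},
            {"role": "user", "content": transcript},
        ],
    )
    return resp.choices[0].message.content

if __name__ == "__main__":
    print(summarize_minutes(transcribe("meeting.mp3")))  # placeholder path
```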
Papers & Repositories
1. llama2.c: Llama 2 Inference in Pure C
Andrej Karpathy has released an educational implementation of LLaMA 2 inference in pure C. The project lets you train a LLaMA 2 architecture model in PyTorch and then load the weights and run efficient inference with a single, simple C file.
2. Universal and Transferable Attacks on Aligned Language Models
A recent study explores the automatic construction of adversarial attacks on both open-source and closed-source language models, rendering them susceptible to harmful commands. These attacks also transfer to widely used chatbots, raising concerns about effectively patching these vulnerabilities.
3. FLASK: Fine-grained Language Model Evaluation based on Alignment Skill Sets
This paper introduces FLASK, an evaluation protocol designed specifically for assessing LLM performance. It breaks evaluations down into 12 different skill sets, allowing for a detailed analysis of a model's performance on specific skills such as logical robustness, factuality, and comprehension.
4. A Real-World WebAgent with Planning, Long Context Understanding, and Program Synthesis
WebAgent, an LLM-driven agent, utilizes Flan-U-PaLM and HTML-T5 to enhance autonomous web navigation and task completion on real websites. By breaking down instructions, summarizing HTML documents, and generating Python programs, it achieves a 50% increase in success rates compared to previous models.
5. WebArena: A Realistic Web Environment for Building Autonomous Agents
WebArena is a realistic web environment that enables autonomous agents to develop their skills in tasks related to e-commerce, social forums, software development, and content management. It provides benchmarks for evaluating task completion and highlights the need for improved agents, as even advanced models like GPT-4 have a success rate of only 10.59%.
Enjoy these papers and news summaries? Get a daily recap in your inbox!
The Learn AI Together Community section!
Ai4 2023: The Industry's Leading AI Conference
Just a reminder to join us at Ai4 2023, the industry's leading AI conference, taking place August 7–9 at the MGM Grand in Las Vegas. Read more about how the growth of Ai4 mirrors the industry's adoption of AI, and join 2,200+ AI leaders, 240 speakers, and 100 cutting-edge AI exhibits. Apply for a complimentary pass.
Date: August 7–9, 2023 (MGM Grand, Las Vegas)
Meme of the week!
Meme shared by archiesnake
Featured Community post from the Discord
Operand has shared Agency, its open-source Python library for agent integration, designed to complement existing libraries like the HF Agent API and LangChain. The library lets you connect agents with software systems and human users by defining actions, callbacks, and access policies, making it easy to integrate, monitor, and control your agents. Agency handles the communication details and allows for the discovery and invocation of actions across parties. Check it out on GitHub and support a fellow community member. Share your feedback and how you use it in the thread here.
AI poll of the week!
Join the discussion on Discord.
TAI Curated section
Article of the week
LangChain 101: Part 1. Building Simple Q&A App by Ivan Reznikov
LangChain is a powerful framework for creating applications that generate text, answer questions, translate languages, and handle many other text-related tasks. This article kicks off the LangChain 101 course, in which the author shares concepts, practices, and hands-on experience by showing you how to build your own LangChain applications.
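For readers who want a taste before diving into the article, here is a minimal retrieval-based Q&A sketch in the same spirit (not the author's code). It assumes a mid-2023 LangChain release, the faiss-cpu package, and an OPENAI_API_KEY in the environment; the example documents are made up.

```python
# Minimal retrieval Q&A sketch with a mid-2023 LangChain release.
from langchain.chat_models import ChatOpenAI
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import FAISS
from langchain.chains import RetrievalQA

docs = [
    "LangChain is a framework for building LLM-powered applications.",
    "Chains combine prompts, models, and other components into one pipeline.",
]

# Embed the documents and store them in an in-memory FAISS index.
vectorstore = FAISS.from_texts(docs, OpenAIEmbeddings())

# Build a retrieval Q&A chain: retrieve relevant documents, then answer.
qa = RetrievalQA.from_chain_type(
    llm=ChatOpenAI(model_name="gpt-3.5-turbo", temperature=0),
    retriever=vectorstore.as_retriever(),
)

print(qa.run("What is LangChain used for?"))
```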
Our must-read articles
Modern NLP: A Detailed Overview. Part 3: BERT by Abhijit Roy
Forget 32K of GPT4: LongNet Has a Billion Token Context by Dr. Mandar Karhade, MD. Ph.D.
Graph Attention Networks Paper Explained With Illustration and PyTorch Implementation by Ebrahim Pichka
If you are interested in publishing with Towards AI, check our guidelines and sign up. We will publish your work to our network if it meets our editorial policies and standards.
Job offers
Software Engineer III (Drupal) @Clarity Innovations, Inc. (Remote)
Distributed Systems Software Engineer @INSHUR (Brighton, UK)
Intern – Software Engineering Interns – ACI 01 @Activate Interactive Pte Ltd (Singapore)
Machine Learning Engineer (Risk) @SHIELD (Singapore)
Machine Learning Engineer @Robotec.ai sp. z o.o. (Warsaw, Poland/ Freelancer)
Machine Learning Engineer, Fast Optimized Inference @Hugging Face (US Remote)
Interested in sharing a job opportunity here? Contact [email protected].
If you are preparing for your next machine learning interview, don't hesitate to check out our leading interview preparation website, Confetti!
Join thousands of data leaders on the AI newsletter. Join over 80,000 subscribers and keep up to date with the latest developments in AI, from research to projects and ideas. If you are building an AI startup, an AI-related product, or a service, we invite you to consider becoming a sponsor.
Published via Towards AI