
This AI newsletter is all you need #55

Last Updated on July 15, 2023 by Editorial Team

Author(s): Towards AI Editorial Team

Originally published on Towards AI.

What happened this week in AI by Louie

This week, we were excited to finally get to test OpenAI’s Code Interpreter, a new capability of GPT-4 within ChatGPT. OpenAI was also active with other announcements, including revealing its plan to establish a team dedicated to achieving superalignment within the next four years.

After an initial alpha release, the Code Interpreter feature, which includes coding and data visualization capabilities, is now available in beta mode for all ChatGPT Plus users. The Code Interpreter empowers ChatGPT with a range of functions, including data analysis, chart creation, file upload and editing, mathematical operations, and code execution.

Using the Code Interpreter is relatively straightforward for most use cases, especially if you approach it as you would a data analyst. Data, including compressed files like ZIP archives, can be uploaded by clicking the plus button. The initial prompt can be minimal, as the model excels at inferring the meaning and structure of data from context alone. Once the data is loaded, the model automatically handles tasks such as merging and sophisticated cleaning. Ethan Mollick has documented many interesting use cases of Code Interpreter on his Twitter and in this blog. Particularly powerful is the model’s ability to recognize mistakes (via failed code execution) and iterate to correct them.

The feature lends itself to many unique applications, and people have documented uses including data analysis and visualization, trend identification, topic analysis, engagement pattern examination, SEO optimization, KPI analysis, video creation, and even building machine learning datasets and models. The possibilities are extensive and expanding as access to the Code Interpreter grows. There have been some recent signs of waning interest in ChatGPT (reports of declining website visits, albeit while GPT-3.5-Turbo and GPT-4 are rolled out more widely elsewhere via API), so this new feature comes at a good time for OpenAI.

We think Code Interpreter unlocks many more capabilities from LLMs and can be incredibly useful, massively reducing the cost of and barriers to entry for basic data analysis. However, it still needs human oversight and human imagination to ask the right questions and extract the most insight. We expect much more progress in this direction in the months and years ahead as LLMs are given more powerful tools to work with.

In other AI news, Kinnu, the generative AI-powered education startup that we introduced back in October, has successfully raised a $6.5 million funding round. Kinnu is primarily dedicated to adult enthusiast learners and utilizes AI to optimize content for each individual learner. “We always found it peculiar that most online education offerings simply scaled up the worst aspects of traditional schooling,” states Christopher Kahler, co-founder and CEO of Kinnu. “We believe there is a significant opportunity for AI-powered learning that focuses on accelerating the pace of human learning itself.” We are thrilled with Kinnu’s progress and concur on the potential for AI to contribute to better education.

– Louie Peters, Towards AI Co-founder and CEO

Hottest News

1. Introducing Superalignment

OpenAI has introduced the concept of Superalignment, emphasizing the necessity for scientific and technical breakthroughs to ensure that highly intelligent AI systems align with human intentions. The organization highlights the significance of establishing innovative governance institutions and exploring novel approaches to accomplish this alignment.

2. Miner Pivots 38,000 GPUs From Crypto to AI

Cryptomining firm Hive Blockchain is shifting its focus from Ethereum mining to AI workloads. With 38,000 GPUs at its disposal, it intends to generate revenue while still using some GPU power for crypto mining. However, the transition presents challenges, as older Ethereum-mining GPUs have limited value in the AI computing market.

3. AWS Launches $100M Generative AI Innovation Center

AWS has announced a substantial investment in the advancement of generative AI. With a commitment of $100 million, the newly established AWS Generative AI Innovation Center aims to assist customers and partners worldwide in unlocking the potential of generative AI. The Innovation Center is already collaborating with companies such as Highspot, Lonely Planet, Ryanair, and Twilio on generative AI solutions.

4. Google’s medical AI chatbot is already being tested in hospitals

Google’s Med-PaLM 2, an AI tool developed to answer questions about medical information, has undergone testing at the Mayo Clinic research hospital. As a variant of the PaLM 2 language model, Med-PaLM 2 has shown promising results in reasoning, comprehension, and delivering consensus-supported answers, although some accuracy issues persist.

5. Alibaba launches A.I. tool to generate images from text

Chinese technology giant Alibaba has launched Tongyi Wanxiang, an artificial intelligence tool capable of generating images from prompts. The tool allows users to input prompts in both Mandarin and English, and it generates images in various styles, including 2D illustrations, sketches, and 3D cartoons.

Five 5-minute reads/videos to keep you learning

1. Intriguing Properties of Quantization at Scale

Recent research reveals that the quality of large language models’ post-training quantization (PTQ) is strongly influenced by pre-training hyperparameters. Optimization choices such as weight decay, gradient clipping, and data type (float16 versus bfloat16 in particular) have a significant impact on PTQ performance, underscoring how early training decisions shape the robustness of quantized models.
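
To see why these choices matter, here is a minimal sketch of symmetric int8 post-training quantization (a simplified stand-in for the methods studied in the paper, not the authors’ code): a handful of outlier weights, which pre-training choices like weight decay and gradient clipping help suppress, can inflate the quantization scale and crush the resolution left for ordinary weights.

```python
import numpy as np

def quantize_int8(w: np.ndarray):
    """Symmetric per-tensor post-training quantization to int8."""
    scale = np.abs(w).max() / 127.0          # largest weight sets the scale
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

rng = np.random.default_rng(0)
w_typical = rng.normal(scale=0.02, size=4096).astype(np.float32)
w_outlier = w_typical.copy()
w_outlier[0] = 3.0  # a single large outlier weight

for name, w in [("typical", w_typical), ("with outlier", w_outlier)]:
    q, scale = quantize_int8(w)
    err = np.abs(w - q.astype(np.float32) * scale).mean()
    # The outlier inflates the scale, so all other weights lose precision.
    print(f"{name}: scale={scale:.5f}, mean abs error={err:.6f}")
```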

2. Best GPUs for Machine Learning for Your Next Project

This article highlights the increasing use of GPUs in machine learning and provides a guide on choosing the best GPUs for AI applications. It mentions key factors to consider, such as compatibility and memory capacity, and identifies top GPU options from NVIDIA, including the Titan RTX and Tesla V100. It also suggests cost-effective alternatives like the EVGA GeForce GTX 1080 and AMD Radeon GPUs.

3. AI Weights Are Not Open “Source”

The article delves into the issue of AI model weights and their availability as open source. It argues that although the source code of AI models may be open, the weights, which encode the actual learned knowledge, are generally not openly shared, for reasons such as intellectual property concerns, privacy, and commercial interests.

4. Artificial intelligence glossary: 60+ terms to know

AI has been growing exponentially, and there are different levels of awareness surrounding it. This glossary aims to serve as a resource for those who are just being introduced to AI and for those looking for a reference or a vocabulary refresher.

5. Getting started with Code Interpreter in ChatGPT

Ethan Mollick has documented numerous interesting use cases of Code Interpreter on his Twitter. In this article, he also highlights its features, the process of using it, and more.

Papers & Repositories

1. A Survey on Evaluation of Large Language Models

This article provides a comprehensive review of evaluation methods for large language models. It covers what to evaluate (dimensions such as reasoning, ethics, and applications), where to evaluate (general and task-specific benchmarks), and how to evaluate (human versus automatic evaluation).

2. DreamDiffusion: Generating High-Quality Images from Brain EEG Signals

This paper introduces DreamDiffusion, a novel method for generating high-quality images directly from brain electroencephalogram (EEG) signals, without the need to translate thoughts into text. By utilizing pre-trained models and advanced signal modeling techniques, it overcomes challenges like limited information and noise.

3. LongNet: Scaling Transformers to 1,000,000,000 Tokens

This work introduces LongNet, a Transformer variant that can scale sequence length to more than 1 billion tokens without sacrificing performance on shorter sequences. Its core mechanism is dilated attention, which sparsifies the attention pattern so that long sequences can be processed efficiently. A key benefit of the technique is its compatibility with existing optimization approaches: it integrates seamlessly with methods already in use.
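
As a rough intuition, here is a toy single-head sketch of the idea (our illustration, not LongNet’s actual implementation): each segment of the sequence attends only to a strided subset of its own positions, so compute scales with the sparsified segment rather than the full sequence. The real model mixes several segment sizes and dilation rates so that every position is covered.

```python
import torch

def dilated_attention(q, k, v, segment_len=8, dilation=2):
    """Toy dilated attention: within each segment, only every
    `dilation`-th position attends and is attended to."""
    B, T, D = q.shape
    out = torch.zeros_like(q)
    for start in range(0, T, segment_len):
        idx = torch.arange(start, min(start + segment_len, T), dilation)
        qs, ks, vs = q[:, idx], k[:, idx], v[:, idx]
        att = torch.softmax(qs @ ks.transpose(-2, -1) / D ** 0.5, dim=-1)
        out[:, idx] = att @ vs
    # Positions skipped here are handled in LongNet by mixing
    # multiple segment/dilation configurations.
    return out

q = k = v = torch.randn(1, 32, 16)
print(dilated_attention(q, k, v).shape)  # torch.Size([1, 32, 16])
```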

4. The Curse of Recursion: Training on Generated Data Makes Models Forget

Researchers explore “model collapse,” a degenerative process in which the original content distribution disappears as models are trained on content generated by other models. The phenomenon affects LLMs, Variational Autoencoders, and Gaussian Mixture Models alike, underscoring the need to understand and preserve data from genuine human interactions to keep web-collected training data useful.
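
A toy simulation (ours, not the paper’s) makes the effect concrete: repeatedly fitting a simple Gaussian model to samples drawn from the previous generation’s fit compounds estimation error, and the tails of the original distribution gradually vanish.

```python
import numpy as np

rng = np.random.default_rng(42)
data = rng.normal(loc=0.0, scale=1.0, size=100_000)  # "human" data

for gen in range(6):
    mu, sigma = data.mean(), data.std()
    print(f"gen {gen}: mu={mu:+.3f}, sigma={sigma:.3f}")
    # Each generation trains only on samples from the previous model,
    # so finite-sample error compounds and the tails slowly disappear.
    data = rng.normal(loc=mu, scale=sigma, size=500)
```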

5. ChatLaw: Open-Source Legal Large Language Model with Integrated External Knowledge Bases

ChatLaw is an open-source legal large language model designed specifically for the Chinese legal domain. It combines vector retrieval with keyword retrieval to curb model hallucinations during reference-data lookup, yielding more accurate responses, and employs a self-attention mechanism to reduce the impact of noise in the reference data.
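
The hybrid retrieval idea can be sketched in a few lines (a simplified illustration, not ChatLaw’s actual pipeline; `embed` stands in for any text-embedding function, and the term-overlap score is a crude stand-in for BM25-style keyword retrieval).

```python
import numpy as np

def keyword_score(query: str, doc: str) -> float:
    """Crude term-overlap score, a stand-in for BM25-style keyword retrieval."""
    q, d = set(query.lower().split()), set(doc.lower().split())
    return len(q & d) / max(len(q), 1)

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

def hybrid_rank(query, docs, embed, alpha=0.5):
    """Blend dense-vector and keyword scores to pick reference passages."""
    qv = embed(query)
    scores = [alpha * cosine(qv, embed(d)) + (1 - alpha) * keyword_score(query, d)
              for d in docs]
    return [doc for _, doc in sorted(zip(scores, docs), reverse=True)]
```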

Enjoy these papers and news summaries? Get a daily recap in your inbox!

The Learn AI Together Community section!

Weekly AI Podcast

In this week’s episode of the “What’s AI” podcast, Louis Bouchard interviews Petar Veličković, a research scientist at DeepMind and an affiliate lecturer at Cambridge. Petar shares insights on the value of a Ph.D., emphasizing its role as a gateway to research and the opportunities it provides for building connections and adaptability. He also highlights the evolving landscape of AI research, underscoring the importance of diverse backgrounds and contributions. The interview provides valuable perspectives on academia versus industry, the role of a research scientist, working at DeepMind, teaching, and the significance of curiosity in driving impactful research. Tune in on YouTube, Spotify, or Apple Podcasts if you are interested in AI research!

Meme of the week!

Meme shared by mrobino

Featured Community post from the Discord

weaver159#1651 has recently introduced a new project called MetisFL, a federated learning framework designed to let developers easily federate their machine learning workflows and train models across distributed data silos without ever centralizing the data. The core of the framework is written in C++ and prioritizes scalability, speed, and resiliency. The project is currently transitioning from a private, experimental version to a public beta phase. Check it out on GitHub and support a fellow community member. Share your thoughts on this project in the thread here.
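
For readers new to the paradigm, the core federated-averaging loop that frameworks like MetisFL build on looks roughly like this (a generic sketch, not MetisFL’s API; `local_update` is a hypothetical placeholder for a client’s local training step).

```python
import numpy as np

def local_update(weights: np.ndarray, lr: float = 0.1) -> np.ndarray:
    """Hypothetical stand-in for one client's local training step."""
    grad = np.random.randn(*weights.shape) * 0.01  # pretend gradient
    return weights - lr * grad

def federated_average(global_w: np.ndarray, num_clients=5, rounds=10):
    """Clients train on their own silos; the server only averages weights,
    so raw data never leaves the clients."""
    for _ in range(rounds):
        client_ws = [local_update(global_w.copy()) for _ in range(num_clients)]
        global_w = np.mean(client_ws, axis=0)
    return global_w

print(federated_average(np.zeros(4)))
```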

AI poll of the week!

Join the discussion on Discord.

TAI Curated section

Article of the week

Better than GPT-4 for SQL queries: NSQL (Fully OpenSource) by Dr. Mandar Karhade, MD. PhD.

SQL remains one of the most commonly used languages for working with data. Wouldn’t it be great if we could write SQL queries just by asking a large language model? That would offload a great deal of work and probably democratize access to insights for almost everyone in the company who needs them. In this piece, the author discusses NSQL, a new family of open-source large foundation models (FMs) designed specifically for SQL generation tasks.
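
As a quick illustration, the NSQL checkpoints can be tried with the Hugging Face transformers library (a sketch assuming the family’s published prompt format of schema, then question, then a SELECT prefix; the checkpoint name below is one example and may differ from what you need).

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "NumbersStation/nsql-350M"  # one of the NSQL checkpoints
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

prompt = """CREATE TABLE orders (order_id INT, customer TEXT, total REAL);

-- Using valid SQLite, answer the following question for the table above.

-- What is the total order value per customer?

SELECT"""

inputs = tokenizer(prompt, return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```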

Our must-read articles

Classification Metrics Clearly Explained! by Jose D. Hernandez-Betancur

Unleash Data Insights: Mastering AI for Powerful Analysis by Amit Kumar

Meet Fully OpenSource Foundation Model By Salesforce XGen-7B by Dr. Mandar Karhade, MD. PhD.

If you want to publish with Towards AI, check our guidelines and sign up. We will publish your work to our network if it meets our editorial policies and standards.

Job offers

Data Architect @ShyftLabs (Toronto, Canada)

Machine Learning Specialist — Legal Systems @Uni Systems (Brussels, Belgium)

Graphics AI Research Developer @Plain Concepts (Remote)

Data Engineer @Tomorrow (Freelance/Romania)

Growth Manager, Data & Analytics @WillowTree (Remote)

Client Platform Engineer @Chainalysis (Remote)

Intern — Software Engineering Interns — ACST @Activate Interactive Pte Ltd (Singapore)

Interested in sharing a job opportunity here? Contact [email protected].

If you are preparing for your next machine learning interview, don’t hesitate to check out our leading interview preparation website, Confetti!

https://www.confetti.ai/

Join over 80,000 subscribers and data leaders on the AI newsletter to keep up to date with the latest developments in AI, from research to projects and ideas. If you are building an AI startup, an AI-related product, or a service, we invite you to consider becoming a sponsor.

Published via Towards AI
