This AI newsletter is all you need #82
Last Updated on January 25, 2024 by Editorial Team
Author(s): Towards AI Editorial Team
Originally published on Towards AI.
What happened this week in AI by Louie
This week, our eyes were on OpenAI's GPT Store launch and Poe's creator monetization program as companies begin to explore new ways for creators to monetize LLM-based apps.
OpenAI introduced the GPT Store on January 10, offering ChatGPT Plus, Team, and Enterprise users a platform to find and create custom versions of ChatGPT. First announced at OpenAI's DevDay in November, these customizable GPTs can be tailored for specific tasks and easily shared with other subscribers. Additionally, OpenAI announced the upcoming launch of a GPT builder revenue program in Q1, initially available to US builders, who will earn based on user engagement with their GPTs.
OpenAI disclosed that 3 million GPTs have already been created! A GPT can be built in as little as 20 minutes, but some undergo hours or days of iteration, data selection, and prompt tweaking to improve performance and cover edge cases. They can also grow more complex, for example, by feeding the model a folder of prompt templates for different user requests. GPTs can also connect to external APIs via custom actions, with any amount of complexity built behind the scenes. This week, we released a tutorial for building GPTs using custom actions and external APIs.
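To make the custom-action idea concrete, below is a minimal sketch of the kind of external endpoint a GPT action could call. Everything here is hypothetical; the route, fields, and trail data are invented for illustration and are not from the tutorial. A real GPT would reach a deployed endpoint like this through the OpenAPI schema declared in its action.

```python
# Hypothetical action backend a custom GPT could call; the route and
# schema below are illustrative, not from OpenAI's tutorial.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class TrailQuery(BaseModel):
    location: str
    max_distance_km: float = 10.0

@app.post("/recommend_trails")
def recommend_trails(query: TrailQuery) -> dict:
    # A real backend would query a database or search index here.
    trails = [
        {"name": "Ridge Loop", "distance_km": 7.5},
        {"name": "River Walk", "distance_km": 4.2},
    ]
    matches = [t for t in trails if t["distance_km"] <= query.max_distance_km]
    return {"location": query.location, "trails": matches}
```

The model decides when to call the endpoint based on the action's description, so the complexity lives server-side while the GPT itself stays a thin prompt layer.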
In similar news, Quora's Poe LLM chatbot platform announced it raised $75m this week. CEO Adam D'Angelo stated, "We expect the majority of the funding to be used to pay creators of bots on the platform through our recently launched creator monetization program." Poe's platform allows access to various LLMs, both open and closed. Its creator monetization program launched in October and enables creators of bots or prompts to generate revenue.
Why should you care?
We think it is positive that individuals can begin monetizing their creations and side projects and even find an external audience for prompts and mini AI bots they have developed for their workflows. However, it is still very early, and it is hard to know how LLM-based apps and products will be accessed and monetized. Will people build mostly with open-source LLMs like Llama or closed-source LLM APIs like GPT-4? Will most LLM-based apps be accessed on their own platforms with their own frontends, or primarily through third-party LLM platforms and app stores such as the GPT Store?
Beyond short-term creator monetization, it is interesting to speculate whether the GPT Store could become more significant. While many of the 3 million GPTs created to date are little more than basic prompts and add little value, some of these mini GPTs do, in fact, enhance the capabilities of GPT-4 on specific tasks by combining the LLM with human imagination and expertise. This comes via the human decisions feeding into how the app is designed, what data is uploaded, and what detailed instructions are given to the LLM. Could a GPT-5 or GPT-6 class model (which would also perform each of these GPTs' tasks far more competently) be trained as an agent to choose, use, and combine a future library of millions of GPTs, each with its own specialized 'narrow AI' capability, to perform much more complex tasks? LLMs still fundamentally lack many reasoning capabilities, but adding expertise from millions of humans into prompts and carefully chosen RAG datasets can potentially make up for some of this and hack together enhanced AI capabilities. Whether GPTs are a temporary trend or will become a fundamental part of AI capabilities and the AI product landscape remains to be seen.
– Louie Peters, Towards AI Co-founder and CEO
Hottest News
1. OpenAI Launched Its GPT Store
OpenAI launched its "GPT Store," an online marketplace for developers to share their custom chatbots publicly. The store within ChatGPT allows OpenAI's paid subscribers to access special-purpose chatbots, such as a code tutor, a Canva design bot, and a hiking trail recommendation bot.
2. LangChain Released Its First Stable Version, v0.1.0
LangChain has released its first stable and backward-compatible version, v0.1.0. This release brings better observability and debugging capabilities, including performance tracking and insight tools, and introduces a new versioning system for clear API and feature updates.
3. AI Makes New Material That Could Dramatically Change How Batteries Work
Microsoft AI, utilizing Azure Quantum Elements, identified a novel material after screening 32 million candidates, resulting in a prototype lithium battery that uses 70% less lithium.
4. Google Confirms It Just Laid Off Around a Thousand Employees
Google confirmed to The Verge that it has eliminated "a few hundred" roles in its core engineering and Google Assistant teams, meaning Google has now confirmed layoffs of around a thousand employees.
5. AI Discovers That Not Every Fingerprint Is Unique
Columbia engineers have built a new AI that shatters a long-held belief in forensics: that fingerprints from different fingers of the same person are unique. It turns out they are similar; we've just been comparing fingerprints the wrong way.
Five 5-minute reads/videos to keep you learning
1. A Simple Guide to Local LLM Fine-Tuning on a Mac With MLX
This guide provides a detailed process for fine-tuning large language models (LLMs) on Apple Silicon Macs using the MLX framework. It covers environment setup, data preparation, model fine-tuning, and methods for testing the customized LLM on Mac hardware.
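As a small taste of the data-preparation step, here is a minimal sketch that writes train/validation splits in the JSONL format MLX's example LoRA script expects (one JSON object with a "text" field per line). The Q/A template and file names are assumptions for illustration; the linked guide covers the exact format and full workflow.

```python
# Minimal sketch: write train/valid splits as JSONL with a "text" field,
# the format used by MLX's example LoRA fine-tuning script.
# The Q/A template below is an assumption, not from the guide.
import json

examples = [
    {"prompt": "What is MLX?", "response": "An array framework for Apple Silicon."},
    {"prompt": "What does LoRA train?", "response": "A small set of adapter weights."},
    {"prompt": "Why fine-tune locally?", "response": "Privacy and fast iteration."},
    {"prompt": "What hardware is needed?", "response": "An Apple Silicon Mac."},
]

def to_text(ex: dict) -> str:
    return f"Q: {ex['prompt']}\nA: {ex['response']}"

split = int(len(examples) * 0.75)
for name, rows in [("train.jsonl", examples[:split]), ("valid.jsonl", examples[split:])]:
    with open(name, "w") as f:
        for ex in rows:
            f.write(json.dumps({"text": to_text(ex)}) + "\n")
```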
2. Install Stable Diffusion XL Locally on macOS
Stable Diffusion XL is an open-source AI image generation model similar to DALL-E or Midjourney. This guide explains how to run Stable Diffusion XL on macOS by installing core development tools such as PyTorch, Anaconda, and Xcode. The setup guide includes command-line interface tasks.
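For readers who prefer a scripted route, here is a minimal sketch of running SDXL through Hugging Face's diffusers library on Apple Silicon's MPS backend. The linked guide's own setup may differ; the model ID is the public SDXL base checkpoint, and the prompt is just an example.

```python
# Minimal sketch: SDXL via Hugging Face diffusers on Apple Silicon (MPS).
# The linked guide uses its own CLI-based setup; this is an alternative path.
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
)
pipe = pipe.to("mps")  # Apple Silicon GPU backend

image = pipe(
    prompt="a watercolor painting of a lighthouse at dawn",
    num_inference_steps=30,
).images[0]
image.save("lighthouse.png")
```

On machines without Apple Silicon, swapping "mps" for "cuda" or "cpu" works, the latter considerably slower.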
3. A Survey of 2,778 Researchers Shows How Fragmented the AI Science Community Is
The 2023 Expert Survey on Progress in AI indicates significant advancements, with AI predicted to autonomously develop websites and compose music in the style of well-known artists by 2028. Experts estimate a 10% chance that AI will surpass human capability in all tasks by 2027, increasing to 50% by 2047.
4. Ben Thompson's Take on the OpenAI / NYTimes Lawsuit
The New York Times' suit against OpenAI and Microsoft for copyright infringement further opens the discussion on the unauthorized use of published work to train AI models. This thought piece by Ben Thompson splits fair use into inputs and outputs: i.e., it is acceptable to "input" copyrighted material, but it is illegal to "output" copyrighted material.
5. 12 Ways To Get Better at Using ChatGPT: A Comprehensive Prompt Guide
Since its launch, ChatGPT has become seemingly omnipresent. In this article, Insider asked AI enthusiasts how they interact with the chatbot to produce desirable outputs. It offers twelve practical prompting tips on how to get ChatGPT to do what you want.
Repositories & Tools
- Olly is a personal AI assistant that optimizes social media presence by commenting, managing personalized interactions, and more.
- Mergekit is a toolkit for merging pre-trained language models. It uses an out-of-core approach to perform unreasonably elaborate merges in resource-constrained situations.
- BakedAvatar takes monocular video recordings of a person and produces a mesh-based representation for real-time 4D head avatar synthesis on various devices, including mobiles.
Top Papers of The Week
1. MoE-Mamba: Efficient Selective State Space Models with Mixture of Experts
MoE-Mamba is a selective state space model incorporating a Mixture of Experts (MoE) for enhanced efficiency. It reaches the same performance as the Mamba model in 2.2 times fewer training steps while preserving Mamba's fast inference.
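As a rough illustration of the interleaving idea (not the paper's implementation), the sketch below alternates a sequence-mixing layer with a switch-style top-1 MoE feed-forward layer. A real MoE-Mamba block would use a Mamba SSM layer as the mixer; a GRU stands in here so the snippet runs without extra dependencies.

```python
# Toy sketch of MoE-Mamba's interleaving pattern (not the paper's code).
# A real implementation would use a Mamba SSM layer as the sequence mixer;
# a GRU stands in here so the snippet runs without extra dependencies.
import torch
import torch.nn as nn

class Top1MoE(nn.Module):
    """Switch-style MoE feed-forward: each token is routed to one expert."""
    def __init__(self, d_model: int, d_ff: int, n_experts: int):
        super().__init__()
        self.router = nn.Linear(d_model, n_experts)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))
            for _ in range(n_experts)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # (batch, seq, d_model)
        flat = x.reshape(-1, x.shape[-1])
        gates = self.router(flat).softmax(dim=-1)
        top_gate, top_idx = gates.max(dim=-1)             # one expert per token
        out = torch.zeros_like(flat)
        for i, expert in enumerate(self.experts):
            mask = top_idx == i
            if mask.any():
                out[mask] = top_gate[mask, None] * expert(flat[mask])
        return out.reshape_as(x)

class MoEMambaBlock(nn.Module):
    """Alternates a sequence-mixing layer with a sparse MoE layer."""
    def __init__(self, d_model: int = 64, d_ff: int = 256, n_experts: int = 8):
        super().__init__()
        self.norm1, self.norm2 = nn.LayerNorm(d_model), nn.LayerNorm(d_model)
        self.mixer = nn.GRU(d_model, d_model, batch_first=True)  # Mamba stand-in
        self.moe = Top1MoE(d_model, d_ff, n_experts)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = x + self.mixer(self.norm1(x))[0]
        x = x + self.moe(self.norm2(x))
        return x

block = MoEMambaBlock()
print(block(torch.randn(2, 16, 64)).shape)  # torch.Size([2, 16, 64])
```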
2. Sleeper Agents: Training Deceptive LLMs that Persist Through Safety Training
If an AI system learned a deceptive strategy, could we detect and remove it using current state-of-the-art safety training techniques? This research paper by Anthropic shows the potential for AI systems to engage in and maintain deceptive behaviors, even when subjected to safety training protocols designed to detect and mitigate such issues.
3. NEFTune: Noisy Embeddings Improve Instruction Finetuning
The paper shows that a simple augmentation can improve language model finetuning. The NEFTune approach adds noise to the embedding vectors during training. For example, standard finetuning of LLaMA-2-7B using Alpaca achieves 29.79% on AlpacaEval, which rises to 64.69% with noisy embeddings.
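The core trick is small enough to sketch: sample uniform noise and scale it by α/√(Ld), where L is the sequence length and d the embedding dimension, as described in the paper. The standalone function below shows that step; wiring it into an actual finetuning loop is left out.

```python
# Minimal sketch of the NEFTune noise step (per the paper's scaling rule).
# Integrating this into a real finetuning loop is left out here.
import torch

def neftune_noise(embeddings: torch.Tensor, alpha: float = 5.0) -> torch.Tensor:
    """Add uniform noise scaled by alpha / sqrt(L * d) to token embeddings.

    embeddings: (batch, L, d) tensor from the model's embedding layer.
    Apply only during training; inference uses clean embeddings.
    """
    L, d = embeddings.shape[-2], embeddings.shape[-1]
    scale = alpha / (L * d) ** 0.5
    noise = torch.empty_like(embeddings).uniform_(-1.0, 1.0)
    return embeddings + scale * noise

noisy = neftune_noise(torch.randn(4, 128, 768))
```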
4. TOFU: A Task of Fictitious Unlearning for LLMs
LLMs trained on massive web datasets can memorize and reproduce sensitive or private data. Unlearning, i.e., tuning models to forget information in their training data, can protect private data after training. This paper proposes TOFU, a Task of Fictitious Unlearning, as a benchmark aimed at deepening our understanding of unlearning.
5. Instruction Tuning with Human Curriculum
This paper proposes a highly structured synthetic dataset that mimics the progressive and organized nature of human education, unlike conventional randomized instruction datasets. Compared to randomized training, this approach shows significant performance enhancements: +3.06 on the MMLU benchmark and +1.28 on the AI2 Reasoning Challenge.
Quick Links
- Microsoft's market cap hit $2.89 trillion, overtaking Apple as the world's most valuable public company. The company's investment in AI has boosted its share price this year.
- Google Cloud launches new AI tools for retailers to improve online shopping experiences and other retail operations. The suite of products includes a generative AI-powered chatbot that can talk to consumers and offer product recommendations.
- Rabbit sold out two batches of 10,000 units of its R1 pocket gadget, an AI-powered device that can use your apps for you, in just two days.
- OpenAI signs up 260 businesses for the corporate version of ChatGPT in four months. ChatGPT Enterprise offers additional features and enhanced privacy measures like data encryption.
Who's Hiring in AI
Senior Software Engineer β Scalability and Performance @super.AI (Remote)
Student Software Engineer @SoundHound AI (Remote)
AI/ML Training Manager @Underground Administration (Remote)
Machine Learning Engineer β AI Pipeline @Telnyx (LATM / EMEA β Remote)
Back End Engineer @Lilli (Remote)
Summer Data Science/Machine Learning Intern @AI Camp Inc (Remote)
Machine Learning Engineer @Nubank (Remote)
Interested in sharing a job opportunity here? Contact [email protected].
If you are preparing your next machine learning interview, don't hesitate to check out our leading interview preparation website, confetti!
Think a friend would enjoy this too? Share the newsletter and let them join the conversation.
Join over 80,000 data leaders and subscribers on the AI newsletter and keep up to date with the latest developments in AI, from research to projects and ideas. If you are building an AI startup, an AI-related product, or a service, we invite you to consider becoming a sponsor.
Published via Towards AI