

This AI newsletter is all you need #81

Last Updated on January 10, 2024 by Editorial Team

Author(s): Towards AI Editorial Team

Originally published on Towards AI.

What happened this week in AI by Louie

This week, we have been watching competition heat up in LLM chatbot products, with Google and Microsoft developing their ChatGPT alternatives. ChatGPT retains the leading market share and remains the most common way non-technical people interact with LLMs, but OpenAI is not standing still and will imminently launch its app store for GPTs. We think the GPT Store can incentivize many more developers to launch their own GPTs, enhancing ChatGPT for specific tasks and applications. We are interested in learning more about the economic model and when developer revenue share will be activated.

The news of the GPT Store release follows a report in The Information that OpenAI has reached $1.6 billion in annualized revenue, with internal forecasts reportedly pointing to a $5 billion run rate by year-end. Monetization is moving fast relative to the $22 million reported for 2022! It is unclear how much of this is ChatGPT+ consumer revenue versus cloud API developer revenue.

As OpenAI continues to commercialize ChatGPT, we also saw reports this week that Google is readying an upgraded paid version of Bard called “Bard Advanced,” available via a Google One subscription. It is likely powered by the as-yet-unreleased Gemini Ultra model, which is roughly GPT-4 class according to Google’s benchmarks. Microsoft also recently rebranded its Bing Chat product as Microsoft Copilot and rolled it out for mobile. Copilot integrates many of the OpenAI features available in ChatGPT and offers access to GPT-4 even without a subscription, a move we expect is aimed at gaining market share.

Why should you care?

Advanced LLM-based chat models are the easiest way for ordinary people to interact with the latest AI capabilities, and they are also paving the way for monetizing these technologies. We think it is important for the industry that competition remains strong, alongside continued progress on open-source alternatives. We believe the success of the GPT Store release will be a key signal for the direction AI development will take: will developers primarily build and release their AI apps through their own channels, or will they integrate into chat model platforms such as ChatGPT and Bard? A lot will come down to the economic model and the effectiveness of the UI/UX.

– Louie Peters — Towards AI Co-founder and CEO

Towards AI launches our AI Tutor Bot to accompany the third installment of our GenAI 360: Foundational Model Certification.

The AI Tutor chatbot has full access to 100+ lessons from all 3 courses and thousands of additional pages of helpful supporting tutorials and documentation.

The AI Tutor leverages Retrieval Augmented Generation (RAG) to provide scalable, personalized support for thousands of students enrolled in the Gen AI 360 online courses. RAG improves answer accuracy, reduces hallucinations, and lets the bot cite its sources (which also make great recommendations for further reading!). The AI Tutor can help with your questions as you learn about topics such as building LLM apps with LangChain and the Deep Lake vector database, training and fine-tuning LLMs, and advanced RAG techniques! Read more in our blog post: Introducing our AI Tutor Bot — a RAG App Created with Towards AI & Activeloop.
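
As a loose illustration of the RAG pattern (a toy sketch, not the actual AI Tutor implementation; the lesson snippets and the bag-of-words similarity are invented for the example, and a real system would use learned embeddings and a vector database such as Deep Lake):

```python
from collections import Counter
import math

def embed(text):
    # Toy bag-of-words "embedding"; a real RAG system uses learned embeddings
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, docs, k=2):
    # Rank documents by similarity to the query and keep the top k
    q = embed(query)
    return sorted(docs, key=lambda d: cosine(q, embed(d["text"])), reverse=True)[:k]

def build_prompt(query, docs):
    # Grounding the answer in cited context is what reduces hallucinations
    # and lets the bot point to its sources
    context = "\n".join(f"[{d['source']}] {d['text']}" for d in retrieve(query, docs))
    return f"Answer using only the context below and cite sources.\n{context}\n\nQ: {query}"

lessons = [
    {"source": "lesson-12", "text": "LangChain chains LLM calls with tools and memory."},
    {"source": "lesson-07", "text": "Deep Lake stores embeddings for vector search."},
    {"source": "lesson-03", "text": "Tokenizers split text into subword units."},
]
print(build_prompt("How do I do vector search with Deep Lake?", lessons))
```

The assembled prompt, with its cited context, is what gets sent to the LLM; the citations survive into the answer, which is where the "further reading" recommendations come from.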

Hottest News

  1. OpenAI’s Annualized Revenue Tops $1.6 Billion

OpenAI recently topped $1.6 billion in annualized revenue on strong growth from its ChatGPT product, up from $1.3 billion as of mid-October. The Information also reported some internal forecasts for a $5bn rate by year-end.

2. OpenAI’s App Store for GPTs Will Launch Next Week

OpenAI plans to launch a store for GPTs, custom apps based on its text-generating AI models, sometime in the coming week. GPTs don’t require coding experience; developers can simply describe their GPT’s capabilities in plain language, and GPT Builder will attempt to build an AI-powered chatbot from that description.

3. GitHub Copilot Chat Opens for Organizations and Individuals

GitHub Copilot Chat is now open for organizations and individuals as a core piece of GitHub’s AI-powered developer platform. It is generally available for both Visual Studio Code and Visual Studio, is included in all GitHub Copilot plans, and is free for verified teachers, students, and maintainers of popular open-source projects.

4. Perplexity AI Raises $74M To Take On Google and Microsoft Bing With AI-Native Search

Perplexity AI has raised $73.6 million in a Series B round of funding, bringing its total raised to about $100 million at an estimated valuation of over $500 million. It plans to use the capital to build out its AI-native search engine and take on heavyweights Google and Microsoft.

5. Open-Source AI Voice Cloning Arrives With MyShell’s New OpenVoice Model

OpenVoice, developed by researchers at MIT and Tsinghua University together with members of AI startup MyShell, offers open-source voice cloning with granular controls not found on other platforms. It comprises two AI models: a text-to-speech (TTS) model and a “tone converter.” Find the repository and paper below.

Five 5-minute reads/videos to keep you learning

  1. Forrester Identifies Biggest Barriers to Generative AI Success

This article highlights the key concerns about generative AI, condensed from Forrester Consulting’s survey of 220 AI decision-makers. The poll highlights major roadblocks (including well-known issues like hallucinations) that keep organizations from operationalizing foundation models for planned use cases.

2. How Important Is Explainability? Applying Clinical Trial Principles to AI Safety Testing

AI explainability remains a key focus for AI providers and regulators across industries. This article discusses how A/B testing can help determine AI safety, what effective measurement of AI safety looks like, and how to ensure the accountability of AI systems.

3. A Case for AI Alignment Being Difficult

AI alignment focuses on ensuring that AI systems conform to human values and societal norms, which presents significant complexities in implementation. This essay attempts to find a framing close enough to Yudkowsky’s model of alignment difficulty that the two can be discussed in the same language.

4. Learning JAX as a PyTorch Developer

This guide provides insights for PyTorch developers transitioning to JAX. It emphasizes the advantages of JAX’s JIT compilation for improved performance by compiling entire computations at once. It highlights the need to understand JAX’s tracing mechanism for compilation and the use of JAX-specific functions for conditional logic.
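
The tracing point can be made concrete: under `jax.jit`, a plain Python `if` on a traced array raises an error because the function is traced with abstract values, so conditional logic goes through JAX-specific control-flow primitives such as `jax.lax.cond` (a minimal sketch):

```python
import jax
import jax.numpy as jnp

@jax.jit
def scale_or_negate(x):
    # A Python `if x.sum() > 0:` would raise a TracerBoolConversionError here,
    # because jit traces with abstract values; lax.cond keeps both branches
    # inside the compiled computation.
    return jax.lax.cond(x.sum() > 0, lambda v: v * 2.0, lambda v: -v, x)

print(scale_or_negate(jnp.array([1.0, 2.0])))    # positive sum: doubled
print(scale_or_negate(jnp.array([-3.0, 1.0])))   # non-positive sum: negated
```

Note that both branches are traced and compiled once, which is part of how JAX compiles the entire computation ahead of time rather than dispatching op by op as in eager PyTorch.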

5. Advancement of AI: Machine Learning Examples in Real Life

User personalization and natural language processing (NLP) are both examples of machine learning (ML) in action. As ML continues to evolve, businesses are taking further steps to implement it in their applications. This article looks at how ML works and how we interact with it in day-to-day life.

Repositories & Tools

  1. Mixtral-offloading optimizes Mixtral-8x7B models for consumer-level hardware, including Colab, by enhancing memory efficiency.
  2. OpenVoice provides advanced voice replication across languages and accents with fine-tuning capabilities for emotion and intonation, requiring only minimal data.
  3. Akkio adds AI-powered analytics and predictive modeling to your service offering.
  4. Anote is an AI-assisted data labeling platform that accelerates the labeling process for unstructured text data.

Top Papers of The Week

  1. Self-Play Fine-Tuning Converts Weak Language Models to Strong Language Models

SPIN (Self-Play Fine-Tuning) is a new approach for improving the performance of LLMs without relying on additional human-annotated data. Using self-play to iterate and learn, SPIN enables an LLM to refine its capabilities starting from existing human-curated content. In tests, LLMs fine-tuned with SPIN outperformed those trained with Direct Preference Optimization on additional GPT-4 data.

2. DocLLM: A Layout-Aware Generative Language Model for Multimodal Document Understanding

DocLLM is an LLM tailored for document management that integrates OCR text with bounding box data, bypassing the need for image encoders. By incorporating text with spatial layouts via disentangled matrices, DocLLM offers a novel pre-training regimen that enhances its adaptability to varied document formats and content.

3. Improving Text Embeddings with Large Language Models

Researchers are advancing text embedding quality by utilizing LLMs to generate synthetic data for a wide range of text embedding tasks in nearly 100 languages. This synthetic data is then leveraged to fine-tune open-source decoder-only LLMs, such as Mistral-7B, with standard contrastive loss.
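
The "standard contrastive loss" used in this line of work is typically an in-batch InfoNCE objective: each query is pulled toward its paired passage and pushed away from every other passage in the batch. A loose NumPy sketch of the objective (illustrative only, not the paper's Mistral-7B training code; the embeddings here are random stand-ins):

```python
import numpy as np

def info_nce_loss(queries, passages, temperature=0.05):
    # Each query's positive passage sits at the same batch index;
    # every other passage in the batch acts as a negative.
    q = queries / np.linalg.norm(queries, axis=1, keepdims=True)
    p = passages / np.linalg.norm(passages, axis=1, keepdims=True)
    logits = (q @ p.T) / temperature                 # scaled cosine similarities
    logits -= logits.max(axis=1, keepdims=True)      # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))              # cross-entropy on the diagonal

rng = np.random.default_rng(0)
queries = rng.normal(size=(4, 8))
aligned = info_nce_loss(queries, queries)                      # positives match queries
mismatched = info_nce_loss(queries, rng.normal(size=(4, 8)))   # positives are random
print(aligned, mismatched)
```

Fine-tuning drives the loss down by making each query embedding more similar to its positive than to the in-batch negatives; the low temperature sharpens the distinction between near-duplicates.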

4. Astraios: Parameter-Efficient Instruction Tuning Code Large Language Models

In a study assessing parameter-efficient fine-tuning (PEFT) methods for code LLMs of up to 16 billion parameters, full-parameter fine-tuning (FFT) consistently delivered the best performance across tasks and datasets. However, Low-Rank Adapters (LoRA) emerged as a cost-effective alternative, especially at larger model scales.
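
The LoRA idea the study benchmarks can be sketched in a few lines: the pretrained weight matrix stays frozen, and a low-rank product B·A, scaled by alpha/r, is added as the trainable update (an illustrative NumPy sketch, not the Astraios code; the sizes and scaling constant are made up for the example):

```python
import numpy as np

rng = np.random.default_rng(42)
d, r = 512, 8                       # hidden size, LoRA rank (r << d)

W = rng.normal(size=(d, d))         # frozen pretrained weight
A = rng.normal(size=(r, d)) * 0.01  # trainable down-projection
B = np.zeros((d, r))                # trainable up-projection, zero-initialized
alpha = 16                          # scaling hyperparameter

def lora_forward(x):
    # Frozen path plus low-rank update; only A and B would receive gradients
    return x @ W.T + (alpha / r) * (x @ A.T @ B.T)

x = rng.normal(size=(1, d))
# With B zero-initialized, the adapter starts as an exact no-op
assert np.allclose(lora_forward(x), x @ W.T)

# Trainable parameters shrink from d*d to 2*d*r
print(d * d, 2 * d * r)             # 262144 vs 8192
```

The parameter count is why LoRA gets more attractive as models scale: the adapter cost grows linearly in d while full fine-tuning grows quadratically.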

5. OpenVoice: Versatile Instant Voice Cloning

This paper introduces OpenVoice, a versatile voice cloning approach that requires only a short audio clip from the reference speaker to replicate their voice and generate speech in multiple languages. This addresses two open challenges in the field: Flexible Voice Style Control and Zero-Shot Cross-Lingual Voice Cloning.

Quick Links

1. Nabla, a French startup building an AI copilot to accelerate how doctors work with patients, announced it has raised $24 million in a series B round of funding.

2. The viral 4,700-person list reveals famous artists whose work was used to train AI image generators. It appeared in a November court exhibit in a lawsuit against Midjourney, Stability AI, DeviantArt, and Runway AI.

3. Knownwell, a Washington, DC-based AIaaS platform company, raised $2M in Pre-Seed funding to build an AI-powered platform that helps executives drive client-driven operations and high-level execution.

Who’s Hiring in AI

Java Software Engineer (Junior Level) @J-Mack Technologies (Remote)

Intermediate/Senior Software Engineer @TerraSense Analytics Ltd (Remote)

Data Scientist @LeanTaaS (US/Remote)

Machine Learning Researcher @VERSES (Remote)

Full Stack Engineer with Front End Focus @BlueOcean AI (Remote)

AI Architect Part-Time Hourly Freelancer @Foxbox Digital (Remote)

Software Engineer I, Backend (Anywhere Underwriting ML) @Affirm (Remote)

Interested in sharing a job opportunity here? Contact [email protected].

If you are preparing your next machine learning interview, don’t hesitate to check out our leading interview preparation website, Confetti!

Think a friend would enjoy this too? Share the newsletter and let them join the conversation.

Join over 80,000 subscribers and thousands of data leaders on the AI newsletter to keep up to date with the latest developments in AI, from research to projects and ideas. If you are building an AI startup, an AI-related product, or a service, we invite you to consider becoming a sponsor.

Published via Towards AI
