
This AI newsletter is all you need #72

Last Updated on November 8, 2023 by Editorial Team

Author(s): Towards AI Editorial Team

Originally published on Towards AI.

What happened this week in AI by Louie

This week, AI news was dominated by OpenAI’s DevDay and the launch of many new models and features, which drowned out Elon Musk’s earlier entry into the LLM race with xAI’s Grok, a GPT-3-class model. OpenAI’s DevDay included the launch of a new, better, faster, and cheaper GPT-4 Turbo model, vision capability via the API, an integrated retrieval engine, and API access to several other models (DALL-E 3, a new Whisper speech-to-text model, and new text-to-speech models). But the event’s surprise was the release of “GPTs”, a no-code solution, and a forthcoming “GPT Store” app store for people to build and monetize their own custom GPT agents via ChatGPT.
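
For readers who want to try the new API surface right away, here is a minimal sketch using the OpenAI Python SDK (v1.x); the model identifiers below reflect the DevDay announcements and may since have been renamed, so treat them as assumptions.

```python
# Minimal sketch of calling GPT-4 Turbo and the vision-enabled model via the
# OpenAI Python SDK (v1.x). Model names reflect the DevDay announcements and
# may have changed since; treat them as assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Text generation with GPT-4 Turbo
chat = client.chat.completions.create(
    model="gpt-4-1106-preview",
    messages=[{"role": "user", "content": "Summarize this week's AI news in one sentence."}],
)
print(chat.choices[0].message.content)

# Image understanding with the vision-enabled model (image URL is a placeholder)
vision = client.chat.completions.create(
    model="gpt-4-vision-preview",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "What is shown in this image?"},
            {"type": "image_url", "image_url": {"url": "https://example.com/chart.png"}},
        ],
    }],
)
print(vision.choices[0].message.content)
```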

This latest set of releases from OpenAI leaves us wondering: could this be the “ChatGPT” moment of GPT-4-class models? The initial ChatGPT launch paired iterative improvements to GPT-3-class models with significant improvements to the UI, which enabled the widespread adoption of GPT-3 for chatbot applications. This latest set of releases has a similar feel, but this time for GPT-4-class models and for the adoption of LLM RAG and agent applications: an easier-to-use agent-building UI and faster, more affordable models allow LLMs to reach a new potential.

Following the event, we noted the conversation focused on 1) whether OpenAI’s aggressive pricing and bundling of new functionality will threaten other AI startups and “GPT wrappers”, and 2) whether OpenAI will create a new App Store-style ecosystem with the new GPTs product. On the first point, we see persuasive arguments on both sides. For example, OpenAI has caught up with Anthropic’s Claude 2 on its previously differentiating longer context length (at a slightly lower price), released a text-to-speech API significantly cheaper than ElevenLabs, and encroached on the territory of document-processing and Retrieval Augmented Generation (RAG) ecosystem players such as LangChain. At the same time, however, it has released an extremely powerful toolkit for building much more capable LLM and RAG products, with a better GPT-4 and many new models available via API. It has also made it much easier to build on top of OpenAI models and significantly lowered the barriers to entry for people wanting to build their own projects.

- Louie Peters, Towards AI Co-founder and CEO

Hottest News

1. All the news from OpenAI’s first developer conference

OpenAI’s DevDay included the launch of a new, better, faster, and cheaper GPT-4 Turbo model, vision capability via the API, an integrated retrieval engine, and API access to several other models. But the event’s surprise was the release of “GPTs”, a no-code solution, and a forthcoming “GPT Store” app store.

2. RedPajama-Data-v2: An Open Dataset With 30 Trillion Tokens for Training LLMs

RedPajama-Data-V2, the largest public training dataset for language model research, is a cleaned dataset comprising 30 trillion tokens from 84 CommonCrawl dumps in five major languages. It includes pre-computed quality annotations for filtering and weighting purposes and is now available for research and commercial use.
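
If you want to take a quick look at the data, here is a minimal sketch using the Hugging Face datasets library; the repository id, the "sample" configuration, and the raw_content field follow the public dataset card and should be verified before use.

```python
# Minimal sketch of loading the small "sample" configuration of
# RedPajama-Data-V2 with the Hugging Face `datasets` library. The repository
# id, config name, and field names follow the public dataset card; the full
# 30T-token dataset is loaded with different arguments.
from datasets import load_dataset

ds = load_dataset("togethercomputer/RedPajama-Data-V2", name="sample")

# Inspect a few documents; "raw_content" holds the document text per the card.
for record in ds["train"].select(range(3)):
    print(record["raw_content"][:200])
```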

3. Elon Musk’s First AI Product Is a Chatbot Named Grok

Elon Musk’s AI startup xAI has launched its first chatbot, Grok, which will be available to X Premium+ subscribers. The Grok team includes AI specialists from DeepMind, OpenAI, Google, Microsoft, and Tesla. Musk highlights that Grok’s ability to access real-time information on the X platform gives it an edge over other chatbots.

4. A New Beatles Song Is Set for Release After 45 Years - With Help From AI

A new Beatles song featuring the complete Fab Four was released 45 years after John Lennon began writing it, with the help of artificial intelligence. This opens possibilities for reviving more old recordings or even creating new music, but it also poses ethical questions about consent and art manipulation.

5. A Glimpse of the Next Generation of AlphaFold

AlphaFold, an advanced AI model, accurately predicts the structures of molecules in the Protein Data Bank, improving our understanding of biomolecules and supporting research into complex protein structures. It has potential applications in cancer drug discovery, vaccine development, and pollution reduction.

Five 5-minute reads/videos to keep you learning

1. huggingface/alignment-handbook: Robust Recipes To Align Language Models With Human and AI Preferences

Hugging Face has released a set of alignment guides in their Alignment Handbook for language model training. The guides cover techniques such as supervised fine-tuning, reward modeling, rejection sampling, and direct preference optimization (DPO) to enhance language model performance.
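
To make the preference-optimization step concrete, here is a minimal sketch of the DPO objective computed from per-sequence log-probabilities; it is a simplified illustration of the loss these recipes optimize, not the handbook’s own code.

```python
# Minimal sketch of the DPO objective: increase the margin between the policy's
# preference for chosen vs. rejected completions, measured relative to a frozen
# reference model. Inputs are summed per-sequence log-probabilities.
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps: torch.Tensor,
             policy_rejected_logps: torch.Tensor,
             ref_chosen_logps: torch.Tensor,
             ref_rejected_logps: torch.Tensor,
             beta: float = 0.1) -> torch.Tensor:
    # Log-ratio of policy to reference for each completion
    chosen_rewards = beta * (policy_chosen_logps - ref_chosen_logps)
    rejected_rewards = beta * (policy_rejected_logps - ref_rejected_logps)
    # Negative log-sigmoid of the reward margin, averaged over the batch
    return -F.logsigmoid(chosen_rewards - rejected_rewards).mean()
```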

2. How AI Detectors Can Destroy Innocent Writers’ Livelihoods

The high false-positive rate of general-purpose AI detectors had a devastating effect on freelance writer Michael Berben: falsely accused of using AI to cheat, he lost his job. The article sheds light on how common these false positives are and on the lack of effective mechanisms for challenging AI detectors in the field.

3. AI + APIs - What 12 Experts Think the Future Holds

The convergence of AI and APIs is revolutionizing the tech world. Startups leveraging these tools can challenge established giants and reshape power dynamics in the digital economy. This essay highlights the thoughts and opinions of 12 experts on the opportunity that sits at the crossroads of AI and APIs.

4. After 500+ LoRAs Made, Here Is the Secret

This blog post emphasizes the importance of quality datasets and parameter optimization in getting the most out of LoRAs. It recommends building a clean, well-curated dataset and using a 33B model for better fine-tuning, and it cautions that gradient accumulation can hurt quality.
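
As a starting point for your own experiments, here is a minimal sketch of attaching LoRA adapters with the peft library; the base model, rank, alpha, and target modules are illustrative assumptions, not the settings recommended in the post.

```python
# Minimal sketch of attaching LoRA adapters to a causal LM with the `peft`
# library. The base checkpoint and hyperparameters below are illustrative
# assumptions, not the blog post's recommended settings.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("facebook/opt-350m")

config = LoraConfig(
    r=16,                                 # adapter rank
    lora_alpha=32,                        # scaling factor
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # attention projections to adapt
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, config)
model.print_trainable_parameters()        # only the adapter weights are trainable
```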

5. What Is Multimodal Artificial Intelligence (AI)?

The guide explains the concept of multimodal artificial intelligence (AI) and its transformative impact on various fields. It explores the practical applications of multimodal AI, discusses fusion techniques, and offers a concise glossary of key terms in this domain.

Papers & Repositories

1. Zephyr: Direct Distillation of LM Alignment

Zephyr-7B, developed by Hugging Face, has achieved impressive results, surpassing Llama 2 Chat 70B on various benchmarks. Its training approach involves dataset construction, fine-tuning, AI feedback collection, and preference optimization.

2. huggingface/distil-whisper

Distil-Whisper is a distilled version of Whisper that offers faster inference and a smaller model size. It performs well in noisy environments and exhibits fewer word repetition and insertion errors. The model uses a distillation process trained on a large and diverse dataset, ensuring robustness across various domains.
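
Here is a minimal sketch of running Distil-Whisper through the transformers pipeline API; the checkpoint name follows the model card at the time of writing, and the audio file path is a placeholder.

```python
# Minimal sketch of speech recognition with Distil-Whisper via the
# `transformers` pipeline API. The checkpoint id follows the model card at the
# time of writing; the audio file path is a placeholder.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="distil-whisper/distil-large-v2",
    chunk_length_s=30,                   # long-form audio is chunked into 30 s windows
)
result = asr("meeting_recording.wav")    # path to a local audio file (assumed)
print(result["text"])
```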

3. LLMs may Dominate Information Access: Neural Retrievers are Biased Towards LLM-Generated Texts

This work conducts a quantitative evaluation of different IR models on human-written and LLM-generated texts. It finds that neural retrieval models tend to rank LLM-generated texts higher than human-written ones. This raises concerns about source bias and calls for further exploration and evaluation in the era of LLMs.

4. Is ChatGPT Good at Search? Investigating Large Language Models as Re-Ranking Agents

This paper investigates generative LLMs such as ChatGPT and GPT-4 for relevance ranking in Information Retrieval. It shows that LLMs, when guided effectively, can achieve better results than state-of-the-art supervised methods on information retrieval benchmarks.
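
As an illustration of the idea, here is a minimal sketch of a listwise re-ranking prompt in the spirit of the paper; the wording is our own, not the paper’s exact template.

```python
# Minimal sketch of a listwise re-ranking prompt: the LLM is asked to order
# candidate passages by relevance to a query. The prompt wording is
# illustrative, not the paper's exact template.
def build_rerank_prompt(query: str, passages: list[str]) -> str:
    numbered = "\n".join(f"[{i + 1}] {p}" for i, p in enumerate(passages))
    return (
        "Rank the following passages by relevance to the query.\n"
        f"Query: {query}\n\nPassages:\n{numbered}\n\n"
        "Answer with the passage numbers in descending order of relevance, e.g. [2] > [1] > [3]."
    )

print(build_rerank_prompt(
    "effects of caffeine on sleep",
    ["Caffeine blocks adenosine receptors in the brain.",
     "Coffee production in Brazil rose last year.",
     "A study of caffeine intake and sleep latency in adults."],
))
```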

5. Large Language Models Understand and Can Be Enhanced by Emotional Stimuli

This paper takes the first step towards exploring the ability of LLMs to understand emotional stimuli. It found that providing specific phrases to the GPT-4 AI model improved its performance. Using “EmotionPrompts” resulted in higher-quality outputs, with an improvement of 8% during instruction induction and 115% on higher-stakes tasks.
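
For a sense of how simple the technique is, here is a minimal sketch of appending an emotional stimulus to an otherwise unchanged instruction; the example phrase follows the style reported in the paper, but the exact wording used there may differ.

```python
# Minimal sketch of the EmotionPrompt idea: append an emotional stimulus phrase
# to an otherwise unchanged instruction. The default phrase follows the style
# reported in the paper; the exact wording used there may differ.
def with_emotion_prompt(instruction: str,
                        stimulus: str = "This is very important to my career.") -> str:
    return f"{instruction} {stimulus}"

prompt = with_emotion_prompt("Classify the sentiment of the following review as positive or negative.")
print(prompt)
```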

Enjoy these papers and news summaries? Get a daily recap in your inbox!

The Learn AI Together Community section!

Weekly AI Podcast

In this week’s episode of What’s AI, Louis Bouchard shares his experience of doing a PhD in AI and why he decided to discontinue it. This episode is particularly relevant for those weighing the choice between pursuing a PhD in AI, working in applied AI, or joining a fast-paced startup. Louis offers insights on when a PhD makes sense, factors to consider when making such a decision, and more. Tune in on YouTube or read the full article to find valuable guiding points for your journey.

Meme of the week!

Meme shared by rucha8062

Featured Community Post from the Discord

Henry has just launched DearFlow for beta testing! It’s an all-in-one platform for you to create and discover AI use cases (chatbots and workflows). It combines the power of FlowGPT with Notion. It allows users to execute complex workflows that chat interfaces like ChatGPT cannot handle. Check it out here and support a fellow community member! Share your thoughts and feedback in the thread here.

AI poll of the week!

Tell us how you boost your productivity with AI tools or leverage them in your current job! Join the discussion on Discord.

TAI Curated section

Article of the week

Top Important LLM Papers for the Week from 23/10 to 29/10 by Youssef Hosni

Large language models (LLMs) have advanced rapidly in recent years. As new generations of models are developed, it’s essential for researchers and engineers to stay informed on the latest progress. This article summarizes some of the most important LLM papers published during the fourth week of October.

Our must-read articles

Is it Possible to Prove the Simulation Hypothesis? by Lee Vaughan

Enhancing The Robustness of Regression Model with Time-Series Analysis - Part 1 by Mirza Anandita

A Complete Guide for Creating an AI Assistant for Summarizing YouTube Videos - Part 2 by Amin Kamali

If you are interested in publishing with Towards AI, check our guidelines and sign up. We will publish your work to our network if it meets our editorial policies and standards.

Job offers

Data Engineer @Pearl Technologies (Remote)

Robotics Software Intern 2024 @Rapyuta Robotics (Japan)

Mobile Engineer, Full Stack (LLM/GenAI) @Mercari, inc. (Remote)

Data Analytics Manager @Humanforce (Sydney, Australia)

Quantitative Developer - Temporary @Twine (Remote)

QA Engineer @CRISP (London, UK)

Python Interns (Mumbai) @Docsumo (Mumbai, India)

Interested in sharing a job opportunity here? Contact [email protected].

If you are preparing for your next machine learning interview, don’t hesitate to check out our leading interview preparation website, Confetti!

https://www.confetti.ai/

Join thousands of data leaders on the AI newsletter. Join over 80,000 subscribers and keep up to date with the latest developments in AI, from research to projects and ideas. If you are building an AI startup, an AI-related product, or a service, we invite you to consider becoming a sponsor.

Published via Towards AI
