

Learn AI Together — Towards AI Community Newsletter #8

Last Updated on January 25, 2024 by Editorial Team

Author(s): Towards AI Editorial Team

Originally published on Towards AI.

Good morning, fellow AI enthusiasts! We are kicking off this year’s podcast with a unique episode! In this one, I hosted my friend Jérémy Cohen, founder of Think Autonomous and an expert in autonomous vehicles. I drafted some “debate” ideas about AI to discuss with him. It was a super fun episode to record, and it was great to chat one-on-one with an expert in such an iconic sub-field of AI. I hope you enjoy it!

We’ve also run a very intriguing poll in the community, which we’d love to get your thoughts on: Can you spot generated content? Under what conditions? Can you still spot it when it has been properly edited? What hints do you look for when deciding that something is AI-generated? On my end, I feel I can spot most generated content, but not all, mainly because of a few specific words GPT tends to overuse that you recognize instantly if you use it a lot (as I do). I use GPT occasionally to generate text, but I edit it heavily so it doesn’t look “generated.” Join the conversation with us on Discord in the polls section!

We are also excited to announce that our AI Tutor chatbot is now available on the GPT store! The chatbot can help with your questions as you learn about topics such as building LLM apps with LangChain and LlamaIndex, training and fine-tuning LLMs, and advanced RAG techniques!

Try it out now, ask it any RAG or LLM-related questions, and let us know your feedback!

Happy reading, and I wish you a great end of the week and weekend!

– Louis-François Bouchard, Towards AI Co-founder & Head of Community

What’s AI Weekly

In this week’s episode of The What’s AI Podcast, Louis Bouchard interviewed Jérémy Cohen from Think Autonomous. This episode explores the practical and ethical layers of autonomous vehicles. The discussion goes beyond just the technical aspects, diving into how AI shapes crucial decisions in transportation and the implications for human involvement and responsibility. The episode also covers AI’s far-reaching impact across diverse sectors like healthcare and finance, examining the evolving world of AI startups and the balance between AI innovations and human skills. Join the discussion on AI, autonomous vehicles, and their transformative effects on society by tuning in on YouTube, Spotify, or Apple Podcasts!

Learn AI Together Community section!

Featured Community post from the Discord

Toni just created their first GitHub project, which studies data parallelism to accelerate the training of transformers, both with multiple GPUs on the same node and across multiple nodes. The repository is a starting point for future projects on training transformers on clusters managed by the Slurm job scheduler. Check it out on GitHub and support a fellow community member. Share your opinions and questions in the Discord thread!
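
If you’re new to the idea, here is a rough sketch of what data-parallel training typically looks like with PyTorch’s DistributedDataParallel. This is our own generic illustration, not code from Toni’s repository, and the exact setup there may differ.

```python
# Minimal data-parallel training sketch (generic illustration, not from the repo).
# Assumes a launcher such as torchrun has set RANK, WORLD_SIZE, and LOCAL_RANK.
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    dist.init_process_group(backend="nccl")        # one process per GPU
    local_rank = int(os.environ["LOCAL_RANK"])
    device = torch.device(f"cuda:{local_rank}")
    torch.cuda.set_device(device)

    model = torch.nn.Linear(512, 512).to(device)   # stand-in for a transformer block
    model = DDP(model, device_ids=[local_rank])    # syncs gradients across all ranks
    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

    for step in range(10):                         # toy training loop on random data
        x = torch.randn(32, 512, device=device)
        loss = model(x).pow(2).mean()
        optimizer.zero_grad()
        loss.backward()                            # gradient all-reduce happens here
        optimizer.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```

On a single node you would launch this with torchrun (e.g., one process per GPU); under Slurm, the scheduler’s variables such as SLURM_PROCID are typically mapped to the environment variables PyTorch expects.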

AI poll of the week!

Do you trust the AI text detection tools currently available online? Share your thoughts on Discord.

Collaboration Opportunities

The Learn AI Together Discord community is flooded with collaboration opportunities. If you are excited to dive into applied AI, want a study partner, or even want to find a partner for your passion project, join the collaboration channel! Keep an eye on this section, too — we share cool opportunities every week!

1. Trevorhunter is working on an LLM project and needs extra hands. They are looking for someone based in the US with deep expertise in LLM/NLP. If you are an experienced individual, connect with them in the thread!

2. yamantal3#0415 is working on developing their deep learning skills and is currently looking for someone to teach them. If you are well-versed in deep learning and enjoy teaching, contact them in the thread!

3. Ritwikraj is seeking a passionate visionary under 17 with skills in web development, app development, game development, AI, and data science. This opportunity is for individuals based in India. If this is relevant to you or you know someone who fits the description, contact them in the thread!

Meme of the week!

Meme shared by rucha8062

TAI Curated section

Article of the week

The Simple Principle Behind Retrieval Augmented Generation in Large Language Models by Krupesh Raikar

ChatGPT is good at tackling general questions but may struggle with specialized queries. Retrieval Augmented Generation (RAG) provides an affordable way to extract insights from specialized documents with AI, without the cost and effort of building a custom model from scratch. Explore how RAG enables easy, secure chats with your own local data on a budget.
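
As a quick, generic illustration of the retrieve-then-generate pattern the article discusses (this is not code from the article), the sketch below retrieves the most relevant snippets with TF-IDF similarity and stuffs them into a prompt for whatever LLM you use:

```python
# Minimal retrieve-then-generate sketch (illustrative only).
# Retrieval uses TF-IDF cosine similarity; a real system would likely use
# embeddings plus a vector store, and send the prompt to an actual LLM.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

documents = [
    "Our refund policy allows returns within 30 days of purchase.",
    "Support is available by email from 9am to 5pm on weekdays.",
    "Premium plans include priority support and extended warranties.",
]

def retrieve(query: str, k: int = 2) -> list[str]:
    vectorizer = TfidfVectorizer().fit(documents + [query])
    doc_vecs = vectorizer.transform(documents)
    query_vec = vectorizer.transform([query])
    scores = cosine_similarity(query_vec, doc_vecs)[0]
    top = scores.argsort()[::-1][:k]          # indices of the k best-matching docs
    return [documents[i] for i in top]

def build_prompt(query: str) -> str:
    context = "\n".join(retrieve(query))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

print(build_prompt("How long do I have to return a product?"))
# The resulting prompt is then sent to the LLM, which answers grounded in the context.
```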

Our must-read articles

1. A Detailed Explanation of the Mixtral 8x7B Model by Florian

Since the end of 2023, Mixtral 8x7B[1] has become a highly popular model in the LLM industry. It has gained this popularity because it outperforms the Llama 2 70B model with fewer total parameters (less than 8x7B, since the non-expert layers are shared) and less computation per token (less than 2x7B, since only two experts are active per token), and it even exceeds the capabilities of GPT-3.5 in certain aspects. This article focuses primarily on the code and includes illustrations to explain the principles behind the Mixtral model.

2. Page by Page Review: Mixtral of Experts (8x7B) by Dr. Mandar Karhade, MD. Ph.D.

Mixtral 8x7B, a new open-source Sparse Mixture of Experts AI model, employs eight expert networks with 7 billion parameters each but activates only two of them per token, keeping inference efficient. The model features Chat and Instruct versions, outdoing models like Llama 2 70B and GPT-3.5 in complex tasks like math, coding, and languages. The Instruct version excels cost-effectively and without IP risks. Explore Mixtral’s mechanics and commercial uses in this comprehensive review to see how it can enhance your projects (a simplified routing sketch follows this list).

3. Gradient Descent and the Melody of Optimization Algorithms by Abhinav Kimothi

Gradient Descent guides AI models to minimize loss during training. The algorithm simplifies model optimization by iteratively adjusting parameters in the direction that reduces the loss function. It is vital for AI training and key to developing sound machine learning models. Discover how practical analogies can clarify AI concepts, and enhance your grasp of Gradient Descent with the ‘Melody of Optimization Algorithms’ (a tiny worked example also appears after this list).
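
As a companion to the two Mixtral pieces above, here is a deliberately simplified sparse mixture-of-experts layer. It is a generic sketch, not Mixtral’s actual implementation, showing how a gate scores eight experts and mixes the outputs of the top two per token:

```python
# Simplified sparse mixture-of-experts layer (generic illustration, not Mixtral's code).
# A gating network scores all experts, keeps the top 2 per token, and returns
# a softmax-weighted mix of just those two experts' outputs.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SparseMoE(nn.Module):
    def __init__(self, dim=64, hidden=256, n_experts=8, top_k=2):
        super().__init__()
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, hidden), nn.SiLU(), nn.Linear(hidden, dim))
            for _ in range(n_experts)
        )
        self.gate = nn.Linear(dim, n_experts)
        self.top_k = top_k

    def forward(self, x):                      # x: (tokens, dim)
        logits = self.gate(x)                  # (tokens, n_experts)
        weights, idx = logits.topk(self.top_k, dim=-1)
        weights = F.softmax(weights, dim=-1)   # renormalize over the chosen experts
        out = torch.zeros_like(x)
        for slot in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = idx[:, slot] == e       # tokens routed to expert e in this slot
                if mask.any():
                    out[mask] += weights[mask, slot, None] * expert(x[mask])
        return out

layer = SparseMoE()
print(layer(torch.randn(4, 64)).shape)   # torch.Size([4, 64])
```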
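
And for the gradient descent article, a tiny self-contained toy (our own illustration, not from the article) of the core update rule on a one-dimensional quadratic loss:

```python
# Toy gradient descent: minimize L(w) = (w - 3)^2, whose gradient is 2 * (w - 3).
# Each step moves w a small amount against the gradient, reducing the loss.
w = 0.0          # initial parameter
lr = 0.1         # learning rate
for step in range(50):
    grad = 2 * (w - 3)    # gradient of the loss at the current w
    w -= lr * grad        # step in the direction that reduces the loss
print(round(w, 4))        # ~3.0, the minimizer of the loss
```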

If you want to publish with Towards AI, check our guidelines and sign up. We will publish your work to our network if it meets our editorial policies and standards.

Think a friend would enjoy this too? Share the newsletter and let them join the conversation.

Join over 80,000 subscribers and thousands of data leaders on the AI newsletter, and keep up to date with the latest developments in AI, from research to projects and ideas. If you are building an AI startup, an AI-related product, or a service, we invite you to consider becoming a sponsor.

Published via Towards AI
