

Learn to Build — Towards AI Community Newsletter #2
Last Updated on November 24, 2023 by Editorial Team

Author(s): Towards AI Editorial Team

Originally published on Towards AI.

What a weekend and week in AI…

You missed out if you haven’t followed the OpenAI drama over the past few days. Something unbelievable happened. Most employees signed a letter to OpenAI’s board of directors demanding that Sam Altman (OpenAI’s CEO), who had been fired the previous Friday, be reinstated, or they would leave with him. Have you ever heard of such a story? Talk about a community. Sam Altman, Greg Brockman, and OpenAI built something as powerful as a family.

This is the power of communities when done right. They can defeat dominant entities, even if smaller or less powerful. And this is also why I am incredibly proud to be part of this community with amazing people in the AI space.

P.S. If this saga interests you and you didn’t follow it online, read more in our last newsletter iteration.

Now, let’s get into this week’s news from our own family and what we’ve all been up to, starting with an amazing conversation with the CEO and a friend of mine at MindStudio about the future of AI, LLMs development, usage, and more…

– Louis-François Bouchard, Towards AI Co-founder & Head of Community

What’s AI Weekly

In this week’s What’s AI Podcast episode, Louis Bouchard interviewed Dmitry Shapiro, who previously worked at Google and was the CTO of MySpace Music. Now, Dmitry is building something super ambitious to democratize artificial intelligence. In this episode, they discuss his platforms, YouAI and MindStudio, along with some applicable tips for creating better AI apps, such as model selection, prompting, RAG, and more. If you are curious about the user perspective, democratizing AI, and the future of AI, tune in on YouTube, Spotify, and Apple Podcasts!

Learn AI Together Community section!

Featured Community post from the Discord

Mateomd_dev built something helpful for us all! Code Project Helper assists with learning a programming language by recommending projects tailored to that language’s unique strengths and use cases. If you provide the name of a programming language you are interested in learning, the GPT will give you a short description of the language, examples, and project ideas. Try it out here and support a fellow community member. Share your experience and feedback in the thread!

AI poll of the week!

Since most of you voted to understand ‘What is RAG?’, here’s a quick explanation:

RAG, or retrieval-augmented generation, is an architecture that efficiently works around the LLM input-length limit and the knowledge-cutoff problem. In simple terms, RAG is an AI framework that enhances the quality of LLM output by incorporating external knowledge sources. This also has the benefit of reducing model hallucinations. If you have a knowledge base (in other words, a text dataset), you should be using RAG!
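To make the idea concrete, here is a minimal, toy RAG sketch in plain Python: it retrieves the most relevant snippet from a small knowledge base using word-count cosine similarity (a stand-in for real embeddings and a vector database), then prepends it to the prompt. The `KNOWLEDGE_BASE` contents and the final LLM call are illustrative assumptions, not part of any specific framework.

```python
# Toy RAG sketch: retrieve the most relevant document, then augment the prompt.
# In a real system you would use embeddings + a vector store instead of
# bag-of-words cosine similarity, and send the prompt to an actual LLM API.
import math
from collections import Counter

KNOWLEDGE_BASE = [
    "RAG augments an LLM prompt with retrieved external documents.",
    "Knowledge cutoffs mean models are unaware of recent events.",
    "Vector databases store embeddings for fast similarity search.",
]

def similarity(a: str, b: str) -> float:
    """Cosine similarity over word counts (a toy stand-in for embeddings)."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[w] * vb[w] for w in va)
    norm = (math.sqrt(sum(c * c for c in va.values()))
            * math.sqrt(sum(c * c for c in vb.values())))
    return dot / norm if norm else 0.0

def retrieve(query: str) -> str:
    """Return the knowledge-base entry most similar to the query."""
    return max(KNOWLEDGE_BASE, key=lambda doc: similarity(query, doc))

def build_prompt(query: str) -> str:
    """Augment the user's question with retrieved context before the LLM call."""
    context = retrieve(query)
    return f"Answer using this context:\n{context}\n\nQuestion: {query}"

print(build_prompt("What does RAG add to an LLM prompt?"))
```

The key point is the shape of the pipeline: retrieve, then generate. Swapping in real embeddings and a hosted LLM changes the components, not the structure.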

Share your questions or insights for the community in the thread!

Collaboration Opportunities

The Learn AI Together Discord community is flooding with collaboration opportunities. If you are excited to dive into applied AI, want a study partner, or even want to find a partner for your passion project, join the collaboration channel! Keep an eye on this section, too — we share cool opportunities every week!

  1. Louis Bouchard shared a great opportunity to work at Towards AI as an AI Technical Writer and Developer for LLMs. We are looking for people in the field to help us build technical and applied learning resources. This is a paid opportunity, mostly involving LLM-based content creation and the writing of tutorials and course projects. Find more information in the thread here.
  2. Cozzy implemented Lunar Lander with DQN and DDQN and is now looking to add Dueling DQN and compare the three. They are currently searching for contributors to the repository. If you are interested, connect with them in the thread.
  3. Adhishta is looking for a study partner to develop skills such as data cleaning, EDA, visualization, and more. If you are enthusiastic about learning and exploring various datasets, reach out in the thread here.

Meme of the week!

Meme shared by ghost_in_the_machine

TAI Curated section

Article of the week

LangChain Cheatsheet — All Secrets on a Single Page by Ivan Reznikov

LangChain allows for the easy setup of text generation models with performance-boosting methods like quantization and CUDA. Want to dive deeper into optimizing language model performance? The author has created a one-pager summary of the basics of LangChain. In this article, he goes through sections of code and describes the starter package you need to ace LangChain.

Our must-read articles

  1. Inside Ghostbuster: Berkeley University’s New Method for Detecting AI-Generated Content by Jesus Rodriguez

Berkeley’s Ghostbuster is a game-changer for detecting AI text, discerning between human-written and machine-written content without knowing which AI model was used. Ghostbuster transforms words into vectors with language models to train a classifier that distinguishes human from AI text, achieving a 99.0 F1 score across domains.

2. OpenAI API Dev Day Upgrades Are Mindblowing: Here’s DALL-E 3, GPT-4 Vision, and GPT-4 Turbo in Action by Kris Ograbek

OpenAI’s new API developments enhance AI capabilities, offering advanced vision and creativity through upgrades like DALL-E 3, GPT-4 Vision, and GPT-4 Turbo for improved AI applications. For those ready to get their hands dirty, the article provides updated code examples so you can test-drive these new features yourself.

3. Speak Only About What You Have Read: Can LLMs Generalize Beyond Their Pretraining Data? by Salvatore Raieli

Large Language Models’ in-context learning lets them respond intelligently to provided examples. Researchers explore the extent and origin of this ability and its impact on artificial general intelligence. Explore Large Language Models’ in-context learning to grasp their future in AI, offering insights into the evolving tech landscape.

If you want to publish with Towards AI, check our guidelines and sign up. We will publish your work to our network if it meets our editorial policies and standards.

Think a friend would enjoy this too? Share the newsletter and let them join the conversation.

Join thousands of data leaders on the AI newsletter. Join over 80,000 subscribers and keep up to date with the latest developments in AI, from research to projects and ideas. If you are building an AI startup, an AI-related product, or a service, we invite you to consider becoming a sponsor.

Published via Towards AI
