#47 Building a NotebookLM Clone, Time Series Clustering, Instruction Tuning, and More!
Author(s): Towards AI Editorial Team Originally published on Towards AI. Good morning, AI enthusiasts! As we wrap up October, we’ve compiled a bunch of diverse resources for you — from the latest developments in generative AI to tips for fine-tuning your LLM …
TAI #123: Strong Upgrade to Anthropic’s Sonnet and Haiku 3.5, but Where’s Opus?
Author(s): Towards AI Editorial Team Originally published on Towards AI. What happened this week in AI by Louie This week, Anthropic released significant upgrades to its Claude model family with an improved Sonnet 3.5, the first smaller Haiku model in the 3.5 …
#46 Why Can’t We Just Remove All Bias in AI?
Author(s): Towards AI Editorial Team Originally published on Towards AI. Good morning, AI enthusiasts! An interesting ongoing discussion in the community is around bias in AI. Currently, we are very close to releasing the most comprehensive practical LLM Python developer course out …
TAI #122: LLMs for Enterprise Tasks; Agent Builders or Fully Custom Pipelines?
Author(s): Towards AI Editorial Team Originally published on Towards AI. What happened this week in AI by Louie This week, the focus on customizing LLMs for enterprises gained further momentum with Microsoft’s announcement of Copilot Studio agents, following Salesforce’s launch of AgentForce …
#45 Is Prompting a Future-Proof Skill?
Author(s): Towards AI Editorial Team Originally published on Towards AI. Good morning, AI enthusiasts! Over the past few months, we have discussed the AI engineer’s toolkit for building reliable LLM products multiple times. We believe that combining RAG, prompting, and fine-tuning will …
TAI #121: Is This the Beginning of AI Starting To Sweep the Nobel Prizes?
Author(s): Towards AI Editorial Team Originally published on Towards AI. What happened this week in AI by Louie This week, our attention was on the Nobel Prizes, where two prizes were awarded explicitly for AI research for the first time. The Physics …
#44 Why is Model Distillation the Hottest Trend in AI Right Now?
Author(s): Towards AI Editorial Team Originally published on Towards AI. Good morning, AI enthusiasts! This week, we discuss the hottest trend in AI: Model Distillation, along with some interesting articles on RAG, Llama 3.2, and Bayesian methods. What’s AI Weekly This week, …
Building LLMs for Production Gets a Massive Update!
Author(s): Towards AI Editorial Team Originally published on Towards AI. We are excited to announce the new and improved version of Building LLMs for Production. The latest version of the book offers an improved structure, fresher insights, more up-to-date information, and optimized …
TAI #120: OpenAI DevDay in Focus!
Author(s): Towards AI Editorial Team Originally published on Towards AI. What happened this week in AI by Louie OpenAI’s 2024 DevDay event came amidst a backdrop of significant changes within the company, including executive departures and new fundraising efforts. Despite the turbulence, …
#43 MemoRAG, RAG Agent, RAG Fusion, and more!
Author(s): Towards AI Editorial Team Originally published on Towards AI. Good morning, AI enthusiasts! This week, we are diving into different RAG approaches, programming tips, community discussions, and some fun collaboration opportunities. Dive in and enjoy the read! What’s AI Weekly This …
TAI #119: New LLM audio capabilities with NotebookLM and ChatGPT Advanced Voice
Author(s): Towards AI Editorial Team Originally published on Towards AI. What happened this week in AI by Louie This week, we focused on new voice capabilities for LLMs, with the audio features of Google’s recently released NotebookLM and OpenAI’s move to roll out …
#42 Teaching AI to “Think”, Fine-Tuning to SQL, Encoder-only models, and more!
Author(s): Towards AI Editorial Team Originally published on Towards AI. Good morning, AI enthusiasts! This is another resource-heavy issue with articles focusing on everything from early AI architectures to the latest developments in AI reasoning abilities. Enjoy the read! What’s AI Weekly …
A Practical Approach to Using Web Data for AI and LLMs
Author(s): Towards AI Editorial Team Originally published on Towards AI. As businesses and researchers work to advance AI models and LLMs, the demand for high-quality, diverse, and ethically sourced web data is growing rapidly. If you’re working on AI applications or building …
TAI #118: Open-source LLM progress with Qwen 2.5 and Pixtral 12B
Author(s): Towards AI Editorial Team Originally published on Towards AI. What happened this week in AI by Louie This week, several new strong open-source LLM models were released. Following OpenAI’s huge LLM progress with its o1 “reasoning” model family last week, it …
#41 OpenAI’s “innovation,” LLM Quantization, Feature Selection, and more!
Author(s): Towards AI Editorial Team Originally published on Towards AI. Good morning, AI enthusiasts! This week, we are sharing a wide range of resources covering recent developments in the AI landscape. Today’s articles cover everything from the speed issues with OpenAI’s new model …