Why There's No Better Time to Learn LLM Development
Author(s): Towards AI Editorial Team Originally published on Towards AI. LLMs are already beginning to deliver significant efficiency savings and productivity boosts in the workflows of early adopters. However, a large amount of work is still needed to unlock the potential …
#47 Building a NotebookLM Clone, Time Series Clustering, Instruction Tuning, and More!
Author(s): Towards AI Editorial Team Originally published on Towards AI. Good morning, AI enthusiasts! As we wrap up October, we've compiled a bunch of diverse resources for you, from the latest developments in generative AI to tips for fine-tuning your LLM …
TAI #123: Strong Upgrade to Anthropic's Sonnet and Haiku 3.5, but Where's Opus?
Author(s): Towards AI Editorial Team Originally published on Towards AI. What happened this week in AI by Louie This week, Anthropic released significant upgrades to its Claude model family with an improved Sonnet 3.5, the first smaller Haiku model in the 3.5 …
#46 Why Can't We Just Remove All Bias in AI?
Author(s): Towards AI Editorial Team Originally published on Towards AI. Good morning, AI enthusiasts! An interesting ongoing discussion in the community is around bias in AI. Currently, we are very close to releasing the most comprehensive practical LLM Python developer course out …
TAI #122: LLMs for Enterprise Tasks; Agent Builders or Fully Custom Pipelines?
Author(s): Towards AI Editorial Team Originally published on Towards AI. What happened this week in AI by Louie This week, the focus on customizing LLMs for enterprises gained further momentum with Microsoft's announcement of Copilot Studio agents, following Salesforce's launch of AgentForce …
#45 Is Prompting a Future-Proof Skill?
Author(s): Towards AI Editorial Team Originally published on Towards AI. Good morning, AI enthusiasts! Over the past few months, we have discussed the AI engineer's toolkit for building reliable LLM products multiple times. We believe that combining RAG, prompting, and fine-tuning will …
TAI #121: Is This the Beginning of AI Starting To Sweep the Nobel Prizes?
Author(s): Towards AI Editorial Team Originally published on Towards AI. What happened this week in AI by Louie This week, our attention was on the Nobel Prizes, where two prizes were awarded explicitly for AI research for the first time. The Physics …
#44 Why is Model Distillation the Hottest Trend in AI Right Now?
Author(s): Towards AI Editorial Team Originally published on Towards AI. Good morning, AI enthusiasts! This week, we discuss the hottest trend in AI: Model Distillation, along with some interesting articles on RAG, Llama 3.2, and Bayesian methods. What's AI Weekly This week, …
Building LLMs for Production Gets a Massive Update!
Author(s): Towards AI Editorial Team Originally published on Towards AI. We are excited to announce the new and improved version of Building LLMs for Production. The latest version of the book offers an improved structure, fresher insights, more up-to-date information, and optimized …
TAI #120: OpenAI DevDay in Focus!
Author(s): Towards AI Editorial Team Originally published on Towards AI. What happened this week in AI by Louie OpenAI's 2024 DevDay event came against a backdrop of significant changes within the company, including executive departures and new fundraising efforts. Despite the turbulence, …
#43 MemoRAG, RAG Agent, RAG Fusion, and more!
Author(s): Towards AI Editorial Team Originally published on Towards AI. Good morning, AI enthusiasts! This week, we are diving into different RAG approaches, programming tips, community discussions, and some fun collaboration opportunities. Dive in and enjoy the read! What's AI Weekly This …
TAI #119: New LLM Audio Capabilities with NotebookLM and ChatGPT Advanced Voice
Author(s): Towards AI Editorial Team Originally published on Towards AI. What happened this week in AI by Louie This week, we were focused on new voice capabilities for LLMs, with the audio features of Google's recently released NotebookLM and OpenAI's move to roll out …
#42 Teaching AI to "Think", Fine-Tuning to SQL, Encoder-Only Models, and More!
Author(s): Towards AI Editorial Team Originally published on Towards AI. Good morning, AI enthusiasts! This is another resource-heavy issue with articles focusing on everything from early AI architectures to the latest developments in AI reasoning abilities. Enjoy the read! What's AI Weekly …
A Practical Approach to Using Web Data for AI and LLMs
Author(s): Towards AI Editorial Team Originally published on Towards AI. As businesses and researchers work to advance AI models and LLMs, the demand for high-quality, diverse, and ethically sourced web data is growing rapidly. If you're working on AI applications or building …
TAI #118: Open-Source LLMs Progress with Qwen 2.5 and Pixtral 12B
Author(s): Towards AI Editorial Team Originally published on Towards AI. What happened this week in AI by Louie This week, several strong new open-source LLMs were released. Following OpenAI's huge LLM progress with its o1 "reasoning" model family last week, it …