#64 Here's how you keep up with AI!
Author(s): Towards AI Editorial Team Originally published on Towards AI. Good morning, AI enthusiasts! This week, we're diving into a challenge many of us face: keeping up with the rapid pace of AI and answering some extremely thought-provoking questions, such as: Is …
TAI #141: Claude 3.7 Sonnet; Software Dev Focus in Anthropic's First Thinking Model
Author(s): Towards AI Editorial Team Originally published on Towards AI. What happened this week in AI by Louie Anthropic's Claude 3.7 Sonnet reasoning model stole the show this week. This is partly due to how quickly you can test and see the …
#63: Full of Frameworks: APDTFlow, NSGM, MLFlow, and more!
Author(s): Towards AI Editorial Team Originally published on Towards AI. Good morning, AI enthusiasts! This week, we are introducing new frameworks through hands-on guides such as APDTFlow (addresses challenges with time series forecasting), NSGM (addresses variable selection and time-series network modeling), and …
#62 Will AI Take Your Job?
Author(s): Towards AI Editorial Team Originally published on Towards AI. Good morning, AI enthusiasts! Yet another week, and reasoning models and DeepSeek are still the most talked about in AI. We are joining the bandwagon with this week's resources focusing on whether …
TAI #139: LLM Adoption; Anthropic Measures Use Cases. OpenAI API Traffic up 7x in 2024
Author(s): Towards AI Editorial Team Originally published on Towards AI. What happened this week in AI by Louie This week, Google DeepMind expanded access to Gemini 2.0, OpenAI increased transparency in ChatGPT's reasoning and thinking steps, and Mistral launched its rapid AI …
#61: Are LLMs Entering the Age of Agents?
Author(s): Towards AI Editorial Team Originally published on Towards AI. Good morning, AI enthusiasts! Reasoning agents seem to have taken over AI in the last couple of weeks. While it is early, this class of reasoning-powered agents is likely to progress LLM …
TAI #138: OpenAI's o3-Mini and Deep Research: A New Era of Reasoning Powered Agents?
Author(s): Towards AI Editorial Team Originally published on Towards AI. What happened this week in AI by Louie We realize that we have been alternating between OpenAI and DeepSeek-focused discussions recently, but this is with good reason, given some very impressive models …
#60: DeepSeek, CAG, and the Future of AI Reasoning
Author(s): Towards AI Editorial Team Originally published on Towards AI. Good morning, AI enthusiasts! The last two weeks in AI have been all about DeepSeek-R1. So this week's issue includes resources and discussions on that, along with emerging techniques such as CAG, …
TAI #137: DeepSeek r1 Ignites Debate: Efficiency vs. Scale and China vs. US in the AI Race
Author(s): Towards AI Editorial Team Originally published on Towards AI. What happened this week in AI by Louie This week's AI discourse centered on DeepSeek's r1 release, which sparked a heated debate about its implications for OpenAI, GPUs, and the broader industry. …
#59: The Agentic AI Era, Smolagents, and a "Gatekeeper" Agent Prototype
Author(s): Towards AI Editorial Team Originally published on Towards AI. Good morning, AI enthusiasts! As you already know, we recently launched our 8-hour Generative AI Primer course, a programming language-agnostic 1-day LLM Bootcamp designed for developers like you. We also have a …
Our NEW 8-Hour AI Crash Course for Developers!
Author(s): Towards AI Editorial Team Originally published on Towards AI. Good morning, AI enthusiasts! I'm sharing a special issue this week to talk about our newest offering, the 8-hour Generative AI Primer course, a programming language-agnostic 1-day LLM Bootcamp designed for developers …
TAI #136: DeepSeek-R1 Challenges OpenAI-o1 With ~30x Cheaper Open-Source Reasoning Model
Author(s): Towards AI Editorial Team Originally published on Towards AI. What happened this week in AI by Louie This week, the LLM race was blown wide open with DeepSeek's open-source release of R1. Performance is close to o1 in most benchmarks. Built …
TAI #135: Introducing the 8-Hour Generative AI Primer
Author(s): Towards AI Editorial Team Originally published on Towards AI. 95% of developers we meet are only scratching the surface of what LLMs can do. When working with LLMs, you are CONSTANTLY making decisions such as open-source vs. closed-source, how to fit …
Why Most Developers Miss the True Potential of LLMs
Author(s): Towards AI Editorial Team Originally published on Towards AI. The 8-Hour Generative AI Primer shows you how to ask the right questions, avoid common mistakes, and build AI prototypes in one day Building LLM-powered applications and workflows lends itself to "modular" …
#58 Can We Use One Big Model To Train Smaller Models?
Author(s): Towards AI Editorial Team Originally published on Towards AI. Good morning, AI enthusiasts! This week, we explore LLM optimization techniques that can make building LLMs from scratch more accessible with limited resources. We also discuss building agents, image analysis, large concept …