LAI #101: Designing Memory, Building Agents, and the Rise of Multimodal AI
Author(s): Towards AI Editorial Team. Originally published on Towards AI. Good morning, AI enthusiasts, This week, we explore how AI systems are becoming more structured, contextual, and multimodal. We examine how vision-language models like GPT-4o and Qwen 2.5 VL are redefining what …
TAI #178: Kimi K2 Thinking Steals the Open-Source Crown With a New Agentic Contender
Author(s): Towards AI Editorial Team. Originally published on Towards AI. What happened this week in AI by Louie: The AI playing field was reshaped yet again this week with the release of Kimi K2 Thinking from Moonshot AI. This release feels like …
TAI #176: DeepSeek’s Optical Compression: A Cheaper OCR or a New Path for LLMs?
Author(s): Towards AI Editorial Team. Originally published on Towards AI. What happened this week in AI by Louie: DeepSeek has been relatively quiet this year after a series of huge innovations in 2024 culminated in it breaking into mainstream awareness in early …
Fast vs. Slow: How (and When) to Make Models Think
Author(s): Towards AI Editorial Team. Originally published on Towards AI. Our AI team attended COLM 2025 this year. In this piece, François Huppé-Marcoux, one of our AI engineers, shares the “aha” moment that reshaped his view of reasoning in LLMs. We talk …
LAI #97: Claude 4.5 Benchmarks, Function-Calling Fine-Tunes, and the Future of Model Alignment
Author(s): Towards AI Editorial Team. Originally published on Towards AI. Good morning, AI enthusiasts, This week’s issue dives deep into how models are evolving across capability, specialization, and alignment. We examine Claude Sonnet 4.5, how it outperforms GPT-5 (Codex) and Gemini 2.5 …
TAI #174: Gemini 2.5 Computer Use Hits SotA but Not Yet an Unlock for Production Agents
Author(s): Towards AI Editorial Team. Originally published on Towards AI. What happened this week in AI by Louie: After a frenetic period of product announcements, this week felt much slower on the release front. Following OpenAI’s DevDay deluge, Google offered its response …
LAI #96: From Building LLMs by Hand to Smarter Agent Patterns
Author(s): Towards AI Editorial Team. Originally published on Towards AI. Good morning, AI enthusiasts! AI isn’t just about bigger models; it’s about building smarter, more trustworthy systems. This week, we start with the fundamentals: a step-by-step guide to creating an LLM from …
TAI #173: OpenAI’s DevDay Deluge: Sora 2, AgentKit, and an App Store Reboot
Author(s): Towards AI Editorial Team. Originally published on Towards AI. What happened this week in AI by Louie: This week was dominated by a deluge of product releases from OpenAI, culminating in its 2025 DevDay. The first big splash came with the …
October Cohort Kicks Off on 5th October — 2 Days Left
Author(s): Towards AI Editorial Team. Originally published on Towards AI. Enroll today to unlock October’s live kick-off, updated courses, and hands-on projects. The October cohort kicks off on 5th October (in less than 48 hours). If you’ve been waiting for the right …
LAI #95: Fine-Tuning RAG, Smarter Agents, and Tackling GPU Bottlenecks
Author(s): Towards AI Editorial Team. Originally published on Towards AI. Good morning, AI enthusiasts, This week’s focus is on making AI systems more efficient and reliable, starting with the question of fine-tuning in RAG pipelines. When does it actually improve retrieval and …
TAI #172: OpenAI’s GDPval Shows AI Nearing Expert Parity on Real-World Work
Author(s): Towards AI Editorial Team. Originally published on Towards AI. What happened this week in AI by Louie: This week feels like a follow-up to last week’s discussion, as our attention was again drawn to both OpenAI’s escalating energy ambitions and additional …
LAI #94: Deep Learning Myths, Multi-Agent Frameworks, and Synthetic Data in Practice
Author(s): Towards AI Editorial Team. Originally published on Towards AI. Good morning, AI enthusiasts, This week, we take a closer look at what deep learning really is — and what it isn’t. Rather than true intelligence, it’s better thought of as sophisticated …
TAI #171: How is AI Actually Being Used? Frontier Ambitions Meet Real-World Adoption Data
Author(s): Towards AI Editorial Team. Originally published on Towards AI. What happened this week in AI by Louie: This week, AI models continued to push the frontiers of capability, with both OpenAI and DeepMind achieving gold-medal-level results at the 2025 ICPC World …
LAI #93: Smarter Model Choices, Multi-Agent Systems, and Cutting Through AI Noise
Author(s): Towards AI Editorial Team. Originally published on Towards AI. Good morning, AI enthusiasts, Choosing the right model is becoming just as important as knowing how to prompt it. In What’s AI, I explain why treating LLMs as interchangeable is a mistake …
TAI #170: Why Are So Many of the AI Success Stories AI for Coding?
Author(s): Towards AI Editorial Team. Originally published on Towards AI. What happened this week in AI by Louie: This week, OpenAI released a significant upgrade to its coding platform with GPT-5-Codex, a specialized version of GPT-5 fine-tuned for agentic software engineering. This …