TAI #194: AI Goes Macro; Job Loss Fears, Military Usage, OpenAI $110B Raise
Author(s): Towards AI Editorial Team Originally published on Towards AI. What happened this week in AI by Louie This week brought a series of developments that signal AI is quickly becoming more than just a technology story: AI’s revenue, its politics, and …
LAI #115: The Hidden Cost of “Agent-First” Thinking
Good morning, AI enthusiasts! AI is getting embedded into real workflows: repos, data platforms, enterprise search, and production infrastructure. And as that happens, a pattern is showing up everywhere: the biggest failures …
TAI #192: AI Enters the Scientific Discovery Loop
What happened this week in AI by Louie This week, LLMs crossed from tools into participants in scientific discovery. OpenAI released a preprint, “Single-minus gluon tree amplitudes are nonzero,” in which GPT-5.2 …
TAI #191: Opus 4.6 and Codex 5.3 Ship Minutes Apart as the Long-Horizon Agent Race Goes Vertical
What happened this week in AI by Louie On February 5th, Anthropic and OpenAI released Claude Opus 4.6 and GPT-5.3-Codex, respectively, within minutes of each other. Both are point releases, but both …
LAI #113: The Engineering Work That Decides Whether AI Holds Up
Good morning, AI enthusiasts, Shipping AI in 2026 is about operational discipline: catching data drift before users do, keeping inference fast as workloads grow, choosing architectures that survive real traffic, and understanding …
LAI #110: Fixing Context Rot and Rethinking How Agents Reason
Good morning, AI enthusiasts, This week, we’re looking at why agent systems drift, confuse themselves, or quietly break when tasks get long. I unpack the real cause of “random” agent degradation: context …
TAI #187: OpenAI’s Health Push and the Real State of LLMs in Medicine
What happened this week in AI by Louie OpenAI made its biggest healthcare push this week with two launches: ChatGPT Health for consumers and OpenAI for Healthcare for enterprises. The consumer product …
LAI #108: Building What Lasts in the Year Ahead
Good morning, AI enthusiasts, and happy new year 🎉 This is the first issue of the year, and it feels like a good moment to reset expectations and direction. We’re starting 2026 …
TAI #181: DeepSeek’s V3.2 “Speciale” Delivery Challenges the US Frontier
What happened this week in AI by Louie If the last few weeks were defined by the sheer scale of the US tech giants, with Google’s Gemini 3.0 claiming the throne and …
TAI #180: DeepMind Pulling Ahead in the AI Race with Gemini 3.0 Pro and Nano Banana Pro?
What happened this week in AI by Louie This week, DeepMind finally released the much-anticipated Gemini 3.0 Pro, which sailed into the lead on multiple measures. There is much discussion about whether …
LAI #101: Designing Memory, Building Agents, and the Rise of Multimodal AI
Good morning, AI enthusiasts, This week, we explore how AI systems are becoming more structured, contextual, and multimodal. We examine how vision-language models like GPT-4o and Qwen 2.5 VL are redefining what …
TAI #178: Kimi K2 Thinking Steals the Open-Source Crown With a New Agentic Contender
What happened this week in AI by Louie The AI playing field was reshaped yet again this week with the release of Kimi K2 Thinking from Moonshot AI. This release feels like …
TAI #176: DeepSeek’s Optical Compression: A Cheaper OCR or a New Path for LLMs?
What happened this week in AI by Louie DeepSeek has been relatively quiet this year after a series of huge innovations in 2024 culminated in it breaking into mainstream awareness in early …
Fast vs. Slow: How (and When) to Make Models Think
Our AI team attended COLM 2025 this year. In this piece, François Huppé-Marcoux, one of our AI engineers, shares the “aha” moment that reshaped his view of reasoning in LLMs. We talk …
LAI #97: Claude 4.5 Benchmarks, Function-Calling Fine-Tunes, and the Future of Model Alignment
Good morning, AI enthusiasts, This week’s issue dives deep into how models are evolving across capability, specialization, and alignment. We examine Claude Sonnet 4.5, how it outperforms GPT-5 (Codex) and Gemini 2.5 …