Fine-Tuning Open-Source LLMs for Text-to-SQL: Project Overview and Motivations (article 1 of 3)
Author(s): Lorentz Yeung Originally published on Towards AI. OpenAI’s GPT-4 Mini as a benchmark for this project. Photo by Growtika on Unsplash In the rapidly evolving world of AI, transforming natural language questions into executable SQL queries — known as text-to-SQL — …
Fine-Tuning Open-Source LLMs for Text-to-SQL: Setting Up a Machine for Fine-Tuning LLMs on WSL2 (article 2 of 3)
Author(s): Lorentz Yeung Originally published on Towards AI. Meta’s Llama and Alibaba’s Qwen are both fine-tuned to their limits. Photo by Gabriel Istvan on Unsplash If you want to review Part 1 (Project Overview and Motivations), please click here: Fine-Tuning Open-Source …
TAI #161: Grok 4’s Benchmark Dominance vs. METR’s Sobering Reality Check on AI for Code
Author(s): Towards AI Editorial Team Originally published on Towards AI. What happened this week in AI by Louie It was a very eventful week in AI, with xAI dominating the headlines for mixed reasons. On one hand, the release of Grok …
NLQ-to-SQL Evaluation: The Metrics That Matter
Author(s): Tiyasa Mukherjee Originally published on Towards AI. This article explores how a typical Natural Language to SQL (NLQ-to-SQL) pipeline works, why its evaluation is critical, and introduces key metrics — including LLM-based, rule-based, and mathematical approaches — to measure its accuracy …
Prediction, Generation, or Inference? Matching Your Goal to the Right Data Tool
Author(s): Bushra Anjum, Ph.D. Originally published on Towards AI. Large Language Models (LLMs) are making headlines every day. At the same time, traditional machine learning (ML) and statistical methods are firmly holding their ground and continue to be used widely. So, which …
Embrace AI, Optimize Later
Author(s): Suyash Damle Originally published on Towards AI. As technologies gain traction, the ecosystem responds and targeted optimization tools emerge. Enabling widespread experimentation and lowering barriers to entry is more critical for accelerating progress than achieving peak performance from day one. Recent …
Initialization, BatchNorm, and LayerNorm: Beyond textbook definitions
Author(s): Adam Elimadi Originally published on Towards AI. The Holy Trilogy There are a ton of blog posts out there breaking down both initialization and normalization. However, I feel like most authors fail to put themselves in the apprentice’s shoes, especially those that …
Re-imagining Bridging with AI Assistants: From Dashboards to Dialogue
Author(s): Ramkumar K Originally published on Towards AI. Photo by Алекс Арцибашев on Unsplash Understanding the ‘why’ behind business shifts and strategic deviations is a recurring challenge for teams focused on performance and planning. This pursuit, often called “bridging analysis” or “variance …
The Harsh Reality of AI Startup Funding: Only 23% Survive the Series A Transition
Author(s): Jitesh Prasad Gurav Originally published on Towards AI. Building an AI startup has never been more challenging. Recent research examining nearly 1,000 generative artificial intelligence companies reveals that only 22.6% successfully transition from seed to Series A funding rounds. This statistic …
LLMs Are Not Stochastic Parrots: How Randomness Prevents Parroting, Not Causes It
Author(s): Antares Originally published on Towards AI. The term “stochastic parrots” — coined by Emily M. Bender, Timnit Gebru, Angelina McMillan-Major, and Margaret Mitchell in their 2021 FAccT paper “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?” — …