LLMs as Judges: Practical Problems and How to Avoid Them
Last Updated on September 4, 2025 by Editorial Team
Author(s): Katherine Munro
Originally published on Towards AI.
Concrete advice for teams building LLM-powered evaluations
My last post was all about conceptual problems with using Large Language Models to judge other LLMs.

The article examines the practical challenges of using Large Language Models (LLMs) as judges: non-determinism in both the models being evaluated and the evaluators themselves, prompting errors, and the biases inherent in LLMs. It emphasizes the importance of human oversight and the difficulty of assessing LLM outputs accurately, and it argues for comprehensive evaluation metrics to keep assessments reliable, while cautioning against over-reliance on automated evaluation.
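To make the non-determinism and bias points concrete, here is a minimal sketch of one common mitigation: repeating the judge call several times and majority-voting, while swapping the answer order on alternate runs to probe position bias. This is an illustration, not the article's method; `judge_once` is a hypothetical placeholder you would replace with your actual LLM API call.

```python
from collections import Counter

def judge_once(prompt: str, answer_a: str, answer_b: str) -> str:
    """Hypothetical pairwise judge: asks an LLM which answer is better.

    Stubbed here so the sketch runs standalone; in practice this would
    wrap your LLM API request and parse the verdict ('A' or 'B').
    """
    return "A"  # stub verdict

def judge_with_repeats(prompt: str, answer_a: str, answer_b: str, n_runs: int = 5):
    """Mitigate judge non-determinism and position bias.

    Repeats the judgement n_runs times, swapping the answer order on
    alternate runs, then majority-votes. Low agreement flags unstable
    verdicts that deserve human review.
    """
    votes = []
    for i in range(n_runs):
        if i % 2 == 0:
            verdict = judge_once(prompt, answer_a, answer_b)
        else:
            # Swapped order: answer_b sits in the first slot, so a 'B'
            # verdict from the judge actually means answer_a won.
            verdict = "A" if judge_once(prompt, answer_b, answer_a) == "B" else "B"
        votes.append(verdict)
    winner, n_votes = Counter(votes).most_common(1)[0]
    agreement = n_votes / n_runs
    return winner, agreement

winner, agreement = judge_with_repeats("Summarise this ticket...", "answer A", "answer B")
print(f"winner={winner}, agreement={agreement:.0%}")
```

Repeated runs cost more tokens, but the agreement score doubles as a cheap reliability signal: verdicts the judge cannot reproduce are exactly the ones a human should look at.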
Read the full blog for free on Medium.