

From JSON to TOON: Evolving Serialization for LLMs

Last Updated on November 11, 2025 by Editorial Team

Author(s): Kushal Banda

Originally published on Towards AI.


When you’re scaling AI applications, token efficiency isn’t just a buzzword; it’s your bottom line. Every token wasted is money left on the table and latency you didn’t ask for. Enter TOON (Token-Oriented Object Notation), a format engineered specifically for LLM contexts that achieves what JSON couldn’t: 30–60% token savings while improving LLM comprehension.

This isn’t about replacing JSON everywhere. It’s about choosing the right tool when you’re feeding massive datasets to Claude, GPT-5, or Gemini.

The Problem We’re Actually Trying to Solve

Let’s be honest: JSON is verbose. When you’re making hundreds of API calls to LLMs with tabular data, that verbosity compounds into real costs.

Consider this scenario. You’re building a data retrieval agent that processes GitHub repository metadata. Your dataset looks like this:

{
  "repositories": [
    {
      "id": 28457823,
      "name": "freeCodeCamp",
      "repo": "freeCodeCamp/freeCodeCamp",
      "description": "freeCodeCamp.org's open-source codebase and curriculum. Learn math, programming,…",
      "createdAt": "2014-12-24T17:49:19Z",
      "updatedAt": "2025-10-28T11:58:08Z",
      "pushedAt": "2025-10-28T10:17:16Z",
      "stars": 430886,
      "watchers": 8583,
      "forks": 42146,
      "defaultBranch": "main"
    },
    {
      "id": 132750724,
      "name": "build-your-own-x",
      "repo": "codecrafters-io/build-your-own-x",
      "description": "Master programming by recreating your favorite technologies from scratch.",
      "createdAt": "2018-05-09T12:03:18Z",
      "updatedAt": "2025-10-28T12:37:11Z",
      "pushedAt": "2025-10-10T18:45:01Z",
      "stars": 430877,
      "watchers": 6332,
      "forks": 40453,
      "defaultBranch": "master"
    }
  ]
}

For the full 100-record dataset (two records shown above), that’s 15,145 tokens. That’s your baseline.

Now meet TOON:

repositories[2]{id,name,repo,description,createdAt,updatedAt,pushedAt,stars,watchers,forks,defaultBranch}:
28457823,freeCodeCamp,freeCodeCamp/freeCodeCamp,"freeCodeCamp.org's open-source codebase and curriculum. Learn math, programming,…","2014-12-24T17:49:19Z","2025-10-28T11:58:08Z","2025-10-28T10:17:16Z",430886,8583,42146,main
132750724,build-your-own-x,codecrafters-io/build-your-own-x,Master programming by recreating your favorite technologies from scratch.,"2018-05-09T12:03:18Z","2025-10-28T12:37:11Z","2025-10-10T18:45:01Z",430877,6332,40453,master

8,745 tokens. Same data. 42.3% fewer tokens.

Scale that across 100 API calls, and suddenly you’re looking at serious cost reduction. But here’s where it gets interesting: you’re not just saving tokens. LLMs understand TOON better.
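The savings scale linearly, so the arithmetic is easy to check. A quick back-of-the-envelope sketch using the figures above; the per-1K-token price is a made-up placeholder, not any provider’s real rate:

```python
# Back-of-the-envelope savings from the figures above.
json_tokens = 15_145   # JSON baseline for the 100-record dataset
toon_tokens = 8_745    # same data in TOON
calls = 100
price_per_1k = 0.003   # hypothetical $/1K input tokens (placeholder)

saved_tokens = (json_tokens - toon_tokens) * calls
reduction = 1 - toon_tokens / json_tokens

print(f"{saved_tokens:,} tokens saved")       # 640,000 tokens saved
print(f"{reduction:.1%} per-call reduction")  # 42.3% per-call reduction
print(f"${saved_tokens / 1000 * price_per_1k:.2f} saved")  # $1.92 saved
```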

Why TOON Makes LLMs Smarter

This is where TOON shifts from “cost optimization hack” to “genuinely useful architecture decision.”

The benchmarks are compelling:

  • TOON achieves 70.1% accuracy on data retrieval tasks (vs JSON’s 65.4%)
  • 46.3% fewer tokens while improving comprehension
  • 96.1% accuracy on GPT-5 Nano (vs 86.4% for JSON)

Why? Because TOON’s structure is explicit. The header repositories[2]{id,name,repo,...}: tells the LLM:

  • I have 2 records
  • Each record has exactly these fields
  • Fields follow this order
  • Treat everything below as data rows

CSV comes close, but drops the explicit record count and any nesting. JSON provides structure but drowns it in syntax. TOON found the sweet spot.
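To make the header convention concrete, here is a minimal sketch of a tabular encoder. This is a hypothetical helper for illustration, not the official library’s code, and it skips quoting and escaping of values that contain commas:

```python
# Minimal sketch of TOON's tabular encoding for uniform records.
# Hypothetical helper; omits quoting/escaping for values with commas.
def encode_tabular(key, records):
    fields = list(records[0].keys())
    # Header declares length and field order once: key[N]{f1,f2,...}:
    header = f"{key}[{len(records)}]{{{','.join(fields)}}}:"
    rows = [",".join(str(r[f]) for f in fields) for r in records]
    return "\n".join([header] + rows)

print(encode_tabular("users", [
    {"id": 1, "name": "Alice", "role": "admin"},
    {"id": 2, "name": "Bob", "role": "user"},
]))
# users[2]{id,name,role}:
# 1,Alice,admin
# 2,Bob,user
```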

Efficiency Ranking (Accuracy per 1K Tokens)

Each format’s overall performance, balancing accuracy against token cost:

TOON          ████████████████████ 26.9 │ 73.9% acc │ 2,744 tokens
JSON compact  █████████████████░░░ 23.0 │ 71.0% acc │ 3,081 tokens
YAML          ██████████████░░░░░░ 18.6 │ 69.0% acc │ 3,719 tokens
JSON          ███████████░░░░░░░░░ 15.3 │ 69.7% acc │ 4,545 tokens
XML           ██████████░░░░░░░░░░ 13.0 │ 67.1% acc │ 5,167 tokens

How TOON Works

TOON steals concepts from YAML (indentation-based nesting), CSV (tabular data), and JSON (structure), then optimizes them for LLM ingestion.

Simple Objects

id: 123
name: Ada
active: true

Straightforward. No braces, no quotes around keys. Just colons.

Nested Objects

user:
  id: 123
  name: Ada
  created: 2025-01-15T10:30:00Z

YAML-style nesting with 2-space indentation.

Primitive Arrays

tags[3]: admin,ops,dev

One line. The [3] tells LLMs there are 3 elements without counting commas.

The Power Move: Tabular Arrays

This is where TOON dominates. Uniform objects (same fields, primitive values):

items[2]{sku,qty,price}:
A1,2,9.99
B2,1,14.5

Instead of repeating keys and punctuation for every row, you declare them once. The format is self-describing: the model knows exactly which columns exist and their order. Nested arrays inherit this structure recursively.
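A rough size check for the items example above illustrates why declaring keys once pays off. Character counts are only a crude proxy for tokens (real counts depend on the model’s tokenizer), but the gap is visible even here:

```python
import json

# Compare the same two records serialized as JSON vs. TOON.
# Character length is a crude stand-in for token count.
records = [{"sku": "A1", "qty": 2, "price": 9.99},
           {"sku": "B2", "qty": 1, "price": 14.5}]
as_json = json.dumps({"items": records})
as_toon = "items[2]{sku,qty,price}:\nA1,2,9.99\nB2,1,14.5"
print(len(as_json), len(as_toon))  # TOON is the shorter of the two
```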

Complex Mixed Arrays

When objects aren’t uniform or contain nested structures, TOON falls back to list format:

items[2]:
  - id: 1
    name: First
    tags[2]: foo,bar
  - id: 2
    name: Second
    nested:
      key: value

The format adapts. It’s not dogmatic.

Benchmarked Performance: The Data Behind the Claims

Three real-world scenarios were tested across multiple LLMs and formats:

Scenario 1: GitHub Repositories (100 records, 11 fields)

  • JSON (15,145 tokens) vs TOON (8,745 tokens)
  • Savings: 6,400 tokens (42.3% reduction)
  • Accuracy improvement: +4.7 points (TOON 70.1% vs. JSON 65.4% on retrieval tasks)
  • Use case: Repository metadata queries, filtering by stars/forks, aggregations

Scenario 2: Daily Analytics (180 days of metrics, 6 fields)

  • JSON (10,977 tokens) vs TOON (4,507 tokens)
  • Savings: 6,470 tokens (58.9% reduction)
  • This is where TOON shines — repeated structures compound savings
  • Accuracy: TOON 78.8% vs JSON 76.9% on time-series aggregations

Scenario 3: E-Commerce Orders (nested customers, item arrays)

  • JSON (257 tokens) vs TOON (166 tokens)
  • Savings: 91 tokens (35.4% reduction)
  • Even with nested complexity, TOON holds ground
  • When you have 10,000 orders to process, 91 tokens × 10,000 ≈ 910K tokens saved

The pattern is clear: uniform tabular data = maximum savings. Mixed/nested structures = still beats JSON, but not dramatically.
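The quoted reduction percentages follow directly from the token counts, so they are easy to verify:

```python
# Reproduce the reduction percentages quoted for the three scenarios
# from their (JSON tokens, TOON tokens) pairs.
scenarios = {
    "GitHub repos":      (15_145, 8_745),
    "Daily analytics":   (10_977, 4_507),
    "E-commerce orders": (257, 166),
}
for name, (json_t, toon_t) in scenarios.items():
    print(f"{name}: {1 - toon_t / json_t:.1%} fewer tokens")
# GitHub repos: 42.3% fewer tokens
# Daily analytics: 58.9% fewer tokens
# E-commerce orders: 35.4% fewer tokens
```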

The API: How to Use TOON in Your Stack

Installation

pip install python-toon

Encoding JSON to TOON

from toon import encode

data = {
    "users": [
        {"id": 1, "name": "Alice", "role": "admin"},
        {"id": 2, "name": "Bob", "role": "user"},
        {"id": 3, "name": "Charlie", "role": "user"}
    ]
}
print(encode(data))
# Output:
# users[3]{id,name,role}:
# 1,Alice,admin
# 2,Bob,user
# 3,Charlie,user

The simplicity is intentional. One function, one job.

Decoding TOON Back to JSON

from toon import decode

toon_str = """users[3]{id,name,role}:
1,Alice,admin
2,Bob,user
3,Charlie,user"""

data = decode(toon_str)

# {
#     "users": [
#         {"id": 1, "name": "Alice", "role": "admin"},
#         {"id": 2, "name": "Bob", "role": "user"},
#         {"id": 3, "name": "Charlie", "role": "user"}
#     ]
# }

Perfect for pipelines: JSON → TOON → LLM → parse response → JSON.
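That pipeline can be sketched end to end. In this sketch `toon_encode` is an inline stand-in for the package’s `encode` (so the snippet is self-contained), and the LLM call is left as a commented placeholder for whichever client you use:

```python
# Sketch of the JSON -> TOON -> LLM step of the pipeline.
# toon_encode is a simplified stand-in; use toon.encode in practice.
def toon_encode(data):
    key, records = next(iter(data.items()))
    fields = list(records[0])
    header = f"{key}[{len(records)}]{{{','.join(fields)}}}:"
    return "\n".join([header] + [",".join(str(r[f]) for f in fields)
                                 for r in records])

def build_prompt(question, data):
    # Serialize once in TOON to shrink the prompt before the LLM call.
    return f"{question}\n\nData:\n{toon_encode(data)}"

prompt = build_prompt("Which users have the admin role?",
                      {"users": [{"id": 1, "name": "Alice", "role": "admin"},
                                 {"id": 2, "name": "Bob", "role": "user"}]})
print(prompt)
# answer = call_llm(prompt)  # placeholder: plug in your provider's client
```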

Optimizing Data for LLM Efficiency

TOON represents a shift in how we think about data serialization in AI contexts. We’re no longer optimizing purely for human readability or general-purpose parsers. We’re optimizing for LLM efficiency.

This trend will accelerate. As LLMs become central to application architecture, formats like TOON will become standard practice. You’ll see:

  • TOON support baked into LLM frameworks
  • JSON↔TOON conversion in middleware
  • LLM APIs offering TOON as native input format (saving computational overhead)
  • Multi-format support in databases (store JSON, serve TOON to LLMs)

The AI engineer who understands these tradeoffs (token efficiency vs. readability, structure vs. flexibility, format selection as an architectural decision) will build better systems.

Conclusion

When you’re managing multi-agent orchestration, context efficiency becomes critical. Every token you waste is cognitive load you’re imposing on your agents.

TOON isn’t revolutionary. It’s evolutionary, taking proven concepts from YAML and CSV and optimizing them for a specific, high-value use case: feeding structured data to LLMs at scale.

The benchmarks don’t lie:

  • 30–60% token reduction
  • 70.1% accuracy (vs JSON’s 65.4%)
  • Easy integration
  • Minimal maintenance burden

For AI engineers building production systems, TOON deserves a spot in your toolkit. Not as a universal replacement for JSON, but as a deliberate choice for cost-critical, high-volume LLM pipelines.

Use it wisely. Your budget will thank you.

