Every API call you make with JSON is costing you more than you think.
I ran real-world extractions using Gemini 2.5 Flash, and the results were startling: JSON consistently used 30–40% more output tokens than TOON format. In one test, JSON consumed 471 output tokens while TOON used just 227 — a 51% reduction.
But here’s where it gets interesting: TOON initially failed 70% of the time.
After optimization, I achieved 100% parsing success and discovered something counterintuitive: reliable TOON needs a heavier prompt, and those extra prompt tokens can erase the output savings. When I tested structured outputs with Pydantic models, the SDK-managed JSON approach needed just 389 output tokens, undercutting my hand-rolled TOON prompting.
The hidden goldmine? Tool/function calling. That’s where TOON’s compact format shines brightest, slashing token costs in agentic workflows where responses become the next prompt.
This isn’t theoretical. I’m sharing the actual prompts, parsing errors, token counts, and code that took TOON from a 70% failure rate to production-ready. Whether TOON beats JSON depends on your use case — and I have the data to prove exactly when.
Let’s break down the numbers.
Experiment #1: The Initial TOON Failure (70% Success Rate)
I started with what seemed like a straightforward test: extracting structured job description data using TOON instead of JSON.
The Setup:
My prompt was simple — ask Gemini 2.5 Flash to extract role, skills, experience, location, and responsibilities from a job posting. For the output format, I did what seemed logical: I showed TOON’s encoded structure using the actual output format (essentially a drop-in replacement approach).
Prompt:
Extract Role, Primary Skills, Secondary Skills,
Minimum Experience, Maximum Experience,
Location, Employment Type, Summary, and Responsibilities
Job Description:
<JD Text>
Output in TOON format:
Role: ""
"Primary Skills"[2]: Python,JavaScript
"Secondary Skills"[2]: Responsibility,Communication
"Minimum Experience": ""
"Maximum Experience": ""
Location: ""
"Employment Type": ""
Summary: ""
Responsibilities[2]: Task A,Task B
Here's what I expected to work: by showing the encoded format with empty strings and generic placeholders, the model would understand the structure.
Reality check: 70% failure rate.
The errors were telling:
Error parsing TOON format for JD#2: Expected 10 values, but got 16
Error parsing TOON format for JD#5: Missing colon after key
The model was confused about arrays. Sometimes it emitted Skills: Python, JavaScript, React as a flat string; other times it attempted brackets but malformed the syntax.
The hypothesis: Maybe showing encoded/empty examples was the problem. The model needed to see real data patterns, especially for arrays.
TOON Token Usage (Initial Approach, 70% Success Rate):
- Prompt: 729 tokens
- Output: 227 tokens
- Success Rate: ~30% initially, improved to 70% after adding two real examples with populated arrays
JSON Token Usage:
- Prompt: 723 tokens
- Output: 471 tokens
Key Insight:
TOON's compact syntax is unforgiving. JSON has redundancy ({"key": "value"}) that helps models self-correct. TOON's Key: value format offers no such safety net. The model needed concrete examples, not abstract templates.
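To see why, here's the same record in both formats (an illustrative sketch, not output from these experiments). JSON's braces and quotes are redundant, but that redundancy gives the model anchors to self-correct against; TOON gives it exactly one way to be right:

JSON:
{"role": "Data Scientist", "skills": ["Python", "SQL"]}

TOON:
role: Data Scientist
skills[2]: Python,SQL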
But 70% wasn't good enough for production. Time to fix this properly.
Experiment #2: Achieving 100% Parsing Success (And the Token Trade-off)
I needed to fix the 70% success rate. The solution? Stop being minimalist with examples.
Instead of showing encoded/empty structures, I gave the model a complete, realistic example with proper TOON formatting — especially for arrays.
The Revised Prompt:
Extract Role, Primary Skills, Secondary Skills,
Minimum Experience, Maximum Experience,
Location, Employment Type, Summary, and Responsibilities
Job Description:
<JD Text>
Output in TOON format. Example structure:
Role: "Senior Data Scientist"
Primary_Skills:
  [1]: "Machine Learning"
  [2]: "Statistical Analysis"
Secondary_Skills:
  [0]: "Big Data"
  [1]: "Cloud Platforms"
Minimum_Experience: "5 years"
Maximum_Experience: "10 years"
Location: "New York, NY or Remote"
Employment_Type: "Full-time"
Summary: "Lead data science initiatives"
Responsibilities:
  [0]: "Design ML models"
  [1]: "Analyze datasets"
Now provide the extraction in TOON format. Keep the format exactly
as shown above.
Result: 100% parsing success. No more malformed arrays. No more missing colons.
But here's the catch: the prompt got heavier.
The Token Comparison: TOON vs JSON
Let me show you the actual numbers across the same 10 job descriptions:
JSON Approach: Token Usage
- Prompt tokens: 723
- Output tokens: 471
- Success rate: 100% (JSON is forgiving)
TOON Approach (Initial — 70% success)
- Prompt tokens: 729
- Output tokens: 227 ✅ (51.8% reduction vs JSON)
- Total: 956 tokens (saves 238 tokens per request)
- Success rate: 70% ❌
TOON Approach (Optimized — 100% success)
- Prompt tokens: 802 ❌ (+11% vs JSON)
- Output tokens: 455 ✅ (3.4% reduction vs JSON)
- Success rate: 100% ✅
The Uncomfortable Truth
For basic extraction tasks, optimized TOON costs MORE than JSON.
Yes, the output is slightly more compact (455 vs 471 tokens), but the verbose prompting needed to achieve 100% reliability completely erases any savings. In fact, you’re paying 5% more per request.
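The totals behind that figure: JSON comes to 723 + 471 = 1,194 tokens per request, while optimized TOON comes to 802 + 455 = 1,257 tokens, roughly 5.3% more.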
So why am I still testing TOON?
Because this experiment revealed something crucial: the baseline comparison is misleading. Real-world LLM applications don’t just extract data once — they use structured outputs for:
- Pydantic model validation (native SDK support)
- Tool/function calling (where output becomes input)
- Multi-turn agentic workflows (repeated serialization)
That’s where the math changes completely. Let me show you.
Experiment #3: Pydantic Models — Where the SDK Does the Heavy Lifting
Here’s where things get interesting. Modern LLM SDKs have first-class support for structured outputs using Pydantic models. Instead of prompt engineering, you define a schema and let the SDK handle formatting.
The key difference: You don’t need to explain the output format in your prompt — the SDK extracts it from your Pydantic model automatically.
The Setup: Google’s GenAI SDK
I used the same job extraction task, but this time with a Pydantic model:
response = client.models.generate_content(
model="gemini-2.5-flash",
contents=prompt,
config={
"response_mime_type": "application/json",
"response_schema": JobModel,
},
)
Notice what’s missing: No output format instructions. No examples. No “Output as JSON with these exact keys.”
The SDK injects the schema behind the scenes.
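For reference, here is roughly what the end-to-end setup looks like. The field names mirror the extraction prompt, but the model definition and prompt placeholder below are my reconstruction, not the original code:

from google import genai
from pydantic import BaseModel

class JobModel(BaseModel):
    # Fields mirror the extraction prompt; the exact schema is assumed, not taken from the original code
    role: str
    primary_skills: list[str]
    secondary_skills: list[str]
    minimum_experience: str
    maximum_experience: str
    location: str
    employment_type: str
    summary: str
    responsibilities: list[str]

client = genai.Client()  # API key picked up from the environment

prompt = "Extract Role, Primary Skills, Secondary Skills, ...\n\nJob Description:\n<JD Text>"

response = client.models.generate_content(
    model="gemini-2.5-flash",
    contents=prompt,
    config={
        "response_mime_type": "application/json",
        "response_schema": JobModel,
    },
)

job = response.parsed  # a JobModel instance, ready to use; no custom parser needed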
Token Comparison: Pydantic JSON vs Manual TOON
Pydantic + JSON (SDK-Managed)
- Prompt tokens: 647 ✅ (19.3% less than optimized TOON)
- Output tokens: 389 ✅ (14.5% less than optimized TOON)
- Success rate: 100% ✅
- Parsing: Native (SDK returns typed Python objects)
Manual TOON (From Experiment #2)
- Prompt tokens: 802 ❌
- Output tokens: 455 ❌
- Success rate: 100% ✅
- Parsing: Custom (you write the parser; a minimal sketch follows this list)
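Writing that parser isn't hard, but it's code you own and debug. Here's a minimal sketch of the kind of line-oriented parser the Experiment #2 layout needs; this is my own illustration of the Key: "value" / [i]: "item" format shown earlier, not the experiment's actual code:

def parse_toon_block(text: str) -> dict:
    """Parse the simple Key: "value" / indexed-list layout from Experiment #2 (illustrative only)."""
    result: dict = {}
    current_list_key = None
    for raw_line in text.splitlines():
        line = raw_line.strip()
        if not line:
            continue
        if line.startswith("["):
            # List item such as [0]: "Design ML models"
            if current_list_key is None:
                raise ValueError(f"List item without a parent key: {line}")
            _, _, value = line.partition(":")
            result[current_list_key].append(value.strip().strip('"'))
        else:
            if ":" not in line:
                raise ValueError(f"Missing colon after key: {line}")
            key, _, value = line.partition(":")
            key, value = key.strip(), value.strip()
            if value:
                result[key] = value.strip('"')   # scalar field, e.g. Role: "Senior Data Scientist"
            else:
                current_list_key = key           # a bare key starts a list, e.g. Responsibilities:
                result[key] = []
    return result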
The Brutal Takeaway
For structured extraction with strong SDK support, Pydantic really shines. Native Pydantic integration delivers:
- ✅ Cleaner prompts (~155 fewer prompt tokens)
- ✅ Smaller outputs (~66 fewer output tokens)
- ✅ No custom parsing logic
- ✅ Built-in type validation
- ✅ Parsed objects returned directly, ready to use
- ✅ A much smoother developer experience
Because of this, I’ll increasingly rely on Pydantic and native parsing support for structured extraction. It’s simply more reliable and maintainable than handling parsing and validation manually.
That said, there’s one scenario where JSON’s verbosity becomes a genuine liability: tool calling in agentic workflows.
That’s where TOON finally proves its worth.
Experiment #4: Tool Calling — Where TOON Finally Wins
This is where everything clicked.
In agentic workflows, your LLM doesn’t just extract data once — it calls tools, receives results, and uses those results to reason further. The tool’s response becomes part of the next prompt. And if that response is bloated with JSON syntax, you’re paying for it twice: once as output, once as input.
The insight: Tool results are pure token waste. The model doesn't need the {"key": "value"} ceremony; it needs the data, efficiently encoded.
The Setup: Weather Agent with Function Calling
I built a simple agent that calls a get_current_weather function. The user asks for weather, the model calls the tool, the function returns data, and the model synthesizes a response.
The critical moment: What format should get_current_weather return?
Version A: JSON Tool Response
# Inside the get_current_weather tool (Version A): build the result and return it as a JSON string
import json

data = {
    "location": location,
    "current": {
        "temperature": "72 F",
        "condition": "sunny",
    },
    "forecast": forecast,  # forecast data assembled earlier in the function (omitted here)
}
return json.dumps(data)  # Returns JSON string
Version B: TOON Tool Response
# Inside the get_current_weather tool (Version B): same data, returned as a TOON-encoded string
# encode() is a TOON encoder; the exact import depends on which TOON library you use
data = {
    "location": location,
    "current": {
        "temperature": "72 F",
        "condition": "sunny",
    },
    "forecast": forecast,  # same forecast data as in Version A
}
return encode(data)  # Returns TOON-encoded string
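For a sense of where the savings come from, here's roughly what the two tool results look like for the fields shown above (forecast entries omitted, since their exact structure isn't shown here):

JSON (Version A):
{"location": "New York", "current": {"temperature": "72 F", "condition": "sunny"}}

TOON (Version B):
location: New York
current:
  temperature: 72 F
  condition: sunny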
Main agent code:
from google import genai
from google.genai import types

client = genai.Client()  # API key picked up from the environment

response = client.models.generate_content(
    model="gemini-2.5-flash",
    contents="What is the weather like in New York? Share next 15 days forecast as well.",
    config=types.GenerateContentConfig(
        tools=[get_current_weather],  # passing the Python function enables automatic function calling
    ),
)
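If you want to reproduce the token numbers below, the counts are available on the response object (a sketch using the google-genai SDK's usage metadata fields; names may vary by SDK version):

print(response.usage_metadata.prompt_token_count)      # prompt/input tokens
print(response.usage_metadata.candidates_token_count)  # model output tokens
print(response.usage_metadata.total_token_count)       # total for the request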
Result Token Usage (TOON version):
- Initial prompt tokens: 152 (user message + tool definition)
- Tool response tokens (becomes input): 480 ✅ (24% reduction)
- Model’s final output: 384 (slightly longer, but reasonable)
- Total tokens: 1,016 ✅ (11.5% reduction overall)
Why TOON Wins in Agentic Workflows
Here’s the math that matters:
Single Tool Call
- JSON approach: 632 tokens for tool result
- TOON approach: 480 tokens for tool result
- Savings: 152 tokens per tool call (24%)
Multi-Turn Agent (5 tool calls)
- JSON approach: 632 × 5 = 3,160 tokens in tool results
- TOON approach: 480 × 5 = 2,400 tokens in tool results
- Savings: 760 tokens (24%)
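A quick way to see how the savings scale with call count, using the per-call numbers measured above:

# Tool-result token counts measured in this experiment
JSON_TOOL_TOKENS = 632
TOON_TOOL_TOKENS = 480

for calls in (1, 5, 20):
    saved = (JSON_TOOL_TOKENS - TOON_TOOL_TOKENS) * calls
    print(f"{calls} tool call(s): {saved} tokens saved in tool results")
# -> 152, 760, and 3,040 tokens respectively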
The Compounding Effect
Why this matters more than single extractions:
- Tool results are pure input tokens: you pay for them every single time
- Verbosity multiplies: JSON's {}, :, and , characters add 20-30% overhead for nested data
- No parsing penalty: the model consumes TOON just as easily (we verified this in follow-up tests)
- Scales with agent complexity: more tools = more savings
The difference? Where the efficiency matters.
The Bottom Line
After running these tests across four different scenarios, here's what the data tells us:
TOON loses at single extractions. Whether you're doing manual prompting or using Pydantic models, JSON with SDK support is cleaner, cheaper, and more reliable. The 17.6% total-token savings from native schema integration (1,036 tokens for Pydantic + JSON versus 1,257 for optimized TOON) beats TOON's manual approach every time.
But TOON wins where it counts for agents: tool calling workflows.
When your LLM’s output becomes the next prompt — when data cycles between model and functions repeatedly — TOON’s 24% reduction per tool call transforms from interesting to impactful. An agent making 20 tool calls saves 3,040 tokens per session.
The decision matrix is simple:
- Building a chatbot that extracts structured data? Use JSON + Pydantic.
- Building an agent that calls tools 10+ times per session? Test TOON.
- Building anything else? Profile first, optimize later.
Try It Yourself
I’ve open-sourced all the experiments, prompts, and token measurements: View complete code and results on GitHub Gist
The repository includes:
- ✅ All four experiment setups with actual prompts
- ✅ Token usage logs for every test case
- ✅ Side-by-side comparison scripts
- ✅ The job descriptions I used for testing
TOON isn’t magic — it’s math. And the math only works when token efficiency genuinely matters. For most applications, JSON’s ecosystem advantages outweigh the savings. But for token-heavy agentic workflows? TOON might just pay for itself.
Now you have the data to decide.