After my recent talk on Agent-in-the-Loop systems, I was asked a seemingly simple question: How do you optimize agents? (Link to the talk: https://www.youtube.com/watch?v=HwCR59VuYn4&t=1888s)
At first glance, this sounds like a technical question. Many people expect a concrete answer involving prompt engineering, temperature tuning, or model selection. My response, however, was far less satisfying — but far more honest:
It depends on the task.
This answer often feels like a cop-out. In reality, it reflects a deeper truth about agentic systems: you don’t optimize agents in isolation — you optimize the system they operate in.
The Common Misconception: Optimization Means Tuning the Model
When people talk about optimizing agents, they usually mean optimizing the underlying model. Adjust the prompt, lower the temperature, swap the model, and expect better behavior.
These adjustments can help at the margins, but they rarely address the root cause of failure. That’s because an agent is not just a language model.
An agent is a system composed of:
- a task definition
- an action space (what the agent is allowed to do)
- constraints and boundaries
- feedback and evaluation mechanisms
- stop and escalation conditions
If these components are poorly designed, no amount of prompt tuning will make the system reliable.
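To make this concrete, here is a minimal sketch of those components gathered into a single structure. The AgentSpec type, its field names, and the example values are illustrative assumptions, not the API of any particular framework:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class AgentSpec:
    """Illustrative container for the components listed above."""
    task: str                                  # task definition: what the agent is asked to do
    allowed_actions: list[str]                 # action space: what the agent is allowed to do
    constraints: dict[str, int]                # boundaries, e.g. retry and context budgets
    evaluate: Callable[[str], float]           # feedback: scores output against real objectives
    should_escalate: Callable[[float], bool]   # stop/escalation condition based on that score

# Example wiring: the model is only one part of this system.
spec = AgentSpec(
    task="Summarize the document and flag low-confidence sections",
    allowed_actions=["read_document", "write_summary"],
    constraints={"max_retries": 2, "max_context_tokens": 8000},
    evaluate=lambda output: 0.8,               # placeholder scorer
    should_escalate=lambda score: score < 0.7, # hand off to a human below this threshold
)
```

Prompt tuning only touches the first field. The rest of the structure is where most of the reliability lives.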
Agent Optimization Is a Task Design Problem
In practice, most agent failures are task design failures.
Agents struggle when objectives are too broad, success criteria are vague, or responsibilities are overloaded. Instructions like “do your best” or “solve this end-to-end” leave too much room for interpretation and lead to unpredictable behavior.
Consider the difference between these two prompts:
Poorly framed task:
"Analyze this document and decide what to do."
This instruction hides multiple decisions inside a single step: analysis, prioritization, and action selection. The agent has no clear notion of success or failure.
Well-framed task:
"Summarize the document, estimate uncertainty, and escalate to a human if confidence falls below a defined threshold."
Here, the task is explicit, bounded, and testable. The agent’s role is clear, and human intervention is intentionally designed rather than left implicit.
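As a sketch of what that framing might look like in code, assuming a placeholder summarize_with_confidence call and an arbitrary 0.7 threshold:

```python
CONFIDENCE_THRESHOLD = 0.7  # assumed value; set per task

def summarize_with_confidence(document: str) -> tuple[str, float]:
    # Stand-in for a model call that returns a summary plus an uncertainty estimate.
    return document[:200], 0.65

def handle_document(document: str) -> dict:
    """The well-framed task: summarize, estimate uncertainty, escalate when unsure."""
    summary, confidence = summarize_with_confidence(document)
    if confidence < CONFIDENCE_THRESHOLD:
        # Escalation is designed in, not left implicit.
        return {"status": "escalated", "summary": summary, "confidence": confidence}
    return {"status": "done", "summary": summary, "confidence": confidence}
```

Each branch is testable on its own, which is exactly what the vague version of the task makes impossible.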
Optimizing an agent often means narrowing the task:
- defining what success actually means
- specifying what the agent should not do
- breaking complex goals into smaller, verifiable steps
A well-framed task reduces the need for aggressive model-level optimization.
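One way to picture that narrowing is to pair each step with an explicit check, so every stage is verifiable before the next one runs. The step names and checks below are hypothetical:

```python
# Hypothetical decomposition: each step carries its own pass/fail check.
steps = [
    ("extract_key_claims",  lambda out: len(out) > 0),
    ("draft_summary",       lambda out: len(out.split()) <= 300),
    ("list_open_questions", lambda out: out.strip() != ""),
]

def run_steps(execute):
    """`execute` is whatever runs one instruction (a model call, a tool call, ...)."""
    results = []
    for name, check in steps:
        out = execute(name)
        if not check(out):
            raise RuntimeError(f"step '{name}' failed its check; stop rather than guess")
        results.append(out)
    return results
```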
Feedback Loops Matter More Than Prompts
Another common failure point is feedback design. Agents frequently evaluate their own outputs, but self-evaluation can be misleading or overly optimistic.
Effective agent systems rely on feedback loops that are:
- timely
- aligned with real objectives
- capable of triggering escalation
If feedback arrives too late or measures the wrong thing, the agent may appear functional while gradually drifting away from its intended behavior.
Human involvement is most valuable here — not in validating every decision, but in designing how feedback is generated and when intervention is required.
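A minimal feedback loop might look like the sketch below. The scorer, threshold, and escalation hook are assumptions; the point is that evaluation is external, happens on every attempt, and can trigger a hand-off:

```python
def run_with_feedback(act, evaluate, escalate, max_attempts=3, min_score=0.7):
    """Try, score against the real objective, and escalate instead of drifting."""
    output = None
    for _ in range(max_attempts):
        output = act()
        score = evaluate(output)     # external feedback, not the agent grading itself
        if score >= min_score:
            return output
    return escalate(output)          # intervention point designed by a human

# Example wiring with placeholder callables.
result = run_with_feedback(
    act=lambda: "draft summary ...",
    evaluate=lambda out: 0.5,            # e.g. a rubric score or test-suite pass rate
    escalate=lambda out: {"needs_review": out},
)
```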
Constraints Are Not a Limitation — They Are a Guide
One of the most overlooked aspects of agent optimization is constraint design.
Constraints define:
- which tools an agent can use
- how often it can retry
- how much context it can consume
- when it must stop or ask for help
Rather than limiting performance, constraints provide structure. They prevent runaway behavior and make agent actions easier to reason about.
Constraints don’t weaken agents — they guide them.
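In practice, a constraint set can be as simple as a small, explicit configuration object. The fields below mirror the list above; the names and defaults are illustrative assumptions:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AgentConstraints:
    """Illustrative constraint set; fields mirror the list above."""
    allowed_tools: tuple[str, ...] = ("search", "read_file")  # which tools the agent can use
    max_retries: int = 2                                       # how often it can retry
    max_context_tokens: int = 8_000                            # how much context it can consume
    escalate_after_failures: int = 2                           # when it must stop or ask for help

def can_use(tool: str, c: AgentConstraints) -> bool:
    # Structure, not limitation: anything outside the allowlist is simply not an option.
    return tool in c.allowed_tools
```

Making the constraints a frozen, inspectable object also makes runaway behavior easier to rule out, because every boundary is written down in one place.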
The Role of Humans in Optimized Agent Systems
In optimized Agent-in-the-Loop systems, humans are not prompt engineers or micromanagers. Their role is to design the system boundaries and supervision mechanisms.
Humans are best positioned to:
- define goals and constraints
- decide which failures are acceptable
- interpret ambiguous situations
In other words, humans optimize the decision space, not individual decisions.
Key Takeaways
- Agent optimization starts with task design, not model tuning
- Prompts and temperatures are secondary levers
- Feedback loops determine long-term behavior
- Constraints increase reliability and predictability
- Humans belong above the loop, not inside every step
Final Thoughts
Optimizing agents is not about making them smarter. It’s about making the system clearer.
When tasks are well-defined, feedback is meaningful, and constraints are explicit, agents don’t need to be aggressively optimized — they simply work better.