EvalView — Pytest-style Testing for AI Agents

The open-source testing framework for LangGraph, CrewAI, OpenAI Assistants, and Anthropic Claude agents. Write tests in YAML, catch regressions in CI, and ship with confidence.

EvalView is pytest for AI agents: write readable test cases, run them in CI/CD, and block deploys when behavior, cost, or latency regresses.

What is EvalView?

EvalView is a testing framework for AI agents.

It lets you:

  • 🧪 Write tests in YAML that describe inputs, expected tools, and acceptance thresholds (see the test-file sketch after this list)
  • 🔁 Turn real conversations into regression suites (record → generate tests → re-run on every change)
  • 🚦 Gate deployments in CI on behavior, tool calls, cost, and latency (a CI sketch follows below)
  • 🧩 Plug into LangGraph, CrewAI, OpenAI Assistants, Anthropic Claude agents, and more
