Agentic Eval

About this skill

Patterns and techniques for evaluating and improving AI agent outputs.

Use this skill when:
- Implementing self-critique and reflection loops
- Building evaluator-optimizer pipelines for quality-critical generation
- Creating test-driven code refinement workflows
- Designing rubric-based or LLM-as-judge evaluation systems
- Adding iterative improvement to agent outputs (code, reports, analysis)
- Measuring and improving agent response quality
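
The core of these patterns is a generate-evaluate-refine loop. Below is a minimal sketch of an evaluator-optimizer loop; the `generate` and `evaluate` callables are hypothetical stand-ins (not part of this skill's API) for an LLM generation call and a rubric-, judge-, or test-based evaluator that you would supply.

```python
# Minimal evaluator-optimizer loop (sketch, with hypothetical callables).
from dataclasses import dataclass
from typing import Callable

@dataclass
class Evaluation:
    passed: bool   # did the draft meet the rubric / pass the tests?
    feedback: str  # critique to feed back into the next attempt

def refine(
    task: str,
    generate: Callable[[str, str], str],    # (task, feedback) -> draft
    evaluate: Callable[[str], Evaluation],  # draft -> Evaluation
    max_iterations: int = 3,
) -> str:
    """Generate a draft, critique it, and regenerate until it passes or the budget runs out."""
    feedback = ""
    draft = ""
    for _ in range(max_iterations):
        draft = generate(task, feedback)
        result = evaluate(draft)
        if result.passed:
            return draft
        feedback = result.feedback  # reflection: carry the critique into the next generation
    return draft  # best effort after exhausting the iteration budget
```

The same loop covers the other use cases by swapping the evaluator: a unit-test runner gives test-driven code refinement, an LLM-as-judge prompt gives rubric-based scoring, and the model critiquing its own output gives a self-critique loop.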

Version: 1.0.0
