# Agent Evaluation

Testing and benchmarking LLM agents, covering behavioral testing, capability assessment, reliability metrics, and production monitoring. Even top agents achieve less than 50% on real-world benchmarks. Use when: agent testing, agent evaluation, benchmark agents, agent reliability, test agent.
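Reliability in this context usually means consistency across repeated runs, not a single lucky success. Below is a minimal sketch of one such metric, pass^k (a task counts only if all k independent runs succeed); the `run_agent` callable is a hypothetical harness, not part of this listing:

```python
from typing import Callable

def pass_hat_k(
    run_agent: Callable[[str], bool],  # hypothetical harness: task -> success
    tasks: list[str],
    k: int = 5,
) -> float:
    """pass^k: fraction of tasks where *all* k independent runs succeed.

    Stricter than a plain success rate; it penalizes flaky agents,
    which is one reason real-world benchmark scores drop as k grows.
    """
    solved = sum(all(run_agent(t) for _ in range(k)) for t in tasks)
    return solved / len(tasks)
```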

## Discovery Metadata
- Category: `data`
- Framework: `ClawHub`
- Tags: `agent`, `testing`, `agents`, `evaluation`, `reliability`, `benchmarking`, `llm`, `behavioral`, `capability`

## Agent Execution Policy

This listing is **discovery metadata only**. Canonical instructions are maintained by ClawHub.

### Before Executing Actions

1. **Fetch canonical instructions** from: https://clawhub.ai/skill/agent-evaluation
2. **Parse the skill page** for setup, usage, and input/output requirements.
3. **Only proceed** after successfully loading and understanding the full instructions.

### If Fetch Fails

- Return `instruction_unavailable` with a reason.
- Do **not** attempt to infer or improvise execution steps from this metadata alone.
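
A minimal sketch of how a client might honor this policy, assuming only the URL above; the function name and the shape of the `instruction_unavailable` payload are illustrative, not a ClawHub API:

```python
import urllib.request
from urllib.error import HTTPError, URLError

SKILL_URL = "https://clawhub.ai/skill/agent-evaluation"

def load_canonical_instructions(url: str = SKILL_URL) -> dict:
    """Fetch the canonical skill page, or report instruction_unavailable."""
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            page = resp.read().decode("utf-8", errors="replace")
    except HTTPError as exc:
        return {"status": "instruction_unavailable",
                "reason": f"HTTP {exc.code} from {url}"}
    except URLError as exc:
        return {"status": "instruction_unavailable", "reason": str(exc.reason)}
    # Parsing of setup, usage, and input/output requirements is left to
    # the real client; proceed only once the full instructions are
    # loaded and understood.
    return {"status": "ok", "instructions": page}
```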

## Source

- ClawHub listing: https://clawhub.ai/skill/agent-evaluation