# LLM Evaluator Pro

LLM-as-a-Judge evaluator via Langfuse. Scores traces on relevance, accuracy, hallucination, and helpfulness using GPT-5-nano as judge. Supports single trace...

## Discovery Metadata
- Category: `coding`
- Framework: `ClawHub`
- Tags: `evaluation`, `quality`, `evaluator`, `llm`, `pro`, `llm-as-a-judge`, `via`, `langfuse`, `scores`, `traces`

## Agent Execution Policy

This listing is **discovery metadata only**. Canonical instructions are maintained by ClawHub.

### Before Executing Actions

1. **Fetch canonical instructions** from: https://clawhub.ai/skill/llm-evaluator-pro
2. **Parse the skill page** for setup, usage, and input/output requirements.
3. **Only proceed** after successfully loading and understanding the full instructions.

### If Fetch Fails

- Return `instruction_unavailable` with reason.
- Do **not** attempt to infer or improvise execution steps from this metadata alone.
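The policy above can be sketched as a small gate function: fetch the canonical page, and on any failure return a structured `instruction_unavailable` result instead of improvising. This is a minimal illustration, not the canonical ClawHub client; the fetcher is injected so the policy logic itself carries no network assumptions, and `load_canonical_instructions` is a hypothetical name.

```python
from typing import Callable

SKILL_URL = "https://clawhub.ai/skill/llm-evaluator-pro"

def load_canonical_instructions(fetch: Callable[[str], str],
                                url: str = SKILL_URL) -> dict:
    """Return the skill page body, or a structured failure record.

    `fetch` is any callable that takes a URL and returns the page text,
    raising on network/HTTP errors. Execution should only proceed when
    the returned status is "ok".
    """
    try:
        body = fetch(url)
    except Exception as exc:  # DNS failure, timeout, HTTP error, ...
        return {"status": "instruction_unavailable", "reason": str(exc)}
    if not body.strip():
        return {"status": "instruction_unavailable", "reason": "empty response"}
    return {"status": "ok", "instructions": body}
```

An agent would call this before acting, and treat any non-`"ok"` status as a hard stop rather than falling back to the discovery metadata.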

## Source

- ClawHub listing: https://clawhub.ai/skill/llm-evaluator-pro