# rag-eval

Evaluate your RAG pipeline quality using Ragas metrics (faithfulness, answer relevancy, context precision).

## Discovery Metadata

- Category: `automation`
- Framework: `ClawHub`
- Tags: `rag-eval`, `evaluate`, `rag`, `pipeline`, `quality`, `using`, `ragas`, `metrics`, `faithfulness`, `answer`

## Agent Execution Policy

This listing is **discovery metadata only**. Canonical instructions are maintained by ClawHub.

### Before Executing Actions

1. **Fetch canonical instructions** from: https://clawhub.ai/skill/rag-eval
2. **Parse the skill page** for setup, usage, and input/output requirements.
3. **Only proceed** after successfully loading and understanding the full instructions.

### If Fetch Fails

- Return `instruction_unavailable` with reason.
- Do **not** attempt to infer or improvise execution steps from this metadata alone.

## Source

- ClawHub listing: https://clawhub.ai/skill/rag-eval
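## Illustrative Sketches

As a concrete illustration of the execution policy above, here is a minimal sketch of the fetch-or-refuse flow, assuming Python with the `requests` package. The `load_canonical_instructions` helper and its return shape are hypothetical conveniences for this example, not part of the ClawHub spec:

```python
import requests

SKILL_URL = "https://clawhub.ai/skill/rag-eval"

def load_canonical_instructions(url: str = SKILL_URL) -> dict:
    """Fetch the canonical skill page, or refuse rather than improvise."""
    try:
        resp = requests.get(url, timeout=10)
        resp.raise_for_status()
    except requests.RequestException as exc:
        # Per the policy: report failure, do not infer steps from metadata.
        return {"status": "instruction_unavailable", "reason": str(exc)}
    # The raw page still needs parsing for setup, usage, and I/O requirements.
    return {"status": "ok", "page": resp.text}
```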
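For orientation only (the canonical instructions live at the URL above), here is a minimal sketch of evaluating the three named metrics, assuming the classic `ragas.evaluate` API with a Hugging Face `Dataset`. Column names and metric imports vary across ragas versions, the toy sample data is invented, and `evaluate` runs an LLM judge under the hood, so credentials for the configured judge model are assumed:

```python
# pip install ragas datasets
from datasets import Dataset
from ragas import evaluate
from ragas.metrics import faithfulness, answer_relevancy, context_precision

# One toy sample: a question, the retrieved contexts, the generated answer,
# and a reference answer (context_precision scores retrieval against it).
sample = {
    "question": ["What does the faithfulness metric measure?"],
    "contexts": [[
        "Faithfulness checks whether the generated answer is grounded "
        "in the retrieved context."
    ]],
    "answer": ["It measures whether the answer is supported by the retrieved context."],
    "ground_truth": ["Faithfulness measures how grounded the answer is in the retrieved context."],
}

result = evaluate(
    Dataset.from_dict(sample),
    metrics=[faithfulness, answer_relevancy, context_precision],
)
print(result)  # per-metric scores in [0, 1], e.g. {'faithfulness': ...}
```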