## Get evaluation results for a dataset

**GET** `/api/v1/datasets/{dataset_id}/evaluation`

Retrieves the evaluation results for a dataset.

### Path Parameters

- `dataset_id: string`

### Returns

- `dataset_id: string` Dataset ID
- `quality: object` Structured quality metrics. Null until evaluation completes.
  - `grade_after: string` Letter grade (A-E) after augmentation
  - `grade_before: string` Letter grade (A-E) before augmentation
  - `improvement_percent: number` Relative quality improvement as a percentage
  - `percentile_after: number` Percentile rank (0-100) after augmentation
  - `score_after: number` Quality score (0-10) after augmentation
  - `score_before: number` Quality score (0-10) before augmentation
- `raw_results: map[unknown]` Raw evaluation results payload for advanced use. Null until evaluation completes.
- `status: string` Evaluation pipeline status: `pending | running | succeeded | failed | skipped`

### Example

```bash
curl https://api.adaptionlabs.ai/api/v1/datasets/$DATASET_ID/evaluation \
  -H "Authorization: Bearer $ADAPTION_API_KEY"
```

#### Response

```json
{
  "dataset_id": "dataset_id",
  "quality": {
    "grade_after": "A",
    "grade_before": "C",
    "improvement_percent": 37.1,
    "percentile_after": 92.3,
    "score_after": 8.5,
    "score_before": 6.2
  },
  "raw_results": { "foo": "bar" },
  "status": "succeeded"
}
```