
Get evaluation results for a dataset

datasets.get_evaluation(dataset_id: str) -> DatasetGetEvaluationResponse
GET/api/v1/datasets/{dataset_id}/evaluation

Get evaluation results for a dataset

Parameters
dataset_id: str
Returns
class DatasetGetEvaluationResponse:
dataset_id: str

Dataset ID

quality: Optional[Quality]

Structured quality metrics. Null until evaluation completes.

    grade_after: Optional[str]

    Letter grade (A-E) after augmentation

    grade_before: Optional[str]

    Letter grade (A-E) before augmentation

    improvement_percent: Optional[float]

    Relative quality improvement as a percentage

    percentile_after: Optional[float]

    Percentile rank (0-100) after augmentation

    score_after: Optional[float]

    Quality score (0-10) after augmentation

    score_before: Optional[float]

    Quality score (0-10) before augmentation

raw_results: Optional[Dict[str, object]]

Raw evaluation results payload for advanced use. Null until evaluation completes.

status: Optional[str]

Evaluation pipeline status: pending | running | succeeded | failed | skipped
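Because quality and raw_results stay null until the evaluation finishes, callers typically poll status until it reaches a terminal value. A minimal sketch, assuming pending and running are the only non-terminal states (the fetch callable stands in for client.datasets.get_evaluation; nothing here is a documented helper):

```python
import time

# Assumed terminal states, inferred from the documented status values;
# not confirmed by the API reference itself.
TERMINAL_STATES = {"succeeded", "failed", "skipped"}


def wait_for_evaluation(fetch, dataset_id, interval=2.0, max_attempts=30):
    """Poll `fetch(dataset_id)` until the evaluation reaches a terminal state.

    `fetch` is any callable returning an object with a `.status` attribute,
    e.g. `client.datasets.get_evaluation`.
    """
    response = None
    for attempt in range(max_attempts):
        response = fetch(dataset_id)
        if response.status in TERMINAL_STATES:
            return response
        if attempt < max_attempts - 1:
            time.sleep(interval)
    raise TimeoutError(f"evaluation for {dataset_id} still {response.status!r}")
```

With the real client this would be called as wait_for_evaluation(client.datasets.get_evaluation, "dataset_id").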

Get evaluation results for a dataset

import os
from adaption import Adaption

client = Adaption(
    api_key=os.environ.get("ADAPTION_API_KEY"),  # This is the default and can be omitted
)
response = client.datasets.get_evaluation(
    "dataset_id",
)
print(response.dataset_id)

Example response:

{
  "dataset_id": "dataset_id",
  "quality": {
    "grade_after": "A",
    "grade_before": "C",
    "improvement_percent": 37.1,
    "percentile_after": 92.3,
    "score_after": 8.5,
    "score_before": 6.2
  },
  "raw_results": {
    "foo": "bar"
  },
  "status": "succeeded"
}
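In the example payload, improvement_percent matches the relative change from score_before to score_after. Assuming that definition (it is not stated by the API), the value can be reproduced:

```python
# Assumed relation, not confirmed by the API reference:
# improvement_percent = (score_after - score_before) / score_before * 100
score_before = 6.2
score_after = 8.5
improvement_percent = round((score_after - score_before) / score_before * 100, 1)
print(improvement_percent)  # 37.1, matching the example payload
```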