# Upload

## Initiate a dataset upload

`datasets.upload.initiate(**kwargs: UploadInitiateParams) -> UploadInitiateResponse`

**post** `/api/v1/datasets/upload/initiate`

Initiate a dataset upload.

### Parameters

- `file_format: Literal["csv", "json", "jsonl", "parquet"]`

  Format of the file being uploaded:

  - `"csv"`
  - `"json"`
  - `"jsonl"`
  - `"parquet"`

- `name: str`

  Human-readable name for the dataset

### Returns

- `class UploadInitiateResponse: …`

  - `upload_url: str`

    Pre-signed S3 URL; upload the file directly to this URL via HTTP PUT

### Example

```python
import os

from adaption import Adaption

client = Adaption(
    api_key=os.environ.get("ADAPTION_API_KEY"),  # This is the default and can be omitted
)

response = client.datasets.upload.initiate(
    file_format="csv",
    name="my-training-data",
)
print(response.upload_url)
```

#### Response

```json
{
  "upload_url": "https://s3.amazonaws.com/bucket/key?X-Amz-Signature=..."
}
```

## Complete a dataset upload and trigger processing

`datasets.upload.complete(**kwargs: UploadCompleteParams) -> UploadCompleteResponse`

**post** `/api/v1/datasets/upload/complete`

Complete a dataset upload and trigger processing.

### Parameters

- `file_format: Literal["csv", "json", "jsonl", "parquet"]`

  Format of the uploaded file:

  - `"csv"`
  - `"json"`
  - `"jsonl"`
  - `"parquet"`

- `file_size_bytes: float`

  Size of the uploaded file in bytes

- `name: str`

  Human-readable name for the dataset

- `s3_key: str`

  S3 object key returned in the pre-signed URL response from `/upload/initiate`

### Returns

- `class UploadCompleteResponse: …`

  - `dataset_id: str`

    ID of the newly created dataset

### Example

```python
import os

from adaption import Adaption

client = Adaption(
    api_key=os.environ.get("ADAPTION_API_KEY"),  # This is the default and can be omitted
)

response = client.datasets.upload.complete(
    file_format="csv",
    file_size_bytes=1048576,
    name="my-training-data",
    s3_key="uploads/550e8400-e29b-41d4-a716-446655440000/my-training-data.csv",
)
print(response.dataset_id)
```
#### Response

```json
{
  "dataset_id": "550e8400-e29b-41d4-a716-446655440000"
}
```

## Complete a file upload and trigger processing

`datasets.upload.complete_by_id(dataset_id: str, **kwargs: UploadCompleteByIDParams) -> UploadCompleteByIDResponse`

**post** `/api/v1/datasets/{dataset_id}/upload/complete`

File uploads only. Call after uploading the bytes to the presigned URL from `POST /datasets`. Verifies that the file exists in S3, then triggers the preprocessing pipeline.

### Parameters

- `dataset_id: str`

- `file_size_bytes: float`

  Size of the uploaded file in bytes (for verification)

- `sha256: Optional[str]`

  SHA-256 hex digest of the uploaded file (for integrity verification)

### Returns

- `class UploadCompleteByIDResponse: …`

  - `dataset_id: str`

    ID of the dataset

  - `status: str`

    Current status of the dataset after completing the upload

### Example

```python
import os

from adaption import Adaption

client = Adaption(
    api_key=os.environ.get("ADAPTION_API_KEY"),  # This is the default and can be omitted
)

response = client.datasets.upload.complete_by_id(
    dataset_id="dataset_id",
    file_size_bytes=1048576,
)
print(response.dataset_id)
```

#### Response

```json
{
  "dataset_id": "550e8400-e29b-41d4-a716-446655440000",
  "status": "processing"
}
```
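The `/upload/complete` call takes an `s3_key`, described as being returned by `/upload/initiate`, although the documented initiate response schema lists only `upload_url`. If your initiate response likewise exposes only the URL, one fallback, assuming the path-style pre-signed URL form shown in the example response (`https://s3.amazonaws.com/<bucket>/<key>?...`), is to parse the object key out of the URL path. This is an illustrative sketch, not part of the SDK:

```python
from urllib.parse import urlparse


def s3_key_from_presigned_url(upload_url: str) -> str:
    """Extract the S3 object key from a path-style pre-signed URL.

    Assumes the form https://s3.amazonaws.com/<bucket>/<key>?...:
    the first path segment is the bucket, the remainder is the key.
    """
    path = urlparse(upload_url).path.lstrip("/")
    _bucket, _, key = path.partition("/")
    return key


# Hypothetical URL in the same shape as the documented example response.
url = (
    "https://s3.amazonaws.com/bucket/"
    "uploads/550e8400-e29b-41d4-a716-446655440000/my-training-data.csv"
    "?X-Amz-Signature=abc123"
)
print(s3_key_from_presigned_url(url))
# uploads/550e8400-e29b-41d4-a716-446655440000/my-training-data.csv
```

Note that virtual-hosted-style URLs (`https://<bucket>.s3.amazonaws.com/<key>`) put the whole path in the key, so verify which style your presigned URLs use before relying on this parsing.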
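Both completion endpoints take `file_size_bytes`, and `complete_by_id` also accepts an optional `sha256` digest for integrity verification. Both values can be computed locally with the standard library before making the call; the helper below is an illustrative sketch (its name is not part of the SDK), reading in chunks so large uploads need not fit in memory:

```python
import hashlib
import os
import tempfile


def file_upload_metadata(path: str) -> tuple[int, str]:
    """Return (size in bytes, SHA-256 hex digest) for the file at path."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        # 1 MiB chunks keep memory flat regardless of file size.
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return os.path.getsize(path), digest.hexdigest()


# Demo with a throwaway CSV file.
with tempfile.NamedTemporaryFile(suffix=".csv", delete=False) as tmp:
    tmp.write(b"col_a,col_b\n1,2\n")
    path = tmp.name

size, sha = file_upload_metadata(path)
print(size)  # 16
print(sha)
os.unlink(path)
```

The two results map directly onto the `file_size_bytes` and `sha256` parameters of `client.datasets.upload.complete_by_id(...)`.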