---
title: CLI
---
:::important
This page covers `clearml-data`, ClearML's file-based data management solution.
See [Hyper-Datasets](../hyperdatasets/overview.md) for ClearML's advanced queryable dataset management solution.
:::
`clearml-data` is a data management CLI tool that comes as part of the `clearml` Python package. Use `clearml-data` to
create, modify, and manage your datasets. You can upload your dataset to any storage service of your choice (S3 / GS /
Azure / network storage) by setting the dataset's upload destination (see [`--storage`](#upload)). Once you have uploaded
your dataset, you can access it from any machine.
The following page provides a reference to `clearml-data`'s CLI commands.
## create
Creates a new dataset.
```bash
clearml-data create [-h] [--parents [PARENTS [PARENTS ...]]] [--project PROJECT]
--name NAME [--version VERSION] [--output-uri OUTPUT_URI]
[--tags [TAGS [TAGS ...]]]
```
**Parameters**
|Name|Description|Optional|
|---|---|---|
|`--name`|Dataset's name|No|
|`--project`|Dataset's project|Yes|
|`--version`|Dataset version. Use the [semantic versioning](https://semver.org) scheme. If not specified, a version is automatically assigned|Yes|
|`--parents`|IDs of the dataset's parents. The dataset inherits all of its parents' content. Multiple parents can be entered, but they are merged in the order they were entered|Yes|
|`--output-uri`|Sets where the dataset and its previews are uploaded to|Yes|
|`--tags`|Dataset user tags. The dataset can be labeled, which can be useful for organizing datasets|Yes|
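For example, the following creates a new dataset version (the project and dataset names below are placeholders):

```bash
# "Sample Project" and "Sample Dataset" are placeholder names
clearml-data create --project "Sample Project" --name "Sample Dataset" --version 1.0.0
```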
:::tip Dataset ID
* For datasets created with `clearml` v1.6 or newer on ClearML Server v1.6 or newer, find the ID in the dataset version’s info panel in the [Dataset UI](../webapp/datasets/webapp_dataset_viewing.md).
For datasets created with earlier versions of `clearml`, or if using an earlier version of ClearML Server, find the ID in the task header of the [dataset task's info panel](../webapp/webapp_exp_track_visual.md).
* `clearml-data` works in a stateful mode, so once a new dataset is created, subsequent commands
do not require the `--id` flag.
:::
## add
Add individual files or complete folders to the dataset.
```bash
clearml-data add [-h] [--id ID] [--dataset-folder DATASET_FOLDER]
[--files [FILES [FILES ...]]] [--wildcard [WILDCARD [WILDCARD ...]]]
[--links [LINKS [LINKS ...]]] [--non-recursive] [--verbose]
```
**Parameters**
|Name|Description|Optional|
|---|---|---|
|`--id`|Dataset's ID. Default: previously created / accessed dataset|Yes|
|`--files`|Files / folders to add. Items will be uploaded to the dataset's designated storage|Yes|
|`--wildcard`|Add a specific set of files, denoted by these wildcards. For example: `~/data/*.jpg ~/data/json`. Multiple wildcards can be passed|Yes|
|`--links`|File / folder links to add. Supports S3, GS, and Azure links. For example: `s3://bucket/data` `azure://bucket/folder`. Items remain in their original location|Yes|
|`--dataset-folder`|Dataset base folder to add the files to in the dataset. Default: dataset root|Yes|
|`--non-recursive`|Disable recursive scan of files|Yes|
|`--verbose`|Verbose reporting|Yes|
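For example (the local paths and bucket URL below are placeholders):

```bash
# Add a local folder (uploaded to the dataset's storage on upload / close)
clearml-data add --files ~/data/images

# Add only files matching a wildcard
clearml-data add --wildcard "~/data/*.json"

# Reference files that remain in their original cloud location
clearml-data add --links s3://my-bucket/data
```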
## remove
Remove files/links from the dataset.
```bash
clearml-data remove [-h] [--id ID] [--files [FILES [FILES ...]]]
[--non-recursive] [--verbose]
```
**Parameters**
|Name|Description|Optional|
|---|---|---|
|`--id`|Dataset's ID. Default: previously created / accessed dataset|Yes|
|`--files`|Files / folders to remove (wildcard selection is supported, for example: `~/data/*.jpg ~/data/json`). Notice: the file path is the path within the dataset, not the local path. For links, you can specify their URL (e.g. `s3://bucket/data`)|Yes|
|`--non-recursive`|Disable recursive scan of files|Yes|
|`--verbose`|Verbose reporting|Yes|
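For example, the following removes a file by its path within the dataset (a placeholder path):

```bash
# Note: this is the path inside the dataset, not a local filesystem path
clearml-data remove --files data/images/img_01.jpg
```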
## upload
Upload the local dataset changes to the server. By default, changes are uploaded to the [ClearML Server](../deploying_clearml/clearml_server.md). It's possible to specify a different storage
medium by entering an upload destination, such as `s3://bucket`, `gs://`, `azure://`, or `/mnt/shared/`.
```bash
clearml-data upload [-h] [--id ID] [--storage STORAGE] [--chunk-size CHUNK_SIZE]
[--verbose]
```
**Parameters**
|Name|Description|Optional|
|---|---|---|
|`--id`|Dataset's ID. Default: previously created / accessed dataset|Yes|
|`--storage`|Remote storage to use for the dataset files. Default: files_server|Yes|
|`--chunk-size`|Set the dataset artifact upload chunk size in MB. Default: 512 (pass -1 for a single chunk). For example, with 512, the dataset is split and uploaded in 512 MB chunks|Yes|
|`--verbose`|Verbose reporting|Yes|
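For example, the following uploads the pending changes to a hypothetical S3 bucket in 256 MB chunks:

```bash
# The bucket URL is a placeholder
clearml-data upload --storage s3://my-bucket/datasets --chunk-size 256
```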
## close
Finalize the dataset and make it ready to be consumed. This automatically uploads all files that were not previously uploaded.
Once a dataset is finalized, it can no longer be modified.
```bash
clearml-data close [-h] [--id ID] [--storage STORAGE] [--disable-upload]
[--chunk-size CHUNK_SIZE] [--verbose]
```
**Parameters**
|Name|Description|Optional|
|---|---|---|
|`--id`|Dataset's ID. Default: previously created / accessed dataset|Yes|
|`--storage`|Remote storage to use for the dataset files. Default: files_server|Yes|
|`--disable-upload`|Disable automatic upload when closing the dataset|Yes|
|`--chunk-size`|Set the dataset artifact upload chunk size in MB. Default: 512 (pass -1 for a single chunk). For example, with 512, the dataset is split and uploaded in 512 MB chunks|Yes|
|`--verbose`|Verbose reporting|Yes|
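For example, the following finalizes the current (most recently created / accessed) dataset, uploading any files that have not been uploaded yet:

```bash
clearml-data close --verbose
```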
## sync
Sync a folder's content with ClearML. This option is useful when a single point of truth (i.e. a folder) is
updated from time to time.
When an update should be reflected in ClearML, call `clearml-data sync` and pass the folder path,
and the changes (file additions, modifications, or removals) will be reflected in ClearML.
This command also uploads the data and finalizes the dataset automatically.
```bash
clearml-data sync [-h] [--id ID] [--dataset-folder DATASET_FOLDER] --folder FOLDER
[--parents [PARENTS [PARENTS ...]]] [--project PROJECT] [--name NAME]
[--version VERSION] [--output-uri OUTPUT_URI] [--tags [TAGS [TAGS ...]]]
[--storage STORAGE] [--skip-close] [--chunk-size CHUNK_SIZE] [--verbose]
```
**Parameters**
|Name|Description|Optional|
|---|---|---|
|`--id`|Dataset's ID. Default: previously created / accessed dataset|Yes|
|`--dataset-folder`|Dataset base folder to add the files to. Default: dataset root|Yes|
|`--folder`|Local folder to sync. Wildcard selection is supported, for example: `~/data/*.jpg ~/data/json`|No|
|`--storage`|Remote storage to use for the dataset files. Default: files_server|Yes|
|`--parents`|IDs of the dataset's parents (i.e. merge all parents). All modifications made to the folder since the parents were synced will be reflected in the dataset|Yes|
|`--project`|If creating a new dataset, specify the dataset's project name|Yes|
|`--name`|If creating a new dataset, specify the dataset's name|Yes|
|`--version`|Specify the dataset's version using the [semantic versioning](https://semver.org) scheme. Default: `1.0.0`|Yes|
|`--tags`|Dataset user tags|Yes|
|`--skip-close`|Do not auto close the dataset after syncing folders|Yes|
|`--chunk-size`|Set the dataset artifact upload chunk size in MB. Default: 512 (pass -1 for a single chunk). For example, with 512, the dataset is split and uploaded in 512 MB chunks|Yes|
|`--verbose`|Verbose reporting|Yes|
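For example, the following creates and finalizes a new dataset version from a local folder (the folder, project, and dataset names are placeholders):

```bash
clearml-data sync --folder ~/data --project "Sample Project" --name "Sample Dataset"
```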
## list
List a dataset's contents.
```bash
clearml-data list [-h] [--id ID] [--project PROJECT] [--name NAME] [--version VERSION]
[--filter [FILTER [FILTER ...]]] [--modified]
```
**Parameters**
|Name|Description|Optional|
|---|---|---|
|`--id`|Dataset ID whose contents will be shown (alternatively, use a project / name combination). Default: previously accessed dataset|Yes|
|`--project`|Specify dataset project name (if used instead of ID, dataset name is also required)|Yes|
|`--name`|Specify dataset name (if used instead of ID, dataset project is also required)|Yes|
|`--version`|Specify dataset version. Default: most recent version|Yes|
|`--filter`|Filter files based on folder / wildcard. Multiple filters are supported. For example: `folder/date_*.json folder/sub-folder`|Yes|
|`--modified`|Only list file changes (add / remove / modify) introduced in this version|Yes|
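For example, the following lists only the JSON files changed in the latest version of a dataset (`<dataset_id>` is a placeholder):

```bash
clearml-data list --id <dataset_id> --filter "*.json" --modified
```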
## set-description
Sets the description of an existing dataset.
```bash
clearml-data set-description [-h] [--id ID] [--description DESCRIPTION]
```
**Parameters**
|Name|Description|Optional|
|---|---|---|
|`--id`|Dataset's ID|Yes|
|`--description`|Description to be set|Yes|
## delete
Deletes dataset(s). Pass any of the attributes of the dataset(s) you want to delete. Multiple datasets matching the
request will raise an exception, unless you pass `--entire-dataset` and `--force`. In this case, all matching datasets
will be deleted.
If a dataset is a parent of other datasets, you must pass `--force` in order to delete it.
:::warning
Deleting a parent dataset may cause child datasets to lose data!
:::
```bash
clearml-data delete [-h] [--id ID] [--project PROJECT] [--name NAME]
[--version VERSION] [--force] [--entire-dataset]
```
**Parameters**
|Name|Description|Optional|
|---|---|---|
|`--id`|ID of the dataset to delete (alternatively, use a project / name combination)|Yes|
|`--project`|Specify dataset project name (if used instead of ID, dataset name is also required)|Yes|
|`--name`|Specify dataset name (if used instead of ID, dataset project is also required)|Yes|
|`--version`|Specify dataset version|Yes|
|`--force`|Force dataset deletion even if other dataset versions depend on it. Must also be used if the `--entire-dataset` flag is used|Yes|
|`--entire-dataset`|Delete all found datasets|Yes|
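For example, the following deletes all dataset versions matching a project / name combination (placeholder names):

```bash
# --entire-dataset and --force are required when multiple datasets match
clearml-data delete --project "Sample Project" --name "Sample Dataset" --entire-dataset --force
```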
## rename
Rename a dataset (and all of its versions).
```bash
clearml-data rename [-h] --new-name NEW_NAME --project PROJECT --name NAME
```
**Parameters**
|Name|Description|Optional|
|---|---|---|
|`--new-name`|The new name of the dataset|No|
|`--project`|The project the dataset to be renamed belongs to|No|
|`--name`|The current name of the dataset(s) to be renamed|No|
## move
Moves a dataset to another project.
```bash
clearml-data move [-h] --new-project NEW_PROJECT --project PROJECT --name NAME
```
**Parameters**
|Name|Description|Optional|
|---|---|---|
|`--new-project`|The new project of the dataset|No|
|`--project`|The current project the dataset to be moved belongs to|No|
|`--name`|The name of the dataset to be moved|No|
## search
Search datasets in the system by project, name, ID, and/or tags.
Returns list of all datasets in the system that match the search request, sorted by creation time.
```bash
clearml-data search [-h] [--ids [IDS [IDS ...]]] [--project PROJECT]
[--name NAME] [--tags [TAGS [TAGS ...]]]
```
**Parameters**
|Name|Description|Optional|
|---|---|---|
|`--ids`|A list of dataset IDs|Yes|
|`--project`|The project name of the datasets|Yes|
|`--name`|A dataset name or a partial name to filter datasets by|Yes|
|`--tags`|A list of dataset user tags|Yes|
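For example, the following lists all datasets in a project that carry a given tag (placeholder values):

```bash
clearml-data search --project "Sample Project" --tags training
```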
## compare
Compare two datasets (target vs. source). The command returns a comparison summary that looks like this:
`Comparison summary: 4 files removed, 3 files modified, 0 files added`
```bash
clearml-data compare [-h] --source SOURCE --target TARGET [--verbose]
```
**Parameters**
|Name|Description|Optional|
|---|---|---|
|`--source`|Source dataset ID (used as the baseline)|No|
|`--target`|Target dataset ID (compared against the source baseline dataset)|No|
|`--verbose`|Verbose report of all file changes (instead of a summary)|Yes|
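For example (`<source_id>` and `<target_id>` are placeholder dataset IDs):

```bash
clearml-data compare --source <source_id> --target <target_id> --verbose
```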
## squash
Squash multiple datasets into a single dataset version (merge down).
```bash
clearml-data squash [-h] --name NAME --ids [IDS [IDS ...]] [--storage STORAGE] [--verbose]
```
**Parameters**
|Name|Description|Optional|
|---|---|---|
|`--name`|Name of the newly created, squashed dataset|No|
|`--ids`|Source dataset IDs to squash (merge down)|No|
|`--storage`|Remote storage to use for the dataset files. Default: files_server|Yes|
|`--verbose`|Verbose report of all file changes (instead of a summary)|Yes|
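For example, the following merges three dataset versions into a single new dataset (the name and IDs are placeholders):

```bash
clearml-data squash --name "Squashed Dataset" --ids <id_1> <id_2> <id_3>
```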
## verify
Verify that the dataset content matches the data from the local source.
```bash
clearml-data verify [-h] [--id ID] [--folder FOLDER] [--filesize] [--verbose]
```
**Parameters**
|Name|Description|Optional|
|---|---|---|
|`--id`|Specify dataset ID. Default: previously created / accessed dataset|Yes|
|`--folder`|Specify the dataset's local copy (if not provided, the local cache folder will be verified)|Yes|
|`--filesize`|If `True`, only verify file size and skip hash checks (default: `False`)|Yes|
|`--verbose`|Verbose report of all file changes (instead of a summary)|Yes|
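For example, the following verifies a local copy against the dataset using file sizes only, which is faster than full hash checks (the path is a placeholder):

```bash
clearml-data verify --folder ~/data_copy --filesize
```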
## get
Get a local copy of a dataset. By default, you get a read-only cached folder. To get a mutable copy, use the
`--copy` flag.
```bash
clearml-data get [-h] [--id ID] [--copy COPY] [--link LINK] [--part PART]
[--num-parts NUM_PARTS] [--overwrite] [--verbose]
```
**Parameters**
|Name|Description|Optional|
|---|---|---|
|`--id`|Specify dataset ID. Default: previously created / accessed dataset|Yes|
|`--copy`|Get a writable copy of the dataset to a specific output folder|Yes|
|`--link`|Create a soft link (not supported on Windows) to a read-only cached folder containing the dataset|Yes|
|`--part`|Retrieve a partial copy of the dataset. Part number (0 to `--num-parts`-1) of total parts `--num-parts`|Yes|
|`--num-parts`|Total number of parts to divide the dataset into. Notice: the minimum retrievable part is a single chunk in a dataset (or its parents). For example, a gen4 dataset with 3 parents, each with a single chunk, can be divided into 4 parts|Yes|
|`--overwrite`|If `True`, overwrite the target folder|Yes|
|`--verbose`|Verbose report of all file changes (instead of a summary)|Yes|
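For example (`<dataset_id>` and the output folder are placeholders):

```bash
# Get a writable copy of the dataset into a local folder
clearml-data get --id <dataset_id> --copy ~/datasets/sample

# Or retrieve only the first of four parts
clearml-data get --id <dataset_id> --part 0 --num-parts 4
```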
## publish
Publish the dataset for public use. The dataset must be [finalized](#close) before it is published.
```bash
clearml-data publish [-h] --id ID
```
**Parameters**
|Name|Description|Optional|
|---|---|---|
|`--id`|The dataset task ID to be published|No|
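For example (`<dataset_id>` is a placeholder for a finalized dataset's ID):

```bash
clearml-data publish --id <dataset_id>
```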