` format. For example, use `17-20 TUE` to set Tuesday's uptime to 17-20. <br/> NOTES: <br/> - This feature is available under the ClearML Enterprise plan <br/> - Make sure to configure only `--uptime` or `--downtime`, but not both.| |
+|`--use-owner-token`| Generate and use the task owner's token for the execution of the task.| |
## execute
@@ -123,23 +123,23 @@ clearml-agent execute [-h] --id TASK_ID [--log-file LOG_FILE] [--disable-monitor
### Parameters
-|Name | Description| Optional |
+|Name | Description| Mandatory |
|---|----|---|
-|`--id`| The ID of the Task to build| |
-|`--clone`| Clone the Task specified by `--id`, and then execute that cloned Task.| |
-|`--cpu-only`| Disable GPU access for the daemon, only use CPU in either docker or virtual environment.| |
-|`--docker`| Run in Docker mode. Execute the Task inside a Docker container. To specify the image name and optional arguments, use one of the following: <br/> - `--docker ` on the command line <br/> - `--docker` on the command line, and specify the default image name and arguments in the configuration file. <br/> Environment variable settings for Docker containers: <br/> - `CLEARML_DOCKER_SKIP_GPUS_FLAG` - Ignore the `--gpus` flag inside the Docker container. This also lets you execute ClearML Agent using Docker versions earlier than 19.03. <br/> - `NVIDIA_VISIBLE_DEVICES` - Limit GPU visibility for the Docker container. <br/> - `CLEARML_AGENT_GIT_USER` and `CLEARML_AGENT_GIT_PASS` - Pass these credentials to the Docker container at execution.| |
-|`--disable-monitoring`| Disable logging and monitoring, except for stdout.| |
-|`--full-monitoring`| Create a full log, including the environment setup log, Task log, and monitoring, as well as stdout.| |
-|`--git-pass`| Git password for repository access.| |
-|`--git-user`| Git username for repository access.| |
-|`--gpus`| Specify active GPUs for the daemon to use (docker / virtual environment). Equivalent to setting `NVIDIA_VISIBLE_DEVICES`. For example: <br/> - `--gpus 0` <br/> - `--gpus 0,1,2` <br/> - `--gpus all`| |
-|`-h`, `--help`| Get help for this command.| |
-|`--log-file`| Send Task execution output (stdout / stderr) to a text log file.| |
-|`--log-level`| SDK log level. The values are: <br/> - `DEBUG` <br/> - `INFO` <br/> - `WARN` <br/> - `WARNING` <br/> - `ERROR` <br/> - `CRITICAL`| |
-|`-O`| Compile optimized pyc code (see [python documentation](https://docs.python.org/3/using/cmdline.html#cmdoption-O)). Repeat for more optimization.| |
-|`--require-queue`| If the specified task is not queued, the execution will fail (used for 3rd party scheduler integration, e.g. K8s, SLURM, etc.)| |
-|`--standalone-mode`| Do not use any network connections, assume everything is pre-installed| |
+|`--id`| The ID of the Task to build| |
+|`--clone`| Clone the Task specified by `--id`, and then execute that cloned Task.| |
+|`--cpu-only`| Disable GPU access for the daemon, only use CPU in either docker or virtual environment.| |
+|`--docker`| Run in Docker mode. Execute the Task inside a Docker container. To specify the image name and optional arguments, use one of the following: <br/> - `--docker ` on the command line <br/> - `--docker` on the command line, and specify the default image name and arguments in the configuration file. <br/> Environment variable settings for Docker containers: <br/> - `CLEARML_DOCKER_SKIP_GPUS_FLAG` - Ignore the `--gpus` flag inside the Docker container. This also lets you execute ClearML Agent using Docker versions earlier than 19.03. <br/> - `NVIDIA_VISIBLE_DEVICES` - Limit GPU visibility for the Docker container. <br/> - `CLEARML_AGENT_GIT_USER` and `CLEARML_AGENT_GIT_PASS` - Pass these credentials to the Docker container at execution.| |
+|`--disable-monitoring`| Disable logging and monitoring, except for stdout.| |
+|`--full-monitoring`| Create a full log, including the environment setup log, Task log, and monitoring, as well as stdout.| |
+|`--git-pass`| Git password for repository access.| |
+|`--git-user`| Git username for repository access.| |
+|`--gpus`| Specify active GPUs for the daemon to use (docker / virtual environment). Equivalent to setting `NVIDIA_VISIBLE_DEVICES`. For example: <br/> - `--gpus 0` <br/> - `--gpus 0,1,2` <br/> - `--gpus all`| |
+|`-h`, `--help`| Get help for this command.| |
+|`--log-file`| Send Task execution output (stdout / stderr) to a text log file.| |
+|`--log-level`| SDK log level. The values are: <br/> - `DEBUG` <br/> - `INFO` <br/> - `WARN` <br/> - `WARNING` <br/> - `ERROR` <br/> - `CRITICAL`| |
+|`-O`| Compile optimized pyc code (see [python documentation](https://docs.python.org/3/using/cmdline.html#cmdoption-O)). Repeat for more optimization.| |
+|`--require-queue`| If the specified task is not queued, the execution will fail (used for 3rd party scheduler integration, e.g. K8s, SLURM, etc.)| |
+|`--standalone-mode`| Do not use any network connections, assume everything is pre-installed| |
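
For example, a previously created task can be executed inside a container with full monitoring (the task ID shown is a placeholder):

```bash
# Illustrative sketch: run an existing task in Docker mode with full console/monitoring logs.
# The task ID below is hypothetical; replace it with a real one.
clearml-agent execute --id 4f1e9a0c8e5f4b6a9d3c2b1a0f9e8d7c --docker --full-monitoring
```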
## list
diff --git a/docs/clearml_data/clearml_data_cli.md b/docs/clearml_data/clearml_data_cli.md
index a9c80966..bc205cf8 100644
--- a/docs/clearml_data/clearml_data_cli.md
+++ b/docs/clearml_data/clearml_data_cli.md
@@ -28,14 +28,14 @@ clearml-data create [-h] [--parents [PARENTS [PARENTS ...]]] [--project PROJECT]
-|Name|Description|Optional|
+|Name|Description|Mandatory|
|---|---|---|
-|`--name` |Dataset's name| |
-|`--project`|Dataset's project| |
-|`--version` |Dataset version. Use the [semantic versioning](https://semver.org) scheme. If not specified, a version will automatically be assigned| |
-|`--parents`|IDs of the dataset's parents. The dataset inherits all of its parents' content. Multiple parents can be entered, but they are merged in the order they were entered| |
-|`--output-uri`| Sets where the dataset and its previews are uploaded to| |
-|`--tags` |Dataset user tags. The dataset can be labeled, which can be useful for organizing datasets| |
+|`--name` |Dataset's name| |
+|`--project`|Dataset's project| |
+|`--version` |Dataset version. Use the [semantic versioning](https://semver.org) scheme. If not specified, a version will automatically be assigned| |
+|`--parents`|IDs of the dataset's parents. The dataset inherits all of its parents' content. Multiple parents can be entered, but they are merged in the order they were entered| |
+|`--output-uri`| Sets where the dataset and its previews are uploaded to| |
+|`--tags` |Dataset user tags. The dataset can be labeled, which can be useful for organizing datasets| |
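
For instance, a new versioned dataset could be created as follows (project and dataset names are illustrative):

```bash
# Illustrative sketch: create an empty dataset version under a project, with tags.
clearml-data create --project "Example Project" --name "raw-images" --version 1.0.0 --tags images raw
```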
@@ -63,15 +63,15 @@ clearml-data add [-h] [--id ID] [--dataset-folder DATASET_FOLDER]
@@ -90,12 +90,12 @@ clearml-data remove [-h] [--id ID] [--files [FILES [FILES ...]]]
-|Name|Description|Optional|
+|Name|Description|Mandatory|
|---|---|---|
-|`--id` | Dataset's ID. Default: previously created / accessed dataset| |
-|`--files` | Files / folders to remove (wildcard selection is supported, for example: `~/data/*.jpg ~/data/json`). Notice: file path is the path within the dataset, not the local path. For links, you can specify their URL (for example, `s3://bucket/data`) | |
-|`--non-recursive` | Disable recursive scan of files | |
-|`--verbose` | Verbose reporting | |
+|`--id` | Dataset's ID. Default: previously created / accessed dataset| |
+|`--files` | Files / folders to remove (wildcard selection is supported, for example: `~/data/*.jpg ~/data/json`). Notice: file path is the path within the dataset, not the local path. For links, you can specify their URL (for example, `s3://bucket/data`) | |
+|`--non-recursive` | Disable recursive scan of files | |
+|`--verbose` | Verbose reporting | |
@@ -121,12 +121,12 @@ clearml-data upload [-h] [--id ID] [--storage STORAGE] [--chunk-size CHUNK_SIZE]
-|Name|Description|Optional|
+|Name|Description|Mandatory|
|---|---|---|
-|`--id`| Dataset's ID. Default: previously created / accessed dataset| |
-|`--storage`| Remote storage to use for the dataset files. Default: files_server | |
-|`--chunk-size`| Set dataset artifact upload chunk size in MB. Default: 512 (pass -1 for a single chunk). Example: 512 means the dataset will be split and uploaded in 512 MB chunks. | |
-|`--verbose` | Verbose reporting | |
+|`--id`| Dataset's ID. Default: previously created / accessed dataset| |
+|`--storage`| Remote storage to use for the dataset files. Default: files_server | |
+|`--chunk-size`| Set dataset artifact upload chunk size in MB. Default: 512 (pass -1 for a single chunk). Example: 512 means the dataset will be split and uploaded in 512 MB chunks. | |
+|`--verbose` | Verbose reporting | |
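
For example, a dataset could be uploaded to a specific storage target in smaller chunks (the bucket name is a placeholder):

```bash
# Illustrative sketch: upload the current dataset's files to S3 in 256 MB chunks.
clearml-data upload --storage s3://example-bucket/datasets --chunk-size 256
```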
@@ -146,13 +146,13 @@ clearml-data close [-h] [--id ID] [--storage STORAGE] [--disable-upload]
-|Name|Description|Optional|
+|Name|Description|Mandatory|
|---|---|---|
-|`--id`| Dataset's ID. Default: previously created / accessed dataset| |
-|`--storage`| Remote storage to use for the dataset files. Default: files_server | |
-|`--disable-upload` | Disable automatic upload when closing the dataset | |
-|`--chunk-size`| Set dataset artifact upload chunk size in MB. Default: 512 (pass -1 for a single chunk). Example: 512 means the dataset will be split and uploaded in 512 MB chunks. | |
-|`--verbose` | Verbose reporting | |
+|`--id`| Dataset's ID. Default: previously created / accessed dataset| |
+|`--storage`| Remote storage to use for the dataset files. Default: files_server | |
+|`--disable-upload` | Disable automatic upload when closing the dataset | |
+|`--chunk-size`| Set dataset artifact upload chunk size in MB. Default: 512 (pass -1 for a single chunk). Example: 512 means the dataset will be split and uploaded in 512 MB chunks. | |
+|`--verbose` | Verbose reporting | |
@@ -179,20 +179,20 @@ clearml-data sync [-h] [--id ID] [--dataset-folder DATASET_FOLDER] --folder FOLD
-|Name|Description|Optional|
+|Name|Description|Mandatory|
|---|---|---|
-|`--id`| Dataset's ID. Default: previously created / accessed dataset| |
-|`--dataset-folder`|Dataset base folder to add the files to (default: Dataset root)| |
-|`--folder`|Local folder to sync. Wildcard selection is supported, for example: `~/data/*.jpg ~/data/json`| |
-|`--storage`|Remote storage to use for the dataset files. Default: files server | |
-|`--parents`|IDs of the dataset's parents (i.e. merge all parents). All modifications made to the folder since the parents were synced will be reflected in the dataset| |
-|`--project`|If creating a new dataset, specify the dataset's project name| |
-|`--name`|If creating a new dataset, specify the dataset's name| |
-|`--version`|Specify the dataset's version using the [semantic versioning](https://semver.org) scheme. Default: `1.0.0`| |
-|`--tags`|Dataset user tags| |
-|`--skip-close`|Do not auto close dataset after syncing folders| |
-|`--chunk-size`| Set dataset artifact upload chunk size in MB. Default: 512 (pass -1 for a single chunk). Example: 512 means the dataset will be split and uploaded in 512 MB chunks. | |
-|`--verbose` | Verbose reporting | |
+|`--id`| Dataset's ID. Default: previously created / accessed dataset| |
+|`--dataset-folder`|Dataset base folder to add the files to (default: Dataset root)| |
+|`--folder`|Local folder to sync. Wildcard selection is supported, for example: `~/data/*.jpg ~/data/json`| |
+|`--storage`|Remote storage to use for the dataset files. Default: files server | |
+|`--parents`|IDs of the dataset's parents (i.e. merge all parents). All modifications made to the folder since the parents were synced will be reflected in the dataset| |
+|`--project`|If creating a new dataset, specify the dataset's project name| |
+|`--name`|If creating a new dataset, specify the dataset's name| |
+|`--version`|Specify the dataset's version using the [semantic versioning](https://semver.org) scheme. Default: `1.0.0`| |
+|`--tags`|Dataset user tags| |
+|`--skip-close`|Do not auto close dataset after syncing folders| |
+|`--chunk-size`| Set dataset artifact upload chunk size in MB. Default: 512 (pass -1 for a single chunk). Example: 512 means the dataset will be split and uploaded in 512 MB chunks. | |
+|`--verbose` | Verbose reporting | |
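
For example, a local folder could be synced into a new dataset version in one step (paths and names are illustrative):

```bash
# Illustrative sketch: sync a local folder into a dataset, creating the dataset if needed.
clearml-data sync --project "Example Project" --name "raw-images" --folder ./data/images --version 1.0.1
```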
@@ -211,14 +211,14 @@ clearml-data list [-h] [--id ID] [--project PROJECT] [--name NAME] [--version VE
-|Name|Description|Optional|
+|Name|Description|Mandatory|
|---|---|---|
-|`--id`|Dataset ID whose contents will be shown (alternatively, use project / name combination). Default: previously accessed dataset| |
-|`--project`|Specify dataset project name (if used instead of ID, dataset name is also required)| |
-|`--name`|Specify dataset name (if used instead of ID, dataset project is also required)| |
-|`--version`|Specify dataset version. Default: most recent version | |
-|`--filter`|Filter files based on folder / wildcard. Multiple filters are supported. Example: `folder/date_*.json folder/subfolder`| |
-|`--modified`|Only list file changes (add / remove / modify) introduced in this version| |
+|`--id`|Dataset ID whose contents will be shown (alternatively, use project / name combination). Default: previously accessed dataset| |
+|`--project`|Specify dataset project name (if used instead of ID, dataset name is also required)| |
+|`--name`|Specify dataset name (if used instead of ID, dataset project is also required)| |
+|`--version`|Specify dataset version. Default: most recent version | |
+|`--filter`|Filter files based on folder / wildcard. Multiple filters are supported. Example: `folder/date_*.json folder/subfolder`| |
+|`--modified`|Only list file changes (add / remove / modify) introduced in this version| |
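
A dataset's contents can then be inspected by project and name, optionally narrowed by a wildcard filter, for example:

```bash
# Illustrative sketch: list only the JSON files of a dataset identified by project/name.
clearml-data list --project "Example Project" --name "raw-images" --filter "*.json"
```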
@@ -236,10 +236,10 @@ clearml-data set-description [-h] [--id ID] [--description DESCRIPTION]
-|Name|Description|Optional|
+|Name|Description|Mandatory|
|---|---|---|
-|`--id`|Dataset's ID| |
-|`--description`|Description to be set| |
+|`--id`|Dataset's ID| |
+|`--description`|Description to be set| |
@@ -268,14 +268,14 @@ clearml-data delete [-h] [--id ID] [--project PROJECT] [--name NAME]
-|Name|Description|Optional|
+|Name|Description|Mandatory|
|---|---|---|
-|`--id`|ID of the dataset to delete (alternatively, use project / name combination).| |
-|`--project`|Specify dataset project name (if used instead of ID, dataset name is also required)| |
-|`--name`|Specify dataset name (if used instead of ID, dataset project is also required)| |
-|`--version`|Specify dataset version| |
-|`--force`|Force dataset deletion even if other dataset versions depend on it. Must also be used if `--entire-dataset` flag is used| |
-|`--entire-dataset`|Delete all found datasets| |
+|`--id`|ID of the dataset to delete (alternatively, use project / name combination).| |
+|`--project`|Specify dataset project name (if used instead of ID, dataset name is also required)| |
+|`--name`|Specify dataset name (if used instead of ID, dataset project is also required)| |
+|`--version`|Specify dataset version| |
+|`--force`|Force dataset deletion even if other dataset versions depend on it. Must also be used if `--entire-dataset` flag is used| |
+|`--entire-dataset`|Delete all found datasets| |
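
For example, deleting every version of a dataset requires combining the two flags above (the dataset ID is a placeholder):

```bash
# Illustrative sketch: delete a dataset and all of its versions.
# The dataset ID below is hypothetical.
clearml-data delete --id aabb1122ccdd3344eeff5566aabb7788 --entire-dataset --force
```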
@@ -293,11 +293,11 @@ clearml-data rename [-h] --new-name NEW_NAME --project PROJECT --name NAME
-|Name|Description|Optional|
+|Name|Description|Mandatory|
|---|---|---|
-|`--new-name`|The new name of the dataset| |
-|`--project`|The project the dataset to be renamed belongs to| |
-|`--name`|The current name of the dataset(s) to be renamed| |
+|`--new-name`|The new name of the dataset| |
+|`--project`|The project the dataset to be renamed belongs to| |
+|`--name`|The current name of the dataset(s) to be renamed| |
@@ -316,11 +316,11 @@ clearml-data move [-h] --new-project NEW_PROJECT --project PROJECT --name NAME
-|Name|Description|Optional|
+|Name|Description|Mandatory|
|---|---|---|
-|`--new-project`|The new project of the dataset| |
-|`--project`|The current project the dataset to be moved belongs to| |
-|`--name`|The name of the dataset to be moved| |
+|`--new-project`|The new project of the dataset| |
+|`--project`|The current project the dataset to be moved belongs to| |
+|`--name`|The name of the dataset to be moved| |
@@ -341,12 +341,12 @@ clearml-data search [-h] [--ids [IDS [IDS ...]]] [--project PROJECT]
-|Name|Description|Optional|
+|Name|Description|Mandatory|
|---|---|---|
-|`--ids`|A list of dataset IDs| |
-|`--project`|The project name of the datasets| |
-|`--name`|A dataset name or a partial name to filter datasets by| |
-|`--tags`|A list of dataset user tags| |
+|`--ids`|A list of dataset IDs| |
+|`--project`|The project name of the datasets| |
+|`--name`|A dataset name or a partial name to filter datasets by| |
+|`--tags`|A list of dataset user tags| |
@@ -367,11 +367,11 @@ clearml-data compare [-h] --source SOURCE --target TARGET [--verbose]
-|Name|Description|Optional|
+|Name|Description|Mandatory|
|---|---|---|
-|`--source`|Source dataset ID (used as baseline)| |
-|`--target`|Target dataset ID (compare against the source baseline dataset)| |
-|`--verbose`|Verbose report all file changes (instead of summary)| |
+|`--source`|Source dataset ID (used as baseline)| |
+|`--target`|Target dataset ID (compare against the source baseline dataset)| |
+|`--verbose`|Verbose report all file changes (instead of summary)| |
@@ -387,12 +387,12 @@ clearml-data squash [-h] --name NAME --ids [IDS [IDS ...]] [--storage STORAGE] [
-|Name|Description|Optional|
+|Name|Description|Mandatory|
|---|---|---|
-|`--name`|Name of the newly created squashed dataset| |
-|`--ids`|Source dataset IDs to squash (merge down)| |
-|`--storage`|Remote storage to use for the dataset files. Default: files_server | |
-|`--verbose`|Verbose report all file changes (instead of summary)| |
+|`--name`|Name of the newly created squashed dataset| |
+|`--ids`|Source dataset IDs to squash (merge down)| |
+|`--storage`|Remote storage to use for the dataset files. Default: files_server | |
+|`--verbose`|Verbose report all file changes (instead of summary)| |
@@ -408,12 +408,12 @@ clearml-data verify [-h] [--id ID] [--folder FOLDER] [--filesize] [--verbose]
-|Name|Description|Optional|
+|Name|Description|Mandatory|
|---|---|---|
-|`--id`|Specify dataset ID. Default: previously created/accessed dataset| |
-|`--folder`|Specify dataset local copy (if not provided the local cache folder will be verified)| |
-|`--filesize`| If `True`, only verify file size and skip hash checks (default: `False`)| |
-|`--verbose`|Verbose report all file changes (instead of summary)| |
+|`--id`|Specify dataset ID. Default: previously created/accessed dataset| |
+|`--folder`|Specify dataset local copy (if not provided the local cache folder will be verified)| |
+|`--filesize`| If `True`, only verify file size and skip hash checks (default: `False`)| |
+|`--verbose`|Verbose report all file changes (instead of summary)| |
@@ -431,15 +431,15 @@ clearml-data get [-h] [--id ID] [--copy COPY] [--link LINK] [--part PART]
-|Name|Description|Optional|
+|Name|Description|Mandatory|
|---|---|---|
-|`--id`| Specify dataset ID. Default: previously created / accessed dataset| |
-|`--copy`| Get a writable copy of the dataset to a specific output folder| |
-|`--link`| Create a soft link (not supported on Windows) to a read-only cached folder containing the dataset| |
-|`--part`|Retrieve a partial copy of the dataset. Part number (0 to `--num-parts`-1) of total parts `--num-parts`.| |
-|`--num-parts`|Total number of parts to divide the dataset into. Notice: the minimum retrieved part is a single chunk in a dataset (or its parents). Example: Dataset gen4, with 3 parents, each with a single chunk, can be divided into 4 parts | |
-|`--overwrite`| If `True`, overwrite the target folder| |
-|`--verbose`| Verbose report all file changes (instead of summary)| |
+|`--id`| Specify dataset ID. Default: previously created / accessed dataset| |
+|`--copy`| Get a writable copy of the dataset to a specific output folder| |
+|`--link`| Create a soft link (not supported on Windows) to a read-only cached folder containing the dataset| |
+|`--part`|Retrieve a partial copy of the dataset. Part number (0 to `--num-parts`-1) of total parts `--num-parts`.| |
+|`--num-parts`|Total number of parts to divide the dataset into. Notice: the minimum retrieved part is a single chunk in a dataset (or its parents). Example: Dataset gen4, with 3 parents, each with a single chunk, can be divided into 4 parts | |
+|`--overwrite`| If `True`, overwrite the target folder| |
+|`--verbose`| Verbose report all file changes (instead of summary)| |
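
For example, a quarter of a large dataset could be fetched into a writable folder (the dataset ID is a placeholder):

```bash
# Illustrative sketch: copy part 0 of 4 of a dataset into a local, writable folder.
clearml-data get --id aabb1122ccdd3344eeff5566aabb7788 --copy ./dataset_part0 --part 0 --num-parts 4
```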
@@ -455,8 +455,8 @@ clearml-data publish [-h] --id ID
-|Name|Description|Optional|
+|Name|Description|Mandatory|
|---|---|---|
-|`--id`| The dataset task ID to be published.| |
+|`--id`| The dataset task ID to be published.| |
diff --git a/docs/clearml_serving/clearml_serving_cli.md b/docs/clearml_serving/clearml_serving_cli.md
index 7abfe7db..be377690 100644
--- a/docs/clearml_serving/clearml_serving_cli.md
+++ b/docs/clearml_serving/clearml_serving_cli.md
@@ -19,11 +19,11 @@ clearml-serving [-h] [--debug] [--yes] [--id ID] {list,create,metrics,config,mod
-|Name|Description|Optional|
+|Name|Description|Mandatory|
|---|---|---|
-|`--id`|Serving Service (Control plane) Task ID to configure (if not provided, automatically detect the running control plane Task) | |
-|`--debug` | Print debug messages | |
-|`--yes` |Always answer YES on interactive inputs| |
+|`--id`|Serving Service (Control plane) Task ID to configure (if not provided, automatically detect the running control plane Task) | |
+|`--debug` | Print debug messages | |
+|`--yes` |Always answer YES on interactive inputs| |
@@ -51,11 +51,11 @@ clearml-serving create [-h] [--name NAME] [--tags TAGS [TAGS ...]] [--project PR
-|Name|Description|Optional|
+|Name|Description|Mandatory|
|---|---|---|
-|`--name` |Serving service's name. Default: `Serving-Service`| |
-|`--project`|Serving service's project. Default: `DevOps`| |
-|`--tags` |Serving service's user tags. The serving service can be labeled, which can be useful for organizing | |
+|`--name` |Serving service's name. Default: `Serving-Service`| |
+|`--project`|Serving service's project. Default: `DevOps`| |
+|`--tags` |Serving service's user tags. The serving service can be labeled, which can be useful for organizing | |
@@ -81,13 +81,13 @@ clearml-serving metrics add [-h] --endpoint ENDPOINT [--log-freq LOG_FREQ]
-|Name|Description|Optional|
+|Name|Description|Mandatory|
|---|---|---|
-|`--endpoint`|Metric endpoint name including version (e.g. `"model/1"` or a prefix `"model/*"`). Notice: it will override any previously logged endpoint metrics| |
-|`--log-freq`|Logging request frequency, between 0.0 and 1.0. Example: 1.0 means all requests are logged, 0.5 means half of the requests are logged. If not specified, the global logging frequency is used (see [`config --metric-log-freq`](#config))| |
-|`--variable-scalar`|Add a float (scalar) argument to the metric logger, `<name>=<value>`. Example: with specific buckets: `"x1=0,0.2,0.4,0.6,0.8,1"` or with min/max/num_buckets `"x1=0.0/1.0/5"`. Notice: In cases where 1000s of requests per second reach the serving service, it makes no sense to display every datapoint, so scalars are divided into buckets (for example, per minute); it is then possible to calculate what % of the total traffic fell into bucket 1, bucket 2, bucket 3, etc. The Y axis represents the buckets, color is the % of traffic in that bucket, and X is time. | |
-|`--variable-enum`|Add an enum (string) argument to the metric logger, `<name>=<value>`. Example: `"detect=cat,dog,sheep"` | |
-|`--variable-value`|Add a non-sampled scalar argument to the metric logger, `<name>`. Example: `"latency"` | |
+|`--endpoint`|Metric endpoint name including version (e.g. `"model/1"` or a prefix `"model/*"`). Notice: it will override any previously logged endpoint metrics| |
+|`--log-freq`|Logging request frequency, between 0.0 and 1.0. Example: 1.0 means all requests are logged, 0.5 means half of the requests are logged. If not specified, the global logging frequency is used (see [`config --metric-log-freq`](#config))| |
+|`--variable-scalar`|Add a float (scalar) argument to the metric logger, `<name>=<value>`. Example: with specific buckets: `"x1=0,0.2,0.4,0.6,0.8,1"` or with min/max/num_buckets `"x1=0.0/1.0/5"`. Notice: In cases where 1000s of requests per second reach the serving service, it makes no sense to display every datapoint, so scalars are divided into buckets (for example, per minute); it is then possible to calculate what % of the total traffic fell into bucket 1, bucket 2, bucket 3, etc. The Y axis represents the buckets, color is the % of traffic in that bucket, and X is time. | |
+|`--variable-enum`|Add an enum (string) argument to the metric logger, `<name>=<value>`. Example: `"detect=cat,dog,sheep"` | |
+|`--variable-value`|Add a non-sampled scalar argument to the metric logger, `<name>`. Example: `"latency"` | |
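
For example, an endpoint's latency and a model output could be tracked with a bucketed scalar and an enum, using the documented example values (the endpoint and variable names are illustrative):

```bash
# Illustrative sketch: log metrics for a serving endpoint, bucketing a scalar into 5 bins
# between 0.0 and 1.0 and recording an enum output; all names are hypothetical.
clearml-serving metrics add --endpoint "my_model/1" \
    --variable-scalar "x1=0.0/1.0/5" \
    --variable-enum "detect=cat,dog,sheep" \
    --variable-value "latency"
```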
@@ -103,10 +103,10 @@ clearml-serving metrics remove [-h] [--endpoint ENDPOINT]
@@ -135,12 +135,12 @@ clearml-serving config [-h] [--base-serving-url BASE_SERVING_URL]
-|Name|Description|Optional|
+|Name|Description|Mandatory|
|---|---|---|
-|`--base-serving-url`|External base serving service url. Example: `http://127.0.0.1:8080/serve`| |
-|`--triton-grpc-server`|External ClearML-Triton serving container gRPC address. Example: `127.0.0.1:9001`| |
-|`--kafka-metric-server`|External Kafka service url. Example: `127.0.0.1:9092`| |
-|`--metric-log-freq`|Set default metric logging frequency between 0.0 and 1.0. 1.0 means that 100% of all requests are logged| |
+|`--base-serving-url`|External base serving service url. Example: `http://127.0.0.1:8080/serve`| |
+|`--triton-grpc-server`|External ClearML-Triton serving container gRPC address. Example: `127.0.0.1:9001`| |
+|`--kafka-metric-server`|External Kafka service url. Example: `127.0.0.1:9092`| |
+|`--metric-log-freq`|Set default metric logging frequency between 0.0 and 1.0. 1.0 means that 100% of all requests are logged| |
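
For example, the serving session's external endpoints and default logging rate could be set in one call, reusing the example addresses from the table above (all addresses are placeholders):

```bash
# Illustrative sketch: point the serving service at local Triton/Kafka endpoints
# and log 10% of requests by default.
clearml-serving config --base-serving-url http://127.0.0.1:8080/serve \
    --triton-grpc-server 127.0.0.1:9001 \
    --kafka-metric-server 127.0.0.1:9092 \
    --metric-log-freq 0.1
```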
@@ -174,9 +174,9 @@ clearml-serving model remove [-h] [--endpoint ENDPOINT]
-|Name|Description|Optional|
+|Name|Description|Mandatory|
|---|---|---|
-|`--endpoint` | Model endpoint name | |
+|`--endpoint` | Model endpoint name | |
@@ -193,16 +193,16 @@ clearml-serving model upload [-h] --name NAME [--tags TAGS [TAGS ...]] --project
-|Name|Description|Optional|
+|Name|Description|Mandatory|
|---|---|---|
-|`--name`|Specify the model name to be registered| |
-|`--tags`| Add tags to the newly created model| |
-|`--project`| Specify the project for the model to be registered in| |
-|`--framework`| Specify the model framework. Options are: 'tensorflow', 'tensorflowjs', 'tensorflowlite', 'pytorch', 'torchscript', 'caffe', 'caffe2', 'onnx', 'keras', 'mknet', 'cntk', 'torch', 'darknet', 'paddlepaddle', 'scikitlearn', 'xgboost', 'lightgbm', 'parquet', 'megengine', 'catboost', 'tensorrt', 'openvino', 'custom' | |
-|`--publish`| Publish the newly created model (change model state to "published", i.e. locked and ready to deploy)| |
-|`--path`|Specify a model file/folder to be uploaded and registered| |
-|`--url`| Specify an already uploaded model url (e.g. `s3://bucket/model.bin`, `gs://bucket/model.bin`)| |
-|`--destination`|Specify the target destination for the model to be uploaded. For example: `s3://bucket/folder/`, `s3://host_addr:port/bucket` (for non-AWS S3-like services like MinIO), `gs://bucket-name/folder`, `azure://<account>.blob.core.windows.net/path/to/file`| |
+|`--name`|Specify the model name to be registered| |
+|`--tags`| Add tags to the newly created model| |
+|`--project`| Specify the project for the model to be registered in| |
+|`--framework`| Specify the model framework. Options are: 'tensorflow', 'tensorflowjs', 'tensorflowlite', 'pytorch', 'torchscript', 'caffe', 'caffe2', 'onnx', 'keras', 'mknet', 'cntk', 'torch', 'darknet', 'paddlepaddle', 'scikitlearn', 'xgboost', 'lightgbm', 'parquet', 'megengine', 'catboost', 'tensorrt', 'openvino', 'custom' | |
+|`--publish`| Publish the newly created model (change model state to "published", i.e. locked and ready to deploy)| |
+|`--path`|Specify a model file/folder to be uploaded and registered| |
+|`--url`| Specify an already uploaded model url (e.g. `s3://bucket/model.bin`, `gs://bucket/model.bin`)| |
+|`--destination`|Specify the target destination for the model to be uploaded. For example: `s3://bucket/folder/`, `s3://host_addr:port/bucket` (for non-AWS S3-like services like MinIO), `gs://bucket-name/folder`, `azure://<account>.blob.core.windows.net/path/to/file`| |
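
For instance, a locally trained scikit-learn model file could be registered under a project and published (the file path and names are illustrative):

```bash
# Illustrative sketch: register a local model file under a project and publish it.
clearml-serving model upload --name "sklearn-model" --project "Example Project" \
    --framework scikitlearn --path ./model.pkl --publish
```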
@@ -221,12 +221,12 @@ clearml-serving model canary [-h] [--endpoint ENDPOINT] [--weights WEIGHTS [WEIG
-|Name|Description|Optional|
+|Name|Description|Mandatory|
|---|---|---|
-|`--endpoint`| Model canary serving endpoint name (e.g. `my_model/latest`)| |
-|`--weights`| Model canary weights, in order matching the model endpoints (e.g. 0.2 0.8) | |
-|`--input-endpoints`|Model endpoint prefixes, can also include version (e.g. `my_model`, `my_model/v1`)| |
-|`--input-endpoint-prefix`| Model endpoint prefix, ordered lexicographically or by version (e.g. `my_model/1`, `my_model/v1`), where the first weight matches the last version.| |
+|`--endpoint`| Model canary serving endpoint name (e.g. `my_model/latest`)| |
+|`--weights`| Model canary weights, in order matching the model endpoints (e.g. 0.2 0.8) | |
+|`--input-endpoints`|Model endpoint prefixes, can also include version (e.g. `my_model`, `my_model/v1`)| |
+|`--input-endpoint-prefix`| Model endpoint prefix, ordered lexicographically or by version (e.g. `my_model/1`, `my_model/v1`), where the first weight matches the last version.| |
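
For example, traffic could be split 20/80 between two versions of the same model (the endpoint names are illustrative):

```bash
# Illustrative sketch: route 20% of traffic to version 1 and 80% to version 2
# of a hypothetical model endpoint.
clearml-serving model canary --endpoint "my_model/latest" --weights 0.2 0.8 \
    --input-endpoints my_model/1 my_model/2
```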
@@ -250,23 +250,23 @@ clearml-serving model auto-update [-h] [--endpoint ENDPOINT] --engine ENGINE
-|Name|Description|Optional|
+|Name|Description|Mandatory|
|---|---|---|
-|`--endpoint`| Base model endpoint (must be unique)| |
-|`--engine`| Model endpoint serving engine (triton, sklearn, xgboost, lightgbm)| |
-|`--max-versions`|Max versions to store (and create endpoints) for the model. Highest number is the latest version | |
-|`--name`| Specify model name to be selected and auto-updated (notice: regexp selection, use `"$name^"` for an exact match) | |
-|`--tags`|Specify tags to be selected and auto-updated | |
-|`--project`|Specify model project to be selected and auto-updated | |
-|`--published`| Only select published models for auto-update | |
-|`--preprocess` |Specify Pre/Post processing code to be used with the model (point to local file / folder) - this should hold for all the models | |
-|`--input-size`| Specify the model matrix input size [Rows x Columns X Channels etc ...] | |
-|`--input-type`| Specify the model matrix input type. Examples: uint8, float32, int16, float16 etc. | |
-|`--input-name`|Specify the model layer pushing input into. Example: layer_0 | |
-|`--output-size`|Specify the model matrix output size [Rows x Columns X Channels etc ...]| |
-|`--output_type`| Specify the model matrix output type. Examples: uint8, float32, int16, float16 etc. | |
-|`--output-name`|Specify the model layer pulling results from. Example: layer_99| |
-|`--aux-config`| Specify additional engine-specific auxiliary configuration in the form of key=value. Example: `platform=onnxruntime_onnx response_cache.enable=true max_batch_size=8`. Notice: you can also pass a full configuration file (e.g. Triton "config.pbtxt")| |
+|`--endpoint`| Base model endpoint (must be unique)| |
+|`--engine`| Model endpoint serving engine (triton, sklearn, xgboost, lightgbm)| |
+|`--max-versions`|Max versions to store (and create endpoints) for the model. Highest number is the latest version | |
+|`--name`| Specify model name to be selected and auto-updated (notice: regexp selection, use `"$name^"` for an exact match) | |
+|`--tags`|Specify tags to be selected and auto-updated | |
+|`--project`|Specify model project to be selected and auto-updated | |
+|`--published`| Only select published models for auto-update | |
+|`--preprocess` |Specify Pre/Post processing code to be used with the model (point to local file / folder) - this should hold for all the models | |
+|`--input-size`| Specify the model matrix input size [Rows x Columns X Channels etc ...] | |
+|`--input-type`| Specify the model matrix input type. Examples: uint8, float32, int16, float16 etc. | |
+|`--input-name`|Specify the model layer pushing input into. Example: layer_0 | |
+|`--output-size`|Specify the model matrix output size [Rows x Columns X Channels etc ...]| |
+|`--output_type`| Specify the model matrix output type. Examples: uint8, float32, int16, float16 etc. | |
+|`--output-name`|Specify the model layer pulling results from. Example: layer_99| |
+|`--aux-config`| Specify additional engine-specific auxiliary configuration in the form of key=value. Example: `platform=onnxruntime_onnx response_cache.enable=true max_batch_size=8`. Notice: you can also pass a full configuration file (e.g. Triton "config.pbtxt")| |
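
For example, an endpoint could be set to automatically serve the two most recent published versions of a model (the names and preprocessing file are illustrative):

```bash
# Illustrative sketch: auto-deploy the latest published versions of a hypothetical
# "sklearn-model" behind a sklearn engine endpoint, keeping at most 2 live versions.
clearml-serving model auto-update --engine sklearn --endpoint "sklearn-model" \
    --name "sklearn-model" --project "Example Project" --published --max-versions 2 \
    --preprocess ./preprocess.py
```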
@@ -289,24 +289,24 @@ clearml-serving model add [-h] --engine ENGINE --endpoint ENDPOINT [--version VE
-|Name|Description|Optional|
+|Name|Description|Mandatory|
|---|---|---|
-|`--engine`| Model endpoint serving engine (triton, sklearn, xgboost, lightgbm)| |
-|`--endpoint`| Base model endpoint (must be unique)| |
-|`--version`|Model endpoint version (default: None) | |
-|`--model-id`|Specify a model ID to be served| |
-|`--preprocess` |Specify Pre/Post processing code to be used with the model (point to local file / folder) - this should hold for all the models | |
-|`--input-size`| Specify the model matrix input size [Rows x Columns X Channels etc ...] | |
-|`--input-type`| Specify the model matrix input type. Examples: uint8, float32, int16, float16 etc. | |
-|`--input-name`|Specify the model layer pushing input into. Example: layer_0 | |
-|`--output-size`|Specify the model matrix output size [Rows x Columns X Channels etc ...]| |
-|`--output_type`| Specify the model matrix output type. Examples: uint8, float32, int16, float16 etc. | |
-|`--output-name`|Specify the model layer pulling results from. Example: layer_99| |
-|`--aux-config`| Specify additional engine-specific auxiliary configuration in the form of key=value. Example: `platform=onnxruntime_onnx response_cache.enable=true max_batch_size=8`. Notice: you can also pass a full configuration file (e.g. Triton "config.pbtxt")| |
-|`--name`| Instead of specifying `--model-id`, select based on model name | |
-|`--tags`|Specify tags to be selected and auto-updated | |
-|`--project`|Instead of specifying `--model-id`, select based on model project | |
-|`--published`| Instead of specifying `--model-id`, select only published models | |
+|`--engine`| Model endpoint serving engine (triton, sklearn, xgboost, lightgbm)| |
+|`--endpoint`| Base model endpoint (must be unique)| |
+|`--version`|Model endpoint version (default: None) | |
+|`--model-id`|Specify a model ID to be served| |
+|`--preprocess` |Specify Pre/Post processing code to be used with the model (point to local file / folder) - this should hold for all the models | |
+|`--input-size`| Specify the model matrix input size [Rows x Columns X Channels etc ...] | |
+|`--input-type`| Specify the model matrix input type. Examples: uint8, float32, int16, float16 etc. | |
+|`--input-name`|Specify the model layer pushing input into. Example: layer_0 | |
+|`--output-size`|Specify the model matrix output size [Rows x Columns X Channels etc ...]| |
+|`--output_type`| Specify the model matrix output type. Examples: uint8, float32, int16, float16 etc. | |
+|`--output-name`|Specify the model layer pulling results from. Example: layer_99| |
+|`--aux-config`| Specify additional engine-specific auxiliary configuration in the form of key=value. Example: `platform=onnxruntime_onnx response_cache.enable=true max_batch_size=8`. Notice: you can also pass a full configuration file (e.g. Triton "config.pbtxt")| |
+|`--name`| Instead of specifying `--model-id`, select based on model name | |
+|`--tags`|Specify tags to be selected and auto-updated | |
+|`--project`|Instead of specifying `--model-id`, select based on model project | |
+|`--published`| Instead of specifying `--model-id`, select only published models | |
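
For example, a specific registered model could be served on a new static endpoint (the model ID and preprocessing file are placeholders):

```bash
# Illustrative sketch: serve a specific registered model on a sklearn engine endpoint,
# together with its pre/post-processing code; the model ID below is hypothetical.
clearml-serving model add --engine sklearn --endpoint "sklearn-model" \
    --model-id aabb1122ccdd3344eeff5566aabb7788 --preprocess ./preprocess.py
```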