diff --git a/docs/clearml_serving/clearml_serving_cli.md b/docs/clearml_serving/clearml_serving_cli.md
index fb9795d9..96c4a8c4 100644
--- a/docs/clearml_serving/clearml_serving_cli.md
+++ b/docs/clearml_serving/clearml_serving_cli.md
@@ -185,7 +185,7 @@ Upload and register model files/folder.
 
 ```bash
 clearml-serving model upload [-h] --name NAME [--tags TAGS [TAGS ...]] --project PROJECT
-                             [--framework {scikit-learn,xgboost,lightgbm,tensorflow,pytorch}]
+                             [--framework {tensorflow,tensorflowjs,tensorflowlite,pytorch,torchscript,caffe,caffe2,onnx,keras,mknet,cntk,torch,darknet,paddlepaddle,scikitlearn,xgboost,lightgbm,parquet,megengine,catboost,tensorrt,openvino,custom}]
                              [--publish] [--path PATH] [--url URL]
                              [--destination DESTINATION]
 ```
@@ -198,7 +198,7 @@ clearml-serving model upload [-h] --name NAME [--tags TAGS [TAGS ...]] --project
 |`--name`|Specifying the model name to be registered in| No|
 |`--tags`| Add tags to the newly created model| Yes|
 |`--project`| Specify the project for the model to be registered in| No|
-|`--framework`| Specify the model framework. Options are: "scikit-learn", "xgboost", "lightgbm", "tensorflow", "pytorch" | Yes|
+|`--framework`| Specify the model framework. Options are: 'tensorflow', 'tensorflowjs', 'tensorflowlite', 'pytorch', 'torchscript', 'caffe', 'caffe2', 'onnx', 'keras', 'mknet', 'cntk', 'torch', 'darknet', 'paddlepaddle', 'scikitlearn', 'xgboost', 'lightgbm', 'parquet', 'megengine', 'catboost', 'tensorrt', 'openvino', 'custom' | Yes|
 |`--publish`| Publish the newly created model (change model state to "published" (i.e. locked and ready to deploy)|Yes|
 |`--path`|Specify a model file/folder to be uploaded and registered| Yes|
 |`--url`| Specify an already uploaded model url (e.g. `s3://bucket/model.bin`, `gs://bucket/model.bin`)|Yes|
@@ -294,7 +294,7 @@ clearml-serving model add [-h] --engine ENGINE --endpoint ENDPOINT [--version VE
 |`--engine`| Model endpoint serving engine (triton, sklearn, xgboost, lightgbm)| No|
 |`--endpoint`| Base model endpoint (must be unique)| No|
 |`--version`|Model endpoint version (default: None) | Yes|
-|`model-id`|Specify a model ID to be served|No|
+|`--model-id`|Specify a model ID to be served|No|
 |`--preprocess` |Specify Pre/Post processing code to be used with the model (point to local file / folder) - this should hold for all the models |Yes|
 |`--input-size`| Specify the model matrix input size [Rows x Columns X Channels etc ...] | Yes|
 |`--input-type`| Specify the model matrix input type. Examples: uint8, float32, int16, float16 etc. |Yes|
@@ -303,10 +303,10 @@ clearml-serving model add [-h] --engine ENGINE --endpoint ENDPOINT [--version VE
 |`--output_type`| Specify the model matrix output type. Examples: uint8, float32, int16, float16 etc. | Yes|
 |`--output-name`|Specify the model layer pulling results from. Examples: layer_99| Yes|
 |`--aux-config`| Specify additional engine specific auxiliary configuration in the form of key=value. Example: `platform=onnxruntime_onnx response_cache.enable=true max_batch_size=8`. Notice: you can also pass a full configuration file (e.g. Triton "config.pbtxt")|Yes|
-|`--name`| Instead of specifying `model-id` select based on model name | Yes|
+|`--name`| Instead of specifying `--model-id` select based on model name | Yes|
 |`--tags`|Specify tags to be selected and auto-updated |Yes|
-|`--project`|Instead of specifying `model-id` select based on model project | Yes|
-|`--published`| Instead of specifying `model-id` select based on model published |Yes|
+|`--project`|Instead of specifying `--model-id` select based on model project | Yes|
+|`--published`| Instead of specifying `--model-id` select based on model published |Yes|
diff --git a/docs/clearml_serving/clearml_serving_tutorial.md b/docs/clearml_serving/clearml_serving_tutorial.md
index 604c8c49..206c798f 100644
--- a/docs/clearml_serving/clearml_serving_tutorial.md
+++ b/docs/clearml_serving/clearml_serving_tutorial.md
@@ -38,7 +38,7 @@ clearml-serving --id <service_id> model add --engine sklearn --endpoint "test_mo
 
 :::info Service ID
 Make sure that you have executed `clearml-serving`'s
-[initial setup](clearml_serving.md#initial-setup), in which you create a Serving Service.
+[initial setup](clearml_serving_setup.md#initial-setup), in which you create a Serving Service.
 The Serving Service's ID is required to register a model, and to execute `clearml-serving`'s
 `metrics` and `config` commands
 :::
@@ -85,10 +85,10 @@ Uploading an existing model file into the model repository can be done via the `
 or with the `clearml-serving` CLI.
 
 1. Upload the model file to the `clearml-server` file storage and register it. The `--path` parameter is used to input
-   the path to a local model file.
+   the path to a local model file (the local model created in [step 1](#step-1--train-model), located in `./sklearn-model.pkl`).
 
    ```bash
-   clearml-serving --id <service_id> model upload --name "manual sklearn model" --project "serving examples" --framework "scikit-learn" --path examples/sklearn/sklearn-model.pkl
+   clearml-serving --id <service_id> model upload --name "manual sklearn model" --project "serving examples" --framework "scikitlearn" --path ./sklearn-model.pkl
   ```
 
    You now have a new Model named `manual sklearn model` in the `serving examples` project. The CLI output prints
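A note on why the tutorial's upload command changes from `--framework "scikit-learn"` to `--framework "scikitlearn"`: the CLI validates `--framework` against a fixed choice list, and the updated usage line above no longer includes the hyphenated spelling. A minimal sketch of that validation, assuming argparse-style `choices` behavior; the `FRAMEWORKS` set is copied from the updated usage line, while `check_framework` is a hypothetical stand-in for the CLI's own check, not actual ClearML code:

```python
# Framework names accepted by `clearml-serving model upload` after this change,
# copied from the updated usage line. Note "scikitlearn" (no hyphen); the old
# docs' "scikit-learn" is not in the list and would be rejected.
FRAMEWORKS = {
    "tensorflow", "tensorflowjs", "tensorflowlite", "pytorch", "torchscript",
    "caffe", "caffe2", "onnx", "keras", "mknet", "cntk", "torch", "darknet",
    "paddlepaddle", "scikitlearn", "xgboost", "lightgbm", "parquet",
    "megengine", "catboost", "tensorrt", "openvino", "custom",
}

def check_framework(name: str) -> str:
    """Illustrative stand-in for the CLI's argparse `choices` validation."""
    if name not in FRAMEWORKS:
        raise ValueError(f"argument --framework: invalid choice: {name!r}")
    return name

check_framework("scikitlearn")   # accepted
# check_framework("scikit-learn")  would raise ValueError
```

This mirrors how argparse reports `invalid choice` for a value outside `choices`, which is the error a user following the old docs would hit.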