Small edits (#965)

pollfly 2024-11-17 13:32:05 +02:00 committed by GitHub
parent 8f268402a1
commit 77b1b15962
2 changed files with 40 additions and 40 deletions


@@ -7,7 +7,7 @@ In this tutorial, you will go over the model lifecycle -- from training to servi
* Serving the model using **ClearML Serving**
* Spinning the inference container
The tutorial also covers the following `clearml-serving` features:
* Automatic model deployment
* Canary endpoints
* Model performance monitoring
@@ -19,18 +19,17 @@ Before executing the steps below, make sure you have completed `clearml-serving`
## Steps
### Step 1: Train Model
Train a model using the example script. Start from the root directory of your local `clearml-serving` repository.
1. Create a Python virtual environment
1. Install the script requirements: `pip3 install -r examples/sklearn/requirements.txt`
1. Execute the [training script](https://github.com/allegroai/clearml-serving/blob/main/examples/sklearn/train_model.py): `python3 examples/sklearn/train_model.py`
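Taken together, the steps above can be sketched as a single shell session (assuming a Unix-like shell with `python3` and its `venv` module available, run from the repository root):

```bash
# Sketch of Step 1, run from the root of your clearml-serving clone.
python3 -m venv .venv                                 # create a virtual environment
source .venv/bin/activate                             # activate it
pip3 install -r examples/sklearn/requirements.txt     # install the script requirements
python3 examples/sklearn/train_model.py               # train and auto-register the model
```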
During execution, ClearML automatically registers the sklearn model and uploads it to the model repository.
For information about explicit model registration, see [Registering and Deploying New Models Manually](#registering-and-deploying-new-models-manually).
### Step 2: Register Model
Register the new Model on the Serving Service:
```bash
clearml-serving --id <service_id> model add --engine sklearn --endpoint "test_model_sklearn" --preprocess "examples/sklearn/preprocess.py" --name "train sklearn model - sklearn-model" --project "serving examples"
```
@@ -44,42 +43,43 @@ The Serving Service's ID is required to register a model, and to execute `clearm
:::note
The preprocessing Python code is packaged and uploaded to the Serving Service, to be used by any inference container,
and downloaded in real time when updated.
:::
### Step 3: Spin Inference Container
Spin the Inference Container:
1. Customize container [Dockerfile](https://github.com/allegroai/clearml-serving/blob/main/clearml_serving/serving/Dockerfile) if needed
1. Build container:
   ```bash
   docker build --tag clearml-serving-inference:latest -f clearml_serving/serving/Dockerfile .
   ```
1. Spin the inference container:
   ```bash
   docker run -v ~/clearml.conf:/root/clearml.conf -p 8080:8080 -e CLEARML_SERVING_TASK_ID=<service_id> -e CLEARML_SERVING_POLL_FREQ=5 clearml-serving-inference:latest
   ```
1. Test the new model inference endpoint:
   ```bash
   curl -X POST "http://127.0.0.1:8080/serve/test_model_sklearn" -H "accept: application/json" -H "Content-Type: application/json" -d '{"x0": 1, "x1": 2}'
   ```
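Container startup can take a moment before the endpoint answers. As a sketch (not part of the tutorial; assumes `curl` and `python3` are installed and the inference container from the previous step is listening on port 8080), you can wait for readiness and then run a small smoke test:

```bash
# Hypothetical smoke test for the endpoint spun up above.

# Wait (up to ~60 seconds) for the container to start answering.
for _ in $(seq 1 30); do
  curl -s -o /dev/null "http://127.0.0.1:8080" && break
  sleep 2
done

# Send a few requests and pretty-print each JSON response.
for i in 1 2 3; do
  curl -s -X POST "http://127.0.0.1:8080/serve/test_model_sklearn" \
    -H "accept: application/json" -H "Content-Type: application/json" \
    -d "{\"x0\": $i, \"x1\": 2}" | python3 -m json.tool
done
```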
Now that you have an inference container running, you can add new model inference endpoints directly with the CLI. The
inference container will automatically sync every 5 minutes. The first few requests may take longer while the inference
container downloads the model file and preprocessing Python code. Once everything is cached, responses will return
almost immediately.
:::note
Review the model repository in the ClearML web UI, under the "serving examples" Project on your ClearML
account/server ([free hosted](https://app.clear.ml) or [self-deployed](https://github.com/allegroai/clearml-server)).
Inference services status, console outputs and machine metrics are available in the ClearML UI in the Serving Service
project (default: "DevOps" project).
:::
## Registering and Deploying New Models Manually
@@ -119,18 +119,18 @@ model name
The ClearML Serving Service supports automatic model deployment and upgrades, which is connected with the model
repository and API. When the model auto-deploy is configured, new model versions will be automatically deployed when you
`publish` or `tag` a new model in the ClearML model repository. This automation interface allows for a simpler CI/CD model
deployment process, as a single API automatically deploys (or removes) a model from the Serving Service.
#### Automatic Model Deployment Example
1. Configure the model auto-update on the Serving Service:
   ```bash
   clearml-serving --id <service_id> model auto-update --engine sklearn --endpoint "test_model_sklearn_auto" --preprocess "preprocess.py" --name "train sklearn model" --project "serving examples" --max-versions 2
   ```
1. Deploy the Inference container (if not already deployed)
1. Publish a new model in the model repository in one of the following ways:
   - Go to the "serving examples" project in the ClearML web UI, click on the Models Tab, search for "train sklearn model", right-click, and select "Publish"
   - Use the RestAPI (see [details](https://clear.ml/docs/latest/docs/references/api/models#post-modelspublish_many))
   - Use the Python interface:
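Once a new model version is published through any of the methods above, the auto-update configuration deploys it under a numbered endpoint. As a sketch (the endpoint path follows the `--endpoint` value configured above; the version number shown is an assumption, since versions are assigned by the service):

```bash
# Hypothetical request to the first auto-deployed version of the endpoint.
curl -X POST "http://127.0.0.1:8080/serve/test_model_sklearn_auto/1" \
  -H "accept: application/json" -H "Content-Type: application/json" \
  -d '{"x0": 1, "x1": 2}'
```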
@@ -251,10 +251,10 @@ You can also specify per-endpoint log frequency with the `clearml-serving` CLI.
See examples of ClearML Serving with other supported frameworks:
* [scikit-learn](https://github.com/allegroai/clearml-serving/blob/main/examples/sklearn/readme.md) - Random data
* [scikit-learn Model Ensemble](https://github.com/allegroai/clearml-serving/blob/main/examples/ensemble/readme.md) - Random data
* [XGBoost](https://github.com/allegroai/clearml-serving/blob/main/examples/xgboost/readme.md) - Iris dataset
* [LightGBM](https://github.com/allegroai/clearml-serving/blob/main/examples/lightgbm/readme.md) - Iris dataset
* [PyTorch](https://github.com/allegroai/clearml-serving/blob/main/examples/pytorch/readme.md) - MNIST dataset
* [TensorFlow/Keras](https://github.com/allegroai/clearml-serving/blob/main/examples/keras/readme.md) - MNIST dataset
* [Model Pipeline](https://github.com/allegroai/clearml-serving/blob/main/examples/pipeline/readme.md) - Random data


@@ -39,8 +39,8 @@ Hover over the model configuration area to access the following actions:
* <img src="/docs/latest/icons/ico-search.svg" alt="Magnifying glass" className="icon size-md space-sm" /> Search
* <img src="/docs/latest/icons/ico-copy-to-clipboard.svg" alt="Copy" className="icon size-md space-sm" /> Copy configuration
* <img src="/docs/latest/icons/ico-trash.svg" alt="Trash" className="icon size-md space-sm" /> **CLEAR** (for *Draft* models) - Delete the configuration
* **EDIT** (for *Draft* models) - Modify / Add model configuration
## Label Enumeration