Add missing requirements

allegroai 2022-03-06 02:05:52 +02:00
parent 78436106f5
commit 451e335ceb
11 changed files with 38 additions and 87 deletions


@@ -248,7 +248,7 @@ Example:
- `curl -X POST "http://127.0.0.1:8080/serve/test_model" -H "accept: application/json" -H "Content-Type: application/json" -d '{"x0": 1, "x1": 2}'`
-### Model inference Examples
+### Model Serving Examples
- Scikit-Learn [example](examples/sklearn/readme.md) - random data
- XGBoost [example](examples/xgboost/readme.md) - iris dataset
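The curl request above can also be issued from Python. A minimal sketch using only the standard library, with the same endpoint and payload as the curl example (the serving service must be running on 127.0.0.1:8080):

```python
import json
from urllib import request

# Same endpoint and JSON body as the curl example above
url = "http://127.0.0.1:8080/serve/test_model"
body = json.dumps({"x0": 1, "x1": 2}).encode("utf-8")

req = request.Request(
    url,
    data=body,
    headers={"accept": "application/json", "Content-Type": "application/json"},
    method="POST",
)

# Uncomment once the serving service is up:
# with request.urlopen(req) as resp:
#     print(json.loads(resp.read()))
```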


@@ -1,6 +1,6 @@
clearml >= 1.1.6
clearml-serving
-tritonclient
+tritonclient[grpc]
grpcio
Pillow
pathlib2


@@ -11,4 +11,6 @@ numpy
pandas
scikit-learn
grpcio
Pillow
+xgboost
+lightgbm


@@ -1,10 +1,11 @@
# Train and Deploy Keras model with Nvidia Triton Engine
-## training mock model
+## training mnist digit classifier model
Run the mock python training code
```bash
-python3 train_keras_mnist.py
+pip install -r examples/keras/requirements.txt
+python examples/keras/train_keras_mnist.py
```
The output will be a model created on the project "serving examples", by the name "train keras model"
@@ -13,10 +14,10 @@ The output will be a model created on the project "serving examples", by the name "train keras model"
1. Create serving Service: `clearml-serving create --name "serving example"` (write down the service ID)
2. Create model endpoint:
-`clearml-serving --id <service_id> model add --engine triton --endpoint "test_model_keras" --preprocess "preprocess.py" --name "train keras model" --project "serving examples" --input-size 1 784 --input-name "dense_input" --input-type float32 --output-size -1 10 --output-name "activation_2" --output-type float32
+`clearml-serving --id <service_id> model add --engine triton --endpoint "test_model_keras" --preprocess "examples/keras/preprocess.py" --name "train keras model" --project "serving examples" --input-size 1 784 --input-name "dense_input" --input-type float32 --output-size -1 10 --output-name "activation_2" --output-type float32
`
Or auto update
-`clearml-serving --id <service_id> model auto-update --engine triton --endpoint "test_model_auto" --preprocess "preprocess.py" --name "train keras model" --project "serving examples" --max-versions 2
+`clearml-serving --id <service_id> model auto-update --engine triton --endpoint "test_model_auto" --preprocess "examples/keras/preprocess.py" --name "train keras model" --project "serving examples" --max-versions 2
--input-size 1 784 --input-name "dense_input" --input-type float32
--output-size -1 10 --output-name "activation_2" --output-type float32
`
@@ -31,16 +32,3 @@ Or add Canary endpoint
> **_Notice:_** You can also change the serving service while it is already running!
This includes adding/removing endpoints, adding canary model routing etc.
-### Running / debugging the serving service manually
-Once you have setup the Serving Service Task
-```bash
-$ pip3 install -r clearml_serving/serving/requirements.txt
-$ CLEARML_SERVING_TASK_ID=<service_id> PYHTONPATH=$(pwd) python3 -m gunicorn \
-    --preload clearml_serving.serving.main:app \
-    --workers 4 \
-    --worker-class uvicorn.workers.UvicornWorker \
-    --bind 0.0.0.0:8080
-```
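The keras endpoint above declares a 1x784 float32 input named "dense_input". A toy sketch of flattening a 28x28 MNIST-style image into that shape, using only plain Python (the payload key is an assumption for illustration; the real request contract is defined by examples/keras/preprocess.py):

```python
# Toy 28x28 "image" (all zeros), flattened to the 1x784 shape
# declared with --input-size 1 784 above.
image = [[0.0 for _ in range(28)] for _ in range(28)]

flat = [pixel for row in image for pixel in row]  # length 28*28 = 784
batch = [flat]  # shape (1, 784), matching --input-size 1 784

# Hypothetical payload keyed by the declared input name "dense_input";
# the actual key is whatever examples/keras/preprocess.py expects.
payload = {"dense_input": batch}
```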


@@ -1,10 +1,11 @@
# Train and Deploy LightGBM model
-## training mock model
+## training iris classifier model
Run the mock python training code
```bash
-python3 train_model.py
+pip install -r examples/lightgbm/requirements.txt
+python examples/lightgbm/train_model.py
```
The output will be a model created on the project "serving examples", by the name "train lightgbm model"
@@ -15,9 +16,9 @@ The output will be a model created on the project "serving examples", by the name "train lightgbm model"
2. Create model endpoint:
-3. `clearml-serving --id <service_id> model add --engine lightgbm --endpoint "test_model_lgbm" --preprocess "preprocess.py" --name "train lightgbm model" --project "serving examples"`
+3. `clearml-serving --id <service_id> model add --engine lightgbm --endpoint "test_model_lgbm" --preprocess "examples/lightgbm/preprocess.py" --name "train lightgbm model" --project "serving examples"`
Or auto-update
-`clearml-serving --id <service_id> model auto-update --engine lightgbm --endpoint "test_model_auto" --preprocess "preprocess.py" --name "train lightgbm model" --project "serving examples" --max-versions 2`
+`clearml-serving --id <service_id> model auto-update --engine lightgbm --endpoint "test_model_auto" --preprocess "examples/lightgbm/preprocess.py" --name "train lightgbm model" --project "serving examples" --max-versions 2`
Or add Canary endpoint
`clearml-serving --id <service_id> model canary --endpoint "test_model_auto" --weights 0.1 0.9 --input-endpoint-prefix test_model_auto`
@@ -27,16 +28,3 @@ Or add Canary endpoint
> **_Notice:_** You can also change the serving service while it is already running!
This includes adding/removing endpoints, adding canary model routing etc.
-### Running / debugging the serving service manually
-Once you have setup the Serving Service Task
-```bash
-$ pip3 install -r clearml_serving/serving/requirements.txt
-$ CLEARML_SERVING_TASK_ID=<service_id> PYHTONPATH=$(pwd) python3 -m gunicorn \
-    --preload clearml_serving.serving.main:app \
-    --workers 4 \
-    --worker-class uvicorn.workers.UvicornWorker \
-    --bind 0.0.0.0:8080
-```
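The canary command above splits traffic between endpoint versions according to `--weights 0.1 0.9`. A toy illustration of what weighted routing means (not ClearML's actual router; the version suffixes below are made up for the sketch), using only the standard library:

```python
import random

# Weights from the canary example above: 10% of requests go to the
# newer endpoint version, 90% to the stable one. Names illustrative only.
endpoints = ["test_model_auto/2", "test_model_auto/1"]
weights = [0.1, 0.9]

rng = random.Random(0)  # seeded so the sketch is reproducible

def route():
    # Pick a target endpoint for one incoming request per the canary weights.
    return rng.choices(endpoints, weights=weights, k=1)[0]

counts = {e: 0 for e in endpoints}
for _ in range(10_000):
    counts[route()] += 1
# With these weights, roughly 1,000 of 10,000 requests hit the canary version.
```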


@@ -0,0 +1,3 @@
+clearml >= 1.1.6
+lightgbm


@@ -1,10 +1,11 @@
# Train and Deploy PyTorch model with Nvidia Triton Engine
-## training mock model
+## training mnist digit classifier model
Run the mock python training code
```bash
-python3 train_pytorch_mnist.py
+pip install -r examples/pytorch/requirements.txt
+python examples/pytorch/train_pytorch_mnist.py
```
The output will be a model created on the project "serving examples", by the name "train pytorch model"
@@ -14,12 +15,12 @@ The output will be a model created on the project "serving examples", by the name "train pytorch model"
1. Create serving Service: `clearml-serving create --name "serving example"` (write down the service ID)
2. Create model endpoint:
-`clearml-serving --id <service_id> model add --engine triton --endpoint "test_model_pytorch" --preprocess "preprocess.py" --name "train pytorch model" --project "serving examples"
+`clearml-serving --id <service_id> model add --engine triton --endpoint "test_model_pytorch" --preprocess "examples/pytorch/preprocess.py" --name "train pytorch model" --project "serving examples"
--input-size 28 28 1 --input-name "INPUT__0" --input-type float32
--output-size -1 10 --output-name "OUTPUT__0" --output-type float32
`
Or auto update
-`clearml-serving --id <service_id> model auto-update --engine triton --endpoint "test_model_pytorch_auto" --preprocess "preprocess.py" --name "train pytorch model" --project "serving examples" --max-versions 2
+`clearml-serving --id <service_id> model auto-update --engine triton --endpoint "test_model_pytorch_auto" --preprocess "examples/pytorch/preprocess.py" --name "train pytorch model" --project "serving examples" --max-versions 2
--input-size 28 28 1 --input-name "INPUT__0" --input-type float32
--output-size -1 10 --output-name "OUTPUT__0" --output-type float32
`
@@ -35,15 +36,3 @@ Or add Canary endpoint
> **_Notice:_** You can also change the serving service while it is already running!
This includes adding/removing endpoints, adding canary model routing etc.
-### Running / debugging the serving service manually
-Once you have setup the Serving Service Task
-```bash
-$ pip3 install -r clearml_serving/serving/requirements.txt
-$ CLEARML_SERVING_TASK_ID=<service_id> PYHTONPATH=$(pwd) python3 -m gunicorn \
-    --preload clearml_serving.serving.main:app \
-    --workers 4 \
-    --worker-class uvicorn.workers.UvicornWorker \
-    --bind 0.0.0.0:8080
-```
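Unlike the keras endpoint's flat 1x784 input, the pytorch endpoint above declares a 28x28x1 float32 tensor (`--input-size 28 28 1`). A quick shape check in plain Python, illustrative only; the real conversion lives in examples/pytorch/preprocess.py:

```python
# A 28x28x1 all-zero input matching --input-size 28 28 1 above.
tensor = [[[0.0] for _ in range(28)] for _ in range(28)]

def shape(x):
    # Infer nested-list dimensions (assumes rectangular nesting).
    dims = []
    while isinstance(x, list):
        dims.append(len(x))
        x = x[0]
    return tuple(dims)
```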


@@ -1,10 +1,11 @@
# Train and Deploy Scikit-Learn model
-## training mock model
+## training mock logistic regression model
Run the mock python training code
```bash
-python3 train_model.py
+pip install -r examples/sklearn/requirements.txt
+python examples/sklearn/train_model.py
```
The output will be a model created on the project "serving examples", by the name "train sklearn model"
@@ -13,9 +14,9 @@ The output will be a model created on the project "serving examples", by the name "train sklearn model"
1. Create serving Service: `clearml-serving create --name "serving example"` (write down the service ID)
2. Create model endpoint:
-`clearml-serving --id <service_id> model add --engine sklearn --endpoint "test_model_sklearn" --preprocess "preprocess.py" --name "train sklearn model" --project "serving examples"`
+`clearml-serving --id <service_id> model add --engine sklearn --endpoint "test_model_sklearn" --preprocess "examples/sklearn/preprocess.py" --name "train sklearn model" --project "serving examples"`
Or auto update
-`clearml-serving --id <service_id> model auto-update --engine sklearn --endpoint "test_model_sklearn_auto" --preprocess "preprocess.py" --name "train sklearn model" --project "serving examples" --max-versions 2`
+`clearml-serving --id <service_id> model auto-update --engine sklearn --endpoint "test_model_sklearn_auto" --preprocess "examples/sklearn/preprocess.py" --name "train sklearn model" --project "serving examples" --max-versions 2`
Or add Canary endpoint
`clearml-serving --id <service_id> model canary --endpoint "test_model_sklearn_auto" --weights 0.1 0.9 --input-endpoint-prefix test_model_sklearn_auto`
@@ -24,16 +25,3 @@ Or add Canary endpoint
> **_Notice:_** You can also change the serving service while it is already running!
This includes adding/removing endpoints, adding canary model routing etc.
-### Running / debugging the serving service manually
-Once you have setup the Serving Service Task
-```bash
-$ pip3 install -r clearml_serving/serving/requirements.txt
-$ CLEARML_SERVING_TASK_ID=<service_id> PYHTONPATH=$(pwd) python3 -m gunicorn \
-    --preload clearml_serving.serving.main:app \
-    --workers 4 \
-    --worker-class uvicorn.workers.UvicornWorker \
-    --bind 0.0.0.0:8080
-```


@@ -0,0 +1,2 @@
+clearml >= 1.1.6
+scikit-learn


@@ -1,10 +1,11 @@
# Train and Deploy XGBoost model
-## training mock model
+## training iris classifier model
Run the mock python training code
```bash
-python3 train_model.py
+pip install -r examples/xgboost/requirements.txt
+python examples/xgboost/train_model.py
```
The output will be a model created on the project "serving examples", by the name "train xgboost model"
@@ -14,9 +15,9 @@ The output will be a model created on the project "serving examples", by the name "train xgboost model"
1. Create serving Service: `clearml-serving create --name "serving example"` (write down the service ID)
2. Create model endpoint:
-3. `clearml-serving --id <service_id> model add --engine xgboost --endpoint "test_model_xgb" --preprocess "preprocess.py" --name "train xgboost model" --project "serving examples"`
+3. `clearml-serving --id <service_id> model add --engine xgboost --endpoint "test_model_xgb" --preprocess "examples/xgboost/preprocess.py" --name "train xgboost model" --project "serving examples"`
Or auto update
-`clearml-serving --id <service_id> model auto-update --engine xgboost --endpoint "test_model_xgb_auto" --preprocess "preprocess.py" --name "train xgboost model" --project "serving examples" --max-versions 2`
+`clearml-serving --id <service_id> model auto-update --engine xgboost --endpoint "test_model_xgb_auto" --preprocess "examples/xgboost/preprocess.py" --name "train xgboost model" --project "serving examples" --max-versions 2`
Or add Canary endpoint
`clearml-serving --id <service_id> model canary --endpoint "test_model_xgb_auto" --weights 0.1 0.9 --input-endpoint-prefix test_model_xgb_auto`
@@ -25,16 +26,3 @@ Or add Canary endpoint
> **_Notice:_** You can also change the serving service while it is already running!
This includes adding/removing endpoints, adding canary model routing etc.
-### Running / debugging the serving service manually
-Once you have setup the Serving Service Task
-```bash
-$ pip3 install -r clearml_serving/serving/requirements.txt
-$ CLEARML_SERVING_TASK_ID=<service_id> PYHTONPATH=$(pwd) python3 -m gunicorn \
-    --preload clearml_serving.serving.main:app \
-    --workers 4 \
-    --worker-class uvicorn.workers.UvicornWorker \
-    --bind 0.0.0.0:8080
-```
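The `--max-versions 2` flag in the auto-update command above caps how many model versions stay published on the endpoint. A toy sketch of the idea (not ClearML's internal logic), using only the standard library:

```python
from collections import deque

MAX_VERSIONS = 2  # mirrors --max-versions 2 in the auto-update command

# Oldest versions drop off automatically once the cap is exceeded.
published = deque(maxlen=MAX_VERSIONS)

# Each newly trained "train xgboost model" registers another version:
for version in ["v1", "v2", "v3"]:
    published.append(version)

# Only the two most recent versions remain served.
```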


@@ -0,0 +1,3 @@
+clearml >= 1.1.6
+xgboost