diff --git a/README.md b/README.md
index e2a11f3..3678f1a 100644
--- a/README.md
+++ b/README.md
@@ -248,7 +248,7 @@ Example:
 
 - `curl -X POST "http://127.0.0.1:8080/serve/test_model" -H "accept: application/json" -H "Content-Type: application/json" -d '{"x0": 1, "x1": 2}'`
 
-### Model inference Examples
+### Model Serving Examples
 
 - Scikit-Learn [example](examples/sklearn/readme.md) - random data
 - XGBoost [example](examples/xgboost/readme.md) - iris dataset
diff --git a/clearml_serving/engines/triton/requirements.txt b/clearml_serving/engines/triton/requirements.txt
index aec1e2f..4f45b00 100644
--- a/clearml_serving/engines/triton/requirements.txt
+++ b/clearml_serving/engines/triton/requirements.txt
@@ -1,6 +1,6 @@
 clearml >= 1.1.6
 clearml-serving
-tritonclient
+tritonclient[grpc]
 grpcio
 Pillow
 pathlib2
\ No newline at end of file
diff --git a/clearml_serving/serving/requirements.txt b/clearml_serving/serving/requirements.txt
index 281b0f8..2bacc7a 100644
--- a/clearml_serving/serving/requirements.txt
+++ b/clearml_serving/serving/requirements.txt
@@ -11,4 +11,6 @@ numpy
 pandas
 scikit-learn
 grpcio
-Pillow
\ No newline at end of file
+Pillow
+xgboost
+lightgbm
diff --git a/examples/keras/readme.md b/examples/keras/readme.md
index 185c1ca..96a6a99 100644
--- a/examples/keras/readme.md
+++ b/examples/keras/readme.md
@@ -1,10 +1,11 @@
 # Train and Deploy Keras model with Nvidia Triton Engine
 
-## training mock model
+## training MNIST digit classifier model
 
 Run the mock python training code
 ```bash
-python3 train_keras_mnist.py
+pip install -r examples/keras/requirements.txt
+python examples/keras/train_keras_mnist.py
 ```
 
 The output will be a model created on the project "serving examples", by the name "train keras model"
@@ -13,10 +14,10 @@ The output will be a model created on the project "serving examples", by the nam
 1. Create serving Service: `clearml-serving create --name "serving example"` (write down the service ID)
 2. Create model endpoint:
- `clearml-serving --id <service_id> model add --engine triton --endpoint "test_model_keras" --preprocess "preprocess.py" --name "train keras model" --project "serving examples" --input-size 1 784 --input-name "dense_input" --input-type float32 --output-size -1 10 --output-name "activation_2" --output-type float32
+ `clearml-serving --id <service_id> model add --engine triton --endpoint "test_model_keras" --preprocess "examples/keras/preprocess.py" --name "train keras model" --project "serving examples" --input-size 1 784 --input-name "dense_input" --input-type float32 --output-size -1 10 --output-name "activation_2" --output-type float32
 `
 Or auto update
-`clearml-serving --id <service_id> model auto-update --engine triton --endpoint "test_model_auto" --preprocess "preprocess.py" --name "train keras model" --project "serving examples" --max-versions 2
+`clearml-serving --id <service_id> model auto-update --engine triton --endpoint "test_model_auto" --preprocess "examples/keras/preprocess.py" --name "train keras model" --project "serving examples" --max-versions 2
 --input-size 1 784 --input-name "dense_input" --input-type float32 --output-size -1 10 --output-name "activation_2" --output-type float32
 `
 
 
@@ -31,16 +32,3 @@ Or add Canary endpoint
 
 > **_Notice:_** You can also change the serving service while it is already running! This includes adding/removing endpoints, adding canary model routing etc.
 
-
-
-### Running / debugging the serving service manually
-Once you have setup the Serving Service Task
-
-```bash
-$ pip3 install -r clearml_serving/serving/requirements.txt
-$ CLEARML_SERVING_TASK_ID=<service_id> PYHTONPATH=$(pwd) python3 -m gunicorn \
-    --preload clearml_serving.serving.main:app \
-    --workers 4 \
-    --worker-class uvicorn.workers.UvicornWorker \
-    --bind 0.0.0.0:8080
-```
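Side note on testing the Keras endpoint: once `test_model_keras` is up, it can be queried over HTTP exactly like the README's `test_model` example above. A minimal sketch, assuming the preprocess step accepts an image URL (the `"url"` field is an assumption about `examples/keras/preprocess.py`, not a confirmed schema):

```bash
# Hypothetical request; the "url" payload field is assumed, not confirmed
curl -X POST "http://127.0.0.1:8080/serve/test_model_keras" \
  -H "accept: application/json" \
  -H "Content-Type: application/json" \
  -d '{"url": "https://example.com/digit.png"}'
```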
diff --git a/examples/lightgbm/readme.md b/examples/lightgbm/readme.md
index 274dcce..3b5656e 100644
--- a/examples/lightgbm/readme.md
+++ b/examples/lightgbm/readme.md
@@ -1,10 +1,11 @@
 # Train and Deploy LightGBM model
 
-## training mock model
+## training iris classifier model
 
 Run the mock python training code
 ```bash
-python3 train_model.py
+pip install -r examples/lightgbm/requirements.txt
+python examples/lightgbm/train_model.py
 ```
 
 The output will be a model created on the project "serving examples", by the name "train lightgbm model"
@@ -15,9 +16,9 @@ The output will be a model created on the project "serving examples", by the nam
 2. Create model endpoint:
 
-3. `clearml-serving --id <service_id> model add --engine lightgbm --endpoint "test_model_lgbm" --preprocess "preprocess.py" --name "train lightgbm model" --project "serving examples"`
+3. `clearml-serving --id <service_id> model add --engine lightgbm --endpoint "test_model_lgbm" --preprocess "examples/lightgbm/preprocess.py" --name "train lightgbm model" --project "serving examples"`
 
 Or auto-update
-`clearml-serving --id <service_id> model auto-update --engine lightgbm --endpoint "test_model_auto" --preprocess "preprocess.py" --name "train lightgbm model" --project "serving examples" --max-versions 2`
+`clearml-serving --id <service_id> model auto-update --engine lightgbm --endpoint "test_model_auto" --preprocess "examples/lightgbm/preprocess.py" --name "train lightgbm model" --project "serving examples" --max-versions 2`
 
 Or add Canary endpoint
 `clearml-serving --id <service_id> model canary --endpoint "test_model_auto" --weights 0.1 0.9 --input-endpoint-prefix test_model_auto`
@@ -27,16 +28,3 @@ Or add Canary endpoint
 
 > **_Notice:_** You can also change the serving service while it is already running! This includes adding/removing endpoints, adding canary model routing etc.
 
-
-
-### Running / debugging the serving service manually
-Once you have setup the Serving Service Task
-
-```bash
-$ pip3 install -r clearml_serving/serving/requirements.txt
-$ CLEARML_SERVING_TASK_ID=<service_id> PYHTONPATH=$(pwd) python3 -m gunicorn \
-    --preload clearml_serving.serving.main:app \
-    --workers 4 \
-    --worker-class uvicorn.workers.UvicornWorker \
-    --bind 0.0.0.0:8080
-```
diff --git a/examples/lightgbm/requirements.txt b/examples/lightgbm/requirements.txt
new file mode 100644
index 0000000..ddc5c29
--- /dev/null
+++ b/examples/lightgbm/requirements.txt
@@ -0,0 +1,3 @@
+clearml >= 1.1.6
+lightgbm
+
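Similarly, a sketch for querying `test_model_lgbm` once it is deployed. The iris dataset has four features; the `x0`..`x3` field names follow the README's `test_model` payload convention and are an assumption about what `examples/lightgbm/preprocess.py` expects:

```bash
# Hypothetical payload; feature field names are assumed
curl -X POST "http://127.0.0.1:8080/serve/test_model_lgbm" \
  -H "Content-Type: application/json" \
  -d '{"x0": 5.1, "x1": 3.5, "x2": 1.4, "x3": 0.2}'
```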
diff --git a/examples/pytorch/readme.md b/examples/pytorch/readme.md
index db89495..926472c 100644
--- a/examples/pytorch/readme.md
+++ b/examples/pytorch/readme.md
@@ -1,10 +1,11 @@
-# Train and Deploy Keras model with Nvidia Triton Engine
+# Train and Deploy PyTorch model with Nvidia Triton Engine
 
-## training mock model
+## training MNIST digit classifier model
 
 Run the mock python training code
 ```bash
-python3 train_pytorch_mnist.py
+pip install -r examples/pytorch/requirements.txt
+python examples/pytorch/train_pytorch_mnist.py
 ```
 
 The output will be a model created on the project "serving examples", by the name "train pytorch model"
@@ -14,12 +15,12 @@ The output will be a model created on the project "serving examples", by the nam
 1. Create serving Service: `clearml-serving create --name "serving example"` (write down the service ID)
 2. Create model endpoint:
-`clearml-serving --id <service_id> model add --engine triton --endpoint "test_model_pytorch" --preprocess "preprocess.py" --name "train pytorch model" --project "serving examples"
+`clearml-serving --id <service_id> model add --engine triton --endpoint "test_model_pytorch" --preprocess "examples/pytorch/preprocess.py" --name "train pytorch model" --project "serving examples"
 --input-size 28 28 1 --input-name "INPUT__0" --input-type float32 --output-size -1 10 --output-name "OUTPUT__0" --output-type float32
 `
 
 Or auto update
-`clearml-serving --id <service_id> model auto-update --engine triton --endpoint "test_model_pytorch_auto" --preprocess "preprocess.py" --name "train pytorch model" --project "serving examples" --max-versions 2
+`clearml-serving --id <service_id> model auto-update --engine triton --endpoint "test_model_pytorch_auto" --preprocess "examples/pytorch/preprocess.py" --name "train pytorch model" --project "serving examples" --max-versions 2
 --input-size 28 28 1 --input-name "INPUT__0" --input-type float32 --output-size -1 10 --output-name "OUTPUT__0" --output-type float32
 `
 
 
@@ -35,15 +36,3 @@ Or add Canary endpoint
 
 > **_Notice:_** You can also change the serving service while it is already running! This includes adding/removing endpoints, adding canary model routing etc.
 
-
-### Running / debugging the serving service manually
-Once you have setup the Serving Service Task
-
-```bash
-$ pip3 install -r clearml_serving/serving/requirements.txt
-$ CLEARML_SERVING_TASK_ID=<service_id> PYHTONPATH=$(pwd) python3 -m gunicorn \
-    --preload clearml_serving.serving.main:app \
-    --workers 4 \
-    --worker-class uvicorn.workers.UvicornWorker \
-    --bind 0.0.0.0:8080
-```
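A note on the tensor names above: Triton's PyTorch (libtorch) backend identifies TorchScript inputs and outputs positionally, hence the `INPUT__0`/`OUTPUT__0` naming convention instead of the layer names used in the Keras example. Querying the endpoint follows the same pattern as before; as with the Keras sketch, the `"url"` field is an assumption about `examples/pytorch/preprocess.py`:

```bash
# Hypothetical request; payload schema is assumed
curl -X POST "http://127.0.0.1:8080/serve/test_model_pytorch" \
  -H "Content-Type: application/json" \
  -d '{"url": "https://example.com/digit.png"}'
```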
diff --git a/examples/sklearn/readme.md b/examples/sklearn/readme.md
index 9fcdb40..ae4908a 100644
--- a/examples/sklearn/readme.md
+++ b/examples/sklearn/readme.md
@@ -1,10 +1,11 @@
 # Train and Deploy Scikit-Learn model
 
-## training mock model
+## training mock logistic regression model
 
 Run the mock python training code
 ```bash
-python3 train_model.py
+pip install -r examples/sklearn/requirements.txt
+python examples/sklearn/train_model.py
 ```
 
 The output will be a model created on the project "serving examples", by the name "train sklearn model"
@@ -13,9 +14,9 @@ The output will be a model created on the project "serving examples", by the nam
 1. Create serving Service: `clearml-serving create --name "serving example"` (write down the service ID)
 2. Create model endpoint:
-`clearml-serving --id <service_id> model add --engine sklearn --endpoint "test_model_sklearn" --preprocess "preprocess.py" --name "train sklearn model" --project "serving examples"`
+`clearml-serving --id <service_id> model add --engine sklearn --endpoint "test_model_sklearn" --preprocess "examples/sklearn/preprocess.py" --name "train sklearn model" --project "serving examples"`
 Or auto update
-`clearml-serving --id <service_id> model auto-update --engine sklearn --endpoint "test_model_sklearn_auto" --preprocess "preprocess.py" --name "train sklearn model" --project "serving examples" --max-versions 2`
+`clearml-serving --id <service_id> model auto-update --engine sklearn --endpoint "test_model_sklearn_auto" --preprocess "examples/sklearn/preprocess.py" --name "train sklearn model" --project "serving examples" --max-versions 2`
 Or add Canary endpoint
 `clearml-serving --id <service_id> model canary --endpoint "test_model_sklearn_auto" --weights 0.1 0.9 --input-endpoint-prefix test_model_sklearn_auto`
 
 
@@ -24,16 +25,3 @@ Or add Canary endpoint
 
 > **_Notice:_** You can also change the serving service while it is already running! This includes adding/removing endpoints, adding canary model routing etc.
 
-
-
-### Running / debugging the serving service manually
-Once you have setup the Serving Service Task
-
-```bash
-$ pip3 install -r clearml_serving/serving/requirements.txt
-$ CLEARML_SERVING_TASK_ID=<service_id> PYHTONPATH=$(pwd) python3 -m gunicorn \
-    --preload clearml_serving.serving.main:app \
-    --workers 4 \
-    --worker-class uvicorn.workers.UvicornWorker \
-    --bind 0.0.0.0:8080
-```
diff --git a/examples/sklearn/requirements.txt b/examples/sklearn/requirements.txt
new file mode 100644
index 0000000..eb862f7
--- /dev/null
+++ b/examples/sklearn/requirements.txt
@@ -0,0 +1,2 @@
+clearml >= 1.1.6
+scikit-learn
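The sklearn example trains on random two-feature data, so the README's own `test_model` payload shape carries over directly; a sketch, assuming the default preprocess passes `x0`/`x1` through unchanged:

```bash
# Mirrors the README curl example; endpoint name taken from the command above
curl -X POST "http://127.0.0.1:8080/serve/test_model_sklearn" \
  -H "accept: application/json" \
  -H "Content-Type: application/json" \
  -d '{"x0": 1, "x1": 2}'
```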
diff --git a/examples/xgboost/readme.md b/examples/xgboost/readme.md
index faee82a..00b054f 100644
--- a/examples/xgboost/readme.md
+++ b/examples/xgboost/readme.md
@@ -1,10 +1,11 @@
 # Train and Deploy XGBoost model
 
-## training mock model
+## training iris classifier model
 
 Run the mock python training code
 ```bash
-python3 train_model.py
+pip install -r examples/xgboost/requirements.txt
+python examples/xgboost/train_model.py
 ```
 
 The output will be a model created on the project "serving examples", by the name "train xgboost model"
@@ -14,9 +15,9 @@ The output will be a model created on the project "serving examples", by the nam
 1. Create serving Service: `clearml-serving create --name "serving example"` (write down the service ID)
 2. Create model endpoint:
 
-3. `clearml-serving --id <service_id> model add --engine xgboost --endpoint "test_model_xgb" --preprocess "preprocess.py" --name "train xgboost model" --project "serving examples"`
+3. `clearml-serving --id <service_id> model add --engine xgboost --endpoint "test_model_xgb" --preprocess "examples/xgboost/preprocess.py" --name "train xgboost model" --project "serving examples"`
 Or auto update
-`clearml-serving --id <service_id> model auto-update --engine xgboost --endpoint "test_model_xgb_auto" --preprocess "preprocess.py" --name "train xgboost model" --project "serving examples" --max-versions 2`
+`clearml-serving --id <service_id> model auto-update --engine xgboost --endpoint "test_model_xgb_auto" --preprocess "examples/xgboost/preprocess.py" --name "train xgboost model" --project "serving examples" --max-versions 2`
 
 Or add Canary endpoint
 `clearml-serving --id <service_id> model canary --endpoint "test_model_xgb_auto" --weights 0.1 0.9 --input-endpoint-prefix test_model_xgb_auto`
@@ -25,16 +26,3 @@ Or add Canary endpoint
 
 > **_Notice:_** You can also change the serving service while it is already running! This includes adding/removing endpoints, adding canary model routing etc.
 
-
-
-### Running / debugging the serving service manually
-Once you have setup the Serving Service Task
-
-```bash
-$ pip3 install -r clearml_serving/serving/requirements.txt
-$ CLEARML_SERVING_TASK_ID=<service_id> PYHTONPATH=$(pwd) python3 -m gunicorn \
-    --preload clearml_serving.serving.main:app \
-    --workers 4 \
-    --worker-class uvicorn.workers.UvicornWorker \
-    --bind 0.0.0.0:8080
-```
diff --git a/examples/xgboost/requirements.txt b/examples/xgboost/requirements.txt
new file mode 100644
index 0000000..0b0fe4b
--- /dev/null
+++ b/examples/xgboost/requirements.txt
@@ -0,0 +1,3 @@
+clearml >= 1.1.6
+xgboost
+
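Finally, the "Running / debugging the serving service manually" blocks removed from every example readme all carried a `PYHTONPATH` typo. For anyone who still wants to run the serving process locally, a corrected sketch of that removed snippet (assumes running from the repository root, with `<service_id>` being the serving Service Task ID):

```bash
pip3 install -r clearml_serving/serving/requirements.txt
# PYTHONPATH is spelled correctly here (the removed snippet had "PYHTONPATH")
CLEARML_SERVING_TASK_ID=<service_id> PYTHONPATH=$(pwd) python3 -m gunicorn \
    --preload clearml_serving.serving.main:app \
    --workers 4 \
    --worker-class uvicorn.workers.UvicornWorker \
    --bind 0.0.0.0:8080
```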