From 34c4a9df9138fddbae2983abb2d393c2c6c4f9ea Mon Sep 17 00:00:00 2001
From: IlyaMescheryakov1402
Date: Thu, 20 Mar 2025 02:26:54 +0300
Subject: [PATCH] update readme and fix docker-compose-gpu.yml

---
 docker/docker-compose-gpu.yml | 2 +-
 examples/vllm/readme.md | 9 +++------
 2 files changed, 4 insertions(+), 7 deletions(-)

diff --git a/docker/docker-compose-gpu.yml b/docker/docker-compose-gpu.yml
index dbb063b..bfeead7 100644
--- a/docker/docker-compose-gpu.yml
+++ b/docker/docker-compose-gpu.yml
@@ -75,7 +75,7 @@ services:
 
   clearml-serving-inference:
-    image: clearml-serving-inference:latest
+    image: allegroai/clearml-serving-inference:latest
     container_name: clearml-serving-inference
     restart: unless-stopped
     # optimize perforamnce
diff --git a/examples/vllm/readme.md b/examples/vllm/readme.md
index 3670a1c..48b93be 100644
--- a/examples/vllm/readme.md
+++ b/examples/vllm/readme.md
@@ -11,12 +11,7 @@ clearml-serving --id <service_id> model add --model-id <model_id> --engine vllm --endpoint "test_vllm" --preprocess "examples/vllm/preprocess.py"
 ```
 
-4. If you already have the `clearml-serving` docker-compose running, it might take it a minute or two to sync with the new endpoint.
-
-   Or you can run the clearml-serving container independently:
-   ```
-   docker run -v ~/clearml.conf:/root/clearml.conf -p 8080:8080 -e CLEARML_SERVING_TASK_ID=<service_id> clearml-serving-inference:latest
-   ```
+4. If you already have the `clearml-serving` docker-compose running, it might take it a minute or two to sync with the new endpoint. To run docker-compose, see the [docker-compose instructions](/README.md#nail_care-initial-setup) (step 8), using [docker-compose-gpu.yml](/docker/docker-compose-gpu.yml) for vLLM on GPU and [docker-compose.yml](/docker/docker-compose.yml) otherwise.
 
 5. Test new endpoint (do notice the first call will trigger the model pulling, so it might take longer, from here on, it's all in memory):
 
@@ -32,6 +27,8 @@
 
 see [test_openai_api.py](test_openai_api.py) for more information.
 
+6. Check metrics in Grafana (select Prometheus as the data source; all vLLM metrics have the "vllm:" prefix). For more information, see [Model monitoring and performance metrics](/README.md#bar_chart-model-monitoring-and-performance-metrics-bell).
+
 NOTE! If you want to use send_request method, keep in mind that you have to pass "completions" or "chat/completions" in entrypoint (and pass model as a part of "data" parameter) and use it for non-streaming models:
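
As a quick sanity check of the updated readme steps (register the `test_vllm` endpoint, bring up docker-compose, then test), here is a minimal sketch of calling the new endpoint through the OpenAI-compatible client. The `base_url` and the `"test_vllm"` model name are assumptions for illustration only; use the exact values from [test_openai_api.py](test_openai_api.py) and your own deployment.

```
# Minimal sketch: query the endpoint registered with `clearml-serving model add`.
# ASSUMPTION: base_url below is illustrative -- take the real one from
# examples/vllm/test_openai_api.py or your clearml-serving deployment.
from openai import OpenAI

client = OpenAI(
    api_key="-",  # placeholder; assumed not validated by the serving endpoint
    base_url="http://127.0.0.1:8080/clearml",  # hypothetical serving URL
)

response = client.chat.completions.create(
    model="test_vllm",  # endpoint name from step 3 of the readme
    messages=[{"role": "user", "content": "Hello, vLLM!"}],
    max_tokens=64,
)
print(response.choices[0].message.content)
```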