Small edits (#971)

pollfly 2024-11-24 10:29:57 +02:00 committed by GitHub
parent 24cf6a06f0
commit 9b5df2878e
GPG Key ID: B5690EEEBB952194
4 changed files with 10 additions and 9 deletions


@@ -831,7 +831,7 @@ task = Task.init(project_name='examples', task_name='parameters')
task.set_parameters({'Args/epochs':7, 'lr': 0.5})
# setting a single parameter
-task.set_parameter(name='decay',value=0.001)
+task.set_parameter(name='decay', value=0.001)
```
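The `'Args/epochs'` key above carries a section prefix while `'lr'` does not. A minimal sketch of how such keys could split into a section and a parameter name, assuming (for illustration only, not the actual clearml implementation) that unprefixed keys fall back to a default `General` section:

```python
def split_param_key(key: str, default_section: str = "General") -> tuple:
    """Split a 'Section/name' parameter key into (section, name).

    Keys without a '/' are assumed to land in the default section.
    Hypothetical helper for illustration, not clearml internals.
    """
    section, _, name = key.rpartition("/")
    return (section or default_section, name)
```

Under this sketch, `split_param_key('Args/epochs')` yields `('Args', 'epochs')` while `split_param_key('lr')` falls back to `('General', 'lr')`.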
:::warning Overwriting Parameters
@@ -889,7 +889,7 @@ me = Person('Erik', 5)
params_dictionary = {'epochs': 3, 'lr': 0.4}
-task = Task.init(project_name='examples',task_name='python objects')
+task = Task.init(project_name='examples', task_name='python objects')
task.connect(me)
task.connect(params_dictionary)


@@ -38,13 +38,13 @@ clearml-serving --id <service_id> model add --engine sklearn --endpoint "test_mo
:::info Service ID
Make sure that you have executed `clearml-serving`'s
[initial setup](clearml_serving_setup.md#initial-setup), in which you create a Serving Service.
-The Serving Service's ID is required to register a model, and to execute `clearml-serving`'s `metrics` and `config` commands
+The Serving Service's ID is required to register a model, and to execute `clearml-serving`'s `metrics` and `config` commands.
:::
:::note
The preprocessing Python code is packaged and uploaded to the Serving Service, to be used by any inference container,
-and downloaded in real time when updated
+and downloaded in real time when updated.
:::
### Step 3: Spin Inference Container
@@ -110,7 +110,7 @@ or with the `clearml-serving` CLI.
You can also provide a different storage destination for the model, such as S3/GS/Azure, by passing
`--destination="s3://bucket/folder"`, `s3://host_addr:port/bucket` (for non-AWS S3-like services like MinIO), `gs://bucket/folder`, `azure://<account name>.blob.core.windows.net/path/to/file`. There is no need to provide a unique
path to the destination argument, the location of the model will be a unique path based on the serving service ID and the
-model name
+model name.
:::
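The note above says no unique destination path is needed because the upload location is derived from the serving service ID and the model name. A hypothetical helper sketching that idea (the actual path layout used by clearml-serving is an assumption here, chosen only to illustrate why collisions are avoided):

```python
def model_upload_path(destination: str, service_id: str, model_name: str) -> str:
    """Build a per-model upload path under a shared destination.

    Combining the serving service ID and the model name makes the path
    unique per registered model, so callers can pass the same
    --destination for every model. Layout is illustrative only.
    """
    return f"{destination.rstrip('/')}/{service_id}/{model_name}"
```

For example, with a shared `s3://bucket/folder` destination, two models registered to the same service end up under distinct sub-paths.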
## Additional Options
@@ -160,7 +160,7 @@ This means that any request coming to `/test_model_sklearn_canary/` will be rout
:::note
As with any other Serving Service configuration, you can configure the Canary endpoint while the Inference containers are
-already running and deployed, they will get updated in their next update cycle (default: once every 5 minutes)
+already running and deployed, they will get updated in their next update cycle (default: once every 5 minutes).
:::
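The canary split described above routes each incoming request to one of the underlying endpoints in proportion to its weight. A minimal sketch of that behavior, assuming illustrative endpoint names and a 10%/90% split (this is not clearml-serving's internal routing code):

```python
import random

def pick_endpoint(weights: dict, rng=random) -> str:
    """Pick an endpoint by weighted random choice.

    `weights` maps endpoint name -> traffic fraction (summing to ~1.0).
    Illustrative sketch of a canary split, not clearml-serving internals.
    """
    r = rng.random()
    acc = 0.0
    for endpoint, weight in weights.items():
        acc += weight
        if r < acc:
            return endpoint
    return endpoint  # guard against floating-point rounding at the tail

# Hypothetical canary: 10% of traffic to the new model version, 90% to the old.
canary_weights = {"test_model_sklearn/2": 0.1, "test_model_sklearn/1": 0.9}
```

Over many requests, roughly one in ten would reach `test_model_sklearn/2` under this configuration.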
You can also prepare a "fixed" canary endpoint, always splitting the load between the last two deployed models:
@@ -244,7 +244,7 @@ With the new metrics logged, you can create a visualization dashboard over the l
:::note
If not specified all serving requests will be logged, which can be changed with the `CLEARML_DEFAULT_METRIC_LOG_FREQ`
environment variable. For example `CLEARML_DEFAULT_METRIC_LOG_FREQ=0.2` means only 20% of all requests will be logged.
-You can also specify per-endpoint log frequency with the `clearml-serving` CLI. See [clearml-serving metrics](clearml_serving_cli.md#metrics)
+You can also specify per-endpoint log frequency with the `clearml-serving` CLI. See [clearml-serving metrics](clearml_serving_cli.md#metrics).
:::
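The sampling behavior described above, where `CLEARML_DEFAULT_METRIC_LOG_FREQ=0.2` logs roughly 20% of requests, can be sketched as a per-request coin flip (a simplified illustration of the stated semantics, not clearml-serving's actual sampling code):

```python
import random

def should_log(freq: float, rng=random) -> bool:
    """Decide whether to log one serving request.

    `freq` is the fraction of requests to sample, e.g. 0.2 for 20%;
    1.0 logs everything. Simplified sketch of the documented behavior.
    """
    return rng.random() < freq
```

With `freq=0.2`, about one request in five passes the check on average.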
## Further Examples


@@ -1107,8 +1107,8 @@ URL to a CA bundle, or set this option to `false` to skip SSL certificate verifi
* Log specific environment variables. OS environments are listed in the UI, under an experiment's
**CONFIGURATION > HYPERPARAMETERS > Environment** section.
-Multiple selected variables are supported including the suffix "\*". For example: "AWS\_\*" will log any OS environment
-variable starting with `"AWS\_"`. Example: `log_os_environments: ["AWS_*", "CUDA_VERSION"]`
+Multiple selected variables are supported including the suffix `*`. For example: `"AWS_*"` will log any OS environment
+variable starting with `"AWS_"`. Example: `log_os_environments: ["AWS_*", "CUDA_VERSION"]`
* This value can be overwritten with OS environment variable `CLEARML_LOG_ENVIRONMENT=AWS_*,CUDA_VERSION`.
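The wildcard-matching rule described above, where `"AWS_*"` selects every variable starting with `AWS_`, can be sketched with shell-style pattern matching (an illustration of the documented matching semantics, not clearml's actual implementation):

```python
import fnmatch

def matched_env(patterns: list, environ: dict) -> dict:
    """Return the environment variables selected by `patterns`.

    Each pattern is either an exact name ("CUDA_VERSION") or a prefix
    wildcard ("AWS_*"). Sketch of the documented rule, not clearml code.
    """
    return {
        name: value
        for name, value in environ.items()
        if any(fnmatch.fnmatchcase(name, pattern) for pattern in patterns)
    }

# Hypothetical environment for illustration
env = {"AWS_REGION": "us-east-1", "AWS_PROFILE": "dev",
       "CUDA_VERSION": "12.1", "HOME": "/root"}
```

With `["AWS_*", "CUDA_VERSION"]`, both `AWS_` variables and `CUDA_VERSION` are selected, while `HOME` is not.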


@@ -199,6 +199,7 @@ The task's input and output models appear in the **ARTIFACTS** tab. Each model e
* Model name
* ID
* Configuration.
+Input models also display their creating experiment, which on-click navigates you to the experiment's page.
![Models in Artifacts tab](../img/webapp_exp_artifacts_01.png)