Small edits (#971)

parent 24cf6a06f0
commit 9b5df2878e
@@ -831,7 +831,7 @@ task = Task.init(project_name='examples', task_name='parameters')
 task.set_parameters({'Args/epochs':7, 'lr': 0.5})
 
 # setting a single parameter
-task.set_parameter(name='decay',value=0.001)
+task.set_parameter(name='decay', value=0.001)
 ```
 
 :::warning Overwriting Parameters
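For reference, the snippet this hunk edits comes from ClearML's parameter docs and is runnable as-is; a minimal version, assuming the `clearml` package is installed and configured:

```python
from clearml import Task

task = Task.init(project_name='examples', task_name='parameters')

# batch-update several hyperparameters; the 'Args/' prefix groups a value
# under the Args section in the UI, while unprefixed keys land under 'General'
task.set_parameters({'Args/epochs': 7, 'lr': 0.5})

# setting a single parameter
task.set_parameter(name='decay', value=0.001)
```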
@@ -889,7 +889,7 @@ me = Person('Erik', 5)
 
 params_dictionary = {'epochs': 3, 'lr': 0.4}
 
-task = Task.init(project_name='examples',task_name='python objects')
+task = Task.init(project_name='examples', task_name='python objects')
 
 task.connect(me)
 task.connect(params_dictionary)
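The full example around this hunk connects both a plain dict and an arbitrary Python object to the task. A self-contained sketch (the `Person` class body is an assumption reconstructed from the hunk header `me = Person('Erik', 5)`):

```python
from clearml import Task

class Person:
    # hypothetical class matching the hunk header's Person('Erik', 5)
    def __init__(self, name, age):
        self.name = name
        self.age = age

me = Person('Erik', 5)
params_dictionary = {'epochs': 3, 'lr': 0.4}

task = Task.init(project_name='examples', task_name='python objects')

# connect() logs the object's attributes / dict items as hyperparameters
# and keeps them in sync when the task is executed remotely
task.connect(me)
task.connect(params_dictionary)
```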
@@ -38,13 +38,13 @@ clearml-serving --id <service_id> model add --engine sklearn --endpoint "test_mo
 :::info Service ID
 Make sure that you have executed `clearml-serving`'s
 [initial setup](clearml_serving_setup.md#initial-setup), in which you create a Serving Service.
-The Serving Service's ID is required to register a model, and to execute `clearml-serving`'s `metrics` and `config` commands
+The Serving Service's ID is required to register a model, and to execute `clearml-serving`'s `metrics` and `config` commands.
 :::
 
 
 :::note
 The preprocessing Python code is packaged and uploaded to the Serving Service, to be used by any inference container,
-and downloaded in real time when updated
+and downloaded in real time when updated.
 :::
 
 ### Step 3: Spin Inference Container
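The note above refers to the preprocessing module that gets packaged with the endpoint. A rough sketch of such a module, with method signatures following the pattern used in the clearml-serving examples (verify against the version you run):

```python
# preprocess.py -- registered alongside the model when adding the endpoint
from typing import Any


class Preprocess:
    def __init__(self):
        # called once when the endpoint is loaded by the inference container
        pass

    def preprocess(self, body: dict, state: dict, collect_custom_statistics_fn=None) -> Any:
        # turn the raw request JSON into the feature vector the model expects
        # (field names 'x0'/'x1' are placeholders for this sketch)
        return [[body.get('x0', 0.0), body.get('x1', 0.0)]]

    def postprocess(self, data: Any, state: dict, collect_custom_statistics_fn=None) -> dict:
        # wrap the model output back into a JSON-serializable response
        return {'y': data.tolist() if hasattr(data, 'tolist') else data}
```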
@@ -110,7 +110,7 @@ or with the `clearml-serving` CLI.
 You can also provide a different storage destination for the model, such as S3/GS/Azure, by passing
 `--destination="s3://bucket/folder"`, `s3://host_addr:port/bucket` (for non-AWS S3-like services like MinIO), `gs://bucket/folder`, `azure://<account name>.blob.core.windows.net/path/to/file`. There is no need to provide a unique
 path to the destination argument, the location of the model will be a unique path based on the serving service ID and the
-model name
+model name.
 :::
 
 ## Additional Options
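The same destination idea applies when registering a model from Python; a hedged sketch using ClearML's `OutputModel`, where the bucket and weights file are placeholders:

```python
from clearml import Task, OutputModel

task = Task.init(project_name='serving examples', task_name='register model')

# upload_uri plays the role of --destination: the weights file is copied
# to this bucket/folder instead of the default files server
output_model = OutputModel(task=task, name='sklearn-model')
output_model.update_weights(
    weights_filename='sklearn-model.pkl',      # local weights file (placeholder)
    upload_uri='s3://my-bucket/model-folder',  # S3/GS/Azure destination (placeholder)
)
```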
@@ -160,7 +160,7 @@ This means that any request coming to `/test_model_sklearn_canary/` will be rout
 
 :::note
 As with any other Serving Service configuration, you can configure the Canary endpoint while the Inference containers are
-already running and deployed, they will get updated in their next update cycle (default: once every 5 minutes)
+already running and deployed, they will get updated in their next update cycle (default: once every 5 minutes).
 :::
 
 You can also prepare a "fixed" canary endpoint, always splitting the load between the last two deployed models:
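To make the canary weights concrete, here is a toy simulation of a 10%/90% split between two model versions; it illustrates the routing behavior only and is not clearml-serving's actual implementation:

```python
import random

# endpoint versions and their traffic weights, e.g. 10% to the new model
input_endpoints = ['test_model_sklearn/2', 'test_model_sklearn/1']
weights = [0.1, 0.9]

counts = {ep: 0 for ep in input_endpoints}
for _ in range(10_000):
    # each incoming request is routed to one version according to the weights
    chosen = random.choices(input_endpoints, weights=weights, k=1)[0]
    counts[chosen] += 1

print(counts)  # roughly {'test_model_sklearn/2': 1000, 'test_model_sklearn/1': 9000}
```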
@@ -244,7 +244,7 @@ With the new metrics logged, you can create a visualization dashboard over the l
 :::note
 If not specified all serving requests will be logged, which can be changed with the `CLEARML_DEFAULT_METRIC_LOG_FREQ`
 environment variable. For example `CLEARML_DEFAULT_METRIC_LOG_FREQ=0.2` means only 20% of all requests will be logged.
-You can also specify per-endpoint log frequency with the `clearml-serving` CLI. See [clearml-serving metrics](clearml_serving_cli.md#metrics)
+You can also specify per-endpoint log frequency with the `clearml-serving` CLI. See [clearml-serving metrics](clearml_serving_cli.md#metrics).
 :::
 
 ## Further Examples
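The log frequency acts as a per-request sampling probability; a toy sketch of the idea (not the library's internal code):

```python
import os
import random

# CLEARML_DEFAULT_METRIC_LOG_FREQ=0.2 -> log roughly 20% of requests
log_freq = float(os.environ.get('CLEARML_DEFAULT_METRIC_LOG_FREQ', '1.0'))

def should_log_request() -> bool:
    # sample each request independently with probability log_freq
    return random.random() < log_freq

logged = sum(should_log_request() for _ in range(10_000))
print(f'logged {logged} of 10000 requests')
```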
@@ -1107,8 +1107,8 @@ URL to a CA bundle, or set this option to `false` to skip SSL certificate verifi
 
 * Log specific environment variables. OS environments are listed in the UI, under an experiment's
 **CONFIGURATION > HYPERPARAMETERS > Environment** section.
-Multiple selected variables are supported including the suffix "\*". For example: "AWS\_\*" will log any OS environment
-variable starting with `"AWS\_"`. Example: `log_os_environments: ["AWS_*", "CUDA_VERSION"]`
+Multiple selected variables are supported including the suffix `*`. For example: `"AWS_*"` will log any OS environment
+variable starting with `"AWS_"`. Example: `log_os_environments: ["AWS_*", "CUDA_VERSION"]`
 
 * This value can be overwritten with OS environment variable `CLEARML_LOG_ENVIRONMENT=AWS_*,CUDA_VERSION`.
 
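The `*` suffix behaves like a shell-style wildcard. A small illustration of how such patterns select environment variables, using the stdlib `fnmatch` module (illustrative only, not ClearML's internal matcher):

```python
import fnmatch
import os

log_os_environments = ['AWS_*', 'CUDA_VERSION']

# keep every OS environment variable whose name matches one of the patterns,
# e.g. AWS_ACCESS_KEY_ID and AWS_DEFAULT_REGION both match 'AWS_*'
selected = {
    name: value
    for name, value in os.environ.items()
    if any(fnmatch.fnmatch(name, pattern) for pattern in log_os_environments)
}
print(sorted(selected))
```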
@@ -199,6 +199,7 @@ The task's input and output models appear in the **ARTIFACTS** tab. Each model e
 * Model name
 * ID
 * Configuration.
 
 Input models also display their creating experiment, which on-click navigates you to the experiment's page.
 
+
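The same input/output model entries shown in the ARTIFACTS tab can also be read back programmatically; a sketch using `Task.models`, with a placeholder task ID:

```python
from clearml import Task

task = Task.get_task(task_id='<task_id>')  # placeholder ID

# task.models maps 'input' and 'output' to lists of Model objects,
# mirroring the entries shown in the ARTIFACTS tab
for model in task.models['input']:
    print(model.name, model.id)
for model in task.models['output']:
    print(model.name, model.id, model.config_text)
```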