Mirror of https://github.com/clearml/clearml-docs, synced 2025-06-22 17:56:07 +00:00

Commit 61f822e613: Small edits (#455)
Parent: 18e3e7abe2
@@ -37,7 +37,7 @@ clearml-serving --id <service_id> model add --engine sklearn --endpoint "test_mo
 ```
 
 :::info Service ID
-Make sure that you have executed `clearml-servings`'s
+Make sure that you have executed `clearml-serving`'s
 [initial setup](clearml_serving.md#initial-setup), in which you create a Serving Service.
 The Serving Service's ID is required to register a model, and to execute `clearml-serving`'s `metrics` and `config` commands
 :::
@@ -92,7 +92,7 @@ or with the `clearml-serving` CLI.
 ```
 
 You now have a new Model named `manual sklearn model` in the `serving examples` project. The CLI output prints
-the UID of the new model, which you will use it to register a new endpoint.
+the UID of the new model, which you will use to register a new endpoint.
 
 In the [ClearML web UI](../webapp/webapp_overview.md), the new model is listed under the **Models** tab of its project.
 You can also download the model file itself directly from the web UI.
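
For readers following the SDK route instead of the CLI, a rough Python equivalent of this manual registration might look like the sketch below. The project and model names mirror the example above; the local weights filename is a placeholder, and the exact `OutputModel` arguments should be checked against the SDK reference.

```python
# Hedged sketch: registering a model manually with the clearml SDK instead
# of the `clearml-serving model upload` CLI. The weights filename is a placeholder.
from clearml import Task, OutputModel

# A task to own the model entry in the `serving examples` project
task = Task.init(project_name="serving examples", task_name="manual model upload")

# Create the model entry and upload the local weights file
model = OutputModel(task=task, name="manual sklearn model", framework="ScikitLearn")
model.update_weights(weights_filename="sklearn-model.pkl")

# The printed ID is the UID you pass when registering an endpoint
print(model.id)
```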
@@ -105,7 +105,7 @@ or with the `clearml-serving` CLI.
 :::info Model Storage
 You can also provide a different storage destination for the model, such as S3/GS/Azure, by passing
 `--destination="s3://bucket/folder"`, `gs://bucket/folder`, `azure://bucket/folder`. There is no need to provide a unique
-path tp the destination argument, the location of the model will be a unique path based on the serving service ID and the
+path to the destination argument, the location of the model will be a unique path based on the serving service ID and the
 model name
 :::
 
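
The SDK counterpart of the `--destination` flag is, as far as the `update_weights` signature goes, the `upload_uri` argument; a minimal sketch, assuming the same bucket placeholder as above:

```python
# Hedged sketch: uploading the weights to a custom destination, the SDK
# counterpart of the CLI's --destination flag. The bucket path is a placeholder;
# as with the CLI, a unique sub-path is derived for you.
model.update_weights(
    weights_filename="sklearn-model.pkl",
    upload_uri="s3://bucket/folder",  # or gs://bucket/folder / azure://bucket/folder
)
```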
@@ -116,7 +116,7 @@ model name
 The ClearML Serving Service supports automatic model deployment and upgrades, which is connected with the model
 repository and API. When the model auto-deploy is configured, new model versions will be automatically deployed when you
 `publish` or `tag` a new model in the ClearML model repository. This automation interface allows for simpler CI/CD model
-deployment process, as a single API automatically deploy (or remove) a model from the Serving Service.
+deployment process, as a single API automatically deploys (or removes) a model from the Serving Service.
 
 #### Automatic Model Deployment Example
 
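
To make the trigger concrete: with auto-deploy configured, the CI/CD side only has to publish (or tag) the model. A hedged SDK sketch, where the model ID is a placeholder and the tag name is an assumption:

```python
# Hedged sketch: publishing (or tagging) a model in the repository is what
# triggers the rollout once auto-deploy is configured. The ID is a placeholder.
from clearml import Model

model = Model(model_id="<model_uid>")
model.publish()  # publish -> the new version gets deployed automatically

# or, if the auto-deploy rule watches a tag instead (tag name is illustrative,
# assuming the tags setter on the Model object):
model.tags = model.tags + ["released"]
```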
@@ -142,7 +142,7 @@ deployment process, as a single API automatically deploy (or remove) a model fro
 
 ### Canary Endpoint Setup
 
-Canary endpoint deployment add a new endpoint where the actual request is sent to a preconfigured set of endpoints with
+Canary endpoint deployment adds a new endpoint where the actual request is sent to a preconfigured set of endpoints with
 pre-provided distribution. For example, let's create a new endpoint "test_model_sklearn_canary", you can provide a list
 of endpoints and probabilities (weights).
 
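
As an illustration of what the "pre-provided distribution" means (this is not clearml-serving's routing code, and the endpoint names and weights are assumptions), each request is effectively a weighted draw over the configured endpoints:

```python
# Illustration only: the semantics of a 10%/90% canary split over two
# endpoint versions. Endpoint names and weights are assumed examples.
import random

endpoints = ["test_model_sklearn/2", "test_model_sklearn/1"]
weights = [0.1, 0.9]  # probabilities over the configured endpoints

def route_request():
    # Each incoming request is forwarded to one endpoint, drawn by weight
    return random.choices(endpoints, weights=weights, k=1)[0]

print(route_request())
```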
@@ -195,7 +195,7 @@ Example:
 ClearML serving instances send serving statistics (count/latency) automatically to Prometheus and Grafana can be used
 to visualize and create live dashboards.
 
-The default docker-compose installation is preconfigured with Prometheus and Grafana, do notice that by default data/state
+The default docker-compose installation is preconfigured with Prometheus and Grafana. Notice that by default data/state
 of both containers is *not* persistent. To add persistence, we recommend adding a volume mount.
 
 You can also add many custom metrics on the input/predictions of your models. Once a model endpoint is registered,
@@ -45,7 +45,7 @@ The structure of your pipeline will be derived from looking at this `parents` ar
 
 Now we do the same for the final step. However, remember the empty hyperparameters we saw before? We still have to overwrite these. We can use the `parameter_override` argument to do just that.
 
-For example, we can tell the first step to use the global pipeline parameter raw data url like so. But we can also reference output artifacts from a previous step by using its name and we can of course also just overwrite a parameter with a normal value. Finally, we can even pass along the unique task ID of a previous step, so you can get the task object based on that ID and access anything and everything within that task.
+For example, we can tell the first step to use the global pipeline parameter raw data url like so. But we can also reference output artifacts from a previous step by using its name, and we can of course also just overwrite a parameter with a normal value. Finally, we can even pass along the unique task ID of a previous step, so you can get the task object based on that ID and access anything and everything within that task.
 
 And that’s it! We now have our first pipeline!
 
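
A sketch of the kinds of overrides the paragraph describes, using illustrative step and parameter names on a `PipelineController` step; the `${...}` references follow the ClearML pipeline syntax for pipeline parameters, step artifacts, and step task IDs:

```python
# Hedged sketch of parameter_override on a PipelineController step.
# Step and parameter names are illustrative placeholders.
pipe.add_step(
    name="final_step",
    parents=["preprocess_step"],
    base_task_project="examples",
    base_task_name="final step base task",
    parameter_override={
        # 1) a global pipeline parameter
        "General/raw_data_url": "${pipeline.raw_data_url}",
        # 2) an output artifact of a previous step, referenced by its name
        "General/dataset": "${preprocess_step.artifacts.dataset.url}",
        # 3) a plain value
        "General/test_size": 0.25,
        # 4) the unique task ID of a previous step
        "General/base_task_id": "${preprocess_step.id}",
    },
)
```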
@@ -48,7 +48,7 @@ This is followed by details from the cleanup.
 an `APIClient` object that establishes a session with the ClearML Server, and accomplishes the cleanup by calling:
 * [`Tasks.get_all`](../../references/api/tasks.md#post-tasksget_all) to get a list of Tasks to delete, providing the following parameters:
     * `system_tags` - Get only Tasks tagged as `archived`.
-    * `status_changed` - Get Tasks whose last status change is older than then delete threshold (in seconds).
+    * `status_changed` - Get Tasks whose last status change is older than the delete threshold (in seconds).
 * [`Task.delete`](../../references/sdk/task.md#delete) - Delete a Task.
 
 ## Configuration
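
Pieced together, the calls listed above look roughly like the sketch below; the threshold value is a placeholder, and the real cleanup example adds paging and further options:

```python
# Hedged sketch of the cleanup flow described above. The threshold is a
# placeholder; the actual example also pages through results.
from datetime import datetime, timedelta
from clearml import Task
from clearml.backend_api.session.client import APIClient

delete_threshold_sec = 30 * 24 * 60 * 60  # e.g. 30 days
client = APIClient()

# Tasks tagged `archived` whose status last changed before the threshold
before = datetime.utcnow() - timedelta(seconds=delete_threshold_sec)
tasks = client.tasks.get_all(
    system_tags=["archived"],
    status_changed=["<{}".format(before.strftime("%Y-%m-%d %H:%M:%S"))],
)

for t in tasks:
    Task.get_task(task_id=t.id).delete()
```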
@@ -2,11 +2,11 @@
 title: The Dataviews Table
 ---
 
-[Dataviews](../dataviews.md) appear in the same Project as the experiment that stored the Dataview in the ClearML Enterprise platform,
-as well as the **DATAVIEWS** tab in the **All Projects** page.
-
 The **Dataviews table** is a [customizable](#customizing-the-dataviews-table) list of Dataviews associated with a project.
-Use it to view, create, and edit Dataviews in the info panel.
+Use it to view and create Dataviews, and access their info panels.
+
+The table lists independent Dataview objects. To see Dataviews logged by a task, go
+to the specific task's **DATAVIEWS** tab (see [Experiment Dataviews](webapp_exp_track_visual.md)).
 
 View the Dataviews table in table view <img src="/docs/latest/icons/ico-table-view.svg" alt="Table view" className="icon size-md space-sm" />
 or in details view <img src="/docs/latest/icons/ico-split-view.svg" alt="Details view" className="icon size-md space-sm" />,
@@ -84,7 +84,7 @@ The same information can be found in the bottom menu, in a tooltip that appears
 
 ## Creating a Dataview
 
-Create a new Dataview by clicking the **+ NEW DATAVIEW** button at the top right of the table, which open a
+Create a new Dataview by clicking the **+ NEW DATAVIEW** button at the top right of the table, which opens a
 **NEW DATAVIEW** window.
 
 
@@ -38,7 +38,7 @@ def main(pickle_url, mock_parameter='mock'):
 the following format: `{'section_name':['param_name']]}`. For example, the pipeline in the code above will store the
 `pickle_url` parameter in the `General` section and `mock_parameter` in the `Mock` section. By default, arguments will
 be stored in the `Args` section.
-* `pool_frequency` - The pooling frequency (in minutes) for monitoring experiments / states.
+* `pool_frequency` - The polling frequency (in minutes) for monitoring experiments / states.
 * `add_pipeline_tags` - If `True`, add `pipe: <pipeline_task_id>` tag to all steps (Tasks) created by this pipeline
 (this is useful to create better visibility in projects with multiple pipelines, and for easy selection) (default:
 `False`).
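
For context, these are arguments of the pipeline decorator shown in the hunk header. A minimal sketch, assuming the section mapping is passed through `args_map` and using illustrative values for the other two arguments:

```python
# Hedged sketch of the decorator arguments discussed above. args_map stores
# pickle_url under General and mock_parameter under Mock; the other values
# are illustrative. Verify the exact signature against the SDK reference.
from clearml.automation.controller import PipelineDecorator

@PipelineDecorator.pipeline(
    name="pipeline demo",
    project="examples",
    version="0.1",
    args_map={"General": ["pickle_url"], "Mock": ["mock_parameter"]},
    pool_frequency=0.5,      # poll monitored experiments every 30 seconds
    add_pipeline_tags=True,  # tag steps with pipe: <pipeline_task_id>
)
def main(pickle_url, mock_parameter="mock"):
    pass
```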
@@ -10,7 +10,7 @@ This release is not backwards compatible
 
 **Breaking Changes**
 * `preprocess` and `postprocess` class functions get 3 arguments
-* Add support for per-request state storage, passing information between the pre/post processing functions
+* Add support for per-request state storage, passing information between the pre/post-processing functions
 
 **Features & Bug Fixes**
 
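
A sketch of what the new three-argument signatures enable; the argument names follow the clearml-serving `Preprocess` convention, but verify the exact interface against the release's examples. The shared `state` dict is how information crosses from pre- to post-processing within a single request:

```python
# Hedged sketch of the 3-argument pre/post-processing hooks and the
# per-request `state` dict they share. Check the release examples for the
# authoritative signatures.
class Preprocess:
    def preprocess(self, body, state, collect_custom_statistics_fn=None):
        # Stash per-request info for the postprocess stage
        state["scale"] = body.get("scale", 1.0)
        return [body["x"]]

    def postprocess(self, data, state, collect_custom_statistics_fn=None):
        # Read back what preprocess stored for this same request
        return {"y": float(data[0]) * state["scale"]}
```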
@@ -37,7 +37,7 @@ The worker’s details panel includes the following two tabs:
     * Current Experiment - The experiment currently being executed by the worker
     * Experiment Runtime - How long the currently executing experiment has been running
     * Experiment iteration - The last reported training iteration for the experiment
-* **QUEUES** - information about the queues that the worker is assigned to:
+* **QUEUES** - Information about the queues that the worker is assigned to:
     * Queue - The name of the Queue
     * Next experiment - The next experiment available in this queue
     * In Queue - The number of experiments currently enqueued