Small edits (#455)

pollfly 2023-01-25 13:25:29 +02:00 committed by GitHub
parent 18e3e7abe2
commit 61f822e613
31 changed files with 51 additions and 51 deletions


@@ -37,7 +37,7 @@ clearml-serving --id <service_id> model add --engine sklearn --endpoint "test_mo
```
:::info Service ID
-Make sure that you have executed `clearml-servings`'s
+Make sure that you have executed `clearml-serving`'s
[initial setup](clearml_serving.md#initial-setup), in which you create a Serving Service.
The Serving Service's ID is required to register a model, and to execute `clearml-serving`'s `metrics` and `config` commands.
:::
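
If you have not created a Serving Service yet, a minimal sketch of that setup step (the service name is a placeholder):

```bash
# Create the Serving Service controller task; the command prints the new
# service ID, which you then pass to `clearml-serving` via `--id`
clearml-serving create --name "serving example"
```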
@@ -92,7 +92,7 @@ or with the `clearml-serving` CLI.
```
You now have a new Model named `manual sklearn model` in the `serving examples` project. The CLI output prints
-the UID of the new model, which you will use it to register a new endpoint.
+the UID of the new model, which you will use to register a new endpoint.
In the [ClearML web UI](../webapp/webapp_overview.md), the new model is listed under the **Models** tab of its project.
You can also download the model file itself directly from the web UI.
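
With that UID, registering the model on a new endpoint might look like the following sketch (the service and model IDs are placeholders):

```bash
# Register the uploaded model on a serving endpoint, referencing it by UID
clearml-serving --id <service_id> model add --engine sklearn \
  --endpoint "test_model_sklearn" --model-id <model_uid> \
  --preprocess "examples/sklearn/preprocess.py"
```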
@@ -105,7 +105,7 @@ or with the `clearml-serving` CLI.
:::info Model Storage
You can also provide a different storage destination for the model, such as S3/GS/Azure, by passing
`--destination="s3://bucket/folder"`, `gs://bucket/folder`, `azure://bucket/folder`. There is no need to provide a unique
-path tp the destination argument, the location of the model will be a unique path based on the serving service ID and the
+path to the destination argument, the location of the model will be a unique path based on the serving service ID and the
model name
:::
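
For example, a sketch of uploading a model directly to S3 (bucket, paths, and names are placeholders):

```bash
# Upload the model file; it is stored under a unique path inside the bucket,
# derived from the serving service ID and the model name
clearml-serving --id <service_id> model upload \
  --name "manual sklearn model" --project "serving examples" \
  --framework "scikit-learn" --path examples/sklearn/sklearn-model.pkl \
  --destination "s3://bucket/folder"
```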
@@ -116,7 +116,7 @@ model name
The ClearML Serving Service supports automatic model deployment and upgrades, which is connected with the model
repository and API. When the model auto-deploy is configured, new model versions will be automatically deployed when you
`publish` or `tag` a new model in the ClearML model repository. This automation interface allows for a simpler CI/CD model
-deployment process, as a single API automatically deploy (or remove) a model from the Serving Service.
+deployment process, as a single API automatically deploys (or removes) a model from the Serving Service.
#### Automatic Model Deployment Example
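
A minimal sketch using the CLI's `model auto-update` command (endpoint, model name, and project are placeholders):

```bash
# Serve up to 2 of the most recent published versions of any matching model;
# newly published/tagged versions are deployed automatically
clearml-serving --id <service_id> model auto-update --engine sklearn \
  --endpoint "test_model_sklearn_auto" --preprocess "preprocess.py" \
  --name "train sklearn model" --project "serving examples" --max-versions 2
```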
@@ -142,7 +142,7 @@ deployment process, as a single API automatically deploy (or remove) a model fro
### Canary Endpoint Setup
-Canary endpoint deployment add a new endpoint where the actual request is sent to a preconfigured set of endpoints with
+Canary endpoint deployment adds a new endpoint where the actual request is sent to a preconfigured set of endpoints with
pre-provided distribution. For example, to create a new endpoint "test_model_sklearn_canary", you can provide a list
of endpoints and probabilities (weights).
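
A sketch of such a canary setup, assuming two existing endpoint versions and a 10%/90% traffic split:

```bash
# Route 10% of requests to version 2 and 90% to version 1 of the endpoint
clearml-serving --id <service_id> model canary \
  --endpoint "test_model_sklearn_canary" --weights 0.1 0.9 \
  --input-endpoints test_model_sklearn/2 test_model_sklearn/1
```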
@@ -195,7 +195,7 @@ Example:
ClearML serving instances send serving statistics (count/latency) automatically to Prometheus, and Grafana can be used
to visualize and create live dashboards.
-The default docker-compose installation is preconfigured with Prometheus and Grafana, do notice that by default data/state
+The default docker-compose installation is preconfigured with Prometheus and Grafana. Notice that by default data/state
of both containers is *not* persistent. To add persistence, we recommend adding a volume mount.
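
A sketch of such a mount as a docker-compose override (the service names and host paths are assumptions, not taken from the shipped compose file):

```yaml
# docker-compose.override.yml - persist metric data across container restarts
services:
  prometheus:
    volumes:
      - ./prometheus-data:/prometheus      # Prometheus TSDB storage
  grafana:
    volumes:
      - ./grafana-data:/var/lib/grafana    # Grafana dashboards/settings
```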
You can also add many custom metrics on the input/predictions of your models. Once a model endpoint is registered,


@@ -45,7 +45,7 @@ The structure of your pipeline will be derived from looking at this `parents` ar
Now we do the same for the final step. However, remember the empty hyperparameters we saw before? We still have to overwrite these. We can use the `parameter_override` argument to do just that.
-For example, we can tell the first step to use the global pipeline parameter raw data url like so. But we can also reference output artifacts from a previous step by using its name and we can of course also just overwrite a parameter with a normal value. Finally, we can even pass along the unique task ID of a previous step, so you can get the task object based on that ID and access anything and everything within that task.
+For example, we can tell the first step to use the global pipeline parameter raw data url like so. But we can also reference output artifacts from a previous step by using its name, and we can of course also just overwrite a parameter with a normal value. Finally, we can even pass along the unique task ID of a previous step, so you can get the task object based on that ID and access anything and everything within that task.
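
In code, that step definition could look something like the following sketch (`pipe` is the `PipelineController` built earlier; the step, artifact, and parameter names are illustrative):

```python
pipe.add_step(
    name="stage_train",
    parents=["stage_process"],
    base_task_project="examples",
    base_task_name="pipeline step 3 train model",
    parameter_override={
        # global pipeline parameter
        "General/raw_data_url": "${pipeline.raw_data_url}",
        # output artifact of a previous step, referenced by its name
        "General/dataset_url": "${stage_process.artifacts.processed_data.url}",
        # plain value override
        "General/test_size": 0.25,
        # unique task ID of a previous step
        "General/dataset_task_id": "${stage_process.id}",
    },
)
```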
And that's it! We now have our first pipeline!


@@ -48,7 +48,7 @@ This is followed by details from the cleanup.
an `APIClient` object that establishes a session with the ClearML Server, and accomplishes the cleanup by calling the following (see the sketch after this list):
* [`Tasks.get_all`](../../references/api/tasks.md#post-tasksget_all) to get a list of Tasks to delete, providing the following parameters:
* `system_tags` - Get only Tasks tagged as `archived`.
-* `status_changed` - Get Tasks whose last status change is older than then delete threshold (in seconds).
+* `status_changed` - Get Tasks whose last status change is older than the delete threshold (in seconds).
* [`Task.delete`](../../references/sdk/task.md#delete) - Delete a Task.
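
Combined, a minimal sketch of that flow (the 30-day threshold and the delete options are illustrative):

```python
from datetime import datetime
from time import time

from clearml import Task
from clearml.backend_api.session.client import APIClient

threshold_sec = 60 * 60 * 24 * 30  # assumed: delete archived Tasks idle for 30 days

client = APIClient()
# Get archived Tasks whose last status change is older than the threshold
tasks = client.tasks.get_all(
    system_tags=["archived"],
    only_fields=["id"],
    status_changed=["<{}".format(datetime.utcfromtimestamp(time() - threshold_sec))],
)

for task_info in tasks:
    # Fetch the SDK Task object and delete it along with its artifacts/models
    task = Task.get_task(task_id=task_info.id)
    task.delete(delete_artifacts_and_models=True, skip_models_used_by_other_tasks=True)
```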
## Configuration


@@ -2,11 +2,11 @@
title: The Dataviews Table
---
[Dataviews](../dataviews.md) appear in the same Project as the experiment that stored the Dataview in the ClearML Enterprise platform,
as well as in the **DATAVIEWS** tab in the **All Projects** page.
The **Dataviews table** is a [customizable](#customizing-the-dataviews-table) list of Dataviews associated with a project.
-Use it to view, create, and edit Dataviews in the info panel.
+Use it to view and create Dataviews, and access their info panels.
The table lists independent Dataview objects. To see Dataviews logged by a task, go
to the specific task's **DATAVIEWS** tab (see [Experiment Dataviews](webapp_exp_track_visual.md)).
View the Dataviews table in table view <img src="/docs/latest/icons/ico-table-view.svg" alt="Table view" className="icon size-md space-sm" />
or in details view <img src="/docs/latest/icons/ico-split-view.svg" alt="Details view" className="icon size-md space-sm" />,
@@ -84,7 +84,7 @@ The same information can be found in the bottom menu, in a tooltip that appears
## Creating a Dataview
-Create a new Dataview by clicking the **+ NEW DATAVIEW** button at the top right of the table, which open a
+Create a new Dataview by clicking the **+ NEW DATAVIEW** button at the top right of the table, which opens a
**NEW DATAVIEW** window.
![New Dataview window](../../img/webapp_dataview_new.png)


@@ -38,7 +38,7 @@ def main(pickle_url, mock_parameter='mock'):
the following format: `{'section_name':['param_name']}`. For example, the pipeline in the code above will store the
`pickle_url` parameter in the `General` section and `mock_parameter` in the `Mock` section. By default, arguments will
be stored in the `Args` section.
-* `pool_frequency` - The pooling frequency (in minutes) for monitoring experiments / states.
+* `pool_frequency` - The polling frequency (in minutes) for monitoring experiments / states.
* `add_pipeline_tags` - If `True`, add `pipe: <pipeline_task_id>` tag to all steps (Tasks) created by this pipeline
(this is useful to create better visibility in projects with multiple pipelines, and for easy selection) (default:
`False`).
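
Taken together, the decorator from the example above might be configured like this sketch (the values are illustrative):

```python
from clearml import PipelineDecorator

@PipelineDecorator.pipeline(
    name="pipeline demo",
    project="examples",
    version="0.1",
    args_map={"General": ["pickle_url"], "Mock": ["mock_parameter"]},
    pool_frequency=1.0,       # poll the monitored experiments/states every minute
    add_pipeline_tags=True,   # tag step Tasks with `pipe: <pipeline_task_id>`
)
def main(pickle_url, mock_parameter="mock"):
    ...
```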


@@ -10,7 +10,7 @@ This release is not backwards compatible
**Breaking Changes**
* `preprocess` and `postprocess` class functions get 3 arguments
-* Add support for per-request state storage, passing information between the pre/post processing functions
+* Add support for per-request state storage, passing information between the pre/post-processing functions
**Features & Bug Fixes**


@@ -37,7 +37,7 @@ The workers details panel includes the following two tabs:
* Current Experiment - The experiment currently being executed by the worker
* Experiment Runtime - How long the currently executing experiment has been running
* Experiment iteration - The last reported training iteration for the experiment
-* **QUEUES** - information about the queues that the worker is assigned to:
+* **QUEUES** - Information about the queues that the worker is assigned to:
* Queue - The name of the Queue
* Next experiment - The next experiment available in this queue
* In Queue - The number of experiments currently enqueued