Update docs (#563)

pollfly 2023-05-17 11:38:28 +03:00 committed by GitHub
parent 62be2cc493
commit 0a96748deb
34 changed files with 277 additions and 53 deletions


@@ -25,7 +25,7 @@ The diagram above demonstrates a typical flow where an agent executes a task:
1. Set up the python environment and required packages.
1. The task's script/code is executed.
-While the agent is running, it continuously reports system metrics to the ClearML Server (These can be monitored in the **Workers and Queues** page).
+While the agent is running, it continuously reports system metrics to the ClearML Server (These can be monitored in the **Orchestration** page).
Continue using ClearML Agent once it is running on a target machine. Reproduce experiments and execute
automated workflows in one (or both) of the following ways:


@@ -52,7 +52,7 @@ The diagram above demonstrates a typical flow where an agent executes a task:
1. The task's script/code is executed.
While the agent is running, it continuously reports system metrics to the ClearML Server. You can monitor these metrics
-in the [**Workers and Queues**](../webapp/webapp_workers_queues.md) page.
+in the [**Orchestration**](../webapp/webapp_workers_queues.md) page.
## Resource Management
Installing an Agent on machines allows it to monitor all the machine's status (GPU / CPU / Memory / Network / Disk IO).
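For context, an agent is typically launched on the target machine with the `clearml-agent` CLI, for example (the queue name here is illustrative):

```
# Run an agent daemon that monitors this machine and pulls tasks from the "default" queue
clearml-agent daemon --queue default
```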


@@ -198,6 +198,7 @@ If a FrameGroup doesn't have the selected preview source, the preview displays t
## Statistics
The **Statistics** tab displays a dataset version's label usage stats.
* Dataset total count - number of annotations, annotated frames, and total frames
* Each label is listed along with the number of times it was used in the version
* The pie chart visualizes these stats. Hover over a chart slice, and its associated label and usage
percentage will appear at the center of the chart.


@@ -102,6 +102,14 @@ The autoscaler dashboard shows:
shows polling results of the autoscaler's associated queues, including the number of tasks enqueued, and updates of EC2
instances being spun up/down.
:::tip EMBEDDING CLEARML VISUALIZATION
You can embed plots from the app instance dashboard into [ClearML Reports](../webapp_reports.md). These visualizations
are updated live as the app instance(s) update. The Enterprise Plan and Hosted Service support embedding resources in
external tools (e.g. Notion). Hover over the plot and click <img src="/docs/latest/icons/ico-plotly-embed-code.svg" alt="Embed code" className="icon size-md space-sm" />
to copy the embed code, and navigate to a report to paste it.
:::
## Generating AWS IAM Credentials
The autoscaler app accesses your AWS account with the credentials you provide.


@@ -40,4 +40,11 @@ Once a project dashboard instance is launched, its dashboard displays the follow
* Workers Table - List of active workers
* Failed Experiments - Failed experiments and their time of failure summary
:::tip EMBEDDING CLEARML VISUALIZATION
You can embed plots from the app instance dashboard into [ClearML Reports](../webapp_reports.md). These visualizations
are updated live as the app instance(s) update. The Enterprise Plan and Hosted Service support embedding resources in
external tools (e.g. Notion). Hover over the plot and click <img src="/docs/latest/icons/ico-plotly-embed-code.svg" alt="Embed code" className="icon size-md space-sm" />
to copy the embed code, and navigate to a report to paste it.
:::
![App dashboard](../../img/apps_dashboard.png)


@@ -89,6 +89,13 @@ The autoscaler dashboard shows:
shows polling results of the autoscaler's associated queues, including the number of tasks enqueued, and updates of VM
instances being spun up/down.
:::tip EMBEDDING CLEARML VISUALIZATION
You can embed plots from the app instance dashboard into [ClearML Reports](../webapp_reports.md). These visualizations
are updated live as the app instance(s) update. The Enterprise Plan and Hosted Service support embedding resources in
external tools (e.g. Notion). Hover over the plot and click <img src="/docs/latest/icons/ico-plotly-embed-code.svg" alt="Embed code" className="icon size-md space-sm" />
to copy the embed code, and navigate to a report to paste it.
:::
## Generating GCP Credentials


@@ -62,3 +62,10 @@ The GPU Compute dashboard shows:
* Number of current running cloud instances
* Instance History - Number of running cloud instances over time
* Console - The log shows updates of cloud instances being spun up/down.
:::tip EMBEDDING CLEARML VISUALIZATION
You can embed plots from the app instance dashboard into [ClearML Reports](../webapp_reports.md). These visualizations
are updated live as the app instance(s) update. The Enterprise Plan and Hosted Service support embedding resources in
external tools (e.g. Notion). Hover over the plot and click <img src="/docs/latest/icons/ico-plotly-embed-code.svg" alt="Embed code" className="icon size-md space-sm" />
to copy the embed code, and navigate to a report to paste it.
:::


@@ -87,3 +87,10 @@ The HPO dashboard shows:
* Summary - Experiment summary table: experiment execution information, objective metric and parameter values.
* Budget - Available iterations and tasks budget (percentage, out of the values defined in the HPO instance's advanced configuration)
* Resources - Number of workers servicing the HPO execution queue, and the number of currently running optimization tasks
:::tip EMBEDDING CLEARML VISUALIZATION
You can embed plots from the app instance dashboard into [ClearML Reports](../webapp_reports.md). These visualizations
are updated live as the app instance(s) update. The Enterprise Plan and Hosted Service support embedding resources in
external tools (e.g. Notion). Hover over the plot and click <img src="/docs/latest/icons/ico-plotly-embed-code.svg" alt="Embed code" className="icon size-md space-sm" />
to copy the embed code, and navigate to a report to paste it.
:::


@@ -54,4 +54,10 @@ The Task Scheduler dashboard shows:
* Scheduler Log - Application console log containing everything printed to stdout and stderr. The log
includes when the scheduler syncs, and when it launches tasks for execution.
:::tip EMBEDDING CLEARML VISUALIZATION
You can embed plots from the app instance dashboard into [ClearML Reports](../webapp_reports.md). These visualizations
are updated live as the app instance(s) update. The Enterprise Plan and Hosted Service support embedding resources in
external tools (e.g. Notion). Hover over the plot and click <img src="/docs/latest/icons/ico-plotly-embed-code.svg" alt="Embed code" className="icon size-md space-sm" />
to copy the embed code, and navigate to a report to paste it.
:::


@@ -38,7 +38,8 @@ versions to remove them from the archive. You can also permanently delete versio
On the right side of the dataset version panel, view the **VERSION INFO** which shows:
* Version name
* Dataset ID
-* Version file size
+* Parent task name (click to navigate to the parent task's page)
+* Version file size (original and compressed)
* Number of files
* Number of links
* Changes from previous version


@@ -45,18 +45,24 @@ The panel displays the steps name, task type, and status, as well as its para
To return to viewing the runs information, click the pipeline graph, outside any of the steps.
-### Run and Step Log
+### Run and Step Details Panel
-Click on **DETAILS** on the top left of the info panel to view a runs full console log. The log contains everything printed
-to stdout and stderr.
+Click on **DETAILS** on the top left of the info panel to view the pipeline controller's details panel. To view a step's
+details panel, click **DETAILS** and then click on a step node, or hover over a step node and click <img src="/docs/latest/icons/ico-console.svg" alt="details" className="icon size-md space-sm" />.
-To view a steps console log, click **DETAILS** and then click on a step.
+The details panel includes three tabs:
+* **Preview** - View debug samples and plots attached to the pipeline controller or step
-![Step console](../../img/webapp_pipeline_step_console.png)
+![preview](../../img/webapp_pipeline_step_debug.png)
-For pipelines steps generated from functions using either [`PipelineController.add_function_step`](../../references/sdk/automation_controller_pipelinecontroller.md#add_function_step)
+* **Console** - The console log for the pipeline controller or step: contains everything printed to stdout and stderr.
+![console](../../img/webapp_pipeline_step_console.png)
+* **Code** - For pipeline steps generated from functions using either [`PipelineController.add_function_step`](../../references/sdk/automation_controller_pipelinecontroller.md#add_function_step)
or [`PipelineDecorator.component`](../../references/sdk/automation_controller_pipelinecontroller.md#pipelinedecoratorcomponent),
-you can also view the selected steps code. On the top center
-of the console panel, click **Code**.
+you can view the selected step's code.
-![Step code](../../img/webapp_pipeline_step_code.png)
+![code](../../img/webapp_pipeline_step_code.png)
+Click <img src="/docs/latest/icons/ico-max-panel.svg" alt="Expand" className="icon size-md space-sm" /> on the details panel header to view the panel in full screen.


@@ -0,0 +1,64 @@
---
title: Comparing Models
---
The ClearML Web UI provides features for comparing models, allowing you to locate, visualize, and analyze model differences.
You can view the differences in model details, configuration, scalar values, and more.
## Selecting Models to Compare
To select models to compare:
1. Go to a models table that includes the models to be compared.
1. Select the models to compare. Once multiple models are selected, the batch action bar appears.
1. In the batch action bar, click **COMPARE**.
The comparison page opens in the **DETAILS** tab, showing a column for each model.
## Modifying Model Selection
You can modify the model selection while comparing.
1. Click **+ Add Model** in the top left corner of any of the comparison pages. This will open up a window with a model
table with the currently compared models at the top.
1. Find the models to add by sorting and [filtering](webapp_model_table.md#filtering-columns) the models with the
appropriate column header controls. Alternatively, use the search bar to find models by name.
1. Select models to include in the comparison (and/or clear the selection of any models you wish to remove).
1. Click **APPLY**.
## Comparison Modes
The comparison tabs provide the following views:
* Side-by-side textual comparison
* Merged plot comparison
* Side-by-side graphic comparison
### Side-by-side Textual Comparison
In the **Details**, **Network**, and **Scalars** (Values mode) tabs, you can view differences in the models' nominal
values. **Details** displays the models' general information, labels, and metadata. **Network** displays the models'
configuration. **Scalars** (in Values mode) displays the models' scalar values (min, max, or last). Each model's
information is displayed in a column, so each field is lined up side-by-side.
The model on the left is used as the base model, to which the other models are compared. You can set a new base model
in one of the following ways:
* Click <img src="/docs/latest/icons/ico-switch-base.svg" alt="Switch base experiment" className="icon size-md space-sm" />
on the top right of the model that will be the new base.
* Click on the new base model and drag it all the way to the left.
The differences between the models are highlighted. You can obscure identical fields by switching on the
**Hide Identical Fields** toggle.
![Text comparison](../img/webapp_compare_models_text.png)
### Graphic Comparison
The **Scalars** (Graph mode) and **Plots** tabs display plots attached to the models. The **Scalars** tab compares
scalar values as time series line charts. The **Plots** tab compares the last reported iteration sample of each
metric/variant combination per compared model.
Line, scatter, and bar graphs are merged into a single plot per metric/variant, combining the plots of all compared
models.
![Merged plots](../img/webapp_compare_models_merge_plots.png)
The rest of the plots, which can't be merged, are displayed separately for each model.
![Side-by-side plots](../img/webapp_compare_models_side_plots.png)
For better plot analysis, see [Plot Controls](webapp_exp_track_visual.md#plot-controls).


@@ -32,6 +32,7 @@ The models table contains the following columns:
| **TASK** | The experiment (Task) name that created the model. | String |
| **UPDATED** | Elapsed time since the model was updated. Hover over the elapsed time to view the date and time. | Date-time |
| **DESCRIPTION** | The model description (not shown by default). | String |
| *Metrics* | Add metrics column (last, minimum, and/or maximum values). Available options depend upon the models in the table. | Varies according to models in table |
| *Metadata* | User defined metadata key column. Available options depend upon the models in the table. | String |
@@ -48,11 +49,11 @@ Customize the table using any of the following:
* Changing table columns
* Show / hide columns - Click <img src="/docs/latest/icons/ico-settings.svg" alt="Setting Gear" className="icon size-md" />
**>** mark or clear the checkboxes of columns to show or hide.
-* Add custom columns - Click **+ ADD CUSTOM METADATA COLUMN** to add metadata columns to the main column list. Added
-columns are by default displayed in the table. You can remove the metadata columns from the main column list or the
-column addition window.
-* Filter columns - By ML framework, tags, user
-* Sort columns - By metadata, ML framework, description, and last update elapsed time.
+* Add custom columns - Click **+ METRICS** or **+ METADATA** to add metric / metadata columns to the
+main column list. Added columns are by default displayed in the table. You can remove the custom metadata columns
+from the main column list or the column addition window.
+* Filter columns - By metadata, metric, ML framework, tags, user
+* Sort columns - By metadata, metric, ML framework, description, and last update elapsed time.
:::note
The following models-table customizations are saved on a **per-project** basis:
@@ -71,9 +72,11 @@ all the models in the project. The customizations of these two views are saved s
The following table describes the actions that can be done from the models table, including the states that
allow each feature. Model states are *Draft* (editable) and *Published* (read-only).
-Access these actions with the context menu in any of the following ways:
-* In the models table, right-click a model, or hover over a model and click <img src="/docs/latest/icons/ico-dots-v-menu.svg" alt="Dot menu" className="icon size-md space-sm" />
+Access these actions in any of the following ways:
+* In the models table, right-click a model, or hover over a model and click <img src="/docs/latest/icons/ico-dots-v-menu.svg" alt="Dot menu" className="icon size-md space-sm" /> to
+open the context menu
* In a model's info panel, click the menu button <img src="/docs/latest/icons/ico-bars-menu.svg" alt="Bar menu" className="icon size-md space-sm" />
+* Through the batch action bar, available at screen bottom when multiple models are selected
| ClearML Action | Description | States Valid for the Action |
|---|---|--|
@@ -84,7 +87,8 @@ Access these actions with the context menu in any of the following ways:
| Delete | Action available in the archive. Permanently delete the model. This will also remove the model weights file. Note that experiments using deleted models will no longer be able to run. | Any state |
| Add Tag | Tag models with color-coded labels to assist in organizing work. See [tagging models](#tagging-models). | Any state |
| Download | Download a model. The file format depends upon the framework. | *Published* |
-| Move to Project | To organize work and improve collaboration, move a model to another project. | Any state |
+| Move to Project | Move a model to another project. | Any state |
+| Compare | Compare selected models (see [Comparing Models](webapp_model_comparing.md)). | Any state |
| Custom action | The ClearML Enterprise Server provides a mechanism to define your own custom actions, which will appear in the context menu. See [Custom UI Context Menu Actions](../deploying_clearml/clearml_server_config.md#custom-ui-context-menu-actions). | Any state |
Some actions mentioned in the chart above can be performed on multiple models at once.


@@ -7,6 +7,8 @@ In the models table, double-click on a model to view and/or modify the following
* Model configuration
* Model label enumeration
* Model metadata
* Model scalars and other plots
Models in *Draft* status are editable, so you can modify their configuration, label enumeration, and metadata.
*Published* models are read-only, so only their metadata can be modified.
@@ -78,17 +80,29 @@ Use the search bar to look for experiments based on their name, ID, or descripti
## Scalars
-The **SCALARS** tab displays all scalar plots attached to a model. Scalar values are presented as time series line
-charts. To see the series for a metric in high resolution, view it in full screen mode by hovering over the graph and
+The **SCALARS** tab displays all scalar plots attached to a model. Scalar values are presented as time series line
+plots. To see the series for a metric in high resolution, view it in full screen mode by hovering over the graph and
clicking <img src="/docs/latest/icons/ico-maximize.svg" alt="Maximize plot icon" className="icon size-sm space-sm" />.
-Reported single value scalars are aggregated into a table plot displaying scalar names and values.
+To embed scalar plots in your [Reports](webapp_reports.md), hover over a plot and click <img src="/docs/latest/icons/ico-plotly-embed-code.svg" alt="Embed code" className="icon size-md space-sm" />,
+which will copy to clipboard the embed code to put in your Reports. In contrast to static screenshots, embedded resources
+are retrieved when the report is displayed, allowing your reports to show the latest up-to-date data.
+For better plot analysis, see [Plot Controls](webapp_exp_track_visual.md#plot-controls).
+Reported single value scalars are aggregated into a table plot displaying scalar names and values.
![Model scalars](../img/webapp_model_scalars.png)
## Plots
-The **PLOTS** tab displays plots attached to a model. For better plot analysis, see [Plot Controls](webapp_exp_track_visual.md#plot-controls).
+The **PLOTS** tab displays plots attached to a model.
+To embed plots in your [Reports](webapp_reports.md), hover over a plot and click <img src="/docs/latest/icons/ico-plotly-embed-code.svg" alt="Embed code" className="icon size-md space-sm" />,
+which will copy to clipboard the embed code to put in your Reports. In contrast to static screenshots, embedded resources
+are retrieved when the report is displayed, allowing your reports to show the latest up-to-date data.
+For better plot analysis, see [Plot Controls](webapp_exp_track_visual.md#plot-controls).
![Model plots](../img/webapp_model_plots.png)


@@ -27,7 +27,7 @@ The WebApp's sidebar provides access to the following modules:
* [Datasets](datasets/webapp_dataset_page.md) <img src="/docs/latest/icons/ico-side-bar-datasets.svg" alt="Datasets" className="icon size-md space-sm" /> - View and manage your datasets.
* [Pipelines](pipelines/webapp_pipeline_page.md) <img src="/docs/latest/icons/ico-pipelines.svg" alt="Pipelines" className="icon size-md space-sm" /> - View and manage your pipelines.
* [Reports](webapp_reports.md) <img src="/docs/latest/icons/ico-reports.svg" alt="Reports" className="icon size-md space-sm" /> - View and manage your reports.
-* [Workers and Queues](webapp_workers_queues.md) <img src="/docs/latest/icons/ico-workers.svg" alt="Workers and Queues" className="icon size-md space-sm" /> - The resource monitoring and queues management page.
+* [Orchestration](webapp_workers_queues.md) <img src="/docs/latest/icons/ico-workers.svg" alt="Workers and Queues" className="icon size-md space-sm" /> - Autoscale, monitor, and manage your resource usage and worker queues.
* [Applications](applications/apps_overview.md) <img src="/docs/latest/icons/ico-applications.svg" alt="ClearML Apps" className="icon size-md space-sm" /> - ClearML's GUI applications for no-code workflow execution.
## UI Top Bar


@@ -15,6 +15,8 @@ The Settings page consists of the following sections:
* [Configuration vault](#configuration-vault) (ClearML Enterprise Server) - Define global ClearML client settings
that are applied to all ClearML and ClearML Agent instances (which use the workspace's access
credentials)
* [Administrator Vaults](#administrator-vaults) (ClearML Enterprise Server) - Manage user-group level configuration
vaults to apply ClearML client settings to all members of the user groups
* [Users & Groups](#users--groups) - Manage the users that have access to a workspace
* [Access Rules](#access-rules) (ClearML Enterprise Server) - Manage per-resource access privileges
* [Usage & Billing](#usage--billing) (ClearML Hosted Service) - View current usage information and billing details
@@ -159,6 +161,41 @@ Fill in values using any of ClearML supported configuration formats: HOCON / JSO
![Configuration vault](../img/settings_configuration_vault.png)
## Administrator Vaults
:::info Enterprise Feature
This feature is available under the ClearML Enterprise plan
:::
Administrators can define multiple [configuration vaults](#configuration-vault) which will each be applied to designated
[user groups](#user-groups). Use configuration vaults to extend and/or override entries in the local ClearML [configuration file](../configs/clearml_conf.md)
where a ClearML task is executed. Configuration vault values will be applied to tasks run by members of the designated user groups.
To apply its contents, a vault should be enabled. New entries will extend the configuration in the local ClearML [configuration file](../configs/clearml_conf.md).
Existing configuration file entries will be overridden by the vault values.
**To create a vault:**
1. Click **+ Add Vault**
1. Fill in vault details:
1. Vault name - Name that appears in the Administrator Vaults table
1. User Group - Specify the User Group that the vault affects
1. Format - Specify the configuration format: HOCON / JSON / YAML.
1. Fill in the configuration values (click <img src="/docs/latest/icons/ico-info.svg" alt="Info" className="icon size-md space-sm" />
to view configuration file reference). To import an existing configuration file, click <img src="/docs/latest/icons/ico-import.svg" alt="Import" className="icon size-md space-sm" />.
1. Click **Save**
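For illustration, a vault that routes all task artifacts to shared storage might contain the following (a minimal sketch in HOCON; the bucket path is a placeholder, and the keys follow the [ClearML configuration file](../configs/clearml_conf.md) schema):

```
# Applied on top of each group member's local clearml.conf:
# new keys extend the configuration; existing keys are overridden by the vault
sdk {
  development {
    # placeholder bucket - store task outputs in shared storage
    default_output_uri: "s3://shared-bucket/experiments"
  }
}
```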
The **Administrator Vaults** table lists all currently defined vaults along with the following details:
* Active - Toggle to enable / disable the vault
* Name - Vault name
* Group - User groups to apply this vault to
* ID - Vault ID (click to copy)
* Vault Content - Vault content summary
* Update - Last update time
Hover over a vault in the table to download, edit, or delete it.
![Admin vaults](../img/settings_admin_vaults.png)
## Users & Groups
ClearML Hosted Service users can add users to their workspace.


@@ -29,7 +29,7 @@ Reports are editable Markdown documents, supporting:
* Code blocks
* Text and image hyperlinks
* Embedded images uploaded from your computer
-* Embedded ClearML task content
+* Embedded ClearML task, model, and [app](applications/apps_overview.md) content
![Report](../img/webapp_report.png)
@@ -39,13 +39,14 @@ download a PDF copy, or simply copy the MarkDown content and reuse in your edito
Access ClearML reports through the [Reports Page](#reports-page).
## Embedding ClearML Visualizations
-You can embed plots and images from your experiments into your reports: scalar graphs and other plots, and debug samples
-from an individual experiment or from an experiment comparison page. These visualizations are updated live as the
-experiment(s) updates.
+You can embed plots and images from your ClearML objects (experiments, models, and apps) into your reports: scalar
+graphs and other plots, and debug samples
+from an individual object or from an object comparison page. These visualizations are updated live as the
+object(s) update.
To add a graphic resource:
-1. Go to the resource you want to embed in your report (a plot or debug sample from an individual experiment or
-experiment comparison)
+1. Go to the resource you want to embed in your report (a plot or debug sample from an individual object or
+object comparison)
2. Hover over the resource and click <img src="/docs/latest/icons/ico-plotly-embed-code.svg" alt="Generate embed code" className="icon size-md space-sm" />.
![Reports step 2](../img/reports_step_2.png)
@@ -59,6 +60,9 @@ experiment comparison)
![Reports step 3](../img/reports_step_3.png)
Once embedded in the report, you can return to the resource's original location (e.g. comparison page, experiment/model/app page)
by clicking <img src="/docs/latest/icons/ico-resource-return.svg" alt="Return to resource" className="icon size-md" />.
### Customizing Embed Code
You can customize embed codes to make more elaborate queries for what you want to display in your reports.
@@ -66,7 +70,7 @@ A standard embed code is formatted like this:
```
<iframe
-src="<web_server>/widgets/?type=sample&tasks=<task_id>&metrics=<metric_name>&variants=Plot%20as%20an%20image&company=<company/workspace_id>"
+src="<web_server>/widgets/?type=sample&objectType=task&objects=<object_id>&metrics=<metric_name>&variants=Plot%20as%20an%20image&company=<company/workspace_id>"
width="100%" height="400"
></iframe>
```
@@ -80,9 +84,14 @@ The query is formatted like a standard query string: `<parameter>=<parameter_val
delimited with a `&`: `<parameter_1>=<parameter_value_1>&<parameter_2>=<parameter_value_2>`.
The query string usually includes the following parameters:
* `objectType` - The type of object to fetch. The options are `task` or `model` (`task` also includes ClearML app instances).
* `objects` - Object IDs (i.e. task or model IDs, depending on the specified `objectType`). Specify multiple IDs like this:
`objects=<id>&objects=<id>&objects=<id>`. Alternatively, you can input a query, and the matching objects' specified
resources will be displayed. See [Dynamic Queries](#dynamic-queries) below.
* `type` - The type of resource to fetch. The options are:
* `plot`
* `scalar`
* `single` (single-scalar values table)
* `sample` (debug sample)
* `parcoords` (hyperparameter comparison plots) - for this option, you need to also specify the following parameters:
* `metrics` - Unique metric/variant ID formatted like `metric_id.variant_id` (find with your browser's inspect. See note [below](#event_id))
@@ -91,8 +100,7 @@ The query string usually includes the following parameters:
* `min_value`
* `max_value`
* `value` (last value)
-* `tasks` - Task IDs. Specify multiple IDs like this: `tasks=<id>&tasks=<id>&tasks=<id>`. Alternatively, you can
-specify a task query which will use its results as the tasks to display. See [Dynamic Task Queries](#dynamic-task-queries) below.
+* `models` - Model IDs. Specify multiple IDs like this: `models=<id>&models=<id>&models=<id>`.
* `metrics` - Metric name
* `variants` - Variants name
* `company` - Workspace ID. Applicable to the ClearML hosted service, for embedding content from a different workspace
@@ -103,9 +111,10 @@ For strings, make sure to use the appropriate URL encoding. For example, if the
write `Metric%20Name`
:::
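To make these parameters concrete, here is an illustrative embed code that displays a merged scalar plot for two models (the model IDs and metric/variant names are placeholders):

```
<iframe
src="<web_server>/widgets/?objectType=model&objects=<model_id_1>&objects=<model_id_2>&type=scalar&metrics=<metric_name>&variants=<variant_name>"
width="100%" height="400"
></iframe>
```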
-### Dynamic Task Queries
-You can create more complex queries by specifying task criteria (e.g. tags, statuses, projects, etc.) instead of
-specific task IDs, with parameters from the [`tasks.get_all`](../references/api/tasks.md#post-tasksget_all) API call.
+### Dynamic Queries
+You can create more complex queries by specifying object criteria (e.g. tags, statuses, projects, etc.) instead of
+specific task IDs, with parameters from the [`tasks.get_all`](../references/api/tasks.md#post-tasksget_all) or
+[`models.get_all`](../references/api/models.md#post-modelsget_all) API calls.
For these parameters, use the following syntax:
* `key=value` for non-array fields
@@ -113,11 +122,15 @@ For these parameters, use the following syntax:
Delimit the fields with `&`s.
-**Examples:**
+#### Examples:
+The following are examples of dynamic queries. All the examples use `objectType=task`, but `objectType=model` can also be
+used.
* Request the scalars plot of a specific metric variant for the latest experiment in a project:
```
src="<web_server>/widgets/?type=scalar&metrics=<metric_name>&variants=<variant>&project=<project_id>&page_size=1&page=0&order_by[]=-last_update
src="<web_server>/widgets/?objectType=task&type=scalar&metrics=<metric_name>&variants=<variant>&project=<project_id>&page_size=1&page=0&order_by[]=-last_update
```
Notice that the `project` parameter is specified. In order to get the most recent single experiment,
`page_size=1&page=0&order_by[]=-last_update` is added. `page_size` specifies how many results are returned in each
@@ -127,26 +140,26 @@ Delimit the fields with `&`s.
* Request the scalars plot of a specific metric variant for the experiments with a specific tag:
```
-src="<web_server>/widgets/?type=scalar&metrics=<metric_name>&variants=<variant>&tags[]=__$or,<tag>
+src="<web_server>/widgets/?objectType=task&type=scalar&metrics=<metric_name>&variants=<variant>&tags[]=__$or,<tag>
```
A list of tags that the experiment should contain is specified in the `tags` argument. You can also specify tags that
exclude experiments. See tag filter syntax examples [here](../clearml_sdk/task_sdk.md#tag-filters).
* Request the `training/accuracy` scalar plot of the 5 experiments with the best accuracy scores
```
src="<web_server>?type=scalar&metrics=training&variants=accuracy&project=4043a1657f374e9298649c6ba72ad233&page_size=5&page=0&order_by[]=-last_metrics.<metric_event_id>.<variant_event_id>.value"
src="<web_server>?objectType=task&type=scalar&metrics=training&variants=accuracy&project=4043a1657f374e9298649c6ba72ad233&page_size=5&page=0&order_by[]=-last_metrics.<metric_event_id>.<variant_event_id>.value"
```
<a id="event_id"></a>
:::tip Event IDs
-The `tasks.get_all` API calls parameters sometimes need event IDs, instead of names. To find event IDs:
-1. Go to the relevant Experiments table > Open the **Developer Tools** window (inspect) > click **Network**.
+The `tasks.get_all` and `models.get_all` API calls' parameters sometimes need event IDs, instead of names. To find event IDs:
+1. Go to the relevant Experiments/Model table > Open the **Developer Tools** window (inspect) > click **Network**.
1. Execute the action you want the embed code to do (e.g. sort by update time, sort by accuracy).
-1. Click on the API call `task.get_all_ex` that appears in the **Network** tab.
+1. Click on the API call `tasks.get_all_ex`/`models.get_all_ex` that appears in the **Network** tab.
1. Click on the **Payload** panel.
1. Click on the relevant parameter to see the relevant event's ID. For example, if you sorted by experiment accuracy,
-you will see the metrics event ID under the `order_by` parameter.
+you will see the metric's event ID under the `order_by` parameter.
:::
@@ -162,7 +175,7 @@ top-level projects are displayed. Click on a project card to view the project's
![Report page](../img/webapp_report_page.png)
## Project Cards
-In Project view, project cards display a projects summarized report information:
+In Project view, project cards display a project's summarized report information:
<div class="max-w-50">
@@ -201,7 +214,7 @@ of a report card to open its context menu and access report actions:
</div>
-* **Rename** - Change the reports name
+* **Rename** - Change the report's name
* **Share** - Copy URL to share report
* **Add Tag** - Add labels to the report to help easily classify groups of reports.
* **Move to** - Move the report into another project. If the target project does not exist, it is created on-the-fly.


@@ -1,14 +1,49 @@
---
-title: Workers and Queues
+title: Orchestration
---
-With the **Workers and Queues** page, users can:
+With the **Orchestration** page, you can:
+* Use Cloud autoscaling apps to define your compute resource budget, and have the apps automatically manage your resource
+consumption as needed, with no code (available under the ClearML Pro plan)
* Monitor resources (CPU and GPU, memory, video memory, and network usage) used by the experiments / Tasks that workers
execute
* View workers and the queues they listen to
-* Create and rename queues; delete empty queues; monitor queue utilization
-* Reorder, move, and remove experiments from queues
+* Manage worker queues
+  * Create and rename queues
+  * Delete empty queues
+  * Monitor queue utilization
+  * Reorder, move, and remove experiments from queues
## Autoscalers
:::info Pro Plan Offering
The ClearML Autoscaler apps are available under the ClearML Pro plan
:::
Use the **AUTOSCALERS** tab to access ClearML's cloud autoscaling applications:
* GPU Compute (powered by Genesis Cloud)
* AWS Autoscaler
* GCP Autoscaler
The autoscalers automatically spin up or down cloud instances as needed and according to a budget that you set, so you
pay only for the time that you actually use the machines.
The **AWS** and **GCP** autoscaler applications will manage instances on your behalf in your cloud account. When
launching an app instance, you will provide your cloud service credentials so the autoscaler can access your account.
The **GPU Compute** application provides on-demand GPU instances powered by Genesis. All you need to do is define your
compute resource budget, and you're good to go.
Once you launch an autoscaler app instance, you can monitor the autoscaler's activity and your cloud usage in the instance's
dashboard.
For more information about how autoscalers work, see the [Cloud Autoscaling Overview](../cloud_autoscaling/autoscaling_overview.md).
For more information about a specific autoscaler, see [GPU Compute](applications/apps_gpu_compute.md), [AWS Autoscaler](applications/apps_aws_autoscaler.md),
and/or [GCP Autoscaler](applications/apps_gcp_autoscaler.md).
![Cloud autoscalers](../img/webapp_orchestration_autoscalers.png)
## Workers


@@ -69,8 +69,7 @@ module.exports = {
'webapp/webapp_exp_comparing']
},
{
-'Models': ['webapp/webapp_model_table', 'webapp/webapp_model_viewing']
+'Models': ['webapp/webapp_model_table', 'webapp/webapp_model_viewing', 'webapp/webapp_model_comparing']
},
'webapp/webapp_exp_sharing'
]


@@ -0,0 +1,4 @@
<svg xmlns="http://www.w3.org/2000/svg" width="24" height="24" viewBox="0 0 24 24">
<path style="fill:none" d="M0 0h24v24H0z"/>
<path data-name="al-ico-info-circle" d="M12 2a10 10 0 1 0 10 10A10 10 0 0 0 12 2m1.5 16h-3v-6a1.5 1.5 0 0 1 3 0zM12 9a1.5 1.5 0 1 1 1.5-1.5A1.5 1.5 0 0 1 12 9" style="fill:#8492c2"/>
</svg>



@@ -0,0 +1,4 @@
<svg xmlns="http://www.w3.org/2000/svg" width="16" height="16" viewBox="0 0 16 16">
<path style="fill:none" d="M0 0h16v16H0z"/>
<path data-name="external" d="M-23420.668-693.668A1.332 1.332 0 0 1-23422-695v-10.667a1.332 1.332 0 0 1 1.334-1.332h5.332v1.332h-5.332V-695h10.666v-5.333h1.336V-695a1.335 1.335 0 0 1-1.336 1.332zm4.861-7.136 4.863-4.863h-1.725V-707h4v4h-1.331v-1.724l-4.859 4.859z" transform="translate(23423.334 708.333)" style="fill:#8693be;stroke:transparent"/>
</svg>
