]
-```
+```
Lists all datasets in the system that match the search request.
Datasets can be searched by project, name, ID, and tags.
-
+
**Parameters**
|Name|Description|Optional|
|---|---|---|
-|ids|A list of dataset IDs||
-|project|The project name of the datasets||
-|name|A dataset name or a partial name to filter datasets by||
-|tags|A list of dataset user tags||
+|ids|A list of dataset IDs||
+|project|The project name of the datasets||
+|name|A dataset name or a partial name to filter datasets by||
+|tags|A list of dataset user tags||
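A minimal usage sketch, assuming this section documents `Dataset.list_datasets()` (the project, name, and tag values are placeholders, and argument names can vary slightly between SDK versions):

```
from clearml import Dataset

# List dataset versions matching a project, a partial name, and tags.
datasets = Dataset.list_datasets(
    dataset_project="my_project",   # placeholder project name
    partial_name="my_dataset",      # placeholder partial dataset name
    tags=["dev"],                   # placeholder tag list
)
for dataset_entry in datasets:
    print(dataset_entry)
```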
@@ -251,7 +251,7 @@ All API commands should be imported with
`from clearml import Dataset`
-#### `Dataset.get(dataset_id=DS_ID).get_local_copy()`
+#### `Dataset.get(dataset_id=DS_ID).get_local_copy()`
Returns a path to dataset in cache, and downloads it if it is not already in cache.
@@ -259,84 +259,84 @@ Returns a path to dataset in cache, and downloads it if it is not already in cac
|Name|Description|Optional|
|---|---|---|
-|use_soft_links|If True, use soft links. Default: False on Windows, True on Posix systems||
-|raise_on_error|If True, raise exception if dataset merging failed on any file||
+|use_soft_links|If True, use soft links. Default: False on Windows, True on Posix systems||
+|raise_on_error|If True, raise exception if dataset merging failed on any file||
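A minimal usage sketch (`DS_ID` is a placeholder for a real dataset ID):

```
from clearml import Dataset

DS_ID = "<dataset-id>"  # placeholder dataset ID

# Returns the cached dataset folder, downloading the dataset first if it
# is not already in the cache. Treat the returned path as read-only.
dataset_path = Dataset.get(dataset_id=DS_ID).get_local_copy()
print(dataset_path)
```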
-#### `Dataset.get(dataset_id=DS_ID).get_mutable_local_copy()`
+#### `Dataset.get(dataset_id=DS_ID).get_mutable_local_copy()`
-Downloads the dataset to a specific folder (non-cached). If the folder already has contents, specify whether to overwrite
+Downloads the dataset to a specific folder (non-cached). If the folder already has contents, specify whether to overwrite
its contents with the dataset contents.
-
+
**Parameters**
|Name|Description|Optional|
|---|---|---|
-|target_folder|Local target folder for the writable copy of the dataset||
-|overwrite|If True, recursively delete the contents of the target folder before creating a copy of the dataset. If False (default) and target folder contains files, raise exception or return None||
-|raise_on_error|If True, raise exception if dataset merging failed on any file||
+|target_folder|Local target folder for the writable copy of the dataset||
+|overwrite|If True, recursively delete the contents of the target folder before creating a copy of the dataset. If False (default) and target folder contains files, raise exception or return None||
+|raise_on_error|If True, raise exception if dataset merging failed on any file||
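A minimal usage sketch (the dataset ID and target folder are placeholders):

```
from clearml import Dataset

DS_ID = "<dataset-id>"  # placeholder dataset ID

# Downloads a writable, non-cached copy into target_folder. With
# overwrite=True, any existing contents of the folder are deleted first.
target_path = Dataset.get(dataset_id=DS_ID).get_mutable_local_copy(
    target_folder="./my_dataset_copy",
    overwrite=True,
)
```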
#### `Dataset.create()`
-Create a new dataset.
+Create a new dataset.
-Parent datasets can be specified, and the new dataset inherits all of its parent's content. Multiple dataset parents can
-be listed. Merging of parent datasets is done based on the list's order, where each parent can override overlapping files
+Parent datasets can be specified, and the new dataset inherits all of its parent's content. Multiple dataset parents can
+be listed. Merging of parent datasets is done based on the list's order, where each parent can override overlapping files
in the previous parent dataset.
-
+
**Parameters**
|Name|Description|Optional|
|---|---|---|
-|dataset_name|Name of the new dataset||
-|dataset_project|The project containing the dataset. If not specified, infer project name from parent datasets. If there is no parent dataset, then this value is required||
-|parent_datasets|Expand a parent dataset by adding / removing files||
-|use_current_task|If True, the dataset is created on the current Task. Default: False||
+|dataset_name|Name of the new dataset||
+|dataset_project|The project containing the dataset. If not specified, infer project name from parent datasets. If there is no parent dataset, then this value is required||
+|parent_datasets|Expand a parent dataset by adding / removing files||
+|use_current_task|If True, the dataset is created on the current Task. Default: False||
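A minimal usage sketch (the names and the parent dataset ID are placeholders; `parent_datasets` can be omitted to start from an empty dataset):

```
from clearml import Dataset

# Create a new dataset that inherits the contents of its parent.
ds = Dataset.create(
    dataset_name="my_dataset",                # placeholder name
    dataset_project="my_project",             # placeholder project
    parent_datasets=["<parent-dataset-id>"],  # placeholder parent ID
)
```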
-#### `Dataset.add_files()`
-
+#### `Dataset.add_files()`
+
Add files or a folder to the current dataset.
**Parameters**
|Name|Description|Optional|
|---|---|---|
-|path|Add a folder / file to the dataset||
-|wildcard|Add only a specific set of files based on wildcard matching. Wildcard matching can be a single string or a list of wildcards, for example: `~/data/*.jpg`, `~/data/json`||
-|local_base_folder|Files will be located based on their relative path from local_base_folder||
-|dataset_path|Where in the dataset the folder / files should be located||
-|recursive|If True, match all wildcard files recursively||
-|verbose|If True, print to console files added / modified||
+|path|Add a folder / file to the dataset||
+|wildcard|Add only a specific set of files based on wildcard matching. Wildcard matching can be a single string or a list of wildcards, for example: `~/data/*.jpg`, `~/data/json`||
+|local_base_folder|Files will be located based on their relative path from local_base_folder||
+|dataset_path|Where in the dataset the folder / files should be located||
+|recursive|If True, match all wildcard files recursively||
+|verbose|If True, print to console files added / modified||
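Continuing the `Dataset.create()` sketch above (the path and wildcard are illustrative):

```
# Recursively add all JPEG files under ~/data, placing them under the
# "images" folder inside the dataset.
ds.add_files(
    path="~/data",
    wildcard="*.jpg",
    dataset_path="images",
    recursive=True,
    verbose=True,
)
```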
#### `Dataset.upload()`
Start file uploading; the function returns when all files are uploaded.
-
+
**Parameters**
|Name|Description|Optional|
|---|---|---|
-|show_progress|If True, show upload progress bar||
-|verbose|If True, print verbose progress report||
-|output_url|Target storage for the compressed dataset (default: file server). Examples: `s3://bucket/data`, `gs://bucket/data`, `azure://bucket/data`, `/mnt/share/data`||
-|compression|Compression algorithm for the Zipped dataset file (default: ZIP_DEFLATED)||
+|show_progress|If True, show upload progress bar||
+|verbose|If True, print verbose progress report||
+|output_url|Target storage for the compressed dataset (default: file server). Examples: `s3://bucket/data`, `gs://bucket/data`, `azure://bucket/data`, `/mnt/share/data`||
+|compression|Compression algorithm for the Zipped dataset file (default: ZIP_DEFLATED)||
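Continuing the sketch above (the S3 bucket is a placeholder; omit `output_url` to use the default file server):

```
# Blocks until the compressed dataset has been uploaded to the target storage.
ds.upload(show_progress=True, output_url="s3://my-bucket/datasets")
```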
#### `Dataset.finalize()`
-Closes the dataset and marks it as *Completed*. After a dataset has been closed, it can no longer be modified.
+Closes the dataset and marks it as *Completed*. After a dataset has been closed, it can no longer be modified.
Before closing a dataset, its files must first be uploaded.
**Parameters**
|Name|Description|Optional|
|---|---|---|
-|verbose|If True, print verbose progress report||
-|raise_on_error|If True, raise exception if dataset finalizing failed||
+|verbose|If True, print verbose progress report||
+|raise_on_error|If True, raise exception if dataset finalizing failed||
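Continuing the sketch above:

```
# The dataset files must already be uploaded; after finalizing, the dataset
# is marked Completed and can no longer be modified.
ds.finalize(verbose=True)
```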
diff --git a/docs/webapp/webapp_archiving.md b/docs/webapp/webapp_archiving.md
index 7f3bed13..c4af6eaf 100644
--- a/docs/webapp/webapp_archiving.md
+++ b/docs/webapp/webapp_archiving.md
@@ -2,13 +2,13 @@
title: Archiving
---
-Archive experiments and models to improve the organization of active work. Archived experiments and models do not appear
-in the active (main) experiments and models tables. They only appear in the archive. Experiments can be restored from the
+Archive experiments and models to improve the organization of active work. Archived experiments and models do not appear
+in the active (main) experiments and models tables. They only appear in the archive. Experiments can be restored from the
archive.
-When archiving an experiment:
-
-* If it is enqueued to execute (its status is *Pending*), the experiment is automatically dequeued (its status becomes
+When archiving an experiment:
+
+* If it is enqueued to execute (its status is *Pending*), the experiment is automatically dequeued (its status becomes
*Draft*).
* If it is shared (**ClearML Hosted Service** only), the experiment becomes private.
@@ -17,26 +17,26 @@ When archiving an experiment:
* Archive an experiment or model from either the:
* Experiments or models table - Right click the experiment or model **>** **Archive**.
- * Info panel or full screen details view - Click (menu) **>** **Archive**.
-
+ * Info panel or full screen details view - Click (menu) **>** **Archive**.
+
* Archive multiple experiments or models from the:
- * Experiments or models table - Multi-select or individually select the checkboxes of the experiments to archive **>** In the footer menu that appears, click **ARCHIVE**.
-
+ * Experiments or models table - Multi-select or individually select the checkboxes of the experiments to archive **>** In the footer menu that appears, click **ARCHIVE**.
+
**To restore experiments or models:**
-1. Go to the experiment table of the archived experiment's project, or to the **All Projects** experiments table.
+1. Go to the experiment table of the archived experiment's project, or to the **All Projects** experiments table.
1. Click **OPEN ARCHIVE** on the top left of the page.
1. Select the experiment(s) or model(s):
- * Restore an experiment or model from either the:
-
+ * Restore an experiment or model from either the:
+
* Experiments or models table - Right click the experiment or model **>** **Restore**.
- * Info panel or full screen details view - Click (menu) **>** **Restore from archive**.
+ * Info panel or full screen details view - Click (menu) **>** **Restore from archive**.
-
+
* Restore multiple experiments or models from the:
-
- * Experiments or models table - Multi-select or individually select the checkboxes of the experiments to restore **>** Click **Restore** in the footer menu that appears.
\ No newline at end of file
+
+ * Experiments or models table - Multi-select or individually select the checkboxes of the experiments to restore **>** Click **Restore** in the footer menu that appears.
diff --git a/docs/webapp/webapp_exp_comparing.md b/docs/webapp/webapp_exp_comparing.md
index a4e0daab..0ba6a93a 100644
--- a/docs/webapp/webapp_exp_comparing.md
+++ b/docs/webapp/webapp_exp_comparing.md
@@ -1,27 +1,27 @@
---
title: Comparing Experiments
---
-It is always useful to be able to do some forensics on what causes an experiment to succeed and to better understand
-performance issues.
+It is always useful to be able to do some forensics on what causes an experiment to succeed and to better understand
+performance issues.
The **ClearML Web UI** provides a deep experiment comparison, allowing users to locate, visualize, and analyze differences, including:
* [Details](#details)
- [Artifacts](#artifacts) - Input model, output model, and model design.
- [Execution](#execution-details) - Installed packages and source code.
- [Configuration](#configuration) - Configuration objects used by the experiment.
-* [Hyper Parameters](#hyperparameters)
+* [Hyper Parameters](#hyperparameters)
- [Values (table) view](#values-mode) - Key/value of all the arguments used by the experiments.
- - [Parallel coordinates view](#parallel-coordinates-mode) - Impact of each argument on a selected metric
+ - [Parallel coordinates view](#parallel-coordinates-mode) - Impact of each argument on a selected metric
the experiments reported (see [task.connect_configuration](../references/sdk/task.md#connect_configuration)).
-* [Scalars](#scalars)
+* [Scalars](#scalars)
- Specific values and plots of scalar series (see [reporting scalars](../guides/reporting/scalar_reporting.md) / [automatic reporting](../fundamentals/logger.md#automatic-reporting))
* [Plots](#plots)
- - Plots are combined to have multiple lines from different experiments (for example multiple RoC curves laid on top
+ - Plots are combined to have multiple lines from different experiments (for example multiple RoC curves laid on top
of each other).
-* [Debug samples](#debug-samples)
- - Debug samples by each iteration
+* [Debug samples](#debug-samples)
+ - Debug samples by each iteration
- Examine samples with a viewer (for images and video), and a player (for audio) (see [reporting media](../guides/reporting/media_reporting.md)).
-
+
The **ClearML** experiment comparison provides [comparison features](#comparison-features) making it easy to compare experiments.
@@ -31,36 +31,36 @@ The **ClearML** experiment comparison provides [comparison features](#comparison
1. Go to an experiments table, which includes the experiments to be compared.
1. Select the experiments to compare, select the checkboxes individually or select the top checkbox for all experiments. After selecting the second checkbox, a bottom bar appears.
-1. In the bottom bar, click **COMPARE**. The comparison page appears, showing a column for each experiment and differences with a highlighted background color. The experiment on the left is the base experiment. Other experiments compare to the base experiment.
+1. In the bottom bar, click **COMPARE**. The comparison page appears, showing a column for each experiment and differences with a highlighted background color. The experiment on the left is the base experiment. Other experiments compare to the base experiment.
-## Details
+## Details
-The **DETAILS** tab includes deep comparisons of the following:
+The **DETAILS** tab includes deep comparisons of the following:
### Artifacts
* Input model and model design.
* Output model and model design.
- * Other artifacts, if any.
+ * Other artifacts, if any.
### Execution details
- * The Source code - repository, branch, commit ID, script file name, and working directory.
+ * The Source code - repository, branch, commit ID, script file name, and working directory.
* Uncommitted changes, sorted by file name.
* Installed Python packages and versions, sorted by package name.
### Configuration
* Configuration objects used by the experiment (see [configuration objects](../fundamentals/hyperparameters.md#connecting-objects)),
-sorted by sections.
+sorted by sections.
### To locate the source differences:
-* Click the **DETAILS** tab **>** Expand highlighted sections, or, in the header, click (Previous diff) or (Next diff).
+* Click the **DETAILS** tab **>** Expand highlighted sections, or, in the header, click (Previous diff) or (Next diff).
-For example, in the image below, expanding **ARTIFACTS** **>** **Output Model** **>** **Model** shows that the model ID
+For example, in the image below, expanding **ARTIFACTS** **>** **Output Model** **>** **Model** shows that the model ID
and name are different.

@@ -81,8 +81,8 @@ The Values mode is a side-by-side comparison that shows hyperparameter value dif
1. In the dropdown menu (on the upper left, next to **+ Add Experiments**), choose **Values**.
1. To show only differences, move the **Hide Identical Fields** slider to on.
1. Locate differences by either:
- * Clicking (Previous diff) or (Next diff).
+ * Clicking (Previous diff) or (Next diff).
* Scrolling to see highlighted hyperparameters.
For example, expanding **General** shows that the `batch_size` and `epochs` differ between the experiments.
@@ -94,18 +94,18 @@ For example, expanding **General** shows that the `batch_size` and `epochs` diff
In the Parallel Coordinates mode, compare a metric to any combination of hyperparameters using a parallel coordinates plot.
**To compare by metric:**
-
+
1. Click the **HYPER PARAMETERS** tab.
1. In the dropdown menu (on the upper left, next to **+ Add Experiments**), choose **Parallel Coordinates**.
1. In **Performance Metric**, expand a metric or monitored resource, and then click a variant.
1. Select the metric values to use. Choose one of the following:
- * **LAST** - The final value, or the most recent value, for in-progress experiments.
+ * **LAST** - The final value, or the most recent value, for in-progress experiments.
* **MIN** - Minimal value.
* **MAX** - Maximal value.
1. In **Parameters**, select the hyperparameter checkboxes to compare.
1. To view one experiment on the plot, hover over the experiment name in the legend.
-For example, plot the metric/variant `epoch_accuracy`/`validation: epoch_accuracy` against the hyperparameters
+For example, plot the metric/variant `epoch_accuracy`/`validation: epoch_accuracy` against the hyperparameters
`batch_size` and `epochs`.

@@ -129,13 +129,13 @@ Visualize the comparison of scalars, which includes metrics and monitored resour
* **Last values** (the final or most recent value)
* **Min Values** (the minimal values)
* **Max Values** (the maximal values)
-1. Sort by variant.
+1. Sort by variant.

### Compare scalar series
-Compare scalar series in plots and analyze differences using **ClearML Web UI** plot tools.
+Compare scalar series in plots and analyze differences using **ClearML Web UI** plot tools.
**To compare scalar series:**
@@ -144,16 +144,16 @@ Compare scalar series in plots and analyze differences using **ClearML Web UI**
**To improve scalar series analysis:**
-* In **Group by**, select one of these options:
+* In **Group by**, select one of these options:
* **Metric** - all variants for a metric on the same plot.
* **Metric+Variant** - every variant appears on its own plot.
* Horizontal axis options:
* Iterations,
- * Relative time since the experiment began,
- * Wall (clock time).
+ * Relative time since the experiment began,
+ * Wall (clock time).
* Smooth a curve - move the **Smoothing** slider or type in a smoothing number from **0** to **0.999**.
* Use plot controls, which appear when you hover over the top of a plot.
-* Hide / show scalar plots - Click **Hide all** and (show).
+* Hide / show scalar plots - Click **Hide all** and (show).
* Filter scalars by full or partial scalar name.
This image shows scalars grouped by metric.
@@ -168,7 +168,7 @@ This image shows scalars grouped by metric and variant.
## Plots
-Visualize the comparison of any data that **ClearML** automatically captures or that is explicitly reported in experiments,
+Visualize the comparison of any data that **ClearML** automatically captures or that is explicitly reported in experiments,
in the **PLOTS** tab.
**To compare plots:**
@@ -176,10 +176,10 @@ in the **PLOTS** tab.
1. Click the **PLOTS** tab.
1. To improve your comparison, use either of the following:
- * To locate scalars, click **HIDE ALL**, and then (show) to choose which scalars to see. Scalars can also be filtered by full or partial scalar name, using the search bar.
+ * To locate scalars, click **HIDE ALL**, and then (show) to choose which scalars to see. Scalars can also be filtered by full or partial scalar name, using the search bar.
- * Use any of the plot controls that appear when hovering over the top of a plot, including:
- * Downloading the image
+ * Use any of the plot controls that appear when hovering over the top of a plot, including:
+ * Downloading the image
* Downloading the data as JSON
* Zooming
* Panning
@@ -189,7 +189,7 @@ in the **PLOTS** tab.
## Debug samples
-Compare debug samples at any iteration to verify that an experiment is running as expected. The most recent iteration appears
+Compare debug samples at any iteration to verify that an experiment is running as expected. The most recent iteration appears
first. Use the viewer / player to inspect images, audio, video samples and do any of the following:
* Move to the same sample in a different iteration (move the iteration slider).
* Show the next or previous iteration's sample.
@@ -203,17 +203,17 @@ first. Use the viewer / player to inspect images, audio, video samples and do an
1. Locate debug samples by doing the following:
* Filter by metric. In the **Metric** list, choose a metric.
- * Show other iterations. Click (Older images), (New images), or (Newest images).
+ * Show other iterations. Click (Older images), (New images), or (Newest images).
-
+

-
+
1. To open a debug sample (image, audio, or video) in the viewer or player, click the thumbnail.

-
-1. To move to the same sample in another iteration, click (previous), (next), or move the slider.
+
+1. To move to the same sample in another iteration, click (previous), (next), or move the slider.
**To view a debug sample in the viewer / player:**
@@ -224,28 +224,28 @@ first. Use the viewer / player to inspect images, audio, video samples and do an
* Move to the same sample in another iteration - Move the slider, or click **<** (previous) or **>** (next).
* Download the file - Click the download icon.
* Zoom
- * For images, locate a position on the sample - Hover over the sample and the X, Y coordinates appear in the legend below the sample.
+ * For images, locate a position on the sample - Hover over the sample and the X, Y coordinates appear in the legend below the sample.
## Comparison features
To assist in experiment analysis, the comparison page supports:
-
+
* [Adding experiments to the comparison](#adding-experiments-to-the-comparison) using a partial name search.
* [Finding the next or previous difference](#finding-the-next-or-previous-difference).
* [Hiding identical fields](#hiding-identical-fields)
-* [Searching all text](#searching-all-text)
+* [Searching all text](#searching-all-text)
* [Choosing a different base experiment](#choosing-a-different-base-experiment)
* [Dynamic ordering](#dynamic-ordering-of-the-compared-experiments) of the compared experiments
* [Sharing experiments](#sharing-experiments)
-* Auto refresh
+* Auto refresh
### Adding experiments to the comparison
-Add an experiment to the comparison - Click **Add Experiment** and start typing an experiment name. An experiment search
-and select dialog appears showing matching experiments to choose from. To add an experiment, click **+**. To remove
-an experiment, click the remove icon.
+Add an experiment to the comparison - Click **Add Experiment** and start typing an experiment name. An experiment search
+and select dialog appears showing matching experiments to choose from. To add an experiment, click **+**. To remove
+an experiment, click the remove icon.

@@ -253,8 +253,8 @@ an experiment, click
-* Find the previous difference, or the next difference.
+* Find the previous difference, or the next difference.
@@ -273,7 +273,7 @@ Search all text in the comparison.
### Choosing a different base experiment
Show differences in other experiments in reference to a new base experiment. To set a new base experiment, do one of the following:
-* Click on the icon on the top right of the experiment that will be the new base.
+* Click on the icon on the top right of the experiment that will be the new base.
* Click on the new base experiment and drag it all the way to the left.

@@ -282,18 +282,18 @@ Show differences in other experiments in reference to a new base experiment. To
### Dynamic ordering of the compared experiments
-To reorder the experiments being compared, press the icon on the top right of the experiment that
-needs to be moved, and drag the experiment to its new position.
+To reorder the experiments being compared, press the icon on the top right of the experiment that
+needs to be moved, and drag the experiment to its new position.

### Removing an experiment from the comparison
-Remove an experiment from the comparison by pressing the icon
+Remove an experiment from the comparison by pressing the icon
on the top right of the experiment that needs to be removed.

### Sharing experiments
-To share a comparison table, copy the full URL from the address bar and send it to a teammate to collaborate. They will
-get the exact same page (including selected tabs etc.).
\ No newline at end of file
+To share a comparison table, copy the full URL from the address bar and send it to a teammate to collaborate. They will
+get the exact same page (including selected tabs etc.).
diff --git a/docs/webapp/webapp_exp_table.md b/docs/webapp/webapp_exp_table.md
index 24cb60b8..9ab5fb91 100644
--- a/docs/webapp/webapp_exp_table.md
+++ b/docs/webapp/webapp_exp_table.md
@@ -2,13 +2,13 @@
title: The Experiments Table
---
-The experiments table is a [customizable](#customizing-the-experiments-table) list of experiments associated with a project. From the experiments
-table, view experiment details, and work with experiments (reset, clone, enqueue, create [tracking leaderboards](../guides/ui/building_leader_board.md)
-to monitor experimentation, and more). The experiments table's auto-refresh allows users to continually monitor experiment progress.
+The experiments table is a [customizable](#customizing-the-experiments-table) list of experiments associated with a project. From the experiments
+table, view experiment details, and work with experiments (reset, clone, enqueue, create [tracking leaderboards](../guides/ui/building_leader_board.md)
+to monitor experimentation, and more). The experiments table's auto-refresh allows users to continually monitor experiment progress.
:::info
-To assist in focusing on active experimentation, experiments and models can be archived, so they will not appear
+To assist in focusing on active experimentation, experiments and models can be archived, so they will not appear
in the active experiments and models tables. See [Archiving](webapp_archiving).
:::
@@ -27,8 +27,8 @@ The experiments table default and customizable columns are described in the foll
| **PROJECT** | Name of experiment's project. | Default |
| **USER** | User who created or cloned the experiment. | Default (hidden) |
| **STARTED** | Elapsed time since the experiment started. To view the date and time of start, hover over the elapsed time. | Default |
-| **UPDATED** | Elapsed time since the last update to the experiment. To view the date and time of update, hover over the elapsed time. | Default |
-| **ITERATION** | Last or most recent iteration of the experiment. | Default |
+| **UPDATED** | Elapsed time since the last update to the experiment. To view the date and time of update, hover over the elapsed time. | Default |
+| **ITERATION** | Last or most recent iteration of the experiment. | Default |
| **DESCRIPTION** | A description of the experiment. For cloned experiments, the description indicates it was auto generated with a timestamp. | Default (hidden) |
| **RUN TIME** | The current / total running time of the experiment. | Default (hidden) |
| **_Metrics_** | Add metrics column (last, minimum, and / or maximum values). The metrics depend upon the experiments in the table. See [adding metrics](#to-add-metrics). | Customizable |
@@ -38,54 +38,54 @@ The experiments table default and customizable columns are described in the foll
## Customizing the experiments table
-The experiments table can be customized by:
+The experiments table can be customized by:
* Showing / hiding default columns
-* Adding metrics and hyperparameters
+* Adding metrics and hyperparameters
* Sorting
-* Filtering
+* Filtering
Use experiments table customization for various use cases, including:
-* Creating a [leaderboard](#creating-an-experiment-leaderboard) that will update in real time with experiment
+* Creating a [leaderboard](#creating-an-experiment-leaderboard) that will update in real time with experiment
performance, which can be shared and stored.
-* Sorting models by metrics - Models are associated with the experiments that created them. For each metric, use the last
+* Sorting models by metrics - Models are associated with the experiments that created them. For each metric, use the last
value, the minimal value, and / or the maximal value.
* Tracking hyperparameters - Track hyperparameters by adding them as columns, and applying filters and sorting.
-Changes are persistent (cached in the browser), and represented in the URL so customized settings can be saved in a browser
+Changes are persistent (cached in the browser), and represented in the URL so customized settings can be saved in a browser
bookmark and shared with other **ClearML** users to collaborate.

### Adding metrics and / or hyperparameters
-Add metrics and / or hyperparameters columns to the experiments table. The metrics and hyperparameters depend upon the
+Add metrics and / or hyperparameters columns to the experiments table. The metrics and hyperparameters depend upon the
experiments in the table.
#### To add metrics:
-* Click the icon **>** **+ METRICS** **>** Expand a metric **>** Select the **LAST** (value),
+* Click the icon **>** **+ METRICS** **>** Expand a metric **>** Select the **LAST** (value),
**MIN** (minimal value), and / or **MAX** (maximal value) checkboxes.
#### To add hyperparameters:
-* Click the icon **>** **+ HYPER PARAMETERS** **>** Expand a section **>** Select the
+* Click the icon **>** **+ HYPER PARAMETERS** **>** Expand a section **>** Select the
hyperparameter checkboxes.
### Using other customization features
**To use other customization features:**
-* Show / hide columns - Click the icon **>** select or clear the checkboxes of columns to show or hide.
+* Show / hide columns - Click the icon **>** select or clear the checkboxes of columns to show or hide.
* Filter columns - According to type of experiment, experiment status (state), or user
* Sort columns - According to metrics and hyperparameters, type of experiment, experiment name, start and last update elapsed time, and last iteration.
* Dynamic column ordering - Drag a column title to a different position.
* Column resizing - In the column heading, drag to a new size.
* Column autofit - In the column heading, double click a column separator.
-## ClearML actions from the experiments table
+## ClearML actions from the experiments table
-The following table describes the **ClearML** features that can be used from the experiments table, including the [states](../fundamentals/task.md#task-states-and-state-transitions)
+The following table describes the **ClearML** features that can be used from the experiments table, including the [states](../fundamentals/task.md#task-states-and-state-transitions)
that allow each feature.
| ClearML Action | Description | States Valid for the Action | State Transition |
@@ -107,21 +107,21 @@ that allow each feature.
## Creating an experiment leaderboard
-Filter & sort the experiments of any project to create a leaderboard that can be shared and stored. This leaderboard
-updates in real time with experiment performance and outputs.
+Filter & sort the experiments of any project to create a leaderboard that can be shared and stored. This leaderboard
+updates in real time with experiment performance and outputs.
Modify the experiment table in the following ways to create a customized leaderboard:
* Add experiment configuration ([hyperparameters](#to-add-hyperparameters))
* Edit and add experiments [properties](webapp_exp_track_visual.md#user-properties)
-* Add reported [metrics](#to-add-metrics), any time series reported metric can be selected, then select the last reported
+* Add reported [metrics](#to-add-metrics), any time series reported metric can be selected, then select the last reported
value, or the minimum / maximum reported value.
* Filter based on user (dropdown and select) or [experiment types](../fundamentals/task.md#task-types)
* Add specific [tags](webapp_exp_track_visual.md#tagging-experiments) and filter based on them
-Now the table can be sorted based on any of the columns (probably one of the performance metrics). Filter experiments
+Now the table can be sorted based on any of the columns (probably one of the performance metrics). Filter experiments
by name using the search bar.
-The final dashboard can be shared by copying the URL from the address bar; this address will replicate the exact same dashboard on any browser.
+The final dashboard can be shared by copying the URL from the address bar; this address will replicate the exact same dashboard on any browser.
The dashboard can also be bookmarked for later use.
-
\ No newline at end of file
+
diff --git a/docs/webapp/webapp_exp_track_visual.md b/docs/webapp/webapp_exp_track_visual.md
index e51bae70..b18f9a9f 100644
--- a/docs/webapp/webapp_exp_track_visual.md
+++ b/docs/webapp/webapp_exp_track_visual.md
@@ -2,48 +2,48 @@
title: Tracking Experiments and Visualizing Results
---
-While an experiment is running, and any time after it finishes, track it and visualize the results in the **ClearML Web UI**,
-including:
+While an experiment is running, and any time after it finishes, track it and visualize the results in the **ClearML Web UI**,
+including:
* [Execution details](#execution-details) - Code, the base Docker image used for **ClearML Agent**, output destination for artifacts, and the logging level.
* [Configuration](#configuration) - Hyperparameters, user properties, and configuration objects.
* [Artifacts](#artifacts) - Input model, output model, model snapshot locations, other artifacts.
* [General information](#general-information) - Information about the experiment, for example: the experiment start, create, and last update times and dates, user creating the experiment, and its description.
-* [Console](#console) - stdout, stderr, output to the console from libraries, and **ClearML** explicit reporting.
+* [Console](#console) - stdout, stderr, output to the console from libraries, and **ClearML** explicit reporting.
* [Scalars](#scalars) - Metric plots.
* [Plots](#other-plots) - Other plots and data, for example: Matplotlib, Plotly, and **ClearML** explicit reporting.
* [Debug samples](#debug-samples) - Images, audio, video, and HTML.
## Viewing modes
-The **ClearML Web UI** provides two viewing modes for experiment details:
+The **ClearML Web UI** provides two viewing modes for experiment details:
* The info panel
-
-* Full screen details mode.
-Both modes contain all experiment details. When either view is open, switch to the other mode by clicking
-(**View in experiments table / full screen**), or clicking (**menu**) > **View in experiments
+* Full screen details mode.
+
+Both modes contain all experiment details. When either view is open, switch to the other mode by clicking
+(**View in experiments table / full screen**), or clicking (**menu**) > **View in experiments
table / full screen**.
### Info panel
-The info panel keeps the experiment table in view so that [experiment actions](webapp_exp_table#clearml-actions-from-the-experiments-table)
-can be performed from the table (as well as the menu in the info panel).
+The info panel keeps the experiment table in view so that [experiment actions](webapp_exp_table#clearml-actions-from-the-experiments-table)
+can be performed from the table (as well as the menu in the info panel).
View a screenshot

-
+
### Full screen details view
-The full screen details view allows for easier viewing and working with experiment tracking and results. The experiments
+The full screen details view allows for easier viewing and working with experiment tracking and results. The experiments
table is not visible when the full screen details view is open. Perform experiment actions from the menu.
@@ -51,17 +51,17 @@ table is not visible when the full screen details view is open. Perform experime

-
+
## Execution details
-In the EXECUTION tab of an experiment's detail page, there are records of:
-* Source code
+In the EXECUTION tab of an experiment's detail page, there are records of:
+* Source code
* **ClearML Agent** configuration
* Output details
-* Uncommitted changes
+* Uncommitted changes
* Installed Python packages
@@ -71,48 +71,48 @@ The source code details of the EXECUTION tab of an experiment include:
* The experiment's repository
* Commit ID
* Script path
-* Working directory
-
-Additionally, there is information about the **ClearML Agent** configuration. The **ClearML Agent** base image is a pre-configured Docker
-that **ClearML Agent** will use to remotely execute this experiment (see [Building Docker containers](../clearml_agent.md#building-docker-containers)).
-
+* Working directory
+
+Additionally, there is information about the **ClearML Agent** configuration. The **ClearML Agent** base image is a pre-configured Docker
+that **ClearML Agent** will use to remotely execute this experiment (see [Building Docker containers](../clearml_agent.md#building-docker-containers)).
+
The output details include:
-* The output destination used for storing model checkpoints (snapshots) and artifacts (see also, [default_output_uri](../configs/clearml_conf#config_default_output_uri)
- in the configuration file, and [output_uri](../references/sdk/task.md#taskinit)
- in `Task.init` parameters).
-
+* The output destination used for storing model checkpoints (snapshots) and artifacts (see also, [default_output_uri](../configs/clearml_conf#config_default_output_uri)
+ in the configuration file, and [output_uri](../references/sdk/task.md#taskinit)
+ in `Task.init` parameters).
+
* The logging level for the experiment, which uses the standard Python [logging levels](https://docs.python.org/3/howto/logging.html#logging-levels).
-
+
View a screenshot

-
+
-### Uncommitted changes
+### Uncommitted changes
View a screenshot

-
+
-
-### Installed Python packages and their versions
+
+### Installed Python packages and their versions
View a screenshot

-
+
@@ -123,7 +123,7 @@ All parameters and configuration objects appear in the **CONFIGURATION** tab.
### Hyperparameters
-:::important
+:::important
In older versions of **ClearML Server**, the **CONFIGURATION** tab was named **HYPER PARAMETERS**, and it contained all parameters. The renamed tab contains a **HYPER PARAMETER** section, and subsections for hyperparameter groups.
:::
@@ -132,18 +132,18 @@ Hyperparameters are grouped by their type and appear in **CONFIGURATION** **>**
#### Command line arguments
The **Args** section shows automatically logged `argparse` arguments, and all parameters from older experiments, except TensorFlow Definitions. Hover over a parameter, and the type, description, and default value appear, if they were provided.
-
+
View a screenshot

-
+
-#### Environment variables
+#### Environment variables
If the `CLEARML_LOG_ENVIRONMENT` variable was set, the **Environment** section will show environment variables (see [this FAQ](../faq#track-env-vars)).
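For example, a minimal sketch, assuming the variable is set before the Task is initialized and that `*` logs all environment variables (see the linked FAQ for the exact value format):

```
import os

# Assumed value format: "*" for all variables, or a comma-separated list
# of specific variable names to log.
os.environ["CLEARML_LOG_ENVIRONMENT"] = "*"

from clearml import Task

task = Task.init(project_name="examples", task_name="env demo")  # illustrative names
```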
@@ -152,14 +152,14 @@ If the `CLEARML_LOG_ENVIRONMENT` variable was set, the **Environment** section w

-
+
-#### Custom parameter groups
+#### Custom parameter groups
-Custom sections show parameter dictionaries, if the parameters were connected to the Task using the `Task.connect` method
+Custom sections show parameter dictionaries, if the parameters were connected to the Task using the `Task.connect` method
with a `name` argument provided.
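For example, a minimal sketch of connecting a dictionary under a custom section name (the names and values are illustrative):

```
from clearml import Task

task = Task.init(project_name="examples", task_name="custom params")  # illustrative names

# The dictionary appears in CONFIGURATION > HYPER PARAMETERS under the
# section name passed as the `name` argument.
params = {"batch_size": 32, "epochs": 10}
params = task.connect(params, name="MyParams")
```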
@@ -167,7 +167,7 @@ with a `name` argument provided.

-
+
@@ -180,15 +180,15 @@ The **TF_DEFINE** sections shows automatic TensorFlow logging.

-
+
-
+
Once an experiment is run and stored in **ClearML Server**, any of these hyperparameters can be [modified](webapp_exp_tuning.md#modifying-experiments).
### User properties
-User properties allow storing any descriptive information in a key-value pair format. They are editable in any experiment,
+User properties allow storing any descriptive information in a key-value pair format. They are editable in any experiment,
except experiments whose status is *Published* (read-only).
@@ -196,19 +196,19 @@ except experiments whose status is *Published* (read-only).

-
+
-
+
### Configuration objects
-**ClearML** tracks experiment (Task) model configuration objects, which appear in **Configuration Objects** **>** **General**.
-These objects include those that are automatically tracked, and those connected to a Task in code (see [Task.connect_configuration](../references/sdk/task.md#connect_configuration)).
-**ClearML** supports providing a name for a Task model configuration (see the [name](../references/sdk/task.md#connect_configuration)
+**ClearML** tracks experiment (Task) model configuration objects, which appear in **Configuration Objects** **>** **General**.
+These objects include those that are automatically tracked, and those connected to a Task in code (see [Task.connect_configuration](../references/sdk/task.md#connect_configuration)).
+**ClearML** supports providing a name for a Task model configuration (see the [name](../references/sdk/task.md#connect_configuration)
parameter in `Task.connect_configuration`).
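For example, a minimal sketch of a named configuration object (the names and values are illustrative):

```
from clearml import Task

task = Task.init(project_name="examples", task_name="config demo")  # illustrative names

# The dictionary appears under CONFIGURATION > Configuration Objects,
# using the provided name.
config = {"layers": 4, "dropout": 0.1}
config = task.connect_configuration(config, name="MyConfig")
```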
-:::important
+:::important
In older versions of **ClearML Server**, the Task model configuration appeared in the **ARTIFACTS** tab, **MODEL CONFIGURATION** section. Task model configurations now appear in the **Configuration Objects** section, in the **CONFIGURATION** tab.
:::
@@ -217,19 +217,19 @@ In older versions of **ClearML Server**, the Task model configuration appeared i

-
+
-
+
-
+
View a screenshot - Custom configuration object

-
+
-
+
@@ -242,7 +242,7 @@ Copy the location of models and artifacts stored in local files (`file://`) to t
### Models
-The input and output models appear in the **ARTIFACTS** tab. Models are associated with the experiment, but to see further model details,
+The input and output models appear in the **ARTIFACTS** tab. Models are associated with the experiment, but to see further model details,
including design, label enumeration, and general information, go to the **MODELS** tab, by clicking the model name, which is a hyperlink to those details.
**To retrieve a model:**
@@ -250,8 +250,8 @@ including design, label enumeration, and general information, go to the **MODELS
1. In the **ARTIFACTS** tab **>** **MODELS** **>** **Input Model** or **Output Model**, click the model name hyperlink.
1. In the model details **>** **GENERAL** tab **>** **MODEL URL**, either:
- * Download the model
, if it is stored in remote storage.
- * Copy its location to the clipboard
,
+ * Download the model
, if it is stored in remote storage.
+ * Copy its location to the clipboard
,
if it is in a local file.
@@ -260,7 +260,7 @@ including design, label enumeration, and general information, go to the **MODELS

-
+
@@ -271,8 +271,8 @@ including design, label enumeration, and general information, go to the **MODELS
1. In the **ARTIFACTS** tab **>** **DATA AUDIT** or **OTHER** **>** Select an artifact **>** Either:
- * Download the artifact, if it is stored in remote storage.
- * Copy its location to the clipboard,
+ * Download the artifact, if it is stored in remote storage.
+ * Copy its location to the clipboard,
if it is in a local file.
#### Data audit
@@ -285,9 +285,9 @@ Artifacts which are uploaded and dynamically tracked by **ClearML** appear in th

-
+
-
+
#### Other
@@ -298,31 +298,31 @@ Other artifacts, which are uploaded but not dynamically tracked after the upload

-
+
-
+
## General information
-General experiment details appear in the **INFO** tab. This includes information describing the stored experiment:
+General experiment details appear in the **INFO** tab. This includes information describing the stored experiment:
* The parent experiment
* Project name
* Creation, start, and last update dates and times
* User who created the experiment
* Experiment state (status)
* Whether the experiment is archived
-
+
View a screenshot

-
+
-
+
@@ -334,7 +334,7 @@ General experiment details appear in the **INFO** tab. This includes information
### Console
-The complete experiment log containing everything printed to stdout and stderr appears in the **CONSOLE** tab. The full log
+The complete experiment log containing everything printed to stdout and stderr appears in the **CONSOLE** tab. The full log
is downloadable. To view the end of the log, click **Jump to end**.
@@ -342,50 +342,50 @@ is downloadable. To view the end of the log, click **Jump to end**.

-
+
-
+
### Scalars
-All scalars that **ClearML** automatically logs, as well as those explicitly reported in code, appear in **RESULTS** **>** **SCALARS**.
+All scalars that **ClearML** automatically logs, as well as those explicitly reported in code, appear in **RESULTS** **>** **SCALARS**.
#### Scalar plot tools
-Use the scalar tools to improve analysis of scalar metrics. In the info panel, click the icon to use the tools. In the full screen details view, the tools
+Use the scalar tools to improve analysis of scalar metrics. In the info panel, click the icon to use the tools. In the full screen details view, the tools
are on the left side of the window. The tools include:
-* **Group by** - select one of the following:
- * **Metric** - all variants for a metric on the same plot
-
+* **Group by** - select one of the following:
+ * **Metric** - all variants for a metric on the same plot
+
View a screenshot

-
+
* **None** - Group by metric-variant combination (individual metric-variant plots).
-
+
View a screenshot

-
+
-* Show / hide plots - Click **HIDE ALL**, and then click (show) on those you want to see.
+* Show / hide plots - Click **HIDE ALL**, and then click (show) on those you want to see.
-* **Horizontal axis** modes (scalars, only) - Select one of the following:
+* **Horizontal axis** modes (scalars, only) - Select one of the following:
* **ITERATIONS**
- * **RELATIVE** - time since experiment began
+ * **RELATIVE** - time since experiment began
* **WALL** - local clock time
-* Curve smoothing (scalars, only) - In **Smoothing** **>** Move the slider or type a smoothing factor between **0** and **0.999**.
+* Curve smoothing (scalars, only) - In **Smoothing** **>** Move the slider or type a smoothing factor between **0** and **0.999**.
#### Plot controls
@@ -394,7 +394,7 @@ Each plot supports plot controls allowing you better analyze the results. The ta
|Icon|Description|
|---|---|
-|  | Download plots as PNG files. |
+|  | Download plots as PNG files. |
|  | Pan around plot. Click , click the plot, and then drag. |
|  | To examine an area, draw a dotted box around it. Click  and then drag. |
|  | To examine an area, draw a dotted lasso around it. Click  and then drag. |
@@ -413,8 +413,8 @@ Each plot supports plot controls allowing you better analyze the results. The ta
### Other plots
-Other plots include data reported by libraries, visualization tools, and **ClearML** explicit reporting. These may include
-2D and 3D plots, tables (Pandas and CSV files), and Plotly plots. Other plots appear in **RESULTS** **>** **PLOTS**.
+Other plots include data reported by libraries, visualization tools, and **ClearML** explicit reporting. These may include
+2D and 3D plots, tables (Pandas and CSV files), and Plotly plots. Other plots appear in **RESULTS** **>** **PLOTS**.
Individual plots can be shown / hidden or filtered by title.
@@ -422,7 +422,7 @@ Individual plots can be shown / hidden or filtered by title.

-
+
@@ -439,7 +439,7 @@ View debug samples by metric at any iteration. The most recent iteration appears
* Move to the same sample in a different iteration (move the iteration slider).
* Show the next or previous iteration's sample.
-* Download the file.
+* Download the file.
* Zoom.
* View the sample's iteration number, width, height, and coordinates.
@@ -449,7 +449,7 @@ View debug samples by metric at any iteration. The most recent iteration appears

-
+
@@ -460,7 +460,7 @@ View debug samples by metric at any iteration. The most recent iteration appears

-
+
@@ -471,7 +471,7 @@ View debug samples by metric at any iteration. The most recent iteration appears
1. Locate debug samples by doing the following:
* Filter by metric. In the **Metric** list, choose a metric.
- * Show other iterations. Click (Older images), (New images), or (Newest images).
+ * Show other iterations. Click (Older images), (New images), or (Newest images).
**To view a debug sample in the viewer / player:**
@@ -480,25 +480,25 @@ View debug samples by metric at any iteration. The most recent iteration appears
1. Do any of the following:
* Move to the same sample in another iteration - Move the slider, or click **<** (previous) or **>** (next).
- * Download the file - Click the download icon.
+ * Download the file - Click the download icon.
* Zoom
- * For images, locate a position on the sample - Hover over the sample and the X, Y coordinates appear in the legend below the sample.
+ * For images, locate a position on the sample - Hover over the sample and the X, Y coordinates appear in the legend below the sample.
## Tagging experiments
-Tags are user-defined, color-coded labels that can be added to experiments (and models), allowing users to easily identify and
-group experiments. Tags can show any text. For example, add tags for the type of remote machine experiments were executed
+Tags are user-defined, color-coded labels that can be added to experiments (and models), allowing users to easily identify and
+group experiments. Tags can show any text. For example, add tags for the type of remote machine experiments were executed
on, label versions of experiments, or apply team names to organize experimentation.
* To add tags and change tag colors:
- 1. Click the experiment **>** Hover over the tag area **>** **+ADD TAG** or (menu)
+ 1. Click the experiment **>** Hover over the tag area **>** **+ADD TAG** or (menu)
1. Do one of the following:
* Add a new tag - Type the new tag name **>** **(Create New)**.
* Add an existing tag - Click a tag.
- * Change a tag's colors - Click **Tag Colors** **>** Click the tag icon **>** **Background** or **Foreground** **>** Pick a color **>** **OK** **>** **CLOSE**.
+ * Change a tag's colors - Click **Tag Colors** **>** Click the tag icon **>** **Background** or **Foreground** **>** Pick a color **>** **OK** **>** **CLOSE**.
* To remove a tag - Hover over the tag **>** **X**.
@@ -506,4 +506,4 @@ on, label versions of experiments, or apply team names to organize experimentati
## Locating the experiment (Task) ID
-* In the info panel, in the top area, to the right of the Task name, click **ID**. The Task ID appears.
\ No newline at end of file
+* In the info panel, in the top area, to the right of the Task name, click **ID**. The Task ID appears.
diff --git a/docs/webapp/webapp_exp_tuning.md b/docs/webapp/webapp_exp_tuning.md
index 09a414c8..f03ed180 100644
--- a/docs/webapp/webapp_exp_tuning.md
+++ b/docs/webapp/webapp_exp_tuning.md
@@ -8,43 +8,43 @@ Tune experiments and edit an experiment's execution details, then execute the tu
1. Locate the experiment. Open the experiment's Project page from the Home page or the main Projects page.
- * On the Home page,
+ * On the Home page,
* Click on an experiment from RECENT EXPERIMENTS
* In RECENT PROJECTS **>** click on a project card **>** click experiment
* In RECENT PROJECTS **>** click **VIEW ALL** **>** click the project card **>** click experiment
* On the Projects page, click project card, or the **All projects** card **>** click experiment
-
+
1. Clone the experiment. In the experiments table:
- 1. Click **Clone**, and a **Clone experiment** box will pop up.
- 1. In the **Project** textbox, select or create a project. To search for another project, start typing the project name.
+ 1. Click **Clone**, and a **Clone experiment** box will pop up.
+ 1. In the **Project** textbox, select or create a project. To search for another project, start typing the project name.
To create a new project, type a new project name and click **Create New**.
1. Enter an optional description.
1. Click **CLONE**.
The cloned experiment's status is now *Draft*.
-
-1. Edit the experiment. See [modifying experiments](#modifying-experiments).
-
-1. Enqueue the experiment for execution. Right click the experiment **>** **Enqueue** **>** Select a queue **>**
- **ENQUEUE**.
- The experiment's status becomes *Pending*. When the worker assigned to the queue fetches the Task (experiment), the
+1. Edit the experiment. See [modifying experiments](#modifying-experiments).
+
+1. Enqueue the experiment for execution. Right click the experiment **>** **Enqueue** **>** Select a queue **>**
+ **ENQUEUE**.
+
+ The experiment's status becomes *Pending*. When the worker assigned to the queue fetches the Task (experiment), the
status becomes *Running*. The experiment can now be tracked and its results visualized.
-
+
## Modifying experiments
-Experiments whose status is *Draft* are editable (see the [user properties](#user-properties) exception). In the **ClearML
+Experiments whose status is *Draft* are editable (see the [user properties](#user-properties) exception). In the **ClearML
Web UI**, edit any of the following
* [Source code](#source-code)
-* [Output destination for artifacts](#output-destination)
+* [Output destination for artifacts](#output-destination)
* [Base Docker image](#base-docker-image)
* [Log level](#log-level)
* [Hyperparameters](#hyperparameters) - Parameters, TensorFlow Definitions, command line options, environment variables, and user-defined properties
:::note
-User parameters are editable in any experiment, except experiments whose status is *Published* (read-only).
+User parameters are editable in any experiment, except experiments whose status is *Published* (read-only).
:::
* [Configuration objects](#configuration-objects) - Task model description
@@ -57,7 +57,7 @@ User parameters are editable in any experiment, except experiments whose status
#### Source code
-Select source code by changing any of the following:
+Select source code by changing any of the following:
* Repository, commit (select by ID, tag name, or choose the last commit in the branch), script, and / or working directory.
* Installed Python packages and / or versions - Edit or clear (remove) them all.
@@ -65,7 +65,7 @@ Select source code by changing any of the following:
**To select different source code:**
-* In the **EXECUTION** tab, hover over a section **>** **EDIT** or (**DISCARD DIFFS** for **UNCOMMITTED CHANGES**) **>**
+* In the **EXECUTION** tab, hover over a section **>** **EDIT** or (**DISCARD DIFFS** for **UNCOMMITTED CHANGES**) **>**
edit **>** **SAVE**.
@@ -75,14 +75,14 @@ Select a pre-configured Docker that **ClearML Agent** will use to remotely execu
**To add, change, or delete a base Docker image:**
-* In **EXECUTION** **>** **AGENT CONFIGURATION** **>** **BASE DOCKER IMAGE** **>** hover **>** **EDIT** **>**
+* In **EXECUTION** **>** **AGENT CONFIGURATION** **>** **BASE DOCKER IMAGE** **>** hover **>** **EDIT** **>**
Enter the base Docker image.
#### Output destination
-Set an output destination for model checkpoints (snapshots) and other artifacts. Examples of supported types of destinations
+Set an output destination for model checkpoints (snapshots) and other artifacts. Examples of supported types of destinations
and formats for specifying locations include:
* A shared folder: `/mnt/share/folder`
@@ -91,20 +91,20 @@ and formats for specifying locations include:
* Azure Storage: `azure://company.blob.core.windows.net/folder/`
**To add, change, or delete an artifact output destination:**
-
-* In **EXECUTION** **>** **OUTPUT** > **DESTINATION** **>** hover **>** **EDIT** **>** edit **>** **SAVE**.
+
+* In **EXECUTION** **>** **OUTPUT** > **DESTINATION** **>** hover **>** **EDIT** **>** edit **>** **SAVE**.
:::note
-Also set the output destination for artifacts in code (see the `output_uri` parameter of the
-[Task.init](../references/sdk/task.md#classmethod-initproject_namenone-task_namenone-task_typetasktypestraining-training-tagsnone-reuse_last_task_idtrue-continue_last_taskfalse-output_urinone-auto_connect_arg_parsertrue-auto_connect_frameworkstrue-auto_resource_monitoringtrue-auto_connect_streamstrue)
-method), and in the **ClearML** configuration file for all experiments (see [default_output_uri](../configs/clearml_conf#config_default_output_uri)
+Also set the output destination for artifacts in code (see the `output_uri` parameter of the
+[Task.init](../references/sdk/task.md#classmethod-initproject_namenone-task_namenone-task_typetasktypestraining-training-tagsnone-reuse_last_task_idtrue-continue_last_taskfalse-output_urinone-auto_connect_arg_parsertrue-auto_connect_frameworkstrue-auto_resource_monitoringtrue-auto_connect_streamstrue)
+method), and in the **ClearML** configuration file for all experiments (see [default_output_uri](../configs/clearml_conf#config_default_output_uri)
on the **ClearML** Configuration Reference page).
:::
#### Log level
-Set a logging level for the experiment (see the standard Python [logging levels](https://docs.python.org/3/howto/logging.html#logging-levels)).
+Set a logging level for the experiment (see the standard Python [logging levels](https://docs.python.org/3/howto/logging.html#logging-levels)).
**To add, change, or delete a log level:**
@@ -116,14 +116,14 @@ Set a logging level for the experiment (see the standard Python [logging levels]
#### Hyperparameters
-:::important
-In older versions of **ClearML Server**, the **CONFIGURATION** tab was named **HYPER PARAMETERS**, and it contained all
+:::important
+In older versions of **ClearML Server**, the **CONFIGURATION** tab was named **HYPER PARAMETERS**, and it contained all
parameters. The renamed tab contains a **HYPER PARAMETER** section, and subsections for hyperparameter groups.
:::
Add, change, or delete hyperparameters, which are organized in the **ClearML Web UI** in the following sections:
-* **Args** - Command line arguments and all parameters from older experiments, except TensorFlow definitions (logged from code,
+* **Args** - Command line arguments and all parameters from older experiments, except TensorFlow definitions (logged from code,
`argparse` argument automatic logging).
* **TF_DEFINE** - TensorFlow definitions (from code, TF_DEFINEs automatic logging).
@@ -137,19 +137,19 @@ Add, change, or delete hyperparameters, which are organized in the **ClearML Web
**To add, change, or delete hyperparameters:**
-* In the **CONFIGURATIONS** tab **>** **HYPER PARAMETERS** > **General** **>** hover **>** **EDIT** **>** add, change,
+* In the **CONFIGURATIONS** tab **>** **HYPER PARAMETERS** **>** **General** **>** hover **>** **EDIT** **>** add, change,
 or delete keys and/or values **>** **SAVE**.
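+The parameters edited here are usually logged from code. A minimal sketch using `Task.connect` (the parameter names and
+values are examples):
+
+```
+from clearml import Task
+
+task = Task.init(project_name="examples", task_name="hyperparameter logging")
+
+# Connect a mutable dict; values edited in the UI override these when the task is executed remotely
+params = {"learning_rate": 0.001, "batch_size": 32}
+params = task.connect(params)
+```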
-#### User properties
+#### User properties
-User properties allow storing any descriptive information in key-value pair format. They are editable in any experiment,
+User properties allow storing any descriptive information in key-value pair format. They are editable in any experiment,
except experiments whose status is *Published* (read-only).
**To add, change, or delete user properties:**
-* In **CONFIGURATIONS** **>** **USER PROPERTIES** > **Properties** **>** hover **>** **EDIT** **>** add, change, or delete
+* In **CONFIGURATIONS** **>** **USER PROPERTIES** **>** **Properties** **>** hover **>** **EDIT** **>** add, change, or delete
 keys and/or values **>** **SAVE**.
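+User properties can also be stored from code. A minimal sketch using `Task.set_user_properties` (the property names and
+values are examples):
+
+```
+from clearml import Task
+
+task = Task.init(project_name="examples", task_name="properties demo")
+
+# Free-form key-value metadata; editable later under USER PROPERTIES
+task.set_user_properties(backbone="resnet50", dataset_version="v2.1")
+```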
@@ -157,31 +157,31 @@ except experiments whose status is *Published* (read-only).
#### Configuration objects
:::important
-In older versions of **ClearML Server**, the Task model configuration appeared in the **ARTIFACTS** tab **>** **MODEL
+In older versions of **ClearML Server**, the Task model configuration appeared in the **ARTIFACTS** tab **>** **MODEL
CONFIGURATION** section. Task model configurations now appear in **CONFIGURATION** **>** **Configuration Objects**.
:::
**To add, change, or delete the Task model configurations:**
-* In **CONFIGURATIONS** **>** **CONFIGURATION OBJECTS** **>** **GENERAL** **>** hover **>** **EDIT** or **CLEAR** (if the
+* In **CONFIGURATIONS** **>** **CONFIGURATION OBJECTS** **>** **GENERAL** **>** hover **>** **EDIT** or **CLEAR** (if the
configuration is not empty).
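+Configuration objects are typically attached from code. A minimal sketch using `Task.connect_configuration` (the
+configuration contents are an example):
+
+```
+from clearml import Task
+
+task = Task.init(project_name="examples", task_name="config objects")
+
+# Attach a configuration object; it appears under CONFIGURATION OBJECTS
+config = task.connect_configuration({"num_layers": 4, "dropout": 0.1}, name="General")
+```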
### Artifacts
#### Initial weights input model
-Edit model configuration and label enumeration, choose a different initial input weight model for the same project or any
-other project, or remove the model.
+Edit model configuration and label enumeration, choose a different initial input weight model for the same project or any
+other project, or remove the model.
:::note
-The models are editable in the **MODELS** tab, not the **EXPERIMENTS** tab. Clicking the model name hyperlink shows the
+The models are editable in the **MODELS** tab, not the **EXPERIMENTS** tab. Clicking the model name hyperlink shows the
model in the **MODELS** tab.
:::
**To select a different model:**
1. In **ARTIFACTS** **>** **Input Model** **>** Hover and click **EDIT**.
-1. If a model is associated with the experiment, click the remove icon.
+1. If a model is associated with the experiment, click the remove icon.
1. In the **SELECT MODEL** dialog, select a model from the current project or any other project.
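+The input model can also be replaced from code. A minimal sketch using `InputModel` (the model ID is a placeholder):
+
+```
+from clearml import InputModel, Task
+
+task = Task.init(project_name="examples", task_name="custom input model")
+
+# Connect an existing model, from any project, as this experiment's input model
+input_model = InputModel(model_id="aabbccddeeff00112233445566778899")
+task.connect(input_model)
+```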
**To edit a model's configuration or label enumeration:**
@@ -189,15 +189,15 @@ model in the **MODELS** tab.
1. Click the model name hyperlink. The model details appear in the **MODELS** tab.
1. Edit the model configuration or label enumeration.
- * Model configuration - In the **NETWORK** tab **>** Hover and click **EDIT**. **>** CLick **EDIT** or **CLEAR** (to
+    * Model configuration - In the **NETWORK** tab **>** Hover and click **EDIT** **>** Click **EDIT** or **CLEAR** (to
 remove the configuration).
-
- Users can also search for the configuration (hover over the configuration textbox, the search box appears) and copy the
- configuration to the clipboard (hover and click the copy icon).
- * Label enumeration - In the **LABELS** tab **>** Hover and click **EDIT** **>** Add, change, or delete label
+ Users can also search for the configuration (hover over the configuration textbox, the search box appears) and copy the
+ configuration to the clipboard (hover and click the copy icon).
+
+ * Label enumeration - In the **LABELS** tab **>** Hover and click **EDIT** **>** Add, change, or delete label
enumeration key-value pairs.
-
-**To remove a model from an experiment:**
-* Hover and click **EDIT** **>** Click the remove icon.
\ No newline at end of file
+**To remove a model from an experiment:**
+
+* Hover and click **EDIT** **>** Click the remove icon.
diff --git a/docs/webapp/webapp_model_modifying.md b/docs/webapp/webapp_model_modifying.md
index 8875cbd2..079d9a81 100644
--- a/docs/webapp/webapp_model_modifying.md
+++ b/docs/webapp/webapp_model_modifying.md
@@ -2,14 +2,14 @@
title: Modifying Models
---
-In the models table, modify models that have a status of *Draft* (status *Published* is read-only). Modify the model
-configuration and label enumeration.
+In the models table, modify models whose status is *Draft* (*Published* models are read-only). Both the model
+configuration and the label enumeration can be modified.
-## Model configuration
+## Model configuration
**To edit the model configuration:**
-* In the **MODELS** tab, click a model **>** **NETWORK** **>** Hover over **MODEL CONFIGURATION** **>** **CLEAR**
+* In the **MODELS** tab, click a model **>** **NETWORK** **>** Hover over **MODEL CONFIGURATION** **>** **CLEAR**
 (to delete the design) or **EDIT** **>** edit in the textbox that appears **>** **OK**.
@@ -19,7 +19,7 @@ configuration and label enumeration.
For each class, label enumeration contains the class name (key) and value.
**To add, change, or delete label enumeration classes:**
-* In the **MODELS** tab, click a model **>** **LABELS** **>** Hover over **LABELS** **>** **EDIT** **>** **+**, edit a
- key or value, or click the delete icon **>** **SAVE**.
+* In the **MODELS** tab, click a model **>** **LABELS** **>** Hover over **LABELS** **>** **EDIT** **>** **+**, edit a
+ key or value, or click the delete icon **>** **SAVE**.
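+Label enumeration is usually set from code on the experiment that creates the model. A minimal sketch using
+`Task.set_model_label_enumeration` (the class names and values are examples):
+
+```
+from clearml import Task
+
+task = Task.init(project_name="examples", task_name="label enumeration")
+
+# Map class names (keys) to integer values for the task's output model
+task.set_model_label_enumeration({"background": 0, "cat": 1, "dog": 2})
+```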
diff --git a/docs/webapp/webapp_model_table.md b/docs/webapp/webapp_model_table.md
index 39d79ebc..09621b0d 100644
--- a/docs/webapp/webapp_model_table.md
+++ b/docs/webapp/webapp_model_table.md
@@ -2,14 +2,14 @@
title: The Models Table
---
-The models table is a [customizable](#customizing-the-models-table) list of models associated with the experiments in a project. From the models table,
+The models table is a [customizable](#customizing-the-models-table) list of models associated with the experiments in a project. From the models table,
view model details, and modify, publish, archive, tag, and move models to other projects.
## Models table columns
-The models table contains the following columns:
+The models table contains the following columns:
| Column | Description | Type |
|---|---|---|
@@ -27,21 +27,21 @@ The models table contains the following columns:
## Customizing the models table
-The models table is customizable. Changes are persistent (cached in the browser) and represented in the URL, so customized settings
+The models table is customizable. Changes are persistent (cached in the browser) and represented in the URL, so customized settings
can be saved in a browser bookmark and shared with other **ClearML** users to collaborate.
Customize any combination of the following:
* Dynamic column ordering - Drag a column title to a different position.
-* Show / hide columns - Click the settings icon **>** select or clear the checkboxes of columns to show or hide.
+* Show / hide columns - Click the settings icon **>** select or clear the checkboxes of columns to show or hide.
* Filter columns - Filter by type of experiment, experiment status (state), or user.
* Sort columns - Metrics and hyperparameters, type of experiment, experiment name, start and last update elapsed time, and last iteration.
* Column autofit - In the column heading, double click a resizer (column separator).
## ClearML Actions from the models table
-The following table describes the **ClearML** features that can be used from the models table, including the states that
+The following table describes the **ClearML** features that can be used from the models table, including the states that
allow each feature. Model states are *Draft* (editable) and *Published* (read-only).
| ClearML Action | Description | States Valid for the Action |
@@ -55,16 +55,16 @@ allow each feature. Model states are *Draft* (editable) and *Published* (read-on
## Tagging models
-Tags are user-defined, color-coded labels that can be added to models (and experiments), allowing to easily identify and
-group of experiments. A tag can show any text, for any purpose. For example, add tags for the type of remote machine
+Tags are user-defined, color-coded labels that can be added to models (and experiments), making it easy to identify and
+group experiments. A tag can show any text, for any purpose. For example, add tags for the type of remote machine
experiments execute on, label versions of experiments, or apply team names to organize experimentation.
* To add tags and change tag colors:
- 1. Click the experiment **>** Hover over the tag area **>** **+ADD TAG** or the (menu) icon.
+ 1. Click the model **>** Hover over the tag area **>** **+ADD TAG** or the (menu) icon.
1. Do one of the following:
* Add a new tag - Type the new tag name **>** **(Create New)**.
* Add an existing tag - Click a tag.
- * Change a tag's colors - Click **Tag Colors** **>** Click the tag icon **>** **Background** or **Foreground**
- **>** Pick a color **>** **OK** **>** **CLOSE**.
+ * Change a tag's colors - Click **Tag Colors** **>** Click the tag icon **>** **Background** or **Foreground**
+ **>** Pick a color **>** **OK** **>** **CLOSE**.
* To remove a tag - Hover over the tag **>** **X**.
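+Tags can also be added from code on the experiment side. A minimal sketch using `Task.add_tags` (the tag names are
+examples):
+
+```
+from clearml import Task
+
+task = Task.init(project_name="examples", task_name="tagged experiment")
+
+# Add user tags; they appear in the tag area of the experiment
+task.add_tags(["gpu-machine", "v2"])
+```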
diff --git a/docs/webapp/webapp_overview.md b/docs/webapp/webapp_overview.md
index ebb5625e..58722050 100644
--- a/docs/webapp/webapp_overview.md
+++ b/docs/webapp/webapp_overview.md
@@ -2,21 +2,21 @@
title: Overview
---
-The **ClearML Web UI** is the graphical user interface for the **ClearML** platform, which includes:
-* Experiment management
-* Browsing
-* Resource utilization monitoring
-* Profile management
+The **ClearML Web UI** is the graphical user interface for the **ClearML** platform, which includes:
+* Experiment management
+* Browsing
+* Resource utilization monitoring
+* Profile management
* Direct access to the **ClearML** community (Slack Channel, YouTube, and GitHub).
-
+
The **ClearML Web UI** is composed of the following pages:
-* The [Home](webapp_home.md) Page - The dashboard for recent activity, and quick access to experiments and and projects.
-* The Projects Page - The main experimentation page. It is a main projects page where specific projects can be opened.
-
- Each project page contains customizable [experiments](webapp_exp_table.md) and [models](webapp_model_table.md) tables
+* The [Home](webapp_home.md) Page - The dashboard for recent activity, and quick access to experiments and projects.
+* The Projects Page - The main experimentation page, listing all projects, from which a specific project can be opened.
+
+ Each project page contains customizable [experiments](webapp_exp_table.md) and [models](webapp_model_table.md) tables
with the following options:
* [Track experiments and visualize results](webapp_exp_track_visual.md)
* [Reproduce experiments](webapp_exp_reproducing.md)
@@ -27,17 +27,17 @@ The **ClearML Web UI** is composed of the following pages:
* [View](webapp_model_viewing.md) and [modify](webapp_model_modifying.md) models
* The [Workers and Queues](webapp_workers_queues.md) Page - The resource monitoring and queues management page.
-* The [Profile Page](webapp_profile.md) - Manage a **ClearML** user account:
- * Create **ClearML** credentials
+* The [Profile Page](webapp_profile.md) - Manage a **ClearML** user account:
+ * Create **ClearML** credentials
* Provide Cloud Storage Access credentials for the **ClearML Web UI**
* If using the **ClearML Hosted Service**, invite users and switch workspaces
In addition, from the **ClearML Web UI**, use these buttons to access the **ClearML** community:
-* The **ClearML** Slack channel. Ask questions about **ClearML**.
+* The **ClearML** Slack channel. Ask questions about **ClearML**.
* The **ClearML** YouTube Channel. View our tutorials, presentations, and discussions.
-* The **ClearML** GitHub repository.
+* The **ClearML** GitHub repository.
-For more information, see the [Community page](../community.md).
\ No newline at end of file
+For more information, see the [Community page](../community.md).
diff --git a/docs/webapp/webapp_profile.md b/docs/webapp/webapp_profile.md
index 1df7fe12..d312c6d4 100644
--- a/docs/webapp/webapp_profile.md
+++ b/docs/webapp/webapp_profile.md
@@ -12,8 +12,8 @@ Use the Profile page to manage a **ClearML** user account, including:
## Setting user preferences
-The **HiDPI browser scale override** adjusts scaling on High-DPI monitors to improve the Web UI experience. It is enabled
-by default, but can be disabled.
+The **HiDPI browser scale override** adjusts scaling on High-DPI monitors to improve the Web UI experience. It is enabled
+by default, but can be disabled.
Users that use their own **ClearML Server** can choose whether to send anonymous usage data to Allegro AI.
@@ -28,25 +28,25 @@ Users that use their own **ClearML Server** can choose whether to send anonymous
* **Secret / SAS** - The secret key or shared access signature for Azure Storage.
* **Region** - The region for AWS S3.
* **Host (Endpoint)** - The host for non-AWS S3 servers.
-
+
## Creating ClearML credentials
-**ClearML** credentials include:
+**ClearML** credentials include:
* Access key
-* Secret key
-* Web server
-* API server
+* Secret key
+* Web server
+* API server
* File servers host URLs
-
-**ClearML Hosted Service** users need credentials for each workspace they use. Users with their own self-hosted **ClearML Server**
+
+**ClearML Hosted Service** users need credentials for each workspace they use. Users with their own self-hosted **ClearML Server**
need only one set of credentials.
-**ClearML** credentials can be created for a current workspace. To create **ClearML** credentials for another workspace,
+**ClearML** credentials can be created for a current workspace. To create **ClearML** credentials for another workspace,
switch to it.
**To create ClearML credentials:**
-1. Click the Profile button (upper right corner).
+1. Click the Profile button (upper right corner).
1. In **WORKSPACES**, use the current workspace or select another (self-hosted **ClearML Server** users have one workspace).
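+Credentials are usually stored in `clearml.conf` by running `clearml-init`, but they can also be supplied
+programmatically. A minimal sketch using `Task.set_credentials` (all values are placeholders):
+
+```
+from clearml import Task
+
+# Must be called before Task.init(); overrides the configuration file
+Task.set_credentials(
+    api_host="https://api.clear.ml",
+    web_host="https://app.clear.ml",
+    files_host="https://files.clear.ml",
+    key="ACCESS_KEY",
+    secret="SECRET_KEY",
+)
+```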
@@ -55,25 +55,25 @@ switch to it.
## Switching workspaces
-:::note
-Switching workspaces does not apply to users of a self-hosted **ClearML Server**
+:::note
+Switching workspaces does not apply to users of a self-hosted **ClearML Server**.
:::
-**ClearML Hosted Service** users who are members of multiple teams can switch from one workspace to another.
+**ClearML Hosted Service** users who are members of multiple teams can switch from one workspace to another.
**Switch workspaces in one of the following ways:**
-* Profile button - Click the profile button (upper right corner on any page) **>** Click the workspace to switch to.
+* Profile button - Click the profile button (upper right corner on any page) **>** Click the workspace to switch to.
* Profile page - In the **WORKSPACES** section, click **SWITCH TO WORKSPACE** **>** Click the workspace to switch to.
## Inviting new teammates
-:::note
+:::note
Inviting new teammates does not apply to users of a self-hosted **ClearML Server**.
:::
-**ClearML Hosted Service** users can invite other users to collaborate in their workspace. On the Profile page, the **WORKSPACES**
+**ClearML Hosted Service** users can invite other users to collaborate in their workspace. On the Profile page, the **WORKSPACES**
section shows the current members of the team, and whether the team has reached its maximum number of members.
@@ -81,9 +81,9 @@ section shows the current members of the team, and whether the team has reached
1. Create an invitation hyperlink with one of these options:
-    * Profile button - Click the profile button **>** **Invite a User** **>** Copy the invitation hyperlink.
+    * Profile button - Click the profile button **>** **Invite a User** **>** Copy the invitation hyperlink.
-
+
* Profile page - In **WORKSPACES** **>** **Members** **>** Click **INVITE USER** **>** Copy the invitation hyperlink.
1. Send the invitation hyperlink to an invitee.
diff --git a/docs/webapp/webapp_workers_queues.md b/docs/webapp/webapp_workers_queues.md
index 1fc8864b..a362f4bb 100644
--- a/docs/webapp/webapp_workers_queues.md
+++ b/docs/webapp/webapp_workers_queues.md
@@ -4,7 +4,7 @@ title: Workers and Queues
With the **Workers and Queues** page, users can:
-* Monitor resources (CPU and GPU, memory, video memory, and network usage) used by the experiments / Tasks that workers
+* Monitor resources (CPU and GPU, memory, video memory, and network usage) used by the experiments / Tasks that workers
execute
* View workers and the queues they listen to
* Create and rename queues; delete empty queues; monitor queue utilization
@@ -16,20 +16,20 @@ With the **Workers and Queues** page, users can:
**To monitor resource utilization:**
-1. In the **WORKERS** tab, click a worker. The chart refreshes showing resource utilization over time for that worker. The
- worker **INFO** slides open, showing information about the worker:
- * Name
+1. In the **WORKERS** tab, click a worker. The chart refreshes showing resource utilization over time for that worker. The
+ worker **INFO** slides open, showing information about the worker:
+ * Name
* Current experiment
* Current runtime
- * Last iteration
+ * Last iteration
* Last update time.
1. Select a metric and time frame:
1. In the list of resources (top left side), select **CPU and GPU Usage**, **Memory Usage**, **Video Memory Usage**, or **Network Usage**.
-
+
1. In the period list (top right side), select **3 Hours**, **6 Hours**, **12 Hours**, **1 Day**, **1 Week**, or **1 Month**.
-
+
@@ -40,7 +40,7 @@ Optimize worker use by monitoring worker utilization in the **Workers** tab.
**To monitor worker utilization:**
-* Open the **Workers** tab in the **Workers & Queues** page. The worker utilization chart
+* Open the **Workers** tab in the **Workers & Queues** page. The worker utilization chart
appears. Hover over any data point and see average workers and total workers.
@@ -49,15 +49,15 @@ Optimize worker use by monitoring worker utilization in the **Workers** tab.
**To monitor all queues:**
-* Open the **Queues** tab in the **Workers & Queues** page. The queue utilization chart appears and shows
- average wait time (seconds) and number of experiments queued for all queues.
+* Open the **Queues** tab in the **Workers & Queues** page. The queue utilization chart appears and shows
+ average wait time (seconds) and number of experiments queued for all queues.
* Hover over any data point and see average wait time and number of experiments.
**To monitor a queue:**
-1. In the queues list (below the plot on the left), click a queue.
+1. In the queues list (below the plot on the left), click a queue.
1. The chart refreshes, showing metrics for the selected queue. The info panel slides open with two tabs:
1. To see the enqueued experiments on the queue, click the **EXPERIMENTS** tab.
2. To view information about the workers listening to the queue, click the **WORKERS** tab.
@@ -73,8 +73,8 @@ In the **Queues** tab, do any of the following:
* Rename a queue - Click **RENAME** **>** Type a queue name **>** **RENAME**.
* Delete a queue - Click **DELETE**.
* Do any of the following by right clicking an experiment in a queue's **EXPERIMENTS** tab (lower right):
-    * Reorder experiments in a queue - Drag an experiment to a new position in the queue, or click the (menu) icon
+    * Reorder experiments in a queue - Drag an experiment to a new position in the queue, or click the (menu) icon
 and then select **Move to top** or **Move to bottom**.
-    * Move to a new queue - Click the (menu) icon **>** **Move to queue...** **>** Select a queue **>** **ENQUEUE**.
-    * Remove an experiment - Click the (menu) icon **>** **Move to queue...** **>** Select a queue **>** **ENQUEUE**.
-
+    * Move to a new queue - Click the (menu) icon **>** **Move to queue...** **>** Select a queue **>** **ENQUEUE**.
+    * Remove an experiment - Click the (menu) icon **>** **Remove**.
+
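+Experiments can also be enqueued from code. A minimal sketch using `Task.get_task` and `Task.enqueue` (the task ID and
+queue name are placeholders):
+
+```
+from clearml import Task
+
+# Enqueue an existing draft task for execution by workers listening to the "default" queue
+task = Task.get_task(task_id="aabbccddeeff00112233445566778899")
+Task.enqueue(task, queue_name="default")
+```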