diff --git a/docs/fundamentals/task.md b/docs/fundamentals/task.md
index 54aeff4e..be85c1c1 100644
--- a/docs/fundamentals/task.md
+++ b/docs/fundamentals/task.md
@@ -53,7 +53,7 @@ The captured [code execution information](../webapp/webapp_exp_track_visual.md#e
* Python environment
* [Execution configuration](#execution-configuration) and hyperparameters
-The captured [execution output](../webapp/webapp_exp_track_visual.md#experiment-results) includes:
+The captured [execution output](../webapp/webapp_exp_track_visual.md#task-results) includes:
* [Console output](../webapp/webapp_exp_track_visual.md#console)
* [Scalars](../webapp/webapp_exp_track_visual.md#scalars)
* [Plots](../webapp/webapp_exp_track_visual.md#plots)
diff --git a/docs/img/webapp_tracking_33.png b/docs/img/webapp_tracking_33.png
index fd5f74f7..8db578bf 100644
Binary files a/docs/img/webapp_tracking_33.png and b/docs/img/webapp_tracking_33.png differ
diff --git a/docs/img/webapp_tracking_34.png b/docs/img/webapp_tracking_34.png
index 84f5c437..b670d54b 100644
Binary files a/docs/img/webapp_tracking_34.png and b/docs/img/webapp_tracking_34.png differ
diff --git a/docs/img/webapp_tracking_34b.png b/docs/img/webapp_tracking_34b.png
new file mode 100644
index 00000000..8a8d4463
Binary files /dev/null and b/docs/img/webapp_tracking_34b.png differ
diff --git a/docs/webapp/webapp_exp_table.md b/docs/webapp/webapp_exp_table.md
index 977ca67a..0778fce9 100644
--- a/docs/webapp/webapp_exp_table.md
+++ b/docs/webapp/webapp_exp_table.md
@@ -233,7 +233,7 @@ to open the context menu
| Abort All Children | Manually terminate all *Running* tasks which have this task as a parent | *Running* or *Aborted* | None for parent task, *Aborted* for child tasks |
| Retry | Enqueue a failed task in order to rerun it. Make sure you have resolved the external problem which previously prevented the task’s completion. | *Failed* | *Pending* |
| Publish | Publish a task to prevent changes to its tracking data, inputs, and outputs. Published tasks and their models are read-only. *Published* tasks cannot be enqueued, but they can be cloned, and their clones can be edited, tuned, and enqueued. | *Completed*, *Aborted*, or *Failed*. | *Published* |
-| Add Tag | Tag tasks with color-coded labels to assist you in organizing your work. See [tagging tasks](webapp_exp_track_visual.md#tagging-experiments). | Any state | None |
+| Add Tag | Tag tasks with color-coded labels to assist you in organizing your work. See [tagging tasks](webapp_exp_track_visual.md#tagging-tasks). | Any state | None |
| Clone | Make an exact, editable copy of a task (for example, to reproduce a task, but keep the original). | *Draft* | Newly cloned task is *Draft* |
| Move to Project | Move a task to another project. | Any state | None |
| Compare | Compare selected tasks (see [Comparing Tasks](webapp_exp_comparing.md)) | Any state | None |
diff --git a/docs/webapp/webapp_exp_track_visual.md b/docs/webapp/webapp_exp_track_visual.md
index f2a158d0..9631ea5d 100644
--- a/docs/webapp/webapp_exp_track_visual.md
+++ b/docs/webapp/webapp_exp_track_visual.md
@@ -1,14 +1,14 @@
---
-title: Tracking Experiments and Visualizing Results
+title: Tracking Tasks and Visualizing Results
---
-While an experiment is running, and any time after it finishes, track it and visualize the results in the ClearML Web UI,
+While a task is running, and any time after it finishes, track it and visualize the results in the ClearML Web UI,
including:
-* [Execution details](#execution) - Code, the base Docker image used for [ClearML Agent](../clearml_agent.md), output destination for artifacts, and the logging level.
+* [Execution details](#execution) - Code, the container image used for [ClearML Agent](../clearml_agent.md), output destination for artifacts, and the logging level.
* [Configuration](#configuration) - Hyperparameters, user properties, and configuration objects.
* [Artifacts](#artifacts) - Input model, output model, model snapshot locations, other artifacts.
-* [Info](#info) - Extended experiment information, such as the experiment start, create, and last update times and dates, user creating the experiment, and its description.
+* [Info](#info) - Extended task information, such as the task's creation, start, and last update dates and times, the user who created it, and its description.
* [Console](#console) - stdout, stderr, output to the console from libraries, and ClearML explicit reporting.
* [Scalars](#scalars) - Metric plots.
* [Plots](#plots) - Other plots and data, for example: Matplotlib, Plotly, and ClearML explicit reporting.
@@ -16,39 +16,38 @@ including:
## Viewing Modes
-The ClearML Web UI provides two viewing modes for experiment details:
+The ClearML Web UI provides two viewing modes for task details:
-* The info panel
+* [Info panel](#info-panel)
+* [Full screen details mode](#full-screen-details-view)
-* Full screen details mode.
-
-Both modes contain all experiment details. When either view is open, switch to the other mode by clicking
-(**View in experiments table / full screen**), or clicking
(**menu**) > **View in experiments
+Both modes contain all task details. When either view is open, switch to the other mode by clicking
+(**View in tasks table / full screen**), or clicking
(**menu**) > **View in tasks
table / full screen**.
### Info Panel
-The info panel keeps the experiment table in view so that [experiment actions](webapp_exp_table.md#task-actions)
+The info panel keeps the task table in view so that [task actions](webapp_exp_table.md#task-actions)
can be performed from the table (as well as the menu in the info panel).

Click
to
-hide details in the experiment table, so only the experiment names and statuses are displayed
+hide details in the task table, so only the task names and statuses are displayed

### Full Screen Details View
-The full screen details view allows for easier viewing and working with experiment tracking and results. The experiments
-table is not visible when the full screen details view is open. Perform experiment actions from the menu.
+The full screen details view allows for easier viewing and working with task tracking and results. The task
+table is not visible when the full screen details view is open. Perform task actions from the menu.

## Execution
-An experiment's **EXECUTION** tab of lists the following:
+A task's **EXECUTION** tab lists the following:
* Source code
* Uncommitted changes
* Installed Python packages
@@ -59,8 +58,8 @@ In full-screen mode, the source code and output details are grouped in the **DET
### Source Code
-The Source Code section of an experiment's **EXECUTION** tab includes:
-* The experiment's repository
+The Source Code section of a task's **EXECUTION** tab includes:
+* The task's repository
* Commit ID
* Script path
* Working directory
@@ -70,7 +69,7 @@ The Source Code section of an experiment's **EXECUTION** tab includes:
### Uncommitted Changes
-ClearML displays the git diff of the experiment in the Uncommitted Changes section.
+ClearML displays the git diff of the task in the Uncommitted Changes section.

@@ -88,12 +87,12 @@ using to set up an environment (`pip` or `conda`) are available. Select which re
### Container
The Container section lists the following information:
-* Image - a pre-configured Docker that ClearML Agent will use to remotely execute this experiment (see [Building Docker containers](../clearml_agent/clearml_agent_docker.md))
-* Arguments - add Docker arguments
-* Setup shell script - a bash script to be executed inside the Docker before setting up the experiment's environment
+* Image - a pre-configured container that ClearML Agent will use to remotely execute this task (see [Building Docker containers](../clearml_agent/clearml_agent_docker.md))
+* Arguments - add container arguments
+* Setup shell script - a bash script to be executed inside the container before setting up the task's environment
:::important
-To [rerun](webapp_exp_tuning.md) an experiment through the UI in the listed container, the ClearML Agent executing the experiment must be running in
+To [rerun](webapp_exp_tuning.md) a task through the UI in the listed container, the ClearML Agent executing the task must be running in
Docker mode:
```bash
@@ -119,7 +118,7 @@ All parameters and configuration objects appear in the **CONFIGURATION** tab.
### Hyperparameters
-Hyperparameters are grouped by their type and appear in **CONFIGURATION** **>** **HYPERPARAMETERS**. Once an experiment
+Hyperparameters are grouped by their type and appear in **CONFIGURATION** **>** **HYPERPARAMETERS**. Once a task
is run and stored in ClearML Server, any of these hyperparameters can be [modified](webapp_exp_tuning.md#modifying-experiments).
#### Command Line Arguments
@@ -157,14 +156,14 @@ The **TF_DEFINE** parameter group shows automatic TensorFlow logging.
### User Properties
-User properties allow to store any descriptive information in a key-value pair format. They are editable in any experiment,
-except experiments whose status is *Published* (read-only).
+User properties let you store any descriptive information in a key-value pair format. They are editable in any task,
+except *Published* ones (read-only).

### Configuration Objects
-ClearML tracks experiment (Task) model configuration objects, which appear in **Configuration Objects** **>** **General**.
+ClearML tracks a task's model configuration objects, which appear in **Configuration Objects** **>** **General**.
These objects include those that are automatically tracked, and those connected to a Task in code (see [`Task.connect_configuration`](../references/sdk/task.md#connect_configuration)).

@@ -200,7 +199,7 @@ The task's input and output models appear in the **ARTIFACTS** tab. Each model e
* ID
* Configuration.
-Input models also display their creating experiment, which on-click navigates you to the experiment's page.
+Input models also display their creating task; clicking it navigates you to that task's page.

@@ -210,10 +209,10 @@ to navigate to its page in the **MODELS** tab (see [Model Details](webapp_model_
## Info
-The **INFO** tab shows extended experiment information:
-* [Latest experiment events log](#latest-events-log)
-* [Experiment description](#description)
-* [Experiment details](#experiment-details)
+The **INFO** tab shows extended task information:
+* [Latest task events log](#latest-events-log)
+* [Task description](#description)
+* [Task details](#task-details)
### Latest Events Log
@@ -237,19 +236,19 @@ ClearML maintains a system-wide, large but strict limit for task history items.
:::
### Description
-Add descriptive text to the experiment in the **Description** section. To modify the description, hover over the
+Add descriptive text to the task in the **Description** section. To modify the description, hover over the
description box and click **Edit**.
-### Experiment Details
-The **Experiment Details** section lists information describing the experiment:
+### Task Details
+The **Task Details** section lists information describing the task:
-* The parent experiment
+* The parent task
* Project name
* Creation, start, and last update dates and times
-* User who created the experiment
-* Experiment state (status)
-* Whether the experiment is archived
-* Runtime properties - Information about the machine running the experiment, including:
+* User who created the task
+* Task state (status)
+* Whether the task is archived
+* Runtime properties - Information about the machine running the task:
* Operating system
* CUDA driver version
* Number of CPU cores
@@ -259,21 +258,21 @@ The **Experiment Details** section lists information describing the experiment:
* Host name
* Processor
* Python version
-* Experiment Progress
+* Task Progress

-## Experiment Results
+## Task Results
:::tip Embedding ClearML Visualization
-You can embed experiment plots and debug samples into ClearML [Reports](webapp_reports.md). These visualizations are
-updated live as the experiment(s) updates. The Enterprise Plan and Hosted Service support embedding resources in external
+You can embed task plots and debug samples into ClearML [Reports](webapp_reports.md). These visualizations are
+updated live as the task(s) update. The Enterprise Plan and Hosted Service support embedding resources in external
tools (e.g. Notion). See [Plot Controls](#plot-controls).
:::
### Console
-The complete experiment log containing everything printed to stdout and stderr appears in the **CONSOLE** tab. The full log
+The complete task log containing everything printed to stdout and stderr appears in the **CONSOLE** tab. The full log
is downloadable. To view the end of the log, click **Jump to end**.

@@ -289,7 +288,10 @@ Scalar series can be displayed in [graph view](#graph-view) (default) or in [met
#### Graph View
Scalar graph view (
)
-shows scalar series plotted as a time series line chart. The series are sub-sampled for
+shows scalar series plotted as a time series line chart. By default, a single plot is shown for each scalar metric,
+with all variants overlaid within.
+
+The series are sub-sampled for
display efficiency. For high resolution, view a series in full screen mode by hovering over the graph and clicking
.
:::info Full Screen Refresh
@@ -304,26 +306,24 @@ a `Summary` table.
Use the scalar tools to improve analysis of scalar metrics. In the info panel, click
to use the tools. In the full screen details view, the tools
are on the left side of the window. The tools include:
-* **Group by** - Select one of the following:
- * **Metric** - All variants for a metric on the same plot
-
on each plot you want to view.
-To embed scalar plots in your [Reports](webapp_reports.md), hover over a plot and click
,
-which will copy to clipboard the embed code to put in your Reports. In contrast to static screenshots, embedded resources
+To embed scalar plots in your [Reports](webapp_reports.md), hover over a plot and click Embed
,
+which will copy to clipboard the embed code to put in your Reports. To quickly get the embed codes for all plots of a
+specific metric, click Embed
+on the group section header (available when plots are [grouped by](#group_by) `None`).
+
+
+
+In contrast to static screenshots, embedded resources
are retrieved when the report is displayed allowing your reports to show the latest up-to-date data.
See additional [plot controls](#plot-controls) below.
@@ -367,7 +373,10 @@ Plotly plots. Individual plots can be shown / hidden or filtered by title.

-For each metric, the latest reported plot is displayed.
+Plots are grouped into sections by metric. To quickly get the embed codes for all plots of a specific metric, click Embed
+on the group section header.
+
+For each metric/variant combination, the latest reported plot is displayed.
When viewing a plot in full screen (
),
older iterations are available through the iteration slider (or using the up/down arrow keyboard shortcut). Go to the
@@ -416,7 +425,7 @@ These controls allow you to better analyze the results. Hover over a plot, and t
### Debug Samples
-Experiment outputs such as images, audio, and videos appear in **DEBUG SAMPLES**. These include data generated by
+Task outputs such as images, audio, and videos appear in **DEBUG SAMPLES**. These include data generated by
libraries and visualization tools, and explicitly reported using the [ClearML Logger](../fundamentals/logger.md).
You can view debug samples by metric in the reported iterations. Filter the samples by metric by selecting a metric from the
@@ -435,7 +444,7 @@ buttons (or using the left/right arrow keyboard shortcut).

-## Tagging Experiments
+## Tagging Tasks