From 7ba665764f40605af9d31f7400df1c1278a5df8a Mon Sep 17 00:00:00 2001
From: pollfly <75068813+pollfly@users.noreply.github.com>
Date: Sun, 12 Jan 2025 15:57:06 +0200
Subject: [PATCH] Add bash script support when creating task via UI (#998)
---
docs/faq.md | 2 +-
docs/webapp/webapp_exp_table.md | 283 +++++++++++++------------
docs/webapp/webapp_exp_track_visual.md | 2 +-
3 files changed, 145 insertions(+), 142 deletions(-)
diff --git a/docs/faq.md b/docs/faq.md
index b08136f2..fcad943c 100644
--- a/docs/faq.md
+++ b/docs/faq.md
@@ -165,7 +165,7 @@ add a [custom column](webapp/webapp_model_table.md#customizing-the-models-table)
that metric column. Available custom column options depend upon the models in the table and the metrics that have been
attached to them (see [Logging Metrics and Plots](clearml_sdk/model_sdk.md#logging-metrics-and-plots)).
-ClearML associates models with the experiments that created them, so you can also add a [custom column](webapp/webapp_exp_table.md#customizing-the-experiments-table)
+ClearML associates models with the experiments that created them, so you can also add a [custom column](webapp/webapp_exp_table.md#customizing-the-task-table)
in an experiments table and sort by that metric column.
diff --git a/docs/webapp/webapp_exp_table.md b/docs/webapp/webapp_exp_table.md
index 1c38b19c..977ca67a 100644
--- a/docs/webapp/webapp_exp_table.md
+++ b/docs/webapp/webapp_exp_table.md
@@ -1,78 +1,81 @@
---
-title: The Experiments Table
+title: The Task Table
---
-The experiments table is a [customizable](#customizing-the-experiments-table) list of experiments associated with a project. From the experiments
-table, view experiment details, and work with experiments (reset, clone, enqueue, create [tracking leaderboards](../guides/ui/building_leader_board.md)
-to monitor experimentation, and more). The experiments table's auto-refresh lets users continually monitor experiment progress.
+The task table is a [customizable](#customizing-the-task-table) list of tasks associated with a project. From the tasks
+table, view task details, and work with tasks (reset, clone, enqueue, create [tracking leaderboards](../guides/ui/building_leader_board.md)
+to monitor experimentation, and more). The task table's auto-refresh lets users continually monitor task progress.
-View the experiments in table view ,
+View the tasks in table view ,
details view , or
comparison view
-using the buttons on the top left of the page. Use the table view for a comparative view of your experiments according
-to columns of interest. Use the details view to access a selected experiment's details, while keeping the experiment list
-in view. Details view can also be accessed by double-clicking a specific experiment in the table view to open its details view.
-Use the [comparison view](#comparing-experiments) to compare your experiments' scalar and plot results (for a more in
-depth comparison, see [Comparing Experiments](webapp_exp_comparing.md)). This view compares
-the scalars/plots of currently selected experiments. If no experiments are selected, the first 100
-visible experiments in the table are compared.
+using the buttons on the top left of the page. Use the table view for a comparative view of your tasks according
+to columns of interest. Use the details view to access a selected task's details, while keeping the task list
+in view. Details view can also be accessed by double-clicking a specific task in the table view to open its details view.
+Use the [comparison view](#comparing-tasks) to compare your tasks' scalar and plot results (for a more in-depth
+comparison, see [Comparing Tasks](webapp_exp_comparing.md)). This view compares
+the scalars/plots of currently selected tasks. If no tasks are selected, the first 100
+visible tasks in the table are compared.
-You can archive experiments so the experiments table doesn't get too cluttered. Click **OPEN ARCHIVE** on the top of the
-table to open the archive and view all archived experiments. From the archive, you can restore
-experiments to remove them from the archive. You can also permanently delete experiments.
+You can archive tasks so the table doesn't get too cluttered. Click **OPEN ARCHIVE** on the top of the
+table to open the archive and view all archived tasks. From the archive, you can restore
+tasks to remove them from the archive. You can also permanently delete tasks.
-You can download the experiments table as a CSV file by clicking
+You can download the task table as a CSV file by clicking
and choosing one of these options:
-* **Download onscreen items** - Download the values for experiments currently visible on screen
-* **Download all items** - Download the values for all experiments in this project that match the current active filters
+* **Download onscreen items** - Download the values for tasks currently visible on screen
+* **Download all items** - Download the values for all tasks in this project that match the current active filters
The downloaded data consists of the currently displayed table columns.
-![Experiment table](../img/webapp_experiment_table.png)
+![Task table](../img/webapp_experiment_table.png)
-## Creating Experiments
+## Creating Tasks
-You can create experiments by:
+You can create tasks by:
* Running code instrumented with ClearML (see [Task Creation](../clearml_sdk/task_sdk.md#task-creation))
-* [Cloning an existing experiment](webapp_exp_reproducing.md)
-* Through the UI interface: Input the experiment's details, including its source code and python requirements, and then
+* [Cloning an existing task](webapp_exp_reproducing.md)
+* Via CLI using [`clearml-task`](../apps/clearml_task.md)
+* Through the UI interface: Input the task's details, including its source code and python requirements, and then
run it through a [ClearML Queue](../fundamentals/agents_and_queues.md#what-is-a-queue) or save it as a *draft*.
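+
+For instance, the `clearml-task` CLI route listed above can create a task from existing code and enqueue it in a
+single command. A minimal sketch (the project, task name, repository URL, script path, and queue name are
+placeholders):
+
+```bash
+clearml-task --project "Examples" --name "remote-run" \
+  --repo https://github.com/user/repo.git --branch main \
+  --script train.py \
+  --queue default
+```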
-To create an experiment through the UI interface:
-1. Click `+ New Experiment`
-1. In the `Create Experiment` modal, input the following information:
- * **Code**
- * Experiment name
- * Git
+To create a task through the UI interface:
+1. Click `+ New Task`
+1. In the `Create Task` modal, input the following information:
+ * **Code** - What this task is going to run
+ * Task name
+ * Git - Optional fields for checking out the code from a git repository:
* Repository URL
* Version specification - one of the following:
* Tag
* Branch
* Commit ID
- * Execution Entry Point
- * Working Directory
- * One of the following
- * Script name
- * Module (see [python module specification](https://docs.python.org/3/using/cmdline.html#cmdoption-m))
- * Add `Task.init` call - If selected, [`Task.init()`](../references/sdk/task.md#taskinit) call is added to the
- entry point. Select if it is not already called within your code
+ * Entry Point - The code to run
+ * Working Directory
+ * Script type - Python/Shell
+ * Binary - The binary executing the script (e.g. `python3`, `bash`, etc.)
+ * Type - How the code is provided:
+ * Script - The name of the file to run using the above-specified binary
+ * Module - The name of a Python module to run (Python only; see [Python module specification](https://docs.python.org/3/using/cmdline.html#cmdoption-m))
+ * Custom code - Directly provide the code to run. Write code, or upload a file:
+ * File name - The script in which your code is stored. Click `Upload` to upload an existing file.
+ * Content - The actual code. Click `Edit` to modify the script’s contents.
+ * Add `Task.init` call (Python only) - If selected, a [`Task.init()`](../references/sdk/task.md#taskinit) call is automatically added to
+ your script (use this if your script does not already make use of ClearML)
* **Arguments** (*optional*) - Add [hyperparameter](../fundamentals/hyperparameters.md) values.
- * **Environment** (*optional*) - Set up the experiment’s python execution environment using either of the following
- options:
- * Use Poetry specification - Requires specifying a docker image for the experiment to be executed in.
- * Manually specify the python environment configuration:
- * Python binary - The python executable to use
- * Preinstalled venv - A specific existing virtual environment to use. Requires specifying a docker image for the
- experiment to be executed in.
- * Python package specification:
- * Skip - Assume system packages are available. Requires specifying a docker image for the experiment to be
- executed in.
- * Use an existing `requirements.txt` file
- * Explicitly specify the required packages
- * **Docker** (*optional*) - Specify Docker container configuration for executing the experiment
- * Image - Docker image to use for running the experiment
- * Arguments - Add Docker arguments as a single string
- * Startup Script - Add a bash script to be executed inside the Docker before setting up the experiment's environment
+ * **Environment** (*optional*) - Set up the task’s execution environment
+ * Python - Python environment settings
+ * Use Poetry - Use Poetry instead of the pip package manager. Selecting this option disables the additional Python settings.
+ * Preinstalled venv - The name of a virtual environment available in the task’s execution environment to use when
+ running the task. Additionally, specify how to use the virtual environment:
+ * Skip - Try to automatically detect an available virtual environment, and use it as is.
+ * Use `requirements.txt` file - Install packages from a `requirements.txt` file into the specified virtual environment.
+ * Specify Packages - Install the specified packages into the specified virtual environment.
+ * Environment Variables - Set these environment variables when running the task
+ * **Container** (*optional*) - Specify container configuration for executing the task
+ * Image - Image to use for running the task
+ * Arguments - Add container arguments as a single string
+ * Startup Script - Add a bash script to be executed inside the container before setting up the task's environment
:::important
For a task to run in the specified container, the ClearML Agent executing the task must be running in
@@ -85,39 +88,39 @@ To create an experiment through the UI interface:
:::
* **Run**
- * Queue - [ClearML Queue](../fundamentals/agents_and_queues.md#what-is-a-queue) where the experiment should be
+ * Queue - [ClearML Queue](../fundamentals/agents_and_queues.md#what-is-a-queue) where the task should be
enqueued for execution
- * Output Destination - A URI where experiment outputs should be stored (ClearML file server by default).
+ * Output Destination - A URI where task outputs should be stored (ClearML file server by default).
1. Once you have input all the information, click one of the following options
- * Save as Draft - Save the experiment as a new draft task.
- * Run - Enqueue the experiment for execution in the queue specified in the **Run** tab
+ * Save as Draft - Save the task as a new draft.
+ * Run - Enqueue the task for execution in the queue specified in the **Run** tab
-Once you have completed the experiment creation wizard, the experiment will be saved in your current project (where
-you clicked `+ New Experiment`). See what you can do with your experiment in [Experiment Actions](#experiment-actions).
+Once you have completed the task creation wizard, the task will be saved in your current project (where
+you clicked `+ New Task`). See what you can do with your task in [Task Actions](#task-actions).
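+
+As a sketch of the container **Startup Script** field described above: the field accepts a short bash snippet that
+the agent runs inside the container before setting up the task's environment. The packages and variable below are
+placeholders, not required values:
+
+```bash
+#!/bin/bash
+# Runs inside the container before the task's environment is set up
+apt-get update && apt-get install -y git
+# Environment variables exported here are visible to the task's setup
+export MY_CUSTOM_FLAG=1
+```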
-## Experiments Table Columns
+## Task Table Columns
-The experiments table default and customizable columns are described in the following table.
+The task table default and customizable columns are described in the following table.
| Column | Description | Type |
|---|---|---|
-| **TYPE** | Type of experiment. ClearML supports multiple [task types](../fundamentals/task.md#task-types) for experimentation, and a variety of workflows and use cases. | Default |
-| **NAME** | Experiment name. | Default |
-| **TAGS** | Descriptive, user-defined, color-coded tags assigned to experiments. Use tags to classify experiments, and filter the list. See [tagging experiments](webapp_exp_track_visual.md#tagging-experiments). | Default |
-| **STATUS** | Experiment state (status). See a list of the [task states and state transitions](../fundamentals/task.md#task-states). If you programmatically set task progress values, you will also see a progress indicator for Running, Failed, and Aborted tasks. See [here](../clearml_sdk/task_sdk.md#tracking-task-progress). | Default |
-| **PROJECT** | Name of experiment's project. | Default |
-| **USER** | User who created or cloned the experiment. | Default (hidden) |
-| **STARTED** | Elapsed time since the experiment started. To view the date and time of start, hover over the elapsed time. | Default |
-| **UPDATED** | Elapsed time since the last update to the experiment. To view the date and time of update, hover over the elapsed time. | Default |
-| **ITERATION** | Last or most recent iteration of the experiment. | Default |
-| **DESCRIPTION** | A description of the experiment. For cloned experiments, the description indicates it was auto generated with a timestamp. | Default (hidden) |
-| **RUN TIME** | The current / total running time of the experiment. | Default (hidden) |
-| **_Metrics_** | Add metrics column (last, minimum, and/or maximum values). The metrics depend upon the experiments in the table. See [adding metrics](#to-add-metrics). | Customizable |
-| **_Hyperparameters_** | Add hyperparameters. The hyperparameters depend upon the experiments in the table. See [adding hyperparameters](#to-add-hyperparameters). | Customizable |
+| **TYPE** | Type of task. ClearML supports multiple [task types](../fundamentals/task.md#task-types) for experimentation, and a variety of workflows and use cases. | Default |
+| **NAME** | Task name. | Default |
+| **TAGS** | Descriptive, user-defined, color-coded tags assigned to tasks. Use tags to classify tasks, and filter the list. See [tagging tasks](webapp_exp_track_visual.md#tagging-tasks). | Default |
+| **STATUS** | Task state (status). See a list of the [task states and state transitions](../fundamentals/task.md#task-states). If you programmatically set task progress values, you will also see a progress indicator for Running, Failed, and Aborted tasks. See [here](../clearml_sdk/task_sdk.md#tracking-task-progress). | Default |
+| **PROJECT** | Name of task's project. | Default |
+| **USER** | User who created or cloned the task. | Default (hidden) |
+| **STARTED** | Elapsed time since the task started. To view the date and time of start, hover over the elapsed time. | Default |
+| **UPDATED** | Elapsed time since the last update to the task. To view the date and time of update, hover over the elapsed time. | Default |
+| **ITERATION** | Last or most recent iteration of the task. | Default |
+| **DESCRIPTION** | A description of the task. For cloned tasks, the description indicates it was auto generated with a timestamp. | Default (hidden) |
+| **RUN TIME** | The current / total running time of the task. | Default (hidden) |
+| **_Metrics_** | Add metrics column (last, minimum, and/or maximum values). The metrics depend upon the tasks in the table. See [adding metrics](#to-add-metrics). | Customizable |
+| **_Hyperparameters_** | Add hyperparameters. The hyperparameters depend upon the tasks in the table. See [adding hyperparameters](#to-add-hyperparameters). | Customizable |
-## Customizing the Experiments Table
+## Customizing the Task Table
Customize the table using any of the following:
* Dynamic column order - Drag a column title to a different position.
@@ -130,13 +133,13 @@ Customize the table using any of the following:
main column list. Added columns are by default displayed in the table. You can remove the custom columns from the
main column list or the column addition windows.
* [Filter columns](#filtering-columns)
-* Sort columns - According to metrics and hyperparameters, type of experiment, experiment name, start and last update elapsed time, and last iteration.
+* Sort columns - According to metrics and hyperparameters, type of task, task name, start and last update elapsed time, and last iteration.
-Use experiments table customization for various use cases, including:
+Use task table customization for various use cases, including:
-* Creating a [leaderboard](#creating-an-experiment-leaderboard) that will update in real time with experiment
+* Creating a [leaderboard](#creating-a-task-leaderboard) that will update in real time with task
performance, which can be shared and stored.
-* Sorting models by metrics - Models are associated with the experiments that created them. For each metric, use the last
+* Sorting models by metrics - Models are associated with the tasks that created them. For each metric, use the last
value, the minimal value, and/or the maximal value.
* Tracking hyperparameters - Track hyperparameters by adding them as columns, and applying filters and sorting.
@@ -144,25 +147,25 @@ Changes are persistent (cached in the browser), and represented in the URL so cu
bookmark and shared with other ClearML users to collaborate.
:::note
-The following experiments-table customizations are saved on a **per-project** basis:
+The following task-table customizations are saved on a **per-project** basis:
* Columns order
* Column width
* Active sort order
* Active filters
* Custom columns
-If a project has subprojects, the experiments can be viewed by their subproject groupings or together with
-all the experiments in the project. The customizations of these two views are saved separately.
+If a project has subprojects, the tasks can be viewed by their subproject groupings or together with
+all the tasks in the project. The customizations of these two views are saved separately.
:::
### Adding Metrics and/or Hyperparameters
-![Experiment table customization gif](../img/gif/webapp_exp_table_cust.gif)
+![Task table customization gif](../img/gif/webapp_exp_table_cust.gif)
-Add metrics and/or hyperparameters columns to the experiments table. The metrics and hyperparameters depend upon the
-experiments in the table.
+Add metrics and/or hyperparameters columns to the task table. The metrics and hyperparameters depend upon the
+tasks in the table.
#### To Add Metrics:
@@ -175,7 +178,7 @@ experiments in the table.
hyperparameter checkboxes.
:::note Float Values Display
-By default, the experiments table displays rounded up float values. Hover over a float to view its precise value in the
+By default, the task table displays rounded up float values. Hover over a float to view its precise value in the
tooltip that appears. To view all precise values in a column, hover over a float and click .
:::
@@ -202,38 +205,38 @@ in the top right corner of the table.
-## Experiment Actions
+## Task Actions
-The following table describes the actions that can be done from the experiments table, including the [states](../fundamentals/task.md#task-states)
+The following table describes the actions that can be done from the task table, including the [states](../fundamentals/task.md#task-states)
that allow each operation.
Access these actions in any of the following ways:
-* In the experiments table, right-click an experiment or hover over an experiment and click
+* In the task table, right-click a task or hover over a task and click
to open the context menu
-* In an experiment info panel, click the menu button
-* Through the batch action bar: available at screen bottom when multiple experiments are selected
+* In a task info panel, click the menu button
+* Through the batch action bar: available at screen bottom when multiple tasks are selected
-| Action | Description | States Valid for the Action | State Transition |
-|---|---|---|---|
-| Details | Open the experiment's [info panel](webapp_exp_track_visual.md#info-panel) (keeps the experiments list in view). Can also be accessed by double-clicking an experiment in the experiments table. | Any state | None |
-| View Full Screen | View experiment details in [full screen](webapp_exp_track_visual.md#full-screen-details-view). | Any state | None |
-| Manage Queue | If an experiment is *Pending* in a queue, view the utilization of that queue, manage that queue (remove experiments and change the order of experiments), and view information about the worker(s) listening to the queue. See the [Orchestration](webapp_workers_queues.md) page. | *Enqueued* | None |
-| View Worker | If an experiment is *Running*, view resource utilization, worker details, and queues to which a worker is listening. | *Running* | None |
-| Share | For **ClearML Hosted Service** users only, [share](webapp_exp_sharing.md) an experiment and its model with a **ClearML Hosted Service** user in another workspace. | Any state | None |
-| Archive | Move experiment to the project's archive. If it is shared (ClearML Hosted Service only), the experiment becomes private. | Any state | *Pending* to *Draft* |
-| Restore |Action available in the archive. Restore an experiment to the active experiments table.| Any State | None |
-| Delete | Action available in the archive. Delete an experiment, which will also remove all their logs, results, artifacts and debug samples. | Any State | N/A |
-| Enqueue | Add an experiment to a queue for a worker or workers (listening to the queue) to execute. | *Draft*, *Aborted* | *Pending* |
-| Dequeue | Remove an experiment from a queue. | *Pending* | *Draft* |
-| Reset | Delete the log and output from a previous run of an experiment (for example, before rerunning it). | *Completed*, *Aborted*, or *Failed* | *Draft* |
-| Abort | Manually terminate a *Running* experiment. | *Running* | *Aborted* |
-| Abort All Children | Manually terminate all *Running* experiments which have this task as a parent | *Running* or *Aborted* | None for parent experiment, *Aborted* for child experiments |
-| Retry | Enqueue a failed experiment in order to rerun it. Make sure you have resolved the external problem which previously prevented the experiment’s completion. | *Failed* | *Pending* |
-| Publish | Publish an experiment to prevent changes to its tracking data, inputs, and outputs. Published experiments and their models are read-only. *Published* experiments cannot be enqueued, but they can be cloned, and their clones can be edited, tuned, and enqueued. | *Completed*, *Aborted*, or *Failed*. | *Published* |
-| Add Tag | Tag experiments with color-coded labels to assist you in organizing your work. See [tagging experiments](webapp_exp_track_visual.md#tagging-experiments). | Any state | None |
-| Clone | Make an exact, editable copy of an experiment (for example, to reproduce an experiment, but keep the original). | *Draft* | Newly Cloned Experiment is *Draft* |
-| Move to Project | Move an experiment to another project. | Any state | None |
-| Compare | Compare selected experiments (see [Comparing Experiments](webapp_exp_comparing.md)) | Any state | None |
+| Action | Description | States Valid for the Action | State Transition |
+|---|---|---|---|
+| Details | Open the task's [info panel](webapp_exp_track_visual.md#info-panel) (keeps the tasks list in view). Can also be accessed by double-clicking a task in the task table. | Any state | None |
+| View Full Screen | View task details in [full screen](webapp_exp_track_visual.md#full-screen-details-view). | Any state | None |
+| Manage Queue | If a task is *Pending* in a queue, view the utilization of that queue, manage that queue (remove tasks and change the order of tasks), and view information about the worker(s) listening to the queue. See the [Orchestration](webapp_workers_queues.md) page. | *Enqueued* | None |
+| View Worker | If a task is *Running*, view resource utilization, worker details, and queues to which a worker is listening. | *Running* | None |
+| Share | For **ClearML Hosted Service** users only, [share](webapp_exp_sharing.md) a task and its model with a **ClearML Hosted Service** user in another workspace. | Any state | None |
+| Archive | Move task to the project's archive. If it is shared (ClearML Hosted Service only), the task becomes private. | Any state | *Pending* to *Draft* |
+| Restore | Action available in the archive. Restore a task to the active task table. | Any state | None |
+| Delete | Action available in the archive. Delete a task, which will also remove all its logs, results, artifacts, and debug samples. | Any state | N/A |
+| Enqueue | Add a task to a queue for a worker or workers (listening to the queue) to execute. | *Draft*, *Aborted* | *Pending* |
+| Dequeue | Remove a task from a queue. | *Pending* | *Draft* |
+| Reset | Delete the log and output from a previous run of a task (for example, before rerunning it). | *Completed*, *Aborted*, or *Failed* | *Draft* |
+| Abort | Manually terminate a *Running* task. | *Running* | *Aborted* |
+| Abort All Children | Manually terminate all *Running* tasks which have this task as a parent. | *Running* or *Aborted* | None for parent task, *Aborted* for child tasks |
+| Retry | Enqueue a failed task in order to rerun it. Make sure you have resolved the external problem which previously prevented the task’s completion. | *Failed* | *Pending* |
+| Publish | Publish a task to prevent changes to its tracking data, inputs, and outputs. Published tasks and their models are read-only. *Published* tasks cannot be enqueued, but they can be cloned, and their clones can be edited, tuned, and enqueued. | *Completed*, *Aborted*, or *Failed*. | *Published* |
+| Add Tag | Tag tasks with color-coded labels to assist you in organizing your work. See [tagging tasks](webapp_exp_track_visual.md#tagging-tasks). | Any state | None |
+| Clone | Make an exact, editable copy of a task (for example, to reproduce a task, but keep the original). | *Draft* | Newly cloned task is *Draft* |
+| Move to Project | Move a task to another project. | Any state | None |
+| Compare | Compare selected tasks (see [Comparing Tasks](webapp_exp_comparing.md)) | Any state | None |
:::important Enterprise Feature
The ClearML Enterprise Server provides a mechanism to define your own custom actions, which will
@@ -241,43 +244,43 @@ appear in the context menu. Create a custom action by defining an HTTP request t
action. For more information see [Custom UI Context Menu Actions](../deploying_clearml/clearml_server_config.md#custom-ui-context-menu-actions).
:::
-Most of the actions mentioned in the chart above can be performed on multiple experiments at once.
-[Select multiple experiments](#selecting-multiple-experiments), then use either the context menu, or the batch action bar
+Most of the actions mentioned in the chart above can be performed on multiple tasks at once.
+[Select multiple tasks](#selecting-multiple-tasks), then use either the context menu, or the batch action bar
that appears at the bottom of the page, to perform
-operations on the selected experiments. Actions can be performed only on the experiments that match the action criteria
-(for example, only *Running* experiments can be aborted). The context menu shows the number
-of experiments that can be affected by each action. The same information can be found in the batch action bar, in a tooltip that
+operations on the selected tasks. Actions can be performed only on the tasks that match the action criteria
+(for example, only *Running* tasks can be aborted). The context menu shows the number
+of tasks that can be affected by each action. The same information can be found in the batch action bar, in a tooltip that
appears when hovering over an action icon.
-![Experiment table batch operations](../img/webapp_experiment_table_context_menu.png)
+![Task table batch operations](../img/webapp_experiment_table_context_menu.png)
-## Selecting Multiple Experiments
+## Selecting Multiple Tasks
-Select multiple experiments by clicking the checkbox on the left of each relevant experiment. Clear any existing selection
+Select multiple tasks by clicking the checkbox on the left of each relevant task. Clear any existing selection
by clicking the checkbox in the top left corner of the table.
Click the checkbox in the top left corner of the table to select all items currently visible.
An extended bulk selection tool is available through the down arrow next to the checkbox in the top left corner, enabling
selecting items beyond the items currently on-screen:
-* **All** - Select all experiments in the project
+* **All** - Select all tasks in the project
* **None** - Clear selection
-* **Filtered** - Select **all experiments in the project** that match the current active filters in the project
+* **Filtered** - Select **all tasks in the project** that match the current active filters in the project
-## Comparing Experiments
+## Comparing Tasks
-The comparison view compares experiment scalar and plot results (for a more in depth comparison, see [Comparing Experiments](webapp_exp_comparing.md)).
-When selected, the view presents a comparison of all [selected experiments](#selecting-multiple-experiments). If no
-experiments are selected, the first 100 visible experiments in the table are displayed in the comparison.
+The comparison view compares task scalar and plot results (for a more in-depth comparison, see [Comparing Tasks](webapp_exp_comparing.md)).
+When selected, the view presents a comparison of all [selected tasks](#selecting-multiple-tasks). If no
+tasks are selected, the first 100 visible tasks in the table are displayed in the comparison.
In the dropdown menu, select to view **Scalars** or **Plots**.
-**Scalars** shows experiment scalar results as time series line graphs.
+**Scalars** shows task scalar results as time series line graphs.
![Merged comparison plots](../img/webapp_compare_view_1.png)
All single value scalars are plotted into a single clustered bar chart under the "Summary" title, where each cluster
-represents a reported metric, and each bar in the cluster represents an experiment.
+represents a reported metric, and each bar in the cluster represents a task.
![Single scalar comparison](../img/webapp_compare_view_3.png)
@@ -288,34 +291,34 @@ Click