Mirror of https://github.com/clearml/clearml-docs
Synced 2025-06-09 16:17:23 +00:00

Commit 83fa8adcd5 ("Small edits (#676)"), parent 6b02f02b0d
@@ -22,7 +22,7 @@ line arguments, Python module dependencies, and a requirements.txt file!
1. `clearml-task` does its magic! It creates a new task on the [ClearML Server](../deploying_clearml/clearml_server.md),
   and, if so directed, enqueues it for execution by a ClearML Agent.
1. While the Task is running on the remote machine, all its console outputs are logged in real-time, alongside your
-  TensorBoard and matplotlib. You can track your script’s progress and results in the [ClearML Web UI](../webapp/webapp_overview.md)
+  TensorBoard and matplotlib. You can track your script's progress and results in the [ClearML Web UI](../webapp/webapp_overview.md)
   (a link to your task details page in the ClearML Web UI is printed as ClearML Task creates the task).

## Execution Configuration
@@ -1125,7 +1125,7 @@ configuration option `agent.package_manager.system_site_packages` to `true`.
#### How can I use the ClearML API to fetch data? <a className="tr_top_negative" id="api"></a>

You can use the `APIClient` class, which provides a Pythonic interface to access ClearML's backend REST API. Through
- an `APIClient` instance, you can access ClearML’s REST API services and endpoints.
+ an `APIClient` instance, you can access ClearML's REST API services and endpoints.

To use `APIClient`, create an instance of it, then call the method corresponding to the desired REST API endpoint, with
its respective parameters as described in the [REST API reference page](references/api/index.md).
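The `APIClient` pattern the paragraph above describes can be sketched as follows. The `newest_task_names` helper name is hypothetical, and the `__main__` part assumes a configured `clearml.conf` pointing at a reachable server; `order_by`, `page`, and `page_size` are standard parameters of the `tasks.get_all` endpoint.

```python
def newest_task_names(client, limit=5):
    # call the method matching the tasks.get_all REST endpoint: sort by
    # last_update descending and cap the page size; each returned entry
    # exposes the task's fields as attributes
    tasks = client.tasks.get_all(
        order_by=["-last_update"],
        page=0,
        page_size=limit,
    )
    return [t.name for t in tasks]

if __name__ == "__main__":
    # APIClient reads server credentials from clearml.conf
    from clearml.backend_api.session.client import APIClient
    print(newest_task_names(APIClient()))
```

The helper takes the client as an argument so the REST call can be exercised (or stubbed) independently of a live server.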
@@ -70,7 +70,7 @@ Text printed to the console for training progress, as well as all other console

## Artifacts

- Models created by the experiment appear in the experiment’s **ARTIFACTS** tab. ClearML automatically logs and tracks models
+ Models created by the experiment appear in the experiment's **ARTIFACTS** tab. ClearML automatically logs and tracks models
and any snapshots created using PyTorch.


@@ -2,7 +2,7 @@
title: Executable Experiment Containers
---

- This tutorial demonstrates using [`clearml-agent`](../../clearml_agent.md)’s [`build`](../../clearml_agent/clearml_agent_ref.md#build)
+ This tutorial demonstrates using [`clearml-agent`](../../clearml_agent.md)'s [`build`](../../clearml_agent/clearml_agent_ref.md#build)
command to package an experiment into an executable container. In this example, you will build a Docker image that, when
run, will automatically execute the [keras_tensorboard.py](https://github.com/allegroai/clearml/blob/master/examples/frameworks/keras/keras_tensorboard.py)
script.
@@ -13,7 +13,7 @@ script.
* [clearml](https://github.com/allegroai/clearml) repo cloned (`git clone https://github.com/allegroai/clearml.git`)

## Creating the ClearML Experiment
- 1. Set up the experiment’s execution environment:
+ 1. Set up the experiment's execution environment:

```console
cd clearml/examples/frameworks/keras
@@ -2,7 +2,7 @@
title: Experiment Environment Containers
---

- This tutorial demonstrates using [`clearml-agent`](../../clearml_agent.md)’s [`build`](../../clearml_agent/clearml_agent_ref.md#build)
+ This tutorial demonstrates using [`clearml-agent`](../../clearml_agent.md)'s [`build`](../../clearml_agent/clearml_agent_ref.md#build)
command to build a Docker container replicating the execution environment of an existing task. ClearML Agents can make
use of such containers to execute tasks without having to set up their environment every time.
@@ -15,7 +15,7 @@ be used when running optimization tasks.
* [clearml](https://github.com/allegroai/clearml) repo cloned (`git clone https://github.com/allegroai/clearml.git`)

## Creating the ClearML Experiment
- 1. Set up the experiment’s execution environment:
+ 1. Set up the experiment's execution environment:

```console
cd clearml/examples/frameworks/keras
@@ -47,7 +47,7 @@ clearml-agent build --id <TASK_ID> --docker --target new_docker
If the container will not make use of a GPU, add the `--cpu-only` flag
:::

- This will create a container with the specified task’s execution environment in the `--target` folder.
+ This will create a container with the specified task's execution environment in the `--target` folder.
When the Docker build completes, the console output shows:

```console
@@ -76,7 +76,7 @@ Make use of the container you've just built by having a ClearML agent make use o
:::

This agent will pull the enqueued task and run it using the `new_docker` image to create the execution environment.
- In the task’s **CONSOLE** tab, one of the first logs should be:
+ In the task's **CONSOLE** tab, one of the first logs should be:

```console
Executing: ['docker', 'run', ..., 'CLEARML_DOCKER_IMAGE=new_docker', ...].
@@ -32,11 +32,11 @@ Text printed to the console for training progress, as well as all other console

## Artifacts

- Models created by the experiment appear in the experiment’s **ARTIFACTS** tab.
+ Models created by the experiment appear in the experiment's **ARTIFACTS** tab.



- Clicking on the model's name takes you to the [model’s page](../../../webapp/webapp_model_viewing.md), where you can view
- the model’s details and access the model.
+ Clicking on the model's name takes you to the [model's page](../../../webapp/webapp_model_viewing.md), where you can view
+ the model's details and access the model.


@@ -29,12 +29,12 @@ Text printed to the console for training progress, as well as all other console


## Artifacts
- Models created by the experiment appear in the experiment’s **ARTIFACTS** tab. ClearML automatically logs and tracks
+ Models created by the experiment appear in the experiment's **ARTIFACTS** tab. ClearML automatically logs and tracks
models created using CatBoost.



- Clicking on the model name takes you to the [model’s page](../../../webapp/webapp_model_viewing.md), where you can view
- the model’s details and access the model.
+ Clicking on the model name takes you to the [model's page](../../../webapp/webapp_model_viewing.md), where you can view
+ the model's details and access the model.


@@ -25,7 +25,7 @@ ClearML automatically logs the configurations applied to LightGBM. They appear i

## Artifacts

- Models created by the experiment appear in the experiment’s **ARTIFACTS** tab. ClearML automatically logs and tracks
+ Models created by the experiment appear in the experiment's **ARTIFACTS** tab. ClearML automatically logs and tracks
models and any snapshots created using LightGBM.


@@ -49,6 +49,6 @@ The model info panel contains the model details, including:

## Console

- All console output during the script’s execution appears in the experiment’s **CONSOLE** page.
+ All console output during the script's execution appears in the experiment's **CONSOLE** page.


@@ -33,7 +33,7 @@ output_model = OutputModel(task=task)

## Label Enumeration
The label enumeration dictionary is logged using the [`Task.connect_label_enumeration`](../../../references/sdk/task.md#connect_label_enumeration)
- method which will update the task’s resulting model information. The current running task is accessed using the
+ method which will update the task's resulting model information. The current running task is accessed using the
[`Task.current_task`](../../../references/sdk/task.md#taskcurrent_task) class method.

```python
@@ -44,7 +44,7 @@ Task.current_task().connect_label_enumeration(enumeration)
```

:::note Directly Setting Model Enumeration
- You can set a model’s label enumeration directly using the [`OutputModel.update_labels`](../../../references/sdk/model_outputmodel.md#update_labels)
+ You can set a model's label enumeration directly using the [`OutputModel.update_labels`](../../../references/sdk/model_outputmodel.md#update_labels)
method
:::
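The two routes this page describes — through the task and directly on the model — both take the same plain name-to-integer dictionary. A minimal sketch (the `attach_labels` helper is hypothetical, and the `__main__` part assumes a reachable ClearML server):

```python
enumeration = {"background": 0, "cat": 1, "dog": 2}

def attach_labels(task, output_model):
    # both calls record the same name -> integer mapping on the task's
    # resulting model: once via the task, once via the model object
    task.connect_label_enumeration(enumeration)
    output_model.update_labels(enumeration)

if __name__ == "__main__":
    from clearml import OutputModel, Task
    task = Task.init(project_name="examples", task_name="label enumeration sketch")
    attach_labels(task, OutputModel(task=task))
```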
@@ -81,20 +81,20 @@ if CONDITION:
```

## WebApp
- The model appears in the task’s **ARTIFACTS** tab.
+ The model appears in the task's **ARTIFACTS** tab.



- Clicking on the model name takes you to the [model’s page](../../../webapp/webapp_model_viewing.md), where you can view the
- model’s details and access the model.
+ Clicking on the model name takes you to the [model's page](../../../webapp/webapp_model_viewing.md), where you can view the
+ model's details and access the model.



- The model’s **NETWORK** tab displays its configuration.
+ The model's **NETWORK** tab displays its configuration.



- The model’s **LABELS** tab displays its label enumeration.
+ The model's **LABELS** tab displays its label enumeration.


@@ -17,7 +17,7 @@ ClearML automatically logs the audio samples which the example reports by callin

### Audio Samples

- You can play the audio samples by double-clicking the audio thumbnail.
+ You can play the audio samples by clicking the audio thumbnail.


@@ -52,12 +52,12 @@ Text printed to the console for training progress, as well as all other console

## Artifacts

- Models created by the experiment appear in the experiment’s **ARTIFACTS** tab. ClearML automatically logs and tracks models
+ Models created by the experiment appear in the experiment's **ARTIFACTS** tab. ClearML automatically logs and tracks models
and any snapshots created using PyTorch.



- Clicking on the model name takes you to the [model’s page](../../../webapp/webapp_model_viewing.md), where you can view
- the model’s details and access the model.
+ Clicking on the model name takes you to the [model's page](../../../webapp/webapp_model_viewing.md), where you can view
+ the model's details and access the model.


@@ -40,12 +40,12 @@ Text printed to the console for training progress, as well as all other console

## Artifacts

- Models created by the experiment appear in the experiment’s **ARTIFACTS** tab. ClearML automatically logs and tracks
+ Models created by the experiment appear in the experiment's **ARTIFACTS** tab. ClearML automatically logs and tracks
models and any snapshots created using PyTorch.



- Clicking on a model's name takes you to the [model’s page](../../../webapp/webapp_model_viewing.md), where you can view
- the model’s details and access the model.
+ Clicking on a model's name takes you to the [model's page](../../../webapp/webapp_model_viewing.md), where you can view
+ the model's details and access the model.


@@ -35,12 +35,12 @@ Text printed to the console for training progress, as well as all other console

## Artifacts

- Models created by the experiment appear in the experiment’s **ARTIFACTS** tab. ClearML automatically logs and tracks
+ Models created by the experiment appear in the experiment's **ARTIFACTS** tab. ClearML automatically logs and tracks
models and any snapshots created using PyTorch.



- Clicking on the model name takes you to the [model’s page](../../../webapp/webapp_model_viewing.md), where you can view
- the model’s details and access the model.
+ Clicking on the model name takes you to the [model's page](../../../webapp/webapp_model_viewing.md), where you can view
+ the model's details and access the model.


@@ -17,7 +17,7 @@ The test loss and validation loss plots appear in the experiment's page in the C
Resource utilization plots, which are titled **:monitor: machine**, also appear in the **SCALARS** tab. All of these
plots are automatically captured by ClearML.




## Hyperparameters
@@ -29,12 +29,12 @@ ClearML automatically logs command line options defined with argparse and Tensor

## Artifacts

- Models created by the experiment appear in the experiment’s **ARTIFACTS** tab.
+ Models created by the experiment appear in the experiment's **ARTIFACTS** tab.



- Clicking on a model name takes you to the [model’s page](../../../webapp/webapp_model_viewing.md), where you can view
- the model’s details and access the model.
+ Clicking on a model name takes you to the [model's page](../../../webapp/webapp_model_viewing.md), where you can view
+ the model's details and access the model.

## Console

@@ -16,12 +16,12 @@ in the ClearML web UI, under **PLOTS**.

## Artifacts

- Models created by the experiment appear in the experiment’s **ARTIFACTS** tab.
+ Models created by the experiment appear in the experiment's **ARTIFACTS** tab.



- Clicking on the model name takes you to the [model’s page](../../../webapp/webapp_model_viewing.md), where you can
- view the model’s details and access the model.
+ Clicking on the model name takes you to the [model's page](../../../webapp/webapp_model_viewing.md), where you can
+ view the model's details and access the model.


@@ -33,13 +33,13 @@ Text printed to the console for training progress, as well as all other console

## Artifacts

- Models created by the experiment appear in the experiment’s **ARTIFACTS** tab. ClearML automatically logs and tracks
+ Models created by the experiment appear in the experiment's **ARTIFACTS** tab. ClearML automatically logs and tracks
models and any snapshots created using PyTorch.



- Clicking on the model’s name takes you to the [model’s page](../../../webapp/webapp_model_viewing.md), where you can
- view the model’s details and access the model.
+ Clicking on the model's name takes you to the [model's page](../../../webapp/webapp_model_viewing.md), where you can
+ view the model's details and access the model.


@@ -30,13 +30,13 @@ All console output appears in **CONSOLE**.

## Artifacts

- Models created by the experiment appear in the experiment’s **ARTIFACTS** tab. ClearML automatically logs and tracks
+ Models created by the experiment appear in the experiment's **ARTIFACTS** tab. ClearML automatically logs and tracks
models and any snapshots created using TensorFlow.



- Clicking on a model’s name takes you to the [model’s page](../../../webapp/webapp_model_viewing.md), where you can
- view the model’s details and access the model.
+ Clicking on a model's name takes you to the [model's page](../../../webapp/webapp_model_viewing.md), where you can
+ view the model's details and access the model.


@@ -29,6 +29,6 @@ To view the model details, click the model name in the **ARTIFACTS** page, which

## Console

- All console output during the script’s execution appears in the experiment’s **CONSOLE** page.
+ All console output during the script's execution appears in the experiment's **CONSOLE** page.


@@ -15,7 +15,7 @@ classification dataset using XGBoost

## Plots

- The feature importance plot and tree plot appear in the project's page in the **ClearML web UI**, under
+ The feature importance plot and tree plot appear in the experiment's page in the **ClearML web UI**, under
**PLOTS**.


@@ -31,12 +31,12 @@ All other console output appear in **CONSOLE**.

## Artifacts

- Models created by the experiment appear in the experiment’s **ARTIFACTS** tab. ClearML automatically logs and tracks
+ Models created by the experiment appear in the experiment's **ARTIFACTS** tab. ClearML automatically logs and tracks
models and any snapshots created using XGBoost.



- Clicking on the model's name takes you to the [model’s page](../../../webapp/webapp_model_viewing.md), where you can
- view the model’s details and access the model.
+ Clicking on the model's name takes you to the [model's page](../../../webapp/webapp_model_viewing.md), where you can
+ view the model's details and access the model.


@@ -50,7 +50,7 @@ The sections below describe in more detail what happens in the controller task a
1. Build the pipeline (see [PipelineController.add_step](../../references/sdk/automation_controller_pipelinecontroller.md#add_step)
   method for complete reference):

-  The pipeline’s [first step](#step-1---downloading-the-datae) uses the pre-existing task
+  The pipeline's [first step](#step-1---downloading-the-datae) uses the pre-existing task
   `pipeline step 1 dataset artifact` in the `examples` project. The step uploads local data and stores it as an artifact.

```python
@@ -62,11 +62,11 @@ The sections below describe in more detail what happens in the controller task a
```

The [second step](#step-2---processing-the-data) uses the pre-existing task `pipeline step 2 process dataset` in
- the `examples` project. The second step’s dependency upon the first step’s completion is designated by setting it as
+ the `examples` project. The second step's dependency upon the first step's completion is designated by setting it as
its parent.

Custom configuration values specific to this step execution are defined through the `parameter_override` parameter,
- where the first step’s artifact is fed into the second step.
+ where the first step's artifact is fed into the second step.

Special pre-execution and post-execution logic is added for this step through the use of `pre_execute_callback`
and `post_execute_callback` respectively.
@@ -87,7 +87,7 @@ The sections below describe in more detail what happens in the controller task a
```

The [third step](#step-3---training-the-network) uses the pre-existing task `pipeline step 3 train model` in the
- `examples` project. The step uses Step 2’s artifacts.
+ `examples` project. The step uses Step 2's artifacts.

1. Run the pipeline.

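The three `add_step` calls described above can be sketched as follows. The `build_pipeline` helper is hypothetical; the step names, `parameter_override` keys, and `${...}` artifact references follow the example tasks named in the text, but treat the exact override keys as assumptions.

```python
def build_pipeline(pipe):
    # step 1: no parent; the base task uploads local data as an artifact
    pipe.add_step(
        name="stage_data",
        base_task_project="examples",
        base_task_name="pipeline step 1 dataset artifact",
    )
    # step 2: runs only after step 1 (its parent), and receives step 1's
    # artifact URL through parameter_override
    pipe.add_step(
        name="stage_process",
        parents=["stage_data"],
        base_task_project="examples",
        base_task_name="pipeline step 2 process dataset",
        parameter_override={"General/dataset_url": "${stage_data.artifacts.dataset.url}"},
    )
    # step 3: consumes step 2's task ID to fetch its artifacts
    pipe.add_step(
        name="stage_train",
        parents=["stage_process"],
        base_task_project="examples",
        base_task_name="pipeline step 3 train model",
        parameter_override={"General/dataset_task_id": "${stage_process.id}"},
    )

if __name__ == "__main__":
    from clearml.automation import PipelineController
    pipe = PipelineController(name="pipeline demo", project="examples", version="0.0.1")
    build_pipeline(pipe)
    pipe.start()  # launches the controller; steps are enqueued as their parents finish
```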
@@ -99,7 +99,7 @@ The sections below describe in more detail what happens in the controller task a

## Step 1 - Downloading the Data

- The pipeline’s first step ([step1_dataset_artifact.py](https://github.com/allegroai/clearml/blob/master/examples/pipeline/step1_dataset_artifact.py))
+ The pipeline's first step ([step1_dataset_artifact.py](https://github.com/allegroai/clearml/blob/master/examples/pipeline/step1_dataset_artifact.py))
does the following:

1. Download data using [`StorageManager.get_local_copy`](../../references/sdk/storage.md#storagemanagerget_local_copy)
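The `StorageManager.get_local_copy` call referenced above can be sketched like this. The `cached_download` helper name and the URL are hypothetical; `get_local_copy` is called as a class method on `StorageManager`, so the helper takes it as a parameter for easy stubbing.

```python
def cached_download(storage, url):
    # get_local_copy downloads the remote object into ClearML's local cache
    # (skipping the download when it is already cached) and returns the
    # local file path
    return storage.get_local_copy(remote_url=url)

if __name__ == "__main__":
    from clearml import StorageManager
    # hypothetical URL; replace with a real dataset location
    print(cached_download(StorageManager, "https://example.com/sample.csv"))
```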
@@ -209,7 +209,7 @@ does the following:

## WebApp

- When the experiment is executed, the terminal returns the task ID, and links to the pipeline controller task page and
+ When the experiment is executed, the console output displays the task ID, and links to the pipeline controller task page and
pipeline page.

```
@@ -218,13 +218,13 @@ ClearML results page: https://app.clear.ml/projects/462f48dba7b441ffb34bddb78371
ClearML pipeline page: https://app.clear.ml/pipelines/462f48dba7b441ffb34bddb783711da7/experiments/bc93610688f242ecbbe70f413ff2cf5f
```

- The pipeline run’s page contains the pipeline’s structure, the execution status of every step, as well as the run’s
+ The pipeline run's page contains the pipeline's structure, the execution status of every step, as well as the run's
configuration parameters and output.



- To view a run’s complete information, click **Full details** on the bottom of the **Run Info** panel, which will open
- the pipeline’s [controller task page](../../webapp/webapp_exp_track_visual.md).
+ To view a run's complete information, click **Full details** on the bottom of the **Run Info** panel, which will open
+ the pipeline's [controller task page](../../webapp/webapp_exp_track_visual.md).

Click a step to see its summary information.

@@ -232,7 +232,7 @@ Click a step to see its summary information.

### Console

- Click **DETAILS** to view a log of the pipeline controller’s console output.
+ Click **DETAILS** to view a log of the pipeline controller's console output.


@@ -77,7 +77,7 @@ To run the pipeline, call the pipeline controller function.

## WebApp

- When the experiment is executed, the terminal returns the task ID, and links to the pipeline controller task page and pipeline page.
+ When the experiment is executed, the console output displays the task ID, and links to the pipeline controller task page and pipeline page.

```
ClearML Task: created new task id=bc93610688f242ecbbe70f413ff2cf5f
@@ -85,13 +85,13 @@ ClearML results page: https://app.clear.ml/projects/462f48dba7b441ffb34bddb78371
ClearML pipeline page: https://app.clear.ml/pipelines/462f48dba7b441ffb34bddb783711da7/experiments/bc93610688f242ecbbe70f413ff2cf5f
```

- The pipeline run’s page contains the pipeline’s structure, the execution status of every step, as well as the run’s
+ The pipeline run's page contains the pipeline's structure, the execution status of every step, as well as the run's
configuration parameters and output.



- To view a run’s complete information, click **Full details** on the bottom of the **Run Info** panel, which will open the
- pipeline’s [controller task page](../../webapp/webapp_exp_track_visual.md).
+ To view a run's complete information, click **Full details** on the bottom of the **Run Info** panel, which will open the
+ pipeline's [controller task page](../../webapp/webapp_exp_track_visual.md).

Click a step to see an overview of its details.

@@ -99,11 +99,11 @@ Click a step to see an overview of its details.

## Console and Code

- Click **DETAILS** to view a log of the pipeline controller’s console output.
+ Click **DETAILS** to view a log of the pipeline controller's console output.



- Click on a step to view its console output. You can also view the selected step’s code by clicking **CODE**
+ Click on a step to view its console output. You can also view the selected step's code by clicking **CODE**
on top of the console log.


@@ -66,7 +66,7 @@ logged as required packages for the pipeline execution step.
)
```

- The second step in the pipeline uses the `step_two` function and uses as its input the first step’s output. This reference
+ The second step in the pipeline uses the `step_two` function and uses as its input the first step's output. This reference
implicitly defines the pipeline structure, making `step_one` the parent step of `step_two`.

Its return object will be stored as an artifact under the name `processed_data`.
@@ -82,7 +82,7 @@ logged as required packages for the pipeline execution step.
)
```

- The third step in the pipeline uses the `step_three` function and uses as its input the second step’s output. This
+ The third step in the pipeline uses the `step_three` function and uses as its input the second step's output. This
reference implicitly defines the pipeline structure, making `step_two` the parent step of `step_three`.

Its return object will be stored as an artifact under the name `model`:
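The two function steps described above can be sketched with `PipelineController.add_function_step`. The `wire_steps` helper and the `data_frame`/`data` keyword names are hypothetical; the `${step_one.data_frame}`-style references and the `processed_data`/`model` artifact names follow the text.

```python
def wire_steps(pipe, step_two, step_three):
    # feeding step_one's return value into step_two implicitly makes
    # step_one the parent of step_two
    pipe.add_function_step(
        name="step_two",
        function=step_two,
        function_kwargs=dict(data_frame="${step_one.data_frame}"),
        function_return=["processed_data"],
    )
    # likewise, step_three depends on step_two through its output artifact
    pipe.add_function_step(
        name="step_three",
        function=step_three,
        function_kwargs=dict(data="${step_two.processed_data}"),
        function_return=["model"],
    )
```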
@@ -106,7 +106,7 @@ logged as required packages for the pipeline execution step.
The pipeline will be launched remotely, through the `services` queue, unless otherwise specified.

## WebApp
- When the experiment is executed, the terminal returns the task ID, and links to the pipeline controller task page and pipeline page.
+ When the experiment is executed, the console output displays the task ID, and links to the pipeline controller task page and pipeline page.

```
ClearML Task: created new task id=bc93610688f242ecbbe70f413ff2cf5f
@@ -114,13 +114,13 @@ ClearML results page: https://app.clear.ml/projects/462f48dba7b441ffb34bddb78371
ClearML pipeline page: https://app.clear.ml/pipelines/462f48dba7b441ffb34bddb783711da7/experiments/bc93610688f242ecbbe70f413ff2cf5f
```

- The pipeline run’s page contains the pipeline’s structure, the execution status of every step, as well as the run’s
+ The pipeline run's page contains the pipeline's structure, the execution status of every step, as well as the run's
configuration parameters and output.



- To view a run’s complete information, click **Full details** on the bottom of the **Run Info** panel, which will open the
- pipeline’s [controller task page](../../webapp/webapp_exp_track_visual.md).
+ To view a run's complete information, click **Full details** on the bottom of the **Run Info** panel, which will open the
+ pipeline's [controller task page](../../webapp/webapp_exp_track_visual.md).

Click a step to see an overview of its details.

@@ -128,11 +128,11 @@ Click a step to see an overview of its details.

## Console and Code

- Click **DETAILS** to view a log of the pipeline controller’s console output.
+ Click **DETAILS** to view a log of the pipeline controller's console output.



- Click on a step to view its console output. You can also view the selected step’s code by clicking **CODE**
+ Click on a step to view its console output. You can also view the selected step's code by clicking **CODE**
on top of the console log.


@@ -52,6 +52,6 @@ ClearML reports these images as debug samples in the **ClearML Web UI**, under t



- Double-click a thumbnail, and the image viewer opens.
+ Click a thumbnail, and the image viewer opens.


@@ -8,7 +8,7 @@ example demonstrates using ClearML to log plots and images generated by Matplotl
## Plots

The Matplotlib and Seaborn plots that are reported using the [Logger.report_matplotlib_figure](../../references/sdk/logger.md#report_matplotlib_figure)
- method appear in the experiment’s **PLOTS**.
+ method appear in the experiment's **PLOTS**.


@@ -17,6 +17,6 @@ method appear in the experiment’s **PLOTS**.
## Debug Samples

Matplotlib figures can be logged as images by using the [Logger.report_matplotlib_figure](../../references/sdk/logger.md#report_matplotlib_figure)
- method, and passing `report_image=True`. The images are stored in the experiment’s **DEBUG SAMPLES**.
+ method, and passing `report_image=True`. The images are stored in the experiment's **DEBUG SAMPLES**.


@@ -38,7 +38,7 @@ Logger.current_logger().report_media(
)
```

- The reported audio can be viewed in the **DEBUG SAMPLES** tab. Double-click a thumbnail, and the audio player opens.
+ The reported audio can be viewed in the **DEBUG SAMPLES** tab. Click a thumbnail, and the audio player opens.


@@ -55,6 +55,6 @@ Logger.current_logger().report_media(
)
```

- The reported video can be viewed in the **DEBUG SAMPLES** tab. Double-click a thumbnail, and the video player opens.
+ The reported video can be viewed in the **DEBUG SAMPLES** tab. Click a thumbnail, and the video player opens.


@@ -25,7 +25,7 @@ output_model = OutputModel(task=task)

## Label Enumeration

- Set the model’s label enumeration using the [`OutputModel.update_labels`](../../references/sdk/model_outputmodel.md#update_labels)
+ Set the model's label enumeration using the [`OutputModel.update_labels`](../../references/sdk/model_outputmodel.md#update_labels)
method.

```python
@@ -43,14 +43,14 @@ output_model.update_weights(register_uri=model_url)
```

## WebApp
- The model appears in the task’s **ARTIFACTS** tab.
+ The model appears in the task's **ARTIFACTS** tab.



- Clicking on the model name takes you to the [model’s page](../../webapp/webapp_model_viewing.md), where you can view the
- model’s details and access the model.
+ Clicking on the model name takes you to the [model's page](../../webapp/webapp_model_viewing.md), where you can view the
+ model's details and access the model.

- The model’s **LABELS** tab displays its label enumeration.
+ The model's **LABELS** tab displays its label enumeration.


@@ -6,13 +6,13 @@ The [using_artifacts_example](https://github.com/allegroai/clearml/blob/master/e
script demonstrates uploading a data file to a task as an artifact and then accessing and utilizing the artifact in a different task.

When the script runs it creates two tasks, `create artifact` and `use artifact from other task`, both of which are associated
- with the `examples` project. The first task creates and uploads the artifact, and the second task accesses the first task’s
+ with the `examples` project. The first task creates and uploads the artifact, and the second task accesses the first task's
artifact and utilizes it.

## Task 1: Uploading an Artifact

The first task uploads a data file as an artifact using the [`Task.upload_artifact`](../../references/sdk/task.md#upload_artifact)
- method, inputting the artifact’s name and the location of the file.
+ method, inputting the artifact's name and the location of the file.

```python
task1.upload_artifact(name='data file', artifact_object='data_samples/sample.json')
@@ -21,7 +21,7 @@ task1.upload_artifact(name='data file', artifact_object='data_samples/sample.jso
The task is then closed, using the [`Task.close`](../../references/sdk/task.md#close) method, so another task can be
initialized in the same script.

- Artifact details (location and size) can be viewed in ClearML’s **web UI > experiment details > ARTIFACTS tab > OTHER section**.
+ Artifact details (location and size) can be viewed in ClearML's **web UI > experiment details > ARTIFACTS tab > OTHER section**.


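The second half of the flow described above — the other task accessing the uploaded artifact — can be sketched like this. The `fetch_artifact_path` helper is hypothetical; the artifact name `data file` and the task name `create artifact` come from the example, and the `__main__` part assumes a reachable ClearML server.

```python
def fetch_artifact_path(source_task, name="data file"):
    # `artifacts` maps artifact names to Artifact objects; get_local_copy()
    # downloads the stored file (using the local cache) and returns its path
    return source_task.artifacts[name].get_local_copy()

if __name__ == "__main__":
    from clearml import Task
    source = Task.get_task(project_name="examples", task_name="create artifact")
    print(fetch_artifact_path(source))
```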
@@ -151,9 +151,9 @@ Make sure a `clearml-agent` is assigned to that queue.
## WebApp
### Configuration

The values configured through the wizard are stored in the task's hyperparameters and configuration objects by using the
[`Task.connect`](../../references/sdk/task.md#connect) and [`Task.set_configuration_object`](../../references/sdk/task.md#set_configuration_object)
methods, respectively. They can be viewed in the WebApp, in the task's **CONFIGURATION** page under **HYPERPARAMETERS** and **CONFIGURATION OBJECTS > General**.

ClearML automatically logs command line arguments defined with argparse. View them in the experiment's **CONFIGURATION**
page under **HYPERPARAMETERS > General**.
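For reference, a minimal sketch of how those two calls store values (the parameter names and configuration text below are invented for illustration, not taken from the autoscaler code):

```python
from clearml import Task

task = Task.init(project_name='examples', task_name='configuration demo')

# connect() logs the dict under CONFIGURATION > HYPERPARAMETERS > General
params = {'instance_type': 'm5.xlarge', 'max_instances': 3}  # hypothetical values
task.connect(params)

# set_configuration_object() stores free-form text under CONFIGURATION OBJECTS
config_text = "budget:\n  max_spin_up_time_min: 30\n"  # hypothetical YAML snippet
task.set_configuration_object(name='General', config_text=config_text)
```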
@@ -161,11 +161,11 @@ page under **HYPERPARAMETERS > General**.

The task can be reused to launch another autoscaler instance: clone the task, then edit its parameters for the instance
types and budget configuration, and enqueue the task for execution (you'll typically want to use a ClearML Agent running
in [services mode](../../clearml_agent.md#services-mode) for such service tasks).

### Console

All other console output appears in the experiment's **CONSOLE**.

@@ -6,7 +6,7 @@ The [cleanup service](https://github.com/allegroai/clearml/blob/master/examples/
demonstrates how to use the `clearml.backend_api.session.client.APIClient` class to implement a service that deletes old
archived tasks and their associated files: model checkpoints, other artifacts, and debug samples.

Modify the cleanup service's parameters to specify which archived experiments to delete and when to delete them.

### Running the Cleanup Service
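In outline, such a cleanup loop looks roughly like the following sketch (the 30-day threshold and the exact query filters are illustrative assumptions; see the linked script for the query it actually uses):

```python
from datetime import datetime, timedelta
from clearml.backend_api.session.client import APIClient

client = APIClient()

# Illustrative filter: archived tasks whose status last changed over 30 days ago
threshold = datetime.utcnow() - timedelta(days=30)
old_tasks = client.tasks.get_all(
    system_tags=['archived'],
    status_changed=['<{}'.format(threshold.strftime('%Y-%m-%d'))],
    only_fields=['id'],
)
for task in old_tasks:
    # force=True also removes the task's associated outputs
    client.tasks.delete(task=task.id, force=True)
```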
@@ -52,14 +52,14 @@ an `APIClient` object that establishes a session with the ClearML Server, and ac
* [`Task.delete`](../../references/sdk/task.md#delete) - Delete a Task.

## Configuration
The experiment's hyperparameters are explicitly logged to ClearML using the [`Task.connect`](../../references/sdk/task.md#connect)
method. View them in the WebApp, in the experiment's **CONFIGURATION** page under **HYPERPARAMETERS > General**.

The task can be reused. Clone the task, edit its parameters, and enqueue the task to run in ClearML Agent [services mode](../../clearml_agent.md#services-mode).

## Console
All console output appears in the experiment's **CONSOLE**.

@@ -79,17 +79,17 @@ The script supports the following additional command line options:

## Configuration

ClearML automatically logs command line options defined with argparse. They appear in the experiment's **CONFIGURATION**
page under **HYPERPARAMETERS > Args**.

The task can be reused to launch another monitor instance: clone the task, edit its parameters, and enqueue the task for
execution (you'll typically want to use a ClearML Agent running in [services mode](../../clearml_agent.md#services-mode)
for such service tasks).

## Console
All console output appears in the experiment's **CONSOLE** page.

## Additional Information about slack_alerts.py
@@ -78,7 +78,7 @@ Upload the session's execution data that the Task captured offline to the ClearM
```

You can also use the offline task to update the execution of an existing, previously executed task by providing the
previously executed task's ID. To avoid overwriting metrics, you can specify the initial iteration offset with
`iteration_offset`.
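A hedged sketch of what that import call can look like (the session path, task ID, and offset below are placeholders, not values from the example):

```python
from clearml import Task

# Placeholders: substitute your own offline session archive and task ID
Task.import_offline_session(
    session_folder_zip='path/to/offline/session.zip',
    previous_task_id='<task_id>',
    iteration_offset=500,  # start logging metrics from iteration 500
)
```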
@@ -50,7 +50,7 @@ Use [`Logger.report_matplotlib_figure()`](../references/sdk/logger.md#report_mat
a matplotlib figure, and specify its title, series name, and iteration number:

```python
logger = task.get_logger()

area = (40 * np.random.rand(N))**2
```
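Pieced together, a minimal end-to-end sketch might look like this (the scatter data is invented for illustration, and `Task.init` assumes a configured ClearML server):

```python
import numpy as np
import matplotlib.pyplot as plt
from clearml import Task

task = Task.init(project_name='examples', task_name='matplotlib report demo')
logger = task.get_logger()

# Invented scatter data for the illustration
N = 50
x, y = np.random.rand(N), np.random.rand(N)
area = (40 * np.random.rand(N))**2
plt.scatter(x, y, s=area, alpha=0.5)

# Report the current matplotlib figure under the given title/series at iteration 0
logger.report_matplotlib_figure(
    title='Scatter Example', series='Random Points', iteration=0, figure=plt,
)
```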
@@ -6,7 +6,7 @@ title: Project Dashboard
The ClearML Project Dashboard App is available under the ClearML Pro plan
:::

The Project Dashboard Application provides an overview of a project or workspace's progress. It presents an aggregated
view of task status and a chosen metric over time, as well as project GPU and worker usage. It also supports alerts/warnings
on completed/failed Tasks via Slack integration.
@@ -15,7 +15,7 @@ on completed/failed Tasks via Slack integration.
values from the file, which can be modified before launching the app instance
* **Dashboard Title** - Name of the project dashboard instance, which will appear in the instance list
* **Monitoring** - Select what the app instance should monitor. The options are:
  * Project - Monitor a specific project. You can select an option to also monitor the specified project's subprojects
  * Entire workspace - Monitor all projects in your workspace

:::caution
@@ -34,7 +34,7 @@ of the chosen metric over time.
  * Alert Iteration Threshold - Minimum number of task iterations to trigger Slack alerts (tasks that fail prior to the threshold will be ignored)
* **Additional options**
  * Track manual (non agent-run) experiments as well - Select to include in the dashboard experiments that were not executed by an agent
  * Alert on completed experiments - Select to include completed tasks in alerts: in the dashboard's Task Alerts section and in Slack Alerts.
* **Export Configuration** - Export the app instance configuration as a JSON file, which you can later import to create
  a new instance with the same configuration.
@@ -48,7 +48,7 @@ Once a project dashboard instance is launched, its dashboard displays the follow
* Experiments Summary - Number of tasks by status over time
* Monitoring - GPU utilization and GPU memory usage
* Metric Monitoring - An aggregated view of the values of a metric over time
* Project's Active Workers - Number of workers currently executing experiments in the monitored project
* Workers Table - List of active workers
* Task Alerts
  * Failed tasks - Failed experiments and their time of failure summary
@@ -6,7 +6,7 @@ title: Overview
ClearML Applications are available under the ClearML Pro plan
:::

Use ClearML's GUI Applications to manage ML workloads and automatically run your recurring workflows without any coding.

@@ -18,23 +18,23 @@ ClearML provides the following applications:
* [**AWS Autoscaler**](apps_aws_autoscaler.md) - Optimize AWS EC2 instance usage according to a defined instance budget
* [**GCP Autoscaler**](apps_gcp_autoscaler.md) - Optimize GCP instance usage according to a defined instance budget
* [**Hyperparameter Optimization**](apps_hpo.md) - Find the parameter values that yield the best performing models
* **Nvidia Clara** - Train models using Nvidia's Clara framework
* [**Project Dashboard**](apps_dashboard.md) - High-level project monitoring with Slack alerts
* [**Task Scheduler**](apps_task_scheduler.md) - Schedule tasks for one-shot and/or periodic execution at specified times (available under ClearML Enterprise Plan)
* [**Trigger Manager**](apps_trigger_manager.md) - Define tasks to be run when predefined events occur (available under ClearML Enterprise Plan)

## App Pages Layout
Each application's page is split into two sections:
* App Instance List - Launch new app instances and view previously launched instances. Click on an instance to view its
  dashboard. Hover over it to access the [app instance actions](#app-instance-actions).
* App Instance Dashboard - The main section of the app page: displays the selected app instance's status and results.

## Launching an App Instance

1. Choose the desired app
1. Click the `Launch New` button <img src="/docs/latest/icons/ico-add.svg" alt="Add new" className="icon size-md space-sm" /> to open the app's configuration wizard
1. Fill in the configuration details
1. **Launch**
@@ -18,7 +18,7 @@ top-level projects are displayed. Click on a project card to view the project's
Click on a dataset card to navigate to its [Version List](webapp_dataset_viewing.md), where you can view the
dataset versions' lineage and contents.

Filter the datasets to find the one you're looking for more easily. These filters can be applied by clicking <img src="/docs/latest/icons/ico-filter-off.svg" alt="Filter" className="icon size-md" />:
* My Work - Show only datasets that you created
* Tags - Choose which tags to filter by from a list of tags used in the datasets.
  * Filter by multiple tag values using the **ANY** or **ALL** options, which correspond to the logical "AND" and "OR"
@@ -29,7 +29,7 @@ Filter the datasets to find the one you're looking for more easily. These filt

## Project Cards

In Project view, project cards display a project's summarized dataset information:

<div class="max-w-50">
@@ -74,7 +74,7 @@ of a dataset card to open its context menu and access dataset actions.

</div>

* **Rename** - Change the dataset's name
* **Add Tag** - Add a label to the dataset to help easily classify groups of datasets.
* **Delete** - Delete the dataset and all of its versions. To delete a dataset, all its versions must first be
  archived.
@@ -24,7 +24,7 @@ Each node in the graph represents a dataset version, and shows the following det
* Version size
* Version update time
* Version details button - Hover over the version and click <img src="/docs/latest/icons/ico-console.svg" alt="console" className="icon size-md space-sm" />
  to view the version's [details panel](#version-details-panel)

:::tip archiving versions
You can archive dataset versions so the versions list doesn't get too cluttered. Click **OPEN ARCHIVE** on the top of
@@ -65,7 +65,7 @@ On the right side of the dataset version panel, view the **VERSION INFO** which 

</div>

To view a version's detailed information, click **Full details**, which will open the dataset version's [task page](../webapp_exp_track_visual.md).

@@ -84,7 +84,7 @@ to view the version's details panel. The panel includes three tabs:

* **CONSOLE** - The dataset version's console output

@@ -11,9 +11,9 @@ view, all pipelines are shown side-by-side. In Project view, pipelines are organ
top-level projects are displayed. Click on a project card to view the project's pipelines.

Click on a pipeline card to navigate to its [Pipeline Runs Table](webapp_pipeline_table.md), where you can view the
pipeline structure, configuration, and outputs of all the pipeline's runs, as well as create new runs.

Filter the pipelines to find the one you're looking for more easily. These filters can be applied by clicking <img src="/docs/latest/icons/ico-filter-off.svg" alt="Filter" className="icon size-md" />:
* My Work - Show only pipelines that you created
* Tags - Choose which tags to filter by from a list of tags used in the pipelines.
  * Filter by multiple tag values using the **ANY** or **ALL** options, which correspond to the logical "AND" and "OR"
@@ -46,7 +46,7 @@ In List view, the pipeline cards display summarized pipeline information:
</div>

* Pipeline name
* Time since the pipeline's most recent run
* Run summary - Number of *Running*/*Pending*/*Completed*/*Failed* runs
* Tags

@@ -62,7 +62,7 @@ of a pipeline card to open its context menu and access pipeline actions.

</div>

* **Rename** - Change the pipeline's name
* **Add Tag** - Add a label to the pipeline to help easily classify groups of pipelines.
* **Delete** - Delete the pipeline: delete all its runs and any models/artifacts produced (a list of remaining artifacts
  is returned). To delete a pipeline, all its runs must first be archived.
@@ -2,14 +2,14 @@
title: The Pipeline Runs Table
---

The pipeline runs table is a [customizable](#customizing-the-runs-table) list of the pipeline's runs. Use it to
view a run's details, and manage runs (create, continue, or abort). The runs table's auto-refresh allows users
to continually monitor run progress.

View the runs table in table view <img src="/docs/latest/icons/ico-table-view.svg" alt="Table view" className="icon size-md space-sm" />
or in details view <img src="/docs/latest/icons/ico-split-view.svg" alt="Details view" className="icon size-md space-sm" />,
using the buttons on the top left of the page. Use the table view for a comparative view of your runs according to
columns of interest. Use the details view to access a selected run's details, while keeping the pipeline runs list in view.
You can also open a run's details view by double-clicking it in the table view.

You can archive pipeline runs so the runs table doesn't get too cluttered. Click **OPEN ARCHIVE** on the top of the
@@ -32,7 +32,7 @@ The models table contains the following columns:
| Column | Description | Type |
|---|---|---|
| **RUN** | Pipeline run identifier | String |
| **VERSION** | The pipeline version number. Corresponds to the [PipelineController](../../references/sdk/automation_controller_pipelinecontroller.md#class-pipelinecontroller)'s and [PipelineDecorator](../../references/sdk/automation_controller_pipelinecontroller.md#class-automationcontrollerpipelinedecorator)'s `version` parameter | Version string |
| **TAGS** | Descriptive, user-defined, color-coded tags assigned to run. | Tag |
| **STATUS** | Pipeline run's status. See a list of the [task states and state transitions](../../fundamentals/task.md#task-states). For Running, Failed, and Aborted runs, you will also see a progress indicator next to the status. See [here](../../pipelines/pipelines.md#tracking-pipeline-progress). | String |
| **USER** | User who created the run. | String |
@@ -2,7 +2,7 @@
title: Pipeline Run Details
---

The run details panel shows the pipeline's structure and the execution status of every step, as well as the run's
configuration parameters and output.

@@ -15,7 +15,7 @@ Each step shows:
* Step status
* Step execution time
* Step log button - Hover over the step and click <img src="/docs/latest/icons/ico-console.svg" alt="console" className="icon size-md space-sm" />
  to view the step's [details panel](#run-and-step-details-panel)

While the pipeline is running, the steps' details and colors are updated.