Small edits (#689)

This commit is contained in:
pollfly 2023-10-09 15:48:19 +03:00 committed by GitHub
parent 5dad105950
commit 3a4b10e43b
No known key found for this signature in database
GPG Key ID: 4AEE18F83AFDEB23
31 changed files with 95 additions and 97 deletions

View File

@ -356,7 +356,7 @@ ClearML Agent supports working with one of the following package managers:
* [`conda`](https://docs.conda.io/en/latest/)
* [`poetry`](https://python-poetry.org/)
-To change the package manager used by the agent, edit the [`package_manager.type`](configs/clearml_conf.md#agentpackagemanager)
+To change the package manager used by the agent, edit the [`package_manager.type`](configs/clearml_conf.md#agentpackage_manager)
field in the `clearml.conf`. If extra channels are needed for `conda`, add the missing channels in the
`package_manager.conda_channels` field in the `clearml.conf`.
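As a sketch, the relevant `clearml.conf` fragment could look like the following (the channel list is illustrative; check the agent configuration reference for the full set of `package_manager` options):

```
agent {
    package_manager {
        # one of: pip, conda, poetry
        type: conda,

        # extra conda channels, used only when type is conda
        conda_channels: ["defaults", "conda-forge", "pytorch"]
    }
}
```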

View File

@ -34,7 +34,7 @@ most recent dataset in a project. The same is true with tags; if a tag is specif
In cases where you use a dataset in a task (e.g. consuming a dataset), you can easily track which dataset the task is
using by using `Dataset.get`'s `alias` parameter. Pass `alias=<dataset_alias_string>`, and the task using the dataset
-will store the datasets ID in the `dataset_alias_string` parameter under the task's **CONFIGURATION > HYPERPARAMETERS >
+will store the dataset's ID in the `dataset_alias_string` parameter under the task's **CONFIGURATION > HYPERPARAMETERS >
Datasets** section.

View File

@ -20,7 +20,7 @@ ClearML Data Management solves two important challenges:
Moreover, it can be difficult and inefficient to find on a git tree the commit associated with a certain version of a dataset.
Use ClearML Data to create, manage, and version your datasets. Store your files in any storage location of your choice
-(S3 / GS / Azure / Network Storage) by setting the datasets upload destination (see [`--storage`](clearml_data_cli.md#upload)
+(S3 / GS / Azure / Network Storage) by setting the dataset's upload destination (see [`--storage`](clearml_data_cli.md#upload)
CLI option or [`output_url`](clearml_data_sdk.md#uploading-files) parameter).
Datasets can be set up to inherit from other datasets, so data lineages can be created, and users can track when and how

View File

@ -8,7 +8,7 @@ See [Hyper-Datasets](../hyperdatasets/overview.md) for ClearML's advanced querya
:::
Datasets can be created, modified, and managed with ClearML Data's python interface. You can upload your dataset to any
-storage service of your choice (S3 / GS / Azure / Network Storage) by setting the datasets upload destination (see
+storage service of your choice (S3 / GS / Azure / Network Storage) by setting the dataset's upload destination (see
[`output_url`](#uploading-files) parameter of `Dataset.upload()`). Once you have uploaded your dataset, you can access
it from any machine.
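A minimal sketch of this flow (the project, dataset name, and S3 bucket are placeholders; running it requires the `clearml` package and a configured ClearML server):

```python
from clearml import Dataset

# Create a new dataset version and attach local files (names are illustrative)
dataset = Dataset.create(dataset_name="example_dataset", dataset_project="example_project")
dataset.add_files(path="data/")

# Upload the files to the chosen storage destination, then close the version
dataset.upload(output_url="s3://my-bucket/datasets")  # placeholder bucket
dataset.finalize()
```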

View File

@ -97,8 +97,8 @@ trainset = datasets.CIFAR10(
)
```
-In cases like this, where you use a dataset in a task, you can have the dataset's ID stored in the tasks
-hyperparameters. Passing `alias=<dataset_alias_string>` stores the datasets ID in the
+In cases like this, where you use a dataset in a task, you can have the dataset's ID stored in the task's
+hyperparameters. Passing `alias=<dataset_alias_string>` stores the dataset's ID in the
`dataset_alias_string` parameter in the experiment's **CONFIGURATION > HYPERPARAMETERS > Datasets** section. This way
you can easily track which dataset the task is using.
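As a sketch (placeholder names; requires the `clearml` package and a configured server), retrieving a dataset with an alias could look like:

```python
from clearml import Dataset, Task

task = Task.init(project_name="example_project", task_name="example_task")

# Passing `alias` records the resolved dataset ID under
# CONFIGURATION > HYPERPARAMETERS > Datasets on this task
dataset = Dataset.get(
    dataset_project="example_project",
    dataset_name="example_dataset",
    alias="raw_data",
)
local_path = dataset.get_local_copy()
```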

View File

@ -118,7 +118,7 @@ You'll need to input the Dataset ID you received when you created the dataset above
```bash
clearml-data add --files new_data.txt
```
-Which should return this output:
+The console should display this output:
```console
clearml-data - Dataset Management & Versioning CLI

View File

@ -3,7 +3,7 @@ title: APIClient
---
The `APIClient` class provides a Pythonic interface to access ClearML's backend REST API. It is a convenient low-level access tool.
-Through an `APIClient` instance, you can access ClearMLs REST API services:
+Through an `APIClient` instance, you can access ClearML's REST API services:
* [authentication](../references/api/login.md) - Authentication management, authorization and administration for the entire system
* [debug](../references/api/debug.md) - Debugging utilities
* [projects](../references/api/projects.md) - Support for defining Projects containing tasks, models, datasets, and/or pipelines
@ -15,7 +15,7 @@ Through an `APIClient` instance, you can access ClearMLs REST API services:
## Using APIClient
-`APIClient` makes the ClearML Servers REST API endpoints available as Python methods.
+`APIClient` makes the ClearML Server's REST API endpoints available as Python methods.
To use `APIClient`, create an instance of it then call the method corresponding to the desired REST API endpoint, with
its respective parameters as described in the [REST API reference page](../references/api/index.md).
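For example, a sketch of querying the `tasks` service (argument values are illustrative; requires the `clearml` package and a configured server):

```python
from clearml.backend_api.session.client import APIClient

client = APIClient()

# Call an endpoint as a Python method; arguments mirror the REST API reference
tasks = client.tasks.get_all(
    status=["completed"],
    page_size=10,
)
```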

View File

@ -10,7 +10,7 @@ and other workflows.
For installation instructions, see [Getting Started](../getting_started/ds/ds_first_steps.md#install-clearml).
:::
-The ClearML Python Package collects the scripts entire execution information, including:
+The ClearML Python Package collects the scripts' entire execution information, including:
* Git repository (branch, commit ID, and uncommitted changes)
* Working directory and entry point
* Hyperparameters

View File

@ -40,7 +40,7 @@ output_model.update_labels({'background': 0, 'label': 255})
```
### Updating Models
-ClearML doesnt automatically log the snapshots of manually logged models. To update an experiments model use the
+ClearML doesn't automatically log the snapshots of manually logged models. To update an experiment's model use the
[OutputModel.update_weights](../references/sdk/model_outputmodel.md#update_weights) method.
```python
@ -106,7 +106,7 @@ task.connect(input_model)
Retrieve a list of model objects by querying the system by model names, projects, tags, and more, using the
[`Model.query_models`](../references/sdk/model_model.md#modelquery_models) and/or
the [`InputModel.query_models`](../references/sdk/model_inputmodel.md#inputmodelquery_models) class methods. These
-methods return a list of model objects that match the queries. The list is ordered according to the models last update
+methods return a list of model objects that match the queries. The list is ordered according to the models' last update
time.
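A sketch of such a query (project and tag values are placeholders; requires the `clearml` package and a configured server):

```python
from clearml import Model

# Returns a list of model objects matching the query,
# ordered by the models' last update time
models = Model.query_models(
    project_name="example_project",
    tags=["production"],
    only_published=True,
)
for model in models:
    print(model.id, model.name)
```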
```python

View File

@ -14,7 +14,7 @@ but can be overridden by command-line arguments.
### General
|Name| Description |
|---|--------------------------------------------------------------------------------|
-|**CLEARML_LOG_ENVIRONMENT** | List of Environment variable names. These environment variables will be logged in the ClearML tasks configuration hyperparameters `Environment` section. When executed by a ClearML agent, these values will be set in the tasks execution environment. |
+|**CLEARML_LOG_ENVIRONMENT** | List of Environment variable names. These environment variables will be logged in the ClearML task's configuration hyperparameters `Environment` section. When executed by a ClearML agent, these values will be set in the task's execution environment. |
|**CLEARML_TASK_NO_REUSE** | Boolean. <br/> When set to `true`, a new task is created for every execution (see Task [reuse](../clearml_sdk/task_sdk.md#task-reuse)). |
|**CLEARML_CACHE_DIR** | Set the path for the ClearML cache directory, where ClearML stores all downloaded content. |
|**CLEARML_DOCKER_IMAGE** | Sets the default docker image to use when running an agent in [Docker mode](../clearml_agent.md#docker-mode). |

View File

@ -9,7 +9,7 @@ In order to log a script in multiple tasks, each task needs to be initialized us
method with the `task_name` and `project_name` parameters input. Before initializing an additional task in the same script, the
previous task must be manually shut down with the [`close`](../../references/sdk/task.md#close) method.
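A sketch of this pattern (placeholder project and task names; requires the `clearml` package and a configured server):

```python
from clearml import Task

# First task: initialize, do some work, then close it explicitly
task1 = Task.init(project_name="example_project", task_name="step_1")
# ... first stage of the script ...
task1.close()

# Second task in the same script, created only after the first is closed
task2 = Task.init(project_name="example_project", task_name="step_2")
# ... second stage ...
task2.close()
```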
-When the script is executed, it should return something like this:
+When the script is executed, the console should display the following output:
```text
ClearML Task: created new task id=5c4d2d3674a94e35b10f04d9d2180l62

View File

@ -22,7 +22,7 @@ task.connect(dataview)
### Accessing a Task's Dataviews
-Use the `Task.get_dataviews` method to access the Dataviews that are connected to a Task.
+Use `Task.get_dataviews()` to access the Dataviews that are connected to a Task.
```python
task.get_dataviews()

View File

@ -14,8 +14,8 @@ can be executed locally, or on any machine using the [clearml-agent](../clearml_
![Pipeline UI](../img/pipelines_DAG.png)
-The [Pipeline Run](../webapp/pipelines/webapp_pipeline_viewing.md) page in the web UI displays the pipelines structure
-in terms of executed steps and their status, as well as the runs configuration parameters and output. See [pipeline UI](../webapp/pipelines/webapp_pipeline_page.md)
+The [Pipeline Run](../webapp/pipelines/webapp_pipeline_viewing.md) page in the web UI displays the pipeline's structure
+in terms of executed steps and their status, as well as the run's configuration parameters and output. See [pipeline UI](../webapp/pipelines/webapp_pipeline_page.md)
for more details.
ClearML pipelines are created from code using one of the following:
@ -47,7 +47,7 @@ the pipeline via the ClearML Web UI. See [Pipeline Runs](#pipeline-runs).
## Pipeline Features
### Artifacts and Metrics
Each pipeline step can log additional artifacts and metrics on the step task with the usual flows (TB, Matplotlib, or with
-[ClearML Logger](../fundamentals/logger.md)). To get the instance of the steps Task during runtime, use the class method
+[ClearML Logger](../fundamentals/logger.md)). To get the instance of the step's Task during runtime, use the class method
[Task.current_task](../references/sdk/task.md#taskcurrent_task).
Additionally, pipeline steps can directly report metrics or upload artifacts / models to the pipeline using these
@ -70,7 +70,7 @@ section)
By default, pipeline steps are not cached. Enable caching when creating a pipeline step (for example, see [@PipelineDecorator.component](pipelines_sdk_function_decorators.md#pipelinedecoratorcomponent)).
-When a step is cached, the step code is hashed, alongside the steps parameters (as passed in runtime), into a single
+When a step is cached, the step code is hashed, alongside the step's parameters (as passed in runtime), into a single
representing hash string. The pipeline first checks if a cached step exists in the system (archived Tasks will not be used
as a cached instance). If the pipeline finds an existing fully executed instance of the step, it will plug its output directly,
allowing the pipeline logic to reuse the step outputs.
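The caching decision can be pictured as deriving one hash from the step's code and its runtime parameter values. This toy sketch (not ClearML's actual implementation) shows why either a code change or a parameter change yields a different cache key:

```python
import hashlib
import json

def step_cache_key(step_code: str, parameters: dict) -> str:
    """Combine step code and runtime parameters into a single representing hash."""
    payload = step_code + json.dumps(parameters, sort_keys=True)
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()

code = "def process(x): return x * 2"
key_a = step_cache_key(code, {"x": 1})
key_b = step_cache_key(code, {"x": 1})  # same code + params -> same key, cache hit
key_c = step_cache_key(code, {"x": 2})  # changed params -> different key, cache miss
```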
@ -87,13 +87,13 @@ configuration, installed packages, uncommitted changes etc.).
You can rerun the pipeline via the [ClearML Web UI](../webapp/pipelines/webapp_pipeline_table.md). To launch a new run
for a pipeline, click **+ NEW RUN** on the top left of the pipeline runs page. This opens a **NEW RUN** modal, where you
-can set the runs parameters and execution queue.
+can set the run's parameters and execution queue.
![Pipeline params UI](../img/pipelines_new_run.png)
The new pipeline run will be executed through the execution queue by a ClearML agent. The agent will rebuild
the pipeline according to the configuration and DAG that was captured in the original run, and override the original
-parameters value with those input in the **NEW RUN** modal.
+parameters' value with those input in the **NEW RUN** modal.
One exception is for pipelines [created from functions](pipelines_sdk_tasks.md#steps-from-functions) (adding steps to a
pipeline controller using [`PipelineController.add_function_step()`](../references/sdk/automation_controller_pipelinecontroller.md#add_function_step)):
@ -107,14 +107,14 @@ lets you modify the pipeline configuration via the UI, without changing the orig
### Pipeline Versions
Each pipeline must be assigned a version number to help track the evolution of your pipeline structure and parameters.
-If you pass `auto_version_bump=True` when instantiating a PipelineController, the pipelines version automatically bumps up
+If you pass `auto_version_bump=True` when instantiating a PipelineController, the pipeline's version automatically bumps up
if there is a change in the pipeline code. If there is no change, the pipeline retains its version number.
### Tracking Pipeline Progress
-ClearML automatically tracks a pipelines progress percentage: the number of pipeline steps completed out of the total
+ClearML automatically tracks a pipeline's progress percentage: the number of pipeline steps completed out of the total
number of steps. For example, if a pipeline consists of 4 steps, after the first step completes, ClearML automatically
sets its progress value to 25. Once a pipeline has started to run but is yet to successfully finish, the WebApp will
-show the pipelines progress indication in the pipeline runs table, next to the runs status.
+show the pipeline's progress indication in the pipeline runs table, next to the run's status.
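The progress value is simple arithmetic; a sketch of the bookkeeping ClearML performs automatically:

```python
def pipeline_progress(completed_steps: int, total_steps: int) -> int:
    """Progress percentage: completed steps out of total steps."""
    return int(100 * completed_steps / total_steps)

# A 4-step pipeline after its first step completes
progress = pipeline_progress(1, 4)  # 25
```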
## Examples

View File

@ -62,7 +62,7 @@ def main(pickle_url, mock_parameter='mock'):
`services` queue. To run the pipeline logic locally while the components are executed remotely, pass
`pipeline_execution_queue=None`
-When the function is called, a corresponding ClearML Controller Task is created: its arguments are logged as the tasks
+When the function is called, a corresponding ClearML Controller Task is created: its arguments are logged as the task's
parameters. When launching a new pipeline run from the [UI](../webapp/pipelines/webapp_pipeline_page.md), you can modify their values for the new run.
![Pipeline new run](../img/pipelines_new_run.png)
@ -94,7 +94,7 @@ def step_one(pickle_data_url: str, extra: int = 43):
```
### Arguments
-* `return_values` - The artifact names for the steps corresponding ClearML task to store the steps returned objects.
+* `return_values` - The artifact names for the step's corresponding ClearML task to store the step's returned objects.
In the example above, a single object is returned and stored as an artifact named `data_frame`
* `name` (optional) - The name for the pipeline step. If not provided, the function name is used
* `cache` - If `True`, the pipeline controller checks if a step with the same code (including setup, see task [Execution](../webapp/webapp_exp_track_visual.md#execution)
@ -139,7 +139,7 @@ def step_one(pickle_data_url: str, extra: int = 43):
* Callbacks - Control pipeline execution flow with callback functions
* `pre_execute_callback` and `post_execute_callback` - Control pipeline flow with callback functions that can be called
-before and/or after a steps execution. See [here](pipelines_sdk_tasks.md#pre_execute_callback-and-post_execute_callback).
+before and/or after a step's execution. See [here](pipelines_sdk_tasks.md#pre_execute_callback-and-post_execute_callback).
* `status_change_callback` - Callback function called when the status of a step changes. Use `node.job` to access the
`ClearmlJob` object, or `node.job.task` to directly access the Task object. The signature of the function must look like this:
```python
@ -151,7 +151,7 @@ def step_one(pickle_data_url: str, extra: int = 43):
pass
```
-Additionally, you can enable automatic logging of a steps metrics / artifacts / models to the pipeline task using the
+Additionally, you can enable automatic logging of a step's metrics / artifacts / models to the pipeline task using the
following arguments:
* `monitor_metrics` (optional) - Automatically log the step's reported metrics also on the pipeline Task. The expected
format is one of the following:

View File

@ -33,10 +33,10 @@ pipe.add_parameter(
```
* `name` - Parameter name
-* `default` - Parameters default value (this value can later be changed in the UI)
+* `default` - Parameter's default value (this value can later be changed in the UI)
* `description` - String description of the parameter and its usage in the pipeline
-These parameters can be programmatically injected into a steps configuration using the following format: `"${pipeline.<parameter_name>}"`.
+These parameters can be programmatically injected into a step's configuration using the following format: `"${pipeline.<parameter_name>}"`.
When launching a new pipeline run from the [UI](../webapp/pipelines/webapp_pipeline_table.md), you can modify their
values for the new run.
@ -56,7 +56,7 @@ config_file = pipe.connect_configuration(configuration=config_file_path, name="M
my_params = json.load(open(config_file,'rt'))
```
-You can view the configuration in the pipelines task page's **CONFIGURATION** tab, in the section specified in the
+You can view the configuration in the pipeline's task page's **CONFIGURATION** tab, in the section specified in the
`name` parameter.
@ -67,11 +67,10 @@ to the specified structure.
### Steps from Tasks
Creating a pipeline step from an existing ClearML task means that when the step is run, the task will be cloned, and a
-new task will be launched through the configured execution queue (the original task is unmodified). The new tasks
+new task will be launched through the configured execution queue (the original task is unmodified). The new task's
parameters can be [specified](#parameter_override).
-Task steps are added using the [`PipelineController.add_step`](../references/sdk/automation_controller_pipelinecontroller.md#add_step)
-method:
+Task steps are added using [`PipelineController.add_step()`](../references/sdk/automation_controller_pipelinecontroller.md#add_step):
```python
pipe.add_step(
@ -103,8 +102,8 @@ pipe.add_step(
See [add_step](../references/sdk/automation_controller_pipelinecontroller.md#add_step) for all arguments.
#### parameter_override
-Use the `parameter_override` argument to modify the steps parameter values. The `parameter_override` dictionary key is
-the task parameters full path, which includes the parameter section's name and the parameter name separated by a slash
+Use the `parameter_override` argument to modify the step's parameter values. The `parameter_override` dictionary key is
+the task parameter's full path, which includes the parameter section's name and the parameter name separated by a slash
(e.g. `'General/dataset_url'`). Passing `"${}"` in the argument value lets you reference input/output configurations
from other pipeline steps. For example: `"${<step_name>.id}"` will be converted to the Task ID of the referenced pipeline
step.
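As an illustration (step and parameter names are placeholders), a `parameter_override` dictionary following this key format could look like:

```python
# Keys are the full parameter path: "<section>/<parameter>"
# "${...}" values reference pipeline parameters or other steps' outputs
parameter_override = {
    "General/dataset_url": "${stage_data.id}",       # Task ID of a previous step
    "General/batch_size": "${pipeline.batch_size}",  # a pipeline-level parameter
    "Args/epochs": 10,                               # a literal value
}
```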
@ -116,7 +115,7 @@ Examples:
* Pipeline parameters (see adding pipeline parameters): `'${pipeline.<pipeline_parameter>}'`
#### configuration_overrides
-You can override a steps configuration object by passing either a string representation of the content of the configuration
+You can override a step's configuration object by passing either a string representation of the content of the configuration
object, or a configuration dictionary.
Examples:
@ -132,8 +131,7 @@ As each function is transformed into an independently executed step, it needs to
all package imports inside the function are automatically logged as required packages for the pipeline step.
:::
-Function steps are added using the [`PipelineController.add_function_step`](../references/sdk/automation_controller_pipelinecontroller.md#add_function_step)
-method:
+Function steps are added using [`PipelineController.add_function_step()`](../references/sdk/automation_controller_pipelinecontroller.md#add_function_step):
```python
pipe.add_function_step(
@ -154,11 +152,11 @@ pipe.add_function_step(
)
```
-* `name` - The pipeline steps name. This name can be referenced in subsequent steps
+* `name` - The pipeline step's name. This name can be referenced in subsequent steps
* `function` - A global function to be used as a pipeline step, which will be converted into a standalone task
* `function_kwargs` (optional) - A dictionary of function arguments and default values which are translated into task
hyperparameters. If not provided, all function arguments are translated into hyperparameters.
-* `function_return` - The names for storing the pipeline steps returned objects as artifacts in its ClearML task.
+* `function_return` - The names for storing the pipeline step's returned objects as artifacts in its ClearML task.
* `cache_executed_step` - If `True`, the controller will check if an identical task with the same code
(including setup, see task [Execution](../webapp/webapp_exp_track_visual.md#execution)
section) and input arguments was already executed. If found, the cached step's
@ -166,7 +164,7 @@ pipe.add_function_step(
* `parents` Optional list of parent steps in the pipeline. The current step in the pipeline will be sent for execution
only after all the parent steps have been executed successfully.
* `pre_execute_callback` and `post_execute_callback` - Control pipeline flow with callback functions that can be called
-before and/or after a steps execution. See [here](#pre_execute_callback-and-post_execute_callback).
+before and/or after a step's execution. See [here](#pre_execute_callback-and-post_execute_callback).
* `monitor_models`, `monitor_metrics`, `monitor_artifacts` - see [here](#models-artifacts-and-metrics).
See [add_function_step](../references/sdk/automation_controller_pipelinecontroller.md#add_function_step) for all
@ -195,7 +193,7 @@ def step_created_callback(
pass
```
-A `post_execute_callback` function is called when a step is completed. It lets you modify the steps status after completion.
+A `post_execute_callback` function is called when a step is completed. It lets you modify the step's status after completion.
```python
def step_completed_callback(
@ -207,7 +205,7 @@ def step_completed_callback(
#### Models, Artifacts, and Metrics
-You can enable automatic logging of a steps metrics /artifacts / models to the pipeline task using the following arguments:
+You can enable automatic logging of a step's metrics /artifacts / models to the pipeline task using the following arguments:
* `monitor_metrics` (optional) - Automatically log the step's reported metrics also on the pipeline Task. The expected
format is one of the following:

View File

@ -35,7 +35,7 @@ For more information about how autoscalers work, see [Autoscalers Overview](../.
* Git Password / Personal Access Token
* **Max Idle Time** (optional) - Maximum time in minutes that an EC2 instance can be idle before the autoscaler spins it
down
-* **Workers Prefix** (optional) - A Prefix added to workers names, associating them with this autoscaler
+* **Workers Prefix** (optional) - A Prefix added to workers' names, associating them with this autoscaler
* **Polling Interval** (optional) - Time period in minutes at which the designated queue is polled for new tasks
* **Base Docker Image** (optional) - Default Docker image in which the ClearML Agent will run. Provide a Docker stored
in a Docker artifactory so instances can automatically fetch it
@ -106,22 +106,22 @@ The autoscaler dashboard shows:
* Queues and the resource type associated with them
* Number of current running instances
* Console: the application log containing everything printed to stdout and stderr appears in the console log. The log
-shows polling results of the autoscalers associated queues, including the number of tasks enqueued, and updates EC2
+shows polling results of the autoscaler's associated queues, including the number of tasks enqueued, and updates EC2
instances being spun up/down.
:::tip Console Debugging
-To make the autoscaler console log show additional debug information, change an active app instances log level to DEBUG:
-1. Go to the app instance tasks page > **CONFIGURATION** tab > **USER PROPERTIES** section
+To make the autoscaler console log show additional debug information, change an active app instance's log level to DEBUG:
+1. Go to the app instance task's page > **CONFIGURATION** tab > **USER PROPERTIES** section
1. Hover over the section > Click `Edit` > Click `+ADD PARAMETER`
1. Input `log_level` as the key and `DEBUG` as the value of the new parameter.
![Autoscaler debugging](../../img/webapp_autoscaler_debug_log.png)
-The consoles log level will update in the autoscaler's next iteration.
+The console's log level will update in the autoscaler's next iteration.
:::
* Instance log files - Click to access the app instance's logs. This takes you to the app instance task's ARTIFACTS tab,
-which lists the app instances logs. In a logs `File Path` field, click <img src="/docs/latest/icons/ico-download-json.svg" alt="Download" className="icon size-sm space-sm" />
+which lists the app instance's logs. In a log's `File Path` field, click <img src="/docs/latest/icons/ico-download-json.svg" alt="Download" className="icon size-sm space-sm" />
to download the complete log.

View File

@ -48,7 +48,7 @@ For more information about how autoscalers work, see [Autoscalers Overview](../.
* \+ Add Item - Define another resource type
* **Autoscaler Instance Name** (optional) - Name for the Autoscaler instance. This will appear in the instance list
* **Max Idle Time** (optional) - Maximum time in minutes that a VM instance can be idle before the autoscaler spins it down
-* **Workers Prefix** (optional) - A Prefix added to workers names, associating them with this autoscaler
+* **Workers Prefix** (optional) - A Prefix added to workers' names, associating them with this autoscaler
* **Polling Interval** (optional) - Time period in minutes at which the designated queue is polled for new tasks
* **Apply Task Owner Vault Configuration** - Select to apply values from the task owner's [ClearML vault](../webapp_profile.md#configuration-vault) when executing the task
* **Warn if more than one instance is executing the same task** - Select to print warning to console when multiple
@ -91,22 +91,22 @@ The autoscaler dashboard shows:
* Queues and the resource type associated with them
* Number of current running instances
* Console: the application log containing everything printed to stdout and stderr appears in the console log. The log
-shows polling results of the autoscalers associated queues, including the number of tasks enqueued, and updates VM
+shows polling results of the autoscaler's associated queues, including the number of tasks enqueued, and updates VM
instances being spun up/down
:::tip Console Debugging
-To make the autoscaler console log show additional debug information, change an active app instances log level to DEBUG:
-1. Go to the app instance tasks page > **CONFIGURATION** tab > **USER PROPERTIES** section
+To make the autoscaler console log show additional debug information, change an active app instance's log level to DEBUG:
+1. Go to the app instance task's page > **CONFIGURATION** tab > **USER PROPERTIES** section
1. Hover over the section > Click `Edit` > Click `+ADD PARAMETER`
1. Input `log_level` as the key and `DEBUG` as the value of the new parameter.
![Autoscaler debugging](../../img/webapp_autoscaler_debug_log.png)
-The consoles log level will update in the autoscaler's next iteration.
+The console's log level will update in the autoscaler's next iteration.
:::
* Instance log files - Click to access the app instance's logs. This takes you to the app instance task's ARTIFACTS tab,
-which lists the app instances logs. In a logs `File Path` field, click <img src="/docs/latest/icons/ico-download-json.svg" alt="Download" className="icon size-sm space-sm" />
+which lists the app instance's logs. In a log's `File Path` field, click <img src="/docs/latest/icons/ico-download-json.svg" alt="Download" className="icon size-sm space-sm" />
to download the complete log.
:::tip EMBEDDING CLEARML VISUALIZATION

View File

@ -21,7 +21,7 @@ For more information about how autoscalers work, see [Autoscalers Overview](../.
* **Machine Specification**
* GPU Type - NVIDIA GPU on the machine
* Number of GPUs - Number of GPUs in the cloud machine
-* The rest of the machines available resources are dependent on the number and type of GPUs specified above:
+* The rest of the machine's available resources are dependent on the number and type of GPUs specified above:
* vCPUs - Number of vCPUs in the cloud machine
* Memory - RAM available to the cloud machine
* Hourly Price - Machine's hourly rate
@ -65,14 +65,14 @@ The GPU Compute dashboard shows:
* Console - The log shows updates of cloud instances being spun up/down.
:::tip Console Debugging
-To make the autoscaler console log show additional debug information, change an active app instances log level to DEBUG:
-1. Go to the app instance tasks page > **CONFIGURATION** tab > **USER PROPERTIES** section
+To make the autoscaler console log show additional debug information, change an active app instance's log level to DEBUG:
+1. Go to the app instance task's page > **CONFIGURATION** tab > **USER PROPERTIES** section
1. Hover over the section > Click `Edit` > Click `+ADD PARAMETER`
1. Input `log_level` as the key and `DEBUG` as the value of the new parameter.
![Autoscaler debugging](../../img/webapp_autoscaler_debug_log.png)
-The consoles log level will update in the autoscaler's next iteration.
+The console's log level will update in the autoscaler's next iteration.
:::
:::tip EMBEDDING CLEARML VISUALIZATION

View File

@ -25,8 +25,8 @@ limits.
sample a different set of hyperparameters values
* **Optimization Configuration**
* Optimization Method - The optimization strategy to employ (e.g. random, grid, hyperband)
-* Optimization Objective Metrics Title - Title of metric to optimize
-* Optimization Objective Metrics Series - Metric series (variant) to optimize
+* Optimization Objective Metric's Title - Title of metric to optimize
+* Optimization Objective Metric's Series - Metric series (variant) to optimize
* Optimization Objective Trend - Choose the optimization target, whether to maximize or minimize the value of the
metric specified above
* **Execution Queue** - The [ClearML Queue](../../fundamentals/agents_and_queues.md#what-is-a-queue) to which
@ -39,7 +39,7 @@ limits.
* Step Size - Step size between samples
* Discrete Parameters - A set of values to sample
* Values - Comma separated list of values to sample
-* Name - The original tasks configuration parameter name (including section name e.g. `Args/lr`) <br/>
+* Name - The original task's configuration parameter name (including section name e.g. `Args/lr`) <br/>
:::tip Hydra Parameters
For experiments using Hydra, input parameters from the OmegaConf in the following format:
`Hydra/<param>`. Specify `<param>` using dot notation. For example, if your OmegaConf looks like this:

View File

@ -54,7 +54,7 @@ Access app instance actions, by right-clicking an instance, or through the menu
![App context menu](../../img/app_context_menu.png)
* **Rename** - Rename the instance
-* **Configuration** - View an instances configuration
+* **Configuration** - View an instance's configuration
* **Export Configuration** - Export the app instance configuration as a JSON file, which you can later import to create
a new instance with the same configuration
* **Stop** - Shutdown the instance

View File

@ -52,7 +52,7 @@ The app monitors your workspace for trigger events and will launch copies of the
## Dashboard
-The Trigger Manager app instance's dashboard displays its console log. The log shows the instances activity: periodic
+The Trigger Manager app instance's dashboard displays its console log. The log shows the instance's activity: periodic
polling, and events triggered
![Trigger dashboard](../../img/apps_trigger_manager_dashboard.png)

View File

@ -32,7 +32,7 @@ The models table contains the following columns:
| Column | Description | Type |
|---|---|---|
| **RUN** | Pipeline run identifier | String |
-| **VERSION** | The pipeline version number. Corresponds to the [PipelineController](../../references/sdk/automation_controller_pipelinecontroller.md#class-pipelinecontroller)'s and [PipelineDecorator](../../references/sdk/automation_controller_pipelinecontroller.md#class-automationcontrollerpipelinedecorator)s `version` parameter | Version string |
+| **VERSION** | The pipeline version number. Corresponds to the [PipelineController](../../references/sdk/automation_controller_pipelinecontroller.md#class-pipelinecontroller)'s and [PipelineDecorator](../../references/sdk/automation_controller_pipelinecontroller.md#class-automationcontrollerpipelinedecorator)'s `version` parameter | Version string |
| **TAGS** | Descriptive, user-defined, color-coded tags assigned to the run. | Tag |
| **STATUS** | Pipeline run's status. See a list of the [task states and state transitions](../../fundamentals/task.md#task-states). For Running, Failed, and Aborted runs, you will also see a progress indicator next to the status. See [here](../../pipelines/pipelines.md#tracking-pipeline-progress). | String |
| **USER** | User who created the run. | String |
@ -91,9 +91,9 @@ The following table customizations are saved on a per-pipeline basis:
## Create Run
To launch a new run for a pipeline, click **+ NEW RUN** on the top left of the page. This opens a **NEW RUN** modal, where
you can set the run's parameters. By default, the fields are pre-filled with the last run's values.
Click **Advanced configurations** to change the run's execution queue.
![New run modal](../../img/webapp_pipeline_new_run.png)

Each step shows:
* Step log button - Hover over the step and click <img src="/docs/latest/icons/ico-console.svg" alt="console" className="icon size-md space-sm" />
to view the step's [details panel](#run-and-step-details-panel)
While the pipeline is running, the steps' details and colors are updated.
## Run and Step Details
### Run and Step Info
On the right side of the pipeline run panel, view the **RUN INFO**, which shows:
![Run info](../../img/webapp_pipeline_run_info.png)
To view a run's complete information, click **Full details**, which will open the pipeline's controller [task page](../webapp_exp_track_visual.md).
View each list's complete details in the pipeline task's corresponding tabs:
* **PARAMETERS** list > **CONFIGURATION** tab
* **METRICS** list > **SCALARS** tab
* **ARTIFACTS** and **MODELS** lists > **ARTIFACTS** tab
![Pipeline task info](../../img/webapp_pipeline_task_info.png)
To view a specific step's information, click the step on the execution graph, and the info panel displays its **STEP INFO**.
The panel displays the step's name, task type, and status, as well as its parameters, metrics, artifacts, and models.
![Step info](../../img/webapp_pipeline_step_info.png)
To return to viewing the run's information, click the pipeline graph, outside any of the steps.
### Run and Step Details Panel
Click on **DETAILS** on the top left of the info panel to view the pipeline controller's details panel. To view a step's
details panel, click **DETAILS** and then click on a step node, or hover over a step node and click <img src="/docs/latest/icons/ico-console.svg" alt="details" className="icon size-md space-sm" />.
The details panel includes three tabs:
* **Code** - For pipeline steps generated from functions using either [`PipelineController.add_function_step`](../../references/sdk/automation_controller_pipelinecontroller.md#add_function_step)
or [`PipelineDecorator.component`](../../references/sdk/automation_controller_pipelinecontroller.md#pipelinedecoratorcomponent),
you can view the selected step's code.
![code](../../img/webapp_pipeline_step_code.png)

<img src="/docs/latest/icons/ico-circle-newer.svg" alt="Right arrow" className="icon size-md space-sm" /> (new images), or <img src="/docs/latest/icons/ico-circle-newest.svg" alt="right arrow, newest image" className="icon size-md space-sm" /> (newest images).
* Click <img src="/docs/latest/icons/ico-disconnect.svg" alt="Sync selection" className="icon size-md space-sm" /> in
order to synchronize iteration and metric selection across experiments. For example, if you select a metric for
one experiment's debug samples, the same metric will be automatically selected for the rest of the experiments in the comparison.
![image](../img/webapp_compare_30.png)

View the experiments table in table view <img src="/docs/latest/icons/ico-table-view.svg" alt="Table view" className="icon size-md space-sm" />
or in details view <img src="/docs/latest/icons/ico-split-view.svg" alt="Details view" className="icon size-md space-sm" />,
using the buttons on the top left of the page. Use the table view for a comparative view of your experiments according
to columns of interest. Use the details view to access a selected experiment's details, while keeping the experiment list
in view. Details view can also be accessed by double-clicking a specific experiment in the table view to open its details view.
You can archive experiments so the experiments table doesn't get too cluttered. Click **OPEN ARCHIVE** on the top of the

The comparison tabs provide the following views:
In the **Details**, **Network**, and **Scalars** (Values mode) tabs, you can view differences in the models' nominal
values. **Details** displays the models' general information, labels, and metadata. **Network** displays the models'
configuration. **Scalars** (in Values mode) displays the models' scalar values (min, max, or last). Each model's
information is displayed in a column, so each field is lined up side-by-side.
The model on the left is used as the base model, to which the other models are compared. You can set a new base model
![Merged plots](../img/webapp_compare_models_merge_plots.png)
The rest of the plots which can't be merged are displayed separately for each model.
![Side-by-side plots](../img/webapp_compare_models_side_plots.png)

View the models table in table view <img src="/docs/latest/icons/ico-table-view.svg" alt="Table view" className="icon size-md space-sm" />
or in details view <img src="/docs/latest/icons/ico-split-view.svg" alt="Details view" className="icon size-md space-sm" />,
using the buttons on the top left of the page. Use the table view for a comparative view of your models according to
columns of interest. Use the details view to access a selected model's details, while keeping the model list in view.
Details view can also be accessed by double-clicking a specific model in the table view to open its details view.
You can archive models so the models table doesn't get too cluttered. Click **OPEN ARCHIVE** on the top of the

### Leaving a Workspace
You can leave any workspace you've previously joined (except your personal workspace).
When leaving a workspace, you lose access to its resources (tasks, models, etc.) and your previously created access
credentials to that workspace are revoked. Tasks and associated artifacts that you logged to that workspace will remain
To remove a user from a workspace:
1. Hover over the user's row on the table
1. Click the <img src="/docs/latest/icons/ico-trash.svg" alt="Trash can" className="icon size-md" /> button
Removed users lose access to your workspace's resources (tasks, models, etc.) and their existing access credentials are
revoked. Tasks and associated artifacts logged to your workspace by a removed user will remain in your workspace. The
user can only rejoin your workspace when you re-invite them.

Use the Projects Page for project navigation and management.
Your projects are displayed like folders: click a folder to access its contents. The Projects Page shows the top-level
projects in your workspace. Projects that contain nested subprojects are identified by an extra nested project tab.
An exception is the **All Experiments** folder, which shows all projects' and subprojects' contents in a single, flat
list.
![Projects page](../img/webapp_project_page.png)
If a project has any subprojects, clicking its folder will open its own project page. Access the projects' top-level
contents (i.e. experiments, models etc.) via the folder with the bracketed (`[ ]`) project name.
If a project does not contain any subprojects, clicking on its folder will open its experiment table (or [Project Overview](webapp_project_overview.md)

<br/>
With ClearML's Reports you can write up notes, experiment findings, or really anything you want. You can create reports
in any of your ClearML projects.
In addition to its main document, a report also contains a description field, which will appear in the report's card in
The `src` parameter of a standard embed code is made up of the following components:
* Your web server's URL (e.g. `app.clear.ml`)
* `/widget/` - The endpoint that serves the embedded data.
* The query parameters for your visualization (the path and query are separated by a question mark `?`)
* `timestamp` - Time from start
* `iso_time` - Wall time
* `metrics` - Metric name
* `variants` - Variant's name
* `company` - Workspace ID. Applicable to the ClearML hosted service, for embedding content from a different workspace
* `light` - Add this parameter to switch the visualization to light theme
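Putting the components together, a sketch of building such a `src` URL (the widget path segment and the query values below are assumptions for illustration, not a documented endpoint):

```python
from urllib.parse import urlencode

# Assumed values for illustration
server = "https://app.clear.ml"   # your web server's URL
endpoint = "/widget/scalar"       # hypothetical path under the /widget/ endpoint
params = {
    "metrics": "loss",    # metric name
    "variants": "train",  # variant's name
    "light": "",          # presence of the parameter switches to light theme
}

# Path and query are separated by a question mark
src = f"{server}{endpoint}?{urlencode(params)}"
print(src)
# → https://app.clear.ml/widget/scalar?metrics=loss&variants=train&light=
```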

* Training iteration.
Clicking on a worker will open the worker's details panel and replace the graph with that worker's resource utilization
information. The resource metric being monitored can be selected through the menu at the graph's top left corner:
* CPU and GPU Usage
* Memory Usage
* Video Memory Usage
* Network Usage
The worker's details panel includes the following two tabs:
* **INFO** - worker information:
* Worker Name
* Update time - The last time the worker reported data
To create a new queue, click **+ NEW QUEUE** (top left).
Hover over a queue and click <img src="/docs/latest/icons/ico-copy-to-clipboard.svg" alt="Copy" className="icon size-md space-sm" />
to copy the queue's ID.
![image](../img/4100.png)
To access queue actions:
![Queue context menu](../img/webapp_workers_queues_context.png)
* Delete - Delete the queue. Any pending tasks will be dequeued.
* Rename - Change the queue's name
* Clear - Remove all pending tasks from the queue
* Custom action - The ClearML Enterprise Server provides a mechanism to define your own custom actions, which will
appear in the context menu. See [Custom UI Context Menu Actions](../deploying_clearml/clearml_server_config.md#custom-ui-context-menu-actions)
Clicking on a queue will open the queue's details panel and replace the graphs with that queue's statistics.
The queue's details panel includes the following two tabs:
* **EXPERIMENTS** - A list of experiments in the queue. You can reorder and remove enqueued experiments. See
[Controlling Queue Contents](#controlling-queue-contents).
* **WORKERS** - Information about the workers assigned to the queue:
* Name - Worker name
* IP - Worker's IP
* Currently Executing - The experiment currently being executed by the worker
### Controlling Queue Contents
Click on an experiment's menu button <img src="/docs/latest/icons/ico-dots-v-menu.svg" alt="Dot menu" className="icon size-md space-sm" />
in the **EXPERIMENTS** tab to reorganize your queue:
![Queue experiment's menu](../img/workers_queues_experiment_actions.png)