Change headings to title caps (#62)

pollfly 2021-09-09 13:17:46 +03:00 committed by GitHub
parent de82df937e
commit c2d8707572
No known key found for this signature in database
GPG Key ID: 4AEE18F83AFDEB23
77 changed files with 337 additions and 336 deletions


@ -11,7 +11,7 @@ in the UI and send it for long-term training on a remote machine.
**If you are not that lucky**, this section is for you :)
## What does ClearML Session do?
## What Does ClearML Session Do?
`clearml-session` is a feature that allows you to launch a JupyterLab or VS Code session and execute code on a remote
machine that better meets resource needs. With this feature, local links are provided, which can be used to access
JupyterLab and VS Code on a remote machine over a secure and encrypted SSH connection. By default, the JupyterLab and
@ -74,18 +74,18 @@ After entering a `clearml-session` command with all specifications:
To run a session inside a Docker container, use the `--docker` flag and enter the docker image to use in the interactive
session.
### Installing requirements
### Installing Requirements
`clearml-session` can install required Python packages when setting up the remote environment. A `requirements.txt` file
can be attached to the command using `--requirements </file/location.txt>`.
Alternatively, packages can be manually specified, using `--packages "<package_name>"`
(for example `--packages "keras" "clearml"`), and they'll be automatically installed.
### Accessing a git repository
### Accessing a Git Repository
To access a git repository remotely, add a `--git-credentials` flag and set it to `true`, so the local .git-credentials
file will be sent to the interactive session. This is helpful when working with private git repositories, and it allows for seamless
cloning and tracking of git references, including untracked changes.
### Re-launching and shutting down sessions
### Re-launching and Shutting Down Sessions
If a `clearml-session` was launched locally and is still running on a remote machine, users can easily reconnect to it.
To reconnect to a previous session, execute `clearml-session` with no additional flags, and the option of reconnecting
to an existing session will show up:
@ -106,7 +106,7 @@ Connect to session [0-1] or 'N' to skip
To shut down a remote session, which will free the `clearml-agent` and close the CLI, enter "Shutdown". If a session
is shut down, there is no option to reconnect to it.
### Connecting to an existing session
### Connecting to an Existing Session
If a `clearml-session` is running remotely, it's possible to continue working on the session from any machine.
When `clearml-session` is launched, it initializes a task with a unique ID in the ClearML Server.
@ -117,7 +117,7 @@ To connect to an existing session:
1. Click on the JupyterLab / VS Code link that is outputted, or connect directly to the SSH session
### Starting a debugging session
### Starting a Debugging Session
Previously executed experiments in the ClearML system can be debugged on a remote interactive session.
Provide `clearml-session` with the ID of a Task to debug; `clearml-session` then clones the experiment's git repository and
replicates its environment on a remote machine. The code can then be interactively executed and debugged in JupyterLab / VS Code.
@ -133,7 +133,7 @@ The Task must be connected to a git repository, since currently single script de
1. In JupyterLab / VS Code, access the experiment's repository in the `environment/task_repository` folder.
### Command line options
### Command Line Options
<div className="tbl-cmd">


@ -128,7 +128,7 @@ Install ClearML Agent as a system Python package and not in a Python virtual env
1. Optionally, configure **ClearML** options for **ClearML Agent** (default docker, package manager, etc.). See the [ClearML Configuration Reference](configs/clearml_conf.md).
### Adding ClearML Agent to a configuration file
### Adding ClearML Agent to a Configuration File
In case a `clearml.conf` file already exists, add a few ClearML Agent specific configurations to it.<br/>
@ -297,7 +297,7 @@ In case a `clearml.conf` file already exists, add a few ClearML Agent specific c
## Execution
### Spinning up an Agent
### Spinning Up an Agent
#### Executing an Agent
To execute an agent, listening to a queue, run:
@ -345,17 +345,17 @@ clearml-agent daemon --detached --queue group_a group_b --order-fairness --gpus
This makes sure the agent pulls from the “group_a” queue, then from “group_b”, then back to “group_a”, and so on. This ensures
that “group_a” and “group_b” cannot starve one another of resources.
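The alternating pull order can be sketched as a plain round-robin over queues (an illustrative model only, not the agent's actual implementation; queue names are taken from the example above):

```python
from collections import deque

def round_robin_pull(queues):
    """Illustrative round-robin scheduler: pull one task from each queue
    in turn, skipping empty queues, until all queues are drained."""
    order = deque(queues.keys())
    pulled = []
    while any(queues.values()):
        name = order[0]
        order.rotate(-1)  # the next pull starts from the following queue
        if queues[name]:
            pulled.append((name, queues[name].pop(0)))
    return pulled

# "group_a" has three tasks queued, "group_b" has two
queues = {"group_a": ["a1", "a2", "a3"], "group_b": ["b1", "b2"]}
print(round_robin_pull(queues))
```

Even with three tasks waiting in "group_a", the sketch pulls one task per queue per cycle, so "group_b" is never starved.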
### Explicit Task execution
### Explicit Task Execution
ClearML Agent can also execute specific tasks directly, without listening to a queue.
#### Execute a Task without queue
#### Execute a Task without Queue
Execute a Task with a `clearml-agent` worker without a queue.
```bash
clearml-agent execute --id <task-id>
```
#### Clone a Task and execute the cloned Task
#### Clone a Task and Execute the Cloned Task
Clone the specified Task and execute the cloned Task with a `clearml-agent` worker without a queue.
```bash
@ -584,7 +584,7 @@ compute resources provided by google colab and send experiments for execution on
Check out [this](guides/ide/google_colab.md) tutorial on how to run a ClearML Agent on Google Colab!
## Scheduling working hours
## Scheduling Working Hours
:::important
Available with the ClearML Enterprise offering
@ -603,7 +603,7 @@ Override worker schedules by:
* Setting runtime properties to force a worker on or off
* Tagging a queue on or off
### Running clearml-agent with a schedule (command line)
### Running clearml-agent with a Schedule (Command Line)
Set a schedule for a worker from the command line when running `clearml-agent`. Two properties enable setting working hours:
@ -632,7 +632,7 @@ For example:
* `"20-00,00-08 SUN"` - 8 PM to midnight and midnight to 8 AM on Sundays
* `"20-00 SUN", "00-08 MON"` - 8 PM on Sundays to 8 AM on Mondays (spans from before midnight to after midnight).
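A minimal sketch of how such a schedule string could be interpreted (illustrative only; the agent's own parser may differ, and day ranges that wrap past SAT are not handled here):

```python
DAYS = ["SUN", "MON", "TUE", "WED", "THU", "FRI", "SAT"]

def parse_schedule(spec):
    """Parse an uptime/downtime string like "20-00,00-08 SUN" into
    (set of active hours, set of active days)."""
    hours_part, days_part = spec.split()
    hours = set()
    for rng in hours_part.split(","):
        start, end = (int(h) for h in rng.split("-"))
        end = end or 24  # "00" as a range end means midnight
        hours.update(range(start, end))
    if "-" in days_part:
        first, last = days_part.split("-")
        days = set(DAYS[DAYS.index(first):DAYS.index(last) + 1])
    else:
        days = {days_part}
    return hours, days

hours, days = parse_schedule("17-20 SUN-TUE")
# hours cover 17:00-20:00; days are SUN, MON, TUE
```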
### Setting worker schedules in the configuration file
### Setting Worker Schedules in the Configuration File
Set a schedule for a worker using configuration file options. The options are:
@ -649,7 +649,7 @@ For example, set a worker's schedule from 5 PM to 8 PM on Sunday through Tuesday
agent.uptime: ["17-20 SUN-TUE", "13-22 WED"]
### Overriding worker schedules using runtime properties
### Overriding Worker Schedules Using Runtime Properties
Runtime properties override the command line uptime / downtime properties. The runtime properties are:
@ -671,7 +671,7 @@ For example, to force a worker on for 24 hours:
curl --user <key>:<secret> --header "Content-Type: application/json" --data '{"worker":"<worker_id>","runtime_properties":[{"key": "force", "value": "on", "expiry": 86400}]}' http://<api-server-hostname-or-ip>:8008/workers.set_runtime_properties
### Overriding worker schedules using queue tags
### Overriding Worker Schedules Using Queue Tags
Queue tags override command line and runtime properties. The queue tags are the following:


@ -12,7 +12,7 @@ Or, tag your questions on [stackoverflow](https://stackoverflow.com/questions/ta
You can always find us at [clearml@allegro.ai](mailto:clearml@allegro.ai?subject=ClearML).
## Allegro AI resources
## Allegro AI Resources
Read the [Allegro Blogs](https://allegro.ai/blog/).


@ -14,7 +14,7 @@ This reference page is organized by configuration file section:
An example configuration file is located [here](https://github.com/allegroai/clearml-agent/blob/master/docs/clearml.conf),
in the **ClearML** GitHub repository.
## Editing your configuration file
## Editing Your Configuration File
To add, change, or delete options, edit your configuration file.
@ -29,7 +29,7 @@ To add, change, or delete options, edit your configuration file.
1. In the required section (sections listed on this page), add, modify, or remove required options.
1. Save configuration file.
## Environment variables
## Environment Variables
ClearML's configuration file uses [HOCON](https://github.com/lightbend/config/blob/main/HOCON.md) configuration format,
which supports environment variable reference.
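For example, a substitution in the configuration file might look like the following (an illustrative HOCON fragment; the variable names are assumptions, not settings mandated by ClearML):

```hocon
api {
    # resolved from the environment when the file is loaded
    api_server: ${CLEARML_API_HOST}
    # ${?VAR} is HOCON's optional form: the key keeps its prior value if VAR is unset
    web_server: ${?CLEARML_WEB_HOST}
}
```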
@ -49,7 +49,7 @@ See [Note on Windows](https://github.com/lightbend/config/blob/main/HOCON.md#not
for information about using environment variables with Windows in the configuration file.
## Configuration file sections
## Configuration File Sections
### agent section


@ -35,7 +35,7 @@ in the [Security](clearml_server_security.md) page.
The minimum recommended amount of RAM is 8 GB. For example, a t3.large or t3a.large EC2 instance type would accommodate the recommended RAM size.
### AWS community AMIs
### AWS Community AMIs
**To launch a ClearML Server AWS community AMI:**
@ -106,7 +106,7 @@ Once deployed, **ClearML Server** exposes the following services:
If needed, modify the default login behavior to match workflow policy, see [Configuring Web Login Authentication](clearml_server_config.md#web-login-authentication)
on the "Configuring Your Own ClearML Server" page.
## Storage configuration
## Storage Configuration
The pre-built **ClearML Server** storage configuration is the following:
@ -115,7 +115,7 @@ The pre-built **ClearML Server** storage configuration is the following:
* File Server: `/mnt/fileserver/`
## Backing up and restoring data and configuration
## Backing Up and Restoring Data and Configuration
:::note
If data is being moved between a **Trains Server** and a **ClearML Server** installation, make sure to use the correct paths
@ -147,13 +147,13 @@ sudo tar czvf ~/clearml_backup_config.tgz -C /opt/clearml/config .
```
## ClearML Server AWS community AMIs
## ClearML Server AWS Community AMIs
The following section contains a list of AMI Image IDs per region for the latest **ClearML Server** version.
### Latest version
### Latest Version
#### v1.1.1


@ -20,11 +20,11 @@ For all configuration options, see the [ClearML Configuration Reference](../conf
We recommend using the latest version of **ClearML Server**.
:::
## ClearML Server deployment configuration
## ClearML Server Deployment Configuration
**ClearML Server** supports two deployment configurations: single IP (domain) and sub-domains.
### Single IP (domain) configuration
### Single IP (Domain) Configuration
Single IP (domain) with the following open ports:
@ -32,7 +32,7 @@ Single IP (domain) with the following open ports:
* API service on port `8008`
* File storage service on port `8081`
### Sub-domain configuration
### Sub-domain Configuration
Sub-domain configuration with default http/s ports (`80` or `443`):
@ -60,7 +60,7 @@ Accessing the **ClearML Web UI** with `app.clearml.mydomain.com` will automatica
**ClearML Server** features can be configured using either configuration files or environment variables.
### Configuration files
### Configuration Files
The **ClearML Server** uses the following configuration files:
@ -88,12 +88,12 @@ tasks {
:::
### Environment variables
### Environment Variables
The **ClearML Server** supports several fixed environment variables that affect its behavior,
as well as dynamic environment variables that can be used to override any configuration file setting.
#### Fixed environment variables
#### Fixed Environment Variables
General
@ -109,7 +109,7 @@ Database service overrides:
* `CLEARML_REDIS_SERVICE_PORT` allows overriding the port for the Redis service
#### Dynamic environment variables
#### Dynamic Environment Variables
Dynamic environment variables can be used to override any configuration setting that appears in the configuration files.
@ -144,11 +144,11 @@ the default secret for the system's apiserver component can be overridden by set
dynamic environment variable's key to lower-case before overriding configuration values with the environment variable value.
:::
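The lower-casing rule described above can be sketched as follows (a sketch assuming a `CLEARML__`-prefixed key with double-underscore separators between configuration path segments; this is an illustration, not the server's actual parser):

```python
def env_key_to_config_path(env_key, prefix="CLEARML__"):
    """Map a dynamic environment variable key to a configuration path:
    strip the prefix, split on double underscores, lower-case each segment."""
    if not env_key.startswith(prefix):
        raise ValueError(f"not a dynamic override key: {env_key}")
    return ".".join(part.lower() for part in env_key[len(prefix):].split("__"))

print(env_key_to_config_path("CLEARML__SECURE__HTTP__SESSION_SECRET__APISERVER"))
# → secure.http.session_secret.apiserver
```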
## Configuration procedures
## Configuration Procedures
### Sub-domains and load balancers
### Sub-domains and Load Balancers
To illustrate this configuration, we provide the following example based on AWS load balancing:
@ -189,7 +189,7 @@ To illustrate this configuration, we provide the following example based on AWS
### Opening Elasticsearch, MongoDB, and Redis for external access
### Opening Elasticsearch, MongoDB, and Redis for External Access
For improved security, the ports for **ClearML Server** Elasticsearch, MongoDB, and Redis servers are not exposed by default;
they are only open internally in the docker network. If external access is needed, open these ports (but make sure to
@ -267,7 +267,7 @@ Without web login authentication, **ClearML Server** does not restrict access (b
1. Restart **ClearML Server**.
### Using hashed passwords
### Using Hashed Passwords
You can also use hashed passwords instead of plain-text passwords. To do that:
- Set `pass_hashed: true`
- Use a base64-encoded hashed password in the `password` field instead of a plain-text password. Assuming Jane's plain-text password is `123456`, use the following bash command to generate the base64-encoded hashed password:


@ -35,7 +35,7 @@ and **ClearML Server** needs to be installed.
* Minimum free disk space of at least 30% plus two times the size of the data.
* Python version >=2.7 or >=3.6, and Python accessible from the command-line as `python`
### Migrating the data
### Migrating the Data
**To migrate the data:**
@ -125,13 +125,13 @@ and **ClearML Server** needs to be installed.
kubectl get jobs -n upgrade-elastic
### Finishing up
### Finishing Up
To finish up:
1. Verify the data migration
1. Conclude the upgrade.
#### Step 1. Verifying the data migration
#### Step 1. Verifying the Data Migration
Upon successful completion, the migration script renames the original **Trains Server** directory, which contains the now
migrated data, and prints a completion message:
@ -150,7 +150,7 @@ For help in resolving migration issues, check the **allegro-clearml** [Slack Cha
[GitHub Issues](https://github.com/allegroai/clearml-server/issues), and the **ClearML Server** sections of the [FAQ](../faq.md).
:::
#### Step 2. Completing the installation
#### Step 2. Completing the Installation
After verifying the data migration completed successfully, conclude the **ClearML Server** installation process.
@ -205,7 +205,7 @@ For backwards compatibility, the environment variables ``TRAINS_HOST_IP``, ``TRA
If issues arise during the upgrade, see the FAQ page, [How do I fix Docker upgrade errors?](../faq#common-docker-upgrade-errors).
##### Other deployment formats
##### Other Deployment Formats
To conclude the upgrade for deployment formats other than Linux, follow their upgrade instructions:


@ -20,7 +20,7 @@ for Firefox, go to Developer Tools > Storage > Cookies, and for Chrome, go to De
and delete all cookies under the **ClearML Server** URL.
:::
## Default ClearML Server service ports
## Default ClearML Server Service Ports
After deploying **ClearML Server**, the services expose the following node ports:
@ -28,7 +28,7 @@ After deploying **ClearML Server**, the services expose the following node ports
* API server on `8008`
* File Server on `8081`
## Default ClearML Server storage paths
## Default ClearML Server Storage Paths
The persistent storage configuration:
@ -85,7 +85,7 @@ The minimum requirements for **ClearML Server** are:
docker-compose -f /opt/clearml/docker-compose.yml up -d
## Backing up and restoring data and configuration
## Backing Up and Restoring Data and Configuration
The commands in this section are an example of how to back up and restore data and configuration.
@ -112,11 +112,11 @@ If the data and the configuration need to be restored:
The following section contains a list of Custom Image URLs (exported in different formats) for each released **ClearML Server** version.
### Latest version - v1.0.2
### Latest Version - v1.0.2
- [https://storage.googleapis.com/allegro-files/clearml-server/clearml-server.tar.gz](https://storage.googleapis.com/allegro-files/clearml-server/clearml-server.tar.gz)
### All release versions
### All Release Versions
- v1.0.2 - [https://storage.googleapis.com/allegro-files/clearml-server/clearml-server-1-0-2.tar.gz](https://storage.googleapis.com/allegro-files/clearml-server/clearml-server-1-0-2.tar.gz)
- v1.0.1 - [https://storage.googleapis.com/allegro-files/clearml-server/clearml-server-1-0-1.tar.gz](https://storage.googleapis.com/allegro-files/clearml-server/clearml-server-1-0-1.tar.gz)


@ -39,7 +39,7 @@ instructions in the [Security](clearml_server_security.md) page.
:::
### Step 1: Modify Elasticsearch default values in the Docker configuration file
### Step 1: Modify Elasticsearch Default Values in the Docker Configuration File
Before deploying **ClearML Server** in a Kubernetes cluster, modify several Elasticsearch settings in the Docker configuration.
For more information, see [Install Elasticsearch with Docker](https://www.elastic.co/guide/en/elasticsearch/reference/master/docker.html#_notes_for_production_use_and_defaults)
@ -80,7 +80,7 @@ in the Elasticsearch documentation and [Daemon configuration file](https://docs.
sudo service docker restart
### Step 2. Deploy ClearML Server in the Kubernetes using Helm
### Step 2. Deploy ClearML Server in the Kubernetes Using Helm
After modifying several Elasticsearch settings in the Docker configuration (see Step 1 above), deploy **ClearML Server**.


@ -135,7 +135,7 @@ instructions in the [Security](clearml_server_security.md) page.
The server is now running on [http://localhost:8080](http://localhost:8080).
## Port mapping
## Port Mapping
After deploying **ClearML Server**, the services expose the following ports:
@ -154,7 +154,7 @@ After deploying **ClearML Server**, the services expose the following ports:
## Backing up and restoring data and configuration
## Backing Up and Restoring Data and Configuration
The commands in this section are an example of how to back up and restore data and configuration.


@ -60,7 +60,7 @@ By default, **ClearML Server** launches with unrestricted access. To restrict **
The server is now running on [http://localhost:8080](http://localhost:8080).
## Port mapping
## Port Mapping
After deploying **ClearML Server**, the services expose the following node ports:


@ -13,7 +13,7 @@ For upgrade purposes, the terms **Trains Server** and **ClearML Server** are int
The sections below contain the steps to upgrade **ClearML Server** on the [same AWS instance](#upgrading-on-the-same-aws-instance), and
to upgrade and migrate to a [new AWS instance](#upgrading-and-migrating-to-a-new-aws-instance).
### Upgrading on the same AWS instance
### Upgrading on the Same AWS Instance
This section contains the steps to upgrade **ClearML Server** on the same AWS instance.
@ -48,7 +48,7 @@ Some legacy **Trains Server** AMIs provided an auto-upgrade on restart capabilit
docker-compose -f /opt/clearml/docker-compose.yml pull
docker-compose -f docker-compose.yml up -d
### Upgrading and migrating to a new AWS instance
### Upgrading and Migrating to a New AWS Instance
This section contains the steps to upgrade **ClearML Server** on the new AWS instance.


@ -25,7 +25,7 @@ models, and dataviews, can be viewed in the project's [experiments table](../web
## Usage
### Creating sub-projects
### Creating Sub-projects
When [initializing a task](task.md#task-creation), its project needs to be specified. If the project entered does not exist, it will be created.
Projects can contain sub-projects, just like folders can contain sub-folders. Input into the `project_name`
@ -44,7 +44,7 @@ Nesting projects works on multiple levels. For example: `project_name=main_proje
Projects can also be created using the [`projects.create`](../references/api/endpoints.md#post-projectscreate) REST API call.
### View all projects in system
### View All Projects in System
To view all projects in the system, use the `Task` class method `get_projects`:
@ -54,7 +54,7 @@ project_list = Task.get_projects()
This returns a list of projects sorted by last update time.
### More actions
### More Actions
For additional ways to work with projects, use the REST API `projects` resource. Some of the available actions include:
* [`projects.create`](../references/api/endpoints.md#post-projectscreate) and [`projects.delete`](../references/api/endpoints.md#post-projectsdelete) - create and delete projects


@ -27,7 +27,7 @@ It's possible to copy ([clone](../webapp/webapp_exp_reproducing.md)) a task mult
![Task](../img/fundamentals_task.png)
## Task sections
## Task Sections
The sections of **ClearML Task** are made up of the information that a task captures and stores, which consists of code
execution details and execution outputs. This information is used for tracking
@ -48,7 +48,7 @@ The captured [execution output](../webapp/webapp_exp_track_visual.md#experiment-
To view a more in depth description of each task section, see [Tracking Experiments and Visualizing Results](../webapp/webapp_exp_track_visual.md).
## Task types
## Task Types
Tasks have a *type* attribute, which denotes their purpose (Training / Testing / Data processing). This helps to further
organize projects and ensure tasks are easy to [search and find](#querying--searching-tasks). The default task type is *training*.
@ -64,7 +64,7 @@ Available task types are:
- *data_processing*, *qc*
- *custom*
## Task lifecycle
## Task Lifecycle
ClearML Tasks are created in one of the following methods:
* Manually running code that is instrumented with the ClearML SDK and invokes `Task.init()`.
@ -102,7 +102,7 @@ The above diagram demonstrates how a previously run task can be used as a baseli
1. The new task is enqueued for execution.
1. A `clearml-agent` servicing the queue pulls the new task and executes it (where ClearML again logs all the execution outputs).
## Task states
## Task States
The state of a Task represents its stage in the Task lifecycle. It indicates whether the Task is read-write (editable) or
read-only. For each state, a state transition indicates which actions can be performed on an experiment, and the new state


@ -5,6 +5,7 @@ title: First Steps
## Install ClearML
First, [sign up for free](https://app.community.clear.ml).
Install the clearml python package:
@ -18,7 +19,7 @@ clearml-init
```
## Auto-log experiment
## Auto-log Experiment
In ClearML, experiments are organized as [Tasks](../../fundamentals/task).


@ -46,7 +46,7 @@ Artifacts can be stored anywhere, either on the ClearML server, or any object st
see all [storage capabilities](../../integrations/storage).
### Adding artifacts
### Adding Artifacts
Uploading a local file containing the preprocessed results of the data:
```python
@ -154,7 +154,7 @@ Any page is sharable by copying the URL from the address bar, allowing you to bo
It's also possible to tag Tasks for visibility and filtering allowing you to add more information on the execution of the experiment.
Later you can search based on task name and tag in the search bar, and filter experiments based on their tags, parameters, status and more.
## What's next?
## What's Next?
This covers the Basics of ClearML! Running through this guide we've learned how to log Parameters, Artifacts and Metrics!


@ -11,7 +11,7 @@ while ClearML ensures your work is reproducible and scalable.
<img src="https://github.com/allegroai/clearml-docs/blob/main/docs/img/clearml_architecture.png?raw=true" width="100%" alt="Architecture diagram"/>
## What can you do with ClearML?
## What Can You Do with ClearML?
- Track and upload metrics and models with only 2 lines of code
- Create a bot that sends you a Slack message whenever your model improves in accuracy


@ -28,7 +28,7 @@ Once we have a Task in ClearML, we can clone and edit its definition in the UI.
- Create data monitoring & scheduling, and launch inference jobs to test performance on any incoming dataset.
- Once two or more experiments run one after another, group them together into a [pipeline](../../fundamentals/pipelines.md)
## Manage your data
## Manage Your Data
Use [ClearML Data](../../clearml_data.md) to version your data, then link it to running experiments for easy reproduction.
Make datasets machine agnostic (i.e. store the original dataset in a shared storage location, e.g. shared folder/S3/GS/Azure).
ClearML Data supports efficient Dataset storage and caching, differentiable & compressed


@ -123,7 +123,7 @@ from clearml import Task
executed_task = Task.get_task(task_id='aabbcc')
# get a summary of the min/max/last value of all reported scalars
min_max_values = executed_task.get_last_scalar_metrics()
# get detialed graphs of all scalars
# get detailed graphs of all scalars
full_scalars = executed_task.get_reported_scalars()
```
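A sketch of walking the structure returned by `get_reported_scalars()` to pick out the last value of each series (this assumes a nested `{metric: {variant: {"x": [...], "y": [...]}}}` layout; the sample data is illustrative):

```python
def last_scalar_values(scalars):
    """Collect the last reported y-value of every metric/variant series.
    Assumes the nested {metric: {variant: {"x": [...], "y": [...]}}} layout."""
    return {
        (metric, variant): series["y"][-1]
        for metric, variants in scalars.items()
        for variant, series in variants.items()
        if series.get("y")
    }

# illustrative sample in the assumed layout
sample = {"loss": {"train": {"x": [0, 1, 2], "y": [0.9, 0.5, 0.3]}}}
print(last_scalar_values(sample))  # {('loss', 'train'): 0.3}
```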


@ -13,7 +13,7 @@ on a remote or local machine, from the remote repository and from a local script
- [allegroai/events](https://github.com/allegroai/events) repository cloned (for local script execution)
### Executing code from a remote repository
### Executing Code from a Remote Repository
``` bash
clearml-task --project keras_examples --name remote_test --repo https://github.com/allegroai/events.git --script /webinar-0620/keras_mnist.py --args batch_size=64 epochs=1 --queue default
@ -57,7 +57,7 @@ or add the **`--packages '<package_name>`** flag to the command.
<br />
### Executing a local script
### Executing a Local Script
Using `clearml-task` to execute a local script is very similar to using it with a remote repo.
For this example, we will be using a local version of this [script](https://github.com/allegroai/events/blob/master/webinar-0620/keras_mnist.py).


@ -10,7 +10,7 @@ the needed files.
1. Open terminal and change directory to the cloned repository's examples folder
`cd clearml/examples/reporting`
## Creating initial dataset
## Creating Initial Dataset
1. To create the dataset, run this code:
@ -68,7 +68,7 @@ The command also finalizes the dataset, making it immutable and ready to be cons
Dataset closed and finalized
```
## Listing Dataset content
## Listing Dataset Content
To see that all the files were added to the created dataset, use `clearml-data list` and enter the ID of the dataset
that was just closed.


@ -25,7 +25,7 @@ Once these are logged, they can be visualized in the **ClearML Web UI**.
If you are not already using **ClearML**, see [Getting Started](/getting_started/ds/best_practices.md).
:::
## Adding ClearML to code
## Adding ClearML to Code
Add two lines of code:
```python
from clearml import Task
task = Task.init(project_name='my project', task_name='my experiment')  # names are illustrative
```

@ -40,7 +40,7 @@ Text printed to the console for training progress, as well as all other console
![image](../../../img/keras_colab_01.png)
## Configuration objects
## Configuration Objects
The configuration appears in **CONFIGURATIONS** **>** **CONFIGURATION OBJECTS** **>** **General**.


@ -21,7 +21,7 @@ The scatter plots appear in the **ClearML Web UI**, in **RESULTS** **>** **PLOTS
![image](../../../img/examples_matplotlib_example_03.png)
## Debug samples
## Debug Samples
The images appear in **RESULTS** **>** **DEBUG SAMPLES**. Each debug sample image is associated with a metric.


@ -17,7 +17,7 @@ The scatter plots appear in the **ClearML Web UI**, in **RESULTS** **>** **PLOTS
![image](../../../img/examples_matplotlib_example_03.png)
## Debug samples
## Debug Samples
The images appear in **RESULTS** **>** **DEBUG SAMPLES**. Each debug sample image is associated with a metric.


@ -29,7 +29,7 @@ Integrate **ClearML** with the following steps:
event_name=Events.ITERATION_COMPLETED)
```
### ClearMLLogger parameters
### ClearMLLogger Parameters
The following are the `ClearMLLogger` method parameters:
@ -52,7 +52,7 @@ The following are the `ClearMLLogger` method parameters:
## Logging
### Ignite engine output and / or metrics
### Ignite Engine Output and / or Metrics
To log scalars, Ignite engine's output and / or metrics, use the `OutputHandler`.
@ -91,7 +91,7 @@ clearml_logger.attach(evaluator,
event_name=Events.EPOCH_COMPLETED)
```
### Optimizer parameters
### Optimizer Parameters
To log optimizer parameters, use `OptimizerParamsHandler`:
```python
@ -101,7 +101,7 @@ clearml_logger.attach(trainer,
event_name=Events.ITERATION_STARTED)
```
### Model weights
### Model Weights
To log model weights as scalars, use `WeightsScalarHandler`:
@ -122,7 +122,7 @@ clearml_logger.attach(trainer,
```
## Model snapshots
## Model Snapshots
To save input snapshots as **ClearML** artifacts, use `ClearMLSaver`:
@ -137,7 +137,7 @@ handler = Checkpoint(to_save, ClearMLSaver(clearml_logger), n_saved=1,
validation_evaluator.add_event_handler(Events.EVENT_COMPLETED, handler)
```
## Visualizing experiment results
## Visualizing Experiment Results
When the code with an ignite `ClearMLLogger` object and attached [handlers](https://github.com/pytorch/ignite/blob/master/ignite/contrib/handlers/trains_logger.py)
runs, the experiment results can be visualized in the **ClearML Web UI**.
@ -154,7 +154,7 @@ View the scalars, including training and validation metrics, in the experiment's
![image](../../../img/ignite_training.png)
![image](../../../img/ignite_validation.png)
### Model snapshots
### Model Snapshots
To save model snapshots, use `ClearMLSaver`.


@ -10,17 +10,17 @@ The accuracy, learning rate, and training loss scalars are automatically logged,
![image](../../../../../img/examples_audio_classification_UrbanSound8K_03.png)
## Debug samples
## Debug Samples
The audio samples and spectrogram plots are automatically logged and appear in **RESULTS** **>** **DEBUG SAMPLES**.
### Audio samples
### Audio Samples
![image](../../../../../img/examples_audio_classification_UrbanSound8K_06.png)
By double-clicking a thumbnail, you can play an audio sample.
### Spectrogram visualizations
### Spectrogram Visualizations
![image](../../../../../img/examples_audio_classification_UrbanSound8K_04.png)


@ -11,17 +11,17 @@ demonstrates integrating **ClearML** into a Jupyter Notebook which uses PyTorch
![image](../../../../../img/examples_audio_preprocessing_example_08.png)
## Debug samples
## Debug Samples
**ClearML** automatically logs the audio samples which the example reports by calling TensorBoard methods, and the spectrogram visualizations reported by calling Matplotlib methods. They appear in **RESULTS** **>** **DEBUG SAMPLES**.
### Audio samples
### Audio Samples
You can play the audio samples by double-clicking the audio thumbnail.
![image](../../../../../img/examples_audio_preprocessing_example_03.png)
### Spectrogram visualizations
### Spectrogram Visualizations
![image](../../../../../img/examples_audio_preprocessing_example_06.png)
![image](../../../../../img/examples_audio_preprocessing_example_06a.png)


@ -77,7 +77,7 @@ All console output from `Hyper-Parameter Optimization` appears in **RESULTS** ta
![image](../../../../../img/examples_hyperparameter_search_03.png)
## Experiments comparison
## Experiments Comparison
**ClearML** automatically logs each job, meaning each experiment that executes with a set of hyperparameters, separately. Each appears as an individual experiment in the **ClearML Web UI**, where the Task name is `image_classification_CIFAR10` and the hyperparameters appended.
@ -93,31 +93,31 @@ Use the **ClearML Web UI** [experiment comparison](../../../../../webapp/webapp_
* Plots
* Debug images
### Side by side hyperparameter value comparison
### Side by Side Hyperparameter Value Comparison
In the experiment comparison window, **HYPER PARAMETERS** tab, select **Values** in the list (to the right of **+ Add Experiment**), and hyperparameter differences appear with a different background color.
![image](../../../../../img/examples_hyperparameter_search_06.png)
### Metric comparison by hyperparameter
### Metric Comparison by Hyperparameter
Select **Parallel Coordinates** in the list, click a **Performance Metric**, and then select the checkboxes of the hyperparameters.
![image](../../../../../img/examples_hyperparameter_search_07.png)
### Scalar values comparison
### Scalar Values Comparison
In the **SCALARS** tab, select **Last Values**, **Min Values**, or **Max Values**. Value differences appear with a different background color.
![image](../../../../../img/examples_hyperparameter_search_09.png)
### Scalar series comparison
### Scalar Series Comparison
Select **Graph** to view the scalar series for the jobs, where each scalar plot shows the series for all jobs.
![image](../../../../../img/examples_hyperparameter_search_08.png)
### Debug samples comparison
### Debug Samples Comparison
In the **DEBUG SAMPLES** tab, debug images appear.

View File

@ -17,7 +17,7 @@ The accuracy, accuracy per class, and training loss scalars are automatically lo
![image](../../../../../img/examples_image_classification_CIFAR10_05.png)
## Debug samples
## Debug Samples
The image samples are automatically logged and appear in **RESULTS** **>** **DEBUG SAMPLES**.

View File

@ -22,13 +22,13 @@ In this pipeline example, the data preprocessing Task and training Task are each
The data download Task is not a step in the pipeline; see [download_and_split](https://github.com/allegroai/clearml/blob/master/examples/frameworks/pytorch/notebooks/table/download_and_split.ipynb).
:::
## Pipeline controller and steps
## Pipeline Controller and Steps
In this example, a pipeline controller object is created.
pipe = PipelineController(default_execution_queue='dan_queue', add_pipeline_tags=True)
### Preprocessing step
### Preprocessing Step
Two preprocessing nodes are added to the pipeline. These steps will run concurrently.
@ -89,7 +89,7 @@ two sets of data are created in the pipeline.
</details>
### Training step
### Training Step
Each training node depends upon the completion of one preprocessing node. The parameter `parents` is a list of step names indicating all steps that must complete before the new step starts. In this case, `preprocessing_1` must complete before `train_1` begins, and `preprocessing_2` must complete before `train_2` begins.
@ -133,7 +133,7 @@ The ID of a Task whose artifact contains a set of preprocessed data for training
</details>
### Best model step
### Best Model Step
The best model step depends upon both training nodes completing and takes the two training node Task IDs to override.
@ -168,7 +168,7 @@ The IDs of the training Tasks from the steps named `train_1` and `train_2` are p
</details>
### Pipeline start, wait, and cleanup
### Pipeline Start, Wait, and Cleanup
Once all steps are added to the pipeline, start it, wait for it to complete, and finally clean up the pipeline processes.
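Assembled end to end, the controller flow above might be sketched as follows (the queue, project, and step names are illustrative placeholders, not the example's actual values):

```python
def build_and_run_pipeline():
    # Sketch only -- assumes a configured ClearML environment.
    from clearml.automation.controller import PipelineController

    pipe = PipelineController(default_execution_queue='default', add_pipeline_tags=True)
    # Each step clones an existing base Task; `parents` gates execution order.
    pipe.add_step(name='preprocessing_1', base_task_project='examples',
                  base_task_name='preprocessing template')
    pipe.add_step(name='train_1', parents=['preprocessing_1'],
                  base_task_project='examples', base_task_name='train template')
    pipe.start()  # enqueue the steps
    pipe.wait()   # block until every step completes
    pipe.stop()   # clean up the pipeline processes
```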
@ -196,7 +196,7 @@ Once all steps are added to the pipeline, start it. Wait for it to complete. Fin
</details>
## Running the pipeline
## Running the Pipeline
**To run the pipeline:**

View File

@ -21,7 +21,7 @@ These scalars, along with the resource utilization plots, which are titled **:mo
![image](../../../img/examples_pytorch_tensorboard_07.png)
## Debug samples
## Debug Samples
**ClearML** automatically tracks images and text output to TensorFlow. They appear in **RESULTS** **>** **DEBUG SAMPLES**.

View File

@ -40,7 +40,7 @@ When the script runs, it logs:
![image](../../../img/integration_keras_tuner_06.png)
## Summary of hyperparameter optimization
## Summary of Hyperparameter Optimization
**ClearML** automatically logs the parameters of each experiment run in the hyperparameter search. They appear in tabular
form in **RESULTS** **>** **PLOTS**.
@ -61,7 +61,7 @@ The model configuration is stored with the model.
![image](../../../img/integration_keras_tuner_05.png)
## Configuration objects
## Configuration Objects
### Hyperparameters

View File

@ -29,7 +29,7 @@ The `tf.summary.histogram` output appears in **RESULTS** **>** **PLOTS**.
![image](../../../img/examples_tensorboard_toy_04.png)
## Debug samples
## Debug Samples
**ClearML** automatically tracks images and text output to TensorFlow. They appear in **RESULTS** **>** **DEBUG SAMPLES**.

View File

@ -18,7 +18,7 @@ The **ClearML PyCharm plugin** enables syncing a local execution configuration t
![image](../../img/ide_pycharm_plugin_from_disk.png)
## Optional: ClearML configuration parameters
## Optional: ClearML Configuration Parameters
:::warning
If you set ClearML configuration parameters (ClearML Server and ClearML credentials) in the plugin, they will override

View File

@ -53,7 +53,7 @@ Interactive session config:
}
```
### Step 2: Launch interactive session
### Step 2: Launch Interactive Session
When the CLI asks whether to `Launch interactive session [Y]/n?`, press 'Y' or 'Enter'.
@ -72,7 +72,7 @@ Setup process details: https://app.community.clear.ml/projects/60893b87b0b642679
Waiting for environment setup to complete [usually about 20-30 seconds]
```
### Step 3: Connect to remote notebook
### Step 3: Connect to Remote Notebook
Then the CLI will output a link to the ready environment:
@ -87,7 +87,7 @@ Click on the JupyterLab link, which will open the remote session
Now, let's execute some code in the remote session!
### Step 4: Execute code
### Step 4: Execute Code
1. Open up a new Notebook.
@ -103,7 +103,7 @@ Now, let's execute some code in the remote session!
Look in the script, and notice that it makes use of ClearML, Keras, and TensorFlow, but we don't need to install these
packages in Jupyter, because we specified them in the `--packages` flag of `clearml-session`.
### Step 5: Shut down remote session
### Step 5: Shut Down Remote Session
To shut down the remote session, which will free the `clearml-agent` and close the CLI, enter "Shutdown".

View File

@ -7,7 +7,7 @@ example script demonstrates hyperparameter optimization, which is automated by u
<a class="tr_top_negative" name="strategy"></a>
## Set the search strategy for optimization
## Set the Search Strategy for Optimization
A search strategy is required for the optimization, as well as a search strategy optimizer class to implement that strategy.
@ -57,7 +57,7 @@ the `RandomSearch` for the search strategy.
'we will be using RandomSearch strategy instead')
aSearchStrategy = RandomSearch
## Define a callback
## Define a Callback
When the optimization starts, a callback is provided that returns the best performing set of hyperparameters. In the script,
the `job_complete_callback` function returns the ID of `top_performance_job_id`.
@ -73,7 +73,7 @@ the `job_complete_callback` function returns the ID of `top_performance_job_id`.
if job_id == top_performance_job_id:
print('WOOT WOOT we broke the record! Objective reached {}'.format(objective_value))
## Initialize the optimization Task
## Initialize the Optimization Task
Initialize the Task, which will be stored in **ClearML Server** when the code runs. After the code runs at least once, it
can be [reproduced](../../../webapp/webapp_exp_reproducing.md) and [tuned](../../../webapp/webapp_exp_tuning.md).
@ -89,7 +89,7 @@ the project **Hyper-Parameter Optimization**, which can be seen in the **ClearML
task_type=Task.TaskTypes.optimizer,
reuse_last_task_id=False)
## Set up the arguments
## Set Up the Arguments
Create an arguments dictionary that contains the ID of the Task to optimize, and a Boolean indicating whether the
optimizer will run as a service (see [Running as a Service](#running-as-a-service)).
@ -112,7 +112,7 @@ to optimize a different experiment, see [tuning experiments](../../../webapp/web
args['template_task_id'] = Task.get_task(
project_name='examples', task_name='Keras HP optimization base').id
## Instantiate the optimizer object
## Instantiate the Optimizer Object
Instantiate an [automation.optimization.HyperParameterOptimizer](../../../references/sdk/hpo_optimization_hyperparameteroptimizer.md)
object, setting the optimization parameters, beginning with the ID of the experiment to optimize.
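A minimal instantiation along these lines might look like the following sketch (the metric names and the parameter range are placeholders, not the example's actual values):

```python
def create_optimizer(template_task_id, execution_queue='default'):
    # Sketch only -- assumes a configured ClearML environment.
    from clearml.automation import HyperParameterOptimizer, UniformIntegerParameterRange
    from clearml.automation import RandomSearch

    return HyperParameterOptimizer(
        base_task_id=template_task_id,  # the experiment to optimize
        hyper_parameters=[
            UniformIntegerParameterRange('General/epochs', min_value=2, max_value=12, step_size=2),
        ],
        objective_metric_title='val_accuracy',
        objective_metric_series='val_accuracy',
        objective_metric_sign='max',    # maximize the objective
        optimizer_class=RandomSearch,
        execution_queue=execution_queue,
    )
```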
@ -170,7 +170,7 @@ Specify the remaining parameters, including the time limit per Task (minutes), p
<a class="tr_top_negative" name="service"></a>
## Running as a service
## Running as a Service
The optimization can run as a service if the `run_as_service` argument is set to `true`. For more information about
running as a service, see [ClearML Agent services container](../../../clearml_agent.md#services-mode).

View File

@ -37,7 +37,7 @@ controller Task has already run at least once and is in **ClearML Server**).
The sections below describe in more detail what happens in the controller Task and in each step Task.
## The pipeline controller
## The Pipeline Controller
1. Create the pipeline controller object.
@ -98,7 +98,7 @@ The sections below describe in more detail what happens in the controller Task a
pipe.stop()
```
## Step 1 - Downloading the data
## Step 1 - Downloading the Data
In the Step 1 Task ([step1_dataset_artifact.py](https://github.com/allegroai/clearml/blob/master/examples/pipeline/step1_dataset_artifact.py)):
1. Clone the base Task and enqueue it for execution
@ -118,7 +118,7 @@ when the `add_step` method is called in the pipeline controller.
task.upload_artifact('dataset', artifact_object=local_iris_pkl)
```
## Step 2 - Processing the data
## Step 2 - Processing the Data
In the Step 2 Task ([step2_data_processing.py](https://github.com/allegroai/clearml/blob/master/examples/pipeline/step2_data_processing.py)):
1. Create a parameter dictionary and connect it to the Task.
@ -158,7 +158,7 @@ In the Step 2 Task ([step2_data_processing.py](https://github.com/allegroai/clea
task.upload_artifact('y_test', y_test)
```
## Step 3 - Training the network
## Step 3 - Training the Network
In the Step 3 Task ([step3_train_model.py](https://github.com/allegroai/clearml/blob/master/examples/pipeline/step3_train_model.py)):
1. Create a parameter dictionary and connect it to the Task.
@ -191,7 +191,7 @@ In the Step 3 Task ([step3_train_model.py](https://github.com/allegroai/clearml/
1. Train the network and log plots, along with **ClearML** automatic logging.
## Running the pipeline
## Running the Pipeline
**To run the pipeline:**

View File

@ -9,7 +9,7 @@ When the script runs, it creates an experiment named `3D plot reporting`, which
**ClearML** reports these plots in the **ClearML Web UI** **>** experiment page **>** **RESULTS** tab **>** **PLOTS** sub-tab.
## Surface plot
## Surface Plot
To plot a series as a surface plot, use the [Logger.report_surface](../../references/sdk/logger.md#report_surface)
method.
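For instance, a sketch along these lines (the grid values and names are illustrative):

```python
# A 10x10 grid of z-values; any 2D array-like can serve as the surface matrix.
surface = [[x * y for x in range(10)] for y in range(10)]

def report_surface_demo():
    # Assumes a configured ClearML environment (sketch only).
    from clearml import Task
    logger = Task.current_task().get_logger()
    logger.report_surface('example surface', 'series1', iteration=0,
                          matrix=surface, xaxis='x', yaxis='y', zaxis='z')
```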
@ -30,7 +30,7 @@ Visualize the reported surface plot in **RESULTS** **>** **PLOTS**.
![image](../../img/examples_reporting_01.png)
## 3D scatter plot
## 3D Scatter Plot
To plot a series as a 3-dimensional scatter plot, use the [Logger.report_scatter3d](../../references/sdk/logger.md#report_scatter3d)
method.
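A sketch of the call (the point values and names are illustrative):

```python
# 100 (x, y, z) points; a list of triplets or an Nx3 numpy array both work.
scatter_3d = [[i, i * i, i % 7] for i in range(100)]

def report_scatter3d_demo():
    from clearml import Task  # assumes a configured ClearML environment
    logger = Task.current_task().get_logger()
    logger.report_scatter3d('example scatter 3d', 'series_xyz', iteration=0,
                            scatter=scatter_3d, xaxis='x', yaxis='y', zaxis='z')
```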

View File

@ -31,7 +31,7 @@ When the script runs, it creates an experiment named `artifacts example`, which
![image](../../img/examples_reporting_03.png)
## Dynamically tracked artifacts
## Dynamically Tracked Artifacts
Currently, **ClearML** supports uploading and dynamically tracking Pandas DataFrames. Use the [Task.register_artifact](../../references/sdk/task.md#register_artifact)
method. If the Pandas DataFrame changes, **ClearML** uploads the changes. The updated artifact is associated with the experiment.
@ -59,7 +59,7 @@ method to retrieve it, we can see that **ClearML** tracked the change.
# or access it from anywhere using the Task's get_registered_artifacts()
Task.current_task().get_registered_artifacts()['train'].sample(frac=0.5, replace=True, random_state=1)
## Artifacts without tracking
## Artifacts Without Tracking
**ClearML** supports several types of objects that can be uploaded and are not tracked. Use the [Task.upload_artifact](../../references/sdk/task.md#upload_artifact)
method.
@ -79,7 +79,7 @@ Artifacts without tracking include:
# add and upload pandas.DataFrame (onetime snapshot of the object)
task.upload_artifact('Pandas', artifact_object=df)
### Local files
### Local Files
# add and upload local file artifact
task.upload_artifact('local file', artifact_object=os.path.join('data_samples', 'dancing.jpg'))
@ -89,12 +89,12 @@ Artifacts without tracking include:
# add and upload dictionary (stored as JSON)
task.upload_artifact('dictionary', df.to_dict())
### Numpy objects
### Numpy Objects
# add and upload Numpy Object (stored as .npz file)
task.upload_artifact('Numpy Eye', np.eye(100, 100))
### Image files
### Image Files
# add and upload Image (stored as .png file)
im = Image.open(os.path.join('data_samples', 'dancing.jpg'))

View File

@ -94,7 +94,7 @@ method.
![image](../../img/colab_explicit_reporting_06.png)
### Confusion matrices
### Confusion Matrices
Report confusion matrices by calling the [Logger.report_matrix](../../references/sdk/logger.md#report_matrix)
method.
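For example, a sketch of reporting a small confusion matrix (the class labels and counts are made up for the sketch):

```python
# Rows/columns are class indices; counts below are illustrative only.
confusion = [[9, 1, 0],
             [1, 8, 1],
             [0, 2, 8]]

def report_confusion_demo():
    from clearml import Task  # assumes a configured ClearML environment
    logger = Task.current_task().get_logger()
    logger.report_matrix('Confusion matrix example', 'Test set', iteration=1,
                         matrix=confusion,
                         xlabels=['cat', 'dog', 'bird'],
                         ylabels=['cat', 'dog', 'bird'])
```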

View File

@ -17,7 +17,7 @@ example script from ClearML's GitHub repo:
* The [clearml](https://github.com/allegroai/clearml) repository is cloned.
* The `clearml` package is installed.
## Before starting
## Before Starting
Make a copy of `pytorch_mnist.py` in order to add explicit reporting to it.
@ -26,7 +26,7 @@ Make a copy of `pytorch_mnist.py` in order to add explicit reporting to it.
cp pytorch_mnist.py pytorch_mnist_tutorial.py
## Step 1: Setting an output destination for model checkpoints
## Step 1: Setting an Output Destination for Model Checkpoints
Specify a default output location, which is where model checkpoints (snapshots) and artifacts will be stored when the
experiment runs. Some possible destinations include:
@ -72,7 +72,7 @@ For example, if the Task ID is `9ed78536b91a44fbb3cc7a006128c1b0`, then the dire
| +-- models
| +-- artifacts
## Step 2: Logger class reporting methods
## Step 2: Logger Class Reporting Methods
In addition to **ClearML** automagical logging, the **ClearML** Python
package contains methods for explicit reporting of plots, log text, media, and tables. These methods include:
@ -90,14 +90,14 @@ package contains methods for explicit reporting of plots, log text, media, and t
* [Logger.report_media](../../references/sdk/logger.md#report_media) - Report media including images, audio, and video.
* [Logger.get_default_upload_destination](../../references/sdk/logger.md#get_default_upload_destination) - Retrieve the destination that is set for uploaded media.
### Get a logger
### Get a Logger
First, create a logger for the Task using the [Task.get_logger](../../references/sdk/task.md#get_logger)
method.
logger = task.get_logger()
### Plot scalar metrics
### Plot Scalar Metrics
Add scalar metrics using the [Logger.report_scalar](../../references/sdk/logger.md#report_scalar)
method to report loss metrics.
@ -125,7 +125,7 @@ method to report loss metrics.
logger.report_scalar(title='Scalar example {} - epoch'.format(epoch),
series='Loss', value=loss.item(), iteration=batch_idx)
### Plot other (not scalar) data
### Plot Other (Not Scalar) Data
The script contains a function named `test`, which computes the loss and the number of correct predictions for the trained model. We add a histogram
and a confusion matrix to log them.
@ -165,19 +165,19 @@ and confusion matrix to log them.
logger.report_confusion_matrix(title='Confusion matrix example',
series='Test loss / correct', matrix=matrix, iteration=1)
### Log text
### Log Text
Extend **ClearML** by explicitly logging text, including errors, warnings, and debugging statements. We use the [Logger.report_text](../../references/sdk/logger.md#report_text)
method and its argument `level` to report a debugging message.
logger.report_text('The default output destination for model snapshots and artifacts is: {}'.format(model_snapshots_path), level=logging.DEBUG)
## Step 3: Registering artifacts
## Step 3: Registering Artifacts
Registering an artifact uploads it to **ClearML Server**, and if it changes, the change is logged in **ClearML Server**.
Currently, **ClearML** supports Pandas DataFrames as registered artifacts.
### Register the artifact
### Register the Artifact
In the tutorial script's `test` function, we assign the test loss and correct data to a Pandas DataFrame object and register
that Pandas DataFrame using the [Task.register_artifact](../../references/sdk/task.md#register_artifact) method.
@ -193,7 +193,7 @@ that Pandas DataFrame using the [Task.register_artifact](../../references/sdk/ta
task.register_artifact('Test_Loss_Correct', df, metadata={'metadata string': 'apple',
'metadata int': 100, 'metadata dict': {'dict string': 'pear', 'dict int': 200}})
### Reference the registered artifact
### Reference the Registered Artifact
Once an artifact is registered, it can be referenced and utilized in the Python experiment script.
@ -205,7 +205,7 @@ methods to take a sample.
sample = Task.current_task().get_registered_artifacts()['Test_Loss_Correct'].sample(frac=0.5,
replace=True, random_state=1)
## Step 4: Uploading artifacts
## Step 4: Uploading Artifacts
Artifacts can be uploaded to the **ClearML Server**, but changes to them are not logged.
@ -225,7 +225,7 @@ method with metadata specified in the `metadata` parameter.
metadata={'metadata string': 'banana', 'metadata integer': 300,
'metadata dictionary': {'dict string': 'orange', 'dict int': 400}})
## Additional information
## Additional Information
After extending the Python experiment script, run it and view the results in the **ClearML Web UI**.

View File

@ -22,7 +22,7 @@ function, which reports the **ClearML** documentation's home page.
Logger.current_logger().report_media("html", "url_html", iteration=iteration, url="https://allegro.ai/docs/index.html")
## Reporting HTML local files
## Reporting HTML Local Files
Report the following using the `Logger.report_media` method's `local_path` parameter:
* [Interactive HTML](#interactive-html)

View File

@ -13,7 +13,7 @@ line options (in the **Args** subsection).
When the script runs, it creates an experiment named `hyper-parameters example`, which is associated with the `examples` project.
## argparse command line options
## Argparse Command Line Options
If your code uses argparse and initializes a Task, **ClearML** automatically logs the argparse arguments.
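The pattern can be sketched as follows (the argument names and Task names are illustrative):

```python
from argparse import ArgumentParser

parser = ArgumentParser()
parser.add_argument('--batch-size', type=int, default=64)
parser.add_argument('--lr', type=float, default=0.01)
args = parser.parse_args([])  # use parse_args() in a real script

def init_task():
    # Initializing the Task is enough for ClearML to pick up the argparse
    # values and log them under HYPER PARAMETERS > Args.
    # Assumes a configured ClearML environment.
    from clearml import Task
    return Task.init(project_name='examples', task_name='hyper-parameters example')
```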
@ -45,7 +45,7 @@ TensorFlow Definitions appear in **HYPER PARAMETERS** **>** **TF_DEFINE**.
![image](../../img/examples_reporting_hyper_param_03.png)
## Parameter dictionaries
## Parameter Dictionaries
Connect a parameter dictionary to a Task by calling the [Task.connect](../../references/sdk/task.md#connect)
method, and **ClearML** logs the parameters. **ClearML** also tracks changes to the parameters.
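A sketch of the pattern (the dictionary contents are illustrative):

```python
params = {'num_layers': 3, 'batch_size': 64, 'dropout': 0.25}

def connect_params():
    from clearml import Task  # assumes a configured ClearML environment
    task = Task.current_task()
    # After connect(), values edited in the UI are fed back into this
    # dictionary when the experiment is executed remotely.
    return task.connect(params)
```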

View File

@ -20,7 +20,7 @@ sub-tab.
When the script runs, it creates an experiment named `audio and video reporting`, which is associated with the `examples`
project.
## Reporting (uploading) media from a source by URL
## Reporting (Uploading) Media from a Source by URL
Report by calling the [Logger.report_media](../../references/sdk/logger.md#report_media)
method using the `url` parameter.
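For instance (the URL below is a placeholder, not a real media source):

```python
MEDIA_URL = 'https://example.com/sample.mp3'  # placeholder source URL

def report_media_by_url():
    from clearml import Task  # assumes a configured ClearML environment
    logger = Task.current_task().get_logger()
    logger.report_media('audio', 'tada', iteration=1, url=MEDIA_URL)
```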
@ -40,7 +40,7 @@ The reported audio can be viewed in the **DEBUG SAMPLES** sub-tab. Double click
![image](../../img/examples_reporting_08.png)
## Reporting (uploading) media from a local file
## Reporting (Uploading) Media from a Local File
Use the `local_path` parameter.

View File

@ -9,9 +9,9 @@ the configuration and label enumeration with it.
When the script runs, it creates an experiment named `Model configuration example`, which is associated with the `examples` project.
## Configuring models
## Configuring Models
### Using a configuration file
### Using a Configuration File
Connect a configuration file to a Task by calling the [Task.connect_configuration](../../references/sdk/task.md#connect_configuration)
method with the file location and the configuration object's name as arguments. In this example, we connect a JSON file and a YAML file
@ -29,7 +29,7 @@ in the **yaml file** object, as specified in the `name` parameter of the `connec
![image](../../img/examples_reporting_config.png)
### Configuration dictionary
### Configuration Dictionary
Connect a configuration dictionary to a Task by creating a dictionary, and then calling the [Task.connect_configuration](../../references/sdk/task.md#connect_configuration)
method with the dictionary and the object name as arguments. After the configuration is connected, **ClearML** tracks changes to it.
@ -50,7 +50,7 @@ method with the dictionary and the object name as arguments. After the configura
![image](../../img/examples_reporting_config_3.png)
## Label enumeration
## Label Enumeration
Connect a label enumeration dictionary by creating the dictionary, and then calling the [Task.connect_label_enumeration](../../references/sdk/task.md#connect_label_enumeration)
method with the dictionary as an argument.
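A sketch of the call (the label set is illustrative):

```python
labels = {'background': 0, 'cat': 1, 'dog': 2}

def connect_labels():
    from clearml import Task  # assumes a configured ClearML environment
    Task.current_task().connect_label_enumeration(labels)
```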

View File

@ -9,7 +9,7 @@ sub-tab.
When the script runs, it creates an experiment named `pandas table reporting`, which is associated with the `examples` project.
## Reporting Pandas DataFrames as tables
## Reporting Pandas DataFrames as Tables
Report Pandas DataFrames by calling the [Logger.report_table](../../references/sdk/logger.md#report_table)
method, and providing the DataFrame in the `table_plot` parameter.
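A sketch along these lines (the DataFrame contents are illustrative):

```python
# Raw columns for the table; the DataFrame is built inside the function
# so the sketch stays importable even without pandas installed.
data = {'num_legs': [2, 4, 8, 0], 'num_wings': [2, 0, 0, 0]}

def report_dataframe():
    # Assumes pandas and a configured ClearML environment.
    import pandas as pd
    from clearml import Task

    df = pd.DataFrame(data, index=['falcon', 'dog', 'spider', 'fish'])
    logger = Task.current_task().get_logger()
    logger.report_table('table pd', 'PD with index', iteration=0, table_plot=df)
```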
@ -28,7 +28,7 @@ method, and providing the DataFrame in the `table_plot` parameter.
![image](../../img/examples_reporting_12.png)
## Reporting CSV files as tables
## Reporting CSV Files as Tables
Report CSV files by providing the URL location of the CSV file in the `url` parameter. For a local CSV file, use the `csv` parameter.

View File

@ -84,7 +84,7 @@ method.
yaxis_reversed=True,
)
## 2D scatter plots
## 2D Scatter Plots
Report 2D scatter plots by calling the [Logger.report_scatter2d](../../references/sdk/logger.md#report_scatter2d)
method. Use the `mode` parameter to plot data points with lines (by default), markers, or both lines and markers.
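A sketch of the call (the series values and names are illustrative):

```python
scatter_2d = [[i, i * i] for i in range(10)]  # (x, y) pairs

def report_scatter2d_demo():
    from clearml import Task  # assumes a configured ClearML environment
    logger = Task.current_task().get_logger()
    # mode may be 'lines' (the default), 'markers', or 'lines+markers'
    logger.report_scatter2d('example scatter', 'series_markers', iteration=0,
                            scatter=scatter_2d, xaxis='x', yaxis='y', mode='markers')
```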

View File

@ -9,7 +9,7 @@ In the budget, set the maximum number of each instance type to spin for experime
Configure multiple instance types per queue, and multiple queues. The **ClearML** AWS
autoscaler will spin down idle instances based on the maximum idle time and the polling interval configurations.
## Running the ClearML AWS autoscaler
## Running the ClearML AWS Autoscaler
The **ClearML** AWS autoscaler can execute in [ClearML services mode](../../clearml_agent.md#services-mode),
and is configurable.
@ -23,7 +23,7 @@ Run **ClearML** AWS autoscaler in one of these ways:
* Run the script locally or as a service.
* When executed, a Task named `AWS Auto-Scaler` is created and associated with the `DevOps` project.
### Running using the ClearML Web UI
### Running Using the ClearML Web UI
Edit the instance types and budget configuration by editing the Task, and then enqueue the Task to
run in **ClearML Agent** services mode.
@ -102,7 +102,7 @@ run in **ClearML Agent** services mode.
1. In the experiments table, right click the **AWS Auto-Scaler** Task **>** **Enqueue** **>** **services** queue **>** **ENQUEUE**.
### Running using the script
### Running Using the Script
The [aws_autoscaler.py](https://github.com/allegroai/clearml/blob/master/examples/services/aws-autoscaler/aws_autoscaler.py)
script includes a wizard which prompts for instance details and budget configuration.
@ -112,7 +112,7 @@ The script can run in two ways:
* Configure and enqueue.
* Enqueue with an existing configuration.
#### To configure and enqueue:
#### To Configure and Enqueue:
Use the `run` command line option:
@ -241,7 +241,7 @@ Execution log https://app.clearml-master.hosted.allegro.ai/projects/142a598b5d23
<br/>
#### To enqueue with an existing configuration:
#### To Enqueue with an Existing Configuration:
Use the `remote` command line option:

View File

@ -20,11 +20,11 @@ with options to run locally or as a service.
* **ClearML Agent** is [installed and configured](../../clearml_agent.md#installation).
* **ClearML Agent** is launched in [services mode](../../clearml_agent.md#services-mode).
## Running the cleanup service
## Running the Cleanup Service
### Running using the ClearML Web UI
### Running Using the ClearML Web UI
#### Step 1. Configuring the cleanup service
#### Step 1. Configuring the Cleanup Service
1. In the **ClearML Web UI** **Projects** page, click the **DevOps** project **>** click the **Cleanup Service** Task.
1. In the info panel, click the **CONFIGURATION** tab.
@ -43,14 +43,14 @@ with options to run locally or as a service.
* Right click the **Cleanup Service** Task **>** **Enqueue** **>** In the queue list, select **services** **>** **ENQUEUE**.
### Running using the script
### Running Using the Script
The [cleanup_service.py](https://github.com/allegroai/clearml/blob/master/examples/services/cleanup/cleanup_service.py) script enqueues
the cleanup service to run in **ClearML Agent** services mode, because the `run_as_service` parameter is set to `True`.
python cleanup_service.py
## The cleanup service code
## The Cleanup Service Code
[cleanup_service.py](https://github.com/allegroai/clearml/blob/master/examples/services/cleanup/cleanup_service.py) creates
a **ClearML** API client session to delete the Tasks. It creates an `APIClient` object that establishes a session with the

View File

@ -14,7 +14,7 @@ Notebook server by passing environment variables to the subprocess, which point
Task. When the script runs, it creates an experiment named `Allocate Jupyter Notebook Instance`, which is associated with
the `DevOps` project in the **ClearML Web UI**.
## Running the Jupyter Notebook server service
## Running the Jupyter Notebook Server Service
1. The example script must run at least once before it can execute as a **ClearML Agent** service, because the Task must
be stored in **ClearML Server** in order to be enqueued for a **ClearML Agent** to fetch and execute.
@ -36,7 +36,7 @@ the `DevOps` project in the **ClearML Web UI**.
The status changes to *Pending* and then to *Running*. Once it is running, the Jupyter Notebook server is ready to
run notebooks.
## Logging the Jupyter Notebook server
## Logging the Jupyter Notebook Server
**ClearML** stores the Jupyter Notebook server links in the `Task.comment` property, which appears in the **ClearML Web UI**
**>** the experiment's **INFO** tab **>** **DESCRIPTION** section.

View File

@ -43,14 +43,14 @@ The Slack API token and channel you create are required to configure the Slack a
1. In the confirmation dialog, click **Allow**.
1. Click **Copy** to copy the **Bot User OAuth Access Token**.
## Running the service
## Running the Service
There are two options to run the Slack alerts service:
* [Using the ClearML Web UI](#running-using-the-clearml-web-ui)
* [Using the script](#running-using-the-script)
### Running using the ClearML Web UI
### Running Using the ClearML Web UI
#### Step 1. Configuring the service
#### Step 1. Configuring the Service
1. In the **ClearML Web UI** **Projects** page, click the **Monitoring** project **>** click the **Slack Alerts** Task.
1. In the info panel, click the **CONFIGURATION** tab.
@ -73,11 +73,11 @@ There are two options to run the Slack alerts service:
**services**.
* **slack_api** - The Slack API key. The default value can be set in the environment variable, `SLACK_API_TOKEN` (MANDATORY).
#### Step 2. Enqueuing the service
#### Step 2. Enqueuing the Service
* Right click the **Monitoring** Task **>** **Enqueue** **>** Select **services** **>** **ENQUEUE**.
### Running using the script
### Running Using the Script
The [slack_alerts.py](https://github.com/allegroai/clearml/blob/master/examples/services/monitoring/slack_alerts.py) script
allows configuring the monitoring service, and then either:
@ -109,7 +109,7 @@ allows to configure the monitoring service, and then either:
* ``local`` - If ``True``, run locally only instead of as a service. If ``False``, then automatically enqueue the Task
to run in **ClearML Agent** services mode. The default value is ``False``.
## Additional information about slack_alerts.py
## Additional Information About slack_alerts.py
In `slack_alerts.py`, the class `SlackMonitor` inherits from the `Monitor` class in `clearml.automation.monitor`.
`SlackMonitor` overrides the following `Monitor` class methods:

View File

@ -6,7 +6,7 @@ If your computer is offline, or you do not want a Task's data and logs stored in
the **Offline Mode** option. In this mode, all the data and logs that the Task captures from the code are stored in a
local folder, which can be later uploaded to the [ClearML Server](../deploying_clearml/clearml_server.md).
## Setting Task to offline mode
## Setting Task to Offline Mode
Before initializing a Task, use the [Task.set_offline](../references/sdk/task.md#taskset_offline) class method and set the
`offline_mode` argument to `True`.
@ -33,7 +33,7 @@ ClearML Task: Offline session stored in /home/user/.clearml/cache/offline/b78684
All the information captured by the Task is saved locally. Once the task script finishes execution, the captured
session is zipped and stored at `~/.clearml/cache/offline/<task_id>.zip`.
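The offline round-trip can be sketched as follows (Task names are illustrative; the actual zip path is printed by the offline run and differs per machine):

```python
def offline_round_trip():
    # Sketch only -- assumes a configured ClearML environment.
    from clearml import Task

    Task.set_offline(offline_mode=True)   # must be called before Task.init
    task = Task.init(project_name='examples', task_name='offline demo')
    # ... run the experiment code ...
    task.close()                          # the session is zipped on close

    Task.set_offline(offline_mode=False)
    # Later, upload the captured session to the ClearML Server:
    Task.import_offline_session('~/.clearml/cache/offline/<task_id>.zip')
```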
## Uploading local session
## Uploading Local Session
To upload the local execution data that the Task captured offline to the ClearML Server, use the
[Task.import_offline_session](../references/sdk/task.md#taskimport_offline_session) method. This method will upload the

View File

@ -16,7 +16,7 @@ class. The storage examples include:
## StorageManager
### Downloading a file
### Downloading a File
To download a ZIP file from storage to the `global` cache context, call the [StorageManager.get_local_copy](../../references/sdk/storage.md#storagemanagerget_local_copy)
method, and specify the ZIP file's remote location as the `remote_url` argument:
@ -39,7 +39,7 @@ To download a non-compressed file, set the `extract_archive` argument to `False`
manager.get_local_copy(remote_url="s3://MyBucket/MyFolder/file.ext", extract_archive=False)
### Uploading a file
### Uploading a File
To upload a file to storage, call the [StorageManager.upload_file](../../references/sdk/storage.md#storagemanagerupload_file)
method. Specify the full path of the local file as the `local_file` argument, and the remote URL as the `remote_url`
@ -48,7 +48,7 @@ argument.
manager.upload_file(local_file="/mnt/data/also_file.ext", remote_url="s3://MyBucket/MyFolder")
### Setting cache limits
### Setting Cache Limits
To set a limit on the number of files cached, call the [StorageManager.set_cache_file_limit](../../references/sdk/storage.md#storagemanagerset_cache_file_limit)
method and specify the `cache_file_limit` argument as the maximum number of files. This does not limit the cache size,

View File

@ -21,7 +21,7 @@ For this tutorial, use one of the following as a project:
* A project on the demo **ClearML Server** ([https://demoapp.demo.clear.ml/profile](https://demoapp.demo.clear.ml/profile)).
* Clone the [clearml](https://github.com/allegroai/clearml) repository and execute some of the example scripts.
## Step 1: Select a project
## Step 1: Select a Project
The leaderboard will track experiments in one or all projects.
@ -29,7 +29,7 @@ Begin by opening the **ClearML Web UI** and selecting a project, by doing one of
* On the Home page, click a project card or **VIEW ALL**.
* On the Projects page, click a project card or the **All projects** card.
## Step 2: Filter the experiments
## Step 2: Filter the Experiments
The experiments table allows filtering experiments by experiment name, type, and status.
@ -52,7 +52,7 @@ The experiments table allows filtering experiments by experiment name, type, and
* **Aborted** - The experiment ran and was manually or programmatically terminated.
* **Published** - The experiment is not running, it is preserved as read-only.
## Step 3: Hide the defaults column
## Step 3: Hide the Defaults Column
Customize the columns on the tracking leaderboard by hiding any of the default columns shown below.
@ -70,7 +70,7 @@ Customize the columns on the tracking leaderboard by hiding any of the default c
* **UPDATED** - The elapsed time since the experiment update.
* **ITERATION** - The last iteration of the experiment. For experiments with a status of Running, this is the most recent iteration. For Completed, Aborted, and Failed experiments, this is the final iteration.
## Step 4: Show metrics or hyperparameters
## Step 4: Show Metrics or Hyperparameters
The leaderboard can contain any combination of metrics and hyperparameters. For each metric, choose whether to view the last (most
recent), minimum, and / or maximum values.
@ -82,7 +82,7 @@ recent), minimum, and / or maximum values.
the leaderboard, and select the metric values (min / max / last).
1. For hyperparameters, click **+ HYPER PARAMETERS**, and then select the hyperparameter checkboxes of those to show in the leaderboard.
## Step 5: Enable auto refresh
## Step 5: Enable Auto Refresh
Auto refresh allows monitoring the progress of experiments in real time. It is enabled by default.
@ -90,7 +90,7 @@ Auto refresh allows monitoring the progress of experiments in real time. It is e
* Hover over refresh and then check / uncheck the **Auto Refresh** checkbox.
## Step 6: Save the tracking leaderboard
## Step 6: Save the Tracking Leaderboard
The URL for **ClearML Web UI** now contains parameters and values for the customized leaderboard. Bookmark it to be able
to return to the leaderboard and monitor the experiments.

View File

@ -12,13 +12,13 @@ example script.
for the TensorFlow examples.
* Have **ClearML Agent** [installed and configured](../../clearml_agent.md#installation).
## Step 1: Run the experiment
## Step 1: Run the Experiment
In the `examples/frameworks/pytorch` directory, run the experiment script:
python pytorch_mnist.py
## Step 2: Clone the experiment
## Step 2: Clone the Experiment
Clone the experiment to create an editable copy for tuning.
@ -28,7 +28,7 @@ Clone the experiment to create an editable copy for tuning.
1. In the context menu, click **Clone** **>** **CLONE**. The newly cloned experiment appears and its info panel slides open.
## Step 3: Tune the cloned experiment
## Step 3: Tune the Cloned Experiment
To demonstrate tuning, change two hyperparameter values.
@ -40,7 +40,7 @@ To demonstrate tuning, change two hyperparameter values.
1. Click **SAVE**.
## Step 4: Run a worker daemon listening to a queue
## Step 4: Run a Worker Daemon Listening to a Queue
To execute the cloned experiment, use a worker that can run a worker daemon listening to a queue.
@ -78,7 +78,7 @@ Run the worker daemon on the local development machine.
Running CLEARML-AGENT daemon in background mode, writing stdout/stderr to /home/<username>/.clearml_agent_daemon_outym6lqxrz.txt
## Step 5: Enqueue the tuned experiment
## Step 5: Enqueue the Tuned Experiment
Enqueue the tuned experiment.
@ -92,7 +92,7 @@ Enqueue the tuned experiment.
the status becomes Running. The progress of the experiment can be viewed in the info panel. When the status becomes
Completed, continue to the next step.
## Step 6: Compare the experiments
## Step 6: Compare the Experiments
To compare the original and tuned experiments:
1. In the **ClearML Web-App (UI)**, on the Projects page, click the `examples` project.

View File

@ -8,7 +8,7 @@ Metadata can be customized as needed using: **meta** dictionaries:
## Usage
### Adding Frame metadata
### Adding Frame Metadata
When instantiating a Frame, metadata that applies for the entire frame can be
added as an argument.
@ -28,7 +28,7 @@ frame = SingleFrame(
frame.metadata['dangerous'] = 'no'
```
### Adding ROI metadata
### Adding ROI Metadata
Metadata can be added to individual ROIs when adding an annotation to a `frame`, using the `add_annotation`
method.

View File

@ -80,7 +80,7 @@ myDataset = DatasetVersion.create_new_dataset(dataset_name='myDataset',
description='some description text')
```
### Accessing current Dataset
### Accessing Current Dataset
To get the current Dataset, use the `DatasetVersion.get_current` method.
@ -134,7 +134,7 @@ a Dataset version that yields a parent with two children, or when publishing the
Manage Dataset versioning using the DatasetVersion class in the ClearML Enterprise SDK.
### Creating snapshots
### Creating Snapshots
If the Dataset contains only one version whose status is *Draft*, snapshots of the current version can be created.
When creating a snapshot, the current version becomes the snapshot (it keeps the same version ID),
@ -143,7 +143,7 @@ and the newly created version (with its new version ID) becomes the current vers
To create a snapshot, use the `DatasetVersion.create_snapshot` method.
#### Snapshot naming
#### Snapshot Naming
In the simple version structure, ClearML Enterprise supports two methods for snapshot naming:
* **Timestamp naming** - If only the Dataset name or ID is provided, the snapshot is named `snapshot` with a timestamp
@ -172,7 +172,7 @@ In the simple version structure, ClearML Enterprise supports two methods for sna
The newly created version (with a new version ID) becomes the current version, and its name is `Current`.
#### Current version naming
#### Current Version Naming
In the simple version structure, ClearML Enterprise supports two methods for current version naming:
@ -189,7 +189,7 @@ myDataset = DatasetVersion.create_snapshot(dataset_name='MyDataset',
child_name='NewCurrentVersionName')
```
#### Adding metadata and comments
#### Adding Metadata and Comments
Add a metadata dictionary and / or comment to a snapshot.
@ -201,7 +201,7 @@ myDataset = DatasetVersion.create_snapshot(dataset_name='MyDataset',
child_comment='some text comment')
```
### Creating child versions
### Creating Child Versions
Create a new version from any version whose status is *Published*.
@ -230,7 +230,7 @@ myVersion = DatasetVersion.create_version(dataset_name='MyDataset',
raise_if_exists=True))
```
### Creating root-level parent versions
### Creating Root-level Parent Versions
Create a new version at the root-level. This is a version without a parent, and it contains no frames.
@ -239,7 +239,7 @@ myDataset = DatasetVersion.create_version(dataset_name='MyDataset',
version_name='NewRootVersion')
```
### Getting versions
### Getting Versions
To get a version or versions, use the `DatasetVersion.get_version` and `DatasetVersion.get_versions`
methods, respectively.
@ -279,7 +279,7 @@ myDatasetversion = DatasetVersion.get_version(dataset_name='MyDataset',
version_name='VersionName')
```
### Deleting versions
### Deleting Versions
Delete versions whose status is *Draft* using the `Dataset.delete_version` method.
@ -291,7 +291,7 @@ myDataset.delete_version(version_name='VersionToDelete')
```
### Publishing versions
### Publishing Versions
Publish (make read-only) versions whose status is *Draft* using the `Dataset.publish_version` method. This includes the current version, if the Dataset is in
the simple version structure.

View File

@ -41,7 +41,7 @@ A frame filter contains the following criteria:
Use combinations of these frame filters to build sophisticated queries.
## Debiasing input data
## Debiasing Input Data
Apply debiasing to each frame filter to adjust for an imbalance in input data. Ratios (weights) enable setting the proportion
of frames that are input, according to any of the criteria in a frame filter, including ROI labels, frame metadata,
@ -52,7 +52,7 @@ you want to input the same number of both. To debias the data, create two frame
of `1`, and the other for `nighttime` with a ratio of `5`. The Dataview will iterate approximately an equal number of
SingleFrames for each.
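The ratio arithmetic behind this example can be sketched in plain Python (the frame counts are hypothetical; the actual weighted sampling is performed by the Dataview):

```python
# Hypothetical imbalance: five times more daytime frames than nighttime.
frame_counts = {"daytime": 5000, "nighttime": 1000}

# Debiasing ratios (weights) assigned to the two frame filters.
weights = {"daytime": 1.0, "nighttime": 5.0}

# Effective sampling mass per filter, and the resulting proportions.
mass = {name: frame_counts[name] * weights[name] for name in frame_counts}
total = sum(mass.values())
proportions = {name: mass[name] / total for name in mass}

print(proportions)  # {'daytime': 0.5, 'nighttime': 0.5}
```

With these weights, both filters contribute an equal share of iterated frames.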
## ROI Label mapping (label translation)
## ROI Label Mapping (Label Translation)
ROI label mapping (label translation) applies to the new model. For example, apply mapping to:
@ -60,12 +60,12 @@ ROI label mapping (label translation) applies to the new model. For example, app
* Consolidate disparate datasets containing different names for the ROI.
* Hide labeled objects from the training process.
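A minimal sketch of the mapping idea, assuming illustrative label names (the real mapping is configured on the Dataview, not in application code like this):

```python
# Hypothetical ROI label mapping: translate dataset-specific names into the
# labels the new model trains on. Labels with no mapping are ignored (hidden).
label_mapping = {
    "automobile": "car",   # consolidate disparate dataset vocabularies
    "car": "car",
    "pedestrian": "person",
}

def map_label(raw_label):
    # Returning None stands in for "hidden from the training process".
    return label_mapping.get(raw_label)

print(map_label("automobile"))  # car
print(map_label("traffic_light"))  # None
```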
## Class label enumeration
## Class Label Enumeration
Define class labels for the new model and assign integers to each in order to maintain data conformity across multiple
codebases and datasets. It is important to set enumeration values for all labels of importance.
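For instance, a label enumeration is just a stable name-to-integer mapping (the names below are illustrative):

```python
# Hypothetical class label enumeration shared across codebases and datasets.
label_enumeration = {
    "background": 0,
    "car": 1,
    "truck": 2,
    "bicycle": 3,
}

# Each label of importance gets a unique, stable integer.
assert len(set(label_enumeration.values())) == len(label_enumeration)
```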
## Data augmentation
## Data Augmentation
On-the-fly data augmentation is applied to SingleFrames, transforming images without creating new data. Apply data augmentation
in steps, where each step is composed of a method, an operation, and a strength as follows:
@ -99,7 +99,7 @@ in steps, where each step is composed of a method, an operation, and a strength
* 1.0 - Medium (recommended)
* 2.0 - High (strong)
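Conceptually, an augmentation pipeline is an ordered list of such steps (the method and operation names below are illustrative, not the SDK's exact vocabulary):

```python
# Hypothetical augmentation pipeline: each step pairs a method and operation
# with a strength (0.5 low, 1.0 medium/recommended, 2.0 high/strong).
augmentation_steps = [
    {"method": "affine", "operation": "rotate", "strength": 1.0},
    {"method": "pixel", "operation": "blur", "strength": 0.5},
]

# Steps are applied on the fly, in order, without creating new data on disk.
for step in augmentation_steps:
    assert 0.0 < step["strength"] <= 2.0
```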
## Iteration control
## Iteration Control
The input data **iteration control** settings determine the order, number, timing, and reproducibility of the Dataview iterating
SingleFrames. Depending upon the combination of iteration control settings, not all SingleFrames may be iterated, and some
@ -141,7 +141,7 @@ from allegroai import DataView, IterationOrder
myDataView = DataView(iteration_order=IterationOrder.random, iteration_infinite=True)
```
### Adding queries
### Adding Queries
To add a query to a DataView, use the `DataView.add_query` method and specify Dataset versions,
ROI and / or frame queries, and other criteria.
@ -154,7 +154,7 @@ specify the queries.
Multiple queries can be added to the same or different Dataset versions, each query with the same or different ROI
and / or frame queries.
#### ROI queries:
#### ROI Queries:
* ROI query for a single label
@ -206,7 +206,7 @@ myDataView.add_query(dataset_name='myDataset', version_name='training',
roi_query='label.keyword:\"Car\" AND NOT label.keyword:\"partly_occluded\"')
```
#### Querying multiple Datasets and versions
#### Querying Multiple Datasets and Versions
This example demonstrates an ROI query filtering for frames containing the ROI labels `car`, `truck`, or `bicycle`
from two versions of one Dataset, and one version of another Dataset.
@ -234,7 +234,7 @@ myDataView.add_query(dataset_name='dataset_2',
```
#### Frame queries
#### Frame Queries
Use frame queries to filter frames by ROI labels and / or frame metadata key-value pairs that a frame must include or
exclude for the DataView to return the frame.
@ -252,13 +252,13 @@ myDataView.add_query(dataset_name='myDataset',
```
### Controlling query iteration
### Controlling Query Iteration
Use `DataView.set_iteration_parameters` to manage the order, number, timing, and reproducibility of frames
for training.
#### Iterate frames infinitely
#### Iterate Frames Infinitely
This example demonstrates creating a Dataview and setting its parameters to iterate infinitely until the script is
manually terminated.
@ -271,7 +271,7 @@ myDataView = DataView()
myDataView.set_iteration_parameters(order=IterationOrder.random, infinite=True)
```
#### Iterate all frames matching the query
#### Iterate All Frames Matching the Query
This example demonstrates creating a DataView and setting its parameters to iterate and return all frames matching a query.
```python
@ -287,7 +287,7 @@ myDataView.add_query(dataset_name='myDataset',
version_name='myVersion', roi_query='cat')
```
#### Iterate a maximum number of frames
#### Iterate a Maximum Number of Frames
This example demonstrates creating a DataView and setting its parameters to iterate a specific number of frames. If the
Dataset version contains fewer than that number of frames matching the query, then fewer are returned by the iterator.
@ -301,7 +301,7 @@ myDataView.set_iteration_parameters(
maximum_number_of_frames=5000)
```
### Debiasing input data
### Debiasing Input Data
Debias input data using the `DataView.add_query` method's `weight` argument to add weights. This
is the same `DataView.add_query` that can be used to specify Dataset versions, and ROI queries and frame queries.

View File

@ -100,7 +100,7 @@ myVersion.update_frames(frames)
```
### Deleting frames
### Deleting Frames
To delete a FrameGroup, use the `DatasetVersion.delete_frames` method, just like when deleting a
SingleFrame, except that a FrameGroup is being referenced.

View File

@ -30,7 +30,7 @@ See [Example 1](#example-1), which shows `masks` in `sources`, `mask` in `rois`,
a mask to its source in a frame.
## Masks structure
## Masks Structure
The chart below explains the keys and values of the `masks` dictionary (in the [`sources`](sources.md)
section of a Frame).
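As a rough sketch (field names follow the pattern of the examples on this page; the URIs and IDs are illustrative), a source entry carrying a mask might look like:

```python
# Illustrative `sources` entry with a mask; the mask `id` is what an ROI's
# `mask` section points back to in order to link mask and source.
frame_source = {
    "id": "front",
    "content_type": "video/mp4",
    "uri": "https://example.com/videos/front.mp4",
    "masks": [
        {
            "id": "seg",
            "content_type": "video/mp4",
            "uri": "https://example.com/videos/front_seg.mp4",
            "timestamp": 0,
        }
    ],
}

print(frame_source["masks"][0]["id"])  # seg
```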

View File

@ -19,7 +19,7 @@ list of source IDs. Those IDs connect `sources` to ROIs.
The examples below demonstrate the `sources` section of a Frame for different types of content.
### Example 1: Video sources
### Example 1: Video Sources
This example demonstrates `sources` for video.
@ -70,7 +70,7 @@ is the source with the ID `front` and the other is the source with the ID `rear`
Sources includes a variety of content types. This example shows mp4 video.
:::
### Example 2: Images sources
### Example 2: Image Sources
This example demonstrates `sources` for images.
<details className="cml-expansion-panel info">
@ -101,7 +101,7 @@ The `sources` of this frame contains the following information:
* `timestamp` is 0 (timestamps are used for video).
### Example 3: Sources and regions of interest
### Example 3: Sources and Regions of Interest
This example demonstrates `sources` for video, `masks`, and `preview`.

View File

@ -42,30 +42,30 @@ Use annotation tasks to efficiently organize the annotation of frames in Dataset
1. Click **Create**.
### Completing annotation tasks
### Completing Annotation Tasks
To mark an annotation task as **Completed**:
* In the annotation task card, click <img src="/docs/latest/icons/ico-bars-menu.svg" className="icon size-md space-sm" /> (menu) **>** **Complete** **>** **CONFIRM**.
### Deleting annotation tasks
### Deleting Annotation Tasks
To delete an annotation task:
* In the annotation task card, click <img src="/docs/latest/icons/ico-bars-menu.svg" className="icon size-md space-sm" /> (menu) **>** **Delete** **>** **CONFIRM**.
### Filtering annotation tasks
### Filtering Annotation Tasks
There are two options for filtering annotation tasks:
* Active / Completed Filter - Toggle to show annotation tasks that are either **Active** or **Completed**
* Dataset Filter - Use to view only the annotation tasks for a specific Dataset.
### Sorting annotation tasks
### Sorting Annotation Tasks
Sort the annotation tasks by either using **RECENT** or **NAME** from the drop-down menu on the top left of the page.
### Viewing annotation task information
### Viewing Annotation Task Information
To view the Dataset version, filters, and iteration information:
@ -87,7 +87,7 @@ depend upon the settings in the annotation task (see [Creating Annotation Tasks]
1. See instructions below about annotating frames.
#### Add FrameGroup objects
#### Add FrameGroup Objects
1. Select an annotation mode and add the bounded area to the frame image.
@ -100,63 +100,63 @@ depend upon the settings in the annotation task (see [Creating Annotation Tasks]
1. Optionally, add metadata.
1. Optionally, lock the annotation.
#### Add frame labels
#### Add Frame Labels
1. In **FRAME LABEL**, click **+ Add new**.
1. In the new label area, choose or enter a label.
1. Optionally, add metadata.
1. Optionally, lock the annotation.
#### Copy / paste an annotations
#### Copy / Paste an Annotation
1. Click the annotation or bounded area in the image or video clip.
1. Optionally, navigate to a different frame.
1. Click **PASTE**. The new annotation appears in the same location as the one you copied.
1. Optionally, to paste the same annotation again, click **PASTE**.
#### Copy / paste all annotations
#### Copy / Paste All Annotations
1. Click **COPY ALL**.
1. Optionally, navigate to a different frame.
1. Click **PASTE**.
#### Move annotations
#### Move Annotations
* Move a bounded area by clicking on it and dragging.
#### Resize annotations
#### Resize Annotations
* Resize a bounded area by clicking on a vertex and dragging.
#### Delete annotations
#### Delete Annotations
1. Click the annotation or bounded area in the image or video clip.
1. Press **DELETE** or in the annotation, click **>X**.
#### Add labels
#### Add Labels
* Click in the annotation and choose a label from the label list, or type a new label.
#### Modify labels
#### Modify Labels
* In the annotation label textbox, choose a label from the list or type a new label.
#### Delete labels
#### Delete Labels
* In the annotation, in the label area, click the label's **X**.
#### Modify annotation metadata
#### Modify Annotation Metadata
* In the label, click edit and then in the popup modify the metadata dictionary (in JSON format).
#### Modify annotation color
#### Modify Annotation Color
* Modify the color of an area by clicking the circle in the label name and selecting a new color.
#### Lock / unlock annotations
#### Lock / Unlock Annotations
* Click the lock.
#### Modify frame metadata
#### Modify Frame Metadata
* Expand the **FRAME METADATA** area, click edit, and then in the popup modify the metadata dictionary (in JSON format).

View File

@ -10,7 +10,7 @@ The Datasets page offers the following functionalities:
![image](../../img/hyperdatasets/datasets_01.png)
## Dataset cards
## Dataset Cards
Dataset cards show summary information about versions, frames, and labels in a Dataset, the elapsed time since the Dataset was last updated, and the user who made the update. Dataset cards allow you to open a specific Dataset to perform Dataset versioning and frames management.
@ -26,7 +26,7 @@ Dataset cards show summary information about versions, frames, and labels in a D
To change the label color coding, hover over a label color, click the hand pointer, and then select a new color.
:::
## Creating new Datasets
## Creating New Datasets
Create a new Dataset which will contain one version named `Current`. The new version will not contain any frames.

View File

@ -9,7 +9,7 @@ filtering logic.
![Dataset page](../../img/hyperdatasets/frames_01.png)
## Frame viewer
## Frame Viewer
Frame viewer allows you to view and edit annotations, which can be FrameGroup objects (Regions of Interest) or frame
labels applied to the entire frame rather than a region of it, the frame details (see [frames](../frames.md)),
@ -17,7 +17,7 @@ frame metadata, the raw data source URI, as well as providing navigation and vie
![Frame viewer](../../img/hyperdatasets/web-app/dataset_example_frame_editor.png)
### Frame viewer controls
### Frame Viewer Controls
Use frame viewer controls to navigate between frames in a Dataset Version, and control frame changes and viewing.
@ -39,14 +39,14 @@ Use frame viewer controls to navigate between frames in a Dataset Version, and c
#### Additional keyboard shortcuts
**General controls**
**General Controls**
|Control|Action|
|----|-----|
|Hold Spacebar + Press and hold image + Drag| Move around image. NOTE: If using a touchpad, this only works if the *Disable touchpad while typing* setting is turned off |
|Esc | Escape frame viewer and return to dataset page |
**General annotation controls**
**General Annotation Controls**
|Control|Action|
|----|-----|
@ -63,7 +63,7 @@ Use frame viewer controls to navigate between frames in a Dataset Version, and c
| Enter | Key points (<img src="/docs/latest/icons/ico-keypoint-icon-purple.svg" alt="Key points mode" className="icon size-md space-sm" />) | Complete annotation |
| Esc | Key points (<img src="/docs/latest/icons/ico-keypoint-icon-purple.svg" alt="Key points mode" className="icon size-md space-sm" />), Polygon (<img src="/docs/latest/icons/ico-polygon-icon-purple.svg" alt="Polygon mode" className="icon size-md space-sm" />) | Cancel annotation process |
### Viewing and editing frames
### Viewing and Editing Frames
**To view / edit a frame in the frame editor**
@ -94,8 +94,8 @@ a dropdown list in the **Current Source** section.
![Frame dropdown menu in FrameGroup](../../img/hyperdatasets/framegroup_01.png)
## Filtering frames
### Simple frame filtering
## Filtering Frames
### Simple Frame Filtering
Simple frame filtering applies one annotation object (ROI) label and returns frames containing at least one annotation
with that label.
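The filter's semantics can be sketched as a simple predicate (the frame representation here is hypothetical):

```python
# A frame passes the simple filter if at least one of its annotations (ROIs)
# carries the selected label.
def matches_simple_filter(annotations, label):
    return any(label in roi.get("labels", []) for roi in annotations)

frame_annotations = [
    {"labels": ["person", "partly_occluded"]},
    {"labels": ["car"]},
]

print(matches_simple_filter(frame_annotations, "person"))  # True
print(matches_simple_filter(frame_annotations, "bicycle"))  # False
```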
@ -130,7 +130,7 @@ For example:
</div>
</details>
### Advanced frame filtering
### Advanced Frame Filtering
Advanced frame filtering applies sophisticated filtering logic, which is composed of as many frame filters as needed,
where each frame filter can be a combination of ROI, frame, and source rules.
@ -156,7 +156,7 @@ where each frame filter can be a combination of ROI, frame, and source rules.
### Examples
#### ROI rules
#### ROI Rules
* Creating one ROI rule for <code>person</code> shows the same three frames as the simple frame filter (above).
@ -194,7 +194,7 @@ where each frame filter can be a combination of ROI, frame, and source rules.
<br/>
#### Frame rules
#### Frame Rules
Filter by metadata using Lucene queries.
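For example, frame rules use Lucene query syntax over metadata key-value pairs; the field names below are illustrative:

```python
# Illustrative Lucene-style frame rules over frame metadata.
daytime_rule = 'meta.time_of_day:"day" AND meta.weather:"sunny"'
range_rule = "meta.confidence:[0.8 TO 1.0]"  # Lucene numeric range syntax
```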
@ -223,7 +223,7 @@ Filter by metadata using Lucene queries.
<br/>
#### Source rules
#### Source Rules
Filter by sources using Lucene queries.
@ -243,7 +243,7 @@ Use Lucene queries in ROI label filters and frame rules.
## Annotations
### Frame objects (Regions of Interest)
### Frame Objects (Regions of Interest)
You can add annotations by drawing new bounding areas, and copying existing annotations in the same or other frames.
@ -288,7 +288,7 @@ You can add annotations by drawing new bounding areas, and copying existing anno
1. Optionally, navigate to a different frame.
1. Click **PASTE**.
### Frame labels
### Frame Labels
**To add frame labels:**
@ -297,7 +297,7 @@ You can add annotations by drawing new bounding areas, and copying existing anno
1. Enter a label.
1. Optionally, click <img src="/docs/latest/icons/ico-edit.svg" className="icon size-md space-sm" />.
### Annotation management
### Annotation Management
**To move annotations:**
@ -331,7 +331,7 @@ You can add annotations by drawing new bounding areas, and copying existing anno
* Change - In the annotation label textbox, choose a label from the list or type a new label.
* Delete - In the annotation, in the label area, click the label's **X**.
## Frame metadata
## Frame Metadata
**To edit frame metadata:**

View File

@ -7,7 +7,7 @@ deleting Dataset versions.
From the Datasets page, click on one of the Datasets in order to see and work with its versions.
### Viewing snapshots
### Viewing Snapshots
View snapshots in the simple version structure using either:
@ -35,7 +35,7 @@ chronological order, with oldest at the top, and the most recent at the bottom.
</div>
</details>
### Creating snapshots
### Creating Snapshots
To create a snapshot, you must be in the simple (version table) view.
@ -54,7 +54,7 @@ To create a snapshot, you must be in the simple (version table) view.
The WebApp (UI) does not currently support the automatic naming of snapshots with timestamps appended. You must provide a snapshot name.
:::
### Creating versions
### Creating Versions
To create a version, you must be in the advanced (version tree) view.
@ -66,7 +66,7 @@ To create a version, you must be in the advanced (version tree) view.
1. Enter a version name, and optionally a description.
1. Click **CREATE**.
### Publishing versions
### Publishing Versions
Publish (make read-only) any Dataset version whose status is *Draft*. If the Dataset is in the simple structure,
and you publish the current version, then only the advanced view is available,
@ -79,7 +79,7 @@ and you cannot create snapshots.
1. Click **PUBLISH**.
1. Click **PUBLISH** again to confirm.
### Exporting frames
### Exporting Frames
Exporting frames downloads the filtered frames as a JSON file.
@ -88,20 +88,20 @@ Frame exports downloaded filtered frames as a JSON file.
* In the Thumbnails area, click **EXPORT FRAMES**. The frames JSON file downloads.
### Modifying version names
### Modifying Version Names
**To modify a Dataset version name, do the following:**
* At the top right of the Dataset page, hover over the Dataset version name, click <img src="/docs/latest/icons/ico-edit.svg" className="icon size-md space-sm" /> , edit the name, and then click <img src="/docs/latest/icons/ico-save.svg" className="icon size-md space-sm" /> (check).
### Modifying version descriptions
### Modifying Version Descriptions
**To modify a version description, do the following:**
* Expand the **INFO** area, hover over the **Description**, click <img src="/docs/latest/icons/ico-edit.svg" className="icon size-md space-sm" />,
edit the name, and then click <img src="/docs/latest/icons/ico-save.svg" className="icon size-md space-sm" /> (check).
### Deleting versions
### Deleting Versions
You can delete versions whose status is *Draft*.

View File

@ -19,7 +19,7 @@ to a new position.
* **CREATED** - Elapsed time since the Dataview was created.
* **DESCRIPTION**
## Customizing the Dataviews table
## Customizing the Dataviews Table
The Dataviews table can be customized. Changes are persistent (cached in the browser), and represented in the URL.
Save customized settings in a browser bookmark, and share the URL with teammates.
@ -43,7 +43,7 @@ all the Dataviews in the project. The customizations of these two views are save
:::
## Dataview actions
## Dataview Actions
The following table describes the actions that can be performed from the Dataviews table.
@ -66,7 +66,7 @@ The same information can be found in the bottom menu, in a tooltip that appears
![Dataviews table batch operations](../../img/webapp_dataview_table_batch_operations.png)
## Viewing, adding, and editing Dataviews
## Viewing, Adding, and Editing Dataviews
**To view, add, or edit a Dataview:**

View File

@ -5,7 +5,7 @@ title: Comparing Dataviews
In addition to [**ClearML**'s comparison features](../../webapp/webapp_exp_comparing.md), the ClearML Enterprise WebApp
provides a deep comparison of the input data selection criteria of experiment Dataviews, making it easy to locate, visualize, and analyze differences.
## Selecting experiments
## Selecting Experiments
**To select experiments to compare:**
@ -14,7 +14,7 @@ provides a deep comparison of input data selection criteria of experiment Datavi
1. In the bottom bar, click **COMPARE**. The comparison page appears, showing a column for each experiment and differences with
a highlighted background color. The experiment on the left is the base experiment. Other experiments compare to the base experiment.
## Dataviews (input data)
## Dataviews (Input Data)
**To locate the input data differences:**

View File

@ -40,7 +40,7 @@ are iterated and frame filters (see [Dataviews](webapp_dataviews.md)).
After importing a Dataview, it can be renamed and / or removed.
:::
### Selecting Dataset versions
### Selecting Dataset Versions
To input data from a different data source or different version of a data source, select a different Dataset version used
by the Dataview.
@ -62,7 +62,7 @@ by the Dataview.
1. Click **SAVE**.
## Filtering frames
## Filtering Frames
Filtering of SingleFrames iterated by a Dataview for input to the experiment is accomplished by frame filters.
For more detailed information, see [Filtering](../dataviews.md#filtering).
@ -85,7 +85,7 @@ For more detailed information, see [Filtering](../dataviews.md#filtering).
1. Click **SAVE**.
## Mapping labels (label translation)
## Mapping Labels (Label Translation)
Modify the ROI label mapping rules, which translate one or more input labels to another label for the output model. Labels
that are not mapped are ignored.
@ -105,7 +105,7 @@ that are not mapped are ignored.
1. Click **SAVE**
## Label enumeration
## Label Enumeration
Modify the label enumeration assigned to output models.
@ -121,7 +121,7 @@ Modify the label enumeration assigned to output models.
1. Click **SAVE**.
## Data augmentation
## Data Augmentation
Modify the on-the-fly data augmentation applied to frames input from the selected Dataset versions and filtered by the frame filters. Data augmentation is applied in steps, where each step applies a method, operation, and strength.
@ -137,7 +137,7 @@ For more detailed information, see [Data Augmentation](../dataviews.md#data-augm
1. Click **SAVE**.
## Iteration controls
## Iteration Controls
Modify the frame iteration performed by the Dataview to control the order, number, timing, and reproducibility of frames
for training.

View File

@ -8,7 +8,7 @@ Enterprise WebApp (UI).
In addition to all of **ClearML**'s offerings, ClearML Enterprise keeps track of the Dataviews associated with an
experiment, which can be viewed and [modified](webapp_exp_modifying.md) in the WebApp.
## Viewing an experiment's Dataviews
## Viewing an Experiment's Dataviews
In an experiment's page, go to the **DATAVIEWS** tab to view all the experiment's Dataview details, including:
* Input data [selection](#dataset-versions) and [filtering](#filtering)
@ -48,7 +48,7 @@ ROI label mapping (label translation) applies to the new model. For example, use
For detailed information, see [Mapping ROI labels](../dataviews.md#mapping-roi-labels).
### Label enumeration
### Label Enumeration
Assign label enumeration in the **LABELS ENUMERATION** area.
@ -59,7 +59,7 @@ where each step is composed of a method, an operation, and a strength.
For detailed information, see [Data augmentation](../dataviews.md#data-augmentation).
### Iteration control
### Iteration Control
The input data iteration control settings determine the order, number, timing, and reproducibility of the Dataview iterating
SingleFrames. Depending upon the combination of iteration control settings, not all SingleFrames may be iterated, and some may repeat.

View File

@ -252,7 +252,7 @@ title: Version 0.17
* Fix experiment / model table - name column restores to default size after opening and closing info.
* Fix double click resizer should auto fit column.
### ClearML Hosted Service only
### ClearML Hosted Service Only
* Launched free [ClearML Hosted Service](https://app.community.clear.ml/dashboard).
* Multiple workspaces.

View File

@ -25,7 +25,7 @@ The **ClearML Web UI** provides a deep experiment comparison, allowing to locate
The **ClearML** experiment comparison provides [comparison features](#comparison-features) making it easy to compare experiments.
## Selecting experiments to compare
## Selecting Experiments to Compare
**To select experiments to compare:**
@ -45,7 +45,7 @@ The **DETAILS** tab includes deep comparisons of the following:
* Output model and model design.
* Other artifacts, if any.
### Execution details
### Execution Details
* The Source code - repository, branch, commit ID, script file name, and working directory.
* Uncommitted changes, sorted by file name.
* Installed Python packages and versions, sorted by package name.
@ -55,7 +55,7 @@ The **DETAILS** tab includes deep comparisons of the following:
sorted by sections.
### To locate the source differences:
### To Locate the Source Differences:
* Click the **DETAILS** tab **>** Expand highlighted sections, or, in the header, click <img src="/docs/latest/icons/ico-previous-diff.svg" alt="Previous diff" className="icon size-md" />
(Previous diff) or <img src="/docs/latest/icons/ico-next-diff.svg" alt="next difference" className="icon size-md space-sm" /> (Next diff).
@ -71,7 +71,7 @@ and name are different.
Compare hyperparameters as values, or compare by metric (hyperparameter parallel coordinate comparison).
### Values mode
### Values Mode
The Values mode is a side-by-side comparison that shows hyperparameter value differences highlighted line-by-line.
@ -89,7 +89,7 @@ For example, expanding **General** shows that the `batch_size` and `epochs` diff
![image](../img/webapp_compare_10.png)
### Parallel Coordinates mode
### Parallel Coordinates Mode
In the Parallel Coordinates mode, compare a metric to any combination of hyperparameters using a parallel coordinates plot.
@ -120,7 +120,7 @@ Hover over one of the experiment names in the legend, and the plot shows only th
Visualize the comparison of scalars, which includes metrics and monitored resources in the **SCALARS** tab.
### Compare specific values
### Compare Specific Values
**To compare specific values:**
@ -133,7 +133,7 @@ Visualize the comparison of scalars, which includes metrics and monitored resour
![image](../img/webapp_exp_comparing_scalars.png)
### Compare scalar series
### Compare Scalar Series
Compare scalar series in plots and analyze differences using **ClearML Web UI** plot tools.
@ -187,7 +187,7 @@ in the **PLOTS** tab.
## Debug samples
## Debug Samples
Compare debug samples at any iteration to verify that an experiment is running as expected. The most recent iteration appears
first. Use the viewer / player to inspect image, audio, and video samples and do any of the following:
@ -226,7 +226,7 @@ first. Use the viewer / player to inspect images, audio, video samples and do an
* Zoom
* For images, locate a position on the sample - Hover over the sample and the X, Y coordinates appear in the legend below the sample.
## Comparison features
## Comparison Features
To assist in experiment analysis, the comparison page supports:
@ -241,7 +241,7 @@ To assist in experiment analysis, the comparison page supports:
### Adding experiments to the comparison
### Adding Experiments to the Comparison
To add an experiment to the comparison, click **Add Experiment** and start typing an experiment name. An experiment search
and select dialog appears, showing matching experiments to choose from. To add an experiment, click **+**. To remove
@ -251,26 +251,26 @@ an experiment, click <img src="/docs/latest/icons/ico-trash.svg" alt="Trash" cla
### Finding the next or previous difference
### Finding the Next or Previous Difference
* Find the previous difference <img src="/docs/latest/icons/ico-previous-diff.svg" className="icon size-md space-sm" />, or
the next difference <img src="/docs/latest/icons/ico-next-diff.svg" className="icon size-md space-sm" />.
### Hiding identical fields
### Hiding Identical Fields
Move the **Hide Identical Fields** slider to "on" mode to see only fields that are different.
### Searching all text
### Searching All Text
Search all text in the comparison.
### Choosing a different base experiment
### Choosing a Different Base Experiment
Show differences in other experiments in reference to a new base experiment. To set a new base experiment, do one of the following:
* Click on <img src="/docs/latest/icons/ico-switch-base.svg" className="icon size-md space-sm" /> on the top right of the experiment that will be the new base.
@ -280,20 +280,20 @@ Show differences in other experiments in reference to a new base experiment. To
### Dynamic ordering of the compared experiments
### Dynamic Ordering of the Compared Experiments
To reorder the experiments being compared, press <img src="/docs/latest/icons/ico-pan.svg" className="icon size-md space-sm" /> on the top right of the experiment that
needs to be moved, and drag the experiment to its new position.
![image](../img/webapp_compare_21.png)
### Removing an experiment from the comparison
### Removing an Experiment from the Comparison
Remove an experiment from the comparison by pressing <img src="/docs/latest/icons/ico-remove-compare.svg" className="icon size-md space-sm" />
on the top right of the experiment that needs to be removed.
![image](../img/webapp_compare_23.png)
### Sharing experiments
### Sharing Experiments
To share a comparison table, copy the full URL from the address bar and send it to a teammate to collaborate. They will
get the exact same page (including selected tabs, etc.).
View File
@ -14,7 +14,7 @@ them.
When a user opens the hyperlink for a shared experiment in their browser, only that experiment appears in the experiment table.
:::
## Sharing experiments
## Sharing Experiments
Share experiments from the experiments table, the info panel menu, and/or the full screen details menu.
@ -30,7 +30,7 @@ Share experiments from the experiments table, the info panel menu, and/or the fu
1. Copy the hyperlink and send it to a **ClearML Hosted Service** user of another workspace.
## Making shared experiment private
## Making Shared Experiment Private
**To make a shared experiment private again:**
View File
@ -14,7 +14,7 @@ including:
* [Plots](#other-plots) - Other plots and data, for example: Matplotlib, Plotly, and **ClearML** explicit reporting.
* [Debug samples](#debug-samples) - Images, audio, video, and HTML.
## Viewing modes
## Viewing Modes
The **ClearML Web UI** provides two viewing modes for experiment details:
@ -27,7 +27,7 @@ Both modes contain all experiment details. When either view is open, switch to t
table / full screen**.
### Info panel
### Info Panel
The info panel keeps the experiment table in view so that [experiment actions](webapp_exp_table#clearml-actions-from-the-experiments-table)
can be performed from the table (as well as the menu in the info panel).
@ -41,7 +41,7 @@ can be performed from the table (as well as the menu in the info panel).
</div>
</details>
### Full screen details view
### Full Screen Details View
The full screen details view allows for easier viewing and working with experiment tracking and results. The experiments
table is not visible when the full screen details view is open. Perform experiment actions from the menu.
@ -56,7 +56,7 @@ table is not visible when the full screen details view is open. Perform experime
</details>
## Execution details
## Execution Details
In the EXECUTION tab of an experiment's detail page, there are records of:
* Source code
* **ClearML Agent** configuration
@ -65,7 +65,7 @@ In the EXECUTION tab of an experiment's detail page, there are records of:
* Installed Python packages
### Source code, ClearML Agent configuration, and output details
### Source Code, ClearML Agent Configuration, and Output Details
The source code details of the EXECUTION tab of an experiment include:
* The experiment's repository
@ -94,7 +94,7 @@ The output details include:
</details>
### Uncommitted changes
### Uncommitted Changes
<details className="cml-expansion-panel screenshot">
<summary className="cml-expansion-panel-summary">View a screenshot</summary>
@ -106,7 +106,7 @@ The output details include:
</details>
### Installed Python packages and their versions
### Installed Python Packages and Their Versions
<details className="cml-expansion-panel screenshot">
<summary className="cml-expansion-panel-summary">View a screenshot</summary>
<div className="cml-expansion-panel-content">
@ -129,7 +129,7 @@ In older versions of **ClearML Server**, the **CONFIGURATION** tab was named **H
Hyperparameters are grouped by their type and appear in **CONFIGURATION** **>** **HYPER PARAMETERS**.
#### Command line arguments
#### Command Line Arguments
The **Args** parameter group shows automatically logged `argparse` arguments, and all parameters from older experiments, except TensorFlow Definitions. Hover over a parameter, and the type, description, and default value appear, if they were provided.
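As a sketch of how such arguments reach the **Args** group (the project and task names are hypothetical, and the `Task.init` line is commented out so the snippet runs standalone without a ClearML server):

```python
import argparse

# from clearml import Task
# task = Task.init(project_name="examples", task_name="argparse demo")  # would auto-log the args below

parser = argparse.ArgumentParser()
parser.add_argument("--batch_size", type=int, default=32, help="training batch size")
parser.add_argument("--epochs", type=int, default=10, help="number of training epochs")
args = parser.parse_args([])  # parse defaults here; in a real run, call parse_args() with no list

# Each argument would then appear in CONFIGURATION > HYPER PARAMETERS > Args
print(args.batch_size, args.epochs)
```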
@ -143,7 +143,7 @@ The **Args** parameter group shows automatically logged `argparse` arguments, an
</details>
#### Environment variables
#### Environment Variables
If the `CLEARML_LOG_ENVIRONMENT` variable was set, the **Environment** group will show environment variables (see [this FAQ](../faq#track-env-vars)).
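A minimal sketch of enabling this from code, before the Task starts (assumption: `"*"` means "log all variables" — see the FAQ linked above for the exact accepted values):

```python
import os

# Assumption: setting CLEARML_LOG_ENVIRONMENT before Task.init causes environment
# variables to be logged under the Environment parameter group
os.environ["CLEARML_LOG_ENVIRONMENT"] = "*"
# os.environ["CLEARML_LOG_ENVIRONMENT"] = "MY_VAR,OTHER_VAR"  # or a specific comma-separated list

print(os.environ["CLEARML_LOG_ENVIRONMENT"])
```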
@ -157,7 +157,7 @@ If the `CLEARML_LOG_ENVIRONMENT` variable was set, the **Environment** group wil
</details>
#### Custom parameter groups
#### Custom Parameter Groups
Custom parameter groups show parameter dictionaries if the parameters were connected to the Task using the `Task.connect` method
with a `name` argument provided.
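A minimal sketch of connecting a named parameter dictionary (the dictionary contents and names are hypothetical; the `Task` calls are commented out so the fragment stands alone without a ClearML server):

```python
# Hypothetical parameter dictionary; connecting it with a name creates a custom group
training_params = {"optimizer": "adam", "learning_rate": 0.001, "dropout": 0.25}

# from clearml import Task
# task = Task.init(project_name="examples", task_name="custom groups demo")
# task.connect(training_params, name="TrainingConfig")
# -> the values would appear under CONFIGURATION > HYPER PARAMETERS > TrainingConfig

print(training_params["optimizer"])
```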
@ -186,7 +186,7 @@ The **TF_DEFINE** parameter group shows automatic TensorFlow logging.
Once an experiment is run and stored in **ClearML Server**, any of these hyperparameters can be [modified](webapp_exp_tuning.md#modifying-experiments).
### User properties
### User Properties
User properties allow storing any descriptive information in key-value pair format. They are editable in any experiment,
except experiments whose status is *Published* (read-only).
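A sketch of setting user properties from code (keys and values are hypothetical; the server-dependent lines are commented out — `Task.set_user_properties` is the SDK call assumed here):

```python
# Hypothetical key-value pairs to attach to an experiment
user_properties = {"owner": "alice", "dataset_version": "v2.1"}

# from clearml import Task
# task = Task.init(project_name="examples", task_name="user properties demo")
# task.set_user_properties(**user_properties)
# -> the pairs would appear under CONFIGURATION > USER PROPERTIES

print(user_properties["owner"])
```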
@ -201,7 +201,7 @@ except experiments whose status is *Published* (read-only).
</details>
### Configuration objects
### Configuration Objects
**ClearML** tracks experiment (Task) model configuration objects, which appear in **Configuration Objects** **>** **General**.
These objects include those that are automatically tracked, and those connected to a Task in code (see [Task.connect_configuration](../references/sdk/task.md#connect_configuration)).
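A minimal sketch of connecting a configuration object in code (the configuration contents are hypothetical; the `Task` calls are commented out so the fragment runs without a ClearML server):

```python
# Hypothetical model configuration dictionary to track as a configuration object
model_config = {"layers": [64, 64], "activation": "relu"}

# from clearml import Task
# task = Task.init(project_name="examples", task_name="config object demo")
# model_config = task.connect_configuration(model_config, name="General")
# -> the object would appear under CONFIGURATION OBJECTS > General

print(model_config["activation"])
```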
@ -266,7 +266,7 @@ including design, label enumeration, and general information, go to the **MODELS
</details>
### Other artifacts
### Other Artifacts
**To retrieve another artifact:**
@ -276,7 +276,7 @@ including design, label enumeration, and general information, go to the **MODELS
* Copy its location to the clipboard <img src="/docs/latest/icons/ico-clipboard.svg" alt="Copy Clipboard" className="icon size-md space-sm" />,
if it is in a local file.
#### Data audit
#### Data Audit
Artifacts which are uploaded and dynamically tracked by **ClearML** appear in the **DATA AUDIT** section. They include the file path, file size, hash, and metadata stored with the artifact.
@ -306,7 +306,7 @@ Other artifacts, which are uploaded but not dynamically tracked after the upload
## General information
## General Information
General experiment details appear in the **INFO** tab. This includes information describing the stored experiment:
* The parent experiment
@ -339,7 +339,7 @@ General experiment details appear in the **INFO** tab. This includes information
## Experiment results
## Experiment Results
@ -364,7 +364,7 @@ is downloadable. To view the end of the log, click **Jump to end**.
All scalars that **ClearML** automatically logs, as well as those explicitly reported in code, appear in **RESULTS** **>** **SCALARS**.
#### Scalar plot tools
#### Scalar Plot Tools
Use the scalar tools to improve analysis of scalar metrics. In the info panel, click <img src="/docs/latest/icons/ico-settings.svg" className="icon size-md space-sm" /> to use the tools. In the full screen details view, the tools
are on the left side of the window. The tools include:
@ -417,7 +417,7 @@ Individual plots can be shown / hidden or filtered by title.
</details>
#### Plot controls
#### Plot Controls
The table below lists the plot controls which may be available for any plot (in the **SCALARS** and **PLOTS** tabs).
These controls allow you to better analyze the results. Hover over a plot, and the controls appear.
@ -441,7 +441,7 @@ These controls allow you to better analyze the results. Hover over a plot, and t
| <img src="/docs/latest/icons/ico-download-json.svg" alt="Download JSON icon" className="icon size-sm space-sm" /> | To get metric data for further analysis, download plot data to a JSON file. |
| <img src="/docs/latest/icons/ico-maximize.svg" alt="Maximize plot icon" className="icon size-sm space-sm" /> | Expand plot to entire window. |
#### 3D plot controls
#### 3D Plot Controls
|Icon|Description|
|---|---|
| <img src="/docs/latest/icons/ico-orbital-rotation.svg" alt="Orbital rotation mode icon" className="icon size-sm" />| Switch to orbital rotation mode - rotate the plot around its middle point. |
@ -449,7 +449,7 @@ These controls allow you to better analyze the results. Hover over a plot, and t
| <img src="/docs/latest/icons/ico-homepage.svg" alt="reset axes icon" className="icon size-sm" />| Reset axes to default position. |
### Debug samples
### Debug Samples
View debug samples by metric at any iteration. The most recent iteration appears first. Use the viewer / player to inspect image, audio, and video samples and do any of the following:
@ -501,7 +501,7 @@ View debug samples by metric at any iteration. The most recent iteration appears
* For images, locate a position on the sample - Hover over the sample and the X, Y coordinates appear in the legend below the sample.
## Tagging experiments
## Tagging Experiments
Tags are user-defined, color-coded labels that can be added to experiments (and models), allowing users to easily identify and
group experiments. Tags can show any text. For example, add tags for the type of remote machine experiments were executed
@ -518,6 +518,6 @@ on, label versions of experiments, or apply team names to organize experimentati
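As a sketch, tags can also be applied from code (the tag names are hypothetical; the server-dependent lines are commented out — `Task.add_tags` is the SDK call assumed here):

```python
# Hypothetical tags for the type of remote machine and the experiment version
tags = ["gpu-machine", "v2"]

# from clearml import Task
# task = Task.init(project_name="examples", task_name="tagging demo")
# task.add_tags(tags)  # the tags would then appear next to the experiment name in the UI

print(tags)
```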
## Locating the experiment (Task) ID
## Locating the Experiment (Task) ID
* In the info panel, in the top area, to the right of the Task name, click **ID**. The Task ID appears.
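A copied ID can then be used to fetch the Task programmatically (a sketch, assuming a standard 32-character hexadecimal Task ID; the ID value below is hypothetical, and the server-dependent line is commented out):

```python
# Hypothetical Task ID, as copied from the UI (32-character hex string)
task_id = "4f4be9c2e0f64d3a9c6b1d2e3f405162"

# from clearml import Task
# task = Task.get_task(task_id=task_id)  # fetch the stored experiment by its ID

# Sanity-check the ID format before using it
assert len(task_id) == 32 and all(c in "0123456789abcdef" for c in task_id)
print(task_id)
```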
View File
@ -4,7 +4,7 @@ title: Tuning experiments
Tune experiments and edit an experiment's execution details, then execute the tuned experiments on local or remote machines.
## To tune an experiment and execute it remotely:
## To Tune an Experiment and Execute It Remotely:
1. Locate the experiment. Open the experiment's Project page from the Home page or the main Projects page.
@ -32,7 +32,7 @@ Tune experiments and edit an experiment's execution details, then execute the tu
The experiment's status becomes *Pending*. When the worker assigned to the queue fetches the Task (experiment), the
status becomes *Running*. The experiment can now be tracked and its results visualized.
## Modifying experiments
## Modifying Experiments
Experiments whose status is *Draft* are editable (see the [user properties](#user-properties) exception). In the **ClearML
Web UI**, edit any of the following:
@ -51,11 +51,11 @@ User parameters are editable in any experiment, except experiments whose status
* [Initial weight input model](#initial-weights-input-model)
* [Output destination for artifacts storage](#output-destination)
### Execution details
### Execution Details
#### Source code
#### Source Code
Select source code by changing any of the following:
@ -70,7 +70,7 @@ Select source code by changing any of the following:
#### Base Docker image
#### Base Docker Image
Select a pre-configured Docker image that **ClearML Agent** will use to remotely execute this experiment (see [Building Docker containers](../clearml_agent.md#building-docker-containers)).
**To add, change, or delete a base Docker image:**
@ -80,7 +80,7 @@ Select a pre-configured Docker that **ClearML Agent** will use to remotely execu
#### Output destination
#### Output Destination
Set an output destination for model checkpoints (snapshots) and other artifacts. Examples of supported types of destinations
and formats for specifying locations include:
@ -102,7 +102,7 @@ method), and in the **ClearML** configuration file for all experiments (see [def
on the **ClearML** Configuration Reference page).
:::
#### Log level
#### Log Level
Set a logging level for the experiment (see the standard Python [logging levels](https://docs.python.org/3/howto/logging.html#logging-levels)).
@ -142,7 +142,7 @@ Add, change, or delete hyperparameters, which are organized in the **ClearML Web
#### User properties
#### User Properties
User properties allow storing any descriptive information in key-value pair format. They are editable in any experiment,
except experiments whose status is *Published* (read-only).
@ -154,7 +154,7 @@ except experiments whose status is *Published* (read-only).
#### Configuration objects
#### Configuration Objects
:::important
In older versions of **ClearML Server**, the Task model configuration appeared in the **ARTIFACTS** tab **>** **MODEL
@ -168,7 +168,7 @@ CONFIGURATION** section. Task model configurations now appear in **CONFIGURATION
### Artifacts
### Initial weights input model
### Initial Weights Input Model
Edit model configuration and label enumeration, choose a different initial input weight model for the same project or any
other project, or remove the model.
View File
@ -7,7 +7,7 @@ view model details, and modify, publish, archive, tag, and move models to other
![Models table](../img/webapp_models_01.png)
## Models table columns
## Models Table Columns
The models table contains the following columns:
@ -25,7 +25,7 @@ The models table contains the following columns:
## Customizing the models table
## Customizing the Models Table
The models table is customizable. Changes are persistent (cached in the browser) and represented in the URL, so customized settings
can be saved in a browser bookmark and shared with other **ClearML** users to collaborate.
@ -51,7 +51,7 @@ If a project has sub-projects, the models can be viewed by their sub-project gro
all the models in the project. The customizations of these two views are saved separately.
:::
## Model actions
## Model Actions
The following table describes the actions that can be done from the models table, including the states that
allow each feature. Model states are *Draft* (editable) and *Published* (read-only).
@ -75,7 +75,7 @@ The same information can be found in the bottom menu, in a tooltip that appears
![Models table batch operations](../img/webapp_models_table_batch_operations.png)
## Tagging models
## Tagging Models
Tags are user-defined, color-coded labels that can be added to models (and experiments), allowing users to easily identify and
group models. A tag can show any text, for any purpose. For example, add tags for the type of remote machine
View File
@ -9,7 +9,7 @@ meaning that it's the first thing that is seen when opening the project.
![Project overview tab gif](../img/gif/webapp_metric_snapshot.gif)
## Metric snapshot
## Metric Snapshot
On the top of the **OVERVIEW** tab, there is an option to display a **metric snapshot**. Choose a metric and variant,
and then the window will present an aggregated view of the value for that metric and the time that each
@ -20,14 +20,14 @@ on their status (`Completed`, `Aborted`, `Published`, or `Failed`). Hover over a
appear with the details of the experiment associated with the metric value. Click a point, and you will
be taken to the experiment's page.
## Project description
## Project Description
Every project has a `description` field. The UI provides a Markdown editor to edit this field.
In the Markdown document, you can write and share reports and add links to **ClearML** experiments
or any network resource, such as an issue tracker, web repository, etc.
### Editing the description
### Editing the Description
To edit the description in the **OVERVIEW** tab, hover over the description section, and press the **EDIT** button that
appears on the top right of the window.