Small edits (#128)

pollfly 2021-12-02 19:53:37 +02:00 committed by GitHub
parent 3af0edb147
commit 49de7323ab
21 changed files with 269 additions and 185 deletions


@ -434,7 +434,7 @@ ClearML Agent supports executing tasks in multiple environments.
### PIP Mode
By default, ClearML Agent works in PIP Mode, in which it uses [pip](https://en.wikipedia.org/wiki/Pip_(package_manager))
as the package manager. When ClearML runs, it will create a virtual environment
(or reuse an existing one, see [here](clearml_agent.md#virtual-environment-reuse)).
Task dependencies (Python packages) will be installed in the virtual environment.
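For orientation, starting an agent that serves tasks in this default PIP mode is typically a one-liner; the queue name below is a placeholder for your own queue:

```shell
# Start an agent worker in the default PIP mode,
# pulling enqueued tasks from the "default" queue (placeholder queue name)
clearml-agent daemon --queue default
```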
### Conda Mode
@ -582,7 +582,7 @@ Do not enqueue training or inference tasks into the services queue. They will pu
Self-hosted [ClearML Server](deploying_clearml/clearml_server.md) comes by default with a services queue.
By default, the server is open and does not require username and password, but it can be [password protected](deploying_clearml/clearml_server_security.md#user-access-security).
In case it is password-protected, the services agent will need to be configured with server credentials (associated with a user).
To do that, set these environment variables on the ClearML Server machine with the appropriate credentials:
```
@ -627,8 +627,8 @@ It's possible to add the Docker container as the base Docker image to a Task (ex
## Google Colab
ClearML Agent can run on a [Google Colab](https://colab.research.google.com/) instance. This helps users to leverage
compute resources provided by Google Colab and send experiments for execution on it.
Check out [this](guides/ide/google_colab.md) tutorial on how to run a ClearML Agent on Google Colab!


@ -31,7 +31,7 @@ and **ClearML Server** needs to be installed.
* Read/write permissions for the default **Trains Server** data directory `/opt/clearml/data` and its subdirectories, or,
if this default directory is not used, the permissions for the directory and subdirectories that are used.
* A minimum of 8 GB system RAM.
* Minimum free disk space of at least 30% plus two times the size of the data.
* Python version >=2.7 or >=3.6, and Python accessible from the command-line as `python`.
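As a quick check of the last prerequisite, you can confirm which interpreter answers on the command line (the `python3` fallback is our addition, not part of the original requirement):

```shell
# Check that Python is reachable from the command line as `python`,
# falling back to `python3`, a common alias on modern distributions
python --version 2>/dev/null || python3 --version
```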
@ -43,20 +43,28 @@ and **ClearML Server** needs to be installed.
* **Linux and macOS**
```bash
docker-compose -f /opt/trains/docker-compose.yml down
```
* **Windows**
```bash
docker-compose -f c:\opt\trains\docker-compose-win10.yml down
```
* **Kubernetes**
```bash
kubectl delete -k overlays/current_version
```
* **Kubernetes using Helm**
```bash
helm del --purge trains-server
kubectl delete namespace trains
```
1. For **Kubernetes** and **Kubernetes using Helm**, connect to the node in the Kubernetes cluster labeled `app=trains`.
@ -74,11 +82,13 @@ and **ClearML Server** needs to be installed.
* **Linux, macOS, and Windows** - if managing your own containers.
Run the migration script. If elevated privileges are used to run Docker (`sudo` in Linux, or admin in Windows),
then use elevated privileges to run the migration script.
```bash
python elastic_upgrade.py [-s|--source <source_path>] [-t|--target <target_path>] [-n|--no-backup] [-p|--parallel]
```
The following optional command line parameters can be used to control the execution of the migration script:
* `<source_path>` - The path to the Elasticsearch data directory in the current **Trains Server** deployment.
@ -203,7 +213,7 @@ For backwards compatibility, the environment variables ``TRAINS_HOST_IP``, ``TRA
docker-compose -f /opt/clearml/docker-compose.yml pull
docker-compose -f /opt/clearml/docker-compose.yml up -d
If issues arise during the upgrade, see the FAQ page, [How do I fix Docker upgrade errors?](../faq.md#common-docker-upgrade-errors).
##### Other Deployment Formats


@ -12,7 +12,7 @@ provides custom images for each released version of **ClearML Server**. For a li
After deploying **ClearML Server**, configure the **ClearML Python Package** for it, see [Configuring ClearML for ClearML Server](clearml_config_for_clearml_server.md).
For information about upgrading **ClearML Server** on GCP, see [here](upgrade_server_gcp.md).
:::important
If **ClearML Server** is being reinstalled, we recommend clearing browser cookies for **ClearML Server**. For example,


@ -117,7 +117,7 @@ The node ports map to the following container ports:
* `30081` maps to `clearml-fileserver` container on port `8081`
:::important
We recommend using the container ports (``8080``, ``8008``, and ``8081``), or a load balancer (see the next section, [Accessing ClearML Server](#accessing-clearml-server)).
:::
## Accessing ClearML Server


@ -165,7 +165,10 @@ that metric column.
Yes! For example, you can use the [Task.set_model_label_enumeration](references/sdk/task.md#set_model_label_enumerationenumerationnone)
method to store label enumeration:
```python
Task.current_task().set_model_label_enumeration( {"label": int(0), } )
```
For more information about `Task` class methods, see the [Task Class](fundamentals/task.md) reference page.
@ -176,7 +179,9 @@ For more information about `Task` class methods, see the [Task Class](fundamenta
Yes! Use the [Task.set_model_config](references/sdk/task.md#set_model_configconfig_textnone-config_dictnone)
method:
```python
Task.current_task().set_model_config("a very long text with the configuration file's content")
```
<br/>
@ -196,10 +201,12 @@ and [Task.connect](references/sdk/task.md#connect) methods to manually connect a
[OutputModel.update_weights](references/sdk/model_outputmodel.md#update_weights)
method to manually connect a model weights file.
```python
input_model = InputModel.import_model(link_to_initial_model_file)
Task.current_task().connect(input_model)
OutputModel(Task.current_task()).update_weights(link_to_new_model_file_here)
```
For more information about models, see [InputModel](references/sdk/model_inputmodel.md)
and [OutputModel](references/sdk/model_outputmodel.md) classes.
@ -281,13 +288,15 @@ Yes! ClearML supports connecting hyperparameter dictionaries to experiments, usi
For example, to log the hyperparameters `learning_rate`, `batch_size`, `display_step`,
`model_path`, `n_hidden_1`, and `n_hidden_2`:
```python
# Create a dictionary of parameters
parameters_dict = { 'learning_rate': 0.001, 'batch_size': 100, 'display_step': 1,
'model_path': "/tmp/model.ckpt", 'n_hidden_1': 256, 'n_hidden_2': 256 }
# Connect the dictionary to your ClearML Task
parameters_dict = Task.current_task().connect(parameters_dict)
```
<br/>
@ -296,7 +305,10 @@ For example, to log the hyperparameters `learning_rate`, `batch_size`, `display_
Yes! When creating experiments and calling [Task.init](fundamentals/task.md#usage),
you can provide an experiment type. ClearML supports [multiple experiment types](fundamentals/task.md#task-types). For example:
```python
task = Task.init(project_name, task_name, Task.TaskTypes.testing)
```
<br/>
@ -348,24 +360,26 @@ Your firewall may be preventing the connection. Try one of the following solutio
An experiment's name is a user-controlled property, which can be accessed via the `Task.name` variable. This allows you to use meaningful naming schemes for easy filtering and comparison of experiments.
For example, to distinguish between different experiments, you can append the task ID to the task name:
```python
task = Task.init('examples', 'train')
task.name += ' {}'.format(task.id)
```
Or, append the Task ID post-execution:
```python
tasks = Task.get_tasks(project_name='examples', task_name='train')
for t in tasks:
    t.name += ' {}'.format(t.id)
```
Another example is to append a specific hyperparameter and its value to each task's name:
```python
tasks = Task.get_tasks(project_name='examples', task_name='my_automl_experiment')
for t in tasks:
    params = t.get_parameters()
    if 'my_secret_parameter' in params:
        t.name += ' my_secret_parameter={}'.format(params['my_secret_parameter'])
```
Use this experiment naming when creating automation pipelines with a naming convention.
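Since these suffixes are plain strings, downstream automation can recover a value from a task's name with ordinary string handling. A minimal, ClearML-free sketch (the task names and parameter key below are hypothetical):

```python
# Hypothetical task names produced by the naming convention above
names = [
    'my_automl_experiment my_secret_parameter=0.001',
    'my_automl_experiment my_secret_parameter=0.01',
]

def parse_suffix(name, key):
    # Return the value appended as ' <key>=<value>', or None if absent
    marker = ' {}='.format(key)
    if marker not in name:
        return None
    return name.split(marker, 1)[1]

values = [parse_suffix(n, 'my_secret_parameter') for n in names]
print(values)  # ['0.001', '0.01']
```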
<a id="typing"></a>
@ -398,11 +412,12 @@ You cannot undo the deletion of an experiment.
:::
For example, the following script deletes an experiment whose Task ID is `123456789`.
```python
from clearml_agent import APIClient
client = APIClient()
client.tasks.delete(task='123456789')
```
<a id="random_see"></a>
@ -429,14 +444,14 @@ that ran the Task stored the file. This applies to debug samples and artifacts.
If metric reporting begins within the first three minutes, ClearML reports resource monitoring by iteration. Otherwise,
it reports resource monitoring by seconds from start, and logs a message:
```
CLEARML Monitor: Could not detect iteration reporting, falling back to iterations as seconds-from-start.
```
However, if metric reporting begins after three minutes and anytime up to thirty minutes, resource monitoring reverts to
reporting by iteration, and ClearML logs a message:
```
CLEARML Monitor: Reporting detected, reverting back to iteration based reporting.
```
After thirty minutes, it remains unchanged.
<br/>
@ -499,11 +514,17 @@ info panel > RESULTS tab > CONSOLE sub-tab, use the *Download full log* feature.
Yes! You can manually create a plot with a single point X-axis for the hyperparameter value, and Y-axis for the accuracy.
For example:
```python
number_layers = 10
accuracy = 0.95
Task.current_task().get_logger().report_scatter2d(
    "performance",
    "accuracy",
    iteration=0,
    mode='markers',
    scatter=[(number_layers, accuracy)]
)
```
Assuming the hyperparameter is `number_layers` with current value `10`, and the `accuracy` for the trained model is `0.95`, the experiment comparison graph shows:
@ -511,11 +532,19 @@ Assuming the hyperparameter is `number_layers` with current value `10`, and the
Another option is a histogram chart:
```python
number_layers = 10
accuracy = 0.95
Task.current_task().get_logger().report_vector(
    "performance",
    "accuracy",
    iteration=0,
    labels=['accuracy'],
    values=[accuracy],
    xlabels=['number_layers %d' % number_layers]
)
```
![image](img/clearml_faq_screenshots/compare_plots_hist.png)
@ -535,13 +564,28 @@ method reports all series with the same `title` and `iteration` parameter values
For example, the following two scatter2D series are reported on the same plot, because both have a `title` of `example_scatter` and an `iteration` of `1`:
```python
scatter2d_1 = np.hstack((np.atleast_2d(np.arange(0, 10)).T, np.random.randint(10, size=(10, 1))))
logger.report_scatter2d(
    "example_scatter",
    "series_1",
    iteration=1,
    scatter=scatter2d_1,
    xaxis="title x",
    yaxis="title y"
)

scatter2d_2 = np.hstack((np.atleast_2d(np.arange(0, 10)).T, np.random.randint(10, size=(10, 1))))
logger.report_scatter2d(
    "example_scatter",
    "series_2",
    iteration=1,
    scatter=scatter2d_2,
    xaxis="title x",
    yaxis="title y"
)
```
## GIT and Storage
@ -668,18 +712,21 @@ Yes! You can run ClearML in Jupyter Notebooks using either of the following:
1. Use the [Task.set_credentials](references/sdk/task.md#taskset_credentials)
method to specify the host, port, access key and secret key (see step 1).
```python
# Set your credentials using the trains apiserver URI and port, access_key, and secret_key.
Task.set_credentials(host='http://localhost:8008', key='<access_key>', secret='<secret_key>')
```
:::note
`host` is the API server (default port `8008`), not the web server (default port `8080`).
:::
1. You can now use ClearML.
```python
# create a task and start training
task = Task.init('jupyter project', 'my notebook')
```
<a id="commit-git-in-jupyter"></a>
@ -733,15 +780,21 @@ Set the OS environment variable `ClearML_LOG_ENVIRONMENT` with the variables you
* All environment variables:
```
export ClearML_LOG_ENVIRONMENT="*"
```
* Specific environment variables, for example, log `PWD` and `PYTHONPATH`:
```
export ClearML_LOG_ENVIRONMENT="PWD,PYTHONPATH"
```
* No environment variables:
```
export ClearML_LOG_ENVIRONMENT=
```
## ClearML Hosted Service
@ -752,8 +805,6 @@ If you joined the ClearML Hosted Service and run a script, but your experiment d
```bash
pip install clearml
clearml-init
```
## ClearML Server Deployment
@ -879,32 +930,46 @@ To change the MongoDB and / or Elastic ports for your ClearML Server, do the fol
* For MongoDB:
```yaml
MONGODB_SERVICE_PORT: <new-mongodb-port>
```
* For Elastic:
```yaml
ELASTIC_SERVICE_PORT: <new-elasticsearch-port>
```
For example:
```yaml
MONGODB_SERVICE_PORT: 27018
ELASTIC_SERVICE_PORT: 9201
```
1. For MongoDB, in the `services/mongo/ports` section, expose the new MongoDB port:
```yaml
<new-mongodb-port>:27017
```
For example:
```yaml
20718:27017
```
1. For Elastic, in the `services/elasticsearch/ports` section, expose the new Elastic port:
```yaml
<new-elasticsearch-port>:9200
```
For example:
```yaml
9201:9200
```
1. Restart ClearML Server, see [Restarting ClearML Server](#restart).
@ -929,14 +994,18 @@ Do the following:
* Linux:
```bash
no_proxy=127.0.0.1
NO_PROXY=127.0.0.1
```
* Windows:
```bash
set no_proxy=127.0.0.1
set NO_PROXY=127.0.0.1
```
1. Run the ClearML wizard `clearml-init` to configure ClearML for ClearML Server, which will prompt you to open the ClearML Web UI at [http://127.0.0.1:8080/](http://127.0.0.1:8080/) and create new ClearML credentials.
The wizard completes with:
@ -1022,32 +1091,35 @@ For example, to get the metrics for an experiment and to print metrics as a hist
1. From the response, get the data for the experiment (task) ID `11` and print the experiment name.
1. Send a request for a metrics histogram for experiment (task) ID `11` using the `events` service `ScalarMetricsIterHistogramRequest` method and print the histogram.
```python
# Import Session from the trains backend_api
from trains.backend_api import Session
# Import the services for tasks, events, and projects
from trains.backend_api.services import tasks, events, projects

# Create an authenticated session
session = Session()

# Get projects matching the project name 'examples'
res = session.send(projects.GetAllRequest(name='examples'))
# Get all the project IDs matching the project name 'examples'
projects_id = [p.id for p in res.response.projects]
print('project ids: {}'.format(projects_id))

# Get all the experiments/tasks
res = session.send(tasks.GetAllRequest(project=projects_id))

# Do your work
# For example, get the experiment whose ID is '11'
task = res.response.tasks[11]
print('task name: {}'.format(task.name))

# For example, for experiment ID '11', get the experiment metric values
res = session.send(events.ScalarMetricsIterHistogramRequest(
    task=task.id,
))
scalars = res.response_data
print('scalars {}'.format(scalars))
```


@ -7,7 +7,7 @@ ClearML logs hyperparameters used in experiments from multiple different sources
In ClearML, parameters are split into 3 sections:
- User Properties - Modifiable section that can be edited post execution.
- Hyperparameters - Individual parameters for configuration.
- Configuration Objects - Usually configuration files (JSON / YAML) or Python objects.
These sections are further broken down into sub-sections (General / Args / TF_Define) for convenience.


@ -3,10 +3,10 @@ title: ClearML Modules
---
- **ClearML Python Package** (clearml) for integrating **ClearML** into your existing code-base.
- **ClearML Server** (clearml-server) storing experiment, model, and workflow data, and supporting the Web UI experiment manager. It is also the control plane for the MLOps.
- **ClearML Agent** (clearml-agent) The MLOps orchestration agent, enabling experiment and workflow reproducibility and scalability.
- **ClearML Data** (clearml-data) data management and versioning on top of file-systems/object-storage.
- **ClearML Session** (clearml-session) Launch remote instances of Jupyter Notebooks and VSCode.
Solutions combined with the clearml-server control plane.
![clearml architecture](../img/clearml_architecture.png)


@ -14,10 +14,10 @@ while ClearML ensures your work is reproducible and scalable.
## What Can You Do with ClearML?
- Track and upload metrics and models with only 2 lines of code
- Create a bot that sends you a Slack message whenever your model improves in accuracy
- Automatically scale AWS instances according to your resource needs
- Reproduce experiments with 3 mouse clicks
- Much more!
#### Who We Are
ClearML is supported by you :heart: and by the team behind [allegro.ai](https://www.allegro.ai), where we build even more MLOps for enterprise companies.


@ -72,7 +72,7 @@ Docker container image to be used, or change the hyperparameters and configurati
Once you have set up an experiment, it is now time to execute it.
**To execute an experiment through the ClearML WebApp:**
1. Right click your draft experiment (the context menu is also available through the <img src="/docs/latest/icons/ico-bars-menu.svg" alt="Menu" className="icon size-md space-sm" />
button on the top right of the experiments info panel)
1. Click **ENQUEUE**, which will open the **ENQUEUE EXPERIMENT** window
1. In the window, select `default` in the queue menu


@ -35,14 +35,18 @@ task = Task.init(project_name='data', task_name='create', task_type='data_proces
dataset = Dataset.get(dataset_project='data', dataset_name='dataset_v1')
# get a local mutable copy of the dataset
dataset_folder = dataset.get_mutable_local_copy(
target_folder='work_dataset',
overwrite=True
)
# change some files in the `./work_dataset` folder
...
# create a new version of the dataset with the pickle file
new_dataset = Dataset.create(
dataset_project='data', dataset_name='dataset_v2',
parent_datasets=[dataset],
use_current_task=True,
# this will make sure we have the creation code and the actual dataset artifacts on the same Task
)
new_dataset.sync_folder(local_path=dataset_folder)
new_dataset.upload()


@ -13,8 +13,9 @@ Hyper-Datasets are supported by the `allegroai` python package.
Use [`Task.connect`](../references/sdk/task.md#connect) to connect a Dataview object to a Task:
```python
from allegroai import DataView, Task
task = Task.init(project_name='examples', task_name='my task')
dataview = DataView()
task.connect(dataview)
```
@ -24,7 +25,7 @@ task.connect(dataview)
Use the `Task.get_dataviews` method to access the Dataviews that are connected to a Task.
```python
task.get_dataviews()
```
This returns a dictionary of Dataview objects and their names.


@ -27,7 +27,7 @@ Use annotation tasks to efficiently organize the annotation of frames in Dataset
* **All Frames** - Include all frames in this task.
* **Empty Frames** - Include only frames without any annotations in this task.
* **By Label** - Include only frames with specific labels, and optionally filter these frames by confidence level and
the number of instances. You can also click <img src="/docs/latest/icons/ico-code.svg" alt="Code" className="icon size-md space-sm" /> and then add a Lucene query for this ROI label filter.
1. Choose the iteration parameters specifying how frames in this version are presented to the annotator.
@ -46,13 +46,13 @@ Use annotation tasks to efficiently organize the annotation of frames in Dataset
To mark an annotation task as **Completed**:
* In the annotation task card, click <img src="/docs/latest/icons/ico-bars-menu.svg" alt="Menu" className="icon size-md space-sm" /> (menu) **>** **Complete** **>** **CONFIRM**.
### Deleting Annotation Tasks
To delete an annotation task:
* In the annotation task card, click <img src="/docs/latest/icons/ico-bars-menu.svg" alt="Menu" className="icon size-md space-sm" /> (menu) **>** **Delete** **>** **CONFIRM**.
### Filtering Annotation Tasks
@ -69,7 +69,7 @@ Sort the annotation tasks by either using **RECENT** or **NAME** from the drop-d
To view the Dataset version, filters, and iteration information:
* In the annotation task card, click <img src="/docs/latest/icons/ico-bars-menu.svg" alt="Menu" className="icon size-md space-sm" /> (menu) **>** **Info**
## Annotating Images and Video
@ -82,7 +82,7 @@ depend upon the settings in the annotation task (see [Creating Annotation Tasks]
**To annotate frames:**
1. On the Annotator page, click the annotation task card, or click <img src="/docs/latest/icons/ico-bars-menu.svg" alt="Menu" className="icon size-md space-sm" /> (menu)
and then click **Annotate**.
1. See instructions below about annotating frames.
@ -91,10 +91,10 @@ depend upon the settings in the annotation task (see [Creating Annotation Tasks]
1. Select an annotation mode and add the bounded area to the frame image.
* Rectangle mode - Click <img src="/docs/latest/icons/ico-rectangle-icon-purple.svg" alt="Rectangle mode" className="icon size-md space-sm" /> and then click the image, drag and release.
* Polygon mode - Click <img src="/docs/latest/icons/ico-polygon-icon-purple.svg" alt="Polygon mode" className="icon size-md space-sm" /> and then click the image for the first vertex,
move to another vertex and click, continue until closing the last vertex.
* Key points mode - Click <img src="/docs/latest/icons/ico-keypoint-icon-purple.svg" alt="Key points mode" className="icon size-md space-sm" /> and then click each key point.
1. In the new label area, choose or enter a label.
1. Optionally, add metadata.


@ -67,7 +67,7 @@ Use frame viewer controls to navigate between frames in a Hyper-Dataset Version,
**To view / edit a frame in the frame editor**
1. Locate your frame by applying a [simple frame filter](#simple-frame-filtering) or [advanced frame filter](#advanced-frame-filtering), and clicking <span class="tr_gui">LOAD MORE</span>, if required.
1. Click the frame thumbnail. The frame editor appears.
1. Do any of the following:
* View frame details, including:
@ -148,7 +148,7 @@ where each frame filter can be a combination of ROI, frame, and source rules.
* Choose **Include** or **Exclude**, select ROI labels, and optionally set the confidence level range.
* To switch from the ROI dropdown list to a Lucene query mode, click <img src="/docs/latest/icons/ico-edit.svg" alt="edit pencil" className="icon size-md space-sm" />.
* Frame rule - Enter a Lucene query using frame metadata fields in the format `meta.<key>:<value>`.


@ -98,8 +98,8 @@ Frame exports downloaded filtered frames as a JSON file.
**To modify a version description, do the following:**
* Expand the **INFO** area, hover over the **Description**, click <img src="/docs/latest/icons/ico-edit.svg" alt="Edit pencil" className="icon size-md space-sm" />,
edit the name, and then click <img src="/docs/latest/icons/ico-save.svg" alt="Check mark" className="icon size-md space-sm" /> (check).
### Deleting Versions


@ -19,7 +19,7 @@ provides a deep comparison of input data selection criteria of experiment Datavi
**To locate the input data differences:**
1. Click the **DETAILS** tab **>** Expand the **DATAVIEWS** section, or, in the header, click <img src="/docs/latest/icons/ico-previous-diff.svg" alt="Previous diff" className="icon size-md" />
(Previous diff) or <img src="/docs/latest/icons/ico-next-diff.svg" alt="Next diff" className="icon size-md space-sm" /> (Next diff).
1. Expand any of the following sections:
* **Augmentation** - On-the-fly data augmentation.


@ -6,8 +6,8 @@ An experiment that has been executed can be [cloned](../../webapp/webapp_exp_rep
execution details can be modified, and the modified experiment can be executed.
In addition to all the [**ClearML** tuning capabilities](../../webapp/webapp_exp_tuning.md), the **ClearML Enterprise WebApp** (UI)
enables modifying [Dataviews](webapp_dataviews.md), including:
* [Selected Dataview](#selecting-dataviews)
* [Dataset versions](#selecting-dataset-versions)
* [Frame filtering](#filtering-frames)
* [Label mapping](#mapping-labels-label-translation)
@@ -15,10 +15,7 @@ enables modifying Dataviews, including:
* [Data augmentation](#data-augmentation)
* [Input frame iteration controls](#iteration-controls)
The selection and control of input data can be modified in *Draft* experiments that are not [development experiments](../task.md#development-experiments).
Do this by modifying the Dataview used by the experiment. The Dataview specifies the Hyper-Dataset versions from which frames
are iterated and frame filters (see [Dataviews](webapp_dataviews.md)).
## Selecting Dataviews
**To choose a Dataview**, do any of the following:
@@ -33,8 +30,8 @@ are iterated and frame filters (see [Dataviews](webapp_dataviews.md)).
* Import a different Dataview associated with the same or another project.
* Click <img src="/docs/latest/icons/ico-import.svg" className="icon size-md space-sm" /> (**Import dataview**) and then
select **Import to current dataview** or **Import to aux dataview**.
* Click <img src="/docs/latest/icons/ico-import.svg" alt="Import" className="icon size-md space-sm" /> (**Import dataview**) and then
select **Import to current dataview** or **Import as aux dataview**.
:::note
After importing a Dataview, it can be renamed and / or removed.
@@ -101,7 +98,7 @@ that are not mapped are ignored.
1. Select or enter the label to map to in the output model.
* Remove (<img src="/docs/latest/icons/ico-trash.svg" className="icon size-md space-sm" />) a mapping.
* Remove (<img src="/docs/latest/icons/ico-trash.svg" alt="Trash" className="icon size-md space-sm" />) a mapping.
1. Click **SAVE**
@@ -117,13 +114,13 @@ Modify the label enumeration assigned to output models.
* Select a label and then enter an integer for it.
* Remove (<img src="/docs/latest/icons/ico-trash.svg" className="icon size-md space-sm" />) an enumeration.
* Remove (<img src="/docs/latest/icons/ico-trash.svg" alt="Trash" className="icon size-md space-sm" />) an enumeration.
1. Click **SAVE**.
## Data Augmentation
Modify the on-the-fly data augmentation applied to frames input from the select Hyper-Dataset versions and filtered by the frame filters. Data augmentation is applied in steps, where each step applies a method, operation, and strength.
Modify the on-the-fly data augmentation applied to frame input from the select Hyper-Dataset versions and filtered by the frame filters. Data augmentation is applied in steps, where each step applies a method, operation, and strength.
For more detailed information, see [Data Augmentation](../dataviews.md#data-augmentation).
@@ -133,7 +130,7 @@ For more detailed information, see [Data Augmentation](../dataviews.md#data-augm
* Add (**+**) or edit an augmentation step - Select a **METHOD**, **OPERATION**, and **STRENGTH**.
* Remove (<img src="/docs/latest/icons/ico-trash.svg" className="icon size-md space-sm" />) an augmentation step.
* Remove (<img src="/docs/latest/icons/ico-trash.svg" alt="Trash" className="icon size-md space-sm" />) an augmentation step.
1. Click **SAVE**.
@@ -161,7 +158,7 @@ For more detailed information, see [Iteration Control](../dataviews.md#iteration
* **Infinite Iterations**
1. Select the **RANDOM SEED** - If the experiment is rerun and the seed remains unchanged, the frames iteration is the same.
1. Select the **RANDOM SEED** - If the experiment is rerun and the seed remains unchanged, the frame iteration is the same.
1. For video, enter a **CLIP LENGTH** - For video data sources, in the number of sequential frames from a clip to iterate.


@@ -57,8 +57,8 @@ sorted by sections.
### To Locate the Source Differences:
* Click the **DETAILS** tab **>** Expand highlighted sections, or, in the header, click <img src="/docs/latest/icons/ico-previous-diff.svg" alt="Previous diff" className="icon size-md" />
(Previous diff) or <img src="/docs/latest/icons/ico-next-diff.svg" alt="next difference" className="icon size-md space-sm" /> (Next diff).
* Click the **DETAILS** tab **>** Expand highlighted sections, or, in the header, click <img src="/docs/latest/icons/ico-previous-diff.svg" alt="Left arrow" className="icon size-md" />
(Previous diff) or <img src="/docs/latest/icons/ico-next-diff.svg" alt="Right arrow" className="icon size-md space-sm" /> (Next diff).
For example, in the image below, expanding **ARTIFACTS** **>** **Output Model** **>** **Model** shows that the model ID
and name are different.
@@ -81,8 +81,8 @@ The Values mode is a side-by-side comparison that shows hyperparameter value dif
1. In the dropdown menu (on the upper left, next to **+ Add Experiments**), choose **Values**.
1. To show only differences, move the **Hide Identical Fields** slider to on.
1. Locate differences by either:
* Clicking <img src="/docs/latest/icons/ico-previous-diff.svg" className="icon size-md space-sm" /> (Previous diff) or
<img src="/docs/latest/icons/ico-next-diff.svg" className="icon size-md space-sm" /> (Next diff).
* Clicking <img src="/docs/latest/icons/ico-previous-diff.svg" alt="Left arrow" className="icon size-md space-sm" /> (Previous diff) or
<img src="/docs/latest/icons/ico-next-diff.svg" alt="Right arrow" className="icon size-md space-sm" /> (Next diff).
* Scrolling to see highlighted hyperparameters.
For example, expanding **General** shows that the `batch_size` and `epochs` differ between the experiments.
@@ -193,7 +193,7 @@ Compare debug samples at any iteration to verify that an experiment is running a
first. Use the viewer / player to inspect images, audio, video samples and do any of the following:
* Move to the same sample in a different iteration (move the iteration slider).
* Show the next or previous iteration's sample.
* Download the file <img src="/docs/latest/icons/ico-download-json.svg" className="icon size-md space-sm" />.
* Download the file <img src="/docs/latest/icons/ico-download-json.svg" alt="Download" className="icon size-md space-sm" />.
* Zoom.
* View the sample's iteration number, width, height, and coordinates.
@@ -203,8 +203,8 @@ first. Use the viewer / player to inspect images, audio, video samples and do an
1. Locate debug samples by doing the following:
* Filter by metric. In the **Metric** list, choose a metric.
* Show other iterations. Click <img src="/docs/latest/icons/ico-circle-older.svg" className="icon size-md space-sm" /> (Older images),
<img src="/docs/latest/icons/ico-circle-newer.svg" className="icon size-md space-sm" /> (New images), or <img src="/docs/latest/icons/ico-circle-newest.svg" className="icon size-md space-sm" /> (Newest images).
* Show other iterations. Click <img src="/docs/latest/icons/ico-circle-older.svg" alt="Left arrow" className="icon size-md space-sm" /> (Older images),
<img src="/docs/latest/icons/ico-circle-newer.svg" alt="Right arrow" className="icon size-md space-sm" /> (New images), or <img src="/docs/latest/icons/ico-circle-newest.svg" alt="right arrow, newest image" className="icon size-md space-sm" /> (Newest images).
![image](../img/webapp_compare_30.png)
@@ -212,8 +212,8 @@ first. Use the viewer / player to inspect images, audio, video samples and do an
![image](../img/webapp_compare_31.png)
1. To move to the same sample in another iteration, click <img src="/docs/latest/icons/ico-previous.svg" className="icon size-md space-sm" />
(previous), <img src="/docs/latest/icons/ico-next.svg" className="icon size-md space-sm" /> (next), or move the slider.
1. To move to the same sample in another iteration, click <img src="/docs/latest/icons/ico-previous.svg" alt="Left arrow" className="icon size-md space-sm" />
(previous), <img src="/docs/latest/icons/ico-next.svg" alt="Right arrow" className="icon size-md space-sm" /> (next), or move the slider.
**To view a debug sample in the viewer / player:**
@@ -222,7 +222,7 @@ first. Use the viewer / player to inspect images, audio, video samples and do an
1. Do any of the following:
* Move to the same sample in another iteration - Move the slider, or click **<** (previous) or **>** (next).
* Download the file - Click <img src="/docs/latest/icons/ico-download-json.svg" className="icon size-md space-sm" />.
* Download the file - Click <img src="/docs/latest/icons/ico-download-json.svg" alt="Download" className="icon size-md space-sm" />.
* Zoom
* For images, locate a position on the sample - Hover over the sample and the X, Y coordinates appear in the legend below the sample.
@@ -253,8 +253,8 @@ an experiment, click <img src="/docs/latest/icons/ico-trash.svg" alt="Trash" cla
### Finding the Next or Previous Difference
* Find the previous difference <img src="/docs/latest/icons/ico-previous-diff.svg" className="icon size-md space-sm" />, or
the next difference <img src="/docs/latest/icons/ico-next-diff.svg" className="icon size-md space-sm" />.
* Find the previous difference <img src="/docs/latest/icons/ico-previous-diff.svg" alt="Left arrow" className="icon size-md space-sm" />, or
the next difference <img src="/docs/latest/icons/ico-next-diff.svg" alt="Right arrow" className="icon size-md space-sm" />.
@@ -273,8 +273,8 @@ Search all text in the comparison.
### Choosing a Different Base Experiment
Show differences in other experiments in reference to a new base experiment. To set a new base experiment, do one of the following:
* Click on <img src="/docs/latest/icons/ico-switch-base.svg" className="icon size-md space-sm" /> on the top right of the experiment that will be the new base.
* Click on <img src="/docs/latest/icons/ico-pan.svg" className="icon size-md space-sm" /> the new base experiment and drag it all the way to the left
* Click on <img src="/docs/latest/icons/ico-switch-base.svg" alt="Switch base" className="icon size-md space-sm" /> on the top right of the experiment that will be the new base.
* Click on <img src="/docs/latest/icons/ico-pan.svg" alt="Pan" className="icon size-md space-sm" /> the new base experiment and drag it all the way to the left
![image](../img/webapp_compare_22.png)
@@ -282,13 +282,13 @@ Show differences in other experiments in reference to a new base experiment. To
### Dynamic Ordering of the Compared Experiments
To reorder the experiments being compared, press <img src="/docs/latest/icons/ico-pan.svg" className="icon size-md space-sm" /> on the top right of the experiment that
To reorder the experiments being compared, press <img src="/docs/latest/icons/ico-pan.svg" alt="Pan" className="icon size-md space-sm" /> on the top right of the experiment that
needs to be moved, and drag the experiment to its new position.
![image](../img/webapp_compare_21.png)
### Removing an Experiment from the Comparison
Remove an experiment from the comparison, by pressing <img src="/docs/latest/icons/ico-remove-compare.svg" className="icon size-md space-sm" />
Remove an experiment from the comparison, by pressing <img src="/docs/latest/icons/ico-remove-compare.svg" alt="Minus" className="icon size-md space-sm" />
on the top right of the experiment that needs to be removed.
![image](../img/webapp_compare_23.png)


@@ -100,7 +100,7 @@ The output details include:
<summary className="cml-expansion-panel-summary">View a screenshot</summary>
<div className="cml-expansion-panel-content">
![Uncomitted changes section](../img/webapp_tracking_19.png)
![Uncommitted changes section](../img/webapp_tracking_19.png)
</div>
</details>
@@ -205,7 +205,7 @@ except experiments whose status is *Published* (read-only).
**ClearML** tracks experiment (Task) model configuration objects, which appear in **Configuration Objects** **>** **General**.
These objects include those that are automatically tracked, and those connected to a Task in code (see [Task.connect_configuration](../references/sdk/task.md#connect_configuration)).
**ClearML** supports providing a name for a Task model configuration object (see the [name](../references/sdk/task.md#connect_configuration)
**ClearML** supports providing a name for a Task model configuration object (see the [name](../references/sdk/task.md#connect_configuration))
parameter in `Task.connect_configuration`.
:::important


@@ -20,6 +20,6 @@ For each class, label enumeration contains the class name (key) and value.
**To add, change, or delete label enumeration classes:**
* In the **MODELS** tab, click a model **>** **LABELS** **>** Hover over **LABELS** **>** **EDIT** **>** **+**, edit a
key or value, or <img src="/docs/latest/icons/ico-trash.svg" alt="trash" className="icon size-sm space-sm" /> (delete) **>** **SAVE**.
key or value, or <img src="/docs/latest/icons/ico-trash.svg" alt="Trash" className="icon size-sm space-sm" /> (delete) **>** **SAVE**.
![image](../img/webapp_models_04a.png)


@@ -68,7 +68,7 @@ allow each feature. Model states are *Draft* (editable) and *Published* (read-on
These actions can be accessed with the context menu (when right-clicking a model or clicking the menu button <img src="/docs/latest/icons/ico-bars-menu.svg" alt="Menu" className="icon size-md space-sm" />
in a model's info panel).
Some of the actions mentioned in the chart above can be performed on multiple models at once.
Some actions mentioned in the chart above can be performed on multiple models at once.
Select multiple models, then use either the context menu, or the bar that appears at the bottom of the page, to perform
operations on the selected models. The context menu shows the number of models that can be affected by each action.
The same information can be found in the bottom menu, in a tooltip that appears when hovering over an action icon.


@@ -13,7 +13,7 @@ The **ClearML Web UI** is the graphical user interface for the **ClearML** platf
The **ClearML Web UI** is composed of the following pages:
* The [Home](webapp_home.md) Page - The dashboard for recent activity, and quick access to experiments and and projects.
* The [Home](webapp_home.md) Page - The dashboard for recent activity, and quick access to experiments and projects.
* The Projects Page - The main experimentation page. It is a main projects page where specific projects can be opened.
Each project page contains customizable [experiments](webapp_exp_table.md) and [models](webapp_model_table.md) tables