Small edits (#526)

pollfly 2023-04-04 16:16:54 +03:00 committed by GitHub
parent 4700306b9d
commit 3b71c66636
21 changed files with 41 additions and 255 deletions

View File

@@ -104,7 +104,7 @@ task.connect(input_model)
## Accessing Models
### Querying Models
Retrieve a list of model objects by querying the system by model names, projects, tags, and more, using the
[`Model.query_models`](../references/sdk/model_model.md#modelquery_models) and / or
[`Model.query_models`](../references/sdk/model_model.md#modelquery_models) and/or
the [`InputModel.query_models`](../references/sdk/model_inputmodel.md#inputmodelquery_models) class methods. These
methods return a list of model objects that match the queries. The list is ordered according to the models' last update
time.
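For instance, a minimal sketch of such a query (the parameter names and values here are illustrative assumptions, not taken from the page above):
```python
from clearml import Model

# Illustrative query: models in a given project, matching a name, carrying a tag
models = Model.query_models(
    project_name="examples",
    model_name="resnet",
    tags=["production"],
)
for model in models:
    print(model.id, model.name)
```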

View File

@@ -127,7 +127,7 @@ Compatible with Docker versions 0.6.5 and above
* `rand_string` - random lower-case letters string (up to 32 characters)
* The resulting name must start with an alphanumeric character, while the rest of the name may contain alphanumeric characters,
underscores (`_`), dots (`.`) and / or dashes (`-`)
underscores (`_`), dots (`.`) and/or dashes (`-`)
* For example: `clearml-id-{task_id}-{rand_string:.8}`
@@ -1294,7 +1294,7 @@ This feature is available under the ClearML Enterprise plan
:::
The ClearML Enterprise Server includes the configuration vault. Users can add configuration sections to the vault and, once
the vault is enabled, the configurations will be merged into the ClearML and ClearML Agent configurations upon code execution and / or agent launch.
the vault is enabled, the configurations will be merged into the ClearML and ClearML Agent configurations upon code execution and/or agent launch.
These configurations override the configurations written in the configuration file.

View File

@@ -46,7 +46,7 @@ title: FAQ
* [Is there something ClearML can do about uncommitted code running?](#help-uncommitted-code)
* [I read there is a feature for centralized model storage. How do I use it?](#centralized-model-storage)
* [When using PyCharm to remotely debug a machine, the Git repo is not detected. Do you have a solution?](#pycharm-remote-debug-detect-git)
* [Debug images and / or artifacts are not loading in the UI after I migrated ClearML Server to a new address. How do I fix this?](#migrate_server_debug)
* [Debug images and/or artifacts are not loading in the UI after I migrated ClearML Server to a new address. How do I fix this?](#migrate_server_debug)
**Remote Debugging (ClearML PyCharm Plugin)**
@@ -668,10 +668,10 @@ repository / commit ID. For detailed information about using the plugin, see the
**Debug images and/or artifacts are not loading in the UI after I migrated ClearML Server to a new address. How do I fix this?** <a id="migrate_server_debug"></a>
This can happen if your debug images and / or artifacts were uploaded to the ClearML file server, since the value
This can happen if your debug images and/or artifacts were uploaded to the ClearML file server, since the value
registered was their full URL at the time of registration (e.g. `https://files.<OLD_ADDRESS>/path/to/artifact`).
To fix this, the registered URL of each debug image and / or artifact needs to be replaced with its current URL.
To fix this, the registered URL of each debug image and/or artifact needs to be replaced with its current URL.
* For **debug images**, use the following command. Make sure to insert the old address and the new address that will replace it
```bash
@@ -946,7 +946,7 @@ try removing deprecated images:
**Why is web login authentication not working?** <a className="tr_top_negative" id="port-conflict"></a>
A port conflict between the ClearML Server MongoDB and / or Elastic instances, and other instances running on your system may prevent web login authentication from working correctly.
A port conflict between the ClearML Server MongoDB and/or Elastic instances, and other instances running on your system may prevent web login authentication from working correctly.
ClearML Server uses the following default ports which may be in conflict with other instances:
@@ -955,9 +955,9 @@ ClearML Server uses the following default ports which may be in conflict with ot
You can check for port conflicts in the logs in `/opt/clearml/log`.
If a port conflict occurs, change the MongoDB and / or Elastic ports in the `docker-compose.yml`, and then run the Docker compose commands to restart the ClearML Server instance.
If a port conflict occurs, change the MongoDB and/or Elastic ports in the `docker-compose.yml`, and then run the Docker compose commands to restart the ClearML Server instance.
To change the MongoDB and / or Elastic ports for your ClearML Server, do the following:
To change the MongoDB and/or Elastic ports for your ClearML Server, do the following:
1. Edit the `docker-compose.yml` file.
1. Add the following environment variable(s) in the `services/trainsserver/environment` section:

View File

@@ -61,7 +61,7 @@ new_dataset.tags = ['latest']
The new dataset inherits the contents of the datasets specified in `Dataset.create`'s `parents` argument.
This not only helps trace back dataset changes with full genealogy, but also makes the storage more efficient,
since it only stores the changed and / or added files from the parent versions.
since it only stores the changed and/or added files from the parent versions.
When you access the Dataset, it automatically merges the files from all parent versions
in a fully automatic and transparent process, as if the files were always part of the requested Dataset.
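A hedged sketch of creating such a child version on top of an existing dataset (the project, names, and local path are illustrative, and the parent-version argument is shown here as `parent_datasets`):
```python
from clearml import Dataset

# Look up the existing (parent) dataset version
parent = Dataset.get(dataset_project="dataset_examples", dataset_name="cifar_dataset")

# Create a child version - only changed/added files are stored on top of the parent
new_dataset = Dataset.create(
    dataset_project="dataset_examples",
    dataset_name="cifar_dataset",
    parent_datasets=[parent.id],
)
new_dataset.add_files(path="/path/to/new_or_changed_files")  # hypothetical local path
new_dataset.upload()
new_dataset.finalize()
```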

View File

@@ -1,108 +0,0 @@
---
title: Dataset Management with CLI and SDK
---
In this tutorial, you are going to manage the CIFAR dataset with the `clearml-data` CLI, and then use ClearML's [`Dataset`](../../references/sdk/dataset.md)
class to ingest the data.
## Creating the Dataset
### Downloading the Data
Before registering the CIFAR dataset with `clearml-data`, you need to obtain a local copy of it.
Execute this Python script to download the data:
```python
from clearml import StorageManager
manager = StorageManager()
dataset_path = manager.get_local_copy(
remote_url="https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz"
)
# make sure to copy the printed value
print("COPY THIS DATASET PATH: {}".format(dataset_path))
```
Expected response:
```bash
COPY THIS DATASET PATH: ~/.clearml/cache/storage_manager/global/f2751d3a22ccb78db0e07874912b5c43.cifar-10-python_artifacts_archive_None
```
The script prints the path to the downloaded data. It will be needed later on.
### Creating the Dataset
To create the dataset, execute the following command:
```
clearml-data create --project dataset_examples --name cifar_dataset
```
Expected response:
```
clearml-data - Dataset Management & Versioning CLI
Creating a new dataset:
New dataset created id=ee1c35f60f384e65bc800f42f0aca5ec
```
Where `ee1c35f60f384e65bc800f42f0aca5ec` is the dataset ID.
## Adding Files
Add the files that were just downloaded to the dataset:
```
clearml-data add --files <dataset_path>
```
where `dataset_path` is the path that was printed earlier, which denotes the location of the downloaded dataset.
:::note
There's no need to specify a `dataset_id`, since the `clearml-data` session stores it.
:::
## Finalizing the Dataset
Run the [`close`](../../references/sdk/dataset.md#close) command to upload the files (they'll be uploaded to the ClearML Server by default):<br/>
```
clearml-data close
```
This command sets the dataset task's status to *completed*, so it will no longer be modifiable. This ensures future
reproducibility.
Information about the dataset can be viewed in the WebApp, in the dataset's [details panel](../../webapp/datasets/webapp_dataset_viewing.md#version-details-panel).
In the panel's **CONTENT** tab, you can see a table summarizing version contents, including file names, file sizes, and hashes.
![Dataset content tab](../../img/examples_data_management_cifar_dataset.png)
## Using the Dataset
Now that a new dataset is registered, you can consume it.
The [data_ingestion.py](https://github.com/allegroai/clearml/blob/master/examples/datasets/data_ingestion.py) example
script demonstrates using the dataset within Python code.
```python
dataset_name = "cifar_dataset"
dataset_project = "dataset_examples"
from clearml import Dataset
dataset_path = Dataset.get(
dataset_name=dataset_name,
dataset_project=dataset_project,
alias="Cifar dataset"
).get_local_copy()
trainset = datasets.CIFAR10(
root=dataset_path,
train=True,
download=False,
transform=transform
)
```
In cases like this, where you use a dataset in a task, you can have the dataset's ID stored in the task's
hyperparameters. Passing `alias=<dataset_alias_string>` stores the dataset's ID in the
`dataset_alias_string` parameter in the experiment's **CONFIGURATION > HYPERPARAMETERS > Datasets** section. This way
you can easily track which dataset the task is using.
The Dataset's [`get_local_copy`](../../references/sdk/dataset.md#get_local_copy) method returns a path to the cached,
downloaded dataset. The dataset path is then passed to PyTorch's `datasets.CIFAR10` class.
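The excerpt above assumes that `transform` and torchvision's `datasets` module are defined elsewhere in the example script. A minimal sketch of those pieces, reusing `dataset_path` from the snippet above (the normalization values are illustrative):
```python
from torchvision import datasets, transforms
from torch.utils.data import DataLoader

# A minimal transform definition; the example script's actual transform may differ
transform = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)),
])

# Same call as in the excerpt above, wrapped in a DataLoader for training
trainset = datasets.CIFAR10(root=dataset_path, train=True, download=False, transform=transform)
trainloader = DataLoader(trainset, batch_size=4, shuffle=True, num_workers=2)
```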
The script then trains a neural network to classify images using the dataset created above.

View File

@@ -1,106 +0,0 @@
---
title: Data Management with Python
---
The [dataset_creation.py](https://github.com/allegroai/clearml/blob/master/examples/datasets/dataset_creation.py) and
[data_ingestion.py](https://github.com/allegroai/clearml/blob/master/examples/datasets/data_ingestion.py)
together demonstrate how to use ClearML's [`Dataset`](../../references/sdk/dataset.md) class to create a dataset and
subsequently ingest the data.
## Dataset Creation
The [dataset_creation.py](https://github.com/allegroai/clearml/blob/master/examples/datasets/dataset_creation.py) script
demonstrates how to do the following:
* Create a dataset and add files to it
* Upload the dataset to the ClearML Server
* Finalize the dataset
### Downloading the Data
You first need to obtain a local copy of the CIFAR dataset.
```python
from clearml import StorageManager
manager = StorageManager()
dataset_path = manager.get_local_copy(
remote_url="https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz"
)
```
This script downloads the data, and `dataset_path` contains the path to the downloaded files.
### Creating the Dataset
```python
from clearml import Dataset
dataset = Dataset.create(
dataset_name="cifar_dataset",
dataset_project="dataset examples"
)
```
This creates a data processing task called `cifar_dataset` in the `dataset examples` project, which
can be viewed in the WebApp.
### Adding Files
```python
dataset.add_files(path=dataset_path)
```
This adds the downloaded files to the current dataset.
### Uploading the Files
```python
dataset.upload()
```
This uploads the dataset to the ClearML Server by default. The dataset's destination can be changed by specifying the
target storage with the `output_url` parameter of the [`upload`](../../references/sdk/dataset.md#upload) method.
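For example, a hedged sketch of redirecting the upload to object storage instead of the ClearML file server (the bucket URL is illustrative, and the storage credentials are assumed to be configured separately):
```python
dataset.upload(output_url="s3://my-bucket/datasets")
```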
### Finalizing the Dataset
Run the [`finalize`](../../references/sdk/dataset.md#finalize) command to close the dataset and set the dataset task's
status to *completed*. The dataset can only be finalized if it doesn't have any pending uploads.
```python
dataset.finalize()
```
After a dataset has been closed, it can no longer be modified. This ensures future reproducibility.
Information about the dataset can be viewed in the WebApp, in the dataset's [details panel](../../webapp/datasets/webapp_dataset_viewing.md#version-details-panel).
In the panel's **CONTENT** tab, you can see a table summarizing version contents, including file names, file sizes, and hashes.
![Dataset content tab](../../img/examples_data_management_cifar_dataset.png)
## Data Ingestion
Now that a new dataset is registered, you can consume it!
The [data_ingestion.py](https://github.com/allegroai/clearml/blob/master/examples/datasets/data_ingestion.py) script
demonstrates data ingestion using the dataset created in the first script.
```python
dataset_name = "cifar_dataset"
dataset_project = "dataset_examples"
dataset_path = Dataset.get(
dataset_name=dataset_name,
dataset_project=dataset_project
).get_local_copy()
```
The script above gets the dataset and uses the [`Dataset.get_local_copy`](../../references/sdk/dataset.md#get_local_copy)
method to return a path to the cached, read-only local dataset.
If you need a modifiable copy of the dataset, use the following:
```python
Dataset.get(dataset_name, dataset_project).get_mutable_local_copy("path/to/download")
```
The script then trains a neural network to classify images using the dataset created above.

View File

@@ -55,7 +55,7 @@ The following are the `ClearMLLogger` parameters:
* `histogram_granularity` - Histogram sampling granularity. Default is 50.
### Logging
To log scalars, ignite engine's output and / or metrics, use the `OutputHandler`.
To log scalars, ignite engine's output and/or metrics, use the `OutputHandler`.
* Log training loss at each iteration:
```python

View File

@@ -72,7 +72,7 @@ Customize the columns on the tracking leaderboard by hiding any of the default c
## Step 4: Show Metrics or Hyperparameters
The leaderboard can contain any combination of metrics and hyperparameters. For each metric, choose whether to view the last (most
recent), minimum, and / or maximum values.
recent), minimum, and/or maximum values.
**To select metrics or hyperparameters:**

View File

@@ -9,7 +9,7 @@ and functionality for the following purposes:
* Integrating the powerful features of [Dataviews](dataviews.md) with an experiment
* [Annotating](webapp/webapp_datasets_frames.md#annotations) images and videos
Datasets consist of versions with SingleFrames and / or FrameGroups. Each Dataset can contain multiple versions, which
Datasets consist of versions with SingleFrames and/or FrameGroups. Each Dataset can contain multiple versions, which
can have multiple children that inherit their parent's contents.
Mask-labels can be defined globally, for a DatasetVersion. When defined this way, they will be applied to all masks in
@@ -158,7 +158,7 @@ versions are locked for further changes and which can be modified. See details [
Dataset versions can have either *Draft* or *Published* state.
A *Draft* version is editable, so frames can be added to and deleted and / or modified.
A *Draft* version is editable, so frames can be added to and deleted and/or modified.
A *Published* version is read-only, which ensures reproducible experiments and preserves the Dataset version contents.
Child versions can only be created from *Published* versions, as they inherit their predecessor version contents.
@@ -236,7 +236,7 @@ myDataset = DatasetVersion.create_snapshot(
#### Adding Metadata and Comments
Add a metadata dictionary and / or comment to a snapshot.
Add a metadata dictionary and/or comment to a snapshot.
For example:

View File

@@ -35,7 +35,7 @@ A frame filter contains the following criteria:
* Any combination of the following rules:
* ROI rule - Include or exclude frames containing at least one ROI with any combination of labels in the Dataset version.
Optionally, limit the number of matching ROIs (instances) per frame, and / or limit the confidence level of the label.
Optionally, limit the number of matching ROIs (instances) per frame, and/or limit the confidence level of the label.
For example: include frames containing two to four ROIs labeled `cat` and `dog`, with a confidence level from `0.8` to `1.0`.
* Frame rule - Filter by frame metadata key-value pairs, or ROI labels.
For example: if some frames contain the metadata
@@ -150,7 +150,7 @@ myDataView = DataView(iteration_order=IterationOrder.random, iteration_infinite=
### Adding Queries
To add a query to a DataView, use the [`DataView.add_query`](../references/hyperdataset/dataview.md#add_query) method
and specify Dataset versions, ROI and / or frame queries, and other criteria.
and specify Dataset versions, ROI and/or frame queries, and other criteria.
The `dataset_name` and `version_name` arguments specify the Dataset Version. The `roi_query` and `frame_query` arguments
specify the queries.
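A minimal sketch of such a query, assuming the Hyper-Dataset SDK is available as the `allegroai` package (the dataset, version, and label values are illustrative):
```python
from allegroai import DataView, IterationOrder

# Random, infinite iteration over the queried frames, as in the snippet above
myDataView = DataView(iteration_order=IterationOrder.random, iteration_infinite=True)

# Argument names follow the description above
myDataView.add_query(
    dataset_name="MyDataset",
    version_name="training version",
    roi_query="cat",
)
```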
@@ -158,7 +158,7 @@ specify the queries.
* `frame_query` must be assigned a Lucene query.
Multiple queries can be added to the same or different Dataset versions, each query with the same or different ROI
and / or frame queries.
and/or frame queries.
You can retrieve the Dataview frames using [`DataView.to_list`](../references/hyperdataset/dataview.md#to_list),
[`DataView.to_dict`](../references/hyperdataset/dataview.md#to_dict), or [`DataView.get_iterator`](../references/hyperdataset/dataview.md#get_iterator)
@@ -286,7 +286,7 @@ list_of_frames = myDataView.to_list()
#### Frame Queries
Use frame queries to filter frames by ROI labels and / or frame metadata key-value pairs that a frame must include or
Use frame queries to filter frames by ROI labels and/or frame metadata key-value pairs that a frame must include or
exclude for the Dataview to return the frame.
**Frame queries** match frame meta key-value pairs, ROI labels, or both.

View File

@@ -39,11 +39,11 @@ Customize the table using any of the following:
* Dynamic column order - Drag a column title to a different position.
* Resize columns - Drag the column separator to change the width of that column. Double-click the column separator for automatic fit.
* Filter by user and / or status - When a filter is applied to a column, its filter icon will appear with a highlighted
* Filter by user and/or status - When a filter is applied to a column, its filter icon will appear with a highlighted
dot on its top right (<img src="/docs/latest/icons/ico-filter-on.svg" alt="Filter on" className="icon size-md" /> ). To
clear all active filters, click <img src="/docs/latest/icons/ico-filter-reset.svg" alt="Clear filters" className="icon size-md" />
in the top right corner of the table.
* Sort columns - By experiment name and / or elapsed time since creation.
* Sort columns - By experiment name and/or elapsed time since creation.
:::note
The following Dataviews-table customizations are saved on a **per-project** basis:

View File

@@ -34,7 +34,7 @@ enables modifying [Dataviews](webapp_dataviews.md), including:
select **Import to current dataview** or **Import as aux dataview**.
:::note
After importing a Dataview, it can be renamed and / or removed.
After importing a Dataview, it can be renamed and/or removed.
:::
### Selecting Dataset Versions

View File

@@ -31,7 +31,7 @@ The **FILTERING** section lists the SingleFrame filters iterated by a Dataview,
Each frame filter is composed of:
* A Dataset version to input from
* ROI Rules for SingleFrames to include and / or exclude certain criteria.
* ROI Rules for SingleFrames to include and/or exclude certain criteria.
* Weights for debiasing input data.
Combinations of frame filters can implement complex querying.

View File

@@ -75,7 +75,7 @@ allowing the pipeline logic to reuse the step outputs.
### Callbacks
Callbacks can be utilized to control pipeline execution flow. A callback can be defined to be called before and / or after
Callbacks can be utilized to control pipeline execution flow. A callback can be defined to be called before and/or after
the execution of every task in a pipeline. Additionally, you can create customized, step-specific callbacks.
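A hedged sketch of attaching callbacks to a pipeline step (the project and task names are hypothetical, and the callback signatures shown here are assumptions):
```python
from clearml import PipelineController

def pre_step(pipeline, node, param_override):
    # Called before the step runs; returning False would skip the step
    print("About to run:", node.name)
    return True

def post_step(pipeline, node):
    # Called after the step completes
    print("Finished:", node.name)

pipe = PipelineController(name="callbacks example", project="examples", version="1.0.0")
pipe.add_step(
    name="stage_data",
    base_task_project="examples",      # hypothetical pre-existing task
    base_task_name="data prep",
    pre_execute_callback=pre_step,
    post_execute_callback=post_step,
)
pipe.start()
```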
### Pipeline Reusing

View File

@@ -28,7 +28,7 @@ The models table contains the following columns:
| **STARTED** | Elapsed time since the run started. To view the date and time of start, hover over the elapsed time. | Date-time |
| **UPDATED** | Elapsed time since the last update to the run. To view the date and time of update, hover over the elapsed time. | Date-time |
| **RUN TIME** | The current / total running time of the run. | Time |
| **_Metrics_** | Add metrics column (last, minimum, and / or maximum values). Available options depend upon the runs in the table. | Varies according to runs in table |
| **_Metrics_** | Add metrics column (last, minimum, and/or maximum values). Available options depend upon the runs in the table. | Varies according to runs in table |
| **_Hyperparameters_** | Add hyperparameters. Available options depend upon the runs in the table. | Varies according to runs in table |
## Customizing the Runs Table
@@ -57,8 +57,8 @@ on a column, and the relevant filter appears.
There are a few types of filters:
* Value set - Choose which values to include from a list of all values in the column
* Numerical ranges - Insert minimum and / or maximum value
* Date ranges - Insert starting and / or ending date and time
* Numerical ranges - Insert minimum and/or maximum value
* Date ranges - Insert starting and/or ending date and time
* Tags - Choose which tags to filter by from a list of all tags used in the column.
* Filter by multiple tag values using the **ANY** or **ALL** options, which correspond to the logical "AND" and "OR" respectively. These
options appear on the top of the tag list.

View File

@@ -236,7 +236,7 @@ To assist in experiment analysis, the comparison page supports:
experiment table with the currently compared experiments at the top.
1. Find the experiments to add by sorting and [filtering](webapp_exp_table.md#filtering-columns) the experiments with
the appropriate column header controls. Alternatively, use the search bar to find experiments by name.
1. Select experiments to include in the comparison (and / or clear the selection of any experiment you wish to remove).
1. Select experiments to include in the comparison (and/or clear the selection of any experiment you wish to remove).
1. Click **APPLY**.
![image](../img/webapp_compare_add.png)

View File

@@ -36,7 +36,7 @@ The experiments table default and customizable columns are described in the foll
| **ITERATION** | Last or most recent iteration of the experiment. | Default |
| **DESCRIPTION** | A description of the experiment. For cloned experiments, the description indicates it was auto generated with a timestamp. | Default (hidden) |
| **RUN TIME** | The current / total running time of the experiment. | Default (hidden) |
| **_Metrics_** | Add metrics column (last, minimum, and / or maximum values). The metrics depend upon the experiments in the table. See [adding metrics](#to-add-metrics). | Customizable |
| **_Metrics_** | Add metrics column (last, minimum, and/or maximum values). The metrics depend upon the experiments in the table. See [adding metrics](#to-add-metrics). | Customizable |
| **_Hyperparameters_** | Add hyperparameters. The hyperparameters depend upon the experiments in the table. See [adding hyperparameters](#to-add-hyperparameters). | Customizable |
@@ -61,7 +61,7 @@ Use experiments table customization for various use cases, including:
* Creating a [leaderboard](#creating-an-experiment-leaderboard) that will update in real time with experiment
performance, which can be shared and stored.
* Sorting models by metrics - Models are associated with the experiments that created them. For each metric, use the last
value, the minimal value, and / or the maximal value.
value, the minimal value, and/or the maximal value.
* Tracking hyperparameters - Track hyperparameters by adding them as columns, and applying filters and sorting.
Changes are persistent (cached in the browser), and represented in the URL so customized settings can be saved in a browser
@@ -81,17 +81,17 @@ all the experiments in the project. The customizations of these two views are sa
### Adding Metrics and / or Hyperparameters
### Adding Metrics and/or Hyperparameters
![Experiment table customization gif](../img/gif/webapp_exp_table_cust.gif)
Add metrics and / or hyperparameters columns to the experiments table. The metrics and hyperparameters depend upon the
Add metrics and/or hyperparameters columns to the experiments table. The metrics and hyperparameters depend upon the
experiments in the table.
#### To Add Metrics:
* Click <img src="/docs/latest/icons/ico-settings.svg" alt="Setting Gear" className="icon size-md" /> **>** **+ METRICS** **>** Expand a metric **>** Select the **LAST** (value),
**MIN** (minimal value), and / or **MAX** (maximal value) checkboxes.
**MIN** (minimal value), and/or **MAX** (maximal value) checkboxes.
#### To Add Hyperparameters:
@@ -112,8 +112,8 @@ on a column, and the relevant filter appears.
There are a few types of filters:
* Value set - Choose which values to include from a list of all values in the column
* Numerical ranges - Insert minimum and / or maximum value
* Date ranges - Insert starting and / or ending date and time
* Numerical ranges - Insert minimum and/or maximum value
* Date ranges - Insert starting and/or ending date and time
* Tags - Choose which tags to filter by from a list of all tags used in the column.
* Filter by multiple tag values using the **ANY** or **ALL** options, which correspond to the logical "AND" and "OR" respectively. These
options appear on the top of the tag list.

View File

@@ -60,7 +60,7 @@ User parameters are editable in any experiment, except experiments whose status
Select source code by changing any of the following:
* Repository, commit (select by ID, tag name, or choose the last commit in the branch), script, and/or working directory.
* Installed Python packages and / or versions - Edit or clear (remove) them all.
* Installed Python packages and/or versions - Edit or clear (remove) them all.
* Uncommitted changes - Edit or clear (remove) them all.
**To select different source code:**

View File

@@ -2,7 +2,7 @@
title: Model Details
---
In the models table, double-click on a model to view and / or modify the following:
In the models table, double-click on a model to view and/or modify the following:
* General model information
* Model configuration
* Model label enumeration

View File

@@ -223,7 +223,7 @@ The user group table lists all the active user groups. Each row includes a group
#### To edit a user group:
1. Hover over the user group's row on the table
1. Click the <img src="/docs/latest/icons/ico-edit.svg" alt="Edit Pencil" className="icon size-md" /> button
1. Edit the group's name and / or description
1. Edit the group's name and/or description
1. Edit group members (see details [here](#to-create-a-user-group))
1. Click **Save**
@@ -241,7 +241,7 @@ This feature is available under the ClearML Enterprise plan
:::
Workspace administrators can use the **Access Rules** page to manage workspace permissions, by specifying which users
and / or user groups have access permissions to the following workspace resources:
and/or user groups have access permissions to the following workspace resources:
* [Projects](../fundamentals/projects.md)
* [Tasks](../fundamentals/task.md)
@@ -260,7 +260,7 @@ Access privileges can be viewed, defined, and edited in the **Access Rules** tab
specific project or task), click the input box, and select the object from the list that appears. Filter the
list by typing part of the desired object name
1. Select the permission type - **Read Only** or **Read & Modify**
1. Assign users and / or [user groups](#user-groups) to be given access. Click the desired input box, and select the
1. Assign users and/or [user groups](#user-groups) to be given access. Click the desired input box, and select the
users / groups from the list that appears. Filter the list by typing part of the desired object name. To revoke
access, hover over a user's or group's row and click the <img src="/docs/latest/icons/ico-trash.svg" alt="Trash can" className="icon size-md" />
button

View File

@@ -128,7 +128,7 @@ module.exports = {
{'Automation': ['guides/automation/manual_random_param_search_example', 'guides/automation/task_piping']},
{'ClearML Task': ['guides/clearml-task/clearml_task_tutorial']},
{'ClearML Agent': ['guides/clearml_agent/executable_exp_containers', 'guides/clearml_agent/exp_environment_containers']},
{'Datasets': ['guides/datasets/data_man_cifar_classification', 'guides/datasets/data_man_python']},
{'Datasets': ['clearml_data/data_management_examples/data_man_cifar_classification', 'clearml_data/data_management_examples/data_man_python']},
{'Distributed': ['guides/distributed/distributed_pytorch_example', 'guides/distributed/subprocess_example']},
{'Docker': ['guides/docker/extra_docker_shell_script']},
{'Frameworks': [