Mirror of https://github.com/clearml/clearml-docs, synced 2025-02-24 21:14:37 +00:00

Small edits (#724)

This commit is contained in:
parent 4b02af91f7
commit 680bca6644

Changed files under docs/:
clearml_data, clearml_sdk, community.md, faq.md, fundamentals, guides, hyperdatasets,
integrations (autokeras.md, catboost.md, fastai.md, hydra.md, ignite.md, keras.md, keras_tuner.md, lightgbm.md, megengine.md, monai.md, optuna.md, pytorch.md, pytorch_lightning.md, scikit_learn.md, tensorboard.md, tensorboardx.md, tensorflow.md, transformers.md, xgboost.md, yolov5.md, yolov8.md),
model_registry.md, webapp
@@ -9,7 +9,7 @@ See [Hyper-Datasets](../hyperdatasets/overview.md) for ClearML's advanced querya
 `clearml-data` is a data management CLI tool that comes as part of the `clearml` python package. Use `clearml-data` to
 create, modify, and manage your datasets. You can upload your dataset to any storage service of your choice (S3 / GS /
-Azure / Network Storage) by setting the dataset’s upload destination (see [`--storage`](#upload)). Once you have uploaded
+Azure / Network Storage) by setting the dataset's upload destination (see [`--storage`](#upload)). Once you have uploaded
 your dataset, you can access it from any machine.
 
 The following page provides a reference to `clearml-data`'s CLI commands.
@@ -41,7 +41,7 @@ clearml-data create [-h] [--parents [PARENTS [PARENTS ...]]] [--project PROJECT]
 
 :::tip Dataset ID
-* For datasets created with `clearml` v1.6 or newer on ClearML Server v1.6 or newer, find the ID in the dataset version’s info panel in the [Dataset UI](../webapp/datasets/webapp_dataset_viewing.md).
+* For datasets created with `clearml` v1.6 or newer on ClearML Server v1.6 or newer, find the ID in the dataset version's info panel in the [Dataset UI](../webapp/datasets/webapp_dataset_viewing.md).
 For datasets created with earlier versions of `clearml`, or if using an earlier version of ClearML Server, find the ID in the task header of the [dataset task's info panel](../webapp/webapp_exp_track_visual.md).
 * clearml-data works in a stateful mode so once a new dataset is created, the following commands
 do not require the `--id` flag.
@@ -66,7 +66,7 @@ clearml-data add [-h] [--id ID] [--dataset-folder DATASET_FOLDER]
 |Name|Description|Optional|
 |---|---|---|
 |`--id` | Dataset's ID. Default: previously created / accessed dataset| <img src="/docs/latest/icons/ico-optional-yes.svg" alt="Yes" className="icon size-md center-md" /> |
-|`--files`| Files / folders to add. Items will be uploaded to the dataset’s designated storage. | <img src="/docs/latest/icons/ico-optional-yes.svg" alt="Yes" className="icon size-md center-md" /> |
+|`--files`| Files / folders to add. Items will be uploaded to the dataset's designated storage. | <img src="/docs/latest/icons/ico-optional-yes.svg" alt="Yes" className="icon size-md center-md" /> |
 |`--wildcard`| Add specific set of files, denoted by these wildcards. For example: `~/data/*.jpg ~/data/json`. Multiple wildcards can be passed. | <img src="/docs/latest/icons/ico-optional-yes.svg" alt="Yes" className="icon size-md center-md" /> |
 |`--links`| Files / folders link to add. Supports S3, GS, Azure links. Example: `s3://bucket/data` `azure://bucket/folder`. Items remain in their original location. | <img src="/docs/latest/icons/ico-optional-yes.svg" alt="Yes" className="icon size-md center-md" /> |
 |`--dataset-folder` | Dataset base folder to add the files to in the dataset. Default: dataset root| <img src="/docs/latest/icons/ico-optional-yes.svg" alt="Yes" className="icon size-md center-md" /> |
@@ -183,7 +183,7 @@ clearml-data sync [-h] [--id ID] [--dataset-folder DATASET_FOLDER] --folder FOLD
 |`--parents`|IDs of the dataset's parents (i.e. merge all parents). All modifications made to the folder since the parents were synced will be reflected in the dataset|<img src="/docs/latest/icons/ico-optional-yes.svg" alt="Yes" className="icon size-md center-md" />|
 |`--project`|If creating a new dataset, specify the dataset's project name|<img src="/docs/latest/icons/ico-optional-yes.svg" alt="Yes" className="icon size-md center-md" />|
 |`--name`|If creating a new dataset, specify the dataset's name|<img src="/docs/latest/icons/ico-optional-yes.svg" alt="Yes" className="icon size-md center-md" />|
-|`--version`|Specify the dataset’s version using the [semantic versioning](https://semver.org) scheme. Default: `1.0.0`|<img src="/docs/latest/icons/ico-optional-yes.svg" alt="Yes" className="icon size-md center-md" />|
+|`--version`|Specify the dataset's version using the [semantic versioning](https://semver.org) scheme. Default: `1.0.0`|<img src="/docs/latest/icons/ico-optional-yes.svg" alt="Yes" className="icon size-md center-md" />|
 |`--tags`|Dataset user tags|<img src="/docs/latest/icons/ico-optional-yes.svg" alt="Yes" className="icon size-md center-md" />|
 |`--skip-close`|Do not auto close dataset after syncing folders|<img src="/docs/latest/icons/ico-optional-yes.svg" alt="Yes" className="icon size-md center-md" />|
 |`--chunk-size`| Set dataset artifact upload chunk size in MB. Default 512, (pass -1 for a single chunk). Example: 512, dataset will be split and uploaded in 512 MB chunks. |<img src="/docs/latest/icons/ico-optional-yes.svg" alt="Yes" className="icon size-md center-md" />|
@@ -233,7 +233,7 @@ clearml-data set-description [-h] [--id ID] [--description DESCRIPTION]
 
 |Name|Description|Optional|
 |---|---|---|
-|`--id`|Dataset’s ID|<img src="/docs/latest/icons/ico-optional-no.svg" alt="No" className="icon size-md center-md" />|
+|`--id`|Dataset's ID|<img src="/docs/latest/icons/ico-optional-no.svg" alt="No" className="icon size-md center-md" />|
 |`--description`|Description to be set|<img src="/docs/latest/icons/ico-optional-no.svg" alt="No" className="icon size-md center-md" />|
 
 
@@ -51,7 +51,7 @@ dataset = Dataset.create(
 ```
 
 :::tip Locating Dataset ID
-For datasets created with `clearml` v1.6 or newer on ClearML Server v1.6 or newer, find the ID in the dataset version’s info panel in the [Dataset UI](../webapp/datasets/webapp_dataset_viewing.md).
+For datasets created with `clearml` v1.6 or newer on ClearML Server v1.6 or newer, find the ID in the dataset version's info panel in the [Dataset UI](../webapp/datasets/webapp_dataset_viewing.md).
 For datasets created with earlier versions of `clearml`, or if using an earlier version of ClearML Server, find the ID in the task header of the [dataset task's info panel](../webapp/webapp_exp_track_visual.md).
 :::
 
@@ -64,7 +64,7 @@ and auto-increments the version number.
 Use the `output_uri` parameter to specify a network storage target to upload the dataset files, and associated information
 (such as previews) to (e.g. `s3://bucket/data`, `gs://bucket/data`, `azure://bucket/data`, `file:///mnt/share/data`).
 By default, the dataset uploads to ClearML's file server. The `output_uri` parameter of the [`Dataset.upload`](#uploading-files)
-method overrides this parameter’s value.
+method overrides this parameter's value.
 
 The created dataset inherits the content of the `parent_datasets`. When multiple dataset parents are listed,
 they are merged in order of specification. Each parent overrides any overlapping files from a previous parent dataset.
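The parent-merge semantics described in this hunk (parents merged in listed order, later parents overriding overlapping files) can be sketched with a small helper. The `merge_parent_file_lists` name and the path-to-hash dict representation are illustrative only, not part of the ClearML API:

```python
def merge_parent_file_lists(parents):
    """Simulate how a child dataset inherits content from `parent_datasets`:
    parents are merged in order of specification, and each parent overrides
    any overlapping files from a previous parent dataset."""
    merged = {}
    for parent in parents:  # each parent: {relative_path: content_hash}
        merged.update(parent)  # later parents win on overlapping paths
    return merged

# Hypothetical live usage (requires a configured ClearML server; names are examples):
# from clearml import Dataset
# dataset = Dataset.create(
#     dataset_project="Datasets Demo",
#     dataset_name="merged",
#     parent_datasets=["<parent_id_1>", "<parent_id_2>"],
#     output_uri="s3://bucket/data",
# )
```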
@@ -99,7 +99,7 @@ In addition, the target storage location for the squashed dataset can be specifi
 Once a dataset has been created and uploaded to a server, the dataset can be accessed programmatically from anywhere.
 
 Use the [`Dataset.get`](../references/sdk/dataset.md#datasetget) class method to access a specific Dataset object, by
-providing any of the dataset’s following attributes: dataset ID, project, name, tags, and or version. If multiple
+providing any of the dataset's following attributes: dataset ID, project, name, tags, and or version. If multiple
 datasets match the query, the most recent one is returned.
 
 ```python
@@ -117,10 +117,10 @@ dataset = Dataset.get(
 Pass `auto_create=True`, and a dataset will be created on-the-fly with the input attributes (project name, dataset name,
 and tags) if no datasets match the query.
 
-In cases where you use a dataset in a task (e.g. consuming a dataset), you can have its ID stored in the task’s
-hyperparameters: pass `alias=<dataset_alias_string>`, and the task using the dataset will store the dataset’s ID in the
+In cases where you use a dataset in a task (e.g. consuming a dataset), you can have its ID stored in the task's
+hyperparameters: pass `alias=<dataset_alias_string>`, and the task using the dataset will store the dataset's ID in the
 `dataset_alias_string` parameter under the `Datasets` hyperparameters section. This way you can easily track which
-dataset the task is using. If you use `alias` with `overridable=True`, you can override the dataset ID from the UI’s
+dataset the task is using. If you use `alias` with `overridable=True`, you can override the dataset ID from the UI's
 **CONFIGURATION > HYPERPARAMETERS >** `Datasets` section, allowing you to change the dataset used when running a task
 remotely.
 
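The aliasing behavior this hunk describes (the consuming task records the dataset ID under its `Datasets` hyperparameter section) can be illustrated with a pure-Python sketch; `record_dataset_alias` is a made-up helper that mimics the bookkeeping, not ClearML code:

```python
def record_dataset_alias(hyperparams, alias, dataset_id):
    """Mimic how passing `alias=<name>` to Dataset.get stores the consumed
    dataset's ID under the task's `Datasets` hyperparameter section."""
    hyperparams.setdefault("Datasets", {})[alias] = dataset_id
    return hyperparams

# Hypothetical live usage (requires a configured ClearML server; names are examples):
# from clearml import Dataset
# ds = Dataset.get(
#     dataset_project="Datasets Demo",
#     dataset_name="images",
#     alias="train_data",     # ID lands under CONFIGURATION > HYPERPARAMETERS > Datasets
#     auto_create=True,
# )
```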
@@ -135,8 +135,8 @@ of an entire dataset. This method downloads the dataset to a specific folder (no
 the specified folder already has contents, specify whether to overwrite its contents with the dataset contents, using the `overwrite` parameter.
 
 ClearML supports parallel downloading of datasets. Use the `max_workers` parameter of the `Dataset.get_local_copy` or
-`Dataset.get_mutable_copy` methods to specify the number of threads to use when downloading the dataset. By default, it’s
-the number of your machine’s logical cores.
+`Dataset.get_mutable_copy` methods to specify the number of threads to use when downloading the dataset. By default, it's
+the number of your machine's logical cores.
 
 ## Modifying Datasets
 
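The `max_workers` default described in this hunk (the machine's logical core count) can be made explicit with a tiny helper; `default_download_workers` is an illustrative name, not a ClearML function:

```python
import os


def default_download_workers(max_workers=None):
    """Resolve the thread count the way the docs above describe:
    an explicit `max_workers` wins, otherwise fall back to the
    machine's logical core count."""
    return max_workers if max_workers is not None else os.cpu_count()

# Hypothetical live usage (requires a configured ClearML server):
# from clearml import Dataset
# path = Dataset.get(dataset_id="<dataset_id>").get_local_copy(
#     max_workers=default_download_workers()
# )
```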
@@ -225,7 +225,7 @@ By default, the dataset uploads to ClearML's file server. This target storage ov
 [`Dataset.create`](#creating-datasets) method.
 
 ClearML supports parallel uploading of datasets. Use the `max_workers` parameter to specify the number of threads to use
-when uploading the dataset. By default, it’s the number of your machine’s logical cores.
+when uploading the dataset. By default, it's the number of your machine's logical cores.
 
 Dataset files must be uploaded before a dataset is [finalized](#finalizing-a-dataset).
 
@@ -317,9 +317,9 @@ You can enable offline mode in one of the following ways:
 
 * Before creating a dataset, set `CLEARML_OFFLINE_MODE=1`
 
-All the dataset’s information is zipped and is saved locally.
+All the dataset's information is zipped and is saved locally.
 
-The dataset task's console output displays the task’s ID and a path to the local dataset folder:
+The dataset task's console output displays the task's ID and a path to the local dataset folder:
 
 ```
 ClearML Task: created new task id=offline-372657bb04444c25a31bc6af86552cc9
@@ -84,7 +84,7 @@ Now that a new dataset is registered, you can consume it!
 The [data_ingestion.py](https://github.com/allegroai/clearml/blob/master/examples/datasets/data_ingestion.py) script
 demonstrates data ingestion using the dataset created in the first script.
 
-The following script gets the dataset and uses [`Dataset.get_local_copy`](../../references/sdk/dataset.md#get_local_copy)
+The following script gets the dataset and uses [`Dataset.get_local_copy()`](../../references/sdk/dataset.md#get_local_copy)
 to return a path to the cached, read-only local dataset.
 
 ```python
@@ -175,7 +175,7 @@ See full `Task.init` reference [here](../references/sdk/task.md#taskinit).
 You can continue the execution of a previously run task using the `continue_last_task` parameter of the `Task.init`
 method. This will retain all of its previous artifacts / models / logs.
 
-The task will continue reporting its outputs based on the iteration in which it had left off. For example: a task’s last
+The task will continue reporting its outputs based on the iteration in which it had left off. For example: a task's last
 train/loss scalar reported was for iteration 100, when continued, the next report will be as iteration 101.
 
 :::note Reproducibility
@@ -216,8 +216,8 @@ task = Task.create(
 See full `Task.create` reference [here](../references/sdk/task.md#taskcreate).
 
 ## Tracking Task Progress
-Track a task’s progress by setting the task progress property using the [`Task.set_progress`](../references/sdk/task.md#set_progress) method.
-Set a task’s progress to a numeric value between 0 - 100. Access the task’s current progress, using the
+Track a task's progress by setting the task progress property using the [`Task.set_progress`](../references/sdk/task.md#set_progress) method.
+Set a task's progress to a numeric value between 0 - 100. Access the task's current progress, using the
 [`Task.get_progress`](../references/sdk/task.md#get_progress) method.
 
 ```python
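Since `Task.set_progress` expects a numeric value between 0 and 100, a caller computing progress from loop counters may want to clamp it first. This is a sketch around the documented API; the `clamp_progress` helper itself is illustrative:

```python
def clamp_progress(value):
    """Clamp a computed progress value into the 0-100 range that
    Task.set_progress expects."""
    return max(0, min(100, int(value)))

# Hypothetical live usage (requires a configured ClearML server):
# from clearml import Task
# task = Task.init(project_name="examples", task_name="progress demo")
# for iteration in range(total := 250):
#     task.set_progress(clamp_progress(100 * (iteration + 1) / total))
```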
@@ -230,8 +230,8 @@ print(task.get_progress())
 task.set_progress(100)
 ```
 
-While the task is running, the WebApp will show the task’s progress indication in the experiment table, next to the
-task’s status. If a task failed or was aborted, you can view how much progress it had made.
+While the task is running, the WebApp will show the task's progress indication in the experiment table, next to the
+task's status. If a task failed or was aborted, you can view how much progress it had made.
 
 <div class="max-w-50">
 
@@ -239,7 +239,7 @@ task’s status. If a task failed or was aborted, you can view how much progress
 
 </div>
 
-Additionally, you can view a task’s progress in its [INFO](../webapp/webapp_exp_track_visual.md#general-information) tab
+Additionally, you can view a task's progress in its [INFO](../webapp/webapp_exp_track_visual.md#general-information) tab
 in the WebApp.
 
 
@@ -478,7 +478,7 @@ Function tasks must be created from within a regular task, created by calling `T
 ClearML supports distributed remote execution through multiple worker nodes using [`Task.launch_multi_node()`](../references/sdk/task.md#launch_multi_node).
 This method creates multiple copies of a task and enqueues them for execution.
 
-Each copy of the task is called a node. The original task that initiates the nodes’ execution is called the master node.
+Each copy of the task is called a node. The original task that initiates the nodes' execution is called the master node.
 
 ```python
 Task = task.init(task_name ="my_task", project_name="my_project")
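The master/worker topology this hunk describes can be sketched as follows. Treating the initiating task as rank 0 is an assumption of this sketch based on the master-node description above; `node_roles` is not a ClearML function:

```python
def node_roles(total_num_nodes):
    """Sketch of the multi-node layout: one master node (assumed rank 0,
    the original task that initiates execution) plus worker copies."""
    if total_num_nodes < 1:
        raise ValueError("need at least one node")
    return ["master" if rank == 0 else "worker" for rank in range(total_num_nodes)]

# Hypothetical live usage (requires a configured ClearML server and queue):
# from clearml import Task
# task = Task.init(task_name="my_task", project_name="my_project")
# config = task.launch_multi_node(total_num_nodes=4)
```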
@@ -611,7 +611,7 @@ Upload the execution data that the Task captured offline to the ClearML Server u
 ```
 
 You can also use the offline task to update the execution of an existing previously executed task by providing the
-previously executed task’s ID. To avoid overwriting metrics, you can specify the initial iteration offset with
+previously executed task's ID. To avoid overwriting metrics, you can specify the initial iteration offset with
 `iteration_offset`.
 
 ```python
@@ -660,7 +660,7 @@ For example:
 task.upload_artifact(name='link', artifact_object='azure://bucket/folder')
 ```
 
-* Serialize and upload a Python object. ClearML automatically chooses the file format based on the object’s type, or you
+* Serialize and upload a Python object. ClearML automatically chooses the file format based on the object's type, or you
 can explicitly specify the format as follows:
 * dict - `.json` (default), `.yaml`
 * pandas.DataFrame - `.csv.gz` (default), `.parquet`, `.feather`, `.pickle`
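The per-type default formats listed in this hunk can be captured in a small lookup. The dict and DataFrame defaults come from the list above; the pickle fallback for other objects, and the helper itself, are assumptions of this sketch:

```python
def default_artifact_extension(obj):
    """Default serialization format per object type, following the list
    above (dict -> .json, pandas.DataFrame -> .csv.gz). The .pkl fallback
    for other objects is an assumption of this sketch."""
    defaults = {"dict": ".json", "DataFrame": ".csv.gz"}
    return defaults.get(type(obj).__name__, ".pkl")

# Hypothetical live usage (requires a configured ClearML server):
# task.upload_artifact(name="config", artifact_object={"lr": 0.001})  # stored as .json
```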
@@ -685,7 +685,7 @@ For example:
 See more details in the [Artifacts Reporting example](../guides/reporting/artifacts.md) and in the [SDK reference](../references/sdk/task.md#upload_artifact).
 
 ### Using Artifacts
-A task's artifacts are accessed through the task’s *artifact* property which lists the artifacts’ locations.
+A task's artifacts are accessed through the task's *artifact* property which lists the artifacts' locations.
 
 The artifacts can subsequently be retrieved from their respective locations by using:
 * `get_local_copy()` - Downloads the artifact and caches it for later use, returning the path to the cached copy.
@@ -742,8 +742,8 @@ Models can also be manually updated independently, without any task. See [Output
 
 ### Using Models
 
-Accessing a task’s previously trained model is quite similar to accessing task artifacts. A task's models are accessed
-through the task’s models property which lists the input models and output model snapshots’ locations.
+Accessing a task's previously trained model is quite similar to accessing task artifacts. A task's models are accessed
+through the task's models property which lists the input models and output model snapshots' locations.
 
 The models can subsequently be retrieved from their respective locations by using `get_local_copy()` which downloads the
 model and caches it for later use, returning the path to the cached copy (if using TensorFlow, the snapshots are stored
@@ -790,7 +790,7 @@ using [clearml-agent](../clearml_agent.md) to execute code.
 To define parameters manually use the [`Task.set_parameters`](../references/sdk/task.md#set_parameters) method to specify
 name-value pairs in a parameter dictionary.
 
-Parameters can be designated into sections: specify a parameter’s section by prefixing its name, delimited with a slash
+Parameters can be designated into sections: specify a parameter's section by prefixing its name, delimited with a slash
 (i.e. `section_name/parameter_name:value`). `General` is the default section.
 
 Call the [`set_parameter`](../references/sdk/task.md#set_parameter) method to set a single parameter.
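The `section_name/parameter_name` convention described in this hunk can be demonstrated with a pure helper that splits names the same way; `split_param` is illustrative, with `General` as the documented default section:

```python
def split_param(name, default_section="General"):
    """Split a 'Section/param' name into (section, param), using the
    slash-delimited convention described above; names without a slash
    fall into the default 'General' section."""
    if "/" in name:
        section, param = name.split("/", 1)
        return section, param
    return default_section, name

# Hypothetical live usage (requires a configured ClearML server):
# from clearml import Task
# task = Task.init(project_name="examples", task_name="params demo")
# task.set_parameters({"Args/batch_size": 32, "Args/lr": 0.001, "epochs": 10})
```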
@@ -847,7 +847,7 @@ The parameters and their section names are case-sensitive
 ### Tracking Python Objects
 
 ClearML can track Python objects (such as dictionaries and custom classes) as they evolve in your code, and log them to
-your task’s configuration using the [`Task.connect`](../references/sdk/task.md#connect) method. Once objects are connected
+your task's configuration using the [`Task.connect`](../references/sdk/task.md#connect) method. Once objects are connected
 to a task, ClearML automatically logs all object elements (e.g. class members, dictionary key-values pairs).
 
 ```python
@@ -892,7 +892,7 @@ config_file_yaml = task.connect_configuration(
 
 
 ### User Properties
-A task’s user properties do not impact task execution, so you can add / modify the properties at any stage. Add user
+A task's user properties do not impact task execution, so you can add / modify the properties at any stage. Add user
 properties to a task with the [`Task.set_user_properties`](../references/sdk/task.md#set_user_properties) method.
 
 ```python
@@ -24,9 +24,9 @@ Follow **ClearML** on [LinkedIn](https://www.linkedin.com/company/clearml).
 
 ## Guidelines for Contributing
 
-Firstly, we thank you for taking the time to contribute!
+Firstly, thank you for taking the time to contribute!
 
-Contribution comes in many forms:
+Contributions come in many forms:
 
 * Reporting [issues](https://github.com/allegroai/clearml/issues) you've come upon
 * Participating in issue discussions in the [issue tracker](https://github.com/allegroai/clearml/issues) and the
@@ -286,7 +286,7 @@ To fix this, the registered URL of each model needs to be replaced with its curr
 
 This message is only a warning. ClearML not only detects your current repository and git commit, but also warns you
 if you are using uncommitted code. ClearML does this because uncommitted code means this experiment will be difficult
-to reproduce. You can see uncommitted changes in the ClearML Web UI, in the EXECUTION tab of the experiment info panel.
+to reproduce. You can see uncommitted changes in the ClearML Web UI, in the **EXECUTION** tab of the experiment info panel.
 
 #### I do not use argparse for hyperparameters. Do you have a solution? <a id="dont-want-argparser"></a>
 
@@ -30,7 +30,7 @@ pointing directly to the original training task's model.
 ### Output Models
 
 ClearML stores training results as output models. The `OutputModel` object is instantiated with a task object as an
-argument (see [`task`](../references/sdk/model_outputmodel.md) parameter), so it's automatically registered as the Task’s
+argument (see [`task`](../references/sdk/model_outputmodel.md) parameter), so it's automatically registered as the Task's
 output model. Since OutputModel objects are connected to tasks, the models are traceable in experiments.
 
 Output models are read-write so weights can be updated throughout training. Additionally, users can specify a model's
@@ -43,7 +43,7 @@ The preceding diagram demonstrates the typical flow of hyperparameter optimizati
 
 ### Supported Optimizers
 
-The `HyperParameterOptimizer` class contains ClearML’s hyperparameter optimization modules. Its modular design enables
+The `HyperParameterOptimizer` class contains ClearML's hyperparameter optimization modules. Its modular design enables
 using different optimizers, including existing software frameworks, enabling simple, accurate, and fast hyperparameter
 optimization.
 
@@ -9,7 +9,7 @@ ClearML supports tracking and managing hyperparameters in each experiment and pr
 optimization module](hpo.md). With ClearML's logging and tracking capabilities, experiments can be reproduced, and their
 hyperparameters and results can be saved and compared, which is key to understanding model behavior.
 
-ClearML lets you easily try out different hyperparameter values without changing your original code. ClearML’s [execution
+ClearML lets you easily try out different hyperparameter values without changing your original code. ClearML's [execution
 agent](../clearml_agent.md) will override the original values with any new ones you specify through the web UI (see
 [Configuration](../webapp/webapp_exp_tuning.md#configuration) in the Tuning Experiments page). It's also possible to
 programmatically set experiment parameters.
@@ -93,7 +93,7 @@ making it easier to search / filter experiments. Add user properties to an exper
 
 ### Accessing Parameters
 
-ClearML provides methods to directly access a task’s logged parameters.
+ClearML provides methods to directly access a task's logged parameters.
 
 To get all of a task's parameters and properties (hyperparameters, configuration objects, and user properties), use the
 [`Task.get_parameters`](../references/sdk/task.md#get_parameters) method, which will return a dictionary with the parameters,
@@ -63,7 +63,7 @@ The captured [execution output](../webapp/webapp_exp_track_visual.md#experiment-
 To view a more in depth description of each task section, see [Tracking Experiments and Visualizing Results](../webapp/webapp_exp_track_visual.md).
 
 ### Execution Configuration
-ClearML logs a task’s hyperparameters specified as command line arguments, environment or code level variables. This
+ClearML logs a task's hyperparameters specified as command line arguments, environment or code level variables. This
 allows experiments to be reproduced, and their hyperparameters and results can be saved and compared, which is key to
 understanding model behavior.
 
@@ -82,7 +82,7 @@ See [Hyperparameters](hyperparameters.md) for more information.
 ClearML allows easy storage of experiments' output products as artifacts that can later be accessed easily and used,
 through the [web UI](../webapp/webapp_overview.md) or programmatically.
 
-ClearML provides methods to easily track files generated throughout your experiments’ execution such as:
+ClearML provides methods to easily track files generated throughout your experiments' execution such as:
 
 - Numpy objects
 - Pandas DataFrames
@@ -91,7 +91,7 @@ ClearML provides methods to easily track files generated throughout your experim
 - Python objects
 - and more!
 
-Most importantly, ClearML also logs experiments’ input and output models as well as interim model snapshots (see
+Most importantly, ClearML also logs experiments' input and output models as well as interim model snapshots (see
 [Models](artifacts.md)).
 
 #### Logging Artifacts
@@ -121,7 +121,7 @@ Available task types are:
 * *training* (default) - Training a model
 * *testing* - Testing a component, for example model performance
 * *inference* - Model inference job (e.g. offline / batch model execution)
-* *controller* - A task that lays out the logic for other tasks’ interactions, manual or automatic (e.g. a pipeline
+* *controller* - A task that lays out the logic for other tasks' interactions, manual or automatic (e.g. a pipeline
 controller)
 * *optimizer* - A specific type of controller for optimization tasks (e.g. [hyperparameter optimization](hpo.md))
 * *service* - Long lasting or recurring service (e.g. server cleanup, auto ingress, sync services etc.)
@@ -20,7 +20,7 @@ code.
 ClearML logs everything needed to reproduce your experiment and its environment (uncommitted changes, used packages, and
 more), making it easy to reproduce your experiment's execution environment using ClearML.
 
-You can reproduce the execution environment of any experiment you’ve run with ClearML on any workload:
+You can reproduce the execution environment of any experiment you've run with ClearML on any workload:
 
 1. Go to the experiment page of the task you want to reproduce in the [ClearML WebApp](../../webapp/webapp_overview.md),
 :::tip
@@ -8,7 +8,7 @@ code. When ClearML is installed in an environment, the Trainer by default uses t
 so ClearML automatically logs Transformers models, parameters, scalars, and more.
 
 When the example runs, it creates a ClearML task called `Trainer` in the `HuggingFace Transformers` projects. To change
-the task’s name or project, use the `CLEARML_PROJECT` and `CLEARML_TASK` environment variables respectively.
+the task's name or project, use the `CLEARML_PROJECT` and `CLEARML_TASK` environment variables respectively.
 
 For more information about integrating ClearML into your Transformers code, see [HuggingFace Transformers](../../../integrations/transformers.md).
 
@@ -123,7 +123,7 @@ Dataset.delete(
 ```
 
 This supports deleting sources located in AWS S3, GCP, and Azure Storage (not local storage). The `delete_sources`
-parameter is ignored if `delete_all_versions` is `False`. You can view the deletion process’ progress by passing
+parameter is ignored if `delete_all_versions` is `False`. You can view the deletion process' progress by passing
 `show_progress=True` (`tqdm` required).
 
 ### Tagging Datasets
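The interaction between the two flags in this hunk (`delete_sources` is ignored when `delete_all_versions` is `False`) can be expressed as a one-line predicate; `effective_delete_sources` is an illustrative helper, not part of the ClearML API:

```python
def effective_delete_sources(delete_sources, delete_all_versions):
    """Source files are only actually deleted when both flags hold:
    `delete_sources` is ignored if `delete_all_versions` is False."""
    return bool(delete_sources and delete_all_versions)

# Hypothetical live usage (requires a configured ClearML server; destructive!):
# from clearml import Dataset
# Dataset.delete(
#     dataset_id="<dataset_id>",
#     delete_all_versions=True,
#     delete_sources=True,
#     show_progress=True,
# )
```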
@@ -26,7 +26,7 @@ In the UI, you can view the mapping in a dataset version's [Metadata](webapp/web
 
 
 
-When viewing a frame with a mask corresponding with the version’s mask-label mapping, the UI arbitrarily assigns a color
+When viewing a frame with a mask corresponding with the version's mask-label mapping, the UI arbitrarily assigns a color
 to each label. The color assignment can be [customized](webapp/webapp_datasets_frames.md#labels).
 
 For example:
@@ -34,7 +34,7 @@ For example:
 
 
 
-* Frame image with the semantic segmentation mask enabled. Labels are applied according to the dataset version’s
+* Frame image with the semantic segmentation mask enabled. Labels are applied according to the dataset version's
 mask-label mapping:
 
 
@@ -66,7 +66,7 @@ The frame's sources array contains a masks list of dictionaries that looks somet
 }
 ```
 
-The masks dictionary includes the frame's masks’ URIs and IDs.
+The masks dictionary includes the frame's masks' URIs and IDs.
 
 ## Alpha Channel Masks
 For alpha channel, mask RGB pixel values are interpreted as opacity values so that when the mask is applied, only the
@@ -133,10 +133,10 @@ version.set_masks_labels(
 )
 ```
 
-The relevant label is applied to all masks in the version according to the version’s mask-label mapping dictionary.
+The relevant label is applied to all masks in the version according to the version's mask-label mapping dictionary.
 
 ### Registering Frames with Multiple Masks
-Frames can contain multiple masks. To add multiple masks, use the SingleFrame’s `masks_source` property. Input one of
+Frames can contain multiple masks. To add multiple masks, use the SingleFrame's `masks_source` property. Input one of
 the following:
 * A dictionary with mask string ID keys and mask URI values
 * A list of mask URIs. Number IDs are automatically assigned to the masks ("00", "01", etc.)
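The two accepted `masks_source` shapes in this hunk (an ID-to-URI dict, or a URI list with auto-assigned "00", "01", ... IDs) can be normalized with a small sketch; `normalize_masks_source` is an illustrative helper mimicking the documented behavior, not Hyper-Dataset SDK code:

```python
def normalize_masks_source(masks_source):
    """Normalize a masks_source value to an {id: uri} dict: dict input is
    kept as-is, while list items get zero-padded string IDs ("00", "01", ...)
    as described above."""
    if isinstance(masks_source, dict):
        return dict(masks_source)
    return {"%02d" % index: uri for index, uri in enumerate(masks_source)}

# Hypothetical usage when registering a frame (requires the Hyper-Datasets SDK):
# frame.masks_source = ["s3://bucket/masks/a.png", "s3://bucket/masks/b.png"]
```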
@@ -103,7 +103,7 @@ The panel below describes the details contained within a `frame`:
 
 :::info
 The `mask` dictionary is deprecated. Mask labels and their associated pixel values are now stored in the dataset
-version’s metadata. See [Masks](masks.md).
+version's metadata. See [Masks](masks.md).
 :::
 
 * `poly` (*[int]*) - Bounding area vertices.
@@ -9,7 +9,7 @@ Use annotation tasks to efficiently organize the annotation of frames in Dataset
 
 
 
-Click on an annotation task card to open the frame viewer, where you can view the task’s frames and annotate them.
+Click on an annotation task card to open the frame viewer, where you can view the task's frames and annotate them.
 
 ## Annotation Task Actions
 Click <img src="/docs/latest/icons/ico-bars-menu.svg" alt="Menu" className="icon size-md space-sm" /> on the top right
@@ -18,7 +18,7 @@ of an annotation task card to open its context menu and access annotation task a
 
 
 * **Annotate** - Go to annotation task frame viewer
-* **Info** - View annotation task’s definitions: dataset versions, filters, and frame iteration specification
+* **Info** - View annotation task's definitions: dataset versions, filters, and frame iteration specification
 * **Complete** - Mark annotation task as Completed
 * **Delete** - Delete annotation task
 
@@ -112,11 +112,11 @@ or the arrow keys on the keyboard). Closing the frame editor will prompt you to
| Icon (when applicable) | Action | Description |
|---|---|---|
|| Move annotation | Click on a bounded area and drag it. |
|| Resize annotation| Select an annotation, then click on a bounded area’s vertex and drag it. |
|| Resize annotation| Select an annotation, then click on a bounded area's vertex and drag it. |
|<img src="/docs/latest/icons/ico-metadata.svg" alt="edit metadata" className="icon size-md space-sm" />|Edit metadata|Hover over an annotation in the list and click the icon to open the edit window. Input the metadata dictionary in JSON format. This metadata is specific to the selected annotation, not the entire frame.|
|<img src="/docs/latest/icons/ico-lock-open.svg" alt="Lock annotation" className="icon size-md space-sm" />|Lock / Unlock annotation |Click the button on a specific annotation to make it uneditable. You can also click the button on top of the annotations list to lock all annotations in the frame.|
|<img src="/docs/latest/icons/ico-trash.svg" alt="Trash" className="icon size-sm space-sm" />|Delete annotation|Click the annotation or bounded area in the frame and then click the button to delete the annotation.|
|<img src="/docs/latest/icons/ico-show.svg" alt="Eye Show All" className="icon size-md space-sm" />|Show/hide all annotations |Click the button to view the frame without annotations. When annotations are hidden, they can’t be modified. |
|<img src="/docs/latest/icons/ico-show.svg" alt="Eye Show All" className="icon size-md space-sm" />|Show/hide all annotations |Click the button to view the frame without annotations. When annotations are hidden, they can't be modified. |
||Delete label |In the relevant annotation, click **x** on the label you want to remove.|

### Frame Labels
@@ -9,10 +9,10 @@ or in List view <img src="/docs/latest/icons/ico-flat-view.svg" alt="List view"
view, all hyper-datasets are shown side-by-side. In Project view, hyper-datasets are organized according to their projects, and
top-level projects are displayed. Click on a project card to view the project's hyper-datasets.

Click on a Hyper-Dataset card to open the dataset’s [version list](webapp_datasets_versioning.md), where you can view
Click on a Hyper-Dataset card to open the dataset's [version list](webapp_datasets_versioning.md), where you can view
and manage the dataset versions' lineage and contents.

Filter the hyper-datasets to find the one you’re looking for more easily. These filters can be applied by clicking <img src="/docs/latest/icons/ico-filter-off.svg" alt="Filter" className="icon size-md" />:
Filter the hyper-datasets to find the one you're looking for more easily. These filters can be applied by clicking <img src="/docs/latest/icons/ico-filter-off.svg" alt="Filter" className="icon size-md" />:
* My Work - Show only hyper-datasets that you created
* Tags - Choose which tags to filter by from a list of tags used in the hyper-datasets.
* Filter by multiple tag values using the **ANY** or **ALL** options, which correspond to the logical "AND" and "OR"
@@ -24,7 +24,7 @@ Filter the hyper-datasets to find the one you’re looking for more easily. Thes

## Project Cards

In Project view, project cards display a project’s summarized hyper-dataset information:
In Project view, project cards display a project's summarized hyper-dataset information:

<div class="max-w-50">

@@ -76,7 +76,7 @@ of a dataset card to open its context menu and access dataset actions:

</div>

* **Rename** - Change the dataset’s name
* **Rename** - Change the dataset's name
* **Add Tag** - Add label to the dataset to help easily classify groups of datasets.
* **Edit Metadata** - Modify dataset-level metadata. This will open the metadata edit window, where you can edit the section

@@ -177,11 +177,11 @@ or the arrow keys on the keyboard). Closing the frame editor will prompt you to
| Icon (when applicable) | Action | Description |
|---|---|---|
|| Move annotation | Click on a bounded area and drag it. |
|| Resize annotation| Select an annotation, then click on a bounded area’s vertex and drag it. |
|| Resize annotation| Select an annotation, then click on a bounded area's vertex and drag it. |
|<img src="/docs/latest/icons/ico-metadata.svg" alt="edit metadata" className="icon size-md space-sm" />|Edit metadata|Hover over an annotation in the list and click the icon to open the edit window. Input the metadata dictionary in JSON format. This metadata is specific to the selected annotation, not the entire frame.|
|<img src="/docs/latest/icons/ico-lock-open.svg" alt="Lock annotation" className="icon size-md space-sm" />|Lock / Unlock annotation |Click the button on a specific annotation to make it uneditable. You can also click the button on top of the annotations list to lock all annotations in the frame.|
|<img src="/docs/latest/icons/ico-trash.svg" alt="Trash" className="icon size-md space-sm" />|Delete annotation|Click the annotation or bounded area in the frame and then click the button to delete the annotation.|
|<img src="/docs/latest/icons/ico-show.svg" alt="Eye Show All" className="icon size-md space-sm" />|Show/hide all annotations |Click the button to view the frame without annotations. When annotations are hidden, they can’t be modified. |
|<img src="/docs/latest/icons/ico-show.svg" alt="Eye Show All" className="icon size-md space-sm" />|Show/hide all annotations |Click the button to view the frame without annotations. When annotations are hidden, they can't be modified. |
||Delete label |In the relevant annotation, click **x** on the label you want to remove.|

### Frame Labels
@@ -30,7 +30,7 @@ In tree view, parent versions that do not match the query where a child version

Access dataset version actions, by right-clicking a version, or through the menu button <img src="/docs/latest/icons/ico-dots-v-menu.svg" alt="Dot menu" className="icon size-md space-sm" /> (available on hover).

* **Rename** - Change the version’s name
* **Rename** - Change the version's name
* **Create New Version** - Creates a child version of a *Published* dataset version. The new version is created in a *draft*
state, and inherits all the parent version's frames
* **Delete** - Delete the version. Only *Draft* versions can be deleted.
@@ -66,7 +66,7 @@ the previews.



Use the table view to list the version’s frames in a customizable table. Click <img src="/docs/latest/icons/ico-settings.svg" alt="Setting Gear" className="icon size-md" />
Use the table view to list the version's frames in a customizable table. Click <img src="/docs/latest/icons/ico-settings.svg" alt="Setting Gear" className="icon size-md" />
for column customization options.


@@ -122,7 +122,7 @@ Multiple frame filters are applied with a logical OR operator.

For example, the dataset version in the image below has two frame filters. "Frame Filter 1" has the same two rules
described in the example above. "Frame Filter 2" specifies an ROI rule for the frame to contain an ROI with the label
`dog`. So the frames returned are those that match ALL of Frame Filter 1’s rules OR ALL of Frame Filter 2’s rules.
`dog`. So the frames returned are those that match ALL of Frame Filter 1's rules OR ALL of Frame Filter 2's rules.
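This OR-of-ANDs logic can be sketched in plain Python (the rule functions and frame fields below are hypothetical stand-ins for the UI's ROI and metadata rules):

```python
def frame_matches(frame, frame_filters):
    """True if the frame satisfies ALL rules of at least one filter (OR across filters, AND within)."""
    return any(all(rule(frame) for rule in rules) for rules in frame_filters)

# Hypothetical rule functions standing in for filter rules defined in the UI
filter_1 = [
    lambda f: "cat" in f["labels"],
    lambda f: f["confidence"] >= 0.8,
]
filter_2 = [lambda f: "dog" in f["labels"]]

frame = {"labels": ["dog"], "confidence": 0.5}
print(frame_matches(frame, [filter_1, filter_2]))  # True: all of filter_2's rules match
```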




@@ -193,7 +193,7 @@ Use the **Grouping** menu to select one of the following options:
* Group by URL - Show a single preview for all FrameGroups with the same context

#### Preview Source
When using multi-source FrameGroups, users can choose which of the FrameGroups’ sources will be displayed as the preview.
When using multi-source FrameGroups, users can choose which of the FrameGroups' sources will be displayed as the preview.

Select a source from the **PREVIEW SOURCE** menu.
Choose the `Default preview source` option to present the first available source.
@@ -11,7 +11,7 @@ to the specific task's **DATAVIEWS** tab (see [Experiment Dataviews](webapp_exp_
View the Dataviews table in table view <img src="/docs/latest/icons/ico-table-view.svg" alt="Table view" className="icon size-md space-sm" />
or in details view <img src="/docs/latest/icons/ico-split-view.svg" alt="Details view" className="icon size-md space-sm" />,
using the buttons on the top left of the page. Use the table view for a comparative view of your Dataviews according to
columns of interest. Use the details view to access a selected Dataview’s details, while keeping the Dataview list in view.
columns of interest. Use the details view to access a selected Dataview's details, while keeping the Dataview list in view.
Details view can also be accessed by double-clicking a specific Dataview in the table view to open its details view.

You can archive Dataviews so the Dataviews table doesn't get too cluttered. Click **OPEN ARCHIVE** on the top of the
@@ -16,7 +16,7 @@ The comparison page opens in the **DETAILS** tab, showing a column for each expe

## Dataviews

In the **Details** tab, you can view differences in the experiments’ nominal values. Each experiment’s information is
In the **Details** tab, you can view differences in the experiments' nominal values. Each experiment's information is
displayed in a column, so each field is lined up side-by-side. Expand the **DATAVIEWS**
section to view all the Dataview fields side-by-side (filters, iterations, label enumeration, etc.). The differences between the
experiments are highlighted. Obscure identical fields by switching on the `Hide Identical Fields` toggle.
@@ -16,7 +16,7 @@ from clearml import Task
task = Task.init(task_name="<task_name>", project_name="<project_name>")
```

And that’s it! This creates a [ClearML Task](../fundamentals/task.md) which captures:
And that's it! This creates a [ClearML Task](../fundamentals/task.md) which captures:
* Source code and uncommitted changes
* Installed packages
* AutoKeras model files
@@ -16,7 +16,7 @@ from clearml import Task
task = Task.init(task_name="<task_name>", project_name="<project_name>")
```

And that’s it! This creates a [ClearML Task](../fundamentals/task.md) which captures:
And that's it! This creates a [ClearML Task](../fundamentals/task.md) which captures:
* Source code and uncommitted changes
* Installed packages
* CatBoost model files
@@ -115,6 +115,6 @@ task.execute_remotely(queue_name='default', exit_process=True)
```

## Hyperparameter Optimization
Use ClearML’s [`HyperParameterOptimizer`](../references/sdk/hpo_optimization_hyperparameteroptimizer.md) class to find
Use ClearML's [`HyperParameterOptimizer`](../references/sdk/hpo_optimization_hyperparameteroptimizer.md) class to find
the hyperparameter values that yield the best performing models. See [Hyperparameter Optimization](../fundamentals/hpo.md)
for more information.
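Conceptually, the optimizer launches copies of a base task with different hyperparameter combinations and keeps the best-scoring one. A plain-Python sketch of that search loop (an illustration of the idea only, not the ClearML API):

```python
import itertools

def grid_candidates(space):
    """Yield every hyperparameter combination from a {name: [values]} search space."""
    names = list(space)
    for values in itertools.product(*(space[name] for name in names)):
        yield dict(zip(names, values))

def optimize(space, objective, sign="min"):
    """Return the (params, score) pair that optimizes the objective over the grid."""
    best_params, best_score = None, None
    for params in grid_candidates(space):
        score = objective(params)
        better = best_score is None or (score < best_score if sign == "min" else score > best_score)
        if better:
            best_params, best_score = params, score
    return best_params, best_score

# Toy objective standing in for a full training run's validation loss
space = {"lr": [0.1, 0.01, 0.001], "batch_size": [32, 64]}
params, score = optimize(space, lambda p: abs(p["lr"] - 0.01) + 0.001 * p["batch_size"])
print(params)  # {'lr': 0.01, 'batch_size': 32}
```

The real class adds what this sketch omits: cloning the base task, queueing the runs on agents, reading back reported scalars, and smarter samplers than a grid.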
@@ -16,7 +16,7 @@ from clearml import Task
task = Task.init(task_name="<task_name>", project_name="<project_name>")
```

And that’s it! This creates a [ClearML Task](../fundamentals/task.md) which captures:
And that's it! This creates a [ClearML Task](../fundamentals/task.md) which captures:
* Source code and uncommitted changes
* Installed packages
* `fastai` model files
@@ -41,10 +41,10 @@ The agent executes the code with the modifications you made in the UI, even over

Clone your experiment, then modify your Hydra parameters via the UI in one of the following ways:
* Modify the OmegaConf directly:
1. In the experiment’s **CONFIGURATION > HYPERPARAMETERS > HYDRA** section, set `_allow_omegaconf_edit_` to `True`
1. In the experiment’s **CONFIGURATION > CONFIGURATION OBJECTS > OmegaConf** section, modify the OmegaConf values
1. In the experiment's **CONFIGURATION > HYPERPARAMETERS > HYDRA** section, set `_allow_omegaconf_edit_` to `True`
1. In the experiment's **CONFIGURATION > CONFIGURATION OBJECTS > OmegaConf** section, modify the OmegaConf values
* Add an experiment hyperparameter:
1. In the experiment’s **CONFIGURATION > HYPERPARAMETERS > HYDRA** section, make sure `_allow_omegaconf_edit_` is set
1. In the experiment's **CONFIGURATION > HYPERPARAMETERS > HYDRA** section, make sure `_allow_omegaconf_edit_` is set
to `False`
1. In the same section, click `Edit`, which gives you the option to add parameters. Input parameters from the OmegaConf
that you want to modify using dot notation. For example, if your OmegaConf looks like this:
@@ -8,7 +8,7 @@ instructions.
:::

[PyTorch Ignite](https://pytorch.org/ignite/index.html) is a library for training and evaluating neural networks in
PyTorch. You can integrate ClearML into your code using Ignite’s built-in loggers: [TensorboardLogger](#tensorboardlogger)
PyTorch. You can integrate ClearML into your code using Ignite's built-in loggers: [TensorboardLogger](#tensorboardlogger)
and [ClearMLLogger](#clearmllogger).

## TensorboardLogger
@@ -92,7 +92,7 @@ Integrate ClearML with the following steps:
# Attach the logger to the trainer to log model's weights as a histogram
clearml_logger.attach(trainer, log_handler=WeightsHistHandler(model), event_name=Events.EPOCH_COMPLETED(every=100))

# Attach the logger to the trainer to log model’s gradients as scalars
# Attach the logger to the trainer to log model's gradients as scalars
clearml_logger.attach(
trainer, log_handler=GradsScalarHandler(model), event_name=Events.ITERATION_COMPLETED(every=100)
)
@@ -17,7 +17,7 @@ from clearml import Task
task = Task.init(task_name="<task_name>", project_name="<project_name>")
```

And that’s it! This creates a [ClearML Task](../fundamentals/task.md) which captures:
And that's it! This creates a [ClearML Task](../fundamentals/task.md) which captures:
* Source code and uncommitted changes
* Installed packages
* Keras models
@@ -77,7 +77,7 @@ See [Explicit Reporting Tutorial](../guides/reporting/explicit_reporting.md).

## Examples

Take a look at ClearML’s Keras examples. The examples use Keras and ClearML in different configurations with
Take a look at ClearML's Keras examples. The examples use Keras and ClearML in different configurations with
additional tools like TensorBoard and Matplotlib:
* [Keras with Tensorboard](../guides/frameworks/keras/keras_tensorboard.md) - Demonstrates ClearML logging a Keras model,
and plots and scalars logged to TensorBoard
@@ -127,6 +127,6 @@ task.execute_remotely(queue_name='default', exit_process=True)
```

## Hyperparameter Optimization
Use ClearML’s [`HyperParameterOptimizer`](../references/sdk/hpo_optimization_hyperparameteroptimizer.md) class to find
Use ClearML's [`HyperParameterOptimizer`](../references/sdk/hpo_optimization_hyperparameteroptimizer.md) class to find
the hyperparameter values that yield the best performing models. See [Hyperparameter Optimization](../fundamentals/hpo.md)
for more information.
@@ -36,7 +36,7 @@ Integrate ClearML into your Keras Tuner optimization script by doing the followi
)
```

And that’s it! This creates a [ClearML Task](../fundamentals/task.md) which captures:
And that's it! This creates a [ClearML Task](../fundamentals/task.md) which captures:
* Output Keras model
* Optimization trial scalars - scalar plot showing metrics for all runs
* Hyperparameter optimization summary plot - Tabular summary of hyperparameters tested and their metrics by trial ID
@@ -17,7 +17,7 @@ from clearml import Task
task = Task.init(task_name="<task_name>", project_name="<project_name>")
```

And that’s it! This creates a [ClearML Task](../fundamentals/task.md) which captures:
And that's it! This creates a [ClearML Task](../fundamentals/task.md) which captures:
* Source code and uncommitted changes
* Installed packages
* LightGBM model files
@@ -116,6 +116,6 @@ task.execute_remotely(queue_name='default', exit_process=True)
```

## Hyperparameter Optimization
Use ClearML’s [`HyperParameterOptimizer`](../references/sdk/hpo_optimization_hyperparameteroptimizer.md) class to find
Use ClearML's [`HyperParameterOptimizer`](../references/sdk/hpo_optimization_hyperparameteroptimizer.md) class to find
the hyperparameter values that yield the best performing models. See [Hyperparameter Optimization](../fundamentals/hpo.md)
for more information.
@@ -16,7 +16,7 @@ from clearml import Task
task = Task.init(task_name="<task_name>", project_name="<project_name>")
```

And that’s it! This creates a [ClearML Task](../fundamentals/task.md) which captures:
And that's it! This creates a [ClearML Task](../fundamentals/task.md) which captures:
* Source code and uncommitted changes
* Installed packages
* MegEngine model files
@@ -112,6 +112,6 @@ task.execute_remotely(queue_name='default', exit_process=True)
```

## Hyperparameter Optimization
Use ClearML’s [`HyperParameterOptimizer`](../references/sdk/hpo_optimization_hyperparameteroptimizer.md) class to find
Use ClearML's [`HyperParameterOptimizer`](../references/sdk/hpo_optimization_hyperparameteroptimizer.md) class to find
the hyperparameter values that yield the best performing models. See [Hyperparameter Optimization](../fundamentals/hpo.md)
for more information.
@@ -66,7 +66,7 @@ change the task's name or project, use the `task_name` and `project_name` parame
The task captures the images logged by the image handler, metrics logged with the stats handler, as well as source code,
uncommitted changes, installed packages, console output, and more.

You can see all the captured data in the task’s page of the ClearML [WebApp](../webapp/webapp_exp_track_visual.md).
You can see all the captured data in the task's page of the ClearML [WebApp](../webapp/webapp_exp_track_visual.md).

View the logged images in the WebApp, in the experiment's **Debug Samples** tab.

@@ -6,7 +6,7 @@ title: Optuna
which makes use of different samplers such as grid search, random, bayesian, and evolutionary algorithms. You can integrate
Optuna into ClearML's automated hyperparameter optimization.

The [HyperParameterOptimizer](../references/sdk/hpo_optimization_hyperparameteroptimizer.md) class contains ClearML’s
The [HyperParameterOptimizer](../references/sdk/hpo_optimization_hyperparameteroptimizer.md) class contains ClearML's
hyperparameter optimization modules. Its modular design enables using different optimizers, including existing software
frameworks, like Optuna, enabling simple,
accurate, and fast hyperparameter optimization. The Optuna ([`automation.optuna.OptimizerOptuna`](../references/sdk/hpo_optuna_optuna_optimizeroptuna.md)),
@@ -16,7 +16,7 @@ from clearml import Task
task = Task.init(task_name="<task_name>", project_name="<project_name>")
```

And that’s it! This creates a [ClearML Task](../fundamentals/task.md) which captures:
And that's it! This creates a [ClearML Task](../fundamentals/task.md) which captures:
* Source code and uncommitted changes
* Installed packages
* PyTorch models
@@ -86,7 +86,7 @@ Take a look at ClearML's PyTorch examples. The examples use PyTorch and ClearML
additional tools, like argparse, TensorBoard, and matplotlib:

* [PyTorch MNIST](../guides/frameworks/pytorch/pytorch_mnist.md) - Demonstrates ClearML automatically logging models created with PyTorch, and `argparse` command line parameters
* [PyTorch with Matplotlib](../guides/frameworks/pytorch/pytorch_matplotlib.md) - Demonstrates ClearML’s automatic logging PyTorch models and matplotlib images. The images are stored in the resulting ClearML experiment's **Debug Samples**
* [PyTorch with Matplotlib](../guides/frameworks/pytorch/pytorch_matplotlib.md) - Demonstrates ClearML's automatic logging PyTorch models and matplotlib images. The images are stored in the resulting ClearML experiment's **Debug Samples**
* [PyTorch with TensorBoard](../guides/frameworks/pytorch/pytorch_tensorboard.md) - Demonstrates ClearML automatically logging PyTorch models, and scalars, debug samples, and text logged using TensorBoard's `SummaryWriter`
* [PyTorch TensorBoard Toy](../guides/frameworks/pytorch/tensorboard_toy_pytorch.md) - Demonstrates ClearML automatically logging debug samples logged using TensorBoard's `SummaryWriter`
* [PyTorch TensorBoardX](../guides/frameworks/pytorch/pytorch_tensorboardx.md) - Demonstrates ClearML automatically logging PyTorch models, and scalars, debug samples, and text logged using TensorBoardX's `SummaryWriter`
@@ -18,7 +18,7 @@ from clearml import Task
task = Task.init(task_name="<task_name>", project_name="<project_name>")
```

And that’s it! This creates a [ClearML Task](../fundamentals/task.md) which captures:
And that's it! This creates a [ClearML Task](../fundamentals/task.md) which captures:
* Source code and uncommitted changes
* Installed packages
* PyTorch Models
@@ -43,8 +43,7 @@ To control a task's framework logging, use the `auto_connect_frameworks` paramet
Completely disable all automatic logging by setting the parameter to `False`. For finer grained control of logged
frameworks, input a dictionary, with framework-boolean pairs.

For example, the following code will log PyTorch models, but will not log any information reported to TensorBoard.
:
For example, the following code will log PyTorch models, but will not log any information reported to TensorBoard:

```python
auto_connect_frameworks={
@@ -143,7 +142,7 @@ task.execute_remotely(queue_name='default', exit_process=True)
```

## Hyperparameter Optimization
Use ClearML’s [`HyperParameterOptimizer`](../references/sdk/hpo_optimization_hyperparameteroptimizer.md) class to find
Use ClearML's [`HyperParameterOptimizer`](../references/sdk/hpo_optimization_hyperparameteroptimizer.md) class to find
the hyperparameter values that yield the best performing models. See [Hyperparameter Optimization](../fundamentals/hpo.md)
for more information.

@@ -17,7 +17,7 @@ from clearml import Task
task = Task.init(task_name="<task_name>", project_name="<project_name>")
```

And that’s it! This creates a [ClearML Task](../fundamentals/task.md) which captures:
And that's it! This creates a [ClearML Task](../fundamentals/task.md) which captures:
* Source code and uncommitted changes
* Installed packages
* Joblib model files
@@ -56,7 +56,7 @@ See more information about explicitly logging information to a ClearML Task:
* [Text/Plots/Debug Samples](../fundamentals/logger.md#manual-reporting)

### Examples
Take a look at ClearML’s TensorBoard examples:
Take a look at ClearML's TensorBoard examples:
* [TensorBoard PR Curve](../guides/frameworks/tensorflow/tensorboard_pr_curve.md) - Demonstrates logging TensorBoard outputs and TensorFlow flags
* [TensorBoard Toy](../guides/frameworks/tensorflow/tensorboard_toy.md) - Demonstrates logging TensorBoard histograms, scalars, images, text, and TensorFlow flags
* [Tensorboard with PyTorch](../guides/frameworks/pytorch/pytorch_tensorboard.md) - Demonstrates logging TensorBoard scalars, debug samples, and text integrated in code that uses PyTorch
@@ -56,7 +56,7 @@ See more information about explicitly logging information to a ClearML Task:

### Examples

Take a look at ClearML’s TensorboardX examples:
Take a look at ClearML's TensorboardX examples:

* [TensorboardX with PyTorch](../guides/frameworks/tensorboardx/tensorboardx.md) - Demonstrates ClearML logging TensorboardX scalars, debug
samples, and text in code using PyTorch
@@ -17,7 +17,7 @@ from clearml import Task
task = Task.init(task_name="<task_name>", project_name="<project_name>")
```

And that’s it! This creates a [ClearML Task](../fundamentals/task.md) which captures:
And that's it! This creates a [ClearML Task](../fundamentals/task.md) which captures:
* Source code and uncommitted changes
* Installed packages
* TensorFlow definitions
@@ -75,17 +75,17 @@ See [Explicit Reporting Tutorial](../guides/reporting/explicit_reporting.md).

## Examples

Take a look at ClearML’s TensorFlow examples. The examples use TensorFlow and ClearML in different configurations with
Take a look at ClearML's TensorFlow examples. The examples use TensorFlow and ClearML in different configurations with
additional tools, like Abseil and TensorBoard:

* [TensorFlow MNIST](../guides/frameworks/tensorflow/tensorflow_mnist.md) - Demonstrates ClearML's automatic logging of
model checkpoints, TensorFlow definitions, and scalars logged using TensorFlow methods
* [TensorBoard PR Curve](../guides/frameworks/tensorflow/tensorboard_pr_curve.md) - Demonstrates ClearML’s automatic
* [TensorBoard PR Curve](../guides/frameworks/tensorflow/tensorboard_pr_curve.md) - Demonstrates ClearML's automatic
logging of TensorBoard output and TensorFlow definitions.
* [TensorBoard Toy](../guides/frameworks/tensorflow/tensorboard_toy.md) - Demonstrates ClearML’s automatic logging of
* [TensorBoard Toy](../guides/frameworks/tensorflow/tensorboard_toy.md) - Demonstrates ClearML's automatic logging of
TensorBoard scalars, histograms, images, and text, as well as all console output and TensorFlow Definitions.
* [Absl flags](https://github.com/allegroai/clearml/blob/master/examples/frameworks/tensorflow/absl_flags.py) - Demonstrates
ClearML’s automatic logging of parameters defined using `absl.flags`
ClearML's automatic logging of parameters defined using `absl.flags`

## Remote Execution
ClearML logs all the information required to reproduce an experiment on a different machine (installed packages,
@@ -129,6 +129,6 @@ task.execute_remotely(queue_name='default', exit_process=True)
```

## Hyperparameter Optimization
Use ClearML’s [`HyperParameterOptimizer`](../references/sdk/hpo_optimization_hyperparameteroptimizer.md) class to find
Use ClearML's [`HyperParameterOptimizer`](../references/sdk/hpo_optimization_hyperparameteroptimizer.md) class to find
the hyperparameter values that yield the best performing models. See [Hyperparameter Optimization](../fundamentals/hpo.md)
for more information.
@@ -25,7 +25,7 @@ All you have to do is install and set up ClearML:
clearml-init
```

That’s it! In every training run from now on, the ClearML experiment
That's it! In every training run from now on, the ClearML experiment
manager will capture:
* Source code and uncommitted changes
* Hyperparameters - PyTorch trainer [parameters](https://huggingface.co/docs/transformers/v4.34.1/en/main_classes/trainer#transformers.TrainingArguments)
@@ -38,7 +38,7 @@ and TensorFlow definitions
* And more

All of this is captured into a [ClearML Task](../fundamentals/task.md). By default, a task called `Trainer` is created
in the `HuggingFace Transformers` project. To change the task’s name or project, use the `CLEARML_PROJECT` and `CLEARML_TASK`
in the `HuggingFace Transformers` project. To change the task's name or project, use the `CLEARML_PROJECT` and `CLEARML_TASK`
environment variables.
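For example (the project and task names here are made up), the variables can be set from Python before the `Trainer` is created:

```python
import os

# Hypothetical names; set these before transformers' Trainer is instantiated
os.environ["CLEARML_PROJECT"] = "NLP/Sentiment"   # destination project
os.environ["CLEARML_TASK"] = "distilbert-sst2"    # task name
os.environ["CLEARML_LOG_MODEL"] = "True"          # also log models created during training

print(os.environ["CLEARML_PROJECT"])  # NLP/Sentiment
```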

:::tip project names
@@ -48,7 +48,7 @@ task within the `example` project.

In order to log the models created during training, set the `CLEARML_LOG_MODEL` environment variable to `True`.

You can see all the captured data in the task’s page of the ClearML [WebApp](../webapp/webapp_exp_track_visual.md).
You can see all the captured data in the task's page of the ClearML [WebApp](../webapp/webapp_exp_track_visual.md).



@@ -79,7 +79,7 @@ and shuts down instances as needed, according to a resource budget that you set.



Use ClearML’s web interface to edit task details, like configuration parameters or input models, then execute the task
Use ClearML's web interface to edit task details, like configuration parameters or input models, then execute the task
with the new configuration on a remote machine:
* Clone the experiment
* Edit the hyperparameters and/or other details
@@ -88,6 +88,6 @@ with the new configuration on a remote machine:
The ClearML Agent executing the task will use the new values to [override any hard coded values](../clearml_agent.md).

## Hyperparameter Optimization
Use ClearML’s [`HyperParameterOptimizer`](../references/sdk/hpo_optimization_hyperparameteroptimizer.md) class to find
Use ClearML's [`HyperParameterOptimizer`](../references/sdk/hpo_optimization_hyperparameteroptimizer.md) class to find
the hyperparameter values that yield the best performing models. See [Hyperparameter Optimization](../fundamentals/hpo.md)
for more information.
@@ -17,7 +17,7 @@ from clearml import Task
task = Task.init(task_name="<task_name>", project_name="<project_name>")
```

And that’s it! This creates a [ClearML Task](../fundamentals/task.md) which captures:
And that's it! This creates a [ClearML Task](../fundamentals/task.md) which captures:
* Source code and uncommitted changes
* Installed packages
* XGBoost model files
@@ -143,6 +143,6 @@ task.execute_remotely(queue_name='default', exit_process=True)
```

## Hyperparameter Optimization
Use ClearML’s [`HyperParameterOptimizer`](../references/sdk/hpo_optimization_hyperparameteroptimizer.md) class to find
Use ClearML's [`HyperParameterOptimizer`](../references/sdk/hpo_optimization_hyperparameteroptimizer.md) class to find
the hyperparameter values that yield the best performing models. See [Hyperparameter Optimization](../fundamentals/hpo.md)
for more information.
@@ -27,7 +27,7 @@ built in logger:
clearml-init
```

That’s it! Now, whenever you train a model using YOLOv5, the run will be captured and tracked by ClearML – no additional
That's it! Now, whenever you train a model using YOLOv5, the run will be captured and tracked by ClearML – no additional
code necessary.

## Training YOLOv5 with ClearML
@ -54,7 +54,7 @@ manager will capture:
|
||||
* And more
|
||||
|
||||
All of this is captured into a [ClearML Task](../fundamentals/task.md). By default, a task called `Training` is created
|
||||
in the `YOLOv5` project. To change the task’s name or project, use the `--project` and `--name` arguments when running
|
||||
in the `YOLOv5` project. To change the task's name or project, use the `--project` and `--name` arguments when running
|
||||
the `train.py` script.
|
||||
|
||||
```commandline
|
||||
@ -66,7 +66,7 @@ ClearML uses `/` as a delimiter for subprojects: using `example/sample` as a nam
|
||||
task within the `example` project.
|
||||
:::
|
||||
|
||||
You can see all the captured data in the task’s page of the ClearML [WebApp](../webapp/webapp_exp_track_visual.md).
|
||||
You can see all the captured data in the task's page of the ClearML [WebApp](../webapp/webapp_exp_track_visual.md).
|
||||
Additionally, you can view all of your YOLOv5 runs tracked by ClearML in the [Experiments Table](../webapp/webapp_model_table.md).
|
||||
Add custom columns to the table, such as mAP values, so you can easily sort and see what is the best performing model.
|
||||
You can also select multiple experiments and directly [compare](../webapp/webapp_exp_comparing.md) them.

@ -94,7 +94,7 @@ dataset using the link in the yaml file or the scripts provided by YOLOv5, you g
```

You can use any dataset, as long as you maintain this folder structure.
Copy the dataset's corresponding yaml file to the root of the dataset folder.

```
..

@ -171,7 +171,7 @@ and shuts down instances as needed, according to a resource budget that you set.



Use ClearML's web interface to edit task details, like configuration parameters or input models, then execute the task
with the new configuration on a remote machine:
* Clone the experiment
* Edit the hyperparameters and/or other details
@ -200,7 +200,7 @@ if RANK in {-1, 0}:
```

## Hyperparameter Optimization
Use ClearML's [`HyperParameterOptimizer`](../references/sdk/hpo_optimization_hyperparameteroptimizer.md) class to find
the hyperparameter values that yield the best performing models.

To run hyperparameter optimization locally, you can use the [template script](https://github.com/ultralytics/yolov5/blob/master/utils/loggers/clearml/hpo.py)

@ -38,7 +38,7 @@ segmentation, and classification. Get the most out of YOLOv8 with ClearML:
clearml-init
```

That's it! Now, whenever you train a model using YOLOv8, the run will be captured and tracked by ClearML – no additional
code necessary.

## Training YOLOv8 with ClearML
@ -64,7 +64,7 @@ manager will capture:
* And more

All of this is captured into a [ClearML Task](../fundamentals/task.md): a task with your training script's name
created in a `YOLOv8` ClearML project. To change the task's name or project, pass the `name` and `project` arguments in one of
the following ways:
* Via the SDK:

@ -89,7 +89,7 @@ ClearML uses `/` as a delimiter for subprojects: using `example/sample` as a nam
task within the `example` project.
:::

You can see all the captured data in the task's page of the ClearML [WebApp](../webapp/webapp_exp_track_visual.md).
Additionally, you can view all of your YOLOv8 runs tracked by ClearML in the [Experiments Table](../webapp/webapp_model_table.md).
Add custom columns to the table, such as mAP values, so you can easily sort and see what is the best performing model.
You can also select multiple experiments and directly [compare](../webapp/webapp_exp_comparing.md) them.
@ -115,7 +115,7 @@ shuts down instances as needed, according to a resource budget that you set.
### Cloning, Editing, and Enqueuing

ClearML logs all the information required to reproduce an experiment, but you may also want to change a few parameters
and task details when you re-run an experiment, which you can do through ClearML's UI.

In order to be able to override parameters via the UI,
you have to run your code to [create a ClearML Task](../clearml_sdk/task_sdk.md#task-creation), which will log all the

@ -40,14 +40,15 @@ need to do is [instantiate a ClearML Task](clearml_sdk/task_sdk.md#task-creation
framework's training results as output models.

Automatic logging is supported for the following frameworks:
* [TensorFlow](integrations/tensorflow.md)
* [Keras](integrations/keras.md)
* [PyTorch](integrations/pytorch.md)
* [scikit-learn](integrations/scikit_learn.md) (only using joblib)
* [XGBoost](integrations/xgboost.md) (only using joblib)
* [Fast.ai](integrations/fastai.md)
* [MegEngine](integrations/megengine.md)
* [CatBoost](integrations/catboost.md)
* [MONAI](integrations/monai.md)

You may want more control over which models are logged. Use the `auto_connect_frameworks` parameter of [`Task.init()`](references/sdk/task.md#taskinit)
to control automatic logging of frameworks.

@ -3,7 +3,7 @@ title: Comparing Experiments
---

The ClearML Web UI provides features for comparing experiments, allowing you to locate, visualize, and analyze the
differences in experiments' results and their causes. You can view the differences in:
* [Details](#side-by-side-textual-comparison) - Compare experiment source code, package versions, models, configuration
objects, and other details.
* Hyperparameters
@ -87,7 +87,7 @@ navigate between search results.


### Tabular Scalar Comparison
The **Scalars** tab **Values** view lays out the experiments' metric values in a table: a row per metric/variant and a
column for each experiment. Select from the dropdown menu which metric values to display:
* Last Values: The last reported values for each experiment
* Min Values: The minimal value reported throughout the experiment execution
@ -101,7 +101,7 @@ Switch on the **Show row extremes** toggle to highlight each variant's maximum a

### Parallel Coordinates Mode

The **Hyperparameters** tab's **Parallel Coordinates** comparison shows experiments' hyperparameter impact on a specific metric.

**To compare by metric:**
1. Under **Performance Metric**, select a metric to compare for
@ -122,7 +122,7 @@ To focus on a specific experiment, hover over its name in the graph legend.
To hide an experiment, click its name in the graph legend (click again to bring back).

### Plot Comparison
The **Scalars** (Graph view) and **Plots** tabs compare experiments' plots.

The **Scalars** tab displays scalar values as time series line charts. The **Plots** tab compares the last reported
iteration sample of each metric/variant combination per compared experiment.