mirror of https://github.com/clearml/clearml-docs (synced 2025-03-03 02:32:49 +00:00)

Small edits (#668)
parent dec2ff2e1e
commit d2dbd30bb4
@@ -387,9 +387,7 @@ ClearML Agent uses the provided default Docker container, which can be overridden
You can set the docker container via the UI:
1. Clone the experiment
2. Set the Docker in the cloned task's **Execution** tab **> Container** section

3. Enqueue the cloned task

The task will be executed in the container specified in the UI.
@@ -334,7 +334,7 @@ Note that in offline mode, any methods that require communicating with the server
Upload the offline dataset to the ClearML Server using [`Dataset.import_offline_session()`](../references/sdk/dataset.md#datasetimport_offline_session).

```python
-Dataset.import_offline_session(session_folder_zip="<path_to_offline_dataset>", upload=True, finalize=True")
+Dataset.import_offline_session(session_folder_zip="<path_to_offline_dataset>", upload=True, finalize=True)
```

In the `session_folder_zip` argument, insert the path to the zip folder containing the dataset. To [upload](#uploading-files)
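As a stdlib-only illustration (the helper name and its checks are ours, not part of the ClearML API), one might sanity-check the `session_folder_zip` path before handing it to `import_offline_session()`:

```python
from pathlib import Path

def validate_session_zip(session_folder_zip: str) -> Path:
    """Expand '~' and verify the path points at a .zip archive (illustrative helper)."""
    path = Path(session_folder_zip).expanduser()
    if path.suffix != ".zip":
        raise ValueError(f"expected a .zip archive, got: {path.name}")
    return path

# A well-formed offline session archive path passes the check
print(validate_session_zip("~/offline_sessions/dataset_abc123.zip").name)
```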
@@ -707,7 +707,7 @@ This configuration option is experimental, and has not been vigorously tested, s
**`api.credentials`** (*dict*)

* Dictionary of API credentials.
-  Alternatively, specify the environment variable `CLEARML_API_ACCESS_KEY / CLEARML_API_SECRET_KEY` to override these keys.
+  Alternatively, specify the environment variables `CLEARML_API_ACCESS_KEY` / `CLEARML_API_SECRET_KEY` to override these keys.

---
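The override described above can be sketched with a small stdlib helper (the function and the fallback dict are illustrative, not ClearML code): environment variables win over configuration-file values.

```python
import os

def resolve_credentials(config: dict) -> tuple:
    """Environment variables take precedence over values from the config file."""
    access_key = os.environ.get("CLEARML_API_ACCESS_KEY", config.get("access_key", ""))
    secret_key = os.environ.get("CLEARML_API_SECRET_KEY", config.get("secret_key", ""))
    return access_key, secret_key

file_config = {"access_key": "file-access", "secret_key": "file-secret"}
os.environ["CLEARML_API_ACCESS_KEY"] = "env-access"  # simulate an env override
print(resolve_credentials(file_config))  # the env value wins for the access key only
```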
@@ -14,6 +14,6 @@ Solutions combined with the clearml-server control plane.
## YouTube Playlist

-The first video in the ClearML YouTube **Getting Started** playlist covers these modules in more detail, feel free to check out the video below.
+The first video in the ClearML YouTube **Getting Started** playlist covers these modules in more detail. Feel free to check out the video below.

[](https://www.youtube.com/watch?v=s3k9ntmQmD4&list=PLMdIlCuMqSTnoC45ME5_JnsJX0zWqDdlO&index=1)
@@ -41,7 +41,7 @@ yields the best performing model for your task!
- You should continue coding while experiments are being executed without interrupting them.
- Stop optimizing your code because your machine struggles, and run it on a beefier machine (cloud / on-prem).

-Visualization and comparisons dashboards keep your sanity at bay! In this stage you usually have a docker container with all the binaries
+Visualization and comparison dashboards help you keep your sanity! At this stage you usually have a docker container with all the binaries
that you need.
- [ClearML SDK](../../clearml_sdk/clearml_sdk.md) ensures that all the metrics, parameters and Models are automatically logged and can later be
  accessed, [compared](../../webapp/webapp_exp_comparing.md) and [tracked](../../webapp/webapp_exp_track_visual.md).
@@ -186,6 +186,6 @@ or check these pages out:
## YouTube Playlist

-All these tips and tricks are also covered in ClearML's **Getting Started** series on YouTube, go check it out :)
+All these tips and tricks are also covered in ClearML's **Getting Started** series on YouTube. Go check it out :)

[](https://www.youtube.com/watch?v=kyOfwVg05EM&list=PLMdIlCuMqSTnoC45ME5_JnsJX0zWqDdlO&index=3)
@@ -11,16 +11,16 @@ This example accomplishes the automated random parameter search by doing the following:
1. Creating a template Task named `Keras HP optimization base`. To create it, run the [base_template_keras_simple.py](https://github.com/allegroai/clearml/blob/master/examples/optimization/hyper-parameter-optimization/base_template_keras_simple.py)
   script. This experiment must be executed first, so it will be stored in the server, and then it can be accessed, cloned,
   and modified by another Task.
-1. Creating a parameter dictionary, which is connected to the Task by calling [Task.connect](../../references/sdk/task.md#connect)
+1. Creating a parameter dictionary, which is connected to the Task by calling [`Task.connect()`](../../references/sdk/task.md#connect)
   so that the parameters are logged by ClearML.
1. Adding the random search hyperparameters and parameters defining the search (e.g., the experiment name, and number of
   times to run the experiment).
-1. Creating a Task object referencing the template experiment, `Keras HP optimization base`. See [Task.get_task](../../references/sdk/task.md#taskget_task).
+1. Creating a Task object referencing the template experiment, `Keras HP optimization base`. See [`Task.get_task`](../../references/sdk/task.md#taskget_task).
1. For each set of parameters:
-   1. Cloning the Task object. See [Task.clone](../../references/sdk/task.md#taskclone).
-   1. Getting the newly cloned Task's parameters. See [Task.get_parameters](../../references/sdk/task.md#get_parameters)
-   1. Setting the newly cloned Task's parameters to the search values in the parameter dictionary (Step 1). See [Task.set_parameters](../../references/sdk/task.md#set_parameters).
-   1. Enqueuing the newly cloned Task to execute. See [Task.enqueue](../../references/sdk/task.md#taskenqueue).
+   1. Cloning the Task object. See [`Task.clone`](../../references/sdk/task.md#taskclone).
+   1. Getting the newly cloned Task's parameters. See [`Task.get_parameters`](../../references/sdk/task.md#get_parameters).
+   1. Setting the newly cloned Task's parameters to the search values in the parameter dictionary (Step 1). See [`Task.set_parameters`](../../references/sdk/task.md#set_parameters).
+   1. Enqueuing the newly cloned Task to execute. See [`Task.enqueue`](../../references/sdk/task.md#taskenqueue).

When the example script runs, it creates an experiment named `Random Hyper-Parameter Search Example` in
the `examples` project. This starts the parameter search, and creates the experiments:
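The clone → get/set parameters → enqueue loop described above can be sketched with a stub class standing in for the ClearML Task (all names and values here are illustrative, not the SDK):

```python
import random

class StubTask:
    """Minimal stand-in for a ClearML Task, just enough to mirror the search loop."""
    def __init__(self, params=None):
        self.params = dict(params or {})
        self.enqueued = False

    def clone(self):
        return StubTask(self.params)

    def get_parameters(self):
        return dict(self.params)

    def set_parameters(self, params):
        self.params.update(params)

    def enqueue(self):
        self.enqueued = True

random.seed(0)
template = StubTask({"batch_size": 32, "dropout": 0.1})
experiments = []
for _ in range(3):  # number of random-search iterations
    cloned = template.clone()
    params = cloned.get_parameters()
    params["batch_size"] = random.choice([32, 64, 128])   # sample new values
    params["dropout"] = round(random.uniform(0.0, 0.5), 2)
    cloned.set_parameters(params)
    cloned.enqueue()
    experiments.append(cloned)

print([t.params["batch_size"] for t in experiments])
```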
@@ -14,15 +14,15 @@ dataset), and reports (uploads) the following to the main Task:
* Scalars - Loss reported as a scalar during training in each Task in a subprocess.
* Hyperparameters - Hyperparameters created in each Task are added to the hyperparameters in the main Task.

-Each Task in a subprocess references the main Task by calling [Task.current_task](../../references/sdk/task.md#taskcurrent_task), which always returns
+Each Task in a subprocess references the main Task by calling [`Task.current_task()`](../../references/sdk/task.md#taskcurrent_task), which always returns
the main Task.

When the script runs, it creates an experiment named `test torch distributed` in the `examples` project.

## Artifacts

-The example uploads a dictionary as an artifact in the main Task by calling the [Task.upload_artifact](../../references/sdk/task.md#upload_artifact)
-method on [`Task.current_task`](../../references/sdk/task.md#taskcurrent_task) (the main Task). The dictionary contains the [`dist.rank`](https://pytorch.org/docs/stable/distributed.html#torch.distributed.get_rank)
+The example uploads a dictionary as an artifact in the main Task by calling [`Task.upload_artifact()`](../../references/sdk/task.md#upload_artifact)
+on [`Task.current_task`](../../references/sdk/task.md#taskcurrent_task) (the main Task). The dictionary contains the [`dist.rank`](https://pytorch.org/docs/stable/distributed.html#torch.distributed.get_rank)
of the subprocess, making each unique.

```python
@@ -38,8 +38,8 @@ All of these artifacts appear in the main Task under **ARTIFACTS** **>** **OTHER**
## Scalars

-Loss is reported to the main Task by calling the [Logger.report_scalar](../../references/sdk/logger.md#report_scalar)
-method on `Task.current_task().get_logger`, which is the logger for the main Task. Since `Logger.report_scalar` is called
+Loss is reported to the main Task by calling [`Logger.report_scalar()`](../../references/sdk/logger.md#report_scalar)
+on `Task.current_task().get_logger()`, which is the logger for the main Task. Since `Logger.report_scalar` is called
with the same title (`loss`), but a different series name (containing the subprocess' `rank`), all loss scalar series are
logged together.
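Why series reported under the same title land on a single plot can be illustrated with a tiny in-memory logger (a stdlib sketch, not ClearML's `Logger`):

```python
from collections import defaultdict

class TinyLogger:
    """Groups reported scalars by title; each title holds many named series."""
    def __init__(self):
        self.plots = defaultdict(dict)

    def report_scalar(self, title, series, value, iteration):
        self.plots[title].setdefault(series, []).append((iteration, value))

logger = TinyLogger()
for rank in range(4):                      # one series per subprocess rank
    for step in range(3):
        logger.report_scalar("loss", f"worker_{rank:02d}", 1.0 / (step + 1), step)

# All four rank series share the single "loss" plot
print(sorted(logger.plots["loss"]))
```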
@@ -5,7 +5,7 @@ title: Subprocess
The [subprocess_example.py](https://github.com/allegroai/clearml/blob/master/examples/distributed/subprocess_example.py)
script demonstrates multiple subprocesses interacting and reporting to a main Task. The following happens in the script:
* This script initializes a main Task and spawns subprocesses, each for an instance of that Task.
-* Each Task in a subprocess references the main Task by calling [Task.current_task](../../references/sdk/task.md#taskcurrent_task),
+* Each Task in a subprocess references the main Task by calling [`Task.current_task()`](../../references/sdk/task.md#taskcurrent_task),
  which always returns the main Task.
* The Task in each subprocess reports the following to the main Task:
  * Hyperparameters - Additional, different hyperparameters.
@@ -15,7 +15,7 @@ which always returns the main Task.
## Hyperparameters

ClearML automatically logs the command line options defined with `argparse`. A parameter dictionary is logged by
-connecting it to the Task using a call to the [`Task.connect`](../../references/sdk/task.md#connect) method.
+connecting it to the Task using [`Task.connect()`](../../references/sdk/task.md#connect).

```python
additional_parameters = {
@@ -38,7 +38,7 @@ The example calls Matplotlib methods to log debug sample images. They appear in
## Hyperparameters

ClearML automatically logs TensorFlow Definitions. A parameter dictionary is logged by connecting it to the Task, by
-calling the [`Task.connect`](../../../references/sdk/task.md#connect) method.
+calling [`Task.connect()`](../../../references/sdk/task.md#connect).

```python
task_params = {'num_scatter_samples': 60, 'sin_max_value': 20, 'sin_steps': 30}
@@ -53,12 +53,11 @@ Text printed to the console for training progress, as well as all other console
## Configuration Objects

-In the experiment code, a configuration dictionary is connected to the Task by calling the [`Task.connect`](../../../references/sdk/task.md#connect)
-method.
+In the experiment code, a configuration dictionary is connected to the Task by calling [`Task.connect_configuration()`](../../../references/sdk/task.md#connect_configuration).

```python
task.connect_configuration(
-    name="MyConfig"
+    name="MyConfig",
    configuration={'test': 1337, 'nested': {'key': 'value', 'number': 1}}
)
```
@@ -30,7 +30,7 @@ By double-clicking a thumbnail, you can view a spectrogram plot in the image viewer
## Hyperparameters

ClearML automatically logs TensorFlow Definitions. A parameter dictionary is logged by connecting it to the Task using
-a call to the [Task.connect](../../../../../references/sdk/task.md#connect) method.
+[`Task.connect()`](../../../../../references/sdk/task.md#connect).

configuration_dict = {'number_of_epochs': 3, 'batch_size': 4, 'dropout': 0.25, 'base_lr': 0.001}
configuration_dict = task.connect(configuration_dict)  # enabling configuration override by clearml
@@ -14,15 +14,14 @@ The example code preprocesses the downloaded data using Pandas DataFrames, and s
* `Outcome dictionary` - Label enumeration for training.
* `Processed data` - A dictionary containing the paths of the training and validation data.

-Each artifact is uploaded by calling the [Task.upload_artifact](../../../../../references/sdk/task.md#upload_artifact)
-method. Artifacts appear in the **ARTIFACTS** tab.
+Each artifact is uploaded by calling [`Task.upload_artifact()`](../../../../../references/sdk/task.md#upload_artifact).
+Artifacts appear in the **ARTIFACTS** tab.

## Plots (tables)

-The example code explicitly reports the data in Pandas DataFrames by calling the [Logger.report_table](../../../../../references/sdk/logger.md#report_table)
-method.
+The example code explicitly reports the data in Pandas DataFrames by calling [`Logger.report_table()`](../../../../../references/sdk/logger.md#report_table).

For example, the raw data is read into a Pandas DataFrame named `train_set`, and the `head` of the DataFrame is reported.
@@ -39,8 +38,7 @@ The tables appear in **PLOTS**.
## Hyperparameters

-A parameter dictionary is logged by connecting it to the Task using a call to the [`Task.connect`](../../../../../references/sdk/task.md#connect)
-method.
+A parameter dictionary is logged by connecting it to the Task using [`Task.connect()`](../../../../../references/sdk/task.md#connect).

```python
logger = task.get_logger()
@@ -15,8 +15,7 @@ Accuracy, learning rate, and training loss appear in **SCALARS**, along with the
## Hyperparameters

ClearML automatically logs the command line options, because the example code uses `argparse`. A parameter dictionary
-is logged by connecting it to the Task using a call to the [Task.connect](../../../../../references/sdk/task.md#connect)
-method.
+is logged by connecting it to the Task using [`Task.connect()`](../../../../../references/sdk/task.md#connect).

```python
configuration_dict = {
@@ -10,8 +10,7 @@ The example script does the following:
  dataset
* Creates an experiment named `pytorch mnist train with abseil` in the `examples` project
* ClearML automatically logs the absl.flags, and the models (and their snapshots) created by PyTorch
-* Additional metrics are logged by calling the [Logger.report_scalar](../../../references/sdk/logger.md#report_scalar)
-  method
+* Additional metrics are logged by calling [`Logger.report_scalar()`](../../../references/sdk/logger.md#report_scalar)

## Scalars
@@ -16,15 +16,15 @@ The script does the following:
* Hyperparameters - Hyperparameters created in each subprocess Task are added to the main Task's hyperparameters.

-Each Task in a subprocess references the main Task by calling [Task.current_task](../../../references/sdk/task.md#taskcurrent_task),
+Each Task in a subprocess references the main Task by calling [`Task.current_task()`](../../../references/sdk/task.md#taskcurrent_task),
which always returns the main Task.

1. When the script runs, it creates an experiment named `test torch distributed` in the `examples` project in the **ClearML Web UI**.

### Artifacts

-The example uploads a dictionary as an artifact in the main Task by calling the [Task.upload_artifact](../../../references/sdk/task.md#upload_artifact)
-method on `Task.current_task` (the main Task). The dictionary contains the `dist.rank` of the subprocess, making each unique.
+The example uploads a dictionary as an artifact in the main Task by calling [`Task.upload_artifact()`](../../../references/sdk/task.md#upload_artifact)
+on `Task.current_task` (the main Task). The dictionary contains the `dist.rank` of the subprocess, making each unique.

Task.current_task().upload_artifact(
    'temp {:02d}'.format(dist.get_rank()), artifact_object={'worker_rank': dist.get_rank()})
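The per-rank naming above is what keeps each artifact unique; here is a stdlib sketch of the same pattern, with plain integers standing in for `torch.distributed.get_rank()`:

```python
def artifact_name(rank: int) -> str:
    """Mirror the 'temp {:02d}' naming used above so artifacts stay unique per rank."""
    return 'temp {:02d}'.format(rank)

# One uniquely named artifact per simulated subprocess rank
artifacts = {artifact_name(rank): {'worker_rank': rank} for rank in range(4)}
print(sorted(artifacts))
```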
@@ -35,7 +35,7 @@ All of these artifacts appear in the main Task, **ARTIFACTS** **>** **OTHER**.
## Scalars

-Report loss to the main Task by calling the [Logger.report_scalar](../../../references/sdk/logger.md#report_scalar) method
+Report loss to the main Task by calling [`Logger.report_scalar()`](../../../references/sdk/logger.md#report_scalar)
on `Task.current_task().get_logger()`, which is the logger for the main Task. Since `Logger.report_scalar` is called with the
same title (`loss`), but a different series name (containing the subprocess' `rank`), all loss scalar series are logged together.
@@ -50,8 +50,7 @@ The single scalar plot for loss appears in **SCALARS**.
ClearML automatically logs the command line options defined using `argparse`.

-A parameter dictionary is logged by connecting it to the Task using a call to the [`Task.connect`](../../../references/sdk/task.md#connect)
-method.
+A parameter dictionary is logged by connecting it to the Task using [`Task.connect()`](../../../references/sdk/task.md#connect).

```python
param = {'worker_{}_stuff'.format(dist.get_rank()): 'some stuff ' + str(randint(0, 100))}
@@ -10,7 +10,7 @@ The example script does the following:
  dataset.
* Creates an experiment named `pytorch mnist train` in the `examples` project.
* ClearML automatically logs `argparse` command line options, and models (and their snapshots) created by PyTorch
-* Additional metrics are logged by calling the [Logger.report_scalar](../../../references/sdk/logger.md#report_scalar) method.
+* Additional metrics are logged by calling [`Logger.report_scalar()`](../../../references/sdk/logger.md#report_scalar).

## Scalars
@@ -71,7 +71,7 @@ def job_complete_callback(
Initialize the Task, which will be stored in ClearML Server when the code runs. After the code runs at least once, it
can be [reproduced](../../../webapp/webapp_exp_reproducing.md) and [tuned](../../../webapp/webapp_exp_tuning.md).

-We set the Task type to optimizer, and create a new experiment (and Task object) each time the optimizer runs (`reuse_last_task_id=False`).
+Set the Task type to `optimizer`, and create a new experiment (and Task object) each time the optimizer runs (`reuse_last_task_id=False`).

When the code runs, it creates an experiment named **Automatic Hyper-Parameter Optimization** that is associated with
the project **Hyper-Parameter Optimization**, which can be seen in the **ClearML Web UI**.
@@ -187,7 +187,7 @@ def test(args, model, device, test_loader):
### Log Text

Extend ClearML by explicitly logging text, including errors, warnings, and debugging statements. Use [`Logger.report_text()`](../../references/sdk/logger.md#report_text)
-and its argument `level` to report a debugging message.
+and its `level` argument to report a debugging message.

```python
logger.report_text(
@@ -11,14 +11,13 @@ demonstrates reporting (uploading) images in several formats, including:
* Local files.

ClearML uploads images to the bucket specified in the ClearML [configuration file](../../configs/clearml_conf.md),
-or ClearML can be configured for image storage, see [Logger.set_default_upload_destination](../../references/sdk/logger.md#set_default_upload_destination)
+or ClearML can be configured for image storage, see [`Logger.set_default_upload_destination()`](../../references/sdk/logger.md#set_default_upload_destination)
(storage for [artifacts](../../clearml_sdk/task_sdk.md#setting-upload-destination) is different). Set credentials for
storage in the ClearML configuration file.

When the script runs, it creates an experiment named `image reporting` in the `examples` project.

-Report images using several formats by calling the [Logger.report_image](../../references/sdk/logger.md#report_image)
-method:
+Report images using several formats by calling [`Logger.report_image()`](../../references/sdk/logger.md#report_image):

```python
# report image as float image
@@ -51,7 +51,7 @@ The **Frames** tab displays the contents of the selected dataset version.
View the version's frames as thumbnail previews or in a table. Use the view toggle to switch between thumbnail
view <img src="/docs/latest/icons/ico-grid-view.svg" alt="thumbnail view" className="icon size-md space-sm" /> and
-table view <img src="/docs/latest/icons/ico-table-view.svg" alt="table view" className="icon size-md space-sm" /> .
+table view <img src="/docs/latest/icons/ico-table-view.svg" alt="table view" className="icon size-md space-sm" />.

Use the thumbnail view for a visual preview of the version's frames. You can increase <img src="/docs/latest/icons/ico-zoom-in.svg" alt="Zoom in" className="icon size-md space-sm" />
and decrease <img src="/docs/latest/icons/ico-zoom-out.svg" alt="Zoom out" className="icon size-md space-sm" /> the size of
@@ -88,7 +88,7 @@ following command on it:
clearml-agent daemon --queue <queues_to_listen_to> [--docker]
```

-Use the ClearML [Autoscalers](../cloud_autoscaling/autoscaling_overview.md), to help you manage cloud workloads in the
+Use the ClearML [Autoscalers](../cloud_autoscaling/autoscaling_overview.md) to help you manage cloud workloads in the
cloud of your choice (AWS, GCP, Azure) and automatically deploy ClearML agents: the autoscaler automatically spins up
and shuts down instances as needed, according to a resource budget that you set.
@@ -86,7 +86,7 @@ following command on it:
clearml-agent daemon --queue <queues_to_listen_to> [--docker]
```

-Use the ClearML [Autoscalers](../cloud_autoscaling/autoscaling_overview.md), to help you manage cloud workloads in the
+Use the ClearML [Autoscalers](../cloud_autoscaling/autoscaling_overview.md) to help you manage cloud workloads in the
cloud of your choice (AWS, GCP, Azure) and automatically deploy ClearML agents: the autoscaler automatically spins up
and shuts down instances as needed, according to a resource budget that you set.
@@ -98,7 +98,7 @@ following command on it:
clearml-agent daemon --queue <queues_to_listen_to> [--docker]
```

-Use the ClearML [Autoscalers](../cloud_autoscaling/autoscaling_overview.md), to help you manage cloud workloads in the
+Use the ClearML [Autoscalers](../cloud_autoscaling/autoscaling_overview.md) to help you manage cloud workloads in the
cloud of your choice (AWS, GCP, Azure) and automatically deploy ClearML agents: the autoscaler automatically spins up
and shuts down instances as needed, according to a resource budget that you set.
@@ -87,7 +87,7 @@ following command on it:
clearml-agent daemon --queue <queues_to_listen_to> [--docker]
```

-Use the ClearML [Autoscalers](../cloud_autoscaling/autoscaling_overview.md), to help you manage cloud workloads in the
+Use the ClearML [Autoscalers](../cloud_autoscaling/autoscaling_overview.md) to help you manage cloud workloads in the
cloud of your choice (AWS, GCP, Azure) and automatically deploy ClearML agents: the autoscaler automatically spins up
and shuts down instances as needed, according to a resource budget that you set.
@@ -84,7 +84,7 @@ following command on it:
clearml-agent daemon --queue <queues_to_listen_to> [--docker]
```

-Use the ClearML [Autoscalers](../cloud_autoscaling/autoscaling_overview.md), to help you manage cloud workloads in the
+Use the ClearML [Autoscalers](../cloud_autoscaling/autoscaling_overview.md) to help you manage cloud workloads in the
cloud of your choice (AWS, GCP, Azure) and automatically deploy ClearML agents: the autoscaler automatically spins up
and shuts down instances as needed, according to a resource budget that you set.
@@ -107,7 +107,7 @@ following command on it:
clearml-agent daemon --queue <queues_to_listen_to> [--docker]
```

-Use the ClearML [Autoscalers](../cloud_autoscaling/autoscaling_overview.md), to help you manage cloud workloads in the
+Use the ClearML [Autoscalers](../cloud_autoscaling/autoscaling_overview.md) to help you manage cloud workloads in the
cloud of your choice (AWS, GCP, Azure) and automatically deploy ClearML agents: the autoscaler automatically spins up
and shuts down instances as needed, according to a resource budget that you set.
@@ -90,7 +90,7 @@ following command on it:
clearml-agent daemon --queue <queues_to_listen_to> [--docker]
```

-Use the ClearML [Autoscalers](../cloud_autoscaling/autoscaling_overview.md), to help you manage cloud workloads in the
+Use the ClearML [Autoscalers](../cloud_autoscaling/autoscaling_overview.md) to help you manage cloud workloads in the
cloud of your choice (AWS, GCP, Azure) and automatically deploy ClearML agents: the autoscaler automatically spins up
and shuts down instances as needed, according to a resource budget that you set.
@@ -100,7 +100,7 @@ following command on it:
clearml-agent daemon --queue <queues_to_listen_to> [--docker]
```

-Use the ClearML [Autoscalers](../cloud_autoscaling/autoscaling_overview.md), to help you manage cloud workloads in the
+Use the ClearML [Autoscalers](../cloud_autoscaling/autoscaling_overview.md) to help you manage cloud workloads in the
cloud of your choice (AWS, GCP, Azure) and automatically deploy ClearML agents: the autoscaler automatically spins up
and shuts down instances as needed, according to a resource budget that you set.
@@ -114,7 +114,7 @@ following command on it:
clearml-agent daemon --queue <queues_to_listen_to> [--docker]
```

-Use the ClearML [Autoscalers](../cloud_autoscaling/autoscaling_overview.md), to help you manage cloud workloads in the
+Use the ClearML [Autoscalers](../cloud_autoscaling/autoscaling_overview.md) to help you manage cloud workloads in the
cloud of your choice (AWS, GCP, Azure) and automatically deploy ClearML agents: the autoscaler automatically spins up
and shuts down instances as needed, according to a resource budget that you set.
@@ -162,7 +162,7 @@ the following command on it:
clearml-agent daemon --queue <queues_to_listen_to> [--docker]
```

-Use the ClearML [Autoscalers](../cloud_autoscaling/autoscaling_overview.md), to help you manage cloud workloads in the
+Use the ClearML [Autoscalers](../cloud_autoscaling/autoscaling_overview.md) to help you manage cloud workloads in the
cloud of your choice (AWS, GCP, Azure) and automatically deploy ClearML agents: the autoscaler automatically spins up
and shuts down instances as needed, according to a resource budget that you set.
@@ -107,7 +107,7 @@ the following command on it:
clearml-agent daemon --queue <queues_to_listen_to> [--docker]
```

-Use the ClearML [Autoscalers](../cloud_autoscaling/autoscaling_overview.md), to help you manage cloud workloads in the
+Use the ClearML [Autoscalers](../cloud_autoscaling/autoscaling_overview.md) to help you manage cloud workloads in the
cloud of your choice (AWS, GCP, Azure) and automatically deploy ClearML agents: the autoscaler automatically spins up and
shuts down instances as needed, according to a resource budget that you set.
@@ -56,7 +56,7 @@ On the right side of the dataset version panel, view the **VERSION INFO** which
* Number of files modified
* Number of files removed
* Change in size
-* Version description - to modify, hover over description and click <img src="/docs/latest/icons/ico-edit.svg" alt="Edit pencil" className="icon size-md space-sm" /> ,
+* Version description - to modify, hover over description and click <img src="/docs/latest/icons/ico-edit.svg" alt="Edit pencil" className="icon size-md space-sm" />,
  which opens the edit window

<div class="max-w-50">
@@ -101,7 +101,7 @@ Access these actions with the context menu by right-clicking a version on the da
|Add Tag |User-defined labels added to versions for grouping and organization. |
|Archive| Move dataset versions to the dataset's archive. |
|Restore|Action available in the archive. Restore a version to the active dataset versions table.|
-|Delete| Delete an archived version and its artifacts. This action is available only from the dataset's archive |
+|Delete| Delete an archived version and its artifacts. This action is available only from the dataset's archive. |

@@ -322,7 +322,7 @@ These controls allow you to better analyze the results. Hover over a plot, and t
| <img src="/docs/latest/icons/ico-pan.svg" alt="Pan icon" className="icon size-sm space-sm" /> | Pan around plot. Click <img src="/docs/latest/icons/ico-pan.svg" alt="Pan icon" className="icon size-sm space-sm" />, click the plot, and then drag. |
| <img src="/docs/latest/icons/ico-dotted-box.svg" alt="Dotted box icon" className="icon size-sm space-sm" /> | To examine an area, draw a dotted box around it. Click <img src="/docs/latest/icons/ico-dotted-box.svg" alt="Dotted box icon" className="icon size-sm space-sm" /> and then drag. |
| <img src="/docs/latest/icons/ico-dotted-lasso.svg" alt="Dotted lasso icon" className="icon size-sm space-sm" /> | To examine an area, draw a dotted lasso around it. Click <img src="/docs/latest/icons/ico-dotted-lasso.svg" alt="Dotted lasso icon" className="icon size-sm space-sm" /> and then drag. |
-| <img src="/docs/latest/icons/ico-zoom.svg" alt="Zoom icon" className="icon size-sm space-sm" /> | Zoom into a section of a plot. Zoom in - Click <img src="/docs/latest/icons/ico-zoom.svg" alt="Zoom icon" className="icon size-sm space-sm" /> and drag over a section of the plot. Reset to original scale - Click <img src="/docs/latest/icons/ico-reset-autoscale.svg" alt="Reset autoscale icon" className="icon size-sm space-sm" /> . |
+| <img src="/docs/latest/icons/ico-zoom.svg" alt="Zoom icon" className="icon size-sm space-sm" /> | Zoom into a section of a plot. Zoom in - Click <img src="/docs/latest/icons/ico-zoom.svg" alt="Zoom icon" className="icon size-sm space-sm" /> and drag over a section of the plot. Reset to original scale - Click <img src="/docs/latest/icons/ico-reset-autoscale.svg" alt="Reset autoscale icon" className="icon size-sm space-sm" />. |
| <img src="/docs/latest/icons/ico-zoom-in-square.svg" alt="Zoom-in icon" className="icon size-sm space-sm" /> | Zoom in. |
| <img src="/docs/latest/icons/ico-zoom-out-square.svg" alt="Zoom-out icon" className="icon size-sm space-sm" /> | Zoom out. |
| <img src="/docs/latest/icons/ico-reset-autoscale.svg" alt="Reset autoscale icon" className="icon size-sm space-sm" /> | Reset to autoscale after zooming ( <img src="/docs/latest/icons/ico-zoom.svg" alt="Zoom icon" className="icon size-sm space-sm" />, <img src="/docs/latest/icons/ico-zoom-in-square.svg" alt="Zoom-in icon" className="icon size-sm space-sm" />, or <img src="/docs/latest/icons/ico-zoom-out-square.svg" alt="Zoom-out icon" className="icon size-sm space-sm" />). |