diff --git a/docs/clearml_agent.md b/docs/clearml_agent.md index da66ed65..ada04c44 100644 --- a/docs/clearml_agent.md +++ b/docs/clearml_agent.md @@ -387,9 +387,7 @@ ClearML Agent uses the provided default Docker container, which can be overridde You can set the docker container via the UI: 1. Clone the experiment 2. Set the Docker in the cloned task's **Execution** tab **> Container** section - ![Container section](img/webapp_exp_container.png) - 3. Enqueue the cloned task The task will be executed in the container specified in the UI. diff --git a/docs/clearml_data/clearml_data_sdk.md b/docs/clearml_data/clearml_data_sdk.md index a4be0ca0..75615d8b 100644 --- a/docs/clearml_data/clearml_data_sdk.md +++ b/docs/clearml_data/clearml_data_sdk.md @@ -334,7 +334,7 @@ Note that in offline mode, any methods that require communicating with the serve Upload the offline dataset to the ClearML Server using [`Dataset.import_offline_session()`](../references/sdk/dataset.md#datasetimport_offline_session). ```python -Dataset.import_offline_session(session_folder_zip="", upload=True, finalize=True") +Dataset.import_offline_session(session_folder_zip="", upload=True, finalize=True) ``` In the `session_folder_zip` argument, insert the path to the zip folder containing the dataset. To [upload](#uploading-files) diff --git a/docs/configs/clearml_conf.md b/docs/configs/clearml_conf.md index c76cbd70..9c00ad82 100644 --- a/docs/configs/clearml_conf.md +++ b/docs/configs/clearml_conf.md @@ -707,7 +707,7 @@ This configuration option is experimental, and has not been vigorously tested, s **`api.credentials`** (*dict*) * Dictionary of API credentials. - Alternatively, specify the environment variable `CLEARML_API_ACCESS_KEY / CLEARML_API_SECRET_KEY` to override these keys. + Alternatively, specify the environment variables `CLEARML_API_ACCESS_KEY` / `CLEARML_API_SECRET_KEY` to override these keys. 
--- diff --git a/docs/getting_started/architecture.md b/docs/getting_started/architecture.md index c52009e9..1343e688 100644 --- a/docs/getting_started/architecture.md +++ b/docs/getting_started/architecture.md @@ -14,6 +14,6 @@ Solutions combined with the clearml-server control plane. ## YouTube Playlist -The first video in the ClearML YouTube **Getting Started** playlist covers these modules in more detail, feel free to check out the video below. +The first video in the ClearML YouTube **Getting Started** playlist covers these modules in more detail. Feel free to check out the video below. [![Watch the video](https://img.youtube.com/vi/s3k9ntmQmD4/hqdefault.jpg)](https://www.youtube.com/watch?v=s3k9ntmQmD4&list=PLMdIlCuMqSTnoC45ME5_JnsJX0zWqDdlO&index=1) \ No newline at end of file diff --git a/docs/getting_started/ds/best_practices.md b/docs/getting_started/ds/best_practices.md index 2b9165f1..3bd78682 100644 --- a/docs/getting_started/ds/best_practices.md +++ b/docs/getting_started/ds/best_practices.md @@ -41,7 +41,7 @@ yields the best performing model for your task! - You should continue coding while experiments are being executed without interrupting them. - Stop optimizing your code because your machine struggles, and run it on a beefier machine (cloud / on-prem). -Visualization and comparisons dashboards keep your sanity at bay! In this stage you usually have a docker container with all the binaries +Visualization and comparison dashboards help keep you sane! At this stage you usually have a docker container with all the binaries that you need. - [ClearML SDK](../../clearml_sdk/clearml_sdk.md) ensures that all the metrics, parameters and Models are automatically logged and can later be accessed, [compared](../../webapp/webapp_exp_comparing.md) and [tracked](../../webapp/webapp_exp_track_visual.md). 
diff --git a/docs/getting_started/ds/ds_second_steps.md b/docs/getting_started/ds/ds_second_steps.md index 7c7f1a0d..60ae9c2f 100644 --- a/docs/getting_started/ds/ds_second_steps.md +++ b/docs/getting_started/ds/ds_second_steps.md @@ -186,6 +186,6 @@ or check these pages out: ## YouTube Playlist -All these tips and tricks are also covered in ClearML's **Getting Started** series on YouTube, go check it out :) +All these tips and tricks are also covered in ClearML's **Getting Started** series on YouTube. Go check it out :) [![Watch the video](https://img.youtube.com/vi/kyOfwVg05EM/hqdefault.jpg)](https://www.youtube.com/watch?v=kyOfwVg05EM&list=PLMdIlCuMqSTnoC45ME5_JnsJX0zWqDdlO&index=3) \ No newline at end of file diff --git a/docs/guides/automation/manual_random_param_search_example.md b/docs/guides/automation/manual_random_param_search_example.md index 0a6e7da5..d3006f94 100644 --- a/docs/guides/automation/manual_random_param_search_example.md +++ b/docs/guides/automation/manual_random_param_search_example.md @@ -11,16 +11,16 @@ This example accomplishes the automated random parameter search by doing the fol 1. Creating a template Task named `Keras HP optimization base`. To create it, run the [base_template_keras_simple.py](https://github.com/allegroai/clearml/blob/master/examples/optimization/hyper-parameter-optimization/base_template_keras_simple.py) script. This experiment must be executed first, so it will be stored in the server, and then it can be accessed, cloned, and modified by another Task. -1. Creating a parameter dictionary, which is connected to the Task by calling [Task.connect](../../references/sdk/task.md#connect) +1. Creating a parameter dictionary, which is connected to the Task by calling [`Task.connect()`](../../references/sdk/task.md#connect) so that the parameters are logged by ClearML. 1. Adding the random search hyperparameters and parameters defining the search (e.g., the experiment name, and number of times to run the experiment). -1. 
Creating a Task object referencing the template experiment, `Keras HP optimization base`. See [Task.get_task](../../references/sdk/task.md#taskget_task). +1. Creating a Task object referencing the template experiment, `Keras HP optimization base`. See [`Task.get_task`](../../references/sdk/task.md#taskget_task). 1. For each set of parameters: - 1. Cloning the Task object. See [Task.clone](../../references/sdk/task.md#taskclone). - 1. Getting the newly cloned Task's parameters. See [Task.get_parameters](../../references/sdk/task.md#get_parameters) - 1. Setting the newly cloned Task's parameters to the search values in the parameter dictionary (Step 1). See [Task.set_parameters](../../references/sdk/task.md#set_parameters). - 1. Enqueuing the newly cloned Task to execute. See [Task.enqueue](../../references/sdk/task.md#taskenqueue). + 1. Cloning the Task object. See [`Task.clone`](../../references/sdk/task.md#taskclone). + 1. Getting the newly cloned Task's parameters. See [`Task.get_parameters`](../../references/sdk/task.md#get_parameters). + 1. Setting the newly cloned Task's parameters to the search values in the parameter dictionary (Step 1). See [`Task.set_parameters`](../../references/sdk/task.md#set_parameters). + 1. Enqueuing the newly cloned Task to execute. See [`Task.enqueue`](../../references/sdk/task.md#taskenqueue). When the example script runs, it creates an experiment named `Random Hyper-Parameter Search Example` in the `examples` project. This starts the parameter search, and creates the experiments: diff --git a/docs/guides/distributed/distributed_pytorch_example.md b/docs/guides/distributed/distributed_pytorch_example.md index 73f2b1ad..540fb70d 100644 --- a/docs/guides/distributed/distributed_pytorch_example.md +++ b/docs/guides/distributed/distributed_pytorch_example.md @@ -14,15 +14,15 @@ dataset), and reports (uploads) the following to the main Task: * Scalars - Loss reported as a scalar during training in each Task in a subprocess. 
* Hyperparameters - Hyperparameters created in each Task are added to the hyperparameters in the main Task. -Each Task in a subprocess references the main Task by calling [Task.current_task](../../references/sdk/task.md#taskcurrent_task), which always returns +Each Task in a subprocess references the main Task by calling [`Task.current_task()`](../../references/sdk/task.md#taskcurrent_task), which always returns the main Task. When the script runs, it creates an experiment named `test torch distributed` in the `examples` project. ## Artifacts -The example uploads a dictionary as an artifact in the main Task by calling the [Task.upload_artifact](../../references/sdk/task.md#upload_artifact) -method on [`Task.current_task`](../../references/sdk/task.md#taskcurrent_task) (the main Task). The dictionary contains the [`dist.rank`](https://pytorch.org/docs/stable/distributed.html#torch.distributed.get_rank) +The example uploads a dictionary as an artifact in the main Task by calling [`Task.upload_artifact()`](../../references/sdk/task.md#upload_artifact) +on [`Task.current_task()`](../../references/sdk/task.md#taskcurrent_task) (the main Task). The dictionary contains the [`dist.rank`](https://pytorch.org/docs/stable/distributed.html#torch.distributed.get_rank) of the subprocess, making each unique. ```python @@ -38,8 +38,8 @@ All of these artifacts appear in the main Task under **ARTIFACTS** **>** **OTHER ## Scalars -Loss is reported to the main Task by calling the [Logger.report_scalar](../../references/sdk/logger.md#report_scalar) -method on `Task.current_task().get_logger`, which is the logger for the main Task. Since `Logger.report_scalar` is called +Loss is reported to the main Task by calling [`Logger.report_scalar()`](../../references/sdk/logger.md#report_scalar) +on `Task.current_task().get_logger()`, which is the logger for the main Task. 
Since `Logger.report_scalar` is called with the same title (`loss`), but a different series name (containing the subprocess' `rank`), all loss scalar series are logged together. diff --git a/docs/guides/distributed/subprocess_example.md b/docs/guides/distributed/subprocess_example.md index efec01cf..2c61860f 100644 --- a/docs/guides/distributed/subprocess_example.md +++ b/docs/guides/distributed/subprocess_example.md @@ -5,7 +5,7 @@ title: Subprocess The [subprocess_example.py](https://github.com/allegroai/clearml/blob/master/examples/distributed/subprocess_example.py) script demonstrates multiple subprocesses interacting and reporting to a main Task. The following happens in the script: * This script initializes a main Task and spawns subprocesses, each for an instances of that Task. -* Each Task in a subprocess references the main Task by calling [Task.current_task](../../references/sdk/task.md#taskcurrent_task), +* Each Task in a subprocess references the main Task by calling [`Task.current_task()`](../../references/sdk/task.md#taskcurrent_task), which always returns the main Task. * The Task in each subprocess reports the following to the main Task: * Hyperparameters - Additional, different hyperparameters. @@ -15,7 +15,7 @@ which always returns the main Task. ## Hyperparameters ClearML automatically logs the command line options defined with `argparse`. A parameter dictionary is logged by -connecting it to the Task using a call to the [`Task.connect`](../../references/sdk/task.md#connect) method. +connecting it to the Task using [`Task.connect()`](../../references/sdk/task.md#connect). ```python additional_parameters = { diff --git a/docs/guides/frameworks/keras/jupyter.md b/docs/guides/frameworks/keras/jupyter.md index 5244293c..090e3325 100644 --- a/docs/guides/frameworks/keras/jupyter.md +++ b/docs/guides/frameworks/keras/jupyter.md @@ -38,7 +38,7 @@ The example calls Matplotlib methods to log debug sample images. 
They appear in ## Hyperparameters ClearML automatically logs TensorFlow Definitions. A parameter dictionary is logged by connecting it to the Task, by -calling the [`Task.connect`](../../../references/sdk/task.md#connect) method. +calling [`Task.connect()`](../../../references/sdk/task.md#connect). ```python task_params = {'num_scatter_samples': 60, 'sin_max_value': 20, 'sin_steps': 30} diff --git a/docs/guides/frameworks/keras/keras_tensorboard.md b/docs/guides/frameworks/keras/keras_tensorboard.md index 51030e9c..97a69055 100644 --- a/docs/guides/frameworks/keras/keras_tensorboard.md +++ b/docs/guides/frameworks/keras/keras_tensorboard.md @@ -53,12 +53,11 @@ Text printed to the console for training progress, as well as all other console ## Configuration Objects -In the experiment code, a configuration dictionary is connected to the Task by calling the [`Task.connect`](../../../references/sdk/task.md#connect) -method. +In the experiment code, a configuration dictionary is connected to the Task by calling [`Task.connect_configuration()`](../../../references/sdk/task.md#connect_configuration). ```python task.connect_configuration( - name="MyConfig" + name="MyConfig", configuration={'test': 1337, 'nested': {'key': 'value', 'number': 1}} ) ``` diff --git a/docs/guides/frameworks/pytorch/notebooks/image/image_classification_CIFAR10.md b/docs/guides/frameworks/pytorch/notebooks/image/image_classification_CIFAR10.md index 25329ef1..f6329aac 100644 --- a/docs/guides/frameworks/pytorch/notebooks/image/image_classification_CIFAR10.md +++ b/docs/guides/frameworks/pytorch/notebooks/image/image_classification_CIFAR10.md @@ -30,7 +30,7 @@ By doubling clicking a thumbnail, you can view a spectrogram plot in the image v ## Hyperparameters ClearML automatically logs TensorFlow Definitions. A parameter dictionary is logged by connecting it to the Task using -a call to the [Task.connect](../../../../../references/sdk/task.md#connect) method. +[`Task.connect()`](../../../../../references/sdk/task.md#connect). 
configuration_dict = {'number_of_epochs': 3, 'batch_size': 4, 'dropout': 0.25, 'base_lr': 0.001} configuration_dict = task.connect(configuration_dict) # enabling configuration override by clearml diff --git a/docs/guides/frameworks/pytorch/notebooks/table/download_and_preprocessing.md b/docs/guides/frameworks/pytorch/notebooks/table/download_and_preprocessing.md index bc13acc8..272047a9 100644 --- a/docs/guides/frameworks/pytorch/notebooks/table/download_and_preprocessing.md +++ b/docs/guides/frameworks/pytorch/notebooks/table/download_and_preprocessing.md @@ -14,15 +14,14 @@ The example code preprocesses the downloaded data using Pandas DataFrames, and s * `Outcome dictionary` - Label enumeration for training. * `Processed data` - A dictionary containing the paths of the training and validation data. -Each artifact is uploaded by calling the [Task.upload_artifact](../../../../../references/sdk/task.md#upload_artifact) -method. Artifacts appear in the **ARTIFACTS** tab. +Each artifact is uploaded by calling [`Task.upload_artifact()`](../../../../../references/sdk/task.md#upload_artifact). +Artifacts appear in the **ARTIFACTS** tab. ![image](../../../../../img/download_and_preprocessing_02.png) ## Plots (tables) -The example code explicitly reports the data in Pandas DataFrames by calling the [Logger.report_table](../../../../../references/sdk/logger.md#report_table) -method. +The example code explicitly reports the data in Pandas DataFrames by calling [`Logger.report_table()`](../../../../../references/sdk/logger.md#report_table). For example, the raw data is read into a Pandas DataFrame named `train_set`, and the `head` of the DataFrame is reported. @@ -39,8 +38,7 @@ The tables appear in **PLOTS**. ## Hyperparameters -A parameter dictionary is logged by connecting it to the Task using a call to the [`Task.connect`](../../../../../references/sdk/task.md#connect) -method. 
+A parameter dictionary is logged by connecting it to the Task using [`Task.connect()`](../../../../../references/sdk/task.md#connect). ```python logger = task.get_logger() diff --git a/docs/guides/frameworks/pytorch/notebooks/text/text_classification_AG_NEWS.md b/docs/guides/frameworks/pytorch/notebooks/text/text_classification_AG_NEWS.md index 9c3b8278..998dd3f8 100644 --- a/docs/guides/frameworks/pytorch/notebooks/text/text_classification_AG_NEWS.md +++ b/docs/guides/frameworks/pytorch/notebooks/text/text_classification_AG_NEWS.md @@ -15,8 +15,7 @@ Accuracy, learning rate, and training loss appear in **SCALARS**, along with the ## Hyperparameters ClearML automatically logs the command line options, because the example code uses `argparse`. A parameter dictionary -is logged by connecting it to the Task using a call to the [Task.connect](../../../../../references/sdk/task.md#connect) -method. +is logged by connecting it to the Task using [`Task.connect()`](../../../../../references/sdk/task.md#connect). 
```python configuration_dict = { diff --git a/docs/guides/frameworks/pytorch/pytorch_abseil.md b/docs/guides/frameworks/pytorch/pytorch_abseil.md index 8d95a023..3d09b176 100644 --- a/docs/guides/frameworks/pytorch/pytorch_abseil.md +++ b/docs/guides/frameworks/pytorch/pytorch_abseil.md @@ -10,8 +10,7 @@ The example script does the following: dataset * Creates an experiment named `pytorch mnist train with abseil` in the `examples` project * ClearML automatically logs the absl.flags, and the models (and their snapshots) created by PyTorch -* Additional metrics are logged by calling the [Logger.report_scalar](../../../references/sdk/logger.md#report_scalar) - method +* Additional metrics are logged by calling [`Logger.report_scalar()`](../../../references/sdk/logger.md#report_scalar) ## Scalars diff --git a/docs/guides/frameworks/pytorch/pytorch_distributed_example.md b/docs/guides/frameworks/pytorch/pytorch_distributed_example.md index 77a97337..5233c209 100644 --- a/docs/guides/frameworks/pytorch/pytorch_distributed_example.md +++ b/docs/guides/frameworks/pytorch/pytorch_distributed_example.md @@ -16,15 +16,15 @@ The script does the following: * Hyperparameters - Hyperparameters created in each subprocess Task are added to the main Task's hyperparameters. - Each Task in a subprocess references the main Task by calling [Task.current_task](../../../references/sdk/task.md#taskcurrent_task), + Each Task in a subprocess references the main Task by calling [`Task.current_task()`](../../../references/sdk/task.md#taskcurrent_task), which always returns the main Task. 1. When the script runs, it creates an experiment named `test torch distributed` in the `examples` project in the **ClearML Web UI**. ### Artifacts -The example uploads a dictionary as an artifact in the main Task by calling the [Task.upload_artifact](../../../references/sdk/task.md#upload_artifact) -method on `Task.current_task` (the main Task). 
The dictionary contains the `dist.rank` of the subprocess, making each unique. +The example uploads a dictionary as an artifact in the main Task by calling [`Task.upload_artifact()`](../../../references/sdk/task.md#upload_artifact) +on `Task.current_task()` (the main Task). The dictionary contains the `dist.rank` of the subprocess, making each unique. Task.current_task().upload_artifact( 'temp {:02d}'.format(dist.get_rank()), artifact_object={'worker_rank': dist.get_rank()}) @@ -35,7 +35,7 @@ All of these artifacts appear in the main Task, **ARTIFACTS** **>** **OTHER**. ## Scalars -Report loss to the main Task by calling the [Logger.report_scalar](../../../references/sdk/logger.md#report_scalar) method +Report loss to the main Task by calling [`Logger.report_scalar()`](../../../references/sdk/logger.md#report_scalar) on `Task.current_task().get_logger`, which is the logger for the main Task. Since `Logger.report_scalar` is called with the same title (`loss`), but a different series name (containing the subprocess' `rank`), all loss scalar series are logged together. @@ -50,8 +50,7 @@ The single scalar plot for loss appears in **SCALARS**. ClearML automatically logs the command line options defined using `argparse`. -A parameter dictionary is logged by connecting it to the Task using a call to the [`Task.connect`](../../../references/sdk/task.md#connect) -method. +A parameter dictionary is logged by connecting it to the Task using [`Task.connect()`](../../../references/sdk/task.md#connect). ```python param = {'worker_{}_stuff'.format(dist.get_rank()): 'some stuff ' + str(randint(0, 100))} diff --git a/docs/guides/frameworks/pytorch/pytorch_mnist.md b/docs/guides/frameworks/pytorch/pytorch_mnist.md index 64ccff75..b82089fd 100644 --- a/docs/guides/frameworks/pytorch/pytorch_mnist.md +++ b/docs/guides/frameworks/pytorch/pytorch_mnist.md @@ -10,7 +10,7 @@ The example script does the following: dataset. 
* Creates an experiment named `pytorch mnist train` in the `examples` project. * ClearML automatically logs `argparse` command line options, and models (and their snapshots) created by PyTorch -* Additional metrics are logged by calling the [Logger.report_scalar](../../../references/sdk/logger.md#report_scalar) method. +* Additional metrics are logged by calling [`Logger.report_scalar()`](../../../references/sdk/logger.md#report_scalar). ## Scalars diff --git a/docs/guides/optimization/hyper-parameter-optimization/examples_hyperparam_opt.md b/docs/guides/optimization/hyper-parameter-optimization/examples_hyperparam_opt.md index aeb9411d..375627c8 100644 --- a/docs/guides/optimization/hyper-parameter-optimization/examples_hyperparam_opt.md +++ b/docs/guides/optimization/hyper-parameter-optimization/examples_hyperparam_opt.md @@ -71,7 +71,7 @@ def job_complete_callback( Initialize the Task, which will be stored in ClearML Server when the code runs. After the code runs at least once, it can be [reproduced](../../../webapp/webapp_exp_reproducing.md) and [tuned](../../../webapp/webapp_exp_tuning.md). -We set the Task type to optimizer, and create a new experiment (and Task object) each time the optimizer runs (`reuse_last_task_id=False`). +Set the Task type to `optimizer`, and create a new experiment (and Task object) each time the optimizer runs (`reuse_last_task_id=False`). When the code runs, it creates an experiment named **Automatic Hyper-Parameter Optimization** that is associated with the project **Hyper-Parameter Optimization**, which can be seen in the **ClearML Web UI**. 
diff --git a/docs/guides/reporting/explicit_reporting.md b/docs/guides/reporting/explicit_reporting.md index 1fdfe5ef..d1228d33 100644 --- a/docs/guides/reporting/explicit_reporting.md +++ b/docs/guides/reporting/explicit_reporting.md @@ -187,7 +187,7 @@ def test(args, model, device, test_loader): ### Log Text Extend ClearML by explicitly logging text, including errors, warnings, and debugging statements. Use [`Logger.report_text()`](../../references/sdk/logger.md#report_text) -and its argument `level` to report a debugging message. +and its `level` argument to report a debugging message. ```python logger.report_text( diff --git a/docs/guides/reporting/image_reporting.md b/docs/guides/reporting/image_reporting.md index 26940551..46645457 100644 --- a/docs/guides/reporting/image_reporting.md +++ b/docs/guides/reporting/image_reporting.md @@ -11,14 +11,13 @@ demonstrates reporting (uploading) images in several formats, including: * Local files. ClearML uploads images to the bucket specified in the ClearML [configuration file](../../configs/clearml_conf.md), -or ClearML can be configured for image storage, see [Logger.set_default_upload_destination](../../references/sdk/logger.md#set_default_upload_destination) +or ClearML can be configured for image storage; see [`Logger.set_default_upload_destination()`](../../references/sdk/logger.md#set_default_upload_destination) (storage for [artifacts](../../clearml_sdk/task_sdk.md#setting-upload-destination) is different). Set credentials for storage in the ClearML configuration file. When the script runs, it creates an experiment named `image reporting` in the `examples` project. 
-Report images using several formats by calling the [Logger.report_image](../../references/sdk/logger.md#report_image) -method: +Report images using several formats by calling [`Logger.report_image()`](../../references/sdk/logger.md#report_image): ```python # report image as float image diff --git a/docs/hyperdatasets/webapp/webapp_datasets_versioning.md b/docs/hyperdatasets/webapp/webapp_datasets_versioning.md index 5caeae3e..e8c6b052 100644 --- a/docs/hyperdatasets/webapp/webapp_datasets_versioning.md +++ b/docs/hyperdatasets/webapp/webapp_datasets_versioning.md @@ -51,7 +51,7 @@ The **Frames** tab displays the contents of the selected dataset version. View the version's frames as thumbnail previews or in a table. Use the view toggle to switch between thumbnail view thumbnail view and -table view table view . +table view table view. Use the thumbnail view for a visual preview of the version's frames. You can increase Zoom in and decrease Zoom out the size of diff --git a/docs/integrations/autokeras.md b/docs/integrations/autokeras.md index eac70e08..d71ba043 100644 --- a/docs/integrations/autokeras.md +++ b/docs/integrations/autokeras.md @@ -88,7 +88,7 @@ following command on it: clearml-agent daemon --queue [--docker] ``` -Use the ClearML [Autoscalers](../cloud_autoscaling/autoscaling_overview.md), to help you manage cloud workloads in the +Use the ClearML [Autoscalers](../cloud_autoscaling/autoscaling_overview.md) to help you manage cloud workloads in the cloud of your choice (AWS, GCP, Azure) and automatically deploy ClearML agents: the autoscaler automatically spins up and shuts down instances as needed, according to a resource budget that you set. 
diff --git a/docs/integrations/catboost.md b/docs/integrations/catboost.md index e50dd159..d3e350a4 100644 --- a/docs/integrations/catboost.md +++ b/docs/integrations/catboost.md @@ -86,7 +86,7 @@ following command on it: clearml-agent daemon --queue [--docker] ``` -Use the ClearML [Autoscalers](../cloud_autoscaling/autoscaling_overview.md), to help you manage cloud workloads in the +Use the ClearML [Autoscalers](../cloud_autoscaling/autoscaling_overview.md) to help you manage cloud workloads in the cloud of your choice (AWS, GCP, Azure) and automatically deploy ClearML agents: the autoscaler automatically spins up and shuts down instances as needed, according to a resource budget that you set. diff --git a/docs/integrations/keras.md b/docs/integrations/keras.md index 19a8fe6b..f9f2b144 100644 --- a/docs/integrations/keras.md +++ b/docs/integrations/keras.md @@ -98,7 +98,7 @@ following command on it: clearml-agent daemon --queue [--docker] ``` -Use the ClearML [Autoscalers](../cloud_autoscaling/autoscaling_overview.md), to help you manage cloud workloads in the +Use the ClearML [Autoscalers](../cloud_autoscaling/autoscaling_overview.md) to help you manage cloud workloads in the cloud of your choice (AWS, GCP, Azure) and automatically deploy ClearML agents: the autoscaler automatically spins up and shuts down instances as needed, according to a resource budget that you set. 
diff --git a/docs/integrations/lightgbm.md b/docs/integrations/lightgbm.md index 6b81d88f..5ac47f8c 100644 --- a/docs/integrations/lightgbm.md +++ b/docs/integrations/lightgbm.md @@ -87,7 +87,7 @@ following command on it: clearml-agent daemon --queue [--docker] ``` -Use the ClearML [Autoscalers](../cloud_autoscaling/autoscaling_overview.md), to help you manage cloud workloads in the +Use the ClearML [Autoscalers](../cloud_autoscaling/autoscaling_overview.md) to help you manage cloud workloads in the cloud of your choice (AWS, GCP, Azure) and automatically deploy ClearML agents: the autoscaler automatically spins up and shuts down instances as needed, according to a resource budget that you set. diff --git a/docs/integrations/megengine.md b/docs/integrations/megengine.md index fd928a22..419eed2f 100644 --- a/docs/integrations/megengine.md +++ b/docs/integrations/megengine.md @@ -84,7 +84,7 @@ following command on it: clearml-agent daemon --queue [--docker] ``` -Use the ClearML [Autoscalers](../cloud_autoscaling/autoscaling_overview.md), to help you manage cloud workloads in the +Use the ClearML [Autoscalers](../cloud_autoscaling/autoscaling_overview.md) to help you manage cloud workloads in the cloud of your choice (AWS, GCP, Azure) and automatically deploy ClearML agents: the autoscaler automatically spins up and shuts down instances as needed, according to a resource budget that you set. 
diff --git a/docs/integrations/pytorch.md b/docs/integrations/pytorch.md index da3bc1e1..4a671b9c 100644 --- a/docs/integrations/pytorch.md +++ b/docs/integrations/pytorch.md @@ -107,7 +107,7 @@ following command on it: clearml-agent daemon --queue [--docker] ``` -Use the ClearML [Autoscalers](../cloud_autoscaling/autoscaling_overview.md), to help you manage cloud workloads in the +Use the ClearML [Autoscalers](../cloud_autoscaling/autoscaling_overview.md) to help you manage cloud workloads in the cloud of your choice (AWS, GCP, Azure) and automatically deploy ClearML agents: the autoscaler automatically spins up and shuts down instances as needed, according to a resource budget that you set. diff --git a/docs/integrations/scikit_learn.md b/docs/integrations/scikit_learn.md index 7cb4bfda..5960f724 100644 --- a/docs/integrations/scikit_learn.md +++ b/docs/integrations/scikit_learn.md @@ -90,7 +90,7 @@ following command on it: clearml-agent daemon --queue [--docker] ``` -Use the ClearML [Autoscalers](../cloud_autoscaling/autoscaling_overview.md), to help you manage cloud workloads in the +Use the ClearML [Autoscalers](../cloud_autoscaling/autoscaling_overview.md) to help you manage cloud workloads in the cloud of your choice (AWS, GCP, Azure) and automatically deploy ClearML agents: the autoscaler automatically spins up and shuts down instances as needed, according to a resource budget that you set. 
diff --git a/docs/integrations/tensorflow.md b/docs/integrations/tensorflow.md index a99289f8..19bd8f19 100644 --- a/docs/integrations/tensorflow.md +++ b/docs/integrations/tensorflow.md @@ -100,7 +100,7 @@ following command on it: clearml-agent daemon --queue [--docker] ``` -Use the ClearML [Autoscalers](../cloud_autoscaling/autoscaling_overview.md), to help you manage cloud workloads in the +Use the ClearML [Autoscalers](../cloud_autoscaling/autoscaling_overview.md) to help you manage cloud workloads in the cloud of your choice (AWS, GCP, Azure) and automatically deploy ClearML agents: the autoscaler automatically spins up and shuts down instances as needed, according to a resource budget that you set. diff --git a/docs/integrations/xgboost.md b/docs/integrations/xgboost.md index b3c282cd..8e3e5b0a 100644 --- a/docs/integrations/xgboost.md +++ b/docs/integrations/xgboost.md @@ -114,7 +114,7 @@ following command on it: clearml-agent daemon --queue [--docker] ``` -Use the ClearML [Autoscalers](../cloud_autoscaling/autoscaling_overview.md), to help you manage cloud workloads in the +Use the ClearML [Autoscalers](../cloud_autoscaling/autoscaling_overview.md) to help you manage cloud workloads in the cloud of your choice (AWS, GCP, Azure) and automatically deploy ClearML agents: the autoscaler automatically spins up and shuts down instances as needed, according to a resource budget that you set. 
diff --git a/docs/integrations/yolov5.md b/docs/integrations/yolov5.md index ea75e0b0..96236628 100644 --- a/docs/integrations/yolov5.md +++ b/docs/integrations/yolov5.md @@ -162,7 +162,7 @@ the following command on it: clearml-agent daemon --queue [--docker] ``` -Use the ClearML [Autoscalers](../cloud_autoscaling/autoscaling_overview.md), to help you manage cloud workloads in the +Use the ClearML [Autoscalers](../cloud_autoscaling/autoscaling_overview.md) to help you manage cloud workloads in the cloud of your choice (AWS, GCP, Azure) and automatically deploy ClearML agents: the autoscaler automatically spins up and shuts down instances as needed, according to a resource budget that you set. diff --git a/docs/integrations/yolov8.md b/docs/integrations/yolov8.md index ff7f8fee..7a3630f3 100644 --- a/docs/integrations/yolov8.md +++ b/docs/integrations/yolov8.md @@ -107,7 +107,7 @@ the following command on it: clearml-agent daemon --queue [--docker] ``` -Use the ClearML [Autoscalers](../cloud_autoscaling/autoscaling_overview.md), to help you manage cloud workloads in the +Use the ClearML [Autoscalers](../cloud_autoscaling/autoscaling_overview.md) to help you manage cloud workloads in the cloud of your choice (AWS, GCP, Azure) and automatically deploy ClearML agents: the autoscaler automatically spins up and shuts down instances as needed, according to a resource budget that you set. 
diff --git a/docs/webapp/datasets/webapp_dataset_viewing.md b/docs/webapp/datasets/webapp_dataset_viewing.md index 0cddcb1a..57bcebc7 100644 --- a/docs/webapp/datasets/webapp_dataset_viewing.md +++ b/docs/webapp/datasets/webapp_dataset_viewing.md @@ -56,7 +56,7 @@ On the right side of the dataset version panel, view the **VERSION INFO** which * Number of files modified * Number of files removed * Change in size -* Version description - to modify, hover over description and click Edit pencil , +* Version description - to modify, hover over description and click Edit pencil, which opens the edit window
@@ -101,7 +101,7 @@ Access these actions with the context menu by right-clicking a version on the da |Add Tag |User-defined labels added to versions for grouping and organization. | |Archive| Move dataset versions to the dataset's archive. | |Restore|Action available in the archive. Restore a version to the active dataset versions table.| -|Delete| Delete an archived version and its artifacts. This action is available only from the dataset’s archive | +|Delete| Delete an archived version and its artifacts. This action is available only from the dataset's archive. | ![Dataset actions](../../img/webapp_dataset_actions.png) diff --git a/docs/webapp/webapp_exp_track_visual.md b/docs/webapp/webapp_exp_track_visual.md index efa6617e..d247e5c5 100644 --- a/docs/webapp/webapp_exp_track_visual.md +++ b/docs/webapp/webapp_exp_track_visual.md @@ -322,7 +322,7 @@ These controls allow you to better analyze the results. Hover over a plot, and t | Pan icon | Pan around plot. Click Pan icon, click the plot, and then drag. | | Dotted box icon | To examine an area, draw a dotted box around it. Click Dotted box icon and then drag. | | Dotted lasso icon | To examine an area, draw a dotted lasso around it. Click Dotted lasso icon and then drag. | -| Zoom icon | Zoom into a section of a plot. Zoom in - Click Zoom icon and drag over a section of the plot. Reset to original scale - Click Reset autoscale icon . | +| Zoom icon | Zoom into a section of a plot. Zoom in - Click Zoom icon and drag over a section of the plot. Reset to original scale - Click Reset autoscale icon. | | Zoom-in icon | Zoom in. | | Zoom-out icon | Zoom out. | | Reset autoscale icon | Reset to autoscale after zooming ( Zoom icon, Zoom-in icon, or Zoom-out icon). |