From 439d86a46b0921a8755a8fe578f173c5876a7335 Mon Sep 17 00:00:00 2001
From: pollfly <75068813+pollfly@users.noreply.github.com>
Date: Tue, 27 Dec 2022 16:01:47 +0200
Subject: [PATCH] Small edits (#420)
---
docs/clearml_sdk/task_sdk.md | 2 +-
docs/clearml_serving/clearml_serving.md | 2 +-
docs/clearml_serving/clearml_serving_setup.md | 2 +-
docs/configs/clearml_conf.md | 2 +-
docs/fundamentals/hpo.md | 2 +-
docs/fundamentals/task.md | 6 ++---
docs/getting_started/ds/ds_first_steps.md | 4 +--
docs/getting_started/ds/ds_second_steps.md | 4 +--
.../mlops/mlops_best_practices.md | 2 +-
.../mlops/mlops_first_steps.md | 4 +--
.../mlops/mlops_second_steps.md | 2 +-
docs/guides/distributed/subprocess_example.md | 2 +-
docs/guides/frameworks/keras/jupyter.md | 2 +-
.../frameworks/keras/keras_tensorboard.md | 2 +-
.../audio_classification_UrbanSound8K.md | 12 ++++++---
.../notebooks/image/hyperparameter_search.md | 2 +-
.../table/download_and_preprocessing.md | 22 ++++++++++------
.../pytorch/pytorch_distributed_example.md | 2 +-
docs/guides/main.md | 2 +-
docs/guides/reporting/explicit_reporting.md | 2 +-
docs/guides/reporting/hyper_parameters.md | 4 +--
docs/guides/reporting/image_reporting.md | 2 +-
docs/guides/reporting/media_reporting.md | 4 +--
docs/guides/services/slack_alerts.md | 2 +-
docs/guides/storage/examples_storagehelper.md | 26 ++++++++++++++-----
docs/hyperdatasets/dataviews.md | 2 +-
docs/hyperdatasets/webapp/webapp_dataviews.md | 2 +-
docs/pipelines/pipelines.md | 2 +-
docs/pipelines/pipelines_sdk_tasks.md | 4 +--
.../webapp/pipelines/webapp_pipeline_table.md | 2 +-
docs/webapp/webapp_archiving.md | 2 +-
docs/webapp/webapp_exp_reproducing.md | 2 +-
docs/webapp/webapp_exp_table.md | 2 +-
docs/webapp/webapp_exp_tuning.md | 2 +-
34 files changed, 81 insertions(+), 57 deletions(-)
diff --git a/docs/clearml_sdk/task_sdk.md b/docs/clearml_sdk/task_sdk.md
index fca9fe50..90f4a7c1 100644
--- a/docs/clearml_sdk/task_sdk.md
+++ b/docs/clearml_sdk/task_sdk.md
@@ -126,7 +126,7 @@ auto_connect_frameworks={'tensorboard': {'report_hparams': False}}
Every `Task.init` call will create a new task for the current execution.
In order to mitigate the clutter that a multitude of debugging tasks might create, a task will be reused if:
* The last time it was executed (on this machine) was under 72 hours ago (configurable, see
- [`sdk.development.task_reuse_time_window_in_hours`](../configs/clearml_conf.md#task_reuse) of
+ [`sdk.development.task_reuse_time_window_in_hours`](../configs/clearml_conf.md#task_reuse) in
the ClearML configuration reference)
* The previous task execution did not have any artifacts / models
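The reuse conditions above can be sketched in plain Python (a hypothetical check for illustration, not the SDK's actual implementation; the real logic lives inside `Task.init`):

```python
from datetime import datetime, timedelta

def should_reuse_task(last_run: datetime, had_artifacts: bool,
                      window_hours: int = 72) -> bool:
    # Mirrors the documented conditions: the previous run on this machine
    # was recent enough, and it produced no artifacts/models.
    recent = datetime.now() - last_run < timedelta(hours=window_hours)
    return recent and not had_artifacts
```

For example, a run from 10 hours ago with no artifacts would be reused, while a 100-hour-old run would not.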
diff --git a/docs/clearml_serving/clearml_serving.md b/docs/clearml_serving/clearml_serving.md
index b0fbe013..805ef49f 100644
--- a/docs/clearml_serving/clearml_serving.md
+++ b/docs/clearml_serving/clearml_serving.md
@@ -46,7 +46,7 @@ solution.
* **Serving Service Task** - Control plane object storing configuration on all the endpoints. Supports multiple separate
instances, deployed on multiple clusters.
-* **Inference Services** - Inference containers, performing model serving pre/post processing. Also supports CPU model
+* **Inference Services** - Inference containers, performing model serving pre/post-processing. Also supports CPU model
inferencing.
* **Serving Engine Services** - Inference engine containers (e.g. Nvidia Triton, TorchServe etc.) used by the Inference
diff --git a/docs/clearml_serving/clearml_serving_setup.md b/docs/clearml_serving/clearml_serving_setup.md
index f1225dd3..dcc5afbb 100644
--- a/docs/clearml_serving/clearml_serving_setup.md
+++ b/docs/clearml_serving/clearml_serving_setup.md
@@ -72,7 +72,7 @@ The following page goes over how to set up and upgrade `clearml-serving`.
```
:::note
-Any model that registers with Triton engine will run the pre/post processing code on the Inference service container,
+Any model that registers with Triton engine will run the pre/post-processing code on the Inference service container,
and the model inference itself will be executed on the Triton Engine container.
:::
diff --git a/docs/configs/clearml_conf.md b/docs/configs/clearml_conf.md
index 6efeb5fc..48e937d4 100644
--- a/docs/configs/clearml_conf.md
+++ b/docs/configs/clearml_conf.md
@@ -414,7 +414,7 @@ match_rules: [
**`agent.package_manager`** (*dict*)
* Dictionary containing the options for the Python package manager. The currently supported package managers are pip, conda,
- and, if the repository contains a poetry.lock file, poetry.
+ and, if the repository contains a `poetry.lock` file, poetry.
---
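For reference, the package manager is selected under this key in `clearml.conf` along the following lines (an illustrative fragment; see the surrounding reference for the full set of options):

```
agent {
    package_manager {
        # one of: pip, conda, poetry
        # (poetry is used only if the repository contains a poetry.lock file)
        type: pip
    }
}
```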
diff --git a/docs/fundamentals/hpo.md b/docs/fundamentals/hpo.md
index 6642a0d8..579ad0de 100644
--- a/docs/fundamentals/hpo.md
+++ b/docs/fundamentals/hpo.md
@@ -90,7 +90,7 @@ optimization.
optimizer = HyperParameterOptimizer(
# specifying the task to be optimized, task must be in system already so it can be cloned
base_task_id=TEMPLATE_TASK_ID,
- # setting the hyper-parameters to optimize
+ # setting the hyperparameters to optimize
hyper_parameters=[
UniformIntegerParameterRange('number_of_epochs', min_value=2, max_value=12, step_size=2),
UniformIntegerParameterRange('batch_size', min_value=2, max_value=16, step_size=2),
diff --git a/docs/fundamentals/task.md b/docs/fundamentals/task.md
index c430c323..4a217c33 100644
--- a/docs/fundamentals/task.md
+++ b/docs/fundamentals/task.md
@@ -7,11 +7,11 @@ title: Tasks
A Task is a single code execution session, which can represent an experiment, a step in a workflow, a workflow controller,
or any custom implementation you choose.
-To transform an existing script into a **ClearML Task**, one must call the [Task.init()](../references/sdk/task.md#taskinit) method
+To transform an existing script into a **ClearML Task**, one must call the [`Task.init()`](../references/sdk/task.md#taskinit) method
and specify a task name and its project. This creates a Task object that automatically captures code execution
information as well as execution outputs.
-All the information captured by a task is by default uploaded to the [ClearML Server](../deploying_clearml/clearml_server.md)
+All the information captured by a task is by default uploaded to the [ClearML Server](../deploying_clearml/clearml_server.md),
and it can be visualized in the [ClearML WebApp](../webapp/webapp_overview.md) (UI). ClearML can also be configured to upload
model checkpoints, artifacts, and charts to cloud storage (see [Storage](../integrations/storage.md)). Additionally,
you can work with tasks in Offline Mode, in which all information is saved in a local folder (see
@@ -110,7 +110,7 @@ Available task types are:
* *controller* - A task that lays out the logic for other tasks’ interactions, manual or automatic (e.g. a pipeline
controller)
* *optimizer* - A specific type of controller for optimization tasks (e.g. [hyperparameter optimization](hpo.md))
-* *service* - Long lasting or recurring service (e.g. server cleanup, auto ingress, sync services etc)
+* *service* - Long lasting or recurring service (e.g. server cleanup, auto ingress, sync services etc.)
* *monitor* - A specific type of service for monitoring
* *application* - A task implementing custom applicative logic, like [auto-scaler](../guides/services/aws_autoscaler.md)
or [clearml-session](../apps/clearml_session.md)
diff --git a/docs/getting_started/ds/ds_first_steps.md b/docs/getting_started/ds/ds_first_steps.md
index 6cbb3a69..54897495 100644
--- a/docs/getting_started/ds/ds_first_steps.md
+++ b/docs/getting_started/ds/ds_first_steps.md
@@ -132,8 +132,8 @@ Now, [command-line arguments](../../fundamentals/hyperparameters.md#tracking-hyp
Sit back, relax, and watch your models converge :) or continue to see what else can be done with ClearML [here](ds_second_steps.md).
-## Youtube Playlist
+## YouTube Playlist
-Or watch the Youtube Getting Started Playlist on our Youtube Channel!
+Or watch the YouTube Getting Started Playlist on our YouTube Channel!
[](https://www.youtube.com/watch?v=bjWwZAzDxTY&list=PLMdIlCuMqSTnoC45ME5_JnsJX0zWqDdlO&index=2)
diff --git a/docs/getting_started/ds/ds_second_steps.md b/docs/getting_started/ds/ds_second_steps.md
index 2b62678e..7a8cbb98 100644
--- a/docs/getting_started/ds/ds_second_steps.md
+++ b/docs/getting_started/ds/ds_second_steps.md
@@ -181,8 +181,8 @@ or check these pages out:
- Improve your experiments with [HyperParameter Optimization](../../fundamentals/hpo.md)
- Check out ClearML's integrations to [external libraries](../../integrations/libraries.md).
-## Youtube Playlist
+## YouTube Playlist
-All these tips and tricks are also covered by our Youtube Getting Started series, go check it out :)
+All these tips and tricks are also covered by our YouTube Getting Started series, go check it out :)
[](https://www.youtube.com/watch?v=kyOfwVg05EM&list=PLMdIlCuMqSTnoC45ME5_JnsJX0zWqDdlO&index=3)
\ No newline at end of file
diff --git a/docs/getting_started/mlops/mlops_best_practices.md b/docs/getting_started/mlops/mlops_best_practices.md
index bb037cd2..d2c64b62 100644
--- a/docs/getting_started/mlops/mlops_best_practices.md
+++ b/docs/getting_started/mlops/mlops_best_practices.md
@@ -11,7 +11,7 @@ If you are afraid of clutter, use the archive option, and set up your own [clean
- Track the code base. There is no reason not to add metrics to any process in your workflow, even if it is not directly ML. Visibility is key to iterative improvement of your code / workflow.
- Create per-project [leaderboards](../../guides/ui/building_leader_board.md) based on custom columns
- (hyper parameters and performance accuracy), and bookmark them (full URL will always reproduce the same view & table).
+ (hyperparameters and performance accuracy), and bookmark them (full URL will always reproduce the same view & table).
- Share experiments with your colleagues and team-leaders.
Invite more people to see how your project is progressing, and suggest they add metric reporting for their own.
These metrics can later be part of your own in-house monitoring solution, don't let good data go to waste :)
diff --git a/docs/getting_started/mlops/mlops_first_steps.md b/docs/getting_started/mlops/mlops_first_steps.md
index 2a6a545a..2f23bae2 100644
--- a/docs/getting_started/mlops/mlops_first_steps.md
+++ b/docs/getting_started/mlops/mlops_first_steps.md
@@ -64,7 +64,7 @@ Cloning a task duplicates the task’s configuration, but not its outputs.
**To clone an experiment in the ClearML WebApp:**
1. Click on any project card to open its [experiments table](../../webapp/webapp_exp_table.md)
-1. Right click one of the experiments on the table
+1. Right-click one of the experiments on the table
1. Click **Clone** in the context menu, which will open a **CLONE EXPERIMENT** window.
1. Click **CLONE** in the window.
@@ -76,7 +76,7 @@ Docker container image to be used, or change the hyperparameters and configurati
Once you have set up an experiment, it is now time to execute it.
**To execute an experiment through the ClearML WebApp:**
-1. Right click your draft experiment (the context menu is also available through the
+1. Right-click your draft experiment (the context menu is also available through the
button on the top right of the experiment’s info panel)
1. Click **ENQUEUE**, which will open the **ENQUEUE EXPERIMENT** window
1. In the window, select `default` in the queue menu
diff --git a/docs/getting_started/mlops/mlops_second_steps.md b/docs/getting_started/mlops/mlops_second_steps.md
index 0c52c36c..0b607e5c 100644
--- a/docs/getting_started/mlops/mlops_second_steps.md
+++ b/docs/getting_started/mlops/mlops_second_steps.md
@@ -27,7 +27,7 @@ clearml-data sync --folder ./from_production
We could also add a Tag `latest` to the Dataset, marking it as the latest version.
### Preprocessing Data
-The second step is to preprocess the date. First we need to access it, then we want to modify it
+The second step is to preprocess the data. First we need to access it, then we want to modify it,
and lastly we want to create a new version of the data.
```python
diff --git a/docs/guides/distributed/subprocess_example.md b/docs/guides/distributed/subprocess_example.md
index 9607f0d4..ff15caad 100644
--- a/docs/guides/distributed/subprocess_example.md
+++ b/docs/guides/distributed/subprocess_example.md
@@ -15,7 +15,7 @@ which always returns the main Task.
## Hyperparameters
ClearML automatically logs the command line options defined with `argparse`. A parameter dictionary is logged by
-connecting it to the Task using a call to the [Task.connect](../../references/sdk/task.md#connect) method.
+connecting it to the Task using a call to the [`Task.connect`](../../references/sdk/task.md#connect) method.
```python
additional_parameters = {
diff --git a/docs/guides/frameworks/keras/jupyter.md b/docs/guides/frameworks/keras/jupyter.md
index c8f40c52..0e787360 100644
--- a/docs/guides/frameworks/keras/jupyter.md
+++ b/docs/guides/frameworks/keras/jupyter.md
@@ -38,7 +38,7 @@ The example calls Matplotlib methods to log debug sample images. They appear in
## Hyperparameters
ClearML automatically logs TensorFlow Definitions. A parameter dictionary is logged by connecting it to the Task, by
-calling the [Task.connect](../../../references/sdk/task.md#connect) method.
+calling the [`Task.connect`](../../../references/sdk/task.md#connect) method.
```python
task_params = {'num_scatter_samples': 60, 'sin_max_value': 20, 'sin_steps': 30}
diff --git a/docs/guides/frameworks/keras/keras_tensorboard.md b/docs/guides/frameworks/keras/keras_tensorboard.md
index 3f79f88a..1787faa9 100644
--- a/docs/guides/frameworks/keras/keras_tensorboard.md
+++ b/docs/guides/frameworks/keras/keras_tensorboard.md
@@ -53,7 +53,7 @@ Text printed to the console for training progress, as well as all other console
## Configuration Objects
-In the experiment code, a configuration dictionary is connected to the Task by calling the [Task.connect](../../../references/sdk/task.md#connect)
+In the experiment code, a configuration dictionary is connected to the Task by calling the [`Task.connect`](../../../references/sdk/task.md#connect)
method.
```python
diff --git a/docs/guides/frameworks/pytorch/notebooks/audio/audio_classification_UrbanSound8K.md b/docs/guides/frameworks/pytorch/notebooks/audio/audio_classification_UrbanSound8K.md
index e42580a3..2f215c99 100644
--- a/docs/guides/frameworks/pytorch/notebooks/audio/audio_classification_UrbanSound8K.md
+++ b/docs/guides/frameworks/pytorch/notebooks/audio/audio_classification_UrbanSound8K.md
@@ -33,9 +33,15 @@ By doubling clicking a thumbnail, you can view a spectrogram plot in the image v
ClearML automatically logs TensorFlow Definitions. A parameter dictionary is logged by connecting it to the Task using
a call to the [`Task.connect`](../../../../../references/sdk/task.md#connect) method.
- configuration_dict = {'number_of_epochs': 10, 'batch_size': 4, 'dropout': 0.25, 'base_lr': 0.001}
- configuration_dict = task.connect(configuration_dict) # enabling configuration override by clearml
-
+```python
+configuration_dict = {
+ 'number_of_epochs': 10,
+ 'batch_size': 4,
+ 'dropout': 0.25,
+ 'base_lr': 0.001
+}
+configuration_dict = task.connect(configuration_dict) # enabling configuration override by clearml
+```
Parameter dictionaries appear in **CONFIGURATION** **>** **HYPER PARAMETERS** **>** **General**.

diff --git a/docs/guides/frameworks/pytorch/notebooks/image/hyperparameter_search.md b/docs/guides/frameworks/pytorch/notebooks/image/hyperparameter_search.md
index 6dd89a2a..bc9ef0b5 100644
--- a/docs/guides/frameworks/pytorch/notebooks/image/hyperparameter_search.md
+++ b/docs/guides/frameworks/pytorch/notebooks/image/hyperparameter_search.md
@@ -27,7 +27,7 @@ optimizer task's **CONFIGURATION** **>** **HYPER PARAMETERS**.
```python
optimizer = HyperParameterOptimizer(
base_task_id=TEMPLATE_TASK_ID, # This is the experiment we want to optimize
- # here we define the hyper-parameters to optimize
+ # here we define the hyperparameters to optimize
hyper_parameters=[
UniformIntegerParameterRange('number_of_epochs', min_value=2, max_value=12, step_size=2),
UniformIntegerParameterRange('batch_size', min_value=2, max_value=16, step_size=2),
diff --git a/docs/guides/frameworks/pytorch/notebooks/table/download_and_preprocessing.md b/docs/guides/frameworks/pytorch/notebooks/table/download_and_preprocessing.md
index 2f095eb6..e6c39236 100644
--- a/docs/guides/frameworks/pytorch/notebooks/table/download_and_preprocessing.md
+++ b/docs/guides/frameworks/pytorch/notebooks/table/download_and_preprocessing.md
@@ -26,22 +26,28 @@ method.
For example, the raw data is read into a Pandas DataFrame named `train_set`, and the `head` of the DataFrame is reported.
- train_set = pd.read_csv(Path(path_to_ShelterAnimal) / 'train.csv')
- Logger.current_logger().report_table(title='ClearMLet - raw',series='pandas DataFrame',iteration=0, table_plot=train_set.head())
-
+```python
+train_set = pd.read_csv(Path(path_to_ShelterAnimal) / 'train.csv')
+Logger.current_logger().report_table(
+    title='ClearMLet - raw', series='pandas DataFrame', iteration=0, table_plot=train_set.head()
+)
+```
+
The tables appear in **PLOTS**.

## Hyperparameters
-A parameter dictionary is logged by connecting it to the Task using a call to the [Task.connect](../../../../../references/sdk/task.md#connect)
+A parameter dictionary is logged by connecting it to the Task using a call to the [`Task.connect`](../../../../../references/sdk/task.md#connect)
method.
- logger = task.get_logger()
- configuration_dict = {'test_size': 0.1, 'split_random_state': 0}
- configuration_dict = task.connect(configuration_dict)
-
+```python
+logger = task.get_logger()
+configuration_dict = {'test_size': 0.1, 'split_random_state': 0}
+configuration_dict = task.connect(configuration_dict)
+```
+
Parameter dictionaries appear in the **General** subsection.

diff --git a/docs/guides/frameworks/pytorch/pytorch_distributed_example.md b/docs/guides/frameworks/pytorch/pytorch_distributed_example.md
index 73f62eec..e52fa666 100644
--- a/docs/guides/frameworks/pytorch/pytorch_distributed_example.md
+++ b/docs/guides/frameworks/pytorch/pytorch_distributed_example.md
@@ -50,7 +50,7 @@ The single scalar plot for loss appears in **SCALARS**.
ClearML automatically logs the command line options defined using `argparse`.
-A parameter dictionary is logged by connecting it to the Task using a call to the [Task.connect](../../../references/sdk/task.md#connect)
+A parameter dictionary is logged by connecting it to the Task using a call to the [`Task.connect`](../../../references/sdk/task.md#connect)
method.
```python
diff --git a/docs/guides/main.md b/docs/guides/main.md
index d463b81f..d3109134 100644
--- a/docs/guides/main.md
+++ b/docs/guides/main.md
@@ -8,6 +8,6 @@ slug: /guides
To help you learn and use ClearML, we provide example scripts that demonstrate how to use ClearML's various features.
Example scripts are in the [examples](https://github.com/allegroai/clearml/tree/master/examples) folder of the GitHub `clearml`
-repository. They are also pre-loaded in the **ClearML Server**:
+repository. They are also preloaded in the **ClearML Server**:
Each examples folder in the GitHub ``clearml`` repository contains a ``requirements.txt`` file for example scripts in that folder.
diff --git a/docs/guides/reporting/explicit_reporting.md b/docs/guides/reporting/explicit_reporting.md
index 88d4fe87..eac979d2 100644
--- a/docs/guides/reporting/explicit_reporting.md
+++ b/docs/guides/reporting/explicit_reporting.md
@@ -37,7 +37,7 @@ experiment runs. Some possible destinations include:
* Google Cloud Storage
* Azure Storage.
-Specify the output location in the `output_uri` parameter of the [Task.init](../../references/sdk/task.md#taskinit) method.
+Specify the output location in the `output_uri` parameter of the [`Task.init`](../../references/sdk/task.md#taskinit) method.
In this tutorial, we specify a local folder destination.
In `pytorch_mnist_tutorial.py`, change the code from:
diff --git a/docs/guides/reporting/hyper_parameters.md b/docs/guides/reporting/hyper_parameters.md
index d0970fa1..534917ea 100644
--- a/docs/guides/reporting/hyper_parameters.md
+++ b/docs/guides/reporting/hyper_parameters.md
@@ -40,7 +40,7 @@ ClearML automatically logs TensorFlow Definitions, whether they are defined befo
flags.DEFINE_string('echo', None, 'Text to echo.')
flags.DEFINE_string('another_str', 'My string', 'A string', module_name='test')
-task = Task.init(project_name='examples', task_name='hyper-parameters example')
+task = Task.init(project_name='examples', task_name='hyperparameters example')
flags.DEFINE_integer('echo3', 3, 'Text to echo.')
@@ -54,7 +54,7 @@ TensorFlow Definitions appear in **HYPER PARAMETERS** **>** **TF_DEFINE**.
## Parameter Dictionaries
-Connect a parameter dictionary to a Task by calling the [Task.connect](../../references/sdk/task.md#connect)
+Connect a parameter dictionary to a Task by calling the [`Task.connect`](../../references/sdk/task.md#connect)
method, and ClearML logs the parameters. ClearML also tracks changes to the parameters.
```python
diff --git a/docs/guides/reporting/image_reporting.md b/docs/guides/reporting/image_reporting.md
index f70f8792..3962116e 100644
--- a/docs/guides/reporting/image_reporting.md
+++ b/docs/guides/reporting/image_reporting.md
@@ -53,6 +53,6 @@ ClearML reports these images as debug samples in the **ClearML Web UI**, under t

-Double click a thumbnail, and the image viewer opens.
+Double-click a thumbnail, and the image viewer opens.

\ No newline at end of file
diff --git a/docs/guides/reporting/media_reporting.md b/docs/guides/reporting/media_reporting.md
index 0960509a..93992d9d 100644
--- a/docs/guides/reporting/media_reporting.md
+++ b/docs/guides/reporting/media_reporting.md
@@ -38,7 +38,7 @@ Logger.current_logger().report_media(
)
```
-The reported audio can be viewed in the **DEBUG SAMPLES** tab. Double click a thumbnail, and the audio player opens.
+The reported audio can be viewed in the **DEBUG SAMPLES** tab. Double-click a thumbnail, and the audio player opens.

@@ -55,6 +55,6 @@ Logger.current_logger().report_media(
)
```
-The reported video can be viewed in the **DEBUG SAMPLES** tab. Double click a thumbnail, and the video player opens.
+The reported video can be viewed in the **DEBUG SAMPLES** tab. Double-click a thumbnail, and the video player opens.

diff --git a/docs/guides/services/slack_alerts.md b/docs/guides/services/slack_alerts.md
index eeac5cb6..dad7d357 100644
--- a/docs/guides/services/slack_alerts.md
+++ b/docs/guides/services/slack_alerts.md
@@ -75,7 +75,7 @@ The script supports the following additional command line options:
Mutually exclusive to `exclude_users`.
* `exclude_users` - Only report tasks that were NOT initiated by these users (usernames and user IDs are accepted).
Mutually exclusive to `include_users`.
-* `verbose` - If `True`, will increase verbosity of messages (such as when when tasks are polled but filtered away).
+* `verbose` - If `True`, will increase verbosity of messages (such as when tasks are polled but filtered away).
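The user filters above are mutually exclusive; the behavior they describe can be pictured in plain Python (a rough sketch, not the monitor script's actual code):

```python
def filter_tasks(tasks, include_users=None, exclude_users=None):
    # include_users and exclude_users are mutually exclusive options
    if include_users and exclude_users:
        raise ValueError("include_users and exclude_users are mutually exclusive")
    if include_users:
        # only report tasks initiated by these users
        return [t for t in tasks if t["user"] in include_users]
    if exclude_users:
        # only report tasks NOT initiated by these users
        return [t for t in tasks if t["user"] not in exclude_users]
    return tasks
```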
## Configuration
diff --git a/docs/guides/storage/examples_storagehelper.md b/docs/guides/storage/examples_storagehelper.md
index 5d4fc51a..a335abe0 100644
--- a/docs/guides/storage/examples_storagehelper.md
+++ b/docs/guides/storage/examples_storagehelper.md
@@ -21,10 +21,12 @@ class. The storage examples include:
To download a ZIP file from storage to the `global` cache context, call the [`StorageManager.get_local_copy`](../../references/sdk/storage.md#storagemanagerget_local_copy)
method, and specify the remote file's location as the `remote_url` argument:
- # create a StorageManager instance
- manager = StorageManager()
+```python
+# create a StorageManager instance
+manager = StorageManager()
- manager.get_local_copy(remote_url="s3://MyBucket/MyFolder/file.zip")
+manager.get_local_copy(remote_url="s3://MyBucket/MyFolder/file.zip")
+```
:::note
Zip and tar.gz files will be automatically extracted to cache. This can be controlled with the `extract_archive` flag.
@@ -32,11 +34,15 @@ Zip and tar.gz files will be automatically extracted to cache. This can be contr
To download a file to a specific context in cache, specify the name of the context as the `cache_context` argument:
- manager.get_local_copy(remote_url="s3://MyBucket/MyFolder/file.ext", cache_context="test")
+```python
+manager.get_local_copy(remote_url="s3://MyBucket/MyFolder/file.ext", cache_context="test")
+```
To download a non-compressed file, set the `extract_archive` argument to `False`.
- manager.get_local_copy(remote_url="s3://MyBucket/MyFolder/file.ext", extract_archive=False)
+```python
+manager.get_local_copy(remote_url="s3://MyBucket/MyFolder/file.ext", extract_archive=False)
+```
By default, the `StorageManager` reports its download progress to the console every 5MB. You can change this using the
[`StorageManager.set_report_download_chunk_size`](../../references/sdk/storage.md#storagemanagerset_report_download_chunk_size)
@@ -48,7 +54,11 @@ To upload a file to storage, call the [StorageManager.upload_file](../../referen
method. Specify the full path of the local file as the `local_file` argument, and the remote URL as the `remote_url`
argument.
- manager.upload_file(local_file="/mnt/data/also_file.ext", remote_url="s3://MyBucket/MyFolder")
+```python
+manager.upload_file(
+ local_file="/mnt/data/also_file.ext", remote_url="s3://MyBucket/MyFolder"
+)
+```
Use the `retries` parameter to set the number of times the file upload should be retried in case of failure.
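The retry behavior that the `retries` parameter implies can be sketched as a simple loop (a generic illustration, not the SDK's implementation):

```python
def upload_with_retries(upload_fn, retries: int = 3):
    # Attempt the upload, re-trying up to `retries` times on failure
    # and re-raising the last error if every attempt fails.
    last_error = None
    for attempt in range(retries):
        try:
            return upload_fn()
        except Exception as exc:  # in practice, transient network/IO errors
            last_error = exc
    raise last_error
```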
@@ -63,4 +73,6 @@ To set a limit on the number of files cached, call the [StorageManager.set_cache
method and specify the `cache_file_limit` argument as the maximum number of files. This does not limit the cache size,
only the number of files.
- new_cache_limit = manager.set_cache_file_limit(cache_file_limit=100)
\ No newline at end of file
+```python
+new_cache_limit = manager.set_cache_file_limit(cache_file_limit=100)
+```
\ No newline at end of file
diff --git a/docs/hyperdatasets/dataviews.md b/docs/hyperdatasets/dataviews.md
index 8350e958..3bc63c6d 100644
--- a/docs/hyperdatasets/dataviews.md
+++ b/docs/hyperdatasets/dataviews.md
@@ -495,7 +495,7 @@ myDataView.add_mapping_rule(
### Accessing Frames
-Dataview objects can be retrieved by the Dataview ID or name using the [DataView.get](../references/hyperdataset/dataview.md#dataviewget)
+Dataview objects can be retrieved by the Dataview ID or name using the [`DataView.get`](../references/hyperdataset/dataview.md#dataviewget)
class method.
```python
diff --git a/docs/hyperdatasets/webapp/webapp_dataviews.md b/docs/hyperdatasets/webapp/webapp_dataviews.md
index e38f49fb..7df2590b 100644
--- a/docs/hyperdatasets/webapp/webapp_dataviews.md
+++ b/docs/hyperdatasets/webapp/webapp_dataviews.md
@@ -67,7 +67,7 @@ Access these actions with the context menu in any of the following ways:
| ClearML Action | Description |
|---|---|
-| Details | View Dataview details, including input datasets, label mapping, augmentation operations, and iteration control. Can also be accessed by double clicking a Dataview in the Dataviews table. |
+| Details | View Dataview details, including input datasets, label mapping, augmentation operations, and iteration control. Can also be accessed by double-clicking a Dataview in the Dataviews table. |
| Archive | To more easily work with active Dataviews, move a Dataview to the archive, removing it from the active Dataview table. |
| Restore | Action available in the archive. Restore a Dataview to the active Dataviews table. |
| Clone | Make an exact copy of a Dataview that is editable. |
diff --git a/docs/pipelines/pipelines.md b/docs/pipelines/pipelines.md
index ffdbd4e1..d68c2f8f 100644
--- a/docs/pipelines/pipelines.md
+++ b/docs/pipelines/pipelines.md
@@ -87,7 +87,7 @@ if there is a change in the pipeline code. If there is no change, the pipeline r
### Tracking Pipeline Progress
ClearML automatically tracks a pipeline’s progress percentage: the number of pipeline steps completed out of the total
number of steps. For example, if a pipeline consists of 4 steps, after the first step completes, ClearML automatically
-sets its progress value to 25. Once a pipeline has started to run but is yet to successfully finish, , the WebApp will
+sets its progress value to 25. Once a pipeline has started running but has not yet successfully finished, the WebApp will
show the pipeline’s progress indication in the pipeline runs table, next to the run’s status.
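The progress value described above is simply the completed-step fraction expressed as a percentage (illustrative arithmetic only):

```python
def pipeline_progress(steps_completed: int, total_steps: int) -> int:
    # e.g. 1 of 4 steps completed -> progress value of 25
    return int(100 * steps_completed / total_steps)
```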
## Examples
diff --git a/docs/pipelines/pipelines_sdk_tasks.md b/docs/pipelines/pipelines_sdk_tasks.md
index b5df3021..de1e8f86 100644
--- a/docs/pipelines/pipelines_sdk_tasks.md
+++ b/docs/pipelines/pipelines_sdk_tasks.md
@@ -157,8 +157,8 @@ arguments.
#### pre_execute_callback & post_execute_callback
Callbacks can be utilized to control pipeline execution flow.
-A `pre_execute_callback` function is called when the step is created and before it is sent for execution. This allows a
-user to modify the task before launch. Use node.job to access the [ClearmlJob](../references/sdk/automation_job_clearmljob.md)
+A `pre_execute_callback` function is called when the step is created, and before it is sent for execution. This allows a
+user to modify the task before launch. Use `node.job` to access the [ClearmlJob](../references/sdk/automation_job_clearmljob.md)
object, or `node.job.task` to directly access the Task object. Parameters are the configuration arguments passed to the
`ClearmlJob`.
diff --git a/docs/webapp/pipelines/webapp_pipeline_table.md b/docs/webapp/pipelines/webapp_pipeline_table.md
index d15e2921..1a05ecc0 100644
--- a/docs/webapp/pipelines/webapp_pipeline_table.md
+++ b/docs/webapp/pipelines/webapp_pipeline_table.md
@@ -100,7 +100,7 @@ Access these actions with the context menu in any of the following ways:
| Action | Description | States Valid for the Action | State Transition |
|---|---|---|---|
-| Details | View pipeline details. Can also be accessed by double clicking a run in the pipeline runs table. | Any state | None |
+| Details | View pipeline details. Can also be accessed by double-clicking a run in the pipeline runs table. | Any state | None |
| Run | Create a new pipeline run. Configure and enqueue it for execution. See [Create Run](#create-run). | Any State | *Pending* |
| Abort | Manually stop / cancel a run. | *Running* / *Pending* | *Aborted* |
| Continue | Rerun with the same parameters. | *Aborted* | *Pending* |
diff --git a/docs/webapp/webapp_archiving.md b/docs/webapp/webapp_archiving.md
index 32e547d9..63232eee 100644
--- a/docs/webapp/webapp_archiving.md
+++ b/docs/webapp/webapp_archiving.md
@@ -33,7 +33,7 @@ When archiving an experiment:
* Restore an experiment or model from either the:
- * Experiments or models table - Right click the experiment or model **>** **Restore**.
+ * Experiments or models table - Right-click the experiment or model **>** **Restore**.
* Info panel or full screen details view - Click
(menu) **>** **Restore from Archive**.
diff --git a/docs/webapp/webapp_exp_reproducing.md b/docs/webapp/webapp_exp_reproducing.md
index d6f7fef2..79057774 100644
--- a/docs/webapp/webapp_exp_reproducing.md
+++ b/docs/webapp/webapp_exp_reproducing.md
@@ -33,7 +33,7 @@ Experiments can also be modified and then executed remotely, see [Tuning Experim
The experiment's status becomes *Draft*.
-1. Enqueue the experiment for execution. Right click the experiment **>** **Enqueue** **>** Select a queue **>** **ENQUEUE**.
+1. Enqueue the experiment for execution. Right-click the experiment **>** **Enqueue** **>** Select a queue **>** **ENQUEUE**.
The experiment's status becomes *Pending*. When a worker fetches the Task (experiment), the status becomes *Running*.
The experiment can now be tracked and its results visualized.
\ No newline at end of file
diff --git a/docs/webapp/webapp_exp_table.md b/docs/webapp/webapp_exp_table.md
index 55bde3fc..9449d8be 100644
--- a/docs/webapp/webapp_exp_table.md
+++ b/docs/webapp/webapp_exp_table.md
@@ -137,7 +137,7 @@ Access these actions with the context menu in any of the following ways:
| Action | Description | States Valid for the Action | State Transition |
|---|---|---|---|
-| Details | Open the experiment's [info panel](webapp_exp_track_visual.md#info-panel) (keeps the experiments list in view). Can also be accessed by double clicking an experiment in the experiments table. | Any state | None |
+| Details | Open the experiment's [info panel](webapp_exp_track_visual.md#info-panel) (keeps the experiments list in view). Can also be accessed by double-clicking an experiment in the experiments table. | Any state | None |
| View Full Screen | View experiment details in [full screen](webapp_exp_track_visual.md#full-screen-details-view). | Any state | None |
| Manage Queue | If an experiment is *Pending* in a queue, view the utilization of that queue, manage that queue (remove experiments and change the order of experiments), and view information about the worker(s) listening to the queue. See the [Workers and Queues](webapp_workers_queues.md) page. | *Enqueued* | None |
| View Worker | If an experiment is *Running*, view resource utilization, worker details, and queues to which a worker is listening. | *Running* | None |
diff --git a/docs/webapp/webapp_exp_tuning.md b/docs/webapp/webapp_exp_tuning.md
index cbeaa632..df3979a9 100644
--- a/docs/webapp/webapp_exp_tuning.md
+++ b/docs/webapp/webapp_exp_tuning.md
@@ -26,7 +26,7 @@ Tune experiments and edit an experiment's execution details, then execute the tu
1. Edit the experiment. See [modifying experiments](#modifying-experiments).
-1. Enqueue the experiment for execution. Right click the experiment **>** **Enqueue** **>** Select a queue **>**
+1. Enqueue the experiment for execution. Right-click the experiment **>** **Enqueue** **>** Select a queue **>**
**ENQUEUE**.
The experiment's status becomes *Pending*. When the worker assigned to the queue fetches the Task (experiment), the