--docker --target new_docker
If the container will not make use of a GPU, add the `--cpu-only` flag
:::
-This will create a container with the specified task’s execution environment in the `--target` folder.
+This will create a container with the specified task's execution environment in the `--target` folder.
When the Docker build completes, the console output shows:
```console
@@ -76,7 +76,7 @@ Make use of the container you've just built by having a ClearML agent make use o
:::
This agent will pull the enqueued task and run it using the `new_docker` image to create the execution environment.
- In the task’s **CONSOLE** tab, one of the first logs should be:
+ In the task's **CONSOLE** tab, one of the first logs should be:
```console
Executing: ['docker', 'run', ..., 'CLEARML_DOCKER_IMAGE=new_docker', ...].
diff --git a/docs/guides/frameworks/autokeras/autokeras_imdb_example.md b/docs/guides/frameworks/autokeras/autokeras_imdb_example.md
index 0cb4ab61..c0b6592a 100644
--- a/docs/guides/frameworks/autokeras/autokeras_imdb_example.md
+++ b/docs/guides/frameworks/autokeras/autokeras_imdb_example.md
@@ -32,11 +32,11 @@ Text printed to the console for training progress, as well as all other console
## Artifacts
-Models created by the experiment appear in the experiment’s **ARTIFACTS** tab.
+Models created by the experiment appear in the experiment's **ARTIFACTS** tab.

-Clicking on the model's name takes you to the [model’s page](../../../webapp/webapp_model_viewing.md), where you can view
-the model’s details and access the model.
+Clicking on the model's name takes you to the [model's page](../../../webapp/webapp_model_viewing.md), where you can view
+the model's details and access the model.

\ No newline at end of file
diff --git a/docs/guides/frameworks/catboost/catboost.md b/docs/guides/frameworks/catboost/catboost.md
index 05c26f84..d9f0f3f2 100644
--- a/docs/guides/frameworks/catboost/catboost.md
+++ b/docs/guides/frameworks/catboost/catboost.md
@@ -29,12 +29,12 @@ Text printed to the console for training progress, as well as all other console

## Artifacts
-Models created by the experiment appear in the experiment’s **ARTIFACTS** tab. ClearML automatically logs and tracks
+Models created by the experiment appear in the experiment's **ARTIFACTS** tab. ClearML automatically logs and tracks
models created using CatBoost.

-Clicking on the model name takes you to the [model’s page](../../../webapp/webapp_model_viewing.md), where you can view
-the model’s details and access the model.
+Clicking on the model name takes you to the [model's page](../../../webapp/webapp_model_viewing.md), where you can view
+the model's details and access the model.

diff --git a/docs/guides/frameworks/lightgbm/lightgbm_example.md b/docs/guides/frameworks/lightgbm/lightgbm_example.md
index 11c0cb1f..f25abd11 100644
--- a/docs/guides/frameworks/lightgbm/lightgbm_example.md
+++ b/docs/guides/frameworks/lightgbm/lightgbm_example.md
@@ -25,7 +25,7 @@ ClearML automatically logs the configurations applied to LightGBM. They appear i
## Artifacts
-Models created by the experiment appear in the experiment’s **ARTIFACTS** tab. ClearML automatically logs and tracks
+Models created by the experiment appear in the experiment's **ARTIFACTS** tab. ClearML automatically logs and tracks
models and any snapshots created using LightGBM.

diff --git a/docs/guides/frameworks/megengine/megengine_mnist.md b/docs/guides/frameworks/megengine/megengine_mnist.md
index 200d8ba3..6c3493fa 100644
--- a/docs/guides/frameworks/megengine/megengine_mnist.md
+++ b/docs/guides/frameworks/megengine/megengine_mnist.md
@@ -49,6 +49,6 @@ The model info panel contains the model details, including:
## Console
-All console output during the script’s execution appears in the experiment’s **CONSOLE** page.
+All console output during the script's execution appears in the experiment's **CONSOLE** page.

diff --git a/docs/guides/frameworks/pytorch/model_updating.md b/docs/guides/frameworks/pytorch/model_updating.md
index cbc636a4..fcb97610 100644
--- a/docs/guides/frameworks/pytorch/model_updating.md
+++ b/docs/guides/frameworks/pytorch/model_updating.md
@@ -33,7 +33,7 @@ output_model = OutputModel(task=task)
## Label Enumeration
The label enumeration dictionary is logged using the [`Task.connect_label_enumeration`](../../../references/sdk/task.md#connect_label_enumeration)
-method which will update the task’s resulting model information. The current running task is accessed using the
+method which will update the task's resulting model information. The current running task is accessed using the
[`Task.current_task`](../../../references/sdk/task.md#taskcurrent_task) class method.
```python
@@ -44,7 +44,7 @@ Task.current_task().connect_label_enumeration(enumeration)
```
:::note Directly Setting Model Enumeration
-You can set a model’s label enumeration directly using the [`OutputModel.update_labels`](../../../references/sdk/model_outputmodel.md#update_labels)
+You can set a model's label enumeration directly using the [`OutputModel.update_labels`](../../../references/sdk/model_outputmodel.md#update_labels)
-method
+method.
:::
@@ -81,20 +81,20 @@ if CONDITION:
```
## WebApp
-The model appears in the task’s **ARTIFACTS** tab.
+The model appears in the task's **ARTIFACTS** tab.

-Clicking on the model name takes you to the [model’s page](../../../webapp/webapp_model_viewing.md), where you can view the
-model’s details and access the model.
+Clicking on the model name takes you to the [model's page](../../../webapp/webapp_model_viewing.md), where you can view the
+model's details and access the model.

-The model’s **NETWORK** tab displays its configuration.
+The model's **NETWORK** tab displays its configuration.

-The model’s **LABELS** tab displays its label enumeration.
+The model's **LABELS** tab displays its label enumeration.

diff --git a/docs/guides/frameworks/pytorch/notebooks/audio/audio_preprocessing_example.md b/docs/guides/frameworks/pytorch/notebooks/audio/audio_preprocessing_example.md
index 8d7b264d..02bab176 100644
--- a/docs/guides/frameworks/pytorch/notebooks/audio/audio_preprocessing_example.md
+++ b/docs/guides/frameworks/pytorch/notebooks/audio/audio_preprocessing_example.md
@@ -17,7 +17,7 @@ ClearML automatically logs the audio samples which the example reports by callin
### Audio Samples
-You can play the audio samples by double-clicking the audio thumbnail.
+You can play the audio samples by clicking the audio thumbnail.

diff --git a/docs/guides/frameworks/pytorch/pytorch_mnist.md b/docs/guides/frameworks/pytorch/pytorch_mnist.md
index b82089fd..e65a1b71 100644
--- a/docs/guides/frameworks/pytorch/pytorch_mnist.md
+++ b/docs/guides/frameworks/pytorch/pytorch_mnist.md
@@ -52,12 +52,12 @@ Text printed to the console for training progress, as well as all other console
## Artifacts
-Models created by the experiment appear in the experiment’s **ARTIFACTS** tab. ClearML automatically logs and tracks models
+Models created by the experiment appear in the experiment's **ARTIFACTS** tab. ClearML automatically logs and tracks models
and any snapshots created using PyTorch.

-Clicking on the model name takes you to the [model’s page](../../../webapp/webapp_model_viewing.md), where you can view
-the model’s details and access the model.
+Clicking on the model name takes you to the [model's page](../../../webapp/webapp_model_viewing.md), where you can view
+the model's details and access the model.

\ No newline at end of file
diff --git a/docs/guides/frameworks/pytorch/pytorch_tensorboard.md b/docs/guides/frameworks/pytorch/pytorch_tensorboard.md
index b71b7cff..8f270531 100644
--- a/docs/guides/frameworks/pytorch/pytorch_tensorboard.md
+++ b/docs/guides/frameworks/pytorch/pytorch_tensorboard.md
@@ -40,12 +40,12 @@ Text printed to the console for training progress, as well as all other console
## Artifacts
-Models created by the experiment appear in the experiment’s **ARTIFACTS** tab. ClearML automatically logs and tracks
+Models created by the experiment appear in the experiment's **ARTIFACTS** tab. ClearML automatically logs and tracks
models and any snapshots created using PyTorch.

-Clicking on a model's name takes you to the [model’s page](../../../webapp/webapp_model_viewing.md), where you can view
-the model’s details and access the model.
+Clicking on a model's name takes you to the [model's page](../../../webapp/webapp_model_viewing.md), where you can view
+the model's details and access the model.

\ No newline at end of file
diff --git a/docs/guides/frameworks/pytorch/pytorch_tensorboardx.md b/docs/guides/frameworks/pytorch/pytorch_tensorboardx.md
index bb469bc1..828f6723 100644
--- a/docs/guides/frameworks/pytorch/pytorch_tensorboardx.md
+++ b/docs/guides/frameworks/pytorch/pytorch_tensorboardx.md
@@ -35,12 +35,12 @@ Text printed to the console for training progress, as well as all other console
## Artifacts
-Models created by the experiment appear in the experiment’s **ARTIFACTS** tab. ClearML automatically logs and tracks
+Models created by the experiment appear in the experiment's **ARTIFACTS** tab. ClearML automatically logs and tracks
models and any snapshots created using PyTorch.

-Clicking on the model name takes you to the [model’s page](../../../webapp/webapp_model_viewing.md), where you can view
-the model’s details and access the model.
+Clicking on the model name takes you to the [model's page](../../../webapp/webapp_model_viewing.md), where you can view
+the model's details and access the model.

diff --git a/docs/guides/frameworks/pytorch_lightning/pytorch_lightning_example.md b/docs/guides/frameworks/pytorch_lightning/pytorch_lightning_example.md
index ff851dc3..e13193a6 100644
--- a/docs/guides/frameworks/pytorch_lightning/pytorch_lightning_example.md
+++ b/docs/guides/frameworks/pytorch_lightning/pytorch_lightning_example.md
@@ -17,7 +17,7 @@ The test loss and validation loss plots appear in the experiment's page in the C
Resource utilization plots, which are titled **:monitor: machine**, also appear in the **SCALARS** tab. All of these
plots are automatically captured by ClearML.
-
+
## Hyperparameters
@@ -29,12 +29,12 @@ ClearML automatically logs command line options defined with argparse and Tensor
## Artifacts
-Models created by the experiment appear in the experiment’s **ARTIFACTS** tab.
+Models created by the experiment appear in the experiment's **ARTIFACTS** tab.

-Clicking on a model name takes you to the [model’s page](../../../webapp/webapp_model_viewing.md), where you can view
-the model’s details and access the model.
+Clicking on a model name takes you to the [model's page](../../../webapp/webapp_model_viewing.md), where you can view
+the model's details and access the model.
## Console
diff --git a/docs/guides/frameworks/scikit-learn/sklearn_joblib_example.md b/docs/guides/frameworks/scikit-learn/sklearn_joblib_example.md
index dd88ec6f..104f4c31 100644
--- a/docs/guides/frameworks/scikit-learn/sklearn_joblib_example.md
+++ b/docs/guides/frameworks/scikit-learn/sklearn_joblib_example.md
@@ -16,12 +16,12 @@ in the ClearML web UI, under **PLOTS**.
## Artifacts
-Models created by the experiment appear in the experiment’s **ARTIFACTS** tab.
+Models created by the experiment appear in the experiment's **ARTIFACTS** tab.

-Clicking on the model name takes you to the [model’s page](../../../webapp/webapp_model_viewing.md), where you can
-view the model’s details and access the model.
+Clicking on the model name takes you to the [model's page](../../../webapp/webapp_model_viewing.md), where you can
+view the model's details and access the model.

\ No newline at end of file
diff --git a/docs/guides/frameworks/tensorboardx/tensorboardx.md b/docs/guides/frameworks/tensorboardx/tensorboardx.md
index a84091da..c3a030f1 100644
--- a/docs/guides/frameworks/tensorboardx/tensorboardx.md
+++ b/docs/guides/frameworks/tensorboardx/tensorboardx.md
@@ -33,13 +33,13 @@ Text printed to the console for training progress, as well as all other console
## Artifacts
-Models created by the experiment appear in the experiment’s **ARTIFACTS** tab. ClearML automatically logs and tracks
+Models created by the experiment appear in the experiment's **ARTIFACTS** tab. ClearML automatically logs and tracks
models and any snapshots created using PyTorch.

-Clicking on the model’s name takes you to the [model’s page](../../../webapp/webapp_model_viewing.md), where you can
-view the model’s details and access the model.
+Clicking on the model's name takes you to the [model's page](../../../webapp/webapp_model_viewing.md), where you can
+view the model's details and access the model.

diff --git a/docs/guides/frameworks/tensorflow/tensorflow_mnist.md b/docs/guides/frameworks/tensorflow/tensorflow_mnist.md
index b13aa133..82a052a1 100644
--- a/docs/guides/frameworks/tensorflow/tensorflow_mnist.md
+++ b/docs/guides/frameworks/tensorflow/tensorflow_mnist.md
@@ -30,13 +30,13 @@ All console output appears in **CONSOLE**.
## Artifacts
-Models created by the experiment appear in the experiment’s **ARTIFACTS** tab. ClearML automatically logs and tracks
+Models created by the experiment appear in the experiment's **ARTIFACTS** tab. ClearML automatically logs and tracks
models and any snapshots created using TensorFlow.

-Clicking on a model’s name takes you to the [model’s page](../../../webapp/webapp_model_viewing.md), where you can
-view the model’s details and access the model.
+Clicking on a model's name takes you to the [model's page](../../../webapp/webapp_model_viewing.md), where you can
+view the model's details and access the model.

\ No newline at end of file
diff --git a/docs/guides/frameworks/xgboost/xgboost_metrics.md b/docs/guides/frameworks/xgboost/xgboost_metrics.md
index cf5b8b00..b061a460 100644
--- a/docs/guides/frameworks/xgboost/xgboost_metrics.md
+++ b/docs/guides/frameworks/xgboost/xgboost_metrics.md
@@ -29,6 +29,6 @@ To view the model details, click the model name in the **ARTIFACTS** page, which
## Console
-All console output during the script’s execution appears in the experiment’s **CONSOLE** page.
+All console output during the script's execution appears in the experiment's **CONSOLE** page.

\ No newline at end of file
diff --git a/docs/guides/frameworks/xgboost/xgboost_sample.md b/docs/guides/frameworks/xgboost/xgboost_sample.md
index 99aebaf5..cb16108d 100644
--- a/docs/guides/frameworks/xgboost/xgboost_sample.md
+++ b/docs/guides/frameworks/xgboost/xgboost_sample.md
@@ -15,7 +15,7 @@ classification dataset using XGBoost
## Plots
-The feature importance plot and tree plot appear in the project's page in the **ClearML web UI**, under
+The feature importance plot and tree plot appear in the experiment's page in the **ClearML web UI**, under
**PLOTS**.

@@ -31,12 +31,12 @@ All other console output appear in **CONSOLE**.
## Artifacts
-Models created by the experiment appear in the experiment’s **ARTIFACTS** tab. ClearML automatically logs and tracks
+Models created by the experiment appear in the experiment's **ARTIFACTS** tab. ClearML automatically logs and tracks
models and any snapshots created using XGBoost.

-Clicking on the model's name takes you to the [model’s page](../../../webapp/webapp_model_viewing.md), where you can
-view the model’s details and access the model.
+Clicking on the model's name takes you to the [model's page](../../../webapp/webapp_model_viewing.md), where you can
+view the model's details and access the model.

\ No newline at end of file
diff --git a/docs/guides/pipeline/pipeline_controller.md b/docs/guides/pipeline/pipeline_controller.md
index 958f95da..b16b653f 100644
--- a/docs/guides/pipeline/pipeline_controller.md
+++ b/docs/guides/pipeline/pipeline_controller.md
@@ -50,7 +50,7 @@ The sections below describe in more detail what happens in the controller task a
1. Build the pipeline (see [PipelineController.add_step](../../references/sdk/automation_controller_pipelinecontroller.md#add_step)
method for complete reference):
- The pipeline’s [first step](#step-1---downloading-the-datae) uses the pre-existing task
+ The pipeline's [first step](#step-1---downloading-the-data) uses the pre-existing task
`pipeline step 1 dataset artifact` in the `examples` project. The step uploads local data and stores it as an artifact.
```python
@@ -62,11 +62,11 @@ The sections below describe in more detail what happens in the controller task a
```
The [second step](#step-2---processing-the-data) uses the pre-existing task `pipeline step 2 process dataset` in
- the `examples` project. The second step’s dependency upon the first step’s completion is designated by setting it as
+ the `examples` project. The second step's dependency upon the first step's completion is designated by setting it as
its parent.
Custom configuration values specific to this step execution are defined through the `parameter_override` parameter,
- where the first step’s artifact is fed into the second step.
+ where the first step's artifact is fed into the second step.
Special pre-execution and post-execution logic is added for this step through the use of `pre_execute_callback`
and `post_execute_callback` respectively.
@@ -87,7 +87,7 @@ The sections below describe in more detail what happens in the controller task a
```
The [third step](#step-3---training-the-network) uses the pre-existing task `pipeline step 3 train model` in the
- `examples` projects. The step uses Step 2’s artifacts.
+ `examples` project. The step uses Step 2's artifacts.
1. Run the pipeline.
@@ -99,7 +99,7 @@ The sections below describe in more detail what happens in the controller task a
## Step 1 - Downloading the Data
-The pipeline’s first step ([step1_dataset_artifact.py](https://github.com/allegroai/clearml/blob/master/examples/pipeline/step1_dataset_artifact.py))
+The pipeline's first step ([step1_dataset_artifact.py](https://github.com/allegroai/clearml/blob/master/examples/pipeline/step1_dataset_artifact.py))
does the following:
1. Download data using [`StorageManager.get_local_copy`](../../references/sdk/storage.md#storagemanagerget_local_copy)
@@ -209,7 +209,7 @@ does the following:
## WebApp
-When the experiment is executed, the terminal returns the task ID, and links to the pipeline controller task page and
+When the experiment is executed, the console output displays the task ID, as well as links to the pipeline controller task page and
pipeline page.
```
@@ -218,13 +218,13 @@ ClearML results page: https://app.clear.ml/projects/462f48dba7b441ffb34bddb78371
ClearML pipeline page: https://app.clear.ml/pipelines/462f48dba7b441ffb34bddb783711da7/experiments/bc93610688f242ecbbe70f413ff2cf5f
```
-The pipeline run’s page contains the pipeline’s structure, the execution status of every step, as well as the run’s
+The pipeline run's page contains the pipeline's structure, the execution status of every step, as well as the run's
configuration parameters and output.

-To view a run’s complete information, click **Full details** on the bottom of the **Run Info** panel, which will open
-the pipeline’s [controller task page](../../webapp/webapp_exp_track_visual.md).
+To view a run's complete information, click **Full details** on the bottom of the **Run Info** panel, which will open
+the pipeline's [controller task page](../../webapp/webapp_exp_track_visual.md).
Click a step to see its summary information.
@@ -232,7 +232,7 @@ Click a step to see its summary information.
### Console
-Click **DETAILS** to view a log of the pipeline controller’s console output.
+Click **DETAILS** to view a log of the pipeline controller's console output.

diff --git a/docs/guides/pipeline/pipeline_decorator.md b/docs/guides/pipeline/pipeline_decorator.md
index fb0b4d0d..d8ac0439 100644
--- a/docs/guides/pipeline/pipeline_decorator.md
+++ b/docs/guides/pipeline/pipeline_decorator.md
@@ -77,7 +77,7 @@ To run the pipeline, call the pipeline controller function.
## WebApp
-When the experiment is executed, the terminal returns the task ID, and links to the pipeline controller task page and pipeline page.
+When the experiment is executed, the console output displays the task ID, as well as links to the pipeline controller task page and pipeline page.
```
ClearML Task: created new task id=bc93610688f242ecbbe70f413ff2cf5f
@@ -85,13 +85,13 @@ ClearML results page: https://app.clear.ml/projects/462f48dba7b441ffb34bddb78371
ClearML pipeline page: https://app.clear.ml/pipelines/462f48dba7b441ffb34bddb783711da7/experiments/bc93610688f242ecbbe70f413ff2cf5f
```
-The pipeline run’s page contains the pipeline’s structure, the execution status of every step, as well as the run’s
+The pipeline run's page contains the pipeline's structure, the execution status of every step, as well as the run's
configuration parameters and output.

-To view a run’s complete information, click **Full details** on the bottom of the **Run Info** panel, which will open the
-pipeline’s [controller task page](../../webapp/webapp_exp_track_visual.md).
+To view a run's complete information, click **Full details** on the bottom of the **Run Info** panel, which will open the
+pipeline's [controller task page](../../webapp/webapp_exp_track_visual.md).
Click a step to see an overview of its details.
@@ -99,11 +99,11 @@ Click a step to see an overview of its details.
## Console and Code
-Click **DETAILS** to view a log of the pipeline controller’s console output.
+Click **DETAILS** to view a log of the pipeline controller's console output.

-Click on a step to view its console output. You can also view the selected step’s code by clicking **CODE**
+Click on a step to view its console output. You can also view the selected step's code by clicking **CODE**
on top of the console log.

diff --git a/docs/guides/pipeline/pipeline_functions.md b/docs/guides/pipeline/pipeline_functions.md
index 14bcdc08..30492351 100644
--- a/docs/guides/pipeline/pipeline_functions.md
+++ b/docs/guides/pipeline/pipeline_functions.md
@@ -66,7 +66,7 @@ logged as required packages for the pipeline execution step.
)
```
- The second step in the pipeline uses the `step_two` function and uses as its input the first step’s output.This reference
+ The second step in the pipeline uses the `step_two` function and takes as its input the first step's output. This reference
implicitly defines the pipeline structure, making `step_one` the parent step of `step_two`.
Its return object will be stored as an artifact under the name `processed_data`.
@@ -82,7 +82,7 @@ logged as required packages for the pipeline execution step.
)
```
- The third step in the pipeline uses the `step_three` function and uses as its input the second step’s output. This
+ The third step in the pipeline uses the `step_three` function and takes as its input the second step's output. This
reference implicitly defines the pipeline structure, making `step_two` the parent step of `step_three`.
Its return object will be stored as an artifact under the name `model`:
@@ -106,7 +106,7 @@ logged as required packages for the pipeline execution step.
The pipeline will be launched remotely, through the `services` queue, unless otherwise specified.
## WebApp
-When the experiment is executed, the terminal returns the task ID, and links to the pipeline controller task page and pipeline page.
+When the experiment is executed, the console output displays the task ID, as well as links to the pipeline controller task page and pipeline page.
```
ClearML Task: created new task id=bc93610688f242ecbbe70f413ff2cf5f
@@ -114,13 +114,13 @@ ClearML results page: https://app.clear.ml/projects/462f48dba7b441ffb34bddb78371
ClearML pipeline page: https://app.clear.ml/pipelines/462f48dba7b441ffb34bddb783711da7/experiments/bc93610688f242ecbbe70f413ff2cf5f
```
-The pipeline run’s page contains the pipeline’s structure, the execution status of every step, as well as the run’s
+The pipeline run's page contains the pipeline's structure, the execution status of every step, as well as the run's
configuration parameters and output.

-To view a run’s complete information, click **Full details** on the bottom of the **Run Info** panel, which will open the
-pipeline’s [controller task page](../../webapp/webapp_exp_track_visual.md).
+To view a run's complete information, click **Full details** on the bottom of the **Run Info** panel, which will open the
+pipeline's [controller task page](../../webapp/webapp_exp_track_visual.md).
Click a step to see an overview of its details.
@@ -128,11 +128,11 @@ Click a step to see an overview of its details.
## Console and Code
-Click **DETAILS** to view a log of the pipeline controller’s console output.
+Click **DETAILS** to view a log of the pipeline controller's console output.

-Click on a step to view its console output. You can also view the selected step’s code by clicking **CODE**
+Click on a step to view its console output. You can also view the selected step's code by clicking **CODE**
on top of the console log.

diff --git a/docs/guides/reporting/image_reporting.md b/docs/guides/reporting/image_reporting.md
index f9a89b0e..b8bb9b3b 100644
--- a/docs/guides/reporting/image_reporting.md
+++ b/docs/guides/reporting/image_reporting.md
@@ -52,6 +52,6 @@ ClearML reports these images as debug samples in the **ClearML Web UI**, under t

-Double-click a thumbnail, and the image viewer opens.
+Click a thumbnail, and the image viewer opens.

\ No newline at end of file
diff --git a/docs/guides/reporting/manual_matplotlib_reporting.md b/docs/guides/reporting/manual_matplotlib_reporting.md
index cc2e8034..1b1b0546 100644
--- a/docs/guides/reporting/manual_matplotlib_reporting.md
+++ b/docs/guides/reporting/manual_matplotlib_reporting.md
@@ -8,7 +8,7 @@ example demonstrates using ClearML to log plots and images generated by Matplotl
## Plots
The Matplotlib and Seaborn plots that are reported using the [Logger.report_matplotlib_figure](../../references/sdk/logger.md#report_matplotlib_figure)
-method appear in the experiment’s **PLOTS**.
+method appear in the experiment's **PLOTS**.

@@ -17,6 +17,6 @@ method appear in the experiment’s **PLOTS**.
## Debug Samples
Matplotlib figures can be logged as images by using the [Logger.report_matplotlib_figure](../../references/sdk/logger.md#report_matplotlib_figure)
-method, and passing `report_image=True`. The images are stored in the experiment’s **DEBUG SAMPLES**.
+method, and passing `report_image=True`. The images are stored in the experiment's **DEBUG SAMPLES**.

\ No newline at end of file
diff --git a/docs/guides/reporting/media_reporting.md b/docs/guides/reporting/media_reporting.md
index 3191af32..482f50f3 100644
--- a/docs/guides/reporting/media_reporting.md
+++ b/docs/guides/reporting/media_reporting.md
@@ -38,7 +38,7 @@ Logger.current_logger().report_media(
)
```
-The reported audio can be viewed in the **DEBUG SAMPLES** tab. Double-click a thumbnail, and the audio player opens.
+The reported audio can be viewed in the **DEBUG SAMPLES** tab. Click a thumbnail, and the audio player opens.

@@ -55,6 +55,6 @@ Logger.current_logger().report_media(
)
```
-The reported video can be viewed in the **DEBUG SAMPLES** tab. Double-click a thumbnail, and the video player opens.
+The reported video can be viewed in the **DEBUG SAMPLES** tab. Click a thumbnail, and the video player opens.

diff --git a/docs/guides/reporting/model_config.md b/docs/guides/reporting/model_config.md
index d2590b5f..bbb9f2b7 100644
--- a/docs/guides/reporting/model_config.md
+++ b/docs/guides/reporting/model_config.md
@@ -25,7 +25,7 @@ output_model = OutputModel(task=task)
## Label Enumeration
-Set the model’s label enumeration using the [`OutputModel.update_labels`](../../references/sdk/model_outputmodel.md#update_labels)
+Set the model's label enumeration using the [`OutputModel.update_labels`](../../references/sdk/model_outputmodel.md#update_labels)
method.
```python
@@ -43,14 +43,14 @@ output_model.update_weights(register_uri=model_url)
```
## WebApp
-The model appears in the task’s **ARTIFACTS** tab.
+The model appears in the task's **ARTIFACTS** tab.

-Clicking on the model name takes you to the [model’s page](../../webapp/webapp_model_viewing.md), where you can view the
-model’s details and access the model.
+Clicking on the model name takes you to the [model's page](../../webapp/webapp_model_viewing.md), where you can view the
+model's details and access the model.
-The model’s **LABELS** tab displays its label enumeration.
+The model's **LABELS** tab displays its label enumeration.

diff --git a/docs/guides/reporting/using_artifacts.md b/docs/guides/reporting/using_artifacts.md
index 4d17d94f..f509f514 100644
--- a/docs/guides/reporting/using_artifacts.md
+++ b/docs/guides/reporting/using_artifacts.md
@@ -6,13 +6,13 @@ The [using_artifacts_example](https://github.com/allegroai/clearml/blob/master/e
script demonstrates uploading a data file to a task as an artifact and then accessing and utilizing the artifact in a different task.
When the script runs it creates two tasks, `create artifact` and `use artifact from other task`, both of which are associated
-with the `examples` project. The first task creates and uploads the artifact, and the second task accesses the first task’s
+with the `examples` project. The first task creates and uploads the artifact, and the second task accesses the first task's
artifact and utilizes it.
## Task 1: Uploading an Artifact
The first task uploads a data file as an artifact using the [`Task.upload_artifact`](../../references/sdk/task.md#upload_artifact)
-method, inputting the artifact’s name and the location of the file.
+method, inputting the artifact's name and the location of the file.
```python
task1.upload_artifact(name='data file', artifact_object='data_samples/sample.json')
@@ -21,7 +21,7 @@ task1.upload_artifact(name='data file', artifact_object='data_samples/sample.jso
The task is then closed, using the [`Task.close`](../../references/sdk/task.md#close) method, so another task can be
initialized in the same script.
-Artifact details (location and size) can be viewed in ClearML’s **web UI > experiment details > ARTIFACTS tab > OTHER section**.
+Artifact details (location and size) can be viewed in ClearML's **web UI > experiment details > ARTIFACTS tab > OTHER section**.

diff --git a/docs/guides/services/aws_autoscaler.md b/docs/guides/services/aws_autoscaler.md
index 0715eada..e4340226 100644
--- a/docs/guides/services/aws_autoscaler.md
+++ b/docs/guides/services/aws_autoscaler.md
@@ -151,9 +151,9 @@ Make sure a `clearml-agent` is assigned to that queue.
## WebApp
### Configuration
-The values configured through the wizard are stored in the task’s hyperparameters and configuration objects by using the
+The values configured through the wizard are stored in the task's hyperparameters and configuration objects by using the
[`Task.connect`](../../references/sdk/task.md#connect) and [`Task.set_configuration_object`](../../references/sdk/task.md#set_configuration_object)
-methods respectively. They can be viewed in the WebApp, in the task’s **CONFIGURATION** page under **HYPERPARAMETERS** and **CONFIGURATION OBJECTS > General**.
+methods respectively. They can be viewed in the WebApp, in the task's **CONFIGURATION** page under **HYPERPARAMETERS** and **CONFIGURATION OBJECTS > General**.
-ClearML automatically logs command line arguments defined with argparse. View them in the experiments **CONFIGURATION**
+ClearML automatically logs command line arguments defined with argparse. View them in the experiment's **CONFIGURATION**
page under **HYPERPARAMETERS > General**.
@@ -161,11 +161,11 @@ page under **HYPERPARAMETERS > General**.

The task can be reused to launch another autoscaler instance: clone the task, then edit its parameters for the instance
-types and budget configuration, and enqueue the task for execution (you’ll typically want to use a ClearML Agent running
+types and budget configuration, and enqueue the task for execution (you'll typically want to use a ClearML Agent running
in [services mode](../../clearml_agent.md#services-mode) for such service tasks).
### Console
-All other console output appears in the experiment’s **CONSOLE**.
+All other console output appears in the experiment's **CONSOLE**.

\ No newline at end of file
diff --git a/docs/guides/services/cleanup_service.md b/docs/guides/services/cleanup_service.md
index 7195ea3c..1f7aaa0f 100644
--- a/docs/guides/services/cleanup_service.md
+++ b/docs/guides/services/cleanup_service.md
@@ -6,7 +6,7 @@ The [cleanup service](https://github.com/allegroai/clearml/blob/master/examples/
demonstrates how to use the `clearml.backend_api.session.client.APIClient` class to implement a service that deletes old
archived tasks and their associated files: model checkpoints, other artifacts, and debug samples.
-Modify the cleanup service’s parameters to specify which archived experiments to delete and when to delete them.
+Modify the cleanup service's parameters to specify which archived experiments to delete and when to delete them.
### Running the Cleanup Service
@@ -52,14 +52,14 @@ an `APIClient` object that establishes a session with the ClearML Server, and ac
* [`Task.delete`](../../references/sdk/task.md#delete) - Delete a Task.
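As a rough illustration of the `APIClient` flow the service builds on, here is a hedged sketch of querying for stale archived tasks. The query fields mirror the cleanup example's general approach, but treat them as assumptions rather than a verified schema:

```python
from datetime import datetime, timedelta

def find_stale_archived_tasks(client, age_days=30):
    """Return IDs of archived tasks whose status hasn't changed in `age_days` days.

    `client` is expected to behave like
    clearml.backend_api.session.client.APIClient; the returned IDs would then
    be passed to a deletion step (e.g. Task.delete) by the service loop.
    """
    threshold = datetime.utcnow() - timedelta(days=age_days)
    tasks = client.tasks.get_all(
        system_tags=["archived"],  # only archived tasks are deletion candidates
        status_changed=["<{}".format(threshold.strftime("%Y-%m-%dT%H:%M:%S"))],
        only_fields=["id"],  # keep the response small
    )
    return [t.id for t in tasks]
```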
## Configuration
-The experiment’s hyperparameters are explicitly logged to ClearML using the [`Task.connect`](../../references/sdk/task.md#connect)
-method. View them in the WebApp, in the experiment’s **CONFIGURATION** page under **HYPERPARAMETERS > General**.
+The experiment's hyperparameters are explicitly logged to ClearML using the [`Task.connect`](../../references/sdk/task.md#connect)
+method. View them in the WebApp, in the experiment's **CONFIGURATION** page under **HYPERPARAMETERS > General**.
The task can be reused. Clone the task, edit its parameters, and enqueue the task to run in ClearML Agent [services mode](../../clearml_agent.md#services-mode).

## Console
-All console output appears in the experiment’s **CONSOLE**.
+All console output appears in the experiment's **CONSOLE**.

diff --git a/docs/guides/services/slack_alerts.md b/docs/guides/services/slack_alerts.md
index af171925..bfec4007 100644
--- a/docs/guides/services/slack_alerts.md
+++ b/docs/guides/services/slack_alerts.md
@@ -79,17 +79,17 @@ The script supports the following additional command line options:
## Configuration
-ClearML automatically logs command line options defined with argparse. They appear in the experiment’s **CONFIGURATION**
+ClearML automatically logs command line options defined with argparse. They appear in the experiment's **CONFIGURATION**
page under **HYPERPARAMETERS > Args**.
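Nothing beyond standard `argparse` is needed for this capture — once `Task.init()` runs, ClearML picks the parsed options up automatically. A minimal sketch with illustrative option names (the script's real flags may differ):

```python
import argparse

def parse_monitor_args(argv=None):
    """Options defined here are auto-logged by ClearML after Task.init() runs,
    and appear in the WebApp under HYPERPARAMETERS > Args.
    Option names below are illustrative, not the script's verified flags."""
    parser = argparse.ArgumentParser(description="Slack alerts monitor (sketch)")
    parser.add_argument("--channel", default="alerts",
                        help="Slack channel to post alerts to")
    parser.add_argument("--min_num_iterations", type=int, default=0,
                        help="ignore tasks that fail before this iteration count")
    return parser.parse_args(argv)
```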

The task can be reused to launch another monitor instance: clone the task, edit its parameters, and enqueue the task for
-execution (you’ll typically want to use a ClearML Agent running in [services mode](../../clearml_agent.md#services-mode)
+execution (you'll typically want to use a ClearML Agent running in [services mode](../../clearml_agent.md#services-mode)
for such service tasks).
## Console
-All console output appears in the experiment’s **CONSOLE** page.
+All console output appears in the experiment's **CONSOLE** page.
## Additional Information about slack_alerts.py
diff --git a/docs/guides/set_offline.md b/docs/guides/set_offline.md
index 9c01d5cc..f011af03 100644
--- a/docs/guides/set_offline.md
+++ b/docs/guides/set_offline.md
@@ -78,7 +78,7 @@ Upload the session's execution data that the Task captured offline to the ClearM
```
You can also use the offline task to update the execution of an existing previously executed task by providing the
- previously executed task’s ID. To avoid overwriting metrics, you can specify the initial iteration offset with
+ previously executed task's ID. To avoid overwriting metrics, you can specify the initial iteration offset with
`iteration_offset`.
```python
diff --git a/docs/integrations/matplotlib.md b/docs/integrations/matplotlib.md
index 7b7f1069..cea1c21f 100644
--- a/docs/integrations/matplotlib.md
+++ b/docs/integrations/matplotlib.md
@@ -50,7 +50,7 @@ Use [`Logger.report_matplotlib_figure()`](../references/sdk/logger.md#report_mat
a matplotlib figure, and specify its title, series name, and iteration number:
-```
+```python
logger = task.get_logger()
area = (40 * np.random.rand(N))**2
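Put together, the reporting call looks roughly like this — a self-contained sketch in the spirit of the truncated snippet, with illustrative title and series names:

```python
def report_scatter(logger, iteration, n=50):
    """Build a scatter figure and hand it to ClearML via report_matplotlib_figure.

    `logger` is the object returned by task.get_logger(); the title and series
    names are illustrative.
    """
    import matplotlib
    matplotlib.use("Agg")  # headless-safe backend for this sketch
    import matplotlib.pyplot as plt
    import numpy as np

    area = (40 * np.random.rand(n)) ** 2  # marker sizes, as in the snippet above
    fig, ax = plt.subplots()
    ax.scatter(np.random.rand(n), np.random.rand(n), s=area, alpha=0.5)
    logger.report_matplotlib_figure(
        title="Scatter example", series="random points",
        iteration=iteration, figure=fig,
    )
    plt.close(fig)  # the figure has been reported; free it
```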
diff --git a/docs/webapp/applications/apps_dashboard.md b/docs/webapp/applications/apps_dashboard.md
index 43addff1..cef86738 100644
--- a/docs/webapp/applications/apps_dashboard.md
+++ b/docs/webapp/applications/apps_dashboard.md
@@ -6,7 +6,7 @@ title: Project Dashboard
The ClearML Project Dashboard App is available under the ClearML Pro plan
:::
-The Project Dashboard Application provides an overview of a project or workspace’s progress. It presents an aggregated
+The Project Dashboard Application provides an overview of a project or workspace's progress. It presents an aggregated
view of task status and a chosen metric over time, as well as project GPU and worker usage. It also supports alerts/warnings
on completed/failed Tasks via Slack integration.
@@ -15,7 +15,7 @@ on completed/failed Tasks via Slack integration.
values from the file, which can be modified before launching the app instance
* **Dashboard Title** - Name of the project dashboard instance, which will appear in the instance list
* **Monitoring** - Select what the app instance should monitor. The options are:
- * Project - Monitor a specific project. You can select an option to also monitor the specified project’s subprojects
+ * Project - Monitor a specific project. You can select an option to also monitor the specified project's subprojects
* Entire workspace - Monitor all projects in your workspace
:::caution
@@ -34,7 +34,7 @@ of the chosen metric over time.
* Alert Iteration Threshold - Minimum number of task iterations to trigger Slack alerts (tasks that fail prior to the threshold will be ignored)
* **Additional options**
* Track manual (non agent-run) experiments as well - Select to include in the dashboard experiments that were not executed by an agent
- * Alert on completed experiments - Select to include completed tasks in alerts: in the dashboard’s Task Alerts section and in Slack Alerts.
+ * Alert on completed experiments - Select to include completed tasks in alerts: in the dashboard's Task Alerts section and in Slack Alerts.
* **Export Configuration** - Export the app instance configuration as a JSON file, which you can later import to create
a new instance with the same configuration.
@@ -48,7 +48,7 @@ Once a project dashboard instance is launched, its dashboard displays the follow
* Experiments Summary - Number of tasks by status over time
* Monitoring - GPU utilization and GPU memory usage
* Metric Monitoring - An aggregated view of the values of a metric over time
-* Project’s Active Workers - Number of workers currently executing experiments in the monitored project
+* Project's Active Workers - Number of workers currently executing experiments in the monitored project
* Workers Table - List of active workers
* Task Alerts
* Failed tasks - Failed experiments and their time of failure summary
diff --git a/docs/webapp/applications/apps_overview.md b/docs/webapp/applications/apps_overview.md
index e74e5e58..53c87162 100644
--- a/docs/webapp/applications/apps_overview.md
+++ b/docs/webapp/applications/apps_overview.md
@@ -6,7 +6,7 @@ title: Overview
ClearML Applications are available under the ClearML Pro plan
:::
-Use ClearML’s GUI Applications to manage ML workloads and automatically run your recurring workflows without any coding.
+Use ClearML's GUI Applications to manage ML workloads and automatically run your recurring workflows without any coding.

@@ -18,23 +18,23 @@ ClearML provides the following applications:
* [**AWS Autoscaler**](apps_aws_autoscaler.md) - Optimize AWS EC2 instance usage according to a defined instance budget
* [**GCP Autoscaler**](apps_gcp_autoscaler.md) - Optimize GCP instance usage according to a defined instance budget
* [**Hyperparameter Optimization**](apps_hpo.md) - Find the parameter values that yield the best performing models
-* **Nvidia Clara** - Train models using Nvidia’s Clara framework
+* **Nvidia Clara** - Train models using Nvidia's Clara framework
* [**Project Dashboard**](apps_dashboard.md) - High-level project monitoring with Slack alerts
* [**Task Scheduler**](apps_task_scheduler.md) - Schedule tasks for one-shot and/or periodic execution at specified times (available under ClearML Enterprise Plan)
* [**Trigger Manager**](apps_trigger_manager.md) - Define tasks to be run when predefined events occur (available under ClearML Enterprise Plan)
## App Pages Layout
-Each application’s page is split into two sections:
+Each application's page is split into two sections:
* App Instance List - Launch new app instances and view previously launched instances. Click on an instance to view its
dashboard. Hover over it to access the [app instance actions](#app-instance-actions).
-* App Instance Dashboard - The main section of the app page: displays the selected app instance’s status and results.
+* App Instance Dashboard - The main section of the app page: displays the selected app instance's status and results.

## Launching an App Instance
1. Choose the desired app
-1. Click the `Launch New` button to open the app’s configuration wizard
+1. Click the `Launch New` button to open the app's configuration wizard
1. Fill in the configuration details
1. **Launch**
diff --git a/docs/webapp/datasets/webapp_dataset_page.md b/docs/webapp/datasets/webapp_dataset_page.md
index a511a3ee..7b5a5d20 100644
--- a/docs/webapp/datasets/webapp_dataset_page.md
+++ b/docs/webapp/datasets/webapp_dataset_page.md
@@ -18,7 +18,7 @@ top-level projects are displayed. Click on a project card to view the project's
Click on a dataset card to navigate to its [Version List](webapp_dataset_viewing.md), where you can view the
dataset versions' lineage and contents.
-Filter the datasets to find the one you’re looking for more easily. These filters can be applied by clicking:
+Filter the datasets to find the one you're looking for more easily. These filters can be applied by clicking:
* My Work - Show only datasets that you created
* Tags - Choose which tags to filter by from a list of tags used in the datasets.
* Filter by multiple tag values using the **ANY** or **ALL** options, which correspond to the logical "AND" and "OR"
@@ -29,7 +29,7 @@ Filter the datasets to find the one you’re looking for more easily. These filt
## Project Cards
-In Project view, project cards display a project’s summarized dataset information:
+In Project view, project cards display a project's summarized dataset information:
@@ -74,7 +74,7 @@ of a dataset card to open its context menu and access dataset actions.
-* **Rename** - Change the dataset’s name
+* **Rename** - Change the dataset's name
* **Add Tag** - Add a label to the dataset to help easily classify groups of datasets.
* **Delete** - Delete the dataset and all of its versions. To delete a dataset, all its versions must first be
archived.
\ No newline at end of file
diff --git a/docs/webapp/datasets/webapp_dataset_viewing.md b/docs/webapp/datasets/webapp_dataset_viewing.md
index 57bcebc7..ef106e12 100644
--- a/docs/webapp/datasets/webapp_dataset_viewing.md
+++ b/docs/webapp/datasets/webapp_dataset_viewing.md
@@ -24,7 +24,7 @@ Each node in the graph represents a dataset version, and shows the following det
* Version size
* Version update time
* Version details button - Hover over the version and click
- to view the version’s [details panel](#version-details-panel)
+ to view the version's [details panel](#version-details-panel)
:::tip archiving versions
You can archive dataset versions so the versions list doesn't get too cluttered. Click **OPEN ARCHIVE** on the top of
@@ -65,7 +65,7 @@ On the right side of the dataset version panel, view the **VERSION INFO** which
-To view a version’s detailed information, click **Full details**, which will open the dataset version’s [task page](../webapp_exp_track_visual.md).
+To view a version's detailed information, click **Full details**, which will open the dataset version's [task page](../webapp_exp_track_visual.md).

@@ -84,7 +84,7 @@ to view the version's details panel. The panel includes three tabs:

-* **CONSOLE** - The dataset version’s console output
+* **CONSOLE** - The dataset version's console output

diff --git a/docs/webapp/pipelines/webapp_pipeline_page.md b/docs/webapp/pipelines/webapp_pipeline_page.md
index f1d6a886..32480b15 100644
--- a/docs/webapp/pipelines/webapp_pipeline_page.md
+++ b/docs/webapp/pipelines/webapp_pipeline_page.md
@@ -11,9 +11,9 @@ view, all pipelines are shown side-by-side. In Project view, pipelines are organ
top-level projects are displayed. Click on a project card to view the project's pipelines.
Click on a pipeline card to navigate to its [Pipeline Runs Table](webapp_pipeline_table.md), where you can view the
-pipeline structure, configuration, and outputs of all the pipeline’s runs, as well as create new runs.
+pipeline structure, configuration, and outputs of all the pipeline's runs, as well as create new runs.
-Filter the pipelines to find the one you’re looking for more easily. These filters can be applied by clicking:
+Filter the pipelines to find the one you're looking for more easily. These filters can be applied by clicking:
* My Work - Show only pipelines that you created
* Tags - Choose which tags to filter by from a list of tags used in the pipelines.
* Filter by multiple tag values using the **ANY** or **ALL** options, which correspond to the logical "AND" and "OR"
@@ -46,7 +46,7 @@ In List view, the pipeline cards display summarized pipeline information:
* Pipeline name
-* Time since the pipeline’s most recent run
+* Time since the pipeline's most recent run
* Run summary - Number of *Running*/*Pending*/*Completed*/*Failed* runs
* Tags
@@ -62,7 +62,7 @@ of a pipeline card to open its context menu and access pipeline actions.
-* **Rename** - Change the pipeline’s name
+* **Rename** - Change the pipeline's name
* **Add Tag** - Add a label to the pipeline to help easily classify groups of pipelines.
* **Delete** - Delete the pipeline: delete all its runs and any models/artifacts produced (a list of remaining artifacts
is returned). To delete a pipeline, all its runs must first be archived.
\ No newline at end of file
diff --git a/docs/webapp/pipelines/webapp_pipeline_table.md b/docs/webapp/pipelines/webapp_pipeline_table.md
index aed9b203..23cafdfa 100644
--- a/docs/webapp/pipelines/webapp_pipeline_table.md
+++ b/docs/webapp/pipelines/webapp_pipeline_table.md
@@ -2,14 +2,14 @@
title: The Pipeline Runs Table
---
-The pipeline runs table is a [customizable](#customizing-the-runs-table) list of the pipeline’s runs. Use it to
-view a run’s details, and manage runs (create, continue, or abort). The runs table's auto-refresh allows users
+The pipeline runs table is a [customizable](#customizing-the-runs-table) list of the pipeline's runs. Use it to
+view a run's details, and manage runs (create, continue, or abort). The runs table's auto-refresh allows users
to continually monitor run progress.
View the runs table in table view or in details view, using the buttons on the top left of the page. Use the table view for a comparative view of your runs according to
-columns of interest. Use the details view to access a selected run’s details, while keeping the pipeline runs list in view.
+columns of interest. Use the details view to access a selected run's details, while keeping the pipeline runs list in view.
Details view can also be opened by double-clicking a specific pipeline run in the table view.
You can archive pipeline runs so the runs table doesn't get too cluttered. Click **OPEN ARCHIVE** on the top of the
@@ -32,7 +32,7 @@ The models table contains the following columns:
| Column | Description | Type |
|---|---|---|
| **RUN** | Pipeline run identifier | String |
-| **VERSION** | The pipeline version number. Corresponds to the [PipelineController](../../references/sdk/automation_controller_pipelinecontroller.md#class-pipelinecontroller)’s and [PipelineDecorator](../../references/sdk/automation_controller_pipelinecontroller.md#class-automationcontrollerpipelinedecorator)’s `version` parameter | Version string |
+| **VERSION** | The pipeline version number. Corresponds to the [PipelineController](../../references/sdk/automation_controller_pipelinecontroller.md#class-pipelinecontroller)'s and [PipelineDecorator](../../references/sdk/automation_controller_pipelinecontroller.md#class-automationcontrollerpipelinedecorator)'s `version` parameter | Version string |
| **TAGS** | Descriptive, user-defined, color-coded tags assigned to the run. | Tag |
| **STATUS** | Pipeline run's status. See a list of the [task states and state transitions](../../fundamentals/task.md#task-states). For Running, Failed, and Aborted runs, you will also see a progress indicator next to the status. See [here](../../pipelines/pipelines.md#tracking-pipeline-progress). | String |
| **USER** | User who created the run. | String |
diff --git a/docs/webapp/pipelines/webapp_pipeline_viewing.md b/docs/webapp/pipelines/webapp_pipeline_viewing.md
index cd02cd7b..bfeefe4b 100644
--- a/docs/webapp/pipelines/webapp_pipeline_viewing.md
+++ b/docs/webapp/pipelines/webapp_pipeline_viewing.md
@@ -2,7 +2,7 @@
title: Pipeline Run Details
---
-The run details panel shows the pipeline’s structure and the execution status of every step, as well as the run’s
+The run details panel shows the pipeline's structure and the execution status of every step, as well as the run's
configuration parameters and output.

@@ -15,7 +15,7 @@ Each step shows:
* Step status
* Step execution time
* Step log button - Hover over the step and click
- to view the step’s [details panel](#run-and-step-details-panel)
+ to view the step's [details panel](#run-and-step-details-panel)
-While the pipeline is running, the steps’ details and colors are updated.
+While the pipeline is running, the steps' details and colors are updated.