diff --git a/docs/clearml_sdk/task_sdk.md b/docs/clearml_sdk/task_sdk.md
index b0b76731..2e0959db 100644
--- a/docs/clearml_sdk/task_sdk.md
+++ b/docs/clearml_sdk/task_sdk.md
@@ -450,8 +450,7 @@ queue. From there, an agent can pull and launch it.
See the [Remote Execution](../guides/advanced/execute_remotely.md) example.
#### Remote Function Execution
-A specific function can also be launched on a remote machine with the [`Task.create_function_task`](../references/sdk/task.md#create_function_task)
-method.
+A specific function can also be launched on a remote machine with [`Task.create_function_task()`](../references/sdk/task.md#create_function_task).
For example:
```python
@@ -467,7 +466,7 @@ a_func_task = task.create_function_task(
)
```
Arguments passed to the function will be automatically logged in the
-experiment's **CONFIGURATION** tab under the **HYPERPARAMETER > Function** section.
+experiment's **CONFIGURATION** tab under the **HYPERPARAMETERS > Function** section.
Like any other arguments, they can be changed from the UI or programmatically.
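+For instance, a minimal sketch of changing one of the logged arguments programmatically (the argument name
+`a` is hypothetical; use the name of an argument your function actually receives):
+
+```python
+# assumes a_func_task was returned by create_function_task() as above;
+# the wrapped function's arguments live under the "Function" hyperparameter section
+a_func_task.set_parameter(name="Function/a", value=10)
+```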
:::note Function Task Creation
diff --git a/docs/faq.md b/docs/faq.md
index 8dd837a8..3d0207a2 100644
--- a/docs/faq.md
+++ b/docs/faq.md
@@ -649,7 +649,7 @@ logger.report_scatter2d(
#### Is there something ClearML can do about uncommitted code running?
Yes! ClearML stores the git diff as part of the experiment's information. You can view the git diff in the **ClearML Web UI >**
-experiment' **EXECUTION** tab.
+experiment's **EXECUTION** tab.
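+
+If you need the stored diff programmatically, a minimal sketch (assuming the SDK's `Task.export_task()`,
+whose returned dictionary includes the `script` section):
+
+```python
+from clearml import Task
+
+task = Task.get_task(task_id="<task_id>")
+# the uncommitted changes are stored alongside the rest of the execution information
+uncommitted_diff = task.export_task()["script"]["diff"]
+```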
diff --git a/docs/getting_started/ds/best_practices.md b/docs/getting_started/ds/best_practices.md
index 1e00bd6d..7219b8e5 100644
--- a/docs/getting_started/ds/best_practices.md
+++ b/docs/getting_started/ds/best_practices.md
@@ -69,7 +69,7 @@ improving your results later on!
While it's possible to track experiments with one tool, and pipeline them with another, having
everything under the same roof has its benefits!
-Being able to track experiment progress and compare experiments, and based on that send experiments to execution on remote
+Being able to track experiment progress and compare experiments, and, based on that, send experiments for execution on remote
machines (that also build the environment themselves) has tremendous benefits in terms of visibility and ease of integration.
Being able to have visibility in your pipeline, while using experiments already defined in the platform,
diff --git a/docs/getting_started/mlops/mlops_first_steps.md b/docs/getting_started/mlops/mlops_first_steps.md
index 2c50868c..04491bb2 100644
--- a/docs/getting_started/mlops/mlops_first_steps.md
+++ b/docs/getting_started/mlops/mlops_first_steps.md
@@ -55,7 +55,7 @@ required python packages, and execute and monitor the process.
:::tip Agent Deployment Modes
ClearML Agents can be deployed in Virtual Environment Mode or Docker Mode. In [virtual environment mode](../../clearml_agent.md#execution-environments),
the agent creates a new venv to execute an experiment. In [Docker mode](../../clearml_agent.md#docker-mode),
-the agent executes an experiment inside a Docker container. See all running mode options [here](../../fundamentals/agents_and_queues.md#additional-features).
+the agent executes an experiment inside a Docker container. For more information, see [Running Modes](../../fundamentals/agents_and_queues.md#running-modes).
:::
## Clone an Experiment
diff --git a/docs/getting_started/mlops/mlops_second_steps.md b/docs/getting_started/mlops/mlops_second_steps.md
index 51415849..83e9c212 100644
--- a/docs/getting_started/mlops/mlops_second_steps.md
+++ b/docs/getting_started/mlops/mlops_second_steps.md
@@ -8,9 +8,9 @@ Pipelines provide users with a greater level of abstraction and automation, with
Tasks can interface with other Tasks in the pipeline and leverage other Tasks' work products.
The sections below describe the following scenarios:
-* Dataset creation
-* Data processing and consumption
-* Pipeline building
+* [Dataset creation](#dataset-creation)
+* Data [processing](#preprocessing-data) and [consumption](#training)
+* [Pipeline building](#building-the-pipeline)
## Building Tasks
@@ -46,7 +46,8 @@ dataset_folder = dataset.get_mutable_local_copy(
# create a new version of the dataset with the pickle file
new_dataset = Dataset.create(
- dataset_project='data', dataset_name='dataset_v2',
+ dataset_project='data',
+ dataset_name='dataset_v2',
parent_datasets=[dataset],
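+    # linking the parent keeps version lineage: files unchanged since the parent version are inherited rather than re-added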
use_current_task=True,
# this will make sure we have the creation code and the actual dataset artifacts on the same Task
diff --git a/docs/integrations/monai.md b/docs/integrations/monai.md
index c968dab0..337ff66d 100644
--- a/docs/integrations/monai.md
+++ b/docs/integrations/monai.md
@@ -80,7 +80,7 @@ View the logged metrics in the WebApp, in the experiment's **Scalars** tab.
ClearML automatically logs models saved using the `ModelCheckpoint` handler. Make sure a ClearML Task is instantiated in
your script. If you're already using either `ClearMLStatsHandler` or `ClearMLImageHandler`, you don't have to add any code.
-Otherwise, all you have to is add two lines of code to create a task:
+Otherwise, all you have to do is add two lines of code to create a task:
```python
from clearml import Task
diff --git a/docs/pipelines/pipelines_sdk_tasks.md b/docs/pipelines/pipelines_sdk_tasks.md
index a1fd8dac..6d3ea2bf 100644
--- a/docs/pipelines/pipelines_sdk_tasks.md
+++ b/docs/pipelines/pipelines_sdk_tasks.md
@@ -96,13 +96,13 @@ pipe.add_step(
* `cache_executed_step` – If `True`, the controller will check if an identical task with the same code (including setup,
e.g. required packages, docker image, etc.) and input arguments was already executed. If found, the cached step's
outputs are used instead of launching a new task.
-* `execution_queue` (optional) - the queue to use for executing this specific step. If not provided, the task will be sent to the default execution queue, as defined on the class
-* `parents` – Optional list of parent steps in the pipeline. The current step in the pipeline will be sent for execution only after all the parent steps have been executed successfully.
+* `execution_queue` (optional) - The queue to use for executing this specific step. If not provided, the task will be sent to the default execution queue, as defined on the class.
+* `parents` (optional) - List of parent steps in the pipeline. The current step in the pipeline will be sent for execution only after all the parent steps have been executed successfully.
* `parameter_override` - Dictionary of parameters and values to override in the current step. See [parameter_override](#parameter_override).
-* `configuration_overrides` - Dictionary of configuration objects and values to override in the current step. See [configuration_overrides](#configuration_overrides)
+* `configuration_overrides` - Dictionary of configuration objects and values to override in the current step. See [configuration_overrides](#configuration_overrides).
* `monitor_models`, `monitor_metrics`, `monitor_artifacts` - see [here](#models-artifacts-and-metrics).
-See [add_step](../references/sdk/automation_controller_pipelinecontroller.md#add_step) for all arguments.
+See [`PipelineController.add_step()`](../references/sdk/automation_controller_pipelinecontroller.md#add_step) for all arguments.
#### parameter_override
Use the `parameter_override` argument to modify the step's parameter values. The `parameter_override` dictionary key is
@@ -164,13 +164,13 @@ pipe.add_function_step(
(including setup, see task [Execution](../webapp/webapp_exp_track_visual.md#execution)
section) and input arguments was already executed. If found, the cached step's
outputs are used instead of launching a new task.
-* `parents` – Optional list of parent steps in the pipeline. The current step in the pipeline will be sent for execution
+* `parents` (optional) - List of parent steps in the pipeline. The current step in the pipeline will be sent for execution
only after all the parent steps have been executed successfully.
-* `pre_execute_callback` and `post_execute_callback` - Control pipeline flow with callback functions that can be called
-before and/or after a step's execution. See [here](#pre_execute_callback-and-post_execute_callback).
+* `pre_execute_callback` and `post_execute_callback` - Control pipeline flow with callback functions that can be called
+before and/or after a step's execution; a minimal signature sketch follows this list. See [here](#pre_execute_callback-and-post_execute_callback).
* `monitor_models`, `monitor_metrics`, `monitor_artifacts` - see [here](#models-artifacts-and-metrics).
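+A minimal sketch of the two callback signatures (function and argument names are illustrative; returning
+`False` from the pre-execute callback skips the step):
+
+```python
+def pre_execute_callback_example(a_pipeline, a_node, current_param_override):
+    # runs before the step is launched; return False to skip this step
+    return True
+
+def post_execute_callback_example(a_pipeline, a_node):
+    # runs after the step completes; a_node.executed holds the executed Task's ID
+    pass
+```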
-See [add_function_step](../references/sdk/automation_controller_pipelinecontroller.md#add_function_step) for all
+See [`PipelineController.add_function_step()`](../references/sdk/automation_controller_pipelinecontroller.md#add_function_step) for all
arguments.
### Important Arguments