Small edits (#722)

pollfly 2023-11-28 10:03:58 +02:00 committed by GitHub
parent 7afc79f5ce
commit 4b02af91f7
No known key found for this signature in database
GPG Key ID: 4AEE18F83AFDEB23
7 changed files with 17 additions and 17 deletions


@@ -450,8 +450,7 @@ queue. From there, an agent can pull and launch it.
See the [Remote Execution](../guides/advanced/execute_remotely.md) example.
#### Remote Function Execution
A specific function can also be launched on a remote machine with [`Task.create_function_task()`](../references/sdk/task.md#create_function_task).
For example:
```python
@@ -467,7 +466,7 @@ a_func_task = task.create_function_task(
)
```
Arguments passed to the function will be automatically logged in the
experiment's **CONFIGURATION** tab under the **HYPERPARAMETERS > Function** section.
Like any other arguments, they can be changed from the UI or programmatically.
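As a rough sketch of this pattern (the function and argument names here are hypothetical, not from the snippet above), the function is plain Python, and the remote-launch call is shown as comments since `Task.init()` requires a ClearML server:

```python
# Hypothetical standalone function to be executed as its own Task.
# Its arguments are what ClearML would log under HYPERPARAMETERS > Function.
def resize_dataset(dataset_id: str, dimension: int) -> int:
    # placeholder for real processing logic
    return dimension * 2

# Sketch of the remote launch (needs a live ClearML server, so shown as comments):
#   from clearml import Task
#   task = Task.init(project_name='examples', task_name='main')
#   a_func_task = task.create_function_task(
#       func=resize_dataset,            # function to run as a separate task
#       func_name='resize_dataset_64',  # hypothetical name for the generated entry point
#       task_name='Resize Dataset',     # hypothetical name of the new task
#       dataset_id='<dataset-id>',      # forwarded, and logged, as a function argument
#       dimension=64,
#   )

# The function itself remains callable locally for debugging:
print(resize_dataset('local-debug', 32))  # → 64
```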
:::note Function Task Creation


@@ -649,7 +649,7 @@ logger.report_scatter2d(
#### Is there something ClearML can do about uncommitted code running? <a id="help-uncommitted-code"></a>
Yes! ClearML stores the git diff as part of the experiment's information. You can view the git diff in the **ClearML Web UI >**
experiment's **EXECUTION** tab.
<br/>


@@ -69,7 +69,7 @@ improving your results later on!
While it's possible to track experiments with one tool, and pipeline them with another, having
everything under the same roof has its benefits!
Being able to track experiment progress and compare experiments, and, based on that, send experiments to execution on remote
machines (that also build the environment themselves) has tremendous benefits in terms of visibility and ease of integration.
Being able to have visibility in your pipeline, while using experiments already defined in the platform,


@@ -55,7 +55,7 @@ required python packages, and execute and monitor the process.
:::tip Agent Deployment Modes
ClearML Agents can be deployed in Virtual Environment Mode or Docker Mode. In [virtual environment mode](../../clearml_agent.md#execution-environments),
the agent creates a new venv to execute an experiment. In [Docker mode](../../clearml_agent.md#docker-mode),
the agent executes an experiment inside a Docker container. For more information, see [Running Modes](../../fundamentals/agents_and_queues.md#running-modes).
:::
## Clone an Experiment


@@ -8,9 +8,9 @@ Pipelines provide users with a greater level of abstraction and automation, with
Tasks can interface with other Tasks in the pipeline and leverage other Tasks' work products.
The sections below describe the following scenarios:
* [Dataset creation](#dataset-creation)
* Data [processing](#preprocessing-data) and [consumption](#training)
* [Pipeline building](#building-the-pipeline)
## Building Tasks
@@ -46,7 +46,8 @@ dataset_folder = dataset.get_mutable_local_copy(
# create a new version of the dataset with the pickle file
new_dataset = Dataset.create(
    dataset_project='data',
    dataset_name='dataset_v2',
    parent_datasets=[dataset],
    use_current_task=True,
    # this will make sure we have the creation code and the actual dataset artifacts on the same Task


@@ -80,7 +80,7 @@ View the logged metrics in the WebApp, in the experiment's **Scalars** tab.
ClearML automatically logs models saved using the `ModelCheckpoint` handler. Make sure a ClearML Task is instantiated in
your script. If you're already using either `ClearMLStatsHandler` or `ClearMLImageHandler`, you don't have to add any code.
Otherwise, all you have to do is add two lines of code to create a task:
```python
from clearml import Task


@@ -96,13 +96,13 @@ pipe.add_step(
* `cache_executed_step` - If `True`, the controller will check if an identical task with the same code (including setup,
e.g. required packages, docker image, etc.) and input arguments was already executed. If found, the cached step's
outputs are used instead of launching a new task.
* `execution_queue` (optional) - The queue to use for executing this specific step. If not provided, the task will be sent to the default execution queue, as defined on the class.
* `parents` (optional) - List of parent steps in the pipeline. The current step in the pipeline will be sent for execution only after all the parent steps have been executed successfully.
* `parameter_override` - Dictionary of parameters and values to override in the current step. See [parameter_override](#parameter_override).
* `configuration_overrides` - Dictionary of configuration objects and values to override in the current step. See [configuration_overrides](#configuration_overrides).
* `monitor_models`, `monitor_metrics`, `monitor_artifacts` - See [here](#models-artifacts-and-metrics).
See [`PipelineController.add_step`](../references/sdk/automation_controller_pipelinecontroller.md#add_step) for all arguments.
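Conceptually, step caching treats the step's code, setup, and input arguments as a lookup key: if an identical combination was already executed, the cached outputs are reused. This simplified sketch (not ClearML's actual implementation) illustrates the idea:

```python
import hashlib
import json

def step_cache_key(code: str, packages: list, docker_image: str, args: dict) -> str:
    """Build a deterministic key from everything that affects a step's output."""
    payload = json.dumps(
        {"code": code, "packages": sorted(packages), "docker": docker_image, "args": args},
        sort_keys=True,
    )
    return hashlib.sha256(payload.encode()).hexdigest()

# Identical code, setup, and inputs produce the same key, so a controller
# could reuse the cached step's outputs instead of launching a new task:
k1 = step_cache_key("train()", ["torch==2.1"], "python:3.10", {"lr": 0.1})
k2 = step_cache_key("train()", ["torch==2.1"], "python:3.10", {"lr": 0.1})
k3 = step_cache_key("train()", ["torch==2.1"], "python:3.10", {"lr": 0.2})
print(k1 == k2, k1 == k3)  # → True False
```

Any change to the setup or arguments (here, `lr`) yields a different key, which is why only truly identical steps hit the cache.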
#### parameter_override
Use the `parameter_override` argument to modify the step's parameter values. The `parameter_override` dictionary key is
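As a sketch (step and parameter names here are hypothetical), the override mapping passed to `add_step` is a plain dictionary whose keys name a hyperparameter section and parameter:

```python
# Hypothetical overrides for a pipeline step. Keys follow the
# "<section>/<parameter>" form used for hyperparameter sections (e.g. Args,
# General); values may reference upstream step outputs with ${...} interpolation.
parameter_override = {
    "Args/batch_size": 64,
    "Args/epochs": 10,
    "General/dataset_url": "${stage_data.artifacts.dataset.url}",
}

# The dictionary would then be passed along with the step definition, e.g.:
#   pipe.add_step(name="stage_train", base_task_name="train template",
#                 parameter_override=parameter_override)
print(sorted(parameter_override))
```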
@@ -164,13 +164,13 @@ pipe.add_function_step(
(including setup, see task [Execution](../webapp/webapp_exp_track_visual.md#execution)
section) and input arguments was already executed. If found, the cached step's
outputs are used instead of launching a new task.
* `parents` (optional) - List of parent steps in the pipeline. The current step in the pipeline will be sent for execution
only after all the parent steps have been executed successfully.
* `pre_execute_callback` and `post_execute_callback` - Control pipeline flow with callback functions that can be called
before and/or after a step's execution. See [here](#pre_execute_callback-and-post_execute_callback).
* `monitor_models`, `monitor_metrics`, `monitor_artifacts` - See [here](#models-artifacts-and-metrics).
See [`PipelineController.add_function_step`](../references/sdk/automation_controller_pipelinecontroller.md#add_function_step) for all
arguments.
### Important Arguments