Mirror of https://github.com/clearml/clearml-docs, synced 2025-02-26 05:59:41 +00:00
Commit 4660fb8ea0 ("Small edits (#241)"), parent 5bd8f06f7a
@@ -34,7 +34,7 @@ clearml-data create --project <project_name> --name <dataset_name> --parents <ex
 :::tip Dataset ID
-* To locate a dataset's ID, go to the dataset task's info panel in the [WebApp](../webapp/webapp_overview.md). In the top of the panel,
+* To locate a dataset's ID, go to the dataset task's info panel in the [WebApp](../webapp/webapp_exp_track_visual.md). In the top of the panel,
 to the right of the dataset task name, click `ID` and the dataset ID appears.

 * clearml-data works in a stateful mode so once a new dataset is created, the following commands
@@ -176,7 +176,7 @@ clearml-serving model remove [-h] [--endpoint ENDPOINT]
 |Name|Description|Optional|
 |---|---|---|
 |`--endpoint` | Model endpoint name | <img src="/docs/latest/icons/ico-optional-no.svg" alt="No" className="icon size-md center-md" />|
-``
+
 </div>

 ### upload
@@ -62,7 +62,7 @@ decorator overrides the default queue value for the specific step for which it w
 :::note Execution Modes
 ClearML provides different pipeline execution modes to accommodate development and production use cases. For additional
-details, see [Execution Modes](../../pipelines/pipelines.md#pipeline-controller-execution-options).
+details, see [Execution Modes](../../pipelines/pipelines.md#running-your-pipelines).
 :::

 To run the pipeline, call the pipeline controller function.
@@ -86,7 +86,7 @@ def step_one(pickle_data_url: str, extra: int = 43):
 * `packages` - A list of required packages or a local requirements.txt file. Example: `["tqdm>=2.1", "scikit-learn"]` or
 `"./requirements.txt"`. If not provided, packages are automatically added based on the imports used inside the function.
 * `execution_queue` (Optional) - Queue in which to enqueue the specific step. This overrides the queue set with the
-[PipelineDecorator.set_default_execution_queue method](../references/sdk/automation_controller_pipelinecontroller.md#pipelinedecoratorset_default_execution_queue)
+[`PipelineDecorator.set_default_execution_queue method`](../references/sdk/automation_controller_pipelinecontroller.md#pipelinedecoratorset_default_execution_queue)
 method.
 * `continue_on_fail` - If `True`, a failed step does not cause the pipeline to stop (or marked as failed). Notice, that
 steps that are connected (or indirectly connected) to the failed step are skipped (default `False`)
@@ -118,7 +118,7 @@ following arguments:
 artifact).
 * Alternatively, provide a list of pairs (source_artifact_name, target_artifact_name), where the first string is the
 artifact name as it appears on the component Task, and the second is the target artifact name to put on the Pipeline
-Task. Example: [('processed_data', 'final_processed_data'), ]
+Task. Example: `[('processed_data', 'final_processed_data'), ]`
 * `monitor_models` (Optional) - Automatically log the step's output models on the pipeline Task.
 * Provided a list of model names created by the step's Task, they will also appear on the Pipeline itself. Example: `['model_weights', ]`
 * To select the latest (lexicographic) model use `model_*`, or the last created model with just `*`. Example: `['model_weights_*', ]`
@@ -127,14 +127,14 @@ following arguments:
 Example: `[('model_weights', 'final_model_weights'), ]`

 You can also directly upload a model or an artifact from the step to the pipeline controller, using the
-[PipelineDecorator.upload_model](../references/sdk/automation_controller_pipelinecontroller.md#pipelinedecoratorupload_model)
+[`PipelineDecorator.upload_model`](../references/sdk/automation_controller_pipelinecontroller.md#pipelinedecoratorupload_model)
-and [PipelineDecorator.upload_artifact](../references/sdk/automation_controller_pipelinecontroller.md#pipelinedecoratorupload_artifact)
+and [`PipelineDecorator.upload_artifact`](../references/sdk/automation_controller_pipelinecontroller.md#pipelinedecoratorupload_artifact)
 methods respectively.


 ## Controlling Pipeline Execution
 ### Default Execution Queue
-The [PipelineDecorator.set_default_execution_queue](../references/sdk/automation_controller_pipelinecontroller.md#pipelinedecoratorset_default_execution_queue)
+The [`PipelineDecorator.set_default_execution_queue`](../references/sdk/automation_controller_pipelinecontroller.md#pipelinedecoratorset_default_execution_queue)
 method lets you set a default queue through which all pipeline steps
 will be executed. Once set, step-specific overrides can be specified through the `@PipelineDecorator.component` decorator.

@@ -167,7 +167,7 @@ It is possible to run the pipeline logic itself locally, while keeping the pipel
 #### Debugging Mode
 In debugging mode, the pipeline controller and all components are treated as regular python functions, with components
 called synchronously. This mode is great to debug the components and design the pipeline as the entire pipeline is
-executed on the developer machine with full ability to debug each function call. Call [PipelineDecorator.debug_pipeline](../references/sdk/automation_controller_pipelinecontroller.md#pipelinedecoratordebug_pipeline)
+executed on the developer machine with full ability to debug each function call. Call [`PipelineDecorator.debug_pipeline`](../references/sdk/automation_controller_pipelinecontroller.md#pipelinedecoratordebug_pipeline)
 before the main pipeline logic function call.

 Example:
@@ -183,7 +183,7 @@ In local mode, the pipeline controller creates Tasks for each component, and com
 into sub-processes running on the same machine. Notice that the data is passed between the components and the logic with
 the exact same mechanism as in the remote mode (i.e. hyperparameters / artifacts), with the exception that the execution
 itself is local. Notice that each subprocess is using the exact same python environment as the main pipeline logic. Call
-[PipelineDecorator.run_locally](../references/sdk/automation_controller_pipelinecontroller.md#pipelinedecoratorrun_locally)
+[`PipelineDecorator.run_locally`](../references/sdk/automation_controller_pipelinecontroller.md#pipelinedecoratorrun_locally)
 before the main pipeline logic function.

 Example:
@@ -54,7 +54,7 @@ Creating a pipeline step from an existing ClearML task means that when the step
 new task will be launched through the configured execution queue (the original task is unmodified). The new task’s
 parameters can be [specified](#parameter_override).

-Task steps are added using the [PipelineController.add_step](../references/sdk/automation_controller_pipelinecontroller.md#add_step)
+Task steps are added using the [`PipelineController.add_step`](../references/sdk/automation_controller_pipelinecontroller.md#add_step)
 method:

 ```python
@@ -213,8 +213,8 @@ methods respectively.

 The [`PipelineController.set_default_execution_queue`](../references/sdk/automation_controller_pipelinecontroller.md#set_default_execution_queue)
 method lets you set a default queue through which all pipeline steps will be executed. Once set, step-specific overrides
-can be specified through `execution_queue` of the [PipelineController.add_step](../references/sdk/automation_controller_pipelinecontroller.md#add_step)
+can be specified through `execution_queue` of the [`PipelineController.add_step`](../references/sdk/automation_controller_pipelinecontroller.md#add_step)
-or [PipelineController.add_function_step](../references/sdk/automation_controller_pipelinecontroller.md#add_function_step)
+or [`PipelineController.add_function_step`](../references/sdk/automation_controller_pipelinecontroller.md#add_function_step)
 methods.

 ### Running the Pipeline