Small edits (#241)

pollfly 2022-05-01 10:06:09 +03:00 committed by GitHub
parent 5bd8f06f7a
commit 4660fb8ea0
5 changed files with 13 additions and 13 deletions

View File

@@ -34,7 +34,7 @@ clearml-data create --project <project_name> --name <dataset_name> --parents <ex
:::tip Dataset ID
* To locate a dataset's ID, go to the dataset task's info panel in the [WebApp](../webapp/webapp_overview.md). In the top of the panel,
* To locate a dataset's ID, go to the dataset task's info panel in the [WebApp](../webapp/webapp_exp_track_visual.md). In the top of the panel,
to the right of the dataset task name, click `ID` and the dataset ID appears.
* clearml-data works in a stateful mode so once a new dataset is created, the following commands
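
A dataset's ID can also be fetched with the ClearML Python SDK instead of the WebApp. A minimal sketch, assuming a dataset named `example_dataset` already exists in `example_project` (both names are illustrative):

```python
from clearml import Dataset

# Look up an existing dataset by project and name
dataset = Dataset.get(dataset_project="example_project", dataset_name="example_dataset")

# The printed ID is the same value shown in the WebApp and can be passed to
# clearml-data commands, e.g. --parents <dataset_id>
print(dataset.id)
```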

View File

@@ -176,7 +176,7 @@ clearml-serving model remove [-h] [--endpoint ENDPOINT]
|Name|Description|Optional|
|---|---|---|
|`--endpoint` | Model endpoint name | <img src="/docs/latest/icons/ico-optional-no.svg" alt="No" className="icon size-md center-md" />|
``
</div>
### upload

View File

@@ -62,7 +62,7 @@ decorator overrides the default queue value for the specific step for which it w
:::note Execution Modes
ClearML provides different pipeline execution modes to accommodate development and production use cases. For additional
details, see [Execution Modes](../../pipelines/pipelines.md#pipeline-controller-execution-options).
details, see [Execution Modes](../../pipelines/pipelines.md#running-your-pipelines).
:::
To run the pipeline, call the pipeline controller function.
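
As a rough, illustrative sketch (names and values are assumptions, not taken from the page), a decorated controller function and the call that runs it could look like this:

```python
from clearml.automation.controller import PipelineDecorator

@PipelineDecorator.component(return_values=["doubled"])
def double(value: int):
    # Runs as an independent pipeline step
    return value * 2

@PipelineDecorator.pipeline(name="example pipeline", project="examples", version="0.1")
def pipeline_logic(value: int = 21):
    # The controller function: orchestrates the steps defined above
    print(double(value))

if __name__ == "__main__":
    # Run everything on this machine so the sketch does not require an agent/queue
    PipelineDecorator.run_locally()
    # Calling the decorated controller function is what runs the pipeline
    pipeline_logic(value=21)
```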

View File

@@ -86,7 +86,7 @@ def step_one(pickle_data_url: str, extra: int = 43):
* `packages` - A list of required packages or a local requirements.txt file. Example: `["tqdm>=2.1", "scikit-learn"]` or
`"./requirements.txt"`. If not provided, packages are automatically added based on the imports used inside the function.
* `execution_queue` (Optional) - Queue in which to enqueue the specific step. This overrides the queue set with the
[PipelineDecorator.set_default_execution_queue method](../references/sdk/automation_controller_pipelinecontroller.md#pipelinedecoratorset_default_execution_queue)
[`PipelineDecorator.set_default_execution_queue`](../references/sdk/automation_controller_pipelinecontroller.md#pipelinedecoratorset_default_execution_queue)
method.
* `continue_on_fail` - If `True`, a failed step does not cause the pipeline to stop (or be marked as failed). Note that
steps that are connected (directly or indirectly) to the failed step are skipped (default `False`).
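
An illustrative sketch of how these arguments can be combined on a single component (the queue and package names are assumptions):

```python
from clearml.automation.controller import PipelineDecorator

@PipelineDecorator.component(
    return_values=["model_path"],
    packages=["scikit-learn", "tqdm>=2.1"],  # or packages="./requirements.txt"
    execution_queue="gpu_queue",             # overrides the default queue for this step only
    continue_on_fail=True,                   # the pipeline keeps running if this step fails
)
def train(dataset_path: str):
    ...
```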
@@ -118,7 +118,7 @@ following arguments:
artifact).
* Alternatively, provide a list of pairs (source_artifact_name, target_artifact_name), where the first string is the
artifact name as it appears on the component Task, and the second is the target artifact name to put on the Pipeline
Task. Example: [('processed_data', 'final_processed_data'), ]
Task. Example: `[('processed_data', 'final_processed_data'), ]`
* `monitor_models` (Optional) - Automatically log the step's output models on the pipeline Task.
* Provide a list of model names created by the step's Task, and they will also appear on the Pipeline itself. Example: `['model_weights', ]`
* To select the latest (lexicographic) model use `model_*`, or the last created model with just `*`. Example: `['model_weights_*', ]`
@@ -127,14 +127,14 @@ following arguments:
Example: `[('model_weights', 'final_model_weights'), ]`
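
A small sketch of the two monitoring arguments together (the artifact and model names are illustrative):

```python
from clearml.automation.controller import PipelineDecorator

@PipelineDecorator.component(
    return_values=["processed_data"],
    monitor_artifacts=[("processed_data", "final_processed_data")],  # (source, target) pair
    monitor_models=["model_weights_*"],  # log the latest matching model on the pipeline Task
)
def process(raw_data):
    ...
```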
You can also directly upload a model or an artifact from the step to the pipeline controller, using the
[PipelineDecorator.upload_model](../references/sdk/automation_controller_pipelinecontroller.md#pipelinedecoratorupload_model)
and [PipelineDecorator.upload_artifact](../references/sdk/automation_controller_pipelinecontroller.md#pipelinedecoratorupload_artifact)
[`PipelineDecorator.upload_model`](../references/sdk/automation_controller_pipelinecontroller.md#pipelinedecoratorupload_model)
and [`PipelineDecorator.upload_artifact`](../references/sdk/automation_controller_pipelinecontroller.md#pipelinedecoratorupload_artifact)
methods respectively.
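
Inside a component's function body, such calls might look roughly like this (the artifact object, names, and file path are assumptions):

```python
from clearml.automation.controller import PipelineDecorator

# Called from within a step's function
PipelineDecorator.upload_artifact(name="processed_data", artifact_object={"rows": 100})
PipelineDecorator.upload_model(model_name="final_model", model_local_path="model.pkl")
```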
## Controlling Pipeline Execution
### Default Execution Queue
The [PipelineDecorator.set_default_execution_queue](../references/sdk/automation_controller_pipelinecontroller.md#pipelinedecoratorset_default_execution_queue)
The [`PipelineDecorator.set_default_execution_queue`](../references/sdk/automation_controller_pipelinecontroller.md#pipelinedecoratorset_default_execution_queue)
method lets you set a default queue through which all pipeline steps
will be executed. Once set, step-specific overrides can be specified through the `@PipelineDecorator.component` decorator.
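
For instance (the queue name is illustrative, and `pipeline_logic` refers to a decorated controller function like the one sketched earlier):

```python
from clearml.automation.controller import PipelineDecorator

PipelineDecorator.set_default_execution_queue("default")
pipeline_logic()  # steps without an explicit execution_queue are enqueued to "default"
```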
@@ -167,7 +167,7 @@ It is possible to run the pipeline logic itself locally, while keeping the pipel
#### Debugging Mode
In debugging mode, the pipeline controller and all components are treated as regular Python functions, with components
called synchronously. This mode is great for debugging the components and designing the pipeline, since the entire pipeline is
executed on the developer machine with full ability to debug each function call. Call [PipelineDecorator.debug_pipeline](../references/sdk/automation_controller_pipelinecontroller.md#pipelinedecoratordebug_pipeline)
executed on the developer machine with full ability to debug each function call. Call [`PipelineDecorator.debug_pipeline`](../references/sdk/automation_controller_pipelinecontroller.md#pipelinedecoratordebug_pipeline)
before the main pipeline logic function call.
Example:
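
A minimal sketch of that call order, reusing the illustrative `pipeline_logic` controller function from the earlier sketch:

```python
from clearml.automation.controller import PipelineDecorator

if __name__ == "__main__":
    PipelineDecorator.debug_pipeline()  # everything runs as plain function calls in this process
    pipeline_logic(value=21)
```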
@@ -183,7 +183,7 @@ In local mode, the pipeline controller creates Tasks for each component, and com
into sub-processes running on the same machine. Note that data is passed between the components and the pipeline logic with
the exact same mechanism as in remote mode (i.e. hyperparameters / artifacts), except that the execution
itself is local. Each subprocess uses the exact same Python environment as the main pipeline logic. Call
[PipelineDecorator.run_locally](../references/sdk/automation_controller_pipelinecontroller.md#pipelinedecoratorrun_locally)
[`PipelineDecorator.run_locally`](../references/sdk/automation_controller_pipelinecontroller.md#pipelinedecoratorrun_locally)
before the main pipeline logic function.
Example:
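
The call order is the same as in debugging mode, only the method differs (again reusing the illustrative `pipeline_logic` controller function):

```python
from clearml.automation.controller import PipelineDecorator

if __name__ == "__main__":
    PipelineDecorator.run_locally()  # components run as sub-process Tasks on this machine
    pipeline_logic(value=21)
```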

View File

@@ -54,7 +54,7 @@ Creating a pipeline step from an existing ClearML task means that when the step
new task will be launched through the configured execution queue (the original task is unmodified). The new task's
parameters can be [specified](#parameter_override).
Task steps are added using the [PipelineController.add_step](../references/sdk/automation_controller_pipelinecontroller.md#add_step)
Task steps are added using the [`PipelineController.add_step`](../references/sdk/automation_controller_pipelinecontroller.md#add_step)
method:
```python
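# A rough sketch of such a call (the task, project, and parameter names below
# are illustrative assumptions, not taken from the page):
from clearml import PipelineController

pipe = PipelineController(name="Example pipeline", project="examples", version="1.0")
pipe.add_step(
    name="stage_process",
    parents=["stage_data"],                    # optional: run after the "stage_data" step
    base_task_project="examples",              # project holding the existing ClearML task
    base_task_name="pipeline step 2 process",  # the existing task cloned for this step
    parameter_override={"General/dataset_url": "${stage_data.artifacts.dataset.url}"},
)
```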
@@ -213,8 +213,8 @@ methods respectively.
The [`PipelineController.set_default_execution_queue`](../references/sdk/automation_controller_pipelinecontroller.md#set_default_execution_queue)
method lets you set a default queue through which all pipeline steps will be executed. Once set, step-specific overrides
can be specified through `execution_queue` of the [PipelineController.add_step](../references/sdk/automation_controller_pipelinecontroller.md#add_step)
or [PipelineController.add_function_step](../references/sdk/automation_controller_pipelinecontroller.md#add_function_step)
can be specified through `execution_queue` of the [`PipelineController.add_step`](../references/sdk/automation_controller_pipelinecontroller.md#add_step)
or [`PipelineController.add_function_step`](../references/sdk/automation_controller_pipelinecontroller.md#add_function_step)
methods.
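
Combined, the default queue plus a per-step override could look like this (the queue names and the step function are assumptions):

```python
from clearml import PipelineController

def train_model(dataset_path: str):
    ...

pipe = PipelineController(name="Example pipeline", project="examples", version="1.0")
pipe.set_default_execution_queue("default")         # used by steps with no explicit queue
pipe.add_function_step(
    name="train",
    function=train_model,
    function_kwargs={"dataset_path": "/tmp/data"},  # illustrative
    execution_queue="gpu_queue",                    # overrides the default for this step only
)
```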
### Running the Pipeline