Pipelines are a way to streamline and connect multiple processes, plugging the output of one process in as the input of another.
ClearML Pipelines are implemented by a *Controller Task* that holds the logic of the pipeline steps' interactions. The execution logic
controls which step to launch based on parent steps completing their execution. Depending on the specifications
laid out in the controller task, a step's parameters can be overridden, enabling users to leverage other steps' execution
products such as artifacts and parameters.
When run, the controller launches the pipeline steps as their parent steps complete. The pipeline logic and steps
can be executed locally, or on any machine using the [clearml-agent](../clearml_agent.md).
![Pipeline UI](../img/pipelines_DAG.png)
The [Pipeline Run](../webapp/pipelines/webapp_pipeline_viewing.md) page in the web UI displays the pipeline’s structure
in terms of executed steps and their status, as well as the run’s configuration parameters and output. See [pipeline UI](../webapp/pipelines/webapp_pipeline_page.md)
for more details.
ClearML pipelines are created from code using one of the following:
* [PipelineController](pipelines_sdk_tasks.md) class - A pythonic interface for defining and configuring the pipeline
controller and its steps. The controller and steps can be functions in your Python code, or existing [ClearML tasks](../fundamentals/task.md).
* [PipelineDecorator](pipelines_sdk_function_decorators.md) class - A set of Python decorators which transform your
functions into the pipeline controller and steps.
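For example, here is a minimal sketch of a pipeline built with `PipelineDecorator`. The pipeline and project names, the
function names, and the dataset URL are illustrative placeholders:

```python
from clearml import PipelineDecorator

# Each component becomes a pipeline step, executed as its own ClearML task
@PipelineDecorator.component(return_values=["data"], cache=True)
def load_data(source_url: str):
    import pandas as pd  # step imports are packaged with the component
    return pd.read_csv(source_url)

@PipelineDecorator.component(return_values=["n_rows"])
def count_rows(data):
    return len(data)

# The decorated function below becomes the pipeline controller's logic
@PipelineDecorator.pipeline(name="demo pipeline", project="examples", version="1.0.0")
def run_pipeline(source_url: str):
    data = load_data(source_url)
    print(f"loaded {count_rows(data)} rows")

if __name__ == "__main__":
    # For quick debugging, execute the whole pipeline on this machine;
    # remove this call to launch through execution queues instead
    PipelineDecorator.run_locally()
    run_pipeline(source_url="https://example.com/data.csv")
```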
When the pipeline runs, corresponding ClearML tasks are created for the controller and steps.
Since a pipeline controller is itself a [ClearML task](../fundamentals/task.md), it can be used as a pipeline step.
This allows you to create more complex workflows, such as pipelines running other pipelines, or pipelines running multiple
tasks concurrently. See the [Tabular training pipeline](../guides/frameworks/pytorch/notebooks/table/tabular_training_pipeline.md)
example of a pipeline with concurrent steps.
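To illustrate, here is a minimal `PipelineController` sketch that chains two pre-existing ClearML tasks. The project and
task names are placeholders for tasks in your own workspace; since a controller is itself a task, one of these base tasks
could just as well be another pipeline's controller:

```python
from clearml import PipelineController

pipe = PipelineController(name="demo pipeline", project="examples", version="1.0.0")
pipe.add_parameter(name="url", default="https://example.com/data.csv")

pipe.add_step(
    name="stage_data",
    base_task_project="examples",
    base_task_name="data loading",
    # Override the base task's parameter with the pipeline-level parameter
    parameter_override={"General/url": "${pipeline.url}"},
)
pipe.add_step(
    name="stage_train",
    parents=["stage_data"],  # launched only once stage_data completes
    base_task_project="examples",
    base_task_name="model training",
    # Pass the parent step's task ID so the training task can fetch its artifacts
    parameter_override={"General/dataset_task_id": "${stage_data.id}"},
)

pipe.start(queue="services")  # enqueue the controller for a clearml-agent to run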
## Running Your Pipelines
ClearML supports multiple modes for pipeline execution:
* **Remote Mode** (default) - In this mode, the pipeline controller logic is executed through a designated queue, and all
the pipeline steps are launched remotely through their respective queues.
* **Local Mode** - In this mode, the pipeline controller logic is executed locally, and the steps are run as subprocesses
on the same machine.
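Assuming a `pipe` controller like the one sketched earlier, the mode comes down to how you start it. A minimal sketch
(the queue name is a placeholder):

```python
from clearml import PipelineController

pipe = PipelineController(name="demo pipeline", project="examples", version="1.0.0")
# ... add steps as in the earlier sketch ...

# Remote mode (default): enqueue the controller logic for a clearml-agent;
# each step is then launched remotely through its own queue
pipe.start(queue="services")

# Local mode: run the controller logic in the current process instead.
# With run_pipeline_steps_locally=True the steps run as local subprocesses.
# pipe.start_locally(run_pipeline_steps_locally=True)
```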
## Pipeline Step Caching

The pipeline controller supports step caching, that is, reusing the outputs of previously executed pipeline steps.
Cached pipeline steps are reused when they meet the following criteria:
* The step code is the same, including environment setup (components in the task's [Execution](../webapp/webapp_exp_track_visual.md#execution)
section, like required packages and docker image)
* The step input arguments are unchanged, including step arguments and parameters (anything logged to the task's [Configuration](../webapp/webapp_exp_track_visual.md#configuration)
section)
By default, pipeline steps are not cached. Enable caching when creating a pipeline step (for example, see [@PipelineDecorator.component](pipelines_sdk_function_decorators.md#pipelinedecoratorcomponent)).
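For instance, with a step created from a function, caching is enabled through the `cache_executed_step` argument (with
`@PipelineDecorator.component`, the equivalent flag is `cache=True`, as shown in the earlier sketch). The names and URL
below are placeholders:

```python
from clearml import PipelineController

def prepare_data(source_url):
    import pandas as pd
    return pd.read_csv(source_url)

pipe = PipelineController(name="cached pipeline", project="examples", version="1.0.0")
pipe.add_parameter(name="url", default="https://example.com/data.csv")

# cache_executed_step=True reuses this step's previous outputs when both its
# code (including environment setup) and its inputs are unchanged
pipe.add_function_step(
    name="prepare_data",
    function=prepare_data,
    function_kwargs={"source_url": "${pipeline.url}"},
    function_return=["data"],
    cache_executed_step=True,
)
```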
## Rerunning Pipelines

When you launch a new run of an existing pipeline through the ClearML WebApp, the new pipeline run will be executed
through the execution queue by a ClearML agent. The agent will rebuild the pipeline according to the configuration and
DAG that was captured in the original run, and override the original parameters' values with those input in the
**NEW RUN** modal.
One exception is for pipelines [created from functions](pipelines_sdk_tasks.md#steps-from-functions) (adding steps to a
pipeline controller using [`PipelineController.add_function_step()`](../references/sdk/automation_controller_pipelinecontroller.md#add_function_step)):
When you rerun the pipeline through the ClearML WebApp, the pipeline is constructed again at runtime from the executed
code.
To change this behavior, pass `always_create_from_code=False` when instantiating a `PipelineController`. In this case,
rerunning the pipeline rebuilds it from the DAG captured in the original run, overriding the original parameters' values
with those input in the **NEW RUN** modal, as described above.
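A minimal sketch of a controller instantiated this way (the names are placeholders):

```python
from clearml import PipelineController

# With always_create_from_code=False, WebApp reruns of a function-based pipeline
# reuse the DAG captured in the original run instead of reconstructing the
# pipeline from the executed code
pipe = PipelineController(
    name="task pipeline",
    project="examples",
    version="1.0.0",
    always_create_from_code=False,
)
```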