Reformat pipeline docs (#239)

Author: pollfly
Date: 2022-04-26 13:42:55 +03:00
Committed by: GitHub
Commit: 55ba4ec8dd (parent ca76d78731)
14 changed files with 530 additions and 462 deletions

@@ -48,7 +48,7 @@ that we need.
 - [ClearML Agent](../../clearml_agent.md) does the heavy lifting. It reproduces the execution environment, clones your code,
 applies code patches, manages parameters (including overriding them on the fly), executes the code, and queues multiple tasks.
 It can even [build](../../clearml_agent.md#exporting-a-task-into-a-standalone-docker-container) the docker container for you!
-- [ClearML Pipelines](../../fundamentals/pipelines.md) ensure that steps run in the same order,
+- [ClearML Pipelines](../../pipelines/pipelines.md) ensure that steps run in the same order,
 programmatically chaining tasks together, while giving an overview of the execution pipeline's status.
 **Your entire environment should magically be able to run on any machine, without you working hard.**
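The ordering guarantee described in this hunk (steps run in a fixed order, chained programmatically) can be illustrated with a minimal, ClearML-free sketch. The `Pipeline` class, its `add_step` signature, and the step names below are hypothetical stand-ins for illustration, not ClearML's actual API:

```python
class Pipeline:
    """Toy illustration of ordered step execution; not ClearML's API."""

    def __init__(self):
        self.steps = []  # list of (name, fn, parent step names)

    def add_step(self, name, fn, parents=()):
        self.steps.append((name, fn, tuple(parents)))

    def run(self):
        done, order = set(), []
        pending = list(self.steps)
        while pending:
            for step in pending:
                name, fn, parents = step
                if all(p in done for p in parents):  # run only after parents finish
                    fn()
                    done.add(name)
                    order.append(name)
                    pending.remove(step)
                    break
            else:
                raise RuntimeError("cycle or missing parent step")
        return order


pipe = Pipeline()
pipe.add_step("train", lambda: None, parents=("prepare",))
pipe.add_step("prepare", lambda: None)
print(pipe.run())  # → ['prepare', 'train']
```

Even though `train` was registered first, declaring `prepare` as its parent forces the dependency order, which is the property the pipeline docs describe.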

@@ -176,7 +176,7 @@ or check these pages out:
 - Scale your work and deploy [ClearML Agents](../../clearml_agent.md)
 - Develop on remote machines with [ClearML Session](../../apps/clearml_session.md)
-- Structure your work and put it into [Pipelines](../../fundamentals/pipelines.md)
+- Structure your work and put it into [Pipelines](../../pipelines/pipelines.md)
 - Improve your experiments with [HyperParameter Optimization](../../fundamentals/hpo.md)
 - Check out ClearML's integrations with [external libraries](../../integrations/libraries.md).

@@ -26,7 +26,7 @@ Once we have a Task in ClearML, we can clone and edit its definitions in the UI,
 ## Advanced Automation
 - Create daily / weekly cron jobs for retraining the best-performing models.
 - Create data monitoring & scheduling, and launch inference jobs to test performance on any newly arriving dataset.
-- Once there are two or more experiments that run one after another, group them together into a [pipeline](../../fundamentals/pipelines.md).
+- Once there are two or more experiments that run one after another, group them together into a [pipeline](../../pipelines/pipelines.md).
 ## Manage Your Data
 Use [ClearML Data](../../clearml_data/clearml_data.md) to version your data, then link it to running experiments for easy reproduction.

@@ -154,7 +154,7 @@ a_numpy = executed_task.artifacts['numpy'].get()
 ```
 By facilitating the communication of complex objects between tasks, artifacts serve as the foundation of ClearML's [Data Management](../../clearml_data/clearml_data.md)
-and [pipeline](../../fundamentals/pipelines.md) solutions.
+and [pipeline](../../pipelines/pipelines.md) solutions.
 #### Log Models
 Logging models into the model repository is the easiest way to integrate the development process directly with production.
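The hunk above sits next to code that fetches an artifact from an executed task (`executed_task.artifacts['numpy'].get()`). The underlying idea, serialize a complex object in one task so another task can retrieve it later, can be sketched with only the standard library. The `ArtifactStore` class and file layout here are hypothetical, for illustration only, not ClearML's storage mechanism:

```python
import os
import pickle
import tempfile


class ArtifactStore:
    """Toy stand-in for a task artifact registry; not ClearML's API."""

    def __init__(self, root):
        self.root = root

    def upload(self, name, obj):
        # Serialize the object so another process/task could load it later.
        with open(os.path.join(self.root, name + ".pkl"), "wb") as f:
            pickle.dump(obj, f)

    def get(self, name):
        with open(os.path.join(self.root, name + ".pkl"), "rb") as f:
            return pickle.load(f)


with tempfile.TemporaryDirectory() as d:
    store = ArtifactStore(d)
    store.upload("stats", {"mean": 0.5, "rows": [1, 2, 3]})  # "producer" task
    restored = store.get("stats")                            # "consumer" task
    print(restored["rows"])  # → [1, 2, 3]
```

The round trip through serialized storage is what lets artifacts carry complex objects between otherwise independent tasks, the property the docs call the foundation of the pipeline solution.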

@@ -113,4 +113,4 @@ pipe.add_step(
 We could also pass the parameters from one step to the other (for example `Task.id`).
 In addition to pipelines made up of Task steps, ClearML also supports pipelines consisting of function steps. See more in the
-full pipeline documentation [here](../../fundamentals/pipelines.md).
+full pipeline documentation [here](../../pipelines/pipelines.md).
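This hunk patches a page that builds a pipeline with `pipe.add_step(...)` and notes that parameters (such as `Task.id`) can be passed from one step to the next. A minimal illustration of that hand-off, with hypothetical function names and a plain dict standing in for the parameter mechanism (not ClearML's `PipelineController`):

```python
def run_chain(steps, params=None):
    """Run steps in order; each step returns a dict merged into shared params."""
    params = dict(params or {})
    for step in steps:
        out = step(params) or {}
        params.update(out)  # later steps see earlier steps' outputs
    return params


def make_task(params):
    # e.g. a step that creates a Task and exposes its id
    return {"task_id": "abc123"}


def use_task(params):
    # a downstream step consuming the id produced above
    return {"used": params["task_id"]}


result = run_chain([make_task, use_task])
print(result["used"])  # → abc123
```

Each step only sees what earlier steps published into the shared parameter dict, which mirrors how a pipeline step can consume an upstream step's outputs.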