---
title: Tabular Data Pipeline with Concurrent Steps - Jupyter Notebook
---
This example demonstrates an ML pipeline that preprocesses data in two concurrent steps, trains two networks (where each
network's training depends upon the completion of its own preprocessed data), and picks the best model. It is implemented
using the [PipelineController](../../../../../references/sdk/automation_controller_pipelinecontroller.md)
class.
The pipeline uses four Tasks (each Task is created using a different notebook):
* The pipeline controller Task ([tabular_ml_pipeline.ipynb](https://github.com/allegroai/clearml/blob/master/examples/frameworks/pytorch/notebooks/table/tabular_ml_pipeline.ipynb))
* A data preprocessing Task ([preprocessing_and_encoding.ipynb](https://github.com/allegroai/clearml/blob/master/examples/frameworks/pytorch/notebooks/table/preprocessing_and_encoding.ipynb))
* A training Task ([train_tabular_predictor.ipynb](https://github.com/allegroai/clearml/blob/master/examples/frameworks/pytorch/notebooks/table/train_tabular_predictor.ipynb))
* A better model comparison Task ([pick_best_model.ipynb](https://github.com/allegroai/clearml/blob/master/examples/frameworks/pytorch/notebooks/table/pick_best_model.ipynb))
The `PipelineController` class includes functionality to create a pipeline controller, add steps to the pipeline, pass data from one step to another, make a step's execution begin only after other steps complete, run the pipeline, wait for it to complete, and clean up afterwards.
In this pipeline example, the data preprocessing Task and training Task are each added to the pipeline twice (each is in two steps). When the pipeline runs, the data preprocessing Task and training Task are cloned twice, and the newly cloned Tasks execute. The Task they are cloned from, called the base Task, does not execute. The pipeline controller passes different data to each cloned Task by overriding parameters. In this way, the same Task can run more than once in the pipeline, but with different data.
:::note Download Data
The data download Task is not a step in the pipeline; see [download_and_split](https://github.com/allegroai/clearml/blob/master/examples/frameworks/pytorch/notebooks/table/download_and_split.ipynb).
:::
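When the pipeline runs, the controller clones each base Task and applies the parameter overrides before enqueuing the clone. A minimal sketch of that mechanism (not the example's code; the queue name is an assumption):

```python
from clearml import Task

# Clone the base Task; the base Task itself never executes
base = Task.get_task(project_name='Tabular Example', task_name='tabular preprocessing')
cloned = Task.clone(source_task=base, name='tabular preprocessing (clone)')
cloned.set_parameters({'General/fill_categorical_NA': 'False'})  # override one value
Task.enqueue(cloned, queue_name='default')  # queue name is an assumption
```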
In this example, a pipeline controller object is created.
```python
from clearml.automation import PipelineController

pipe = PipelineController(
    project="Tabular Example",
    name="tabular training pipeline",
    add_pipeline_tags=True,
    version="0.1"
)
```
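An earlier version of this example set a default execution queue in the constructor; the same can be done on the controller object (a minimal sketch, assuming agents listen on a queue named `default`):

```python
# Route all pipeline steps to one queue by default (queue name is an assumption)
pipe.set_default_execution_queue('default')
```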
### Preprocessing Step
Two preprocessing nodes are added to the pipeline. These steps will run concurrently.
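The step definitions below reference `TABULAR_DATASET_ID`, the ID of the Task whose artifacts hold the downloaded data. A minimal sketch of one way to look it up (the Task name here is an assumption):

```python
from clearml import Task

# Assumption: the download_and_split notebook registered a Task under this name
TABULAR_DATASET_ID = Task.get_task(
    project_name='Tabular Example', task_name='Download and split tabular example'
).id
```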
```python
pipe.add_step(
    name='preprocessing_1',
    base_task_project='Tabular Example',
    base_task_name='tabular preprocessing',
    parameter_override={
        'General/data_task_id': TABULAR_DATASET_ID,
        'General/fill_categorical_NA': 'True',
        'General/fill_numerical_NA': 'True'
    }
)
pipe.add_step(
    name='preprocessing_2',
    base_task_project='Tabular Example',
    base_task_name='tabular preprocessing',
    parameter_override={
        'General/data_task_id': TABULAR_DATASET_ID,
        'General/fill_categorical_NA': 'False',
        'General/fill_numerical_NA': 'True'
    }
)
```
The preprocessing data Task fills in values of `NaN` data based on the values of the parameters named `fill_categorical_NA`
and `fill_numerical_NA`. By overriding these parameter values in the two steps, two sets of data are created in the pipeline.
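A hypothetical illustration of what such a preprocessing step might do with these flags (the function and fill strategies are assumptions, not the notebook's actual code):

```python
import pandas as pd

def fill_na(df: pd.DataFrame, fill_categorical_NA: bool, fill_numerical_NA: bool) -> pd.DataFrame:
    # Fill missing values column by column, according to the two flags
    for col in df.columns:
        if df[col].dtype == object:
            if fill_categorical_NA:
                df[col] = df[col].fillna(df[col].mode()[0])  # most frequent category
        elif fill_numerical_NA:
            df[col] = df[col].fillna(df[col].mean())  # column mean
    return df
```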
<div className="cml-expansion-panel-content">
In the preprocessing data Task, the parameter values in ``data_task_id``, ``fill_categorical_NA``, and ``fill_numerical_NA`` are overridden.
```python
configuration_dict = {
    'data_task_id': TABULAR_DATASET_ID,
    'fill_categorical_NA': True,
    'fill_numerical_NA': True
}
configuration_dict = task.connect(configuration_dict)  # enabling configuration override by clearml
```
**ClearML** tracks and reports each instance of the preprocessing Task.
### Training Step

Each training node depends upon the completion of one preprocessing node. The `parents` parameter lists the steps that must complete before a step begins.
The ID of a Task whose artifact contains a set of preprocessed data for training will be overridden using the `data_task_id` key. Its value takes the form `${<stage-name>.<part-of-Task>}`. In this case, `${preprocessing_1.id}` is the ID of one of the preprocessing node Tasks. In this way, each training Task consumes its own set of data.
```python
pipe.add_step(
    name='train_1',
    parents=['preprocessing_1'],
    base_task_project='Tabular Example',
    base_task_name='tabular prediction',
    parameter_override={
        'General/data_task_id': '${preprocessing_1.id}'
    }
)
pipe.add_step(
    name='train_2',
    parents=['preprocessing_2'],
    base_task_project='Tabular Example',
    base_task_name='tabular prediction',
    parameter_override={
        'General/data_task_id': '${preprocessing_2.id}'
    }
)
```
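At runtime, the controller replaces `${preprocessing_1.id}` with the concrete Task ID of that step, which the training Task can then use to fetch its input data. A hypothetical sketch (the artifact name is an assumption):

```python
from clearml import Task

# '<resolved-task-id>' stands for the concrete ID substituted by the controller
parent = Task.get_task(task_id='<resolved-task-id>')
train_df = parent.artifacts['processed_data'].get()  # artifact name is an assumption
```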
<details className="cml-expansion-panel info">
<summary className="cml-expansion-panel-summary">ClearML tracks and reports the training step</summary>
In the training Task, the ``data_task_id`` parameter value is overridden. This allows the pipeline controller to pass a
different Task ID to each instance of training, where each Task has an artifact containing different data.
```python
configuration_dict = {
    'data_task_id': TABULAR_DATASET_ID,
    'number_of_epochs': 15, 'batch_size': 100, 'dropout': 0.3, 'base_lr': 0.1
}
configuration_dict = task.connect(configuration_dict)  # enabling configuration override by clearml
```
**ClearML** tracks and reports the training step with each instance of the newly cloned and executed training Task.
### Best Model Step
The best model step depends upon both training nodes completing, and overrides a parameter with the two training nodes' Task IDs.
```python
pipe.add_step(
    name='pick_best',
    parents=['train_1', 'train_2'],
    base_task_project='Tabular Example',
    base_task_name='pick best model',
    parameter_override={
        'General/train_tasks_ids': '[${train_1.id}, ${train_2.id}]'
    }
)
```
The IDs of the training Tasks from the steps named `train_1` and `train_2` are passed to the best model Task. They take the form `${<stage-name>.<part-of-Task>}`.
In the best model Task, the `train_tasks_ids` parameter is overridden with the Task IDs of the two training tasks.
```python
configuration_dict = {
    'train_tasks_ids': ['c9bff3d15309487a9e5aaa00358ff091', 'c9bff3d15309487a9e5aaa00358ff091']
}
configuration_dict = task.connect(configuration_dict)  # enabling configuration override by clearml
```
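Building on the connected `configuration_dict` above, a hypothetical sketch of how the best-model Task might compare the trained models (the metric title and series names are assumptions):

```python
from clearml import Task

tasks = [Task.get_task(task_id=tid) for tid in configuration_dict['train_tasks_ids']]
# get_last_scalar_metrics() returns {title: {series: {'last': value, ...}}}
best = max(tasks, key=lambda t: t.get_last_scalar_metrics()['accuracy']['total']['last'])
print(f'Best model Task: {best.id}')
```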
The logs show the Task ID and accuracy for the best model in **RESULTS** **>** **LOGS**.
![image](../../../../../img/tabular_training_pipeline_02.png)
The link to the model details is in **ARTIFACTS** **>** **Output Model**.
![image](../../../../../img/tabular_training_pipeline_03.png)
### Pipeline Start, Wait, and Cleanup
Once all steps are added to the pipeline, start it. Wait for it to complete. Finally, clean up the pipeline processes.
```python
# Starting the pipeline (in the background)
pipe.start()
# Wait until pipeline terminates
pipe.wait()
# cleanup everything
pipe.stop()
```
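For debugging, newer clearml versions can also run the controller logic in the local process instead of launching it in the background (a minimal sketch; availability depends on the installed clearml version):

```python
# Alternative to pipe.start(): run the controller locally;
# steps are still dispatched to their execution queues
pipe.start_locally()
```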
<details className="cml-expansion-panel info">
<summary className="cml-expansion-panel-summary">ClearML tracks and reports the pipeline's execution</summary>