Mirror of https://github.com/clearml/clearml (synced 2025-04-02 00:26:05 +00:00)

Fix links in jupyter notebooks (#505)

commit bb15ea6666 (parent 600ded8591)
@@ -35,7 +35,7 @@
     "\n",
     "A Configuration dictionary is connected to the task using `Task.connect`. This will enable the pipeline controller to access this task's configurations and override the values when the pipeline is executed.\n",
     "\n",
-    "Notice in the [pipeline controller script](tabular_ml_pipeline.ipynb) that when this task is added as a step in the pipeline, the value of `train_task_ids` is overridden. "
+    "Notice in the [pipeline controller script](https://github.com/allegroai/clearml/blob/master/examples/frameworks/pytorch/notebooks/table/tabular_ml_pipeline.ipynb) that when this task is added as a step in the pipeline, the value of `train_task_ids` is overridden. "
    ]
   },
   {
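The override mechanics this cell describes can be sketched locally, without a ClearML server. The helper below and the placeholder task ids are hypothetical; in ClearML itself, `parameter_override` keys for a dict connected via `task.connect` are namespaced under the `General/` section, and the controller rewrites them in the cloned task:

```python
def apply_override(connected_config, parameter_override):
    """Mimic how the pipeline controller overrides a cloned step's connected dict."""
    merged = dict(connected_config)
    for key, value in parameter_override.items():
        # Dicts connected via task.connect are namespaced under "General/"
        section, _, name = key.partition("/")
        if section == "General" and name in merged:
            merged[name] = value
    return merged

# The base task connects this placeholder via task.connect(configuration_dict)
configuration_dict = {"train_task_ids": []}

# The controller's step definition would pass an override like this
override = {"General/train_task_ids": ["<train_1 id>", "<train_2 id>"]}

print(apply_override(configuration_dict, override))
```

The base task keeps its empty placeholder; only the clone the controller launches sees the overridden value.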
@@ -39,7 +39,7 @@
     "## Configure Task\n",
     "Instantiate a ClearML Task using `Task.init`. \n",
     "\n",
-    "A Configuration dictionary is connected to the task using `Task.connect`. This will enable the pipeline controller to access this task's configurations and override the value when the pipeline is executed. "
+    "A Configuration dictionary is connected to the task using `Task.connect`. This will enable the [pipeline controller](https://github.com/allegroai/clearml/blob/master/examples/frameworks/pytorch/notebooks/table/tabular_ml_pipeline.ipynb) to access this task's configurations and override the value when the pipeline is executed. "
    ]
   },
   {
@@ -10,9 +10,9 @@
     "\n",
     "The pipeline uses four tasks (each Task is created using a different notebook):\n",
     "* The pipeline controller Task (the current task)\n",
-    "* A data preprocessing Task ([preprocessing_and_encoding.ipynb](preprocessing_and_encoding.ipynb))\n",
-    "* A training Task [(train_tabular_predictor.ipynb](train_tabular_predictor.ipynb))\n",
-    "* A comparison Task ([pick_best_model.ipynb](pick_best_model.ipynb))\n",
+    "* A data preprocessing Task ([preprocessing_and_encoding.ipynb](https://github.com/allegroai/clearml/blob/master/examples/frameworks/pytorch/notebooks/table/preprocessing_and_encoding.ipynb))\n",
+    "* A training Task [(train_tabular_predictor.ipynb](https://github.com/allegroai/clearml/blob/master/examples/frameworks/pytorch/notebooks/table/train_tabular_predictor.ipynb))\n",
+    "* A comparison Task ([pick_best_model.ipynb](https://github.com/allegroai/clearml/blob/master/examples/frameworks/pytorch/notebooks/table/pick_best_model.ipynb))\n",
     "\n",
     "In this pipeline example, the data preprocessing Task and training Task are each added to the pipeline twice (each is in two steps). When the pipeline runs, the data preprocessing Task and training Task are cloned twice, and the newly cloned Tasks execute. The Task they are cloned from, called the base Task, does not execute. The pipeline controller passes different data to each cloned Task by overriding parameters. In this way, the same Task can run more than once in the pipeline, but with different data.\n"
    ]
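The clone-twice structure that cell describes can be sketched as plain data. Step names follow the notebook; the base-task names are assumptions, and each entry would correspond to a `PipelineController.add_step` call in the real controller:

```python
from collections import Counter

# Each dict stands in for one add_step call; a base task listed twice is
# cloned twice at run time, and the base itself never executes.
steps = [
    {"name": "preprocessing_1", "base_task": "tabular preprocessing", "parents": []},
    {"name": "preprocessing_2", "base_task": "tabular preprocessing", "parents": []},
    {"name": "train_1", "base_task": "train tabular predictor", "parents": ["preprocessing_1"]},
    {"name": "train_2", "base_task": "train tabular predictor", "parents": ["preprocessing_2"]},
    {"name": "pick_best", "base_task": "pick best model", "parents": ["train_1", "train_2"]},
]

clones = Counter(step["base_task"] for step in steps)
print(clones)
```

Counting clones per base task makes the reuse explicit: both the preprocessing and the training base tasks are instantiated twice, the comparison task once.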
@@ -22,26 +22,14 @@
    "metadata": {},
    "source": [
     "## Prerequisite\n",
-    "Make sure to download the data needed for this task. See the [download_and_split.ipynb](download_and_split.ipynb) notebook"
+    "Make sure to download the data needed for this task. See the [download_and_split.ipynb](https://github.com/allegroai/clearml/blob/master/examples/frameworks/pytorch/notebooks/table/download_and_split.ipynb) notebook"
    ]
   },
   {
    "cell_type": "code",
    "execution_count": null,
-   "metadata": {
-    "pycharm": {
-     "is_executing": true
-    }
-   },
-   "outputs": [
-    {
-     "name": "stdout",
-     "output_type": "stream",
-     "text": [
-      "Requirement already satisfied: pip in /home/revital/PycharmProjects/venvs/clearml/lib/python3.8/site-packages (21.3.1)\r\n"
-     ]
-    }
-   ],
+   "metadata": {},
+   "outputs": [],
    "source": [
     "# pip install with locked versions\n",
     "! pip install -U pip\n",
@@ -103,7 +91,7 @@
    "metadata": {},
    "source": [
     "## Add Preprocessing Step\n",
-    "Two preprocessing nodes are added to the pipeline: `preprocessing_1` and `preprocessing_2`. These two nodes will be cloned from the same base task, created from the [preprocessing_and_encoding.ipynb](preprocessing_and_encoding.ipynb) script. These steps will run concurrently.\n",
+    "Two preprocessing nodes are added to the pipeline: `preprocessing_1` and `preprocessing_2`. These two nodes will be cloned from the same base task, created from the [preprocessing_and_encoding.ipynb](https://github.com/allegroai/clearml/blob/master/examples/frameworks/pytorch/notebooks/table/preprocessing_and_encoding.ipynb) script. These steps will run concurrently.\n",
     "\n",
     "The preprocessing data task fills in values of NaN data based on the values of the parameters named `fill_categorical_NA` and `fill_numerical_NA`. It will connect a parameter dictionary to the task which contains keys with those same names. The pipeline will override the values of those keys when the pipeline executes the cloned tasks of the base Task. In this way, two sets of data are created in the pipeline."
    ]
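The effect of those two parameters can be illustrated with a toy stand-in. The real preprocessing notebook works on pandas DataFrames; the helper below is hypothetical and only shows the flag-gated fill behavior:

```python
def fill_na(values, fill_value, enabled):
    """Replace missing entries with fill_value when the matching flag is on."""
    if not enabled:
        return values
    return [fill_value if v is None else v for v in values]

# The connected dict whose values the pipeline overrides per cloned task
# (these particular values are hypothetical)
config = {"fill_categorical_NA": True, "fill_numerical_NA": False}

print(fill_na(["cat", None, "dog"], "Unknown", config["fill_categorical_NA"]))
print(fill_na([1.0, None, 3.0], 0.0, config["fill_numerical_NA"]))
```

Because each cloned task receives its own flag values, the two preprocessing steps emit two differently prepared datasets from the same input.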
@@ -148,7 +136,7 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "Two training nodes are added to the pipeline: `train_1` and `train_2`. These two nodes will be cloned from the same base task, created from the [train_tabular_predictor.ipynb](train_tabular_predictor.ipynb) script.\n",
+    "Two training nodes are added to the pipeline: `train_1` and `train_2`. These two nodes will be cloned from the same base task, created from the [train_tabular_predictor.ipynb](https://github.com/allegroai/clearml/blob/master/examples/frameworks/pytorch/notebooks/table/train_tabular_predictor.ipynb) script.\n",
     "\n",
     "Each training node depends upon the completion of one preprocessing node. The `parents` parameter is a list of step names indicating all steps that must complete before the new step starts. In this case, `preprocessing_1` must complete before `train_1` begins, and `preprocessing_2` must complete before `train_2` begins.\n",
     "\n",
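The gating that `parents` expresses can be sketched without a server. The helper below is a hypothetical simulation of the scheduling rule, not ClearML's scheduler itself:

```python
def runnable(step_parents, completed):
    """A step may start only once every parent step has completed."""
    return all(parent in completed for parent in step_parents)

parents = {"train_1": ["preprocessing_1"], "train_2": ["preprocessing_2"]}
completed = {"preprocessing_1"}  # suppose only the first preprocessing step is done

print(runnable(parents["train_1"], completed))  # True
print(runnable(parents["train_2"], completed))  # False
```

Since the two training steps gate on different parents, each launches as soon as its own preprocessing clone finishes, independently of the other branch.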
@@ -48,7 +48,7 @@
     "\n",
     "A Configuration dictionary is connected to the task using `Task.connect`. This will enable the pipeline controller to access this task's configurations and override the values when the pipeline is executed.\n",
     "\n",
-    "Notice in the [pipeline controller script](tabular_ml_pipeline.ipynb) that when this task is added as a step in the pipeline, the value of `data_task_id` is overridden with the ID of another task in the pipeline. "
+    "Notice in the [pipeline controller script](https://github.com/allegroai/clearml/blob/master/examples/frameworks/pytorch/notebooks/table/tabular_ml_pipeline.ipynb) that when this task is added as a step in the pipeline, the value of `data_task_id` is overridden with the ID of another task in the pipeline. "
    ]
   },
   {
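On the step-script side, the pattern reads roughly as follows. `StubTask`, the override value, and the `test_size` key are stand-ins so the flow runs without a ClearML server; the real notebook uses `clearml.Task.init` and `task.connect`:

```python
class StubTask:
    """Stand-in for clearml.Task: connect() hands back the dict, including any
    values the pipeline controller overrode in the cloned task."""

    def __init__(self, overrides=None):
        self._overrides = overrides or {}

    def connect(self, config):
        config.update(self._overrides)
        return config

# In the real notebook: task = Task.init(project_name=..., task_name=...)
task = StubTask(overrides={"data_task_id": "id-of-preprocessing-clone"})

configuration_dict = {"data_task_id": "", "test_size": 0.1}
configuration_dict = task.connect(configuration_dict)  # read values only after this

print(configuration_dict["data_task_id"])
```

The important habit the example encodes: read configuration values only from the dict returned by `connect`, so a clone launched by the controller picks up the overridden `data_task_id` instead of the placeholder.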