mirror of https://github.com/clearml/clearml-docs
synced 2025-06-26 18:17:44 +00:00

Small edits (#256)
@@ -10,7 +10,7 @@ example demonstrates:
 
 This example accomplishes a task pipe by doing the following:
 
-1. Creating the template Task which is named `Toy Base Task`. It must be stored in **ClearML Server** before instances of
+1. Creating the template Task which is named `Toy Base Task`. It must be stored in ClearML Server before instances of
    it can be created. To create it, run another ClearML example script, [toy_base_task.py](https://github.com/allegroai/clearml/blob/master/examples/automation/toy_base_task.py).
    The template Task has a parameter dictionary, which is connected to the Task: `{'Example_Param': 1}`.
 1. Back in `programmatic_orchestration.py`, creating a parameter dictionary, which is connected to the Task by calling [Task.connect](../../references/sdk/task.md#connect)

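The hunk above edits the programmatic orchestration example, whose template Task carries a connected parameter dictionary. As a hypothetical, SDK-free sketch of the behavior `Task.connect` provides (the `connect` function below is a stand-in, not the ClearML API): a cloned run receives edited parameter values back in place of the defaults.

```python
# Hypothetical stand-in for the behavior described above: Task.connect registers
# a parameter dict with the server, and a cloned run gets edited values back.
# This does NOT use the clearml SDK; all names here are illustrative only.
def connect(defaults, overrides):
    """Merge override values (e.g. edited in the UI on a clone) into the defaults."""
    merged = dict(defaults)
    merged.update({k: v for k, v in overrides.items() if k in defaults})
    return merged

template_params = {'Example_Param': 1}  # the template Task's dict from the example
cloned_params = connect(template_params, {'Example_Param': 3})
print(cloned_params)  # {'Example_Param': 3}
```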
@@ -33,7 +33,7 @@ from clearml import Task
 
 task = Task.init(project_name="myProject", task_name="myExperiment")
 ```
 
-When the code runs, it initializes a Task in **ClearML Server**. A hyperlink to the experiment's log is output to the console.
+When the code runs, it initializes a Task in ClearML Server. A hyperlink to the experiment's log is output to the console.
 
     CLEARML Task: created new task id=c1f1dc6cf2ee4ec88cd1f6184344ca4e
     CLEARML results page: https://app.clear.ml/projects/1c7a45633c554b8294fa6dcc3b1f2d4d/experiments/c1f1dc6cf2ee4ec88cd1f6184344ca4e/output/log

@@ -269,7 +269,7 @@ By hovering over a step or path between nodes, you can view information about it
 1. Run the pipeline controller one of the following two ways:
 
    * Run the notebook [tabular_ml_pipeline.ipynb](https://github.com/allegroai/clearml/blob/master/examples/frameworks/pytorch/notebooks/table/tabular_ml_pipeline.ipynb).
-   * Remotely execute the Task - If the Task `tabular training pipeline` which is associated with the project `Tabular Example` already exists in **ClearML Server**, clone it and enqueue it to execute.
+   * Remotely execute the Task - If the Task `tabular training pipeline` which is associated with the project `Tabular Example` already exists in ClearML Server, clone it and enqueue it to execute.
 
 
 :::note

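The second bullet in the hunk above mentions cloning the pipeline Task and enqueuing it to execute. A minimal sketch of that flow with the ClearML SDK (assumes a configured `clearml.conf` and an agent listening on a queue named `default`; the import sits inside the function so the sketch can be loaded without ClearML installed):

```python
def clone_and_enqueue(project_name, task_name, queue_name='default'):
    # Import inside the function so this sketch loads without ClearML installed.
    from clearml import Task
    # Look up the existing Task by project/name, clone it, and enqueue the
    # clone so an agent polling `queue_name` picks it up and executes it.
    template = Task.get_task(project_name=project_name, task_name=task_name)
    cloned = Task.clone(source_task=template)
    Task.enqueue(cloned, queue_name=queue_name)
    return cloned.id
```

For the example above the call would look like `clone_and_enqueue('Tabular Example', 'tabular training pipeline')`.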
@@ -35,7 +35,9 @@ All of these artifacts appear in the main Task, **ARTIFACTS** **>** **OTHER**.
 
 ## Scalars
 
-We report loss to the main Task by calling the [Logger.report_scalar](../../../references/sdk/logger.md#report_scalar) method on `Task.current_task().get_logger`, which is the logger for the main Task. Since we call `Logger.report_scalar` with the same title (`loss`), but a different series name (containing the subprocess' `rank`), all loss scalar series are logged together.
+Report loss to the main Task by calling the [Logger.report_scalar](../../../references/sdk/logger.md#report_scalar) method
+on `Task.current_task().get_logger`, which is the logger for the main Task. Since `Logger.report_scalar` is called with the
+same title (`loss`), but a different series name (containing the subprocess' `rank`), all loss scalar series are logged together.
 
     Task.current_task().get_logger().report_scalar(
         'loss', 'worker {:02d}'.format(dist.get_rank()), value=loss.item(), iteration=i)

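The snippet in the hunk above reports every subprocess' loss under the same title with a per-rank series name. A tiny standalone illustration of just that naming scheme (no clearml or torch.distributed required; the helper is illustrative):

```python
# Same title ('loss') plus a distinct per-rank series name is what groups
# all workers' loss curves onto a single plot in the ClearML UI.
def series_name(rank):
    return 'worker {:02d}'.format(rank)

print([series_name(r) for r in range(4)])
# ['worker 00', 'worker 01', 'worker 02', 'worker 03']
```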
@@ -7,7 +7,7 @@ compute provided by google.
 
 Users can transform a Google Colab instance into an available resource in ClearML using [ClearML Agent](../../clearml_agent.md).
 
-In this tutorial, we will go over how to create a ClearML worker node in a Google Colab notebook. Once the worker is up
+This tutorial goes over how to create a ClearML worker node in a Google Colab notebook. Once the worker is up
 and running, users can send Tasks to be executed on the Google Colab's HW.
 
 ## Prerequisites

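To make the worker-node setup described above concrete, a minimal sketch of the commands a Colab cell would run (the queue name `default` is an assumption, and ClearML credentials must already be configured via `clearml.conf` or environment variables):

```shell
# Install the agent (the clearml SDK is pulled in as a dependency).
pip install clearml-agent
# Start a worker polling the 'default' queue; Tasks enqueued there
# will execute on this Colab instance's hardware.
clearml-agent daemon --queue default
```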
@@ -68,7 +68,7 @@ def job_complete_callback(
 
 ## Initialize the Optimization Task
 
-Initialize the Task, which will be stored in **ClearML Server** when the code runs. After the code runs at least once, it
+Initialize the Task, which will be stored in ClearML Server when the code runs. After the code runs at least once, it
 can be [reproduced](../../../webapp/webapp_exp_reproducing.md) and [tuned](../../../webapp/webapp_exp_tuning.md).
 
 We set the Task type to optimizer, and create a new experiment (and Task object) each time the optimizer runs (`reuse_last_task_id=False`).

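The hunk above describes initializing the optimizer's controller Task. A sketch of the corresponding `Task.init` call (the project and task names are placeholders, and the import sits inside the function so the sketch loads without ClearML installed):

```python
def init_optimization_task(project_name='Hyper-Parameter Optimization',
                           task_name='Automatic HPO'):
    # Import inside the function so this sketch loads without ClearML installed.
    from clearml import Task
    # task_type=optimizer marks this as a controller Task in the UI;
    # reuse_last_task_id=False creates a new experiment (and Task object)
    # each time the optimizer runs.
    return Task.init(project_name=project_name,
                     task_name=task_name,
                     task_type=Task.TaskTypes.optimizer,
                     reuse_last_task_id=False)
```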
@@ -92,7 +92,7 @@ Create an arguments dictionary that contains the ID of the Task to optimize, and
 optimizer will run as a service, see [Running as a service](#running-as-a-service).
 
 In this example, an experiment named **Keras HP optimization base** is being optimized. The experiment must have run at
-least once so that it is stored in **ClearML Server**, and, therefore, can be cloned.
+least once so that it is stored in ClearML Server, and, therefore, can be cloned.
 
 Since the arguments dictionary is connected to the Task, after the code runs once, the `template_task_id` can be changed
 to optimize a different experiment.

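The hunk above covers the arguments dictionary holding the ID of the experiment to optimize. A runnable sketch of that pattern (the placeholder ID, the `run_as_service` key, and the lookup fallback are illustrative; in the real example the ID would be resolved from the stored `Keras HP optimization base` experiment):

```python
# Arguments dictionary as described above; once connected to the Task,
# 'template_task_id' can be edited on later runs to optimize a different experiment.
args = {
    'template_task_id': None,   # filled in once the base experiment exists
    'run_as_service': False,    # see "Running as a service"
}

if not args['template_task_id']:
    # The real example resolves this via the SDK (e.g. Task.get_task(...).id);
    # a placeholder stands in here so the sketch runs standalone.
    args['template_task_id'] = '<ID of Keras HP optimization base>'

print(args)
```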
@@ -9,7 +9,7 @@ example script from ClearML's GitHub repo:
 
 * Setting an output destination for model checkpoints (snapshots).
 * Explicitly logging a scalar, other (non-scalar) data, and logging text.
-* Registering an artifact, which is uploaded to **ClearML Server**, and ClearML logs changes to it.
+* Registering an artifact, which is uploaded to [ClearML Server](../../deploying_clearml/clearml_server.md), and ClearML logs changes to it.
 * Uploading an artifact, which is uploaded, but changes to it are not logged.
 
 ## Prerequisites

@@ -202,7 +202,7 @@ logger.report_text(
 
 ## Step 3: Registering Artifacts
 
-Registering an artifact uploads it to **ClearML Server**, and if it changes, the change is logged in **ClearML Server**.
+Registering an artifact uploads it to ClearML Server, and if it changes, the change is logged in ClearML Server.
 Currently, ClearML supports Pandas DataFrames as registered artifacts.
 
 ### Register the Artifact

@@ -249,7 +249,7 @@ sample = Task.current_task().get_registered_artifacts()['Test_Loss_Correct'].sam
 
 ## Step 4: Uploading Artifacts
 
-Artifact can be uploaded to the **ClearML Server**, but changes are not logged.
+Artifact can be uploaded to the ClearML Server, but changes are not logged.
 
 Supported artifacts include:
 * Pandas DataFrames

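Steps 3 and 4 in the hunks above differ only in whether later changes are tracked. A sketch contrasting the two calls (duck-typed on a `task` argument so nothing here needs ClearML installed; the artifact names are illustrative, and `task` is assumed to be a `clearml.Task`):

```python
def register_df(task, df):
    # Registered artifact: uploaded to ClearML Server, and subsequent changes
    # to the DataFrame are detected and re-logged. (Pandas DataFrames only.)
    task.register_artifact(name='Test_Loss_Correct', artifact=df)

def upload_df(task, df):
    # Uploaded artifact: a one-time snapshot; later changes are not logged.
    task.upload_artifact(name='Loss', artifact_object=df)
```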