The example script demonstrates how to integrate ClearML into code that uses the [PyTorch Distributed Communications Package](https://pytorch.org/docs/stable/distributed.html) (`torch.distributed`).
The script initializes a main Task and spawns subprocesses, each running an instance of that Task.
The Task in each subprocess trains a neural network over a partitioned dataset (the torchvision built-in [MNIST](https://pytorch.org/vision/stable/datasets.html#mnist)
dataset), and reports (uploads) the following to the main Task (see the sketch after this list):
* Artifacts - A dictionary containing different key-value pairs.
* Scalars - Loss reported as a scalar during training in each Task in a subprocess.
* Hyperparameters - Hyperparameters created in each Task are added to the hyperparameters in the main Task.
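
At a high level, that flow can be sketched as follows. This is a minimal outline rather than the example script itself: the project and task names, the worker count, and the rendezvous settings are illustrative, and the dataset partitioning and training loop are elided.

```python
import os

import torch.distributed as dist
import torch.multiprocessing as mp
from clearml import Task


def worker(rank, world_size):
    # Rendezvous settings for the process group (illustrative values)
    os.environ['MASTER_ADDR'] = '127.0.0.1'
    os.environ['MASTER_PORT'] = '29500'
    dist.init_process_group('gloo', rank=rank, world_size=world_size)

    # Task.current_task() returns the main Task created in the parent
    # process, so everything reported here lands in that single Task
    task = Task.current_task()
    # ... train over this worker's partition of MNIST and report to `task` ...

    dist.destroy_process_group()


if __name__ == '__main__':
    # The main Task is created once, in the parent process
    Task.init(project_name='examples', task_name='pytorch distributed example')
    world_size = 4
    mp.spawn(worker, args=(world_size,), nprocs=world_size)
```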
Each Task in a subprocess references the main Task by calling [`Task.current_task()`](../../references/sdk/task.md#taskcurrent_task), which always returns the main Task.
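
For instance, each subprocess can report its training loss through the main Task's logger along these lines (a sketch; the `report_loss` helper and the title/series names are illustrative). Reporting with a single title and a rank-specific series keeps all the per-worker loss curves together on the main Task:

```python
import torch.distributed as dist
from clearml import Task


def report_loss(loss_value, iteration):
    # The logger belongs to the main Task, retrieved via Task.current_task().
    # Same title, rank-specific series: the per-worker loss curves are
    # plotted together on the main Task.
    Task.current_task().get_logger().report_scalar(
        title='loss',
        series='worker {:02d}'.format(dist.get_rank()),
        value=loss_value,
        iteration=iteration,
    )
```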
The example uploads a dictionary as an artifact in the main Task by calling [`Task.upload_artifact()`](../../references/sdk/task.md#upload_artifact)
on [`Task.current_task()`](../../references/sdk/task.md#taskcurrent_task) (the main Task). The dictionary contains the rank of the subprocess (from [`dist.get_rank()`](https://pytorch.org/docs/stable/distributed.html#torch.distributed.get_rank)), making each artifact unique.

Each subprocess also connects a parameter dictionary to the Task using [`Task.connect()`](../../references/sdk/task.md#connect). Since the method is called on [`Task.current_task()`](../../references/sdk/task.md#taskcurrent_task), the hyperparameters are logged in the main Task. A different hyperparameter key is used in each subprocess, so the entries do not overwrite each other in the main Task.
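
Both reports might look like the following sketch (the artifact name and parameter key are illustrative; it assumes the code runs inside a subprocess after the process group is initialized):

```python
import torch.distributed as dist
from clearml import Task

# Inside a subprocess, after dist.init_process_group() has run:
rank = dist.get_rank()

# Artifact: the rank in the name (and payload) makes each upload unique
Task.current_task().upload_artifact(
    name='worker {} dict'.format(rank),
    artifact_object={'worker_rank': rank},
)

# Hyperparameters: a rank-specific key, so entries from different
# subprocesses coexist in the main Task instead of overwriting each other
Task.current_task().connect({'worker_{}_param'.format(rank): 'some value'})
```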