Small edits (#663)
@@ -166,7 +166,7 @@ The Task must be connected to a git repository, since currently single script de
 | `--packages`| Additional packages to add. Supports version numbers. Example: `--packages torch==1.7 tqdm` | Previously added packages.|
 | `--git-credentials` | If `True`, local `.git-credentials` file is sent to the interactive session.| `false`|
 | `--docker`| Select the docker image to use in the interactive session |`nvidia/cuda:10.1-runtime-ubuntu18.04` or previously used docker image|
-| `--docker-args ` | Add additional arguments for the docker image to use in the interactive session | `none` or the previously used docker-args |
+| `--docker-args` | Add additional arguments for the docker image to use in the interactive session | `none` or the previously used docker-args |
 | `--debugging-session` | Pass an existing Task ID to create a copy of the experiment on a remote machine and launch Jupyter/SSH for interactive access. Example: `--debugging-session <task_id>`| `none`|
 | `--queue`| Select the queue to launch the interactive session on | Previously used queue|
 | `--interactive`, `-I` | Open the SSH session directly. Note that quitting the SSH session will not shut down the remote session|`None`|
@@ -189,7 +189,7 @@ The Task must be connected to a git repository, since currently single script de
 | `--password`| Set your own SSH password for the interactive session | A randomly generated password or a previously used one |
 | `--force_dropbear`| Force using `dropbear` instead of SSHd |`None`|
 | `--version`| Display the clearml-session utility version| N/A|
-| `--verbose ` | Increase verbosity of logging | `none` |
+| `--verbose` | Increase verbosity of logging | `none` |
 | `--yes`, `-y`| Automatic yes to prompts; assume "yes" as answer to all prompts and run non-interactively |N/A|

 </div>
@@ -138,7 +138,7 @@ clearml-agent execute [-h] --id TASK_ID [--log-file LOG_FILE] [--disable-monitor
 |`--log-file`| The log file to which Task execution output (stdout / stderr) is written.|<img src="/docs/latest/icons/ico-optional-yes.svg" alt="Yes" className="icon size-md center-md" />|
 |`--log-level`| SDK log level. The values are:<ul><li>`DEBUG`</li><li>`INFO`</li><li>`WARN`</li><li>`WARNING`</li><li>`ERROR`</li><li>`CRITICAL`</li></ul>|<img src="/docs/latest/icons/ico-optional-yes.svg" alt="Yes" className="icon size-md center-md" />|
 |`-O`| Compile optimized pyc code (see Python documentation). Repeat for more optimization.|<img src="/docs/latest/icons/ico-optional-yes.svg" alt="Yes" className="icon size-md center-md" />|
-|`--require-queue`| If the specified task is not queued, the execution will fail. (Used for 3rd party scheduler integration, e.g. K8s, SLURM, etc.)|<img src="/docs/latest/icons/ico-optional-yes.svg" alt="Yes" className="icon size-md center-md" />|
+|`--require-queue`| If the specified task is not queued, the execution will fail (used for 3rd party scheduler integration, e.g. K8s, SLURM, etc.)|<img src="/docs/latest/icons/ico-optional-yes.svg" alt="Yes" className="icon size-md center-md" />|
 |`--standalone-mode`| Do not use any network connections; assume everything is pre-installed|<img src="/docs/latest/icons/ico-optional-yes.svg" alt="Yes" className="icon size-md center-md" />|

 ## list
@@ -98,7 +98,7 @@ AZURE_STORAGE_KEY

 1. Take down the serving containers (`docker-compose` or k8s)
 1. Update the `clearml-serving` CLI: `pip3 install -U clearml-serving`
-1. Re-add a single existing endpoint with `clearml-serving model add ... ` (press yes when asked). It will upgrade the
+1. Re-add a single existing endpoint with `clearml-serving model add ...` (press yes when asked). It will upgrade the
    `clearml-serving` session definitions
 1. Pull the latest serving containers (`docker-compose pull ...` or k8s)
 1. Re-spin the serving containers (`docker-compose` or k8s)
@@ -214,7 +214,7 @@ from `system_site_packages`

 **`agent.extra_docker_arguments`** (*[string]*)

-* Optional arguments to pass to the Docker image. These are local for this agent, and will not be updated in the experiment's `docker_cmd` section. For example, ` ["--ipc=host", ]`.
+* Optional arguments to pass to the Docker image. These are local for this agent, and will not be updated in the experiment's `docker_cmd` section. For example, `["--ipc=host", ]`.

 ---
@@ -707,7 +707,7 @@ This configuration option is experimental, and has not been vigorously tested, s
 **`api.credentials`** (*dict*)

 * Dictionary of API credentials.
-  Alternatively, specify the environment variable ` CLEARML_API_ACCESS_KEY / CLEARML_API_SECRET_KEY` to override these keys.
+  Alternatively, specify the environment variables `CLEARML_API_ACCESS_KEY` / `CLEARML_API_SECRET_KEY` to override these keys.

 ---
@@ -14,6 +14,6 @@ Solutions combined with the clearml-server control plane.

 ## YouTube Playlist

-The first video in our YouTube Getting Started playlist covers these modules in more detail, feel free to check out the video below.
+The first video in the ClearML YouTube **Getting Started** playlist covers these modules in more detail; feel free to check out the video below.

 [](https://www.youtube.com/watch?v=s3k9ntmQmD4&list=PLMdIlCuMqSTnoC45ME5_JnsJX0zWqDdlO&index=1)
@@ -41,8 +41,8 @@ yields the best performing model for your task!
 - You should continue coding while experiments are being executed without interrupting them.
 - Stop optimizing your code because your machine struggles, and run it on a beefier machine (cloud / on-prem).

-Visualization and comparisons dashboards keep your sanity at bay! In this stage we usually have a docker container with all the binaries
-that we need.
+Visualization and comparison dashboards keep insanity at bay! In this stage you usually have a docker container with all the binaries
+that you need.
 - [ClearML SDK](../../clearml_sdk/clearml_sdk.md) ensures that all the metrics, parameters and Models are automatically logged and can later be
   accessed, [compared](../../webapp/webapp_exp_comparing.md) and [tracked](../../webapp/webapp_exp_track_visual.md).
 - [ClearML Agent](../../clearml_agent.md) does the heavy lifting. It reproduces the execution environment, clones your code,
@@ -66,11 +66,11 @@ When you access the Dataset, it automatically merges the files from all parent v
 in a fully automatic and transparent process, as if the files were always part of the requested Dataset.

 ### Training
-We can now train our model with the **latest** Dataset we have in the system.
-We will do that by getting the instance of the Dataset based on the `latest` tag
-(if by any chance we have two Datasets with the same tag we will get the newest).
-Once we have the dataset we can request a local copy of the data. All local copy requests are cached,
-which means that if we are accessing the same dataset multiple times we will not have any unnecessary downloads.
+You can now train your model with the **latest** Dataset you have in the system, by getting the instance of the Dataset
+based on the `latest` tag
+(if by any chance you have two Datasets with the same tag you will get the newest).
+Once you have the dataset you can request a local copy of the data. All local copy requests are cached,
+which means that if you access the same dataset multiple times you will not have any unnecessary downloads.

 ```python
 # create a task for the model training
@@ -87,7 +87,7 @@ dataset_folder = dataset.get_local_copy()

 ## Building the Pipeline

-Now that we have the data creation step, and the data training step, let's create a pipeline that when executed,
+Now that you have the data creation step and the data training step, create a pipeline that, when executed,
 will first run the data creation step and then the training step.
 It is important to remember that pipelines are Tasks by themselves and can also be automated by other pipelines (i.e. pipelines of pipelines).
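For context, the flow these two sections document can be sketched as follows. This is a hedged illustration, not code from the docs: the project, task, and queue names are invented for the example.

```python
from clearml import Dataset
from clearml.automation.controller import PipelineController

# Grab the newest Dataset tagged "latest" and a cached local copy of its files
dataset = Dataset.get(dataset_project="data_demo", dataset_tags=["latest"])
dataset_folder = dataset.get_local_copy()  # cached: repeated calls don't re-download

# A pipeline is itself a Task, so it can be scheduled or nested in other pipelines
pipe = PipelineController(name="demo pipeline", project="data_demo", version="1.0")
pipe.add_step(name="create_dataset",
              base_task_project="data_demo", base_task_name="dataset creation")
pipe.add_step(name="train_model", parents=["create_dataset"],
              base_task_project="data_demo", base_task_name="model training")
pipe.start(queue="default")
```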
@@ -28,7 +28,7 @@ moved to be executed by a stronger machine.

 During the execution of the example script, the code does the following:
 * Uses ClearML's automatic and explicit logging.
-* Creates an experiment named `Remote_execution PyTorch MNIST train`, which is associated with the `examples` project.
+* Creates an experiment named `Remote_execution PyTorch MNIST train` in the `examples` project.

 ## Scalars
@@ -22,8 +22,8 @@ script. This experiment must be executed first, so it will be stored in the serv
 1. Setting the newly cloned Task's parameters to the search values in the parameter dictionary (Step 1). See [Task.set_parameters](../../references/sdk/task.md#set_parameters).
 1. Enqueuing the newly cloned Task to execute. See [Task.enqueue](../../references/sdk/task.md#taskenqueue).

-When the example script runs, it creates an experiment named `Random Hyper-Parameter Search Example` which is associated
-with the `examples` project. This starts the parameter search, and creates the experiments:
+When the example script runs, it creates an experiment named `Random Hyper-Parameter Search Example` in
+the `examples` project. This starts the parameter search, and creates the experiments:

 * `Keras HP optimization base 0`
 * `Keras HP optimization base 1`
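The clone-and-enqueue loop these steps describe, as a hedged sketch — the template task name, parameter key, and queue below are placeholders, not the example's exact values:

```python
from clearml import Task

template = Task.get_task(project_name="examples", task_name="Keras HP optimization base")

for i, lr in enumerate([0.01, 0.001, 0.0001]):
    cloned = Task.clone(source_task=template, name=f"Keras HP optimization base {i}")
    # The "General/" section prefix depends on how the template connected its parameters
    cloned.set_parameters({"General/learning_rate": lr})
    Task.enqueue(cloned, queue_name="default")
```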
@@ -22,4 +22,4 @@ This example accomplishes a task pipe by doing the following:
 1. Setting the newly cloned Task's parameters to the search values in the parameter dictionary (Step 2). See [Task.set_parameters](../../references/sdk/task.md#set_parameters).
 1. Enqueuing the newly cloned Task to execute. See [Task.enqueue](../../references/sdk/task.md#taskenqueue).

-When the example script runs, it creates an instance of the template experiment, named `Auto generated cloned task` which is associated with the `examples` project. In the instance, the value of the customized parameter, `Example_Param` changed to `3`. You can see it in **CONFIGURATION** **>** **HYPERPARAMETERS**.
+When the example script runs, it creates an instance of the template experiment, named `Auto generated cloned task`, in the `examples` project. In the instance, the value of the customized parameter `Example_Param` is changed to `3`. You can see it in **CONFIGURATION** **>** **HYPERPARAMETERS**.
@@ -17,7 +17,7 @@ dataset), and reports (uploads) the following to the main Task:
 Each Task in a subprocess references the main Task by calling [Task.current_task](../../references/sdk/task.md#taskcurrent_task), which always returns
 the main Task.

-When the script runs, it creates an experiment named `test torch distributed`, which is associated with the `examples` project.
+When the script runs, it creates an experiment named `test torch distributed` in the `examples` project.

 ## Artifacts
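A minimal sketch of the pattern described above — every subprocess reporting into the single main Task. The rank, loss, and helper name are illustrative, not the example's code:

```python
from clearml import Task

# Main process
task = Task.init(project_name="examples", task_name="test torch distributed")

# Inside any subprocess, Task.current_task() returns that same main Task,
# so every worker logs into one experiment
def report_from_worker(rank: int, loss: float, iteration: int) -> None:
    Task.current_task().get_logger().report_scalar(
        title="loss", series=f"rank {rank}", value=loss, iteration=iteration
    )
```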
@@ -10,7 +10,7 @@ which always returns the main Task.
 * The Task in each subprocess reports the following to the main Task:
   * Hyperparameters - Additional, different hyperparameters.
   * Console - Text logged to the console as the Task in each subprocess executes.
-* When the script runs, it creates an experiment named `Popen example` which is associated with the `examples` project.
+* When the script runs, it creates an experiment named `Popen example` in the `examples` project.

 ## Hyperparameters
@@ -9,7 +9,7 @@ The example does the following:
   the autokeras [TextClassifier](https://autokeras.com/text_classifier/) class, and searches for the best model.
 * Uses two TensorBoard callbacks, one for training and one for testing.
 * ClearML automatically logs everything the code sends to TensorBoard.
-* Creates an experiment named `autokeras imdb example with scalars`, which is associated with the `autokeras` project.
+* Creates an experiment named `autokeras imdb example with scalars` in the `autokeras` project.

 ## Scalars
@@ -12,7 +12,7 @@ The ClearML repository also includes [examples using FastAI v2](https://github.c
 The example code does the following:
 1. Trains a simple deep neural network on the fastai built-in MNIST dataset (see the [fast.ai](https://fastai1.fast.ai) documentation).
 1. Uses the fastai `LearnerTensorboardWriter` callback, and ClearML automatically logs TensorBoard through the callback.
-1. During script execution, creates an experiment named `fastai with tensorboard callback`, which is associated with the `examples` project.
+1. During script execution, creates an experiment named `fastai with tensorboard callback` in the `examples` project.

 ## Scalars
@@ -12,7 +12,7 @@ The example does the following:

 1. Specifies accuracy as the metric, and uses two callbacks: a TensorBoard callback and a model checkpoint callback.

-1. During script execution, creates an experiment named `notebook example` which is associated with the `examples` project.
+1. During script execution, creates an experiment named `notebook example` in the `examples` project.

 ## Scalars
@@ -16,7 +16,7 @@ The example script does the following:
    dataset.
 1. Builds a sequential model using a categorical cross entropy loss objective function.
 1. Specifies accuracy as the metric, and uses two callbacks: a TensorBoard callback and a model checkpoint callback.
-1. During script execution, creates an experiment named `Keras with TensorBoard example`, which is associated with the
+1. During script execution, creates an experiment named `Keras with TensorBoard example` in the
    `examples` project (in script) or the `Colab notebooks` project (in Jupyter Notebook).
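A hedged sketch of the wiring these steps rely on — once `Task.init()` has run, ClearML hooks the TensorBoard and checkpoint callbacks automatically. The model architecture and file paths here are illustrative:

```python
from clearml import Task
from tensorflow import keras

task = Task.init(project_name="examples", task_name="Keras with TensorBoard example")

model = keras.Sequential([keras.layers.Dense(10, activation="softmax", input_shape=(784,))])
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])

callbacks = [
    keras.callbacks.TensorBoard(log_dir="./logs"),   # scalars auto-logged by ClearML
    keras.callbacks.ModelCheckpoint("model.keras"),  # checkpoints auto-captured
]
# model.fit(x_train, y_train, epochs=3, callbacks=callbacks)  # data loading omitted
```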
@@ -9,7 +9,7 @@ The example script does the following:
 * Creates a dataset for LightGBM to train a model
 * Specifies a configuration, which is automatically captured by ClearML
 * Saves a model, which ClearML automatically captures
-* Creates an experiment named `LightGBM`, which is associated with the `examples` project.
+* Creates an experiment named `LightGBM` in the `examples` project.

 ## Scalars
@@ -13,8 +13,8 @@ The example in [Jupyter Notebook](https://github.com/allegroai/clearml/blob/mast
 includes a clickable icon to open the notebook in Google Colab.
 :::

-When the example runs, it creates an experiment named `Matplotlib example`,
-which is associated with the `examples` project (in script) or the `Colab notebooks` project (in Jupyter Notebook).
+When the example runs, it creates an experiment named `Matplotlib example`
+in the `examples` project (in script) or the `Colab notebooks` project (in Jupyter Notebook).

@@ -10,7 +10,7 @@ The example script does the following:
 * Trains a simple deep neural network on MegEngine's built-in [MNIST](https://megengine.org.cn/doc/stable/en/reference/api/megengine.data.dataset.MNIST.html)
   dataset.
 * Creates a TensorBoardX `SummaryWriter` object to log scalars during training.
-* Creates a ClearML experiment named `megengine mnist train`, which is associated with the `examples` project.
+* Creates a ClearML experiment named `megengine mnist train` in the `examples` project.

 ## Hyperparameters
@@ -2,7 +2,7 @@
 title: Audio Classification - Jupyter Notebooks
 ---

-The [audio_classification_UrbanSound8K.ipynb](https://github.com/allegroai/clearml/blob/master/examples/frameworks/pytorch/notebooks/audio/audio_classifier_UrbanSound8K.ipynb) example script demonstrates integrating ClearML into a Jupyter Notebook which uses PyTorch, TensorBoard, and TorchVision to train a neural network on the UrbanSound8K dataset for audio classification. The example calls TensorBoard methods in training and testing to report scalars, audio debug samples, and spectrogram visualizations. The spectrogram visualizations are plotted by calling Matplotlib methods. In the example, we also demonstrate connecting parameters to a Task and logging them. When the script runs, it creates an experiment named `audio classification UrbanSound8K` which is associated with the `Audio Example` project.
+The [audio_classification_UrbanSound8K.ipynb](https://github.com/allegroai/clearml/blob/master/examples/frameworks/pytorch/notebooks/audio/audio_classifier_UrbanSound8K.ipynb) example script demonstrates integrating ClearML into a Jupyter Notebook which uses PyTorch, TensorBoard, and TorchVision to train a neural network on the UrbanSound8K dataset for audio classification. The example calls TensorBoard methods in training and testing to report scalars, audio debug samples, and spectrogram visualizations. The spectrogram visualizations are plotted by calling Matplotlib methods. The example also demonstrates connecting parameters to a Task and logging them. When the script runs, it creates an experiment named `audio classification UrbanSound8K` in the `Audio Example` project.

 ## Scalars
@@ -3,7 +3,7 @@ title: Audio Preprocessing - Jupyter Notebook
 ---

 The example [audio_preprocessing_example.ipynb](https://github.com/allegroai/clearml/blob/master/examples/frameworks/pytorch/notebooks/audio/audio_preprocessing_example.ipynb)
-demonstrates integrating ClearML into a Jupyter Notebook which uses PyTorch and preprocesses audio samples. ClearML automatically logs spectrogram visualizations reported by calling Matplotlib methods, and audio samples reported by calling TensorBoard methods. In the example, we also demonstrate connecting parameters to a Task and logging them. When the script runs, it creates an experiment named `data pre-processing`, which is associated with the `Audio Example` project.
+demonstrates integrating ClearML into a Jupyter Notebook which uses PyTorch and preprocesses audio samples. ClearML automatically logs spectrogram visualizations reported by calling Matplotlib methods, and audio samples reported by calling TensorBoard methods. The example also demonstrates connecting parameters to a Task and logging them. When the script runs, it creates an experiment named `data pre-processing` in the `Audio Example` project.

 ## Plots
@@ -6,8 +6,8 @@ The example [image_classification_CIFAR10.ipynb](https://github.com/allegroai/cl
 demonstrates integrating ClearML into a Jupyter Notebook, which uses PyTorch, TensorBoard, and TorchVision to train a
 neural network on the CIFAR10 dataset for image classification. ClearML automatically logs the example script's
 calls to TensorBoard methods in training and testing which report scalars and image debug samples, as well as the model
-and console log. In the example, we also demonstrate connecting parameters to a Task and logging them. When the script runs,
-it creates an experiment named `image_classification_CIFAR10` which is associated with the `Image Example` project.
+and console log. The example also demonstrates connecting parameters to a Task and logging them. When the script runs,
+it creates an experiment named `image_classification_CIFAR10` in the `Image Example` project.

 Another example optimizes the hyperparameters for this image classification example (see the [Hyperparameter Optimization - Jupyter Notebook](hyperparameter_search.md) documentation page). This image classification example must run before the hyperparameter optimization example.
@@ -2,7 +2,7 @@
 title: Tabular Data Downloading and Preprocessing - Jupyter Notebook
 ---

-The [download_and_preprocessing.ipynb](https://github.com/allegroai/clearml/blob/master/examples/frameworks/pytorch/notebooks/table/download_and_preprocessing.ipynb) example demonstrates ClearML storing preprocessed tabular data as artifacts, and explicitly reporting the tabular data in the **ClearML Web UI**. When the script runs, it creates an experiment named `tabular preprocessing` which is associated with the `Table Example` project.
+The [download_and_preprocessing.ipynb](https://github.com/allegroai/clearml/blob/master/examples/frameworks/pytorch/notebooks/table/download_and_preprocessing.ipynb) example demonstrates ClearML storing preprocessed tabular data as artifacts, and explicitly reporting the tabular data in the **ClearML Web UI**. When the script runs, it creates an experiment named `tabular preprocessing` in the `Table Example` project.

 This tabular data is prepared for another script, [train_tabular_predictor.ipynb](https://github.com/allegroai/clearml/blob/master/examples/frameworks/pytorch/notebooks/table/train_tabular_predictor.ipynb), which trains a network with it.
@@ -4,7 +4,7 @@ title: Text Classification - Jupyter Notebook

 The example [text_classification_AG_NEWS.ipynb](https://github.com/allegroai/clearml/blob/master/examples/frameworks/pytorch/notebooks/text/text_classification_AG_NEWS.ipynb)
 demonstrates using Jupyter Notebook for ClearML, and the integration of ClearML into code which trains a network
-to classify text in the `torchtext` [AG_NEWS](https://pytorch.org/text/stable/datasets.html#ag-news) dataset, and then applies the model to predict the classification of sample text. ClearML automatically logs the scalar and console output by calling TensorBoard methods. In the example, we explicitly log parameters with the Task. When the script runs, it creates an experiment named `text classifier` which is associated with the `Text Example` project.
+to classify text in the `torchtext` [AG_NEWS](https://pytorch.org/text/stable/datasets.html#ag-news) dataset, and then applies the model to predict the classification of sample text. ClearML automatically logs the scalar and console output by calling TensorBoard methods. The example code explicitly logs parameters to the Task. When the script runs, it creates an experiment named `text classifier` in the `Text Example` project.

 ## Scalars
@@ -8,7 +8,7 @@ example demonstrates the integration of ClearML into code that uses PyTorch and
 The example script does the following:
 * Trains a simple deep neural network on the PyTorch built-in [MNIST](https://pytorch.org/vision/stable/datasets.html#mnist)
   dataset
-* Creates an experiment named `pytorch mnist train with abseil`, which is associated with the `examples` project
+* Creates an experiment named `pytorch mnist train with abseil` in the `examples` project
 * ClearML automatically logs the absl.flags, and the models (and their snapshots) created by PyTorch
 * Additional metrics are logged by calling the [Logger.report_scalar](../../../references/sdk/logger.md#report_scalar)
   method
@@ -19,7 +19,7 @@ The script does the following:
    Each Task in a subprocess references the main Task by calling [Task.current_task](../../../references/sdk/task.md#taskcurrent_task),
    which always returns the main Task.

-1. When the script runs, it creates an experiment named `test torch distributed` which is associated with the `examples` project in the **ClearML Web UI**.
+1. When the script runs, it creates an experiment named `test torch distributed` in the `examples` project in the **ClearML Web UI**.

 ### Artifacts
@@ -8,7 +8,7 @@ demonstrates the integration of ClearML into code that uses PyTorch.
 The example script does the following:
 * Trains a simple deep neural network on the PyTorch built-in [MNIST](https://pytorch.org/vision/stable/datasets.html#mnist)
   dataset.
-* Creates an experiment named `pytorch mnist train`, which is associated with the `examples` project.
+* Creates an experiment named `pytorch mnist train` in the `examples` project.
 * ClearML automatically logs `argparse` command line options, and models (and their snapshots) created by PyTorch
 * Additional metrics are logged by calling the [Logger.report_scalar](../../../references/sdk/logger.md#report_scalar) method.
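A hedged sketch of the combination this page describes — automatic `argparse` capture plus one explicit scalar. The argument and the reported values are illustrative:

```python
import argparse
from clearml import Logger, Task

task = Task.init(project_name="examples", task_name="pytorch mnist train")

parser = argparse.ArgumentParser()
parser.add_argument("--lr", type=float, default=0.01)  # captured automatically by ClearML
args = parser.parse_args()

# Explicit metric on top of the automatic framework logging
Logger.current_logger().report_scalar(title="accuracy", series="test", value=0.98, iteration=1)
```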
@@ -8,7 +8,7 @@ example demonstrates the integration of ClearML into code that uses PyTorch and
 The example does the following:
 * Trains a simple deep neural network on the PyTorch built-in [MNIST](https://pytorch.org/vision/stable/datasets.html#mnist)
   dataset.
-* Creates an experiment named `pytorch with tensorboard`, which is associated with the `examples` project.
+* Creates an experiment named `pytorch with tensorboard` in the `examples` project.
 * ClearML automatically captures scalars and text logged using the TensorBoard `SummaryWriter` object, and
   the model created by PyTorch.
@@ -7,7 +7,7 @@ script integrates ClearML into code that uses [PyTorch Ignite](https://github.co

 The example script does the following:
 * Trains a neural network on the CIFAR10 dataset for image classification.
-* Creates a [ClearML Task](../../../fundamentals/task.md) named `image classification CIFAR10`, which is associated with
+* Creates a [ClearML Task](../../../fundamentals/task.md) named `image classification CIFAR10` in
   the `examples` project.
 * Calls the [`Task.connect`](../../../references/sdk/task.md#connect) method to track experiment configuration.
 * Uses `ignite`'s `TensorboardLogger` and attaches handlers to it. See [`TensorboardLogger`](https://github.com/pytorch/ignite/blob/master/ignite/contrib/handlers/tensorboard_logger.py).
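For the `Task.connect` step in the list above, a minimal sketch — the configuration keys are invented for the example:

```python
from clearml import Task

task = Task.init(project_name="examples", task_name="image classification CIFAR10")

# Connect a plain dict so its values are tracked, and can be overridden on remote runs
config = {"batch_size": 64, "lr": 0.001, "epochs": 10}
config = task.connect(config)
print(config["lr"])  # reads back the (possibly overridden) value
```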
@@ -10,7 +10,7 @@ checkpoints during training and validation.

 The example script does the following:
 * Trains a model to classify images from the MNIST dataset.
-* Creates a [ClearML Task](../../../fundamentals/task.md) named `ignite`, which is associated with the `examples`
+* Creates a [ClearML Task](../../../fundamentals/task.md) named `ignite` in the `examples`
   project. ClearMLLogger connects to ClearML so everything which is logged through it and its handlers
   is automatically captured by ClearML.
 * Uses the following ClearMLLogger helper handlers:
@@ -9,7 +9,7 @@ script demonstrates the integration of ClearML into code that uses [PyTorch Ligh
 The example script does the following:
 * Trains a simple deep neural network on the PyTorch built-in MNIST dataset
 * Defines Argparse command line options, which are automatically captured by ClearML
-* Creates an experiment named `pytorch lightning mnist example`, which is associated with the `examples` project.
+* Creates an experiment named `pytorch lightning mnist example` in the `examples` project.

 ## Scalars
@@ -5,7 +5,7 @@ title: Scikit-Learn with Joblib
 The [sklearn_joblib_example.py](https://github.com/allegroai/clearml/blob/master/examples/frameworks/scikit-learn/sklearn_joblib_example.py)
 demonstrates the integration of ClearML into code that uses `scikit-learn` and `joblib` to store a model and model snapshots,
 and `matplotlib` to create a scatter diagram. When the script runs, it creates an experiment named
-`scikit-learn joblib example`, which is associated with the `examples` project.
+`scikit-learn joblib example` in the `examples` project.

 ## Plots
@@ -7,7 +7,7 @@ example demonstrates the integration of ClearML into code that uses PyTorch and

 The script does the following:
 * Trains a simple deep neural network on the PyTorch built-in [MNIST](https://pytorch.org/vision/stable/datasets.html#mnist) dataset
-* Creates an experiment named `pytorch with tensorboardX` which is associated with the `examples` project
+* Creates an experiment named `pytorch with tensorboardX` in the `examples` project
 * ClearML automatically captures scalars and text logged using the TensorBoardX `SummaryWriter` object, and
   the model created by PyTorch
@@ -6,7 +6,7 @@ The [moveiepy_tensorboardx.py](https://github.com/allegroai/clearml/blob/master/
 example demonstrates the integration of ClearML into code, which creates a TensorBoardX `SummaryWriter` object to log
 video data.

-When the script runs, it creates an experiment named `pytorch with video tensorboardX`, which is associated with
+When the script runs, it creates an experiment named `pytorch with video tensorboardX` in
 the `examples` project.

 ## Debug Samples
@@ -6,7 +6,7 @@ The [xgboost_metrics.py](https://github.com/allegroai/clearml/blob/master/exampl
 example demonstrates the integration of ClearML into code that uses XGBoost to train a network on the scikit-learn [iris](https://scikit-learn.org/stable/modules/generated/sklearn.datasets.load_iris.html#sklearn.datasets.load_iris)
 classification dataset. ClearML automatically captures models and scalars logged with XGBoost.

-When the script runs, it creates a ClearML experiment named `xgboost metric auto reporting`, which is associated with
+When the script runs, it creates a ClearML experiment named `xgboost metric auto reporting` in
 the `examples` project.

 ## Scalars
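A hedged sketch of that auto-capture: after `Task.init()`, the eval metrics emitted by `xgb.train()` show up as ClearML scalars. The dataset split and training parameters below are illustrative:

```python
import xgboost as xgb
from clearml import Task
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split

task = Task.init(project_name="examples", task_name="xgboost metric auto reporting")

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
dtrain = xgb.DMatrix(X_train, label=y_train)
dtest = xgb.DMatrix(X_test, label=y_test)

# The eval results reported during boosting are auto-logged as scalars
model = xgb.train({"objective": "multi:softmax", "num_class": 3}, dtrain,
                  num_boost_round=10, evals=[(dtrain, "train"), (dtest, "test")])
```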
@@ -11,7 +11,7 @@ classification dataset using XGBoost
 * Scores accuracy using scikit-learn
 * ClearML automatically logs the input model registered by XGBoost, and the output model (and its checkpoints),
   feature importance plot, and tree plot created with XGBoost.
-* Creates an experiment named `XGBoost simple example`, which is associated with the `examples` project.
+* Creates an experiment named `XGBoost simple example` in the `examples` project.

 ## Plots
@@ -5,7 +5,7 @@ title: 3D Plots Reporting
 The [3d_plots_reporting.py](https://github.com/allegroai/clearml/blob/master/examples/reporting/3d_plots_reporting.py)
 example demonstrates reporting a series as a surface plot and as a 3D scatter plot.

-When the script runs, it creates an experiment named `3D plot reporting`, which is associated with the `examples` project.
+When the script runs, it creates an experiment named `3D plot reporting` in the `examples` project.

 ClearML reports these plots in the experiment's **PLOTS** tab.
@@ -22,7 +22,7 @@ is different). Configure ClearML in any of the following ways:
 * In code, when [initializing a Task](../../references/sdk/task.md#taskinit), use the `output_uri` parameter.
 * In the **ClearML Web UI**, when [modifying an experiment](../../webapp/webapp_exp_tuning.md#output-destination).

-When the script runs, it creates an experiment named `artifacts example`, which is associated with the `examples` project.
+When the script runs, it creates an experiment named `artifacts example` in the `examples` project.

 ClearML reports artifacts in the **ClearML Web UI** **>** experiment details **>** **ARTIFACTS** tab.
@@ -37,7 +37,7 @@ experiment runs. Some possible destinations include:
 * Google Cloud Storage
 * Azure Storage

-Specify the output location in the `output_uri` parameter of the [`Task.init`](../../references/sdk/task.md#taskinit) method.
+Specify the output location in the `output_uri` parameter of [`Task.init()`](../../references/sdk/task.md#taskinit).
 In this tutorial, specify a local folder destination.

 In `pytorch_mnist_tutorial.py`, change the code from:
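The change being described, as a hedged sketch — the destination below is an invented local folder; S3, GCS, and Azure URIs work the same way:

```python
from clearml import Task

task = Task.init(
    project_name="examples",
    task_name="pytorch mnist train",
    output_uri="/mnt/clearml_output",  # or "s3://bucket/folder", "gs://bucket/folder", ...
)
```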
@@ -96,8 +96,7 @@ package contains methods for explicit reporting of plots, log text, media, and t

 ### Get a Logger

-First, create a logger for the Task using the [Task.get_logger](../../references/sdk/task.md#get_logger)
-method.
+First, create a logger for the Task using [`Task.get_logger()`](../../references/sdk/task.md#get_logger):

 ```python
 logger = task.get_logger()
@@ -105,8 +104,8 @@ logger = task.get_logger()

 ### Plot Scalar Metrics

-Add scalar metrics using the [Logger.report_scalar](../../references/sdk/logger.md#report_scalar)
-method to report loss metrics.
+Add scalar metrics using [`Logger.report_scalar()`](../../references/sdk/logger.md#report_scalar)
+to report loss metrics.

 ```python
 def train(args, model, device, train_loader, optimizer, epoch):
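     # A hedged, illustrative continuation of the function above (not the tutorial's
     # exact code): report the training loss once every `log_interval` batches.
     # The forward/backward pass that produces `loss` is elided.
     for batch_idx, (data, target) in enumerate(train_loader):
         # ... forward/backward pass elided ...
         if batch_idx % args.log_interval == 0:
             logger.report_scalar(
                 title="train", series="loss", value=loss.item(),
                 iteration=epoch * len(train_loader) + batch_idx)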
@@ -187,8 +186,8 @@ def test(args, model, device, test_loader):

 ### Log Text

-Extend ClearML by explicitly logging text, including errors, warnings, and debugging statements. Use the [Logger.report_text](../../references/sdk/logger.md#report_text)
-method and its argument `level` to report a debugging message.
+Extend ClearML by explicitly logging text, including errors, warnings, and debugging statements. Use [`Logger.report_text()`](../../references/sdk/logger.md#report_text)
+and its `level` argument to report a debugging message.

 ```python
 logger.report_text(
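     # A hedged, illustrative completion of the call opened above; the message text
     # is invented, and `level` assumes `import logging` earlier in the script.
     "this is a debugging message",
     level=logging.DEBUG,
 )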
@@ -207,8 +206,8 @@ Currently, ClearML supports Pandas DataFrames as registered artifacts.

 ### Register the Artifact

-In the tutorial script, `test` function, we can assign the test loss and correct data to a Pandas DataFrame object and register
-that Pandas DataFrame using the [Task.register_artifact](../../references/sdk/task.md#register_artifact) method.
+In the tutorial script's `test` function, you can assign the test loss and correct data to a Pandas DataFrame object and register
+that Pandas DataFrame using [`Task.register_artifact()`](../../references/sdk/task.md#register_artifact).

 ```python
 # Create the Pandas DataFrame
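 # A hedged, illustrative version (not the tutorial's exact code); assumes
 # `import pandas as pd` and the `test_loss` / `correct` values computed above.
 df = pd.DataFrame({"test_loss": [test_loss], "correct": [int(correct)]})

 # Register the DataFrame so ClearML keeps it updated as the experiment runs
 task.register_artifact(name="train", artifact=df, metadata={"source": "test loop"})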
@@ -234,9 +233,9 @@ task.register_artifact(

 Once an artifact is registered, it can be referenced and utilized in the Python experiment script.

-In the tutorial script, we add [Task.current_task](../../references/sdk/task.md#taskcurrent_task) and
-[Task.get_registered_artifacts](../../references/sdk/task.md#get_registered_artifacts)
-methods to take a sample.
+In the tutorial script, add [`Task.current_task()`](../../references/sdk/task.md#taskcurrent_task) and
+[`Task.get_registered_artifacts()`](../../references/sdk/task.md#get_registered_artifacts)
+to take a sample.

 ```python
 # Once the artifact is registered, we can get it and work with it. Here, we sample it.
@@ -259,8 +258,8 @@ Supported artifacts include:
 * Dictionaries - stored as JSONs
 * Numpy arrays - stored as NPZ files

-In the tutorial script, upload the loss data as an artifact using the [Task.upload_artifact](../../references/sdk/task.md#upload_artifact)
-method with metadata specified in the `metadata` parameter.
+In the tutorial script, upload the loss data as an artifact using [`Task.upload_artifact()`](../../references/sdk/task.md#upload_artifact)
+with metadata specified in the `metadata` parameter.

 ```python
 # Upload test loss as an artifact. Here, the artifact is a numpy array
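 # A hedged, illustrative completion; assumes `import numpy as np` and a
 # `loss_history` list collected during testing (names invented for the sketch).
 task.upload_artifact(
     name="test_loss",
     artifact_object=np.array(loss_history),
     metadata={"dataset": "mnist", "epochs": 14},
 )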
@@ -9,7 +9,7 @@ method.
 ClearML reports these HTML debug samples in the **ClearML Web UI** **>** experiment details **>**
 **DEBUG SAMPLES** tab.

-When the script runs, it creates an experiment named `html samples reporting`, which is associated with the `examples` project.
+When the script runs, it creates an experiment named `html samples reporting` in the `examples` project.

 ![image](../../img/examples_reporting_05.png)
@@ -11,7 +11,7 @@ Hyperparameters appear in the **web UI** in the experiment's page, under **CONFI
 Each type is in its own subsection. Parameters from older experiments are grouped together with the `argparse` command
 line options (in the **Args** subsection).

-When the script runs, it creates an experiment named `hyper-parameters example`, which is associated with the `examples` project.
+When the script runs, it creates an experiment named `hyper-parameters example` in the `examples` project.

 ## Argparse Command Line Options
@@ -15,7 +15,7 @@ or ClearML can be configured for image storage, see [Logger.set_default_upload_d
 (storage for [artifacts](../../clearml_sdk/task_sdk.md#setting-upload-destination) is different). Set credentials for
 storage in the ClearML configuration file.

-When the script runs, it creates an experiment named `image reporting`, which is associated with the `examples` project.
+When the script runs, it creates an experiment named `image reporting` in the `examples` project.

 Report images using several formats by calling the [Logger.report_image](../../references/sdk/logger.md#report_image)
 method:
@@ -16,7 +16,7 @@ ClearML uploads media to the bucket specified in the ClearML configuration file
 ClearML reports media in the **ClearML Web UI** **>** experiment details **>** **DEBUG SAMPLES**
 tab.

-When the script runs, it creates an experiment named `audio and video reporting`, which is associated with the `examples`
+When the script runs, it creates an experiment named `audio and video reporting` in the `examples`
 project.

 ## Reporting (Uploading) Media from a Source by URL
@@ -7,7 +7,7 @@ The [pandas_reporting.py](https://github.com/allegroai/clearml/blob/master/examp
 ClearML reports these tables in the **ClearML Web UI** **>** experiment details **>** **PLOTS**
 tab.

-When the script runs, it creates an experiment named `table reporting`, which is associated with the `examples` project.
+When the script runs, it creates an experiment named `table reporting` in the `examples` project.

 ## Reporting Pandas DataFrames as Tables
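A hedged sketch of the DataFrame-as-table call (the DataFrame contents are invented):

```python
import pandas as pd
from clearml import Task

task = Task.init(project_name="examples", task_name="table reporting")

df = pd.DataFrame(
    {"num_legs": [2, 4, 8, 0], "num_wings": [2, 0, 0, 0]},
    index=["falcon", "dog", "spider", "fish"],
)
task.get_logger().report_table(title="table pd", series="PD with index",
                               iteration=0, table_plot=df)
```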
@@ -31,7 +31,7 @@ task.get_logger().report_plotly(
 )
 ```

-When the script runs, it creates an experiment named `plotly reporting`, which is associated with the examples project.
+When the script runs, it creates an experiment named `plotly reporting` in the `examples` project.

 ClearML reports Plotly plots in the **ClearML Web UI** **>** experiment details **>** **PLOTS**
 tab.
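A self-contained, hedged version of the call shown in this hunk (the figure itself is invented):

```python
import plotly.express as px
from clearml import Task

task = Task.init(project_name="examples", task_name="plotly reporting")

fig = px.scatter(x=[0, 1, 2, 3], y=[0, 1, 4, 9])
task.get_logger().report_plotly(title="scatter", series="squares", iteration=0, figure=fig)
```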
@@ -6,7 +6,7 @@ The [scalar_reporting.py](https://github.com/allegroai/clearml/blob/master/examp
 demonstrates explicit scalar reporting. ClearML reports scalars in the **ClearML Web UI** **>** experiment details
 **>** **SCALARS** tab.

-When the script runs, it creates an experiment named `scalar reporting`, which is associated with the `examples` project.
+When the script runs, it creates an experiment named `scalar reporting` in the `examples` project.

 To report scalars, call the [Logger.report_scalar](../../references/sdk/logger.md#report_scalar)
 method. To report more than one series on the same plot, use the same `title` argument. For different plots, use different
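The `title`/`series` behavior just described, as a hedged sketch:

```python
from clearml import Task

task = Task.init(project_name="examples", task_name="scalar reporting")
logger = task.get_logger()

for i in range(10):
    # Same title => both series land on one plot; a new title starts a new plot
    logger.report_scalar(title="graph title", series="series A", value=i, iteration=i)
    logger.report_scalar(title="graph title", series="series B", value=2 * i, iteration=i)
```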
@@ -10,7 +10,7 @@ example demonstrates reporting series data in the following 2D formats:

 ClearML reports these plots in the **ClearML Web UI**, experiment details **>** **PLOTS** tab.

-When the script runs, it creates an experiment named `2D plots reporting`, which is associated with the `examples` project.
+When the script runs, it creates an experiment named `2D plots reporting` in the `examples` project.

 ## Histograms
@@ -8,7 +8,7 @@ method.

 ClearML reports this text in the **ClearML Web UI**, experiment details, **CONSOLE** tab.

-When the script runs, it creates an experiment named `text reporting`, which is associated with the `examples` project.
+When the script runs, it creates an experiment named `text reporting` in the `examples` project.

 # report text
 Logger.current_logger().report_text("hello, this is plain text")
@@ -19,7 +19,7 @@ You can modify the model selection while comparing.
    table with the currently compared models at the top.
 1. Find the models to add by sorting and [filtering](webapp_model_table.md#filtering-columns) the models with the
    appropriate column header controls. Alternatively, use the search bar to find models by name.
-1. Select models to include in the comparison (and / or clear the selection of any models you wish to remove).
+1. Select models to include in the comparison (and/or clear the selection of any models you wish to remove).
 1. Click **APPLY**.

 ## Sharing Comparison Page