Small edits (#690)

This commit is contained in:
pollfly 2023-10-11 12:29:56 +03:00 committed by GitHub
parent 3a4b10e43b
commit e6257d2843
No known key found for this signature in database
GPG Key ID: 4AEE18F83AFDEB23
9 changed files with 29 additions and 32 deletions


@ -197,7 +197,7 @@ These methods can be used on `Model`, `InputModel`, and/or `OutputModel` objects
* Table - [`report_table`](../references/sdk/model_outputmodel.md#report_table)
* Line plot - [`report_line_plot`](../references/sdk/model_outputmodel.md#report_line_plot)
* Scatter plot - [`report_scatter2d`](../references/sdk/model_outputmodel.md#report_scatter2d)
* Confusion matrix (heat map) - [`report_confusion_matrix`](../references/sdk/model_outputmodel.md#report_confusion_matrix) and [`report_matrix`](../references/sdk/model_outputmodel.md#report_matrix)
* Confusion matrix (heat map) - [`report_confusion_matrix`](../references/sdk/model_outputmodel.md#report_confusion_matrix)
* 3d plots
* Scatter plot - [`report_scatter3d`](../references/sdk/model_outputmodel.md#report_scatter3d)
* Surface plot - [`report_surface`](../references/sdk/model_outputmodel.md#report_surface)
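
As a minimal sketch (assuming a configured ClearML setup; the project, task, title, and series names below are hypothetical), a couple of the methods above can be called on an `OutputModel`:

```python
import numpy as np

# stand-in data for the reports below
confusion = np.random.randint(10, size=(10, 10))
scatter = np.random.rand(50, 2)

if __name__ == "__main__":
    # requires a configured ClearML setup; names are hypothetical
    from clearml import Task, OutputModel

    task = Task.init(project_name="examples", task_name="model reporting")
    model = OutputModel(task=task)
    # heat-map style confusion matrix attached to the model
    model.report_confusion_matrix(
        title="confusion matrix", series="ignored", iteration=0, matrix=confusion
    )
    # 2D scatter plot attached to the model
    model.report_scatter2d(
        title="scatter", series="series_a", iteration=0, scatter=scatter
    )
```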


@ -302,7 +302,7 @@ from `system_site_packages`
* `AWS_SECRET_ACCESS_KEY`
* `AZURE_STORAGE_KEY`
* To mask additional environment variables, add their keys to the `extra_keys` list.
* To mask additional environment variables, add their keys to the `extra_keys` list.
For example, to hide the value of a custom environment variable named `MY_SPECIAL_PASSWORD`, set `extra_keys: ["MY_SPECIAL_PASSWORD"]`
* By default, `parse_embedded_urls` is set to `true`, so the agent will also hide passwords in URLs and handle environment variables
@ -733,7 +733,7 @@ This configuration option is experimental, and has not been vigorously tested, s
**`api.credentials`** (*dict*)
* Dictionary of API credentials.
* Dictionary of API credentials.
Alternatively, specify the environment variable `CLEARML_API_ACCESS_KEY` / `CLEARML_API_SECRET_KEY` to override these keys.
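
For example (the values below are placeholders, not real credentials), the same keys can be supplied through the environment:

```shell
# placeholder values -- substitute your own workspace credentials
export CLEARML_API_ACCESS_KEY="<access_key>"
export CLEARML_API_SECRET_KEY="<secret_key>"
```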


@ -24,12 +24,12 @@ Once you have a Task object you can query the state of the Task, get its model(s
## Log Hyperparameters
For full reproducibility, it's paramount to save hyperparameters for each experiment. Since hyperparameters can have a substantial impact
on Model performance, saving and comparing these between experiments is sometimes the key to understanding model behavior.
on model performance, saving and comparing these between experiments is sometimes the key to understanding model behavior.
ClearML supports logging `argparse` module arguments out of the box, so once ClearML is integrated into the code, it automatically logs all parameters provided to the argument parser.
It's also possible to log parameter dictionaries (very useful when parsing an external config file and storing it as a dict object),
whole configuration files or even custom objects or [Hydra](https://hydra.cc/docs/intro/) configurations!
whole configuration files, or even custom objects or [Hydra](https://hydra.cc/docs/intro/) configurations!
```python
params_dictionary = {'epochs': 3, 'lr': 0.4}
@ -51,7 +51,7 @@ See all [storage capabilities](../../integrations/storage.md).
### Adding Artifacts
Uploading a local file containing the preprocessed results of the data:
Upload a local file containing the preprocessed results of the data:
```python
task.upload_artifact('/path/to/preprocess_data.csv', name='data')
```
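
As a follow-up sketch (the ClearML import is deferred because it requires a configured setup; the task ID is a placeholder supplied by the caller), the uploaded artifact can later be retrieved from the task:

```python
def fetch_artifact(task_id: str, name: str = "data") -> str:
    """Return a local path to a cached copy of a task artifact (sketch)."""
    from clearml import Task  # deferred: requires a configured ClearML setup

    task = Task.get_task(task_id=task_id)
    # downloads (and caches) the artifact file, returning its local path
    return task.artifacts[name].get_local_copy()
```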


@ -17,12 +17,12 @@ clearml-agent daemon --queue default
The script trains a simple deep neural network on the PyTorch built-in MNIST dataset. The following describes the code's
execution flow:
1. The training runs for one epoch.
1. The code passes the `execute_remotely` method which terminates the local execution of the code and enqueues the task
1. The code uses [`Task.execute_remotely()`](../../references/sdk/task.md#execute_remotely), which terminates the local execution of the code and enqueues the task
to the `default` queue, as specified in the `queue_name` parameter.
1. An agent listening to the queue fetches the task and restarts task execution remotely. When the agent executes the task,
the `execute_remotely` call is considered a no-op.
An execution flow that uses `execute_remotely` method is especially helpful when running code on a development machine for a few iterations
An execution flow that uses `execute_remotely` is especially helpful when running code on a development machine for a few iterations
to debug and to make sure the code doesn't crash, or to set up an environment. After that, the training can be
moved to be executed by a stronger machine.
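
The flow above can be sketched as follows (assuming a configured ClearML setup; the project and task names are hypothetical):

```python
QUEUE_NAME = "default"  # the queue a clearml-agent is listening on

if __name__ == "__main__":
    # requires a configured ClearML setup; names are hypothetical
    from clearml import Task

    task = Task.init(project_name="examples", task_name="remote debug")

    # ... run a few local debug iterations here to verify the code ...

    # terminates the local process and enqueues the task; when an agent
    # later re-executes the task, this call is a no-op
    task.execute_remotely(queue_name=QUEUE_NAME, exit_process=True)

    # code from this point on runs only on the agent's machine
```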
@ -41,7 +41,7 @@ Logger.current_logger().report_scalar(
)
```
In the `test` method, the code explicitly reports `loss` and `accuracy` scalars.
In the script's `test` function, the code explicitly reports `loss` and `accuracy` scalars.
```python
Logger.current_logger().report_scalar(


@ -22,7 +22,7 @@ When the script runs, it creates an experiment named `test torch distributed` in
## Artifacts
The example uploads a dictionary as an artifact in the main Task by calling [`Task.upload_artifact()`](../../references/sdk/task.md#upload_artifact)
on [`Task.current_task`](../../references/sdk/task.md#taskcurrent_task) (the main Task). The dictionary contains the [`dist.rank`](https://pytorch.org/docs/stable/distributed.html#torch.distributed.get_rank)
on [`Task.current_task()`](../../references/sdk/task.md#taskcurrent_task) (the main Task). The dictionary contains the [`dist.rank`](https://pytorch.org/docs/stable/distributed.html#torch.distributed.get_rank)
of the subprocess, making each dictionary unique.
```python
@ -39,7 +39,7 @@ All of these artifacts appear in the main Task under **ARTIFACTS** **>** **OTHER
## Scalars
Loss is reported to the main Task by calling the [`Logger.report_scalar()`](../../references/sdk/logger.md#report_scalar)
on `Task.current_task().get_logger()`, which is the logger for the main Task. Since `Logger.report_scalar` is called
on `Task.current_task().get_logger()`, which is the main Task's logger. Since `Logger.report_scalar` is called
with the same title (`loss`), but a different series name (containing the subprocess' `rank`), all loss scalar series are
logged together.
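
A sketch of that per-subprocess reporting pattern (the series naming scheme here is an assumption for illustration):

```python
def loss_series(rank: int) -> str:
    # same title, per-rank series -> all series are logged on one 'loss' plot
    return "loss_{}".format(rank)


def report_loss(rank: int, iteration: int, loss: float) -> None:
    """Report a subprocess' loss to the main Task's logger (sketch)."""
    from clearml import Task  # deferred: requires a configured ClearML setup

    Task.current_task().get_logger().report_scalar(
        title="loss",
        series=loss_series(rank),
        value=loss,
        iteration=iteration,
    )
```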


@ -41,7 +41,7 @@ name is "DevOps"
After launching the command, the `clearml-agent` listening to the `default` queue spins up a remote Jupyter environment with
the specified configuration. It will automatically connect to the Docker container on the remote machine.
The terminal should return output with the session's configuration details, which should look something like this:
The console should display the session's configuration details, which should look something like this:
```console
Interactive session config:


@ -106,13 +106,12 @@ logger.report_surface(
### Confusion Matrices
Report confusion matrices by calling the [Logger.report_matrix](../../references/sdk/logger.md#report_matrix)
method.
Report confusion matrices by calling [`Logger.report_confusion_matrix()`](../../references/sdk/logger.md#report_confusion_matrix).
```python
# report confusion matrix
confusion = np.random.randint(10, size=(10, 10))
logger.report_matrix(
logger.report_confusion_matrix(
"example_confusion",
"ignored",
iteration=iteration,
@ -126,8 +125,8 @@ logger.report_matrix(
### Histograms
Report histograms by calling the [Logger.report_histogram](../../references/sdk/logger.md#report_histogram)
method. To report more than one series on the same plot, use the same `title` argument.
Report histograms by calling [`Logger.report_histogram()`](../../references/sdk/logger.md#report_histogram).
To report more than one series on the same plot, use the same `title` argument.
```python
# report a single histogram
@ -170,11 +169,10 @@ logger.report_histogram(
## Media
Report audio, HTML, image, and video by calling the [Logger.report_media](../../references/sdk/logger.md#report_media)
method using the `local_path` parameter. They appear in **DEBUG SAMPLES**.
Report audio, HTML, image, and video by calling [`Logger.report_media()`](../../references/sdk/logger.md#report_media)
using the `local_path` parameter. They appear in **DEBUG SAMPLES**.
The media for these examples is downloaded using the [StorageManager.get_local_copy](../../references/sdk/storage.md#storagemanagerget_local_copy)
method.
The media for these examples is downloaded using [`StorageManager.get_local_copy()`](../../references/sdk/storage.md#storagemanagerget_local_copy).
For example, to download an image:
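
A sketch of that download step (the URL is a caller-supplied placeholder, not a real endpoint):

```python
def download_image(url: str) -> str:
    """Return the path of a locally cached copy of a remote image (sketch)."""
    from clearml import StorageManager  # deferred: requires clearml installed

    # downloads the file on first call and serves it from cache afterwards
    return StorageManager.get_local_copy(remote_url=url)
```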
@ -224,7 +222,7 @@ logger.report_media('video', 'big bunny', iteration=1, local_path=video_local_co
## Text
Report text messages by calling the [Logger.report_text](../../references/sdk/logger.md#report_text).
Report text messages by calling [`Logger.report_text()`](../../references/sdk/logger.md#report_text).
```python
logger.report_text("hello, this is plain text")


@ -14,9 +14,9 @@ When the script runs, it creates an experiment named `2D plots reporting` in the
## Histograms
Report histograms by calling the [Logger.report_histogram](../../references/sdk/logger.md#report_histogram)
method. To report more than one series on the same plot, use same the `title` argument. For different plots, use different
`title` arguments. Specify the type of histogram with the `mode` parameter. The `mode` values are `group` (the default),
Report histograms by calling [`Logger.report_histogram()`](../../references/sdk/logger.md#report_histogram).
To report more than one series on the same plot, use the same `title` argument. For different plots, use different
`title` arguments. Specify the type of histogram with the `mode` parameter. The `mode` values are `group` (default),
`stack`, and `relative`.
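
The `mode` values can be sketched like so (assuming a configured ClearML setup; the title and series names are hypothetical):

```python
import numpy as np

# two series that will share one stacked histogram plot
series_a = np.random.randint(0, 10, size=10)
series_b = np.random.randint(0, 10, size=10)

if __name__ == "__main__":
    # requires a configured ClearML setup with a current task
    from clearml import Logger

    logger = Logger.current_logger()
    for name, values in (("series_a", series_a), ("series_b", series_b)):
        logger.report_histogram(
            title="stacked histogram",  # same title -> same plot
            series=name,
            iteration=0,
            values=values,
            mode="stack",  # 'group' (the default), 'stack', or 'relative'
        )
```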
```python
@ -59,13 +59,12 @@ Logger.current_logger().report_histogram(
## Confusion Matrices
Report confusion matrices by calling the [Logger.report_matrix](../../references/sdk/logger.md#report_matrix)
method.
Report confusion matrices by calling [`Logger.report_confusion_matrix()`](../../references/sdk/logger.md#report_confusion_matrix).
```python
# report confusion matrix
confusion = np.random.randint(10, size=(10, 10))
Logger.current_logger().report_matrix(
Logger.current_logger().report_confusion_matrix(
"example_confusion",
"ignored",
iteration=iteration,
@ -79,7 +78,7 @@ Logger.current_logger().report_matrix(
```python
# report confusion matrix with 0,0 is at the top left
Logger.current_logger().report_matrix(
Logger.current_logger().report_confusion_matrix(
"example_confusion_0_0_at_top",
"ignored",
iteration=iteration,
@ -92,8 +91,8 @@ Logger.current_logger().report_matrix(
## 2D Scatter Plots
Report 2D scatter plots by calling the [Logger.report_scatter2d](../../references/sdk/logger.md#report_scatter2d)
method. Use the `mode` parameter to plot data points with lines (by default), markers, or both lines and markers.
Report 2D scatter plots by calling [`Logger.report_scatter2d()`](../../references/sdk/logger.md#report_scatter2d).
Use the `mode` parameter to plot data points with lines (by default), markers, or both lines and markers.
```python
scatter2d = np.hstack(


@ -4,7 +4,7 @@ title: The Experiments Table
The experiments table is a [customizable](#customizing-the-experiments-table) list of experiments associated with a project. From the experiments
table, view experiment details, and work with experiments (reset, clone, enqueue, create [tracking leaderboards](../guides/ui/building_leader_board.md)
to monitor experimentation, and more). The experiments table's auto-refresh allows users to continually monitor experiment progress.
to monitor experimentation, and more). The experiments table's auto-refresh lets users continually monitor experiment progress.
View the experiments table in table view <img src="/docs/latest/icons/ico-table-view.svg" alt="Table view" className="icon size-md space-sm" />
or in details view <img src="/docs/latest/icons/ico-split-view.svg" alt="Details view" className="icon size-md space-sm" />,