Small edits (#663)

pollfly committed 2023-09-04 15:40:42 +03:00 (committed by GitHub)
parent cd12d80e19
commit 4c88cf6393
49 changed files with 72 additions and 73 deletions

View File

@@ -5,7 +5,7 @@ title: 3D Plots Reporting
The [3d_plots_reporting.py](https://github.com/allegroai/clearml/blob/master/examples/reporting/3d_plots_reporting.py)
example demonstrates reporting a series as a surface plot and as a 3D scatter plot.
-When the script runs, it creates an experiment named `3D plot reporting`, which is associated with the `examples` project.
+When the script runs, it creates an experiment named `3D plot reporting` in the `examples` project.
ClearML reports these plots in the experiment's **PLOTS** tab.
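
As a minimal sketch of the two calls the example script is built around, using the ClearML `Logger` methods for surface and 3D scatter plots (the data here is random and purely illustrative):

```python
import numpy as np
from clearml import Task

task = Task.init(project_name="examples", task_name="3D plot reporting")
logger = task.get_logger()

# Surface plot: a 2D matrix of z-values
logger.report_surface(title="example surface", series="series", iteration=0, matrix=np.random.rand(10, 10))

# 3D scatter plot: an Nx3 array of (x, y, z) points
logger.report_scatter3d(title="example scatter", series="series", iteration=0, scatter=np.random.rand(50, 3))
```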

View File

@@ -22,7 +22,7 @@ is different). Configure ClearML in any of the following ways:
* In code, when [initializing a Task](../../references/sdk/task.md#taskinit), use the `output_uri` parameter.
* In the **ClearML Web UI**, when [modifying an experiment](../../webapp/webapp_exp_tuning.md#output-destination).
-When the script runs, it creates an experiment named `artifacts example`, which is associated with the `examples` project.
+When the script runs, it creates an experiment named `artifacts example` in the `examples` project.
ClearML reports artifacts in the **ClearML Web UI** **>** experiment details **>** **ARTIFACTS** tab.
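
A minimal sketch of the in-code option (the destination URI is illustrative):

```python
from clearml import Task

# output_uri sets where artifacts and model snapshots are uploaded
task = Task.init(
    project_name="examples",
    task_name="artifacts example",
    output_uri="s3://my-bucket/artifacts",  # illustrative destination
)
```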

View File

@@ -37,7 +37,7 @@ experiment runs. Some possible destinations include:
* Google Cloud Storage
* Azure Storage.
-Specify the output location in the `output_uri` parameter of the [`Task.init`](../../references/sdk/task.md#taskinit) method.
+Specify the output location in the `output_uri` parameter of [`Task.init()`](../../references/sdk/task.md#taskinit).
In this tutorial, specify a local folder destination.
In `pytorch_mnist_tutorial.py`, change the code from:
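
The before/after snippet itself sits outside this hunk. As a hedged sketch, the change amounts to passing a local folder as `output_uri` to `Task.init()` (the path, project, and task names here are illustrative):

```python
from clearml import Task

model_snapshots_path = "/mnt/clearml"  # illustrative local destination
task = Task.init(
    project_name="Tutorials",
    task_name="pytorch mnist train",
    output_uri=model_snapshots_path,
)
```
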
@@ -96,8 +96,7 @@ package contains methods for explicit reporting of plots, log text, media, and t
### Get a Logger
-First, create a logger for the Task using the [Task.get_logger](../../references/sdk/task.md#get_logger)
-method.
+First, create a logger for the Task using [`Task.get_logger()`](../../references/sdk/task.md#get_logger):
```python
logger = task.get_logger()
```
@@ -105,8 +104,8 @@ logger = task.get_logger
### Plot Scalar Metrics
-Add scalar metrics using the [Logger.report_scalar](../../references/sdk/logger.md#report_scalar)
-method to report loss metrics.
+Add scalar metrics using [`Logger.report_scalar()`](../../references/sdk/logger.md#report_scalar)
+to report loss metrics.
```python
def train(args, model, device, train_loader, optimizer, epoch):
```
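
A self-contained sketch of the reporting call this step describes (the loss values here are synthetic stand-ins for the training loss):

```python
from clearml import Task

task = Task.init(project_name="examples", task_name="explicit reporting sketch")  # names illustrative
logger = task.get_logger()

for iteration in range(100):
    loss = 1.0 / (iteration + 1)  # synthetic stand-in for the training loss
    logger.report_scalar(title="train", series="loss", value=loss, iteration=iteration)
```
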
@@ -187,8 +186,8 @@ def test(args, model, device, test_loader):
### Log Text
-Extend ClearML by explicitly logging text, including errors, warnings, and debugging statements. Use the [Logger.report_text](../../references/sdk/logger.md#report_text)
-method and its argument `level` to report a debugging message.
+Extend ClearML by explicitly logging text, including errors, warnings, and debugging statements. Use [`Logger.report_text()`](../../references/sdk/logger.md#report_text)
+and its argument `level` to report a debugging message.
```python
logger.report_text(
```
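
A sketch of a complete call, assuming `logger` comes from `task.get_logger()` as above (the message text is illustrative):

```python
import logging

# level accepts standard Python logging levels
logger.report_text("Manual model storage is enabled", level=logging.DEBUG)
```
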
@@ -207,8 +206,8 @@ Currently, ClearML supports Pandas DataFrames as registered artifacts.
### Register the Artifact
-In the tutorial script, `test` function, we can assign the test loss and correct data to a Pandas DataFrame object and register
-that Pandas DataFrame using the [Task.register_artifact](../../references/sdk/task.md#register_artifact) method.
+In the tutorial script, `test` function, you can assign the test loss and correct data to a Pandas DataFrame object and register
+that Pandas DataFrame using [`Task.register_artifact()`](../../references/sdk/task.md#register_artifact).
```python
# Create the Pandas DataFrame
```
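
A sketch of the registration step, assuming `task` is the initialized Task (the DataFrame contents, artifact name, and metadata are illustrative):

```python
import pandas as pd

# Stand-in for the tutorial's test-loss/correct data
df = pd.DataFrame({"test_loss": [0.25], "correct": [9500]})

# ClearML keeps a registered artifact in sync as the DataFrame changes
task.register_artifact(name="test results", artifact=df, metadata={"stage": "test"})
```
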
@@ -234,9 +233,9 @@ task.register_artifact(
Once an artifact is registered, it can be referenced and utilized in the Python experiment script.
-In the tutorial script, we add [Task.current_task](../../references/sdk/task.md#taskcurrent_task) and
-[Task.get_registered_artifacts](../../references/sdk/task.md#get_registered_artifacts)
-methods to take a sample.
+In the tutorial script, add [`Task.current_task()`](../../references/sdk/task.md#taskcurrent_task) and
+[`Task.get_registered_artifacts()`](../../references/sdk/task.md#get_registered_artifacts)
+to take a sample.
```python
# Once the artifact is registered, we can get it and work with it. Here, we sample it.
```
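
A sketch of the sampling step (the artifact name matches the illustrative registration sketch above; the sampling arguments are one reasonable choice):

```python
from clearml import Task

# Fetch the registered DataFrame by name and take a 50% sample
sample = (
    Task.current_task()
    .get_registered_artifacts()["test results"]
    .sample(frac=0.5, replace=True, random_state=1)
)
```
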
@@ -259,8 +258,8 @@ Supported artifacts include:
* Dictionaries - stored as JSONs
* Numpy arrays - stored as NPZ files
-In the tutorial script, upload the loss data as an artifact using the [Task.upload_artifact](../../references/sdk/task.md#upload_artifact)
-method with metadata specified in the `metadata` parameter.
+In the tutorial script, upload the loss data as an artifact using [`Task.upload_artifact()`](../../references/sdk/task.md#upload_artifact)
+with metadata specified in the `metadata` parameter.
```python
# Upload test loss as an artifact. Here, the artifact is numpy array
```
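
A sketch of the upload, assuming `task` is the initialized Task (the array contents and metadata are illustrative):

```python
import numpy as np

# NumPy arrays are stored as NPZ files; metadata appears alongside the artifact in the UI
task.upload_artifact(
    name="loss",
    artifact_object=np.array([0.9, 0.5, 0.3]),
    metadata={"epochs trained": 3},
)
```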

View File

@@ -9,7 +9,7 @@ method.
ClearML reports these HTML debug samples in the **ClearML Web UI** **>** experiment details **>**
**DEBUG SAMPLES** tab.
-When the script runs, it creates an experiment named `html samples reporting`, which is associated with the `examples` project.
+When the script runs, it creates an experiment named `html samples reporting` in the `examples` project.
![image](../../img/examples_reporting_05.png)
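
A minimal sketch of reporting an HTML debug sample, assuming `Logger.report_media()` as the reporting method (the file path is illustrative):

```python
from clearml import Task

task = Task.init(project_name="examples", task_name="html samples reporting")
task.get_logger().report_media(
    title="html",
    series="sample",
    iteration=0,
    local_path="report.html",  # illustrative local HTML file
)
```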

View File

@@ -11,7 +11,7 @@ Hyperparameters appear in the **web UI** in the experiment's page, under **CONFI
Each type is in its own subsection. Parameters from older experiments are grouped together with the ``argparse`` command
line options (in the **Args** subsection).
-When the script runs, it creates an experiment named `hyper-parameters example`, which is associated with the `examples` project.
+When the script runs, it creates an experiment named `hyper-parameters example` in the `examples` project.
## Argparse Command Line Options
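
As a sketch of the automatic capture (the argument names are illustrative): once `Task.init()` has been called, options parsed by `argparse` land in the **Args** subsection.

```python
import argparse
from clearml import Task

task = Task.init(project_name="examples", task_name="hyper-parameters example")

parser = argparse.ArgumentParser()
parser.add_argument("--lr", type=float, default=0.01)  # illustrative option
args = parser.parse_args()  # captured automatically once the task is initialized
```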

View File

@@ -15,7 +15,7 @@ or ClearML can be configured for image storage, see [Logger.set_default_upload_d
(storage for [artifacts](../../clearml_sdk/task_sdk.md#setting-upload-destination) is different). Set credentials for
storage in the ClearML configuration file.
-When the script runs, it creates an experiment named `image reporting`, which is associated with the `examples` project.
+When the script runs, it creates an experiment named `image reporting` in the `examples` project.
Report images using several formats by calling the [Logger.report_image](../../references/sdk/logger.md#report_image)
method:
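
A sketch of one such format, assuming `logger = task.get_logger()` (the image data is random):

```python
import numpy as np

# Report an image from a NumPy array, one of several supported input formats
logger.report_image(
    title="image",
    series="image uint8",
    iteration=0,
    image=np.random.randint(0, 255, size=(64, 64, 3), dtype=np.uint8),
)
```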

View File

@@ -16,7 +16,7 @@ ClearML uploads media to the bucket specified in the ClearML configuration file
ClearML reports media in the **ClearML Web UI** **>** experiment details **>** **DEBUG SAMPLES**
tab.
-When the script runs, it creates an experiment named `audio and video reporting`, which is associated with the `examples`
+When the script runs, it creates an experiment named `audio and video reporting` in the `examples`
project.
## Reporting (Uploading) Media from a Source by URL
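
A sketch of URL-based reporting (the URL is illustrative); ClearML records the link rather than uploading the file:

```python
from clearml import Logger

Logger.current_logger().report_media(
    title="audio",
    series="sample",
    iteration=0,
    url="https://example.com/sample.mp3",  # illustrative, already-hosted media
)
```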

View File

@@ -7,7 +7,7 @@ The [pandas_reporting.py](https://github.com/allegroai/clearml/blob/master/examp
ClearML reports these tables in the **ClearML Web UI** **>** experiment details **>** **PLOTS**
tab.
-When the script runs, it creates an experiment named `table reporting`, which is associated with the `examples` project.
+When the script runs, it creates an experiment named `table reporting` in the `examples` project.
## Reporting Pandas DataFrames as Tables
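
A sketch of the call, assuming `logger = task.get_logger()` (the DataFrame is illustrative):

```python
import pandas as pd

df = pd.DataFrame(
    {"num_legs": [2, 4], "num_wings": [2, 0]},
    index=["falcon", "dog"],
)

# Renders the DataFrame as a table in the PLOTS tab
logger.report_table(title="table pd", series="PD with index", iteration=0, table_plot=df)
```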

View File

@@ -31,7 +31,7 @@ task.get_logger().report_plotly(
)
```
-When the script runs, it creates an experiment named `plotly reporting`, which is associated with the examples project.
+When the script runs, it creates an experiment named `plotly reporting` in the examples project.
ClearML reports Plotly plots in the **ClearML Web UI** **>** experiment details **>** **PLOTS**
tab.
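
A self-contained sketch of the pattern (the figure is illustrative):

```python
import plotly.express as px
from clearml import Task

task = Task.init(project_name="examples", task_name="plotly reporting")
fig = px.scatter(x=[0, 1, 2, 3], y=[0, 1, 4, 9])

task.get_logger().report_plotly(title="scatter", series="demo", iteration=0, figure=fig)
```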

View File

@@ -6,7 +6,7 @@ The [scalar_reporting.py](https://github.com/allegroai/clearml/blob/master/examp
demonstrates explicit scalar reporting. ClearML reports scalars in the **ClearML Web UI** **>** experiment details
**>** **SCALARS** tab.
-When the script runs, it creates an experiment named `scalar reporting`, which is associated with the `examples` project.
+When the script runs, it creates an experiment named `scalar reporting` in the `examples` project.
To report scalars, call the [Logger.report_scalar](../../references/sdk/logger.md#report_scalar)
method. To report more than one series on the same plot, use the same `title` argument. For different plots, use different
`title` arguments.
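
A sketch of the grouping rule, assuming `logger = task.get_logger()` (the values are synthetic): the shared `title` puts both series on one plot.

```python
# Same title -> one plot; different series -> separate lines on that plot
for i in range(10):
    logger.report_scalar(title="performance", series="train", value=1.0 - i * 0.05, iteration=i)
    logger.report_scalar(title="performance", series="validation", value=1.0 - i * 0.04, iteration=i)
```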

View File

@@ -10,7 +10,7 @@ example demonstrates reporting series data in the following 2D formats:
ClearML reports these plots in the **ClearML Web UI**, experiment details **>** **PLOTS** tab.
-When the script runs, it creates an experiment named `2D plots reporting`, which is associated with the `examples` project.
+When the script runs, it creates an experiment named `2D plots reporting` in the `examples` project.
## Histograms
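
A sketch of one histogram report, assuming `logger = task.get_logger()` (the values are random, and the axis titles illustrative):

```python
import numpy as np

logger.report_histogram(
    title="histogram example",
    series="A",
    iteration=0,
    values=np.random.randint(10, size=10),
    xaxis="bucket",
    yaxis="count",
)
```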

View File

@@ -8,7 +8,7 @@ method.
ClearML reports this text output in the **ClearML Web UI**, experiment details, **CONSOLE** tab.
-When the script runs, it creates an experiment named `text reporting`, which is associated with the `examples` project.
+When the script runs, it creates an experiment named `text reporting` in the `examples` project.
```python
# report text
Logger.current_logger().report_text("hello, this is plain text")
```