---
title: 3D Plots Reporting
---
The [3d_plots_reporting.py](https://github.com/allegroai/clearml/blob/master/examples/reporting/3d_plots_reporting.py)
example demonstrates reporting a series as a surface plot and as a 3D scatter plot.
When the script runs, it creates an experiment named `3D plot reporting`, which is associated with the `examples` project.
**ClearML** reports these plots in the **ClearML Web UI** **>** experiment page **>** **RESULTS** tab **>** **PLOTS** sub-tab.
## Surface plot
To plot a series as a surface plot, use the [Logger.report_surface](../../references/sdk/logger.md#report_surface)
method.
```python
# report 3d surface
surface = np.random.randint(10, size=(10, 10))
Logger.current_logger().report_surface(
    "example_surface",
    "series1",
    iteration=iteration,
    matrix=surface,
    xaxis="title X",
    yaxis="title Y",
    zaxis="title Z",
)
```
Visualize the reported surface plot in **RESULTS** **>** **PLOTS**.
![image](../../img/examples_reporting_01.png)
## 3D scatter plot
To plot a series as a 3-dimensional scatter plot, use the [Logger.report_scatter3d](../../references/sdk/logger.md#report_scatter3d)
method.
```python
# report 3d scatter plot
scatter3d = np.random.randint(10, size=(10, 3))
Logger.current_logger().report_scatter3d(
    "example_scatter_3d",
    "series_xyz",
    iteration=iteration,
    scatter=scatter3d,
    xaxis="title x",
    yaxis="title y",
    zaxis="title z",
)
```
Visualize the reported 3D scatter plot in **RESULTS** **>** **PLOTS**.
![image](../../img/examples_reporting_02.png)

---
title: Artifacts Reporting
---
The [artifacts.py](https://github.com/allegroai/clearml/blob/master/examples/reporting/artifacts.py) example demonstrates
uploading objects (other than models) to storage as experiment artifacts.
These artifacts include:
* Pandas DataFrames
* Local files
* Dictionaries
* Folders
* Numpy objects
* Image files
Artifacts can be uploaded and dynamically tracked, or uploaded without tracking.
Configure **ClearML** for uploading artifacts to any of the supported types of storage, which include local and shared folders,
S3 buckets, Google Cloud Storage, and Azure Storage ([debug sample storage](../../references/sdk/logger.md#set_default_upload_destination)
is configured separately). Configure **ClearML** in any of the following ways:
* In the configuration file, set [default_output_uri](../../configs/clearml_conf.md#sdkdevelopment).
* In code, when [initializing a Task](../../references/sdk/task.md#taskinit), use the `output_uri` parameter (see the sketch after this list).
* In the **ClearML Web UI**, when [modifying an experiment](../../webapp/webapp_exp_tuning.md#output-destination).
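Setting the destination in code, for example, is a one-line change at task creation. Below is a minimal sketch, assuming an S3 bucket (the bucket URL is a placeholder, not from the example script):
```python
from clearml import Task

# store model checkpoints and artifacts in a (hypothetical) S3 bucket
task = Task.init(
    project_name='examples',
    task_name='artifacts example',
    output_uri='s3://my-bucket/clearml-artifacts'
)
```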
When the script runs, it creates an experiment named `artifacts example`, which is associated with the `examples` project.
**ClearML** reports artifacts in the **ClearML Web UI** **>** experiment details **>** **ARTIFACTS** tab.
![image](../../img/examples_reporting_03.png)
## Dynamically tracked artifacts
Currently, **ClearML** supports uploading and dynamically tracking Pandas DataFrames. Use the [Task.register_artifact](../../references/sdk/task.md#register_artifact)
method. If the Pandas DataFrame changes, **ClearML** uploads the changes. The updated artifact is associated with the experiment.
For example:
```python
df = pd.DataFrame(
    {
        'num_legs': [2, 4, 8, 0],
        'num_wings': [2, 0, 0, 0],
        'num_specimen_seen': [10, 2, 1, 8]
    },
    index=['falcon', 'dog', 'spider', 'fish']
)

# Register Pandas object as artifact to watch
# (it will be monitored in the background and automatically synced and uploaded)
task.register_artifact('train', df, metadata={'counting': 'legs', 'max legs': 69})
```
By changing the artifact, and calling the [Task.get_registered_artifacts](../../references/sdk/task.md#get_registered_artifacts)
method to retrieve it, we can see that **ClearML** tracked the change.
```python
# change the artifact object
df.sample(frac=0.5, replace=True, random_state=1)
# or access it from anywhere using the Task's get_registered_artifacts()
Task.current_task().get_registered_artifacts()['train'].sample(frac=0.5, replace=True, random_state=1)
```
## Artifacts without tracking
**ClearML** supports several types of objects that can be uploaded and are not tracked. Use the [Task.upload_artifact](../../references/sdk/task.md#upload_artifact)
method.
Artifacts without tracking include:
* Pandas DataFrames
* Local files
* Dictionaries (stored as JSON)
* Numpy objects (stored as NPZ files)
* Image files (stored as PNG files)
* Folders (stored as ZIP files)
* Wildcards (stored as ZIP files)
### Pandas DataFrames
```python
# add and upload pandas.DataFrame (one-time snapshot of the object)
task.upload_artifact('Pandas', artifact_object=df)
```
### Local files
```python
# add and upload local file artifact
task.upload_artifact('local file', artifact_object=os.path.join('data_samples', 'dancing.jpg'))
```
### Dictionaries
```python
# add and upload a dictionary (stored as JSON)
task.upload_artifact('dictionary', df.to_dict())
```
### Numpy objects
```python
# add and upload a Numpy object (stored as .npz file)
task.upload_artifact('Numpy Eye', np.eye(100, 100))
```
### Image files
```python
# add and upload an image (stored as .png file)
im = Image.open(os.path.join('data_samples', 'dancing.jpg'))
task.upload_artifact('pillow_image', im)
```
### Folders
```python
# add and upload a folder; artifact_object should be the folder path
task.upload_artifact('local folder', artifact_object=os.path.join('data_samples'))
```
### Wildcards
```python
# add and upload a wildcard
task.upload_artifact('wildcard jpegs', artifact_object=os.path.join('data_samples', '*.jpg'))
```

---
title: Explicit Reporting - Jupyter Notebook
---
The [jupyter_logging_example.ipynb](https://github.com/allegroai/clearml/blob/master/examples/reporting/jupyter_logging_example.ipynb)
script demonstrates **ClearML**'s explicit reporting running in a Jupyter Notebook. All **ClearML**
explicit reporting works in Jupyter Notebooks.
This example includes several types of explicit reporting, including:
* Scalars
* Plots
* Media
:::note
In the ``clearml`` GitHub repository, this example includes a clickable icon to open the notebook in Google Colab.
:::
## Scalars
To report scalars, call the [Logger.report_scalar](../../references/sdk/logger.md#report_scalar)
method. The scalar plots appear in the **web UI** in **RESULTS** **>** **SCALARS**.
```python
# report two scalar series on two different graphs
for i in range(10):
    logger.report_scalar("graph A", "series A", iteration=i, value=1./(i+1))
    logger.report_scalar("graph B", "series B", iteration=i, value=10./(i+1))
```
![image](../../img/colab_explicit_reporting_01.png)
```python
# report two scalar series on the same graph
for i in range(10):
    logger.report_scalar("unified graph", "series A", iteration=i, value=1./(i+1))
    logger.report_scalar("unified graph", "series B", iteration=i, value=10./(i+1))
```
![image](../../img/colab_explicit_reporting_02.png)
## Plots
Plots appear in **RESULTS** **>** **PLOTS**.
### 2D Plots
Report 2D scatter plots by calling the [Logger.report_scatter2d](../../references/sdk/logger.md#report_scatter2d) method.
Use the `mode` parameter to plot data points as lines (the default), markers, or both lines and markers.
```python
scatter2d = np.hstack(
    (np.atleast_2d(np.arange(0, 10)).T, np.random.randint(10, size=(10, 1)))
)

# report 2d scatter plot with lines and markers
logger.report_scatter2d(
    "example_scatter",
    "series_lines+markers",
    iteration=iteration,
    scatter=scatter2d,
    xaxis="title x",
    yaxis="title y",
    mode='lines+markers'
)
```
![image](../../img/colab_explicit_reporting_04.png)
### 3D Plots
To plot a series as a 3-dimensional scatter plot, use the [Logger.report_scatter3d](../../references/sdk/logger.md#report_scatter3d) method.
```python
# report 3d scatter plot
scatter3d = np.random.randint(10, size=(10, 3))
logger.report_scatter3d(
    "example_scatter_3d",
    "series_xyz",
    iteration=iteration,
    scatter=scatter3d,
    xaxis="title x",
    yaxis="title y",
    zaxis="title z",
)
```
![image](../../img/colab_explicit_reporting_05.png)
To plot a series as a surface plot, use the [Logger.report_surface](../../references/sdk/logger.md#report_surface)
method.
```python
# report 3d surface
surface = np.random.randint(10, size=(10, 10))
logger.report_surface(
    "example_surface",
    "series1",
    iteration=iteration,
    matrix=surface,
    xaxis="title X",
    yaxis="title Y",
    zaxis="title Z",
)
```
![image](../../img/colab_explicit_reporting_06.png)
### Confusion matrices
Report confusion matrices by calling the [Logger.report_matrix](../../references/sdk/logger.md#report_matrix)
method.
```python
# report confusion matrix
confusion = np.random.randint(10, size=(10, 10))
logger.report_matrix(
    "example_confusion",
    "ignored",
    iteration=iteration,
    matrix=confusion,
    xaxis="title X",
    yaxis="title Y",
)
```
![image](../../img/colab_explicit_reporting_03.png)
### Histograms
Report histograms by calling the [Logger.report_histogram](../../references/sdk/logger.md#report_histogram)
method. To report more than one series on the same plot, use the same `title` argument.
```python
# report a single histogram
histogram = np.random.randint(10, size=10)
logger.report_histogram(
    "single_histogram",
    "random histogram",
    iteration=iteration,
    values=histogram,
    xaxis="title x",
    yaxis="title y",
)
```
![image](../../img/colab_explicit_reporting_12.png)
```python
# report two histograms on the same plot
histogram1 = np.random.randint(13, size=10)
histogram2 = histogram * 0.75
logger.report_histogram(
    "two_histogram",
    "series 1",
    iteration=iteration,
    values=histogram1,
    xaxis="title x",
    yaxis="title y",
)
logger.report_histogram(
    "two_histogram",
    "series 2",
    iteration=iteration,
    values=histogram2,
    xaxis="title x",
    yaxis="title y",
)
```
![image](../../img/colab_explicit_reporting_07.png)
## Media
Report audio, HTML, images, and video by calling the [Logger.report_media](../../references/sdk/logger.md#report_media)
method with the `local_path` parameter. They appear in **RESULTS** **>** **DEBUG SAMPLES**.
The media for these examples is downloaded using the [StorageManager.get_local_copy](../../references/sdk/storage.md#storagemanagerget_local_copy)
method.
For example, to download an image:
```python
image_local_copy = StorageManager.get_local_copy(
    remote_url="https://pytorch.org/tutorials/_static/img/neural-style/picasso.jpg",
    name="picasso.jpg"
)
```
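The audio and video snippets below assume `audio_local_copy` and `video_local_copy` were fetched the same way; a minimal sketch (these URLs are placeholders, not the notebook's own):
```python
from clearml import StorageManager

# hypothetical remote media files, fetched to the local cache like the image above
audio_local_copy = StorageManager.get_local_copy(
    remote_url="https://example.com/media/sample.wav", name="sample.wav")
video_local_copy = StorageManager.get_local_copy(
    remote_url="https://example.com/media/sample.mp4", name="sample.mp4")
```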
### Audio
```python
logger.report_media('audio', 'pink panther', iteration=1, local_path=audio_local_copy)
```
![image](../../img/colab_explicit_reporting_08.png)
### HTML
logger.report_media("html", "url_html", iteration=1, url="https://allegro.ai/docs/index.html")
![image](../../img/colab_explicit_reporting_09.png)
### Images
logger.report_image("image", "image from url", iteration=100, local_path=image_local_copy)
![image](../../img/colab_explicit_reporting_10.png)
### Video
```python
logger.report_media('video', 'big bunny', iteration=1, local_path=video_local_copy)
```
![image](../../img/colab_explicit_reporting_11.png)
## Text
Report text messages by calling the [Logger.report_text](../../references/sdk/logger.md#report_text) method.
```python
logger.report_text("hello, this is plain text")
```
![image](../../img/colab_explicit_reporting_13.png)

---
title: Explicit Reporting
---
In this tutorial, learn how to extend **ClearML**'s automagical capturing of inputs and outputs with explicit reporting.
In this example, we will add the following to the [pytorch_mnist.py](https://github.com/allegroai/clearml/blob/master/examples/frameworks/pytorch/pytorch_mnist.py)
example script from ClearML's GitHub repo:
* Setting an output destination for model checkpoints (snapshots).
* Explicitly logging scalars, other (non-scalar) data, and text.
* Registering an artifact, which is uploaded to **ClearML Server**, with **ClearML** logging any changes to it.
* Uploading an artifact, which is uploaded but whose changes are not logged.
## Prerequisites
* The [clearml](https://github.com/allegroai/clearml) repository is cloned.
* The `clearml` package is installed.
## Before starting
Make a copy of `pytorch_mnist.py` in order to add explicit reporting to it. In the local `clearml` repository's `examples` directory:
```bash
cp pytorch_mnist.py pytorch_mnist_tutorial.py
```
## Step 1: Setting an output destination for model checkpoints
Specify a default output location, which is where model checkpoints (snapshots) and artifacts will be stored when the
experiment runs. Some possible destinations include:
* Local destination
* Shared folder
* Cloud storage:
  * Amazon S3
  * Google Cloud Storage
  * Azure Storage
Specify the output location in the `output_uri` parameter of the [Task.init](../../references/sdk/task.md#taskinit) method.
In this tutorial, we specify a local folder destination.
In `pytorch_mnist_tutorial.py`, change the code from:
```python
task = Task.init(project_name='examples', task_name='pytorch mnist train')
```
to:
```python
model_snapshots_path = '/mnt/clearml'
if not os.path.exists(model_snapshots_path):
    os.makedirs(model_snapshots_path)

task = Task.init(project_name='examples',
                 task_name='extending automagical ClearML example',
                 output_uri=model_snapshots_path)
```
When the script runs, **ClearML** creates the following directory structure:
```
+ - <output destination name>
|   +-- <project name>
|       +-- <task name>.<Task Id>
|           +-- models
|           +-- artifacts
```
and puts the model checkpoints (snapshots) and artifacts in that folder.
For example, if the Task ID is `9ed78536b91a44fbb3cc7a006128c1b0`, then the directory structure will be:
```
+ - model_snapshots
|   +-- examples
|       +-- extending automagical ClearML example.9ed78536b91a44fbb3cc7a006128c1b0
|           +-- models
|           +-- artifacts
```
## Step 2: Logger class reporting methods
In addition to **ClearML** automagical logging, the **ClearML** Python
package contains methods for explicit reporting of plots, log text, media, and tables. These methods include:
* [Logger.report_histogram](../../references/sdk/logger.md#report_histogram)
* [Logger.report_confusion_matrix](../../references/sdk/logger.md#report_confusion_matrix)
* [Logger.report_line_plot](../../references/sdk/logger.md#report_line_plot)
* [Logger.report_scatter2d](../../references/sdk/logger.md#report_scatter2d)
* [Logger.report_scatter3d](../../references/sdk/logger.md#report_scatter3d)
* [Logger.report_surface](../../references/sdk/logger.md#report_surface) (surface diagrams)
* [Logger.report_image](../../references/sdk/logger.md#report_image) - Report an image and upload its contents.
* [Logger.report_table](../../references/sdk/logger.md#report_table) - Report a table as a Pandas DataFrame, CSV file,
or URL for a CSV file.
* [Logger.report_media](../../references/sdk/logger.md#report_media) - Report media including images, audio, and video.
* [Logger.get_default_upload_destination](../../references/sdk/logger.md#get_default_upload_destination) - Retrieve the destination that is set for uploaded media.
### Get a logger
First, create a logger for the Task using the [Task.get_logger](../../references/sdk/task.md#get_logger)
method.
```python
logger = task.get_logger()
```
### Plot scalar metrics
Add scalar metrics using the [Logger.report_scalar](../../references/sdk/logger.md#report_scalar)
method to report loss metrics.
```python
def train(args, model, device, train_loader, optimizer, epoch):
    save_loss = []
    model.train()
    for batch_idx, (data, target) in enumerate(train_loader):
        data, target = data.to(device), target.to(device)
        optimizer.zero_grad()
        output = model(data)
        loss = F.nll_loss(output, target)
        loss.backward()
        save_loss.append(loss)
        optimizer.step()
        if batch_idx % args.log_interval == 0:
            print('Train Epoch: {} [{}/{} ({:.0f}%)]\tLoss: {:.6f}'.format(
                epoch, batch_idx * len(data), len(train_loader.dataset),
                100. * batch_idx / len(train_loader), loss.item()))
            # Add manual scalar reporting for loss metrics
            logger.report_scalar(title='Scalar example {} - epoch'.format(epoch),
                                 series='Loss', value=loss.item(), iteration=batch_idx)
```
### Plot other (not scalar) data
The script contains a function named `test`, which computes the loss and the number of correct predictions for the trained
model. We add a histogram and a confusion matrix to log them.
```python
def test(args, model, device, test_loader):
    save_test_loss = []
    save_correct = []
    model.eval()
    test_loss = 0
    correct = 0
    with torch.no_grad():
        for data, target in test_loader:
            data, target = data.to(device), target.to(device)
            output = model(data)
            # sum up batch loss
            test_loss += F.nll_loss(output, target, reduction='sum').item()
            # get the index of the max log-probability
            pred = output.argmax(dim=1, keepdim=True)
            correct += pred.eq(target.view_as(pred)).sum().item()

            save_test_loss.append(test_loss)
            save_correct.append(correct)

    test_loss /= len(test_loader.dataset)

    print('\nTest set: Average loss: {:.4f}, Accuracy: {}/{} ({:.0f}%)\n'.format(
        test_loss, correct, len(test_loader.dataset),
        100. * correct / len(test_loader.dataset)))

    logger.report_histogram(title='Histogram example', series='correct',
                            iteration=1, values=save_correct, xaxis='Test', yaxis='Correct')

    # Manually report test loss and correct as a confusion matrix
    matrix = np.array([save_test_loss, save_correct])
    logger.report_confusion_matrix(title='Confusion matrix example',
                                   series='Test loss / correct', matrix=matrix, iteration=1)
```
### Log text
Extend **ClearML** by explicitly logging text, including errors, warnings, and debugging statements. We use the [Logger.report_text](../../references/sdk/logger.md#report_text)
method and its argument `level` to report a debugging message.
```python
logger.report_text('The default output destination for model snapshots and artifacts is: {}'.format(model_snapshots_path), level=logging.DEBUG)
```
## Step 3: Registering artifacts
Registering an artifact uploads it to **ClearML Server**, and if it changes, the change is logged in **ClearML Server**.
Currently, **ClearML** supports Pandas DataFrames as registered artifacts.
### Register the artifact
In the tutorial script's `test` function, we assign the test loss and correct data to a Pandas DataFrame object, and register
that Pandas DataFrame using the [Task.register_artifact](../../references/sdk/task.md#register_artifact) method.
```python
# Create the Pandas DataFrame
test_loss_correct = {
    'test loss': save_test_loss,
    'correct': save_correct
}
df = pd.DataFrame(test_loss_correct, columns=['test loss', 'correct'])

# Register the test loss and correct as a Pandas DataFrame artifact
task.register_artifact('Test_Loss_Correct', df, metadata={'metadata string': 'apple',
    'metadata int': 100, 'metadata dict': {'dict string': 'pear', 'dict int': 200}})
```
### Reference the registered artifact
Once an artifact is registered, it can be referenced and utilized in the Python experiment script.
In the tutorial script, we add [Task.current_task](../../references/sdk/task.md#taskcurrent_task) and
[Task.get_registered_artifacts](../../references/sdk/task.md#get_registered_artifacts)
methods to take a sample.
```python
# Once the artifact is registered, we can get it and work with it. Here, we sample it.
sample = Task.current_task().get_registered_artifacts()['Test_Loss_Correct'].sample(
    frac=0.5, replace=True, random_state=1)
```
## Step 4: Uploading artifacts
Artifacts can be uploaded to the **ClearML Server**, but changes to them are not logged.
Supported artifacts include:
* Pandas DataFrames
* Files of any type, including image files
* Folders - stored as ZIP files
* Images - stored as PNG files
* Dictionaries - stored as JSONs
* Numpy arrays - stored as NPZ files
In the tutorial script, we upload the loss data as an artifact using the [Task.upload_artifact](../../references/sdk/task.md#upload_artifact)
method with metadata specified in the `metadata` parameter.
```python
# Upload test loss as an artifact. Here, the artifact is a NumPy array
task.upload_artifact('Predictions', artifact_object=np.array(save_test_loss),
                     metadata={'metadata string': 'banana', 'metadata integer': 300,
                               'metadata dictionary': {'dict string': 'orange', 'dict int': 400}})
```
## Additional information
After extending the Python experiment script, run it and view the results in the **ClearML Web UI**.
```bash
python pytorch_mnist_tutorial.py
```
**To view the experiment results, do the following:**
1. In the **ClearML Web UI**, on the Projects page, click the examples project.
1. In the experiments table, click the **extending automagical ClearML example** experiment.
1. In the **ARTIFACTS** tab, **DATA AUDIT** section, click **Test_Loss_Correct**. The registered Pandas DataFrame appears,
including the file path, size, hash, metadata, and a preview.
1. In the **OTHER** section, click **Predictions**. The uploaded NumPy array appears, including its related information.
1. Click the **RESULTS** tab.
1. Click the **LOG** sub-tab, and see the debugging message showing the Pandas DataFrame sample.
1. Click the **SCALARS** sub-tab, and see the scalar plots for epoch logging loss.
1. Click the **PLOTS** sub-tab, and see the confusion matrix and histogram.
## Next Steps
* See the [User Interface](../../webapp/webapp_overview.md) section to learn about its features.
* See the [ClearML Python Package Reference](../../clearml_sdk.md) to learn about
all the available classes and methods.

---
title: HTML Reporting
---
The [html_reporting.py](https://github.com/allegroai/clearml/blob/master/examples/reporting/html_reporting.py) example
demonstrates reporting local HTML files and HTML by URL, using the [Logger.report_media](../../references/sdk/logger.md#report_media)
method.
**ClearML** reports these HTML debug samples in the **ClearML Web UI** **>** experiment details **>** **RESULTS** tab **>**
**DEBUG SAMPLES** sub-tab.
When the script runs, it creates an experiment named `html samples reporting`, which is associated with the `examples` project.
![image](../../img/examples_reporting_05.png)
## Reporting HTML URLs
Report HTML by URL, using the `Logger.report_media` method's `url` parameter.
See the example script's [report_html_url](https://github.com/allegroai/clearml/blob/master/examples/reporting/html_reporting.py#L16)
function, which reports the **ClearML** documentation's home page.
Logger.current_logger().report_media("html", "url_html", iteration=iteration, url="https://allegro.ai/docs/index.html")
## Reporting HTML local files
Report the following using the `Logger.report_media` method's `local_path` parameter:
* [Interactive HTML](#interactive-html)
* [Bokeh GroupBy HTML](#bokeh-groupby-html)
* [Bokeh Graph HTML](#bokeh-graph-html)
* [Bokeh Image HTML](#bokeh-image-html)
### Interactive HTML
See the example script's [report_html_periodic_table](https://github.com/allegroai/clearml/blob/master/examples/reporting/html_reporting.py#L26) function, which reports a file created from Bokeh sample data.
Logger.current_logger().report_media("html", "periodic_html", iteration=iteration, local_path="periodic.html")
### Bokeh GroupBy HTML
See the example script's [report_html_groupby](https://github.com/allegroai/clearml/blob/master/examples/reporting/html_reporting.py#L117) function, which reports a Pandas GroupBy with nested HTML, created from Bokeh sample data.
```python
Logger.current_logger().report_media(
    "html",
    "pandas_groupby_nested_html",
    iteration=iteration,
    local_path="bar_pandas_groupby_nested.html",
)
```
### Bokeh Graph HTML
See the example script's [report_html_graph](https://github.com/allegroai/clearml/blob/master/examples/reporting/html_reporting.py#L162) function, which reports a Bokeh plot created from Bokeh sample data.
Logger.current_logger().report_media("html", "Graph_html", iteration=iteration, local_path="graph.html")
### Bokeh Image HTML
See the example script's [report_html_image](https://github.com/allegroai/clearml/blob/master/examples/reporting/html_reporting.py#L195) function, which reports an image created from Bokeh sample data.
Logger.current_logger().report_media("html", "Spectral_html", iteration=iteration, local_path="image.html")

---
title: Hyperparameters Reporting
---
The [hyper_parameters.py](https://github.com/allegroai/clearml/blob/master/examples/reporting/hyper_parameters.py) example
script demonstrates:
* **ClearML**'s automatic logging of `argparse` command line options and TensorFlow Definitions
* Logging user-defined hyperparameters with a parameter dictionary and connecting the dictionary to a Task.
Hyperparameters appear in the **web UI** in the experiment's page, under **CONFIGURATIONS** **>** **HYPER PARAMETERS**.
Each type is in its own subsection. Parameters from older experiments are grouped together with the `argparse` command
line options (in the **Args** subsection).
When the script runs, it creates an experiment named `hyper-parameters example`, which is associated with the `examples` project.
## argparse command line options
If code uses `argparse` and initializes a Task, **ClearML** automatically logs the `argparse` arguments.
```python
parser = ArgumentParser()
parser.add_argument('--argparser_int_value', help='integer value', type=int, default=1)
parser.add_argument('--argparser_disabled', action='store_true', default=False, help='disables something')
parser.add_argument('--argparser_str_value', help='string value', default='a string')

args = parser.parse_args()
```
Command line options appear in **HYPER PARAMETERS** **>** **Args**.
![image](../../img/examples_reporting_hyper_param_01.png)
## TensorFlow Definitions
**ClearML** automatically logs TensorFlow Definitions, whether they are defined before or after the Task is initialized.
```python
flags.DEFINE_string('echo', None, 'Text to echo.')
flags.DEFINE_string('another_str', 'My string', 'A string', module_name='test')

task = Task.init(project_name='examples', task_name='hyper-parameters example')

flags.DEFINE_integer('echo3', 3, 'Text to echo.')
flags.DEFINE_string('echo5', '5', 'Text to echo.', module_name='test')
```
TensorFlow Definitions appear in **HYPER PARAMETERS** **>** **TF_DEFINE**.
![image](../../img/examples_reporting_hyper_param_03.png)
## Parameter dictionaries
Connect a parameter dictionary to a Task by calling the [Task.connect](../../references/sdk/task.md#connect)
method, and **ClearML** logs the parameters. **ClearML** also tracks changes to the parameters.
```python
parameters = {
    'list': [1, 2, 3],
    'dict': {'a': 1, 'b': 2},
    'tuple': (1, 2, 3),
    'int': 3,
    'float': 2.2,
    'string': 'my string',
}

parameters = task.connect(parameters)

# adding new parameter after connect (will be logged as well)
parameters['new_param'] = 'this is new'

# changing the value of a parameter (new value will be stored instead of previous one)
parameters['float'] = '9.9'
```
Parameters from dictionaries connected to Tasks appear in **HYPER PARAMETERS** **>** **General**.
![image](../../img/examples_reporting_hyper_param_02.png)

---
title: Images Reporting
---
The [image_reporting.py](https://github.com/allegroai/clearml/blob/master/examples/reporting/image_reporting.py) example
demonstrates reporting (uploading) images in several formats, including:
* NumPy arrays
  * uint8
  * uint8 RGB
* PIL Image objects
* Local files
**ClearML** uploads images to the bucket specified in the **ClearML** configuration file, or to a destination you configure
with [Logger.set_default_upload_destination](../../references/sdk/logger.md#set_default_upload_destination)
(storage for [artifacts](../../fundamentals/artifacts.md#setting-upload-destination) is configured separately). Set the
storage credentials in the **ClearML** configuration file.
When the script runs, it creates an experiment named `image reporting`, which is associated with the `examples` project.
Report images using several formats by calling the [Logger.report_image](../../references/sdk/logger.md#report_image)
method:
```python
# report image as float image
m = np.eye(256, 256, dtype=np.float64)
Logger.current_logger().report_image("image", "image float", iteration=iteration, image=m)

# report image as uint8
m = np.eye(256, 256, dtype=np.uint8) * 255
Logger.current_logger().report_image("image", "image uint8", iteration=iteration, image=m)

# report image as uint8 RGB
m = np.concatenate((np.atleast_3d(m), np.zeros((256, 256, 2), dtype=np.uint8)), axis=2)
Logger.current_logger().report_image("image", "image color red", iteration=iteration, image=m)

# report PIL Image object
image_open = Image.open(os.path.join("data_samples", "picasso.jpg"))
Logger.current_logger().report_image("image", "image PIL", iteration=iteration, image=image_open)
```
**ClearML** reports these images as debug samples in the **ClearML Web UI** **>** experiment details **>** **RESULTS** tab
**>** **DEBUG SAMPLES** sub-tab.
![image](../../img/examples_reporting_07.png)
Double-click a thumbnail, and the image viewer opens.
![image](../../img/examples_reporting_07a.png)

---
title: Manual Matplotlib Reporting
---
The [matplotlib_manual_reporting.py](https://github.com/allegroai/clearml/blob/master/examples/reporting/matplotlib_manual_reporting.py)
example demonstrates reporting using Matplotlib and Seaborn with **ClearML**.
When the script runs, it creates an experiment named `Manual Matplotlib example`, which is associated with the
`examples` project.
The Matplotlib figure reported by calling the [Logger.report_matplotlib_figure](../../references/sdk/logger.md#report_matplotlib_figure)
method appears in **RESULTS** **>** **PLOTS**.
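As a minimal sketch of that call (the figure below is a stand-in; the example script draws its own Matplotlib and Seaborn plots):
```python
import matplotlib.pyplot as plt
from clearml import Task

task = Task.init(project_name='examples', task_name='Manual Matplotlib example')

# a stand-in figure; any Matplotlib figure can be reported this way
fig, ax = plt.subplots()
ax.plot([1, 2, 3], [4, 2, 5])

task.get_logger().report_matplotlib_figure(
    title="Manual Reporting", series="Just a plot", iteration=0, figure=fig)
```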
![image](../../img/manual_matplotlib_reporting_01.png)

---
title: Media Reporting
---
The [media_reporting.py](https://github.com/allegroai/clearml/blob/master/examples/reporting/media_reporting.py) example
demonstrates reporting (uploading) images, audio, and video. Use the [Logger.report_media](../../references/sdk/logger.md#report_media)
method to upload from:
* Local path
* BytesIO stream (see the sketch at the end of this page)
* URL of media already uploaded to some storage
**ClearML** uploads media to the bucket specified in the **ClearML** configuration file, or to a destination you configure
with [Logger.set_default_upload_destination](../../references/sdk/logger.md#set_default_upload_destination)
(storage for [artifacts](../../fundamentals/artifacts.md#setting-upload-destination) is configured separately). Set the
storage credentials in the **ClearML** [configuration file](../../configs/clearml_conf.md).
**ClearML** reports media in the **ClearML Web UI** **>** experiment details **>** **RESULTS** tab **>** **DEBUG SAMPLES**
sub-tab.
When the script runs, it creates an experiment named `audio and video reporting`, which is associated with the `examples`
project.
## Reporting (uploading) media from a source by URL
Report by calling the [Logger.report_media](../../references/sdk/logger.md#report_media)
method using the `url` parameter.
```python
# report video, an already uploaded video media (url)
Logger.current_logger().report_media(
    'video', 'big bunny', iteration=1,
    url='https://test-videos.co.uk/vids/bigbuckbunny/mp4/h264/720/Big_Buck_Bunny_720_10s_1MB.mp4')

# report audio, an already uploaded audio media (url)
Logger.current_logger().report_media(
    'audio', 'pink panther', iteration=1,
    url='https://www2.cs.uic.edu/~i101/SoundFiles/PinkPanther30.wav')
```
The reported audio can be viewed in the **DEBUG SAMPLES** sub-tab. Double-click a thumbnail, and the audio player opens.
![image](../../img/examples_reporting_08.png)
## Reporting (uploading) media from a local file
Use the `local_path` parameter.
```python
# report audio from a local media file
Logger.current_logger().report_media(
    'audio', 'tada', iteration=1,
    local_path=os.path.join('data_samples', 'sample.mp3'))
```
The reported media can be viewed in the **DEBUG SAMPLES** sub-tab. Double-click a thumbnail, and the player opens.
![image](../../img/examples_reporting_09.png)
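## Reporting (uploading) media from a BytesIO stream
Media can also be reported from an in-memory stream, using the `stream` and `file_extension` parameters; a minimal sketch (the payload below is an assumption, not from the example script):
```python
import io

from clearml import Logger

# hypothetical in-memory payload; report_media reads the bytes from the stream
buffer = io.BytesIO(b"some in-memory media bytes")
Logger.current_logger().report_media(
    'stream', 'in_memory_sample', iteration=1,
    stream=buffer, file_extension='.txt')
```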

---
title: Configuring Models
---
The [model_config.py](https://github.com/allegroai/clearml/blob/master/examples/reporting/model_config.py) example demonstrates
configuring a model and defining label enumeration. Connect the configuration and label enumeration to a Task and, once
connected, **ClearML** tracks any changes to them. When **ClearML** stores a model, in any framework, **ClearML** stores
the configuration and label enumeration with it.
When the script runs, it creates an experiment named `Model configuration example`, which is associated with the `examples` project.
## Configuring models
### Using a configuration file
Connect a configuration file to a Task by calling the [Task.connect_configuration](../../references/sdk/task.md#connect_configuration)
method with the file as an argument.
```python
# Connect a local configuration file
config_file = os.path.join('data_samples', 'sample.json')
config_file = task.connect_configuration(config_file)
```
**ClearML** reports the configuration in the **ClearML Web UI**, experiment details, **CONFIGURATION** tab, **CONFIGURATION OBJECTS**
area. See the image in the next section.
### Configuration dictionary
Connect a configuration dictionary to a Task by creating a dictionary, and then calling the [Task.connect_configuration](../../references/sdk/task.md#connect_configuration)
method with the dictionary as an argument. After the configuration is connected, **ClearML** tracks changes to it.
```python
model_config_dict = {
    'value': 13.37,
    'dict': {'sub_value': 'string', 'sub_integer': 11},
    'list_of_ints': [1, 2, 3, 4],
}
model_config_dict = task.connect_configuration(model_config_dict)

# We now update the dictionary after connecting it, and the changes will be tracked as well.
model_config_dict['new value'] = 10
model_config_dict['value'] *= model_config_dict['new value']
```
**ClearML** reports the configuration in the **ClearML Web UI** **>** experiment details **>** **CONFIGURATION** tab **>**
**CONFIGURATION OBJECTS** area.
![image](../../img/examples_reporting_config.png)
## Label enumeration
Connect a label enumeration dictionary by creating the dictionary, and then calling the [Task.connect_label_enumeration](../../references/sdk/task.md#connect_label_enumeration)
method with the dictionary as an argument.
```python
# store the label enumeration of the training model
labels = {'background': 0, 'cat': 1, 'dog': 2}
task.connect_label_enumeration(labels)
```

---
title: Tables Reporting (Pandas and CSV Files)
---
The [pandas_reporting.py](https://github.com/allegroai/clearml/blob/master/examples/reporting/pandas_reporting.py) example demonstrates reporting tabular data from Pandas DataFrames and CSV files as tables.
**ClearML** reports these tables in the **ClearML Web UI** **>** experiment details **>** **RESULTS** tab **>** **PLOTS**
sub-tab.
When the script runs, it creates an experiment named `pandas table reporting`, which is associated with the `examples` project.
## Reporting Pandas DataFrames as tables
Report Pandas DataFrames by calling the [Logger.report_table](../../references/sdk/logger.md#report_table)
method, and providing the DataFrame in the `table_plot` parameter.
```python
# Report table - DataFrame with index
df = pd.DataFrame(
    {
        "num_legs": [2, 4, 8, 0],
        "num_wings": [2, 0, 0, 0],
        "num_specimen_seen": [10, 2, 1, 8],
    },
    index=["falcon", "dog", "spider", "fish"],
)
df.index.name = "id"
Logger.current_logger().report_table("table pd", "PD with index", iteration=iteration, table_plot=df)
```
![image](../../img/examples_reporting_12.png)
## Reporting CSV files as tables
Report CSV files by providing the URL of the CSV file in the `url` parameter. For a local CSV file, use the `csv` parameter instead (a sketch follows the example below).
```python
# Report table - CSV from path
csv_url = "https://raw.githubusercontent.com/plotly/datasets/master/Mining-BTC-180.csv"
Logger.current_logger().report_table("table csv", "remote csv", iteration=iteration, url=csv_url)
```
![image](../../img/examples_reporting_11.png)
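For a local file, the call has the same shape with `csv` in place of `url`; a minimal sketch (the file path is a placeholder):
```python
# Report table - local CSV file (hypothetical path)
Logger.current_logger().report_table(
    "table csv", "local csv", iteration=iteration, csv="data_samples/sample.csv")
```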

---
title: Plotly Reporting
---
The [plotly_reporting.py](https://github.com/allegroai/clearml/blob/master/examples/reporting/plotly_reporting.py) example
demonstrates **ClearML**'s Plotly integration and reporting.
Report Plotly plots in **ClearML** by calling the [`Logger.report_plotly`](../../references/sdk/logger.md#report_plotly) method, and passing the
Plotly figure in the `figure` parameter.
In this example, the Plotly figure is created using `plotly.express.scatter` (see [Scatter Plots in Python](https://plotly.com/python/line-and-scatter/)
in the Plotly documentation):
```python
# Iris dataset
df = px.data.iris()

# create complex plotly figure
fig = px.scatter(df, x="sepal_width", y="sepal_length", color="species", marginal_y="rug", marginal_x="histogram")

# report the plotly figure
task.get_logger().report_plotly(title="iris", series="sepal", iteration=0, figure=fig)
```
When the script runs, it creates an experiment named `plotly reporting`, which is associated with the `examples` project.
**ClearML** reports Plotly plots in the **ClearML Web UI** **>** experiment details **>** **RESULTS** tab **>** **PLOTS**
sub-tab.
![image](../../img/examples_reporting_13.png)

---
title: Scalars Reporting
---
The [scalar_reporting.py](https://github.com/allegroai/clearml/blob/master/examples/reporting/scalar_reporting.py) script
demonstrates explicit scalar reporting. **ClearML** reports scalars in the **ClearML Web UI** **>** experiment details **>**
**RESULTS** tab **>** **SCALARS** sub-tab.
When the script runs, it creates an experiment named `scalar reporting`, which is associated with the `examples` project.
To report scalars, call the [Logger.report_scalar](../../references/sdk/logger.md#report_scalar)
method. To report more than one series on the same plot, use the same `title` argument. For different plots, use different
`title` arguments.
```python
# report two scalar series on the same graph
for i in range(100):
    Logger.current_logger().report_scalar("unified graph", "series A", iteration=i, value=1./(i+1))
    Logger.current_logger().report_scalar("unified graph", "series B", iteration=i, value=10./(i+1))

# report two scalar series on two different graphs
for i in range(100):
    Logger.current_logger().report_scalar("graph A", "series A", iteration=i, value=1./(i+1))
    Logger.current_logger().report_scalar("graph B", "series B", iteration=i, value=10./(i+1))
```
![image](../../img/examples_reporting_14.png)

---
title: 2D Plots Reporting
---
The [scatter_hist_confusion_mat_reporting.py](https://github.com/allegroai/clearml/blob/master/examples/reporting/scatter_hist_confusion_mat_reporting.py)
example demonstrates reporting series data in the following 2D formats:
* [Histograms](#histograms)
* [Confusion matrices](#confusion-matrices)
* [Scatter plots](#2d-scatter-plots)
**ClearML** reports these plots in the **ClearML Web UI**, experiment details **>** **RESULTS** tab **>** **PLOTS** sub-tab.
When the script runs, it creates an experiment named `2D plots reporting`, which is associated with the `examples` project.
## Histograms
Report histograms by calling the [Logger.report_histogram](../../references/sdk/logger.md#report_histogram)
method. To report more than one series on the same plot, use the same `title` argument. For different plots, use different
`title` arguments. Specify the type of histogram with the `mode` parameter. The `mode` values are `group` (the default),
`stack`, and `relative` (a stacked example is sketched after the plots below).
```python
# report a single histogram
histogram = np.random.randint(10, size=10)
Logger.current_logger().report_histogram(
    "single_histogram",
    "random histogram",
    iteration=iteration,
    values=histogram,
    xaxis="title x",
    yaxis="title y",
)

# report two histograms on the same graph (plot)
histogram1 = np.random.randint(13, size=10)
histogram2 = histogram * 0.75
Logger.current_logger().report_histogram(
    "two_histogram",
    "series 1",
    iteration=iteration,
    values=histogram1,
    xaxis="title x",
    yaxis="title y",
)
Logger.current_logger().report_histogram(
    "two_histogram",
    "series 2",
    iteration=iteration,
    values=histogram2,
    xaxis="title x",
    yaxis="title y",
)
```
![image](../../img/examples_reporting_15.png)
![image](../../img/examples_reporting_15a.png)
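The snippets above use the default `group` mode; switching to a stacked histogram is a one-argument change, sketched here with the same series (the `stacked_histogram` title is an assumption):
```python
# report the same two series stacked on a single plot
Logger.current_logger().report_histogram(
    "stacked_histogram", "series 1", iteration=iteration,
    values=histogram1, xaxis="title x", yaxis="title y", mode='stack')
Logger.current_logger().report_histogram(
    "stacked_histogram", "series 2", iteration=iteration,
    values=histogram2, xaxis="title x", yaxis="title y", mode='stack')
```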
## Confusion Matrices
Report confusion matrices by calling the [Logger.report_matrix](../../references/sdk/logger.md#report_matrix)
method.
```python
# report confusion matrix
confusion = np.random.randint(10, size=(10, 10))
Logger.current_logger().report_matrix(
    "example_confusion",
    "ignored",
    iteration=iteration,
    matrix=confusion,
    xaxis="title X",
    yaxis="title Y",
)
```
![image](../../img/examples_reporting_16.png)
```python
# report confusion matrix with (0,0) at the top left
Logger.current_logger().report_matrix(
    "example_confusion_0_0_at_top",
    "ignored",
    iteration=iteration,
    matrix=confusion,
    xaxis="title X",
    yaxis="title Y",
    yaxis_reversed=True,
)
```
## 2D scatter plots
Report 2D scatter plots by calling the [Logger.report_scatter2d](../../references/sdk/logger.md#report_scatter2d)
method. Use the `mode` parameter to plot data points with lines (by default), markers, or both lines and markers.
```python
scatter2d = np.hstack(
    (np.atleast_2d(np.arange(0, 10)).T, np.random.randint(10, size=(10, 1)))
)

# report 2d scatter plot with lines
Logger.current_logger().report_scatter2d(
    "example_scatter",
    "series_xy",
    iteration=iteration,
    scatter=scatter2d,
    xaxis="title x",
    yaxis="title y",
)

# report 2d scatter plot with markers
Logger.current_logger().report_scatter2d(
    "example_scatter",
    "series_markers",
    iteration=iteration,
    scatter=scatter2d,
    xaxis="title x",
    yaxis="title y",
    mode='markers'
)

# report 2d scatter plot with lines and markers
Logger.current_logger().report_scatter2d(
    "example_scatter",
    "series_lines+markers",
    iteration=iteration,
    scatter=scatter2d,
    xaxis="title x",
    yaxis="title y",
    mode='lines+markers'
)
```
![image](../../img/examples_reporting_17.png)

---
title: Text Reporting
---
The [text_reporting.py](https://github.com/allegroai/clearml/blob/master/examples/reporting/text_reporting.py) script
demonstrates reporting explicit text, by calling the [Logger.report_text](../../references/sdk/logger.md#report_text)
method.
**ClearML** reports this text in the **ClearML Web UI**, experiment details, **RESULTS** tab, **CONSOLE** sub-tab.
When the script runs, it creates an experiment named `text reporting`, which is associated with the `examples` project.
```python
# report text
Logger.current_logger().report_text("hello, this is plain text")
```
![image](../../img/examples_reporting_text.png)