Small fixes (#131)

This commit is contained in:
pollfly 2021-12-14 15:12:30 +02:00 committed by GitHub
parent 6ae75beaa2
commit ec304690b6
No known key found for this signature in database
GPG Key ID: 4AEE18F83AFDEB23
25 changed files with 622 additions and 432 deletions


@ -37,7 +37,7 @@ A specific branch and commit ID, other than latest commit in master, to be execu
`--branch <branch_name> --commit <commit_id>` flags.
If unspecified, `clearml-task` will use the latest commit from the master branch.
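For example, a hypothetical invocation pinning a run to a specific branch and commit might look like the following (the repository URL, commit hash, script name, and queue are placeholders, not values from this page):

```bash
clearml-task --project examples --name remote-execution \
  --repo https://github.com/allegroai/events.git \
  --branch dev --commit 0123456789abcdef \
  --script train.py --queue default
```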
### Command Line Options
<div className="tbl-cmd">


@ -49,10 +49,10 @@ Task id=2f96ee95b05d4693b360d0fcbb26b727 sent for execution on queue default
Execution log at: https://app.community.clear.ml/projects/552d5399112d47029c146d5248570295/experiments/2f96ee95b05d4693b360d0fcbb26b727/output/log
```
:::note Adding Requirements
`clearml-task` automatically finds the requirements.txt file in remote repositories.
If a remote repo does not have such a file, make sure to either add one with all the required Python packages,
or add the `--packages "<package_name>"` flag to the command (for example: `--packages "tqdm>=2.1" "scikit-learn"`).
:::
<br />


@ -83,7 +83,9 @@ dataset_project = "dataset_examples"
from clearml import Dataset
dataset_path = Dataset.get(
    dataset_name=dataset_name,
    dataset_project=dataset_project
).get_local_copy()
trainset = datasets.CIFAR10(
root=dataset_path,


@ -24,7 +24,9 @@ We first need to obtain a local copy of the CIFAR dataset.
from clearml import StorageManager
manager = StorageManager()
dataset_path = manager.get_local_copy(
    remote_url="https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz"
)
```
This script downloads the data and `dataset_path` contains the path to the downloaded data.
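For context, a minimal sketch of how the returned path might then be handed to a standard loader (this assumes `torchvision` is installed and that `get_local_copy()` extracted the archive, which it does by default for compressed files):

```python
from clearml import StorageManager
from torchvision import datasets, transforms

manager = StorageManager()
dataset_path = manager.get_local_copy(
    remote_url="https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz"
)

# dataset_path points at the local (extracted) copy, so no re-download is needed
trainset = datasets.CIFAR10(
    root=dataset_path,
    train=True,
    download=False,
    transform=transforms.ToTensor()
)
```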


@ -26,8 +26,12 @@ The example uploads a dictionary as an artifact in the main Task by calling the
method on [`Task.current_task`](../../references/sdk/task.md#taskcurrent_task) (the main Task). The dictionary contains the [`dist.rank`](https://pytorch.org/docs/stable/distributed.html#torch.distributed.get_rank)
of the subprocess, making each unique.
```python
Task.current_task().upload_artifact(
    'temp {:02d}'.format(dist.get_rank()),
    artifact_object={'worker_rank': dist.get_rank()}
)
```
All of these artifacts appear in the main Task under **ARTIFACTS** **>** **OTHER**.
@ -40,8 +44,14 @@ method on `Task.current_task().get_logger`, which is the logger for the main Tas
with the same title (`loss`), but a different series name (containing the subprocess' `rank`), all loss scalar series are
logged together.
```python
Task.current_task().get_logger().report_scalar(
    'loss',
    'worker {:02d}'.format(dist.get_rank()),
    value=loss.item(),
    iteration=i
)
```
The single scalar plot for loss appears in **RESULTS** **>** **SCALARS**.
@ -49,12 +59,14 @@ The single scalar plot for loss appears in **RESULTS** **>** **SCALARS**.
## Hyperparameters
**ClearML** automatically logs the argparse command line options. Since the [`Task.connect`](../../references/sdk/task#connect)
method is called on [`Task.current_task`](../../references/sdk/task.md#taskcurrent_task), they are logged in the main Task. A different hyperparameter key is used in each
subprocess, so they do not overwrite each other in the main Task.
```python
param = {'worker_{}_stuff'.format(dist.get_rank()): 'some stuff ' + str(randint(0, 100))}
Task.current_task().connect(param)
```
All the hyperparameters appear in **CONFIGURATIONS** **>** **HYPER PARAMETERS**.


@ -14,11 +14,15 @@ which always returns the main Task.
## Hyperparameters
ClearML automatically logs the command line options defined with `argparse`. A parameter dictionary is logged by
connecting it to the Task using a call to the [Task.connect](../../references/sdk/task#connect) method.
```python
additional_parameters = {
    'stuff_' + str(randint(0, 100)): 'some stuff ' + str(randint(0, 100))
}
Task.current_task().connect(additional_parameters)
```
Command line options appear in **CONFIGURATIONS** **>** **HYPER PARAMETERS** **>** **Args**.


@ -19,23 +19,20 @@ a shell script when a docker is started, but before an experiment is run.
## Steps
1. Open your ClearML configuration file for editing. Depending upon your operating system, it is:
   * Linux - `~/clearml.conf`
   * Mac - `$HOME/clearml.conf`
   * Windows - `\User\<username>\clearml.conf`
When you open up the file, the first line should say: `# CLEARML-AGENT configuration file`
1. In the file, search for `extra_docker_shell_script:`, which is where we will be putting our extra script. If
   it is commented out, make sure to uncomment the line. We will use the example script that is already there: `["apt-get install -y bindfs", ]`.
1. Search for `docker_force_pull` in the document, and make sure that it is set to `true`, so that your docker
   image will be updated (both settings are shown in the excerpt after these steps).
1. Run the `clearml-agent` in docker mode: `clearml-agent daemon --docker --queue default`. The agent will use the default
CUDA/NVIDIA Docker image.
1. Enqueue any ClearML Task to the `default` queue, which the Agent is now listening to. The Agent pulls the Task, and then reproduces it,
and now it will execute the `extra_docker_shell_script` that was put in the configuration file. Then the code will be
executed in the updated docker container. If we look at the console output in the web UI, the third entry should start
with `Executing: ['docker', 'run', '-t', '--gpus...'`, and towards the end of the entry, where the downloaded packages are
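For reference, a minimal sketch of how the two settings from steps 2 and 3 might look in `clearml.conf` after editing (the enclosing `agent` section is an assumption about the file's layout; the key names and values are the ones used in the steps above):

```
agent {
    # run this script inside the docker container on startup, before the experiment
    extra_docker_shell_script: ["apt-get install -y bindfs", ]

    # always re-pull the docker image so it is up to date
    docker_force_pull: true
}
```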


@ -18,8 +18,13 @@ Accuracy, learning rate, and training loss appear in **RESULTS** **>** **SCALARS
is logged by connecting it to the Task using a call to the [Task.connect](../../../../../references/sdk/task.md#connect)
method.
```python
configuration_dict = {
    'number_of_epochs': 6, 'batch_size': 16, 'ngrams': 2, 'base_lr': 1.0
}
# enabling configuration override by clearml
configuration_dict = task.connect(configuration_dict)
```
Command line options appear in **CONFIGURATIONS** **>** **HYPER PARAMETERS** **>** **Args**.


@ -34,23 +34,14 @@ installed, it attempts to import `OptimizerBOHB`. If `clearml.automation.hpbands
the `RandomSearch` for the search strategy.
```python
aSearchStrategy = None
if not aSearchStrategy:
    try:
        from clearml.automation.optuna import OptimizerOptuna  # noqa
        aSearchStrategy = OptimizerOptuna
    except ImportError as ex:
        pass
if not aSearchStrategy:
    try:
        from clearml.automation.hpbandster import OptimizerBOHB  # noqa
        aSearchStrategy = OptimizerBOHB
    except ImportError as ex:
        pass
if not aSearchStrategy:
    logging.getLogger().warning(
        'Apologies, it seems you do not have \'optuna\' or \'hpbandster\' installed, '
        'we will be using RandomSearch strategy instead')
@ -87,10 +78,12 @@ the project **Hyper-Parameter Optimization**, which can be seen in the **ClearML
```python
# Connecting CLEARML
task = Task.init(
    project_name='Hyper-Parameter Optimization',
    task_name='Automatic Hyper-Parameter Optimization',
    task_type=Task.TaskTypes.optimizer,
    reuse_last_task_id=False
)
```
## Set Up the Arguments
@ -199,7 +192,8 @@ running as a service, see [Services Mode](../../../clearml_agent.md#services-mod
## Optimize
The optimizer is ready. Set the report period and [start](../../../references/sdk/hpo_optimization_hyperparameteroptimizer.md#start)
it, providing the callback method to report the best performance.
```python
# report every 12 seconds, this is way too often, but we are testing here
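# a hedged completion of this truncated snippet: the `an_optimizer` and
# `job_complete_callback` names are assumptions based on ClearML's HPO example
an_optimizer.set_report_period(0.2)
# start the optimization process, with a callback invoked whenever an experiment completes
an_optimizer.start(job_complete_callback=job_complete_callback)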


@ -14,6 +14,7 @@ When the script runs, it creates an experiment named `3D plot reporting`, which
To plot a series as a surface plot, use the [Logger.report_surface](../../references/sdk/logger.md#report_surface)
method.
```python
# report 3d surface
surface = np.random.randint(10, size=(10, 10))
Logger.current_logger().report_surface(
@ -25,16 +26,16 @@ method.
yaxis="title Y",
zaxis="title Z",
)
```
Visualize the reported surface plot in **RESULTS** **>** **PLOTS**.
![Surface plot](../../img/examples_reporting_02.png)
## 3D Scatter Plot
To plot a series as a 3-dimensional scatter plot, use the [Logger.report_scatter3d](../../references/sdk/logger.md#report_scatter3d)
method.
```python
# report 3d scatter plot
scatter3d = np.random.randint(10, size=(10, 3))
Logger.current_logger().report_scatter3d(
@ -46,6 +47,7 @@ method.
yaxis="title y",
zaxis="title z",
)
```
Visualize the reported 3D scatter plot in **RESULTS** **>** **PLOTS**.
![3d scatter plot](../../img/examples_reporting_01.png)


@ -38,6 +38,7 @@ method. If the Pandas DataFrame changes, **ClearML** uploads the changes. The up
For example:
```python
df = pd.DataFrame(
    {
        'num_legs': [2, 4, 8, 0],
@ -50,14 +51,17 @@ For example:
# Register Pandas object as artifact to watch
# (it will be monitored in the background and automatically synced and uploaded)
task.register_artifact('train', df, metadata={'counting': 'legs', 'max legs': 69})
```
By changing the artifact, and calling the [Task.get_registered_artifacts](../../references/sdk/task.md#get_registered_artifacts)
method to retrieve it, we can see that **ClearML** tracked the change.
```python
# change the artifact object
df.sample(frac=0.5, replace=True, random_state=1)
# or access it from anywhere using the Task's get_registered_artifacts()
Task.current_task().get_registered_artifacts()['train'].sample(frac=0.5, replace=True, random_state=1)
```
## Artifacts Without Tracking
@ -75,37 +79,52 @@ Artifacts without tracking include:
* Wildcards (stored as a ZIP file)
### Pandas DataFrames
```python
# add and upload pandas.DataFrame (onetime snapshot of the object)
task.upload_artifact('Pandas', artifact_object=df)
```
### Local Files
```python
# add and upload local file artifact
task.upload_artifact(
    'local file',
    artifact_object=os.path.join(
        'data_samples',
        'dancing.jpg'
    )
)
```
### Dictionaries
```python
# add and upload dictionary (stored as JSON)
task.upload_artifact('dictionary', df.to_dict())
```
### Numpy Objects
```python
# add and upload Numpy Object (stored as .npz file)
task.upload_artifact('Numpy Eye', np.eye(100, 100))
```
### Image Files
```python
# add and upload Image (stored as .png file)
im = Image.open(os.path.join('data_samples', 'dancing.jpg'))
task.upload_artifact('pillow_image', im)
```
### Folders
```python
# add and upload a folder, artifact_object should be the folder path
task.upload_artifact('local folder', artifact_object=os.path.join('data_samples'))
```
### Wildcards
```python
# add and upload a wildcard
task.upload_artifact('wildcard jpegs', artifact_object=os.path.join('data_samples', '*.jpg'))
```


@ -23,8 +23,9 @@ Make a copy of `pytorch_mnist.py` in order to add explicit reporting to it.
* In the local **ClearML** repository, `example` directory.
```bash
cp pytorch_mnist.py pytorch_mnist_tutorial.py
```
## Step 1: Setting an Output Destination for Model Checkpoints
@ -42,10 +43,13 @@ In this tutorial, we specify a local folder destination.
In `pytorch_mnist_tutorial.py`, change the code from:
```python
task = Task.init(project_name='examples', task_name='pytorch mnist train')
```
to:
```python
model_snapshots_path = '/mnt/clearml'
if not os.path.exists(model_snapshots_path):
    os.makedirs(model_snapshots_path)
@ -53,6 +57,7 @@ to:
task = Task.init(project_name='examples',
                 task_name='extending automagical ClearML example',
                 output_uri=model_snapshots_path)
```
When the script runs, **ClearML** creates the following directory structure:
@ -94,14 +99,16 @@ package contains methods for explicit reporting of plots, log text, media, and t
First, create a logger for the Task using the [Task.get_logger](../../references/sdk/task.md#get_logger)
method.
```python
logger = task.get_logger()
```
### Plot Scalar Metrics
Add scalar metrics using the [Logger.report_scalar](../../references/sdk/logger.md#report_scalar)
method to report loss metrics.
```python
def train(args, model, device, train_loader, optimizer, epoch):
    save_loss = []
@ -124,12 +131,14 @@ method to report loss metrics.
        # Add manual scalar reporting for loss metrics
        logger.report_scalar(title='Scalar example {} - epoch'.format(epoch),
                             series='Loss', value=loss.item(), iteration=batch_idx)
```
### Plot Other (Not Scalar) Data
The script contains a function named `test`, which determines loss and correct for the trained model. We add a histogram
and confusion matrix to log them.
```python
def test(args, model, device, test_loader):
    save_test_loss = []
@ -157,20 +166,39 @@ and confusion matrix to log them.
        test_loss, correct, len(test_loader.dataset),
        100. * correct / len(test_loader.dataset)))
    logger.report_histogram(
        title='Histogram example',
        series='correct',
        iteration=1,
        values=save_correct,
        xaxis='Test',
        yaxis='Correct'
    )
    # Manually report test loss and correct as a confusion matrix
    matrix = np.array([save_test_loss, save_correct])
    logger.report_confusion_matrix(
        title='Confusion matrix example',
        series='Test loss / correct',
        matrix=matrix,
        iteration=1
    )
```
### Log Text
Extend **ClearML** by explicitly logging text, including errors, warnings, and debugging statements. We use the [Logger.report_text](../../references/sdk/logger.md#report_text)
method and its argument `level` to report a debugging message.
```python
logger.report_text(
    'The default output destination for model snapshots and artifacts is: {}'.format(
        model_snapshots_path
    ),
    level=logging.DEBUG
)
```
## Step 3: Registering Artifacts
@ -182,6 +210,7 @@ Currently, **ClearML** supports Pandas DataFrames as registered artifacts.
In the tutorial script, `test` function, we can assign the test loss and correct data to a Pandas DataFrame object and register
that Pandas DataFrame using the [Task.register_artifact](../../references/sdk/task.md#register_artifact) method.
```python
# Create the Pandas DataFrame
test_loss_correct = {
    'test lost': save_test_loss,
@ -190,8 +219,16 @@ that Pandas DataFrame using the [Task.register_artifact](../../references/sdk/ta
df = pd.DataFrame(test_loss_correct, columns=['test lost','correct'])
# Register the test loss and correct as a Pandas DataFrame artifact
task.register_artifact(
    'Test_Loss_Correct',
    df,
    metadata={
        'metadata string': 'apple',
        'metadata int': 100,
        'metadata dict': {'dict string': 'pear', 'dict int': 200}
    }
)
```
### Reference the Registered Artifact
@ -201,9 +238,15 @@ In the tutorial script, we add [Task.current_task](../../references/sdk/task.md#
[Task.get_registered_artifacts](../../references/sdk/task.md#get_registered_artifacts)
methods to take a sample.
```python
# Once the artifact is registered, we can get it and work with it. Here, we sample it.
sample = Task.current_task().get_registered_artifacts()['Test_Loss_Correct'].sample(
    frac=0.5,
    replace=True,
    random_state=1
)
```
## Step 4: Uploading Artifacts
@ -220,10 +263,18 @@ Supported artifacts include:
In the tutorial script, we upload the loss data as an artifact using the [Task.upload_artifact](../../references/sdk/task.md#upload_artifact)
method with metadata specified in the `metadata` parameter.
```python
# Upload test loss as an artifact. Here, the artifact is a NumPy array
task.upload_artifact(
    'Predictions',
    artifact_object=np.array(save_test_loss),
    metadata={
        'metadata string': 'banana',
        'metadata integer': 300,
        'metadata dictionary': {'dict string': 'orange', 'dict int': 400}
    }
)
```
## Additional Information


@ -33,13 +33,16 @@ Report the following using the `Logger.report_media` parameter method `local_pat
### Interactive HTML
See the example script's [report_html_periodic_table](https://github.com/allegroai/clearml/blob/master/examples/reporting/html_reporting.py#L26) function, which reports a file created from Bokeh sample data.
Logger.current_logger().report_media("html", "periodic_html", iteration=iteration, local_path="periodic.html")
```python
Logger.current_logger().report_media(
"html", "periodic_html", iteration=iteration, local_path="periodic.html"
)
```
### Bokeh GroupBy HTML
See the example script's [report_html_groupby](https://github.com/allegroai/clearml/blob/master/examples/reporting/html_reporting.py#L117) function, which reports a Pandas GroupBy with nested HTML, created from Bokeh sample data.
```python
Logger.current_logger().report_media(
"html",
"pandas_groupby_nested_html",
@ -47,14 +50,24 @@ See the example script's [report_html_groupby](https://github.com/allegroai/clea
local_path="bar_pandas_groupby_nested.html",
)
```
### Bokeh Graph HTML
See the example script's [report_html_graph](https://github.com/allegroai/clearml/blob/master/examples/reporting/html_reporting.py#L162) function, which reports a Bokeh plot created from Bokeh sample data.
Logger.current_logger().report_media("html", "Graph_html", iteration=iteration, local_path="graph.html")
```python
Logger.current_logger().report_media(
"html", "Graph_html", iteration=iteration, local_path="graph.html"
)
```
### Bokeh Image HTML
See the example script's [report_html_image](https://github.com/allegroai/clearml/blob/master/examples/reporting/html_reporting.py#L195) function, which reports an image created from Bokeh sample data.
Logger.current_logger().report_media("html", "Spectral_html", iteration=iteration, local_path="image.html")
```python
Logger.current_logger().report_media(
"html", "Spectral_html", iteration=iteration, local_path="image.html"
)
```


@ -17,12 +17,16 @@ When the script runs, it creates an experiment named `hyper-parameters example`,
If code uses argparse and initializes a Task, **ClearML** automatically logs the argparse arguments.
```python
parser = ArgumentParser()
parser.add_argument('--argparser_int_value', help='integer value', type=int, default=1)
parser.add_argument(
    '--argparser_disabled', action='store_true', default=False, help='disables something'
)
parser.add_argument('--argparser_str_value', help='string value', default='a string')
args = parser.parse_args()
```
Command line options appear in **HYPER PARAMETERS** **>** **Args**.
@ -32,6 +36,7 @@ Command line options appears in **HYPER PARAMETERS** **>** **Args**.
**ClearML** automatically logs TensorFlow Definitions, whether they are defined before or after the Task is initialized.
```python
flags.DEFINE_string('echo', None, 'Text to echo.')
flags.DEFINE_string('another_str', 'My string', 'A string', module_name='test')
@ -41,6 +46,8 @@ Command line options appears in **HYPER PARAMETERS** **>** **Args**.
flags.DEFINE_string('echo5', '5', 'Text to echo.', module_name='test')
```
TensorFlow Definitions appear in **HYPER PARAMETERS** **>** **TF_DEFINE**.
![image](../../img/examples_reporting_hyper_param_03.png)
@ -50,6 +57,7 @@ TensorFlow Definitions appear in **HYPER PARAMETERS** **>** **TF_DEFINE**.
Connect a parameter dictionary to a Task by calling the [Task.connect](../../references/sdk/task.md#connect)
method, and **ClearML** logs the parameters. **ClearML** also tracks changes to the parameters.
```python
parameters = {
    'list': [1, 2, 3],
    'dict': {'a': 1, 'b': 2},
@ -67,6 +75,8 @@ method, and **ClearML** logs the parameters. **ClearML** also tracks changes to
# changing the value of a parameter (new value will be stored instead of previous one)
parameters['float'] = '9.9'
```
Parameters from dictionaries connected to Tasks appear in **HYPER PARAMETERS** **>** **General**.
![image](../../img/examples_reporting_hyper_param_02.png)


@ -20,6 +20,7 @@ When the script runs, it creates an experiment named `image reporting`, which is
Report images using several formats by calling the [Logger.report_image](../../references/sdk/logger.md#report_image)
method:
```python
# report image as float image
m = np.eye(256, 256, dtype=np.float)
Logger.current_logger().report_image("image", "image float", iteration=iteration, image=m)
@ -30,17 +31,28 @@ method:
# report image as uint8 RGB
m = np.concatenate((np.atleast_3d(m), np.zeros((256, 256, 2), dtype=np.uint8)), axis=2)
Logger.current_logger().report_image("image", "image color red", iteration=iteration, image=m)
Logger.current_logger().report_image(
"image",
"image color red",
iteration=iteration,
image=m
)
# report PIL Image object
image_open = Image.open(os.path.join("data_samples", "picasso.jpg"))
Logger.current_logger().report_image("image", "image PIL", iteration=iteration, image=image_open)
Logger.current_logger().report_image(
"image",
"image PIL",
iteration=iteration,
image=image_open
)
```
**ClearML** reports these images as debug samples in the **ClearML Web UI** **>** experiment details **>** **RESULTS** tab
**>** **DEBUG SAMPLES** sub-tab.
![image](../../img/examples_reporting_07.png)
Double click a thumbnail, and the image viewer opens.
![image](../../img/examples_reporting_07a.png)


@ -24,15 +24,19 @@ project.
Report by calling the [Logger.report_media](../../references/sdk/logger.md#report_media)
method using the `url` parameter.
```python
# report video, an already uploaded video media (url)
Logger.current_logger().report_media(
    'video', 'big bunny', iteration=1,
    url='https://test-videos.co.uk/vids/bigbuckbunny/mp4/h264/720/Big_Buck_Bunny_720_10s_1MB.mp4'
)
# report audio, report an already uploaded audio media (url)
Logger.current_logger().report_media(
    'audio', 'pink panther', iteration=1,
    url='https://www2.cs.uic.edu/~i101/SoundFiles/PinkPanther30.wav'
)
```
The reported audio can be viewed in the **DEBUG SAMPLES** sub-tab. Double click a thumbnail, and the audio player opens.
@ -43,10 +47,13 @@ The reported audio can be viewed in the **DEBUG SAMPLES** sub-tab. Double click
Use the `local_path` parameter.
```python
# report audio, report local media audio file
Logger.current_logger().report_media(
    'audio', 'tada', iteration=1,
    local_path=os.path.join('data_samples', 'sample.mp3')
)
```
The reported video can be viewed in the **DEBUG SAMPLES** sub-tab. Double click a thumbnail, and the video player opens.


@ -17,11 +17,12 @@ Connect a configuration file to a Task by calling the [Task.connect_configuratio
method with the file location and the configuration object's name as arguments. In this example, we connect a JSON file and a YAML file
to a Task.
```python
config_file_json = 'data_samples/sample.json'
task.connect_configuration(name="json file", configuration=config_file_json)
...
config_file_yaml = 'data_samples/config_yaml.yaml'
task.connect_configuration(configuration=config_file_yaml, name="yaml file")
```
The configuration is logged to the ClearML Task and can be viewed in the **ClearML Web UI** experiment details **>** **CONFIGURATION** tab **>** **CONFIGURATION OBJECTS**
section. The contents of the JSON file will appear in the **json file** object, and the contents of the YAML file will appear
@ -34,17 +35,21 @@ in the **yaml file** object, as specified in the `name` parameter of the `connec
Connect a configuration dictionary to a Task by creating a dictionary, and then calling the [Task.connect_configuration](../../references/sdk/task.md#connect_configuration)
method with the dictionary and the object name as arguments. After the configuration is connected, **ClearML** tracks changes to it.
```python
model_config_dict = {
    'CHANGE ME': 13.37,
    'dict': {'sub_value': 'string', 'sub_integer': 11},
    'list_of_ints': [1, 2, 3, 4],
}
model_config_dict = task.connect_configuration(
    name='dictionary',
    configuration=model_config_dict
)
# Update the dictionary after connecting it, and the changes will be tracked as well.
model_config_dict['new value'] = 10
model_config_dict['CHANGE ME'] *= model_config_dict['new value']
```
The configurations are connected to the ClearML Task and can be viewed in the **ClearML Web UI** **>** experiment details **>** **CONFIGURATION** tab **>**
**CONFIGURATION OBJECTS** area **>** **dictionary** object.
@ -55,13 +60,16 @@ The configurations are connected to the ClearML Task and can be viewed in the **
Connect a label enumeration dictionary by creating the dictionary, and then calling the [Task.connect_label_enumeration](../../references/sdk/task.md#connect_label_enumeration)
method with the dictionary as an argument.
```python
# store the label enumeration of the training model
labels = {'background': 0, 'cat': 1, 'dog': 2}
task.connect_label_enumeration(labels)
```
Log a local model file.
```python
OutputModel().update_weights('my_best_model.bin')
```
The stored model contains the model configuration and the label enumeration.


@ -14,6 +14,7 @@ When the script runs, it creates an experiment named `pandas table reporting`, w
Report Pandas DataFrames by calling the [Logger.report_table](../../references/sdk/logger.md#report_table)
method, and providing the DataFrame in the `table_plot` parameter.
```python
# Report table - DataFrame with index
df = pd.DataFrame(
    {
@ -24,7 +25,13 @@ method, and providing the DataFrame in the `table_plot` parameter.
index=["falcon", "dog", "spider", "fish"],
)
df.index.name = "id"
Logger.current_logger().report_table("table pd", "PD with index", iteration=iteration, table_plot=df)
Logger.current_logger().report_table(
"table pd",
"PD with index",
iteration=iteration,
table_plot=df
)
```
![image](../../img/examples_reporting_12.png)
@ -32,8 +39,15 @@ method, and providing the DataFrame in the `table_plot` parameter.
Report CSV files by providing the URL location of the CSV file in the `url` parameter. For a local CSV file, use the `csv` parameter.
```python
# Report table - CSV from path
csv_url = "https://raw.githubusercontent.com/plotly/datasets/master/Mining-BTC-180.csv"
Logger.current_logger().report_table("table csv", "remote csv", iteration=iteration, url=csv_url)
Logger.current_logger().report_table(
"table csv",
"remote csv",
iteration=iteration,
url=csv_url
)
```
![image](../../img/examples_reporting_11.png)


@ -11,14 +11,25 @@ Plotly figure, using the `figure` parameter.
In this example, the Plotly figure is created using `plotly.express.scatter` (see [Scatter Plots in Python](https://plotly.com/python/line-and-scatter/)
in the Plotly documentation):
```python
# Iris dataset
df = px.data.iris()
# create complex plotly figure
fig = px.scatter(
    df,
    x="sepal_width",
    y="sepal_length",
    color="species",
    marginal_y="rug",
    marginal_x="histogram"
)
# report the plotly figure
task.get_logger().report_plotly(
    title="iris", series="sepal", iteration=0, figure=fig
)
```
When the script runs, it creates an experiment named `plotly reporting`, which is associated with the examples project.


@ -12,14 +12,24 @@ To reports scalars, call the [Logger.report_scalar](../../references/sdk/logger.
method. To report more than one series on the same plot, use the same `title` argument. For different plots, use different
`title` arguments.
```python
# report two scalar series on the same graph
for i in range(100):
Logger.current_logger().report_scalar("unified graph", "series A", iteration=i, value=1./(i+1))
Logger.current_logger().report_scalar("unified graph", "series B", iteration=i, value=10./(i+1))
Logger.current_logger().report_scalar(
"unified graph", "series A", iteration=i, value=1./(i+1)
)
Logger.current_logger().report_scalar(
"unified graph", "series B", iteration=i, value=10./(i+1)
)
# report two scalar series on two different graphs
for i in range(100):
Logger.current_logger().report_scalar("graph A", "series A", iteration=i, value=1./(i+1))
Logger.current_logger().report_scalar("graph B", "series B", iteration=i, value=10./(i+1))
Logger.current_logger().report_scalar(
"graph A", "series A", iteration=i, value=1./(i+1)
)
Logger.current_logger().report_scalar(
"graph B", "series B", iteration=i, value=10./(i+1)
)
```
![image](../../img/examples_reporting_14.png)


@ -19,6 +19,7 @@ method. To report more than one series on the same plot, use same the `title` ar
`title` arguments. Specify the type of histogram with the `mode` parameter. The `mode` values are `group` (the default),
`stack`, and `relative`.
```python
# report a single histogram
histogram = np.random.randint(10, size=10)
Logger.current_logger().report_histogram(
@ -50,6 +51,7 @@ method. To report more than one series on the same plot, use same the `title` ar
xaxis="title x",
yaxis="title y",
)
```
![image](../../img/examples_reporting_15.png)
@ -60,6 +62,7 @@ method. To report more than one series on the same plot, use same the `title` ar
Report confusion matrices by calling the [Logger.report_matrix](../../references/sdk/logger.md#report_matrix)
method.
```python
# report confusion matrix
confusion = np.random.randint(10, size=(10, 10))
Logger.current_logger().report_matrix(
@ -70,9 +73,11 @@ method.
xaxis="title X",
yaxis="title Y",
)
```
![image](../../img/examples_reporting_16.png)
```python
# report confusion matrix with 0,0 is at the top left
Logger.current_logger().report_matrix(
"example_confusion_0_0_at_top",
@ -83,12 +88,14 @@ method.
yaxis="title Y",
yaxis_reversed=True,
)
```
## 2D Scatter Plots
Report 2D scatter plots by calling the [Logger.report_scatter2d](../../references/sdk/logger.md#report_scatter2d)
method. Use the `mode` parameter to plot data points with lines (by default), markers, or both lines and markers.
```python
scatter2d = np.hstack(
    (np.atleast_2d(np.arange(0, 10)).T, np.random.randint(10, size=(10, 1)))
)
@ -124,5 +131,6 @@ method. Use the `mode` parameter to plot data points with lines (by default), ma
yaxis="title y",
mode='lines+markers'
)
```
![image](../../img/examples_reporting_17.png)


@ -342,11 +342,19 @@ one ROI labeled with both `Car` and `largely_occluded` will be input.
```python
myDataView = DataView(iteration_order=IterationOrder.random, iteration_infinite=True)
myDataView.add_query(
    dataset_name='myDataset',
    version_name='training',
    roi_query='Car',
    weight=1
)
myDataView.add_query(
    dataset_name='myDataset',
    version_name='training',
    roi_query='label.keyword:\"Car\" AND label.keyword:\"largely_occluded\"',
    weight=5
)
```
### Mapping ROI Labels


@ -251,6 +251,7 @@ mask value as a list with the RGB values in the `mask_rgb` parameter, and a list
frame = SingleFrame(
    source='/home/user/woof_meow.jpg',
    preview_uri='https://storage.googleapis.com/kaggle-competitions/kaggle/3362/media/woof_meow.jpg',
)
frame.add_annotation(mask_rgb=[0, 0, 0], labels=['cat'])
```


@ -35,7 +35,7 @@ When archiving an experiment:
* Experiments or models table - Right click the experiment or model **>** **Restore**.
* Info panel or full screen details view - Click <img src="/docs/latest/icons/ico-bars-menu.svg" alt="Bars menu" className="icon size-sm space-sm" />
(menu) **>** **Restore from Archive**.
* Restore multiple experiments or models from the:


@ -126,7 +126,7 @@ Visualize the comparison of scalars, which includes metrics and monitored resour
1. Click the **SCALARS** tab.
1. In the dropdown menu (upper right of the left sidebar), choose either:
* **Last Values** (the final or most recent value)
* **Min Values** (the minimal values)
* **Max Values** (the maximal values)
1. Sort by variant.