Small fixes

pollfly 2021-12-14 15:12:30 +02:00 committed by GitHub
parent 6ae75beaa2
commit ec304690b6
25 changed files with 622 additions and 432 deletions


@ -37,7 +37,7 @@ A specific branch and commit ID, other than latest commit in master, to be execu
`--branch <branch_name> --commit <commit_id>` flags.
If unspecified, `clearml-task` will use the latest commit from the master branch.
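For example, a hypothetical invocation pinning a run to a specific branch and commit might look like this (the repository URL, script name, and queue are placeholders, not values from this page):

```bash
clearml-task --project examples --name pinned_run \
  --repo https://github.com/user/repo.git \
  --branch dev --commit <commit_id> \
  --script train.py --queue default
```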
### Command line options
### Command Line Options
<div className="tbl-cmd">


@ -49,10 +49,10 @@ Task id=2f96ee95b05d4693b360d0fcbb26b727 sent for execution on queue default
Execution log at: https://app.community.clear.ml/projects/552d5399112d47029c146d5248570295/experiments/2f96ee95b05d4693b360d0fcbb26b727/output/log
```
:::note
**clearml-task** automatically finds the requirements.txt file in remote repositories.
:::note Adding Requirements
`clearml-task` automatically finds the requirements.txt file in remote repositories.
If a remote repo does not have such a file, make sure to either add one with all the required Python packages,
or add the **`--packages '<package_name>`** flag to the command.
or add the `--packages "<package_name>"` flag to the command (for example: `--packages "tqdm>=2.1" "scikit-learn"`).
:::
<br />


@ -83,7 +83,9 @@ dataset_project = "dataset_examples"
from clearml import Dataset
dataset_path = Dataset.get(dataset_name=dataset_name, dataset_project=dataset_project).get_local_copy()
dataset_path = Dataset.get(
dataset_name=dataset_name,
dataset_project=dataset_project).get_local_copy()
trainset = datasets.CIFAR10(
root=dataset_path,


@ -24,7 +24,9 @@ We first need to obtain a local copy of the CIFAR dataset.
from clearml import StorageManager
manager = StorageManager()
dataset_path = manager.get_local_copy(remote_url="https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz")
dataset_path = manager.get_local_copy(
remote_url="https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz"
)
```
This script downloads the data and `dataset_path` contains the path to the downloaded data.
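As a rough sketch of how that path is consumed afterwards (mirroring the torchvision usage shown earlier; the transform is illustrative):

```python
from torchvision import datasets, transforms

# the archive was already fetched by StorageManager, so download=False
trainset = datasets.CIFAR10(
    root=dataset_path,
    train=True,
    download=False,
    transform=transforms.ToTensor()
)
```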


@ -26,8 +26,12 @@ The example uploads a dictionary as an artifact in the main Task by calling the
method on [`Task.current_task`](../../references/sdk/task.md#taskcurrent_task) (the main Task). The dictionary contains the [`dist.rank`](https://pytorch.org/docs/stable/distributed.html#torch.distributed.get_rank)
of the subprocess, making each artifact unique.
Task.current_task().upload_artifact(
'temp {:02d}'.format(dist.get_rank()), artifact_object={'worker_rank': dist.get_rank()})
```python
Task.current_task().upload_artifact(
'temp {:02d}'.format(dist.get_rank()),
artifact_object={'worker_rank': dist.get_rank()}
)
```
All of these artifacts appear in the main Task under **ARTIFACTS** **>** **OTHER**.
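They can also be fetched programmatically once the run finishes; a minimal sketch (project and task names are placeholders):

```python
from clearml import Task

# look up the finished main Task by name (placeholder names)
finished = Task.get_task(project_name='examples', task_name='distributed example')
# load the dictionary uploaded by worker rank 0
worker_dict = finished.artifacts['temp 00'].get()
print(worker_dict)  # {'worker_rank': 0}
```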
@ -40,8 +44,14 @@ method on `Task.current_task().get_logger`, which is the logger for the main Tas
with the same title (`loss`), but a different series name (containing the subprocess' `rank`), all loss scalar series are
logged together.
Task.current_task().get_logger().report_scalar(
'loss', 'worker {:02d}'.format(dist.get_rank()), value=loss.item(), iteration=i)
```python
Task.current_task().get_logger().report_scalar(
'loss',
'worker {:02d}'.format(dist.get_rank()),
value=loss.item(),
iteration=i
)
```
The single scalar plot for loss appears in **RESULTS** **>** **SCALARS**.
@ -49,12 +59,14 @@ The single scalar plot for loss appears in **RESULTS** **>** **SCALARS**.
## Hyperparameters
**ClearML** automatically logs the argparse command line options. Since the [Task.connect](../../references/sdk/task#connect)
method is called on `Task.current_task`, they are logged in the main Task. A different hyperparameter key is used in each
**ClearML** automatically logs the argparse command line options. Since the [`Task.connect`](../../references/sdk/task#connect)
method is called on [`Task.current_task`](../../references/sdk/task.md#taskcurrent_task), they are logged in the main Task. A different hyperparameter key is used in each
subprocess, so they do not overwrite each other in the main Task.
param = {'worker_{}_stuff'.format(dist.get_rank()): 'some stuff ' + str(randint(0, 100))}
Task.current_task().connect(param)
```python
param = {'worker_{}_stuff'.format(dist.get_rank()): 'some stuff ' + str(randint(0, 100))}
Task.current_task().connect(param)
```
All the hyperparameters appear in **CONFIGURATIONS** **>** **HYPER PARAMETERS**.


@ -14,11 +14,15 @@ which always returns the main Task.
## Hyperparameters
**ClearML** automatically logs the command line options defined with `argparse`. A parameter dictionary is logged by
ClearML automatically logs the command line options defined with `argparse`. A parameter dictionary is logged by
connecting it to the Task using a call to the [Task.connect](../../references/sdk/task#connect) method.
additional_parameters = {'stuff_' + str(randint(0, 100)): 'some stuff ' + str(randint(0, 100))}
Task.current_task().connect(additional_parameters)
```python
additional_parameters = {
'stuff_' + str(randint(0, 100)): 'some stuff ' + str(randint(0, 100))
}
Task.current_task().connect(additional_parameters)
```
Command line options appear in **CONFIGURATIONS** **>** **HYPER PARAMETERS** **>** **Args**.


@ -19,23 +19,20 @@ a shell script when a docker is started, but before an experiment is run.
## Steps
1. Open your ClearML configuration file for editing. Depending upon your operating system, it is:
* Linux - ~/clearml.conf
* Mac - $HOME/clearml.conf
* Windows - \\User\\<username\>\\clearml.conf
* Linux - `~/clearml.conf`
* Mac - `$HOME/clearml.conf`
* Windows - `\User\<username>\clearml.conf`
When you open the file, the first line should say: `# CLEARML-AGENT configuration file`
1. In the file, search for and go to, "extra_docker_shell_script:", which is where we will be putting our extra script. If
it is commented out, make sure to uncomment the line. We will use the example script that is already there ["apt-get install -y bindfs", ].
1. In the file, search for `extra_docker_shell_script:`, which is where we will put our extra script. If
it is commented out, make sure to uncomment the line. We will use the example script that is already there, `["apt-get install -y bindfs", ]` (see the configuration sketch after these steps).
1. Search for and go to "docker_force_pull" in the document, and make sure that it is set to "true", so that your docker image will
be updated.
1. Search for and go to `docker_force_pull` in the document, and make sure that it is set to `true`, so that your docker
image will be updated.
1. Run the `clearml-agent` in docker mode: `clearml-agent daemon --docker --queue default`. The agent will use the default
CUDA/NVIDIA Docker image.
1. Enqueue any Clearml Task to the default queue, which the Agent is now listening to. The Agent pulls the Task, and then reproduces it,
1. Enqueue any ClearML Task to the `default` queue, which the Agent is now listening to. The Agent pulls the Task, and then reproduces it,
and now it will execute the `extra_docker_shell_script` that was put in the configuration file. Then the code will be
executed in the updated docker container. If we look at the console output in the web UI, the third entry should start
with `Executing: ['docker', 'run', '-t', '--gpus...'`, and towards the end of the entry, where the downloaded packages are
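For reference, after steps 2 and 3 the relevant portion of `clearml.conf` might look roughly like this (a sketch; surrounding keys omitted):

```
agent {
    # re-pull the docker image on every run
    docker_force_pull: true

    # shell script executed inside the docker before the experiment starts
    extra_docker_shell_script: ["apt-get install -y bindfs", ]
}
```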


@ -18,9 +18,14 @@ Accuracy, learning rate, and training loss appear in **RESULTS** **>** **SCALARS
is logged by connecting it to the Task using a call to the [Task.connect](../../../../../references/sdk/task.md#connect)
method.
configuration_dict = {'number_of_epochs': 6, 'batch_size': 16, 'ngrams': 2, 'base_lr': 1.0}
configuration_dict = task.connect(configuration_dict) # enabling configuration override by clearml
```python
configuration_dict = {
'number_of_epochs': 6, 'batch_size': 16, 'ngrams': 2, 'base_lr': 1.0
}
# enabling configuration override by clearml
configuration_dict = task.connect(configuration_dict)
```
Command line options appear in **CONFIGURATIONS** **>** **HYPER PARAMETERS** **>** **Args**.
![image](../../../../../img/text_classification_AG_NEWS_01.png)


@ -34,23 +34,14 @@ installed, it attempts to import `OptimizerBOHB`. If `clearml.automation.hpbands
the `RandomSearch` for the search strategy.
```python
aSearchStrategy = None
if not aSearchStrategy:
try:
from clearml.optuna import OptimizerOptuna
aSearchStrategy = OptimizerOptuna
except ImportError as ex:
pass
if not aSearchStrategy:
try:
from clearml.automation.hpbandster import OptimizerBOHB
aSearchStrategy = OptimizerBOHB
except ImportError as ex:
pass
if not aSearchStrategy:
try:
from clearml.automation.optuna import OptimizerOptuna # noqa
aSearchStrategy = OptimizerOptuna
except ImportError as ex:
try:
from clearml.automation.hpbandster import OptimizerBOHB # noqa
aSearchStrategy = OptimizerBOHB
except ImportError as ex:
logging.getLogger().warning(
'Apologies, it seems you do not have \'optuna\' or \'hpbandster\' installed, '
'we will be using RandomSearch strategy instead')
@ -63,16 +54,16 @@ When the optimization starts, a callback is provided that returns the best perfo
the `job_complete_callback` function returns the ID of `top_performance_job_id`.
```python
def job_complete_callback(
job_id, # type: str
objective_value, # type: float
objective_iteration, # type: int
job_parameters, # type: dict
top_performance_job_id # type: str
):
print('Job completed!', job_id, objective_value, objective_iteration, job_parameters)
if job_id == top_performance_job_id:
print('WOOT WOOT we broke the record! Objective reached {}'.format(objective_value))
def job_complete_callback(
job_id, # type: str
objective_value, # type: float
objective_iteration, # type: int
job_parameters, # type: dict
top_performance_job_id # type: str
):
print('Job completed!', job_id, objective_value, objective_iteration, job_parameters)
if job_id == top_performance_job_id:
print('WOOT WOOT we broke the record! Objective reached {}'.format(objective_value))
```
## Initialize the Optimization Task
@ -86,11 +77,13 @@ When the code runs, it creates an experiment named **Automatic Hyper-Parameter O
the project **Hyper-Parameter Optimization**, which can be seen in the **ClearML Web UI**.
```python
# Connecting CLEARML
task = Task.init(project_name='Hyper-Parameter Optimization',
task_name='Automatic Hyper-Parameter Optimization',
task_type=Task.TaskTypes.optimizer,
reuse_last_task_id=False)
# Connecting CLEARML
task = Task.init(
project_name='Hyper-Parameter Optimization',
task_name='Automatic Hyper-Parameter Optimization',
task_type=Task.TaskTypes.optimizer,
reuse_last_task_id=False
)
```
## Set Up the Arguments
@ -105,17 +98,17 @@ Since the arguments dictionary is connected to the Task, after the code runs onc
to optimize a different experiment.
```python
# experiment template to optimize in the hyper-parameter optimization
args = {
'template_task_id': None,
'run_as_service': False,
}
args = task.connect(args)
# experiment template to optimize in the hyper-parameter optimization
args = {
'template_task_id': None,
'run_as_service': False,
}
args = task.connect(args)
# Get the template task experiment that we want to optimize
if not args['template_task_id']:
args['template_task_id'] = Task.get_task(
project_name='examples', task_name='Keras HP optimization base').id
# Get the template task experiment that we want to optimize
if not args['template_task_id']:
args['template_task_id'] = Task.get_task(
project_name='examples', task_name='Keras HP optimization base').id
```
## Creating the Optimizer Object
@ -124,9 +117,9 @@ Initialize an [automation.HyperParameterOptimizer](../../../references/sdk/hpo_o
object, setting the optimization parameters, beginning with the ID of the experiment to optimize.
```python
an_optimizer = HyperParameterOptimizer(
# This is the experiment we want to optimize
base_task_id=args['template_task_id'],
an_optimizer = HyperParameterOptimizer(
# This is the experiment we want to optimize
base_task_id=args['template_task_id'],
```
Set the hyperparameter ranges to sample, instantiating them as **ClearML** automation objects using [automation.UniformIntegerParameterRange](../../../references/sdk/hpo_parameters_uniformintegerparameterrange.md)
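The range definitions themselves are elided from this diff; as a sketch, they might look like the following (parameter names and values are illustrative, not the script's actual ones):

```python
from clearml.automation import DiscreteParameterRange, UniformIntegerParameterRange

# illustrative ranges passed to HyperParameterOptimizer(hyper_parameters=...)
hyper_parameters = [
    UniformIntegerParameterRange('layer_1', min_value=128, max_value=512, step_size=128),
    DiscreteParameterRange('batch_size', values=[96, 128, 160]),
]
```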
@ -190,24 +183,25 @@ The optimization can run as a service, if the `run_as_service` argument is set t
running as a service, see [Services Mode](../../../clearml_agent.md#services-mode).
```python
# if we are running as a service, just enqueue ourselves into the services queue and let it run the optimization
if args['run_as_service']:
# if this code is executed by `clearml-agent` the function call does nothing.
# if executed locally, the local process will be terminated, and a remote copy will be executed instead
task.execute_remotely(queue_name='services', exit_process=True)
# if we are running as a service, just enqueue ourselves into the services queue and let it run the optimization
if args['run_as_service']:
# if this code is executed by `clearml-agent` the function call does nothing.
# if executed locally, the local process will be terminated, and a remote copy will be executed instead
task.execute_remotely(queue_name='services', exit_process=True)
```
## Optimize
The optimizer is ready. Set the report period and start it, providing the callback method to report the best performance.
The optimizer is ready. Set the report period and [start](../../../references/sdk/hpo_optimization_hyperparameteroptimizer.md#start)
it, providing the callback method to report the best performance.
```python
# report every 12 seconds, this is way too often, but we are testing here :)
an_optimizer.set_report_period(0.2)
# start the optimization process, callback function to be called every time an experiment is completed
# this function returns immediately
an_optimizer.start(job_complete_callback=job_complete_callback)
# set the time limit for the optimization process (1.5 hours)
# report every 12 seconds, this is way too often, but we are testing here :)
an_optimizer.set_report_period(0.2)
# start the optimization process, callback function to be called every time an experiment is completed
# this function returns immediately
an_optimizer.start(job_complete_callback=job_complete_callback)
# set the time limit for the optimization process (1.5 hours)
```
Now that it is running:
@ -218,15 +212,15 @@ Now that it is running:
1. Stop the optimizer.
```python
# set the time limit for the optimization process (1.5 hours)
an_optimizer.set_time_limit(in_minutes=90.0)
# wait until process is done (notice we are controlling the optimization process in the background)
an_optimizer.wait()
# optimization is completed, print the top performing experiments id
top_exp = an_optimizer.get_top_experiments(top_k=3)
print([t.id for t in top_exp])
# make sure background optimization stopped
an_optimizer.stop()
print('We are done, good bye')
# set the time limit for the optimization process (1.5 hours)
an_optimizer.set_time_limit(in_minutes=90.0)
# wait until process is done (notice we are controlling the optimization process in the background)
an_optimizer.wait()
# optimization is completed, print the top performing experiments id
top_exp = an_optimizer.get_top_experiments(top_k=3)
print([t.id for t in top_exp])
# make sure background optimization stopped
an_optimizer.stop()
print('We are done, good bye')
```


@ -14,38 +14,40 @@ When the script runs, it creates an experiment named `3D plot reporting`, which
To plot a series as a surface plot, use the [Logger.report_surface](../../references/sdk/logger.md#report_surface)
method.
# report 3d surface
surface = np.random.randint(10, size=(10, 10))
Logger.current_logger().report_surface(
"example_surface",
"series1",
iteration=iteration,
matrix=surface,
xaxis="title X",
yaxis="title Y",
zaxis="title Z",
)
```python
# report 3d surface
surface = np.random.randint(10, size=(10, 10))
Logger.current_logger().report_surface(
"example_surface",
"series1",
iteration=iteration,
matrix=surface,
xaxis="title X",
yaxis="title Y",
zaxis="title Z",
)
```
Visualize the reported surface plot in **RESULTS** **>** **PLOTS**.
![image](../../img/examples_reporting_01.png)
![Surface plot](../../img/examples_reporting_02.png)
## 3D Scatter Plot
To plot a series as a 3-dimensional scatter plot, use the [Logger.report_scatter3d](../../references/sdk/logger.md#report_scatter3d)
method.
# report 3d scatter plot
scatter3d = np.random.randint(10, size=(10, 3))
Logger.current_logger().report_scatter3d(
"example_scatter_3d",
"series_xyz",
iteration=iteration,
scatter=scatter3d,
xaxis="title x",
yaxis="title y",
zaxis="title z",
)
```python
# report 3d scatter plot
scatter3d = np.random.randint(10, size=(10, 3))
Logger.current_logger().report_scatter3d(
"example_scatter_3d",
"series_xyz",
iteration=iteration,
scatter=scatter3d,
xaxis="title x",
yaxis="title y",
zaxis="title z",
)
```
Visualize the reported 3D scatter plot in **RESULTS** **>** **PLOTS**.
![image](../../img/examples_reporting_02.png)
![3d scatter plot](../../img/examples_reporting_01.png)


@ -38,26 +38,30 @@ method. If the Pandas DataFrame changes, **ClearML** uploads the changes. The up
For example:
df = pd.DataFrame(
{
'num_legs': [2, 4, 8, 0],
'num_wings': [2, 0, 0, 0],
'num_specimen_seen': [10, 2, 1, 8]
},
index=['falcon', 'dog', 'spider', 'fish']
)
```python
df = pd.DataFrame(
{
'num_legs': [2, 4, 8, 0],
'num_wings': [2, 0, 0, 0],
'num_specimen_seen': [10, 2, 1, 8]
},
index=['falcon', 'dog', 'spider', 'fish']
)
# Register Pandas object as artifact to watch
# (it will be monitored in the background and automatically synced and uploaded)
task.register_artifact('train', df, metadata={'counting': 'legs', 'max legs': 69})
# Register Pandas object as artifact to watch
# (it will be monitored in the background and automatically synced and uploaded)
task.register_artifact('train', df, metadata={'counting': 'legs', 'max legs': 69})
```
By changing the artifact, and calling the [Task.get_registered_artifacts](../../references/sdk/task.md#get_registered_artifacts)
method to retrieve it, we can see that **ClearML** tracked the change.
# change the artifact object
df.sample(frac=0.5, replace=True, random_state=1)
# or access it from anywhere using the Task's get_registered_artifacts()
Task.current_task().get_registered_artifacts()['train'].sample(frac=0.5, replace=True, random_state=1)
```python
# change the artifact object
df.sample(frac=0.5, replace=True, random_state=1)
# or access it from anywhere using the Task's get_registered_artifacts()
Task.current_task().get_registered_artifacts()['train'].sample(frac=0.5, replace=True, random_state=1)
```
## Artifacts Without Tracking
@ -75,37 +79,52 @@ Artifacts without tracking include:
* Wildcards (stored as a ZIP file)
### Pandas DataFrames
# add and upload pandas.DataFrame (onetime snapshot of the object)
task.upload_artifact('Pandas', artifact_object=df)
```python
# add and upload pandas.DataFrame (onetime snapshot of the object)
task.upload_artifact('Pandas', artifact_object=df)
```
### Local Files
# add and upload local file artifact
task.upload_artifact('local file', artifact_object=os.path.join('data_samples', 'dancing.jpg'))
```python
# add and upload local file artifact
task.upload_artifact(
'local file',
artifact_object=os.path.join(
'data_samples',
'dancing.jpg'
)
)
```
### Dictionaries
# add and upload dictionary (stored as JSON)
task.upload_artifact('dictionary', df.to_dict())
```python
# add and upload dictionary (stored as JSON)
task.upload_artifact('dictionary', df.to_dict())
```
### Numpy Objects
# add and upload Numpy Object (stored as .npz file)
task.upload_artifact('Numpy Eye', np.eye(100, 100))
```python
# add and upload Numpy Object (stored as .npz file)
task.upload_artifact('Numpy Eye', np.eye(100, 100))
```
### Image Files
# add and upload Image (stored as .png file)
im = Image.open(os.path.join('data_samples', 'dancing.jpg'))
task.upload_artifact('pillow_image', im)
```python
# add and upload Image (stored as .png file)
im = Image.open(os.path.join('data_samples', 'dancing.jpg'))
task.upload_artifact('pillow_image', im)
```
### Folders
# add and upload a folder, artifact_object should be the folder path
task.upload_artifact('local folder', artifact_object=os.path.join('data_samples'))
```python
# add and upload a folder, artifact_object should be the folder path
task.upload_artifact('local folder', artifact_object=os.path.join('data_samples'))
```
### Wildcards
# add and upload a wildcard
task.upload_artifact('wildcard jpegs', artifact_object=os.path.join('data_samples', '*.jpg'))
```python
# add and upload a wildcard
task.upload_artifact('wildcard jpegs', artifact_object=os.path.join('data_samples', '*.jpg'))
```
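Any of these one-time snapshots can later be fetched from the Task; a minimal sketch (the task ID is a placeholder):

```python
from clearml import Task

# placeholder task ID; artifacts are downloaded lazily on access
prev_task = Task.get_task(task_id='<task_id>')
df = prev_task.artifacts['Pandas'].get()  # load the object itself
file_path = prev_task.artifacts['local file'].get_local_copy()  # or get a local path
```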


@ -23,8 +23,9 @@ Make a copy of `pytorch_mnist.py` in order to add explicit reporting to it.
* In the local **ClearML** repository, in the `example` directory.
cp pytorch_mnist.py pytorch_mnist_tutorial.py
```bash
cp pytorch_mnist.py pytorch_mnist_tutorial.py
```
## Step 1: Setting an Output Destination for Model Checkpoints
@ -42,17 +43,21 @@ In this tutorial, we specify a local folder destination.
In `pytorch_mnist_tutorial.py`, change the code from:
task = Task.init(project_name='examples', task_name='pytorch mnist train')
```python
task = Task.init(project_name='examples', task_name='pytorch mnist train')
```
to:
model_snapshots_path = '/mnt/clearml'
if not os.path.exists(model_snapshots_path):
os.makedirs(model_snapshots_path)
```python
model_snapshots_path = '/mnt/clearml'
if not os.path.exists(model_snapshots_path):
os.makedirs(model_snapshots_path)
task = Task.init(project_name='examples',
task_name='extending automagical ClearML example',
output_uri=model_snapshots_path)
task = Task.init(project_name='examples',
task_name='extending automagical ClearML example',
output_uri=model_snapshots_path)
```
When the script runs, **ClearML** creates the following directory structure:
@ -94,83 +99,106 @@ package contains methods for explicit reporting of plots, log text, media, and t
First, create a logger for the Task using the [Task.get_logger](../../references/sdk/task.md#get_logger)
method.
logger = task.get_logger()
```python
logger = task.get_logger()
```
### Plot Scalar Metrics
Add scalar metrics using the [Logger.report_scalar](../../references/sdk/logger.md#report_scalar)
method to report loss metrics.
def train(args, model, device, train_loader, optimizer, epoch):
```python
def train(args, model, device, train_loader, optimizer, epoch):
save_loss = []
save_loss = []
model.train()
for batch_idx, (data, target) in enumerate(train_loader):
data, target = data.to(device), target.to(device)
optimizer.zero_grad()
output = model(data)
loss = F.nll_loss(output, target)
loss.backward()
model.train()
for batch_idx, (data, target) in enumerate(train_loader):
data, target = data.to(device), target.to(device)
optimizer.zero_grad()
output = model(data)
loss = F.nll_loss(output, target)
loss.backward()
save_loss.append(loss)
save_loss.append(loss)
optimizer.step()
if batch_idx % args.log_interval == 0:
print('Train Epoch: {} [{}/{} ({:.0f}%)]\tLoss: {:.6f}'.format(
epoch, batch_idx * len(data), len(train_loader.dataset),
100. * batch_idx / len(train_loader), loss.item()))
# Add manual scalar reporting for loss metrics
logger.report_scalar(title='Scalar example {} - epoch'.format(epoch),
series='Loss', value=loss.item(), iteration=batch_idx)
optimizer.step()
if batch_idx % args.log_interval == 0:
print('Train Epoch: {} [{}/{} ({:.0f}%)]\tLoss: {:.6f}'.format(
epoch, batch_idx * len(data), len(train_loader.dataset),
100. * batch_idx / len(train_loader), loss.item()))
# Add manual scalar reporting for loss metrics
logger.report_scalar(title='Scalar example {} - epoch'.format(epoch),
series='Loss', value=loss.item(), iteration=batch_idx)
```
### Plot Other (Not Scalar) Data
The script contains a function named `test`, which computes the loss and the number of correct predictions for the trained model. We add a histogram
and a confusion matrix to log them.
def test(args, model, device, test_loader):
```python
def test(args, model, device, test_loader):
save_test_loss = []
save_correct = []
save_test_loss = []
save_correct = []
model.eval()
test_loss = 0
correct = 0
with torch.no_grad():
for data, target in test_loader:
data, target = data.to(device), target.to(device)
output = model(data)
# sum up batch loss
test_loss += F.nll_loss(output, target, reduction='sum').item()
# get the index of the max log-probability
pred = output.argmax(dim=1, keepdim=True)
correct += pred.eq(target.view_as(pred)).sum().item()
model.eval()
test_loss = 0
correct = 0
with torch.no_grad():
for data, target in test_loader:
data, target = data.to(device), target.to(device)
output = model(data)
# sum up batch loss
test_loss += F.nll_loss(output, target, reduction='sum').item()
# get the index of the max log-probability
pred = output.argmax(dim=1, keepdim=True)
correct += pred.eq(target.view_as(pred)).sum().item()
save_test_loss.append(test_loss)
save_correct.append(correct)
save_test_loss.append(test_loss)
save_correct.append(correct)
test_loss /= len(test_loader.dataset)
test_loss /= len(test_loader.dataset)
print('\nTest set: Average loss: {:.4f}, Accuracy: {}/{} ({:.0f}%)\n'.format(
test_loss, correct, len(test_loader.dataset),
100. * correct / len(test_loader.dataset)))
print('\nTest set: Average loss: {:.4f}, Accuracy: {}/{} ({:.0f}%)\n'.format(
test_loss, correct, len(test_loader.dataset),
100. * correct / len(test_loader.dataset)))
logger.report_histogram(title='Histogram example', series='correct',
iteration=1, values=save_correct, xaxis='Test', yaxis='Correct')
logger.report_histogram(
title='Histogram example',
series='correct',
iteration=1,
values=save_correct,
xaxis='Test',
yaxis='Correct'
)
# Manually report test loss and correct as a confusion matrix
matrix = np.array([save_test_loss, save_correct])
logger.report_confusion_matrix(title='Confusion matrix example',
series='Test loss / correct', matrix=matrix, iteration=1)
# Manually report test loss and correct as a confusion matrix
matrix = np.array([save_test_loss, save_correct])
logger.report_confusion_matrix(
title='Confusion matrix example',
series='Test loss / correct',
matrix=matrix,
iteration=1
)
```
### Log Text
Extend **ClearML** by explicitly logging text, including errors, warnings, and debugging statements. We use the [Logger.report_text](../../references/sdk/logger.md#report_text)
method and its argument `level` to report a debugging message.
logger.report_text('The default output destination for model snapshots and artifacts is: {}'.format(model_snapshots_path ), level=logging.DEBUG)
```python
logger.report_text(
'The default output destination for model snapshots and artifacts is: {}'.format(
model_snapshots_path
),
level=logging.DEBUG
)
```
## Step 3: Registering Artifacts
@ -182,16 +210,25 @@ Currently, **ClearML** supports Pandas DataFrames as registered artifacts.
In the tutorial script, `test` function, we can assign the test loss and correct data to a Pandas DataFrame object and register
that Pandas DataFrame using the [Task.register_artifact](../../references/sdk/task.md#register_artifact) method.
# Create the Pandas DataFrame
test_loss_correct = {
'test loss': save_test_loss,
'correct': save_correct
}
df = pd.DataFrame(test_loss_correct, columns=['test loss', 'correct'])
```python
# Create the Pandas DataFrame
test_loss_correct = {
'test loss': save_test_loss,
'correct': save_correct
}
df = pd.DataFrame(test_loss_correct, columns=['test loss', 'correct'])
# Register the test loss and correct as a Pandas DataFrame artifact
task.register_artifact('Test_Loss_Correct', df, metadata={'metadata string': 'apple',
'metadata int': 100, 'metadata dict': {'dict string': 'pear', 'dict int': 200}})
# Register the test loss and correct as a Pandas DataFrame artifact
task.register_artifact(
'Test_Loss_Correct',
df,
metadata={
'metadata string': 'apple',
'metadata int': 100,
'metadata dict': {'dict string': 'pear', 'dict int': 200}
}
)
```
### Reference the Registered Artifact
@ -201,9 +238,15 @@ In the tutorial script, we add [Task.current_task](../../references/sdk/task.md#
[Task.get_registered_artifacts](../../references/sdk/task.md#get_registered_artifacts)
methods to take a sample.
# Once the artifact is registered, we can get it and work with it. Here, we sample it.
sample = Task.current_task().get_registered_artifacts()['Test_Loss_Correct'].sample(frac=0.5,
replace=True, random_state=1)
```python
# Once the artifact is registered, we can get it and work with it. Here, we sample it.
sample = Task.current_task().get_registered_artifacts()['Test_Loss_Correct'].sample(
frac=0.5,
replace=True,
random_state=1
)
```
## Step 4: Uploading Artifacts
@ -220,10 +263,18 @@ Supported artifacts include:
In the tutorial script, we upload the loss data as an artifact using the [Task.upload_artifact](../../references/sdk/task.md#upload_artifact)
method with metadata specified in the `metadata` parameter.
# Upload test loss as an artifact. Here, the artifact is a NumPy array
task.upload_artifact('Predictions',artifact_object=np.array(save_test_loss),
metadata={'metadata string': 'banana', 'metadata integer': 300,
'metadata dictionary': {'dict string': 'orange', 'dict int': 400}})
```python
# Upload test loss as an artifact. Here, the artifact is a NumPy array
task.upload_artifact(
'Predictions',
artifact_object=np.array(save_test_loss),
metadata={
'metadata string': 'banana',
'metadata integer': 300,
'metadata dictionary': {'dict string': 'orange', 'dict int': 400}
}
)
```
## Additional Information


@ -33,28 +33,41 @@ Report the following using the `Logger.report_media` parameter method `local_pat
### Interactive HTML
See the example script's [report_html_periodic_table](https://github.com/allegroai/clearml/blob/master/examples/reporting/html_reporting.py#L26) function, which reports a file created from Bokeh sample data.
Logger.current_logger().report_media("html", "periodic_html", iteration=iteration, local_path="periodic.html")
```python
Logger.current_logger().report_media(
"html", "periodic_html", iteration=iteration, local_path="periodic.html"
)
```
### Bokeh GroupBy HTML
See the example script's [report_html_groupby](https://github.com/allegroai/clearml/blob/master/examples/reporting/html_reporting.py#L117) function, which reports a Pandas GroupBy with nested HTML, created from Bokeh sample data.
```python
Logger.current_logger().report_media(
"html",
"pandas_groupby_nested_html",
iteration=iteration,
local_path="bar_pandas_groupby_nested.html",
)
Logger.current_logger().report_media(
"html",
"pandas_groupby_nested_html",
iteration=iteration,
local_path="bar_pandas_groupby_nested.html",
)
```
### Bokeh Graph HTML
See the example script's [report_html_graph](https://github.com/allegroai/clearml/blob/master/examples/reporting/html_reporting.py#L162) function, which reports a Bokeh plot created from Bokeh sample data.
Logger.current_logger().report_media("html", "Graph_html", iteration=iteration, local_path="graph.html")
```python
Logger.current_logger().report_media(
"html", "Graph_html", iteration=iteration, local_path="graph.html"
)
```
### Bokeh Image HTML
See the example script's [report_html_image](https://github.com/allegroai/clearml/blob/master/examples/reporting/html_reporting.py#L195) function, which reports an image created from Bokeh sample data.
Logger.current_logger().report_media("html", "Spectral_html", iteration=iteration, local_path="image.html")
```python
Logger.current_logger().report_media(
"html", "Spectral_html", iteration=iteration, local_path="image.html"
)
```


@ -16,13 +16,17 @@ When the script runs, it creates an experiment named `hyper-parameters example`,
## Argparse Command Line Options
If your code uses argparse and initializes a Task, **ClearML** automatically logs the argparse arguments.
parser = ArgumentParser()
parser.add_argument('--argparser_int_value', help='integer value', type=int, default=1)
parser.add_argument('--argparser_disabled', action='store_true', default=False, help='disables something')
parser.add_argument('--argparser_str_value', help='string value', default='a string')
```python
parser = ArgumentParser()
parser.add_argument('--argparser_int_value', help='integer value', type=int, default=1)
parser.add_argument(
'--argparser_disabled', action='store_true', default=False, help='disables something'
)
parser.add_argument('--argparser_str_value', help='string value', default='a string')
args = parser.parse_args()
args = parser.parse_args()
```
Command line options appear in **HYPER PARAMETERS** **>** **Args**.
@ -32,14 +36,17 @@ Command line options appears in **HYPER PARAMETERS** **>** **Args**.
**ClearML** automatically logs TensorFlow Definitions, whether they are defined before or after the Task is initialized.
flags.DEFINE_string('echo', None, 'Text to echo.')
flags.DEFINE_string('another_str', 'My string', 'A string', module_name='test')
```python
flags.DEFINE_string('echo', None, 'Text to echo.')
flags.DEFINE_string('another_str', 'My string', 'A string', module_name='test')
task = Task.init(project_name='examples', task_name='hyper-parameters example')
task = Task.init(project_name='examples', task_name='hyper-parameters example')
flags.DEFINE_integer('echo3', 3, 'Text to echo.')
flags.DEFINE_integer('echo3', 3, 'Text to echo.')
flags.DEFINE_string('echo5', '5', 'Text to echo.', module_name='test')
flags.DEFINE_string('echo5', '5', 'Text to echo.', module_name='test')
```
TensorFlow Definitions appear in **HYPER PARAMETERS** **>** **TF_DEFINE**.
@ -50,22 +57,25 @@ TensorFlow Definitions appear in **HYPER PARAMETERS** **>** **TF_DEFINE**.
Connect a parameter dictionary to a Task by calling the [Task.connect](../../references/sdk/task.md#connect)
method, and **ClearML** logs the parameters. **ClearML** also tracks changes to the parameters.
parameters = {
'list': [1, 2, 3],
'dict': {'a': 1, 'b': 2},
'tuple': (1, 2, 3),
'int': 3,
'float': 2.2,
'string': 'my string',
}
```python
parameters = {
'list': [1, 2, 3],
'dict': {'a': 1, 'b': 2},
'tuple': (1, 2, 3),
'int': 3,
'float': 2.2,
'string': 'my string',
}
parameters = task.connect(parameters)
parameters = task.connect(parameters)
# adding new parameter after connect (will be logged as well)
parameters['new_param'] = 'this is new'
# changing the value of a parameter (new value will be stored instead of previous one)
parameters['float'] = '9.9'
# adding new parameter after connect (will be logged as well)
parameters['new_param'] = 'this is new'
# changing the value of a parameter (new value will be stored instead of previous one)
parameters['float'] = '9.9'
```
Parameters from dictionaries connected to Tasks appear in **HYPER PARAMETERS** **>** **General**.


@ -20,27 +20,39 @@ When the script runs, it creates an experiment named `image reporting`, which is
Report images using several formats by calling the [Logger.report_image](../../references/sdk/logger.md#report_image)
method:
# report image as float image
m = np.eye(256, 256, dtype=np.float)
Logger.current_logger().report_image("image", "image float", iteration=iteration, image=m)
```python
# report image as float image
m = np.eye(256, 256, dtype=np.float)
Logger.current_logger().report_image("image", "image float", iteration=iteration, image=m)
# report image as uint8
m = np.eye(256, 256, dtype=np.uint8) * 255
Logger.current_logger().report_image("image", "image uint8", iteration=iteration, image=m)
# report image as uint8
m = np.eye(256, 256, dtype=np.uint8) * 255
Logger.current_logger().report_image("image", "image uint8", iteration=iteration, image=m)
# report image as uint8 RGB
m = np.concatenate((np.atleast_3d(m), np.zeros((256, 256, 2), dtype=np.uint8)), axis=2)
Logger.current_logger().report_image("image", "image color red", iteration=iteration, image=m)
# report image as uint8 RGB
m = np.concatenate((np.atleast_3d(m), np.zeros((256, 256, 2), dtype=np.uint8)), axis=2)
Logger.current_logger().report_image(
"image",
"image color red",
iteration=iteration,
image=m
)
# report PIL Image object
image_open = Image.open(os.path.join("data_samples", "picasso.jpg"))
Logger.current_logger().report_image("image", "image PIL", iteration=iteration, image=image_open)
# report PIL Image object
image_open = Image.open(os.path.join("data_samples", "picasso.jpg"))
Logger.current_logger().report_image(
"image",
"image PIL",
iteration=iteration,
image=image_open
)
```
**ClearML** reports these images as debug samples in the **ClearML Web UI** **>** experiment details **>** **RESULTS** tab
**>** **DEBUG SAMPLES** sub-tab.
![image](../../img/examples_reporting_07.png)
Double click a thumbnail and the image viewer opens.
Double click a thumbnail, and the image viewer opens.
![image](../../img/examples_reporting_07a.png)


@ -24,15 +24,19 @@ project.
Report by calling the [Logger.report_media](../../references/sdk/logger.md#report_media)
method using the `url` parameter.
# report video, an already uploaded video media (url)
Logger.current_logger().report_media(
'video', 'big bunny', iteration=1,
url='https://test-videos.co.uk/vids/bigbuckbunny/mp4/h264/720/Big_Buck_Bunny_720_10s_1MB.mp4')
```python
# report video, an already uploaded video media (url)
Logger.current_logger().report_media(
'video', 'big bunny', iteration=1,
url='https://test-videos.co.uk/vids/bigbuckbunny/mp4/h264/720/Big_Buck_Bunny_720_10s_1MB.mp4'
)
# report audio, report an already uploaded audio media (url)
Logger.current_logger().report_media(
'audio', 'pink panther', iteration=1,
url='https://www2.cs.uic.edu/~i101/SoundFiles/PinkPanther30.wav')
# report audio, report an already uploaded audio media (url)
Logger.current_logger().report_media(
'audio', 'pink panther', iteration=1,
url='https://www2.cs.uic.edu/~i101/SoundFiles/PinkPanther30.wav'
)
```
The reported audio can be viewed in the **DEBUG SAMPLES** sub-tab. Double click a thumbnail, and the audio player opens.
@ -43,10 +47,13 @@ The reported audio can be viewed in the **DEBUG SAMPLES** sub-tab. Double click
Use the `local_path` parameter.
# report audio, report local media audio file
Logger.current_logger().report_media(
'audio', 'tada', iteration=1,
local_path=os.path.join('data_samples', 'sample.mp3'))
```python
# report audio, report local media audio file
Logger.current_logger().report_media(
'audio', 'tada', iteration=1,
local_path=os.path.join('data_samples', 'sample.mp3')
)
```
The reported audio can be viewed in the **DEBUG SAMPLES** sub-tab. Double click a thumbnail, and the audio player opens.


@ -17,11 +17,12 @@ Connect a configuration file to a Task by calling the [Task.connect_configuratio
method with the file location and the configuration object's name as arguments. In this example, we connect a JSON file and a YAML file
to a Task.
config_file_json = 'data_samples/sample.json'
task.connect_configuration(name="json file", configuration=config_file_json)
...
config_file_yaml = 'data_samples/config_yaml.yaml'
task.connect_configuration(configuration=config_file_yaml, name="yaml file")
```python
config_file_json = 'data_samples/sample.json'
task.connect_configuration(name="json file", configuration=config_file_json)
config_file_yaml = 'data_samples/config_yaml.yaml'
task.connect_configuration(configuration=config_file_yaml, name="yaml file")
```
The configuration is logged to the ClearML Task and can be viewed in the **ClearML Web UI** experiment details **>** **CONFIGURATION** tab **>** **CONFIGURATION OBJECTS**
section. The contents of the JSON file will appear in the **json file** object, and the contents of the YAML file will appear
@ -34,17 +35,21 @@ in the **yaml file** object, as specified in the `name` parameter of the `connec
Connect a configuration dictionary to a Task by creating a dictionary, and then calling the [Task.connect_configuration](../../references/sdk/task.md#connect_configuration)
method with the dictionary and the object name as arguments. After the configuration is connected, **ClearML** tracks changes to it.
model_config_dict = {
'CHANGE ME': 13.37,
'dict': {'sub_value': 'string', 'sub_integer': 11},
'list_of_ints': [1, 2, 3, 4],
}
model_config_dict = task.connect_configuration(name='dictionary', configuration=model_config_dict)
# Update the dictionary after connecting it, and the changes will be tracked as well.
model_config_dict['new value'] = 10
model_config_dict['CHANGE ME'] *= model_config_dict['new value']
```python
model_config_dict = {
'CHANGE ME': 13.37,
'dict': {'sub_value': 'string', 'sub_integer': 11},
'list_of_ints': [1, 2, 3, 4],
}
model_config_dict = task.connect_configuration(
name='dictionary',
configuration=model_config_dict
)
# Update the dictionary after connecting it, and the changes will be tracked as well.
model_config_dict['new value'] = 10
model_config_dict['CHANGE ME'] *= model_config_dict['new value']
```
The configurations are connected to the ClearML Task and can be viewed in the **ClearML Web UI** **>** experiment details **>** **CONFIGURATION** tab **>**
**CONFIGURATION OBJECTS** area **>** **dictionary** object.
@ -55,13 +60,16 @@ The configurations are connected to the ClearML Task and can be viewed in the **
Connect a label enumeration dictionary by creating the dictionary, and then calling the [Task.connect_label_enumeration](../../references/sdk/task.md#connect_label_enumeration)
method with the dictionary as an argument.
# store the label enumeration of the training model
labels = {'background': 0, 'cat': 1, 'dog': 2}
task.connect_label_enumeration(labels)
```python
# store the label enumeration of the training model
labels = {'background': 0, 'cat': 1, 'dog': 2}
task.connect_label_enumeration(labels)
```
Log a local model file.
OutputModel().update_weights('my_best_model.bin')
```python
OutputModel().update_weights('my_best_model.bin')
```
The stored model contains the model configuration and the label enumeration.


@ -14,17 +14,24 @@ When the script runs, it creates an experiment named `pandas table reporting`, w
Report Pandas DataFrames by calling the [Logger.report_table](../../references/sdk/logger.md#report_table)
method, and providing the DataFrame in the `table_plot` parameter.
# Report table - DataFrame with index
df = pd.DataFrame(
{
"num_legs": [2, 4, 8, 0],
"num_wings": [2, 0, 0, 0],
"num_specimen_seen": [10, 2, 1, 8],
},
index=["falcon", "dog", "spider", "fish"],
)
df.index.name = "id"
Logger.current_logger().report_table("table pd", "PD with index", iteration=iteration, table_plot=df)
```python
# Report table - DataFrame with index
df = pd.DataFrame(
{
"num_legs": [2, 4, 8, 0],
"num_wings": [2, 0, 0, 0],
"num_specimen_seen": [10, 2, 1, 8],
},
index=["falcon", "dog", "spider", "fish"],
)
df.index.name = "id"
Logger.current_logger().report_table(
"table pd",
"PD with index",
iteration=iteration,
table_plot=df
)
```
![image](../../img/examples_reporting_12.png)
@ -32,8 +39,15 @@ method, and providing the DataFrame in the `table_plot` parameter.
Report CSV files by providing the URL location of the CSV file in the `url` parameter. For a local CSV file, use the `csv` parameter.
# Report table - CSV from path
csv_url = "https://raw.githubusercontent.com/plotly/datasets/master/Mining-BTC-180.csv"
Logger.current_logger().report_table("table csv", "remote csv", iteration=iteration, url=csv_url)
```python
# Report table - CSV from path
csv_url = "https://raw.githubusercontent.com/plotly/datasets/master/Mining-BTC-180.csv"
Logger.current_logger().report_table(
"table csv",
"remote csv",
iteration=iteration,
url=csv_url
)
```
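For a local file, the same call takes the file path in the `csv` parameter instead (the path is illustrative):

```python
# Report table - local CSV file
Logger.current_logger().report_table(
    "table csv",
    "local csv",
    iteration=iteration,
    csv="data_samples/sample.csv"
)
```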
![image](../../img/examples_reporting_11.png)


@ -11,14 +11,25 @@ Plotly figure, using the `figure` parameter.
In this example, the Plotly figure is created using `plotly.express.scatter` (see [Scatter Plots in Python](https://plotly.com/python/line-and-scatter/)
in the Plotly documentation):
# Iris dataset
df = px.data.iris()
```python
# Iris dataset
df = px.data.iris()
# create complex plotly figure
fig = px.scatter(df, x="sepal_width", y="sepal_length", color="species", marginal_y="rug", marginal_x="histogram")
# create complex plotly figure
fig = px.scatter(
df,
x="sepal_width",
y="sepal_length",
color="species",
marginal_y="rug",
marginal_x="histogram"
)
# report the plotly figure
task.get_logger().report_plotly(title="iris", series="sepal", iteration=0, figure=fig)
# report the plotly figure
task.get_logger().report_plotly(
title="iris", series="sepal", iteration=0, figure=fig
)
```
When the script runs, it creates an experiment named `plotly reporting`, which is associated with the examples project.


@ -12,14 +12,24 @@ To reports scalars, call the [Logger.report_scalar](../../references/sdk/logger.
method. To report more than one series on the same plot, use the same `title` argument. For different plots, use different
`title` arguments.
# report two scalar series on the same graph
for i in range(100):
Logger.current_logger().report_scalar("unified graph", "series A", iteration=i, value=1./(i+1))
Logger.current_logger().report_scalar("unified graph", "series B", iteration=i, value=10./(i+1))
```python
# report two scalar series on the same graph
for i in range(100):
Logger.current_logger().report_scalar(
"unified graph", "series A", iteration=i, value=1./(i+1)
)
Logger.current_logger().report_scalar(
"unified graph", "series B", iteration=i, value=10./(i+1)
)
# report two scalar series on two different graphs
for i in range(100):
Logger.current_logger().report_scalar("graph A", "series A", iteration=i, value=1./(i+1))
Logger.current_logger().report_scalar("graph B", "series B", iteration=i, value=10./(i+1))
# report two scalar series on two different graphs
for i in range(100):
Logger.current_logger().report_scalar(
"graph A", "series A", iteration=i, value=1./(i+1)
)
Logger.current_logger().report_scalar(
"graph B", "series B", iteration=i, value=10./(i+1)
)
```
![image](../../img/examples_reporting_14.png)


@ -19,37 +19,39 @@ method. To report more than one series on the same plot, use same the `title` ar
`title` arguments. Specify the type of histogram with the `mode` parameter. The `mode` values are `group` (the default),
`stack`, and `relative`.
# report a single histogram
histogram = np.random.randint(10, size=10)
Logger.current_logger().report_histogram(
"single_histogram",
"random histogram",
iteration=iteration,
values=histogram,
xaxis="title x",
yaxis="title y",
)
```python
# report a single histogram
histogram = np.random.randint(10, size=10)
Logger.current_logger().report_histogram(
"single_histogram",
"random histogram",
iteration=iteration,
values=histogram,
xaxis="title x",
yaxis="title y",
)
# report two histograms on the same graph (plot)
histogram1 = np.random.randint(13, size=10)
histogram2 = histogram * 0.75
Logger.current_logger().report_histogram(
"two_histogram",
"series 1",
iteration=iteration,
values=histogram1,
xaxis="title x",
yaxis="title y",
)
# report two histograms on the same graph (plot)
histogram1 = np.random.randint(13, size=10)
histogram2 = histogram * 0.75
Logger.current_logger().report_histogram(
"two_histogram",
"series 1",
iteration=iteration,
values=histogram1,
xaxis="title x",
yaxis="title y",
)
Logger.current_logger().report_histogram(
"two_histogram",
"series 2",
iteration=iteration,
values=histogram2,
xaxis="title x",
yaxis="title y",
)
Logger.current_logger().report_histogram(
"two_histogram",
"series 2",
iteration=iteration,
values=histogram2,
xaxis="title x",
yaxis="title y",
)
```
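The plots above use the default `group` mode; choosing another layout only changes the `mode` argument. For instance, stacking the same two series (a sketch):

```python
# report the two series stacked instead of grouped
Logger.current_logger().report_histogram(
    "stacked_histogram",
    "series 1",
    iteration=iteration,
    values=histogram1,
    xaxis="title x",
    yaxis="title y",
    mode='stack',
)
Logger.current_logger().report_histogram(
    "stacked_histogram",
    "series 2",
    iteration=iteration,
    values=histogram2,
    xaxis="title x",
    yaxis="title y",
    mode='stack',
)
```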
![image](../../img/examples_reporting_15.png)
@ -60,69 +62,75 @@ method. To report more than one series on the same plot, use same the `title` ar
Report confusion matrices by calling the [Logger.report_matrix](../../references/sdk/logger.md#report_matrix)
method.
# report confusion matrix
confusion = np.random.randint(10, size=(10, 10))
Logger.current_logger().report_matrix(
"example_confusion",
"ignored",
iteration=iteration,
matrix=confusion,
xaxis="title X",
yaxis="title Y",
)
```python
# report confusion matrix
confusion = np.random.randint(10, size=(10, 10))
Logger.current_logger().report_matrix(
"example_confusion",
"ignored",
iteration=iteration,
matrix=confusion,
xaxis="title X",
yaxis="title Y",
)
```
![image](../../img/examples_reporting_16.png)
# report confusion matrix with (0,0) at the top left
Logger.current_logger().report_matrix(
"example_confusion_0_0_at_top",
"ignored",
iteration=iteration,
matrix=confusion,
xaxis="title X",
yaxis="title Y",
yaxis_reversed=True,
)
```python
# report confusion matrix with (0,0) at the top left
Logger.current_logger().report_matrix(
"example_confusion_0_0_at_top",
"ignored",
iteration=iteration,
matrix=confusion,
xaxis="title X",
yaxis="title Y",
yaxis_reversed=True,
)
```
## 2D Scatter Plots
Report 2D scatter plots by calling the [Logger.report_scatter2d](../../references/sdk/logger.md#report_scatter2d)
method. Use the `mode` parameter to plot data points with lines (by default), markers, or both lines and markers.
scatter2d = np.hstack(
(np.atleast_2d(np.arange(0, 10)).T, np.random.randint(10, size=(10, 1)))
)
```python
scatter2d = np.hstack(
(np.atleast_2d(np.arange(0, 10)).T, np.random.randint(10, size=(10, 1)))
)
# report 2d scatter plot with lines
Logger.current_logger().report_scatter2d(
"example_scatter",
"series_xy",
iteration=iteration,
scatter=scatter2d,
xaxis="title x",
yaxis="title y",
)
# report 2d scatter plot with lines
Logger.current_logger().report_scatter2d(
"example_scatter",
"series_xy",
iteration=iteration,
scatter=scatter2d,
xaxis="title x",
yaxis="title y",
)
# report 2d scatter plot with markers
Logger.current_logger().report_scatter2d(
"example_scatter",
"series_markers",
iteration=iteration,
scatter=scatter2d,
xaxis="title x",
yaxis="title y",
mode='markers'
)
# report 2d scatter plot with markers
Logger.current_logger().report_scatter2d(
"example_scatter",
"series_markers",
iteration=iteration,
scatter=scatter2d,
xaxis="title x",
yaxis="title y",
mode='markers'
)
# report 2d scatter plot with lines and markers
Logger.current_logger().report_scatter2d(
"example_scatter",
"series_lines+markers",
iteration=iteration,
scatter=scatter2d,
xaxis="title x",
yaxis="title y",
mode='lines+markers'
)
# report 2d scatter plot with lines and markers
Logger.current_logger().report_scatter2d(
"example_scatter",
"series_lines+markers",
iteration=iteration,
scatter=scatter2d,
xaxis="title x",
yaxis="title y",
mode='lines+markers'
)
```
![image](../../img/examples_reporting_17.png)


@ -342,11 +342,19 @@ one ROI labeled with both `Car` and `largely_occluded` will be input.
```python
myDataView = DataView(iteration_order=IterationOrder.random, iteration_infinite=True)
myDataView.add_query(dataset_name='myDataset', version_name='training',
roi_query='Car', weight = 1)
myDataView.add_query(
dataset_name='myDataset',
version_name='training',
roi_query='Car',
weight=1
)
myDataView.add_query(dataset_name='myDataset', version_name='training',
roi_query='label.keyword:\"Car\" AND label.keyword:\"largely_occluded\"', weight = 5)
myDataView.add_query(
dataset_name='myDataset',
version_name='training',
roi_query='label.keyword:\"Car\" AND label.keyword:\"largely_occluded\"',
weight=5
)
```
### Mapping ROI Labels


@ -251,6 +251,7 @@ mask value as a list with the RGB values in the `mask_rgb` parameter, and a list
frame = SingleFrame(
source='/home/user/woof_meow.jpg',
preview_uri='https://storage.googleapis.com/kaggle-competitions/kaggle/3362/media/woof_meow.jpg',
)
frame.add_annotation(mask_rgb=[0, 0, 0], labels=['cat'])
```


@ -35,7 +35,7 @@ When archiving an experiment:
* Experiments or models table - Right click the experiment or model **>** **Restore**.
* Info panel or full screen details view - Click <img src="/docs/latest/icons/ico-bars-menu.svg" alt="Bars menu" className="icon size-sm space-sm" />
(menu) **>** **Restore from archive**.
(menu) **>** **Restore from Archive**.
* Restore multiple experiments or models from the:


@ -126,7 +126,7 @@ Visualize the comparison of scalars, which includes metrics and monitored resour
1. Click the **SCALARS** tab.
1. In the dropdown menu (upper right of the left sidebar), choose one of the following:
* **Last values** (the final or most recent value)
* **Last Values** (the final or most recent value)
* **Min Values** (the minimal values)
* **Max Values** (the maximal values)
1. Sort by variant.