mirror of
https://github.com/clearml/clearml-docs
synced 2025-06-26 18:17:44 +00:00
Small edits (#691)
@@ -18,8 +18,7 @@ ClearML automatically captures scalars logged by CatBoost. These scalars can be

 ## Hyperparameters

-ClearML automatically logs command line options defined with argparse. They appear in **CONFIGURATIONS > HYPER
-PARAMETERS > Args**.
+ClearML automatically logs command line options defined with argparse. They appear in **CONFIGURATIONS > HYPERPARAMETERS > Args**.

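A minimal sketch of the argparse auto-logging described in this hunk (argument names and defaults below are illustrative, not from the original script): once `Task.init()` has run, ClearML captures whatever `parse_args()` returns and shows it under **CONFIGURATIONS > HYPERPARAMETERS > Args**.

```python
import argparse

# Illustrative CLI options; in a real script, Task.init() would have been
# called first, and ClearML records the parsed name/value pairs
# under CONFIGURATIONS > HYPERPARAMETERS > Args.
parser = argparse.ArgumentParser()
parser.add_argument("--lr", type=float, default=0.01)
parser.add_argument("--epochs", type=int, default=10)
args = parser.parse_args([])  # parse defaults only, for demonstration

print(vars(args))  # the name/value mapping ClearML would capture
```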
@@ -13,7 +13,7 @@ The example script does the following:

 ## Scalars

-The scalars logged in the experiment can be visualized in a plot, which appears in the ClearML web UI, in the **experiment's page > SCALARS**.
+The scalars logged in the experiment can be visualized in a plot, which appears in the ClearML web UI, in the experiment's **SCALARS** tab.

@@ -28,7 +28,8 @@ on `Task.current_task` (the main Task). The dictionary contains the `dist.rank`

 ```python
 Task.current_task().upload_artifact(
-    'temp {:02d}'.format(dist.get_rank()), artifact_object={'worker_rank': dist.get_rank()})
+    'temp {:02d}'.format(dist.get_rank()), artifact_object={'worker_rank': dist.get_rank()}
+)
 ```

 All of these artifacts appear in the main Task, **ARTIFACTS** **>** **OTHER**.
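The per-worker artifact naming in this hunk can be sketched without `torch.distributed` (the helper below is hypothetical; the real script calls `dist.get_rank()` inside each subprocess):

```python
# Each subprocess uploads its artifact under a rank-stamped name, so the
# artifacts from all workers coexist on the main Task. The 'temp {:02d}'
# format matches the snippet above.
def artifact_name(rank: int) -> str:
    return 'temp {:02d}'.format(rank)

names = [artifact_name(rank) for rank in range(3)]
print(names)
```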
@@ -43,7 +44,8 @@ same title (`loss`), but a different series name (containing the subprocess' `ra

 ```python
 Task.current_task().get_logger().report_scalar(
-    'loss', 'worker {:02d}'.format(dist.get_rank()), value=loss.item(), iteration=i)
+    'loss', 'worker {:02d}'.format(dist.get_rank()), value=loss.item(), iteration=i
+)
 ```

 The single scalar plot for loss appears in **SCALARS**.
@@ -70,25 +70,29 @@ clearml_logger.attach(

 * Log metrics for training:

   ```python
-  clearml_logger.attach(train_evaluator,
+  clearml_logger.attach(
+      train_evaluator,
       log_handler=OutputHandler(
           tag="training",
           metric_names=["nll", "accuracy"],
           global_step_transform=global_step_from_engine(trainer)
       ),
-      event_name=Events.EPOCH_COMPLETED)
+      event_name=Events.EPOCH_COMPLETED
+  )
   ```

 * Log metrics for validation:

   ```python
-  clearml_logger.attach(evaluator,
+  clearml_logger.attach(
+      evaluator,
       log_handler=OutputHandler(
           tag="validation",
           metric_names=["nll", "accuracy"],
           global_step_transform=global_step_from_engine(trainer)
       ),
-      event_name=Events.EPOCH_COMPLETED)
+      event_name=Events.EPOCH_COMPLETED
+  )
   ```

 To log optimizer parameters, use the `attach_opt_params_handler` method:
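The `attach` calls in this hunk follow Ignite's event-handler pattern: a handler is registered for an engine event and is invoked with the engine when that event fires. A dependency-free stand-in of that wiring (class and event names below are illustrative, not Ignite's API):

```python
# Minimal stand-in for the engine/event wiring behind clearml_logger.attach:
# handlers registered for an event run when the engine fires that event.
class Engine:
    def __init__(self):
        self._handlers = {}

    def attach(self, event, handler):
        # register a handler to run when `event` fires
        self._handlers.setdefault(event, []).append(handler)

    def fire(self, event):
        for handler in self._handlers.get(event, []):
            handler(self)

logged = []
evaluator = Engine()
# analogous to attaching an OutputHandler for Events.EPOCH_COMPLETED
evaluator.attach("EPOCH_COMPLETED",
                 lambda engine: logged.append(("validation", "nll")))
evaluator.fire("EPOCH_COMPLETED")
print(logged)
```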
@@ -29,7 +29,8 @@ tuner = kt.Hyperband(
     logger=ClearMLTunerLogger(),
     objective='val_accuracy',
     max_epochs=10,
-    hyperband_iterations=6)
+    hyperband_iterations=6
+)
 ```

 When the script runs, it logs: