Small edits (#691)

pollfly
2023-10-15 10:59:07 +03:00
committed by GitHub
parent e6257d2843
commit a8be5b50c8
18 changed files with 27 additions and 26 deletions

View File

@@ -18,8 +18,7 @@ ClearML automatically captures scalars logged by CatBoost. These scalars can be
 ![Experiment scalars](../../../img/examples_catboost_scalars.png)
 ## Hyperparameters
-ClearML automatically logs command line options defined with argparse. They appear in **CONFIGURATIONS > HYPER
-PARAMETERS > Args**.
+ClearML automatically logs command line options defined with argparse. They appear in **CONFIGURATIONS > HYPERPARAMETERS > Args**.
 ![Experiment hyperparameters](../../../img/examples_catboost_configurations.png)
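The argparse capture described in the hunk above can be sketched minimally. The flag names below are hypothetical, and the `Task.init()` call is shown only as a comment because it needs a ClearML server; the point is that whatever the script's parser defines ends up under **CONFIGURATIONS > HYPERPARAMETERS > Args**.

```python
import argparse

# Hypothetical flags for illustration; ClearML records whatever argparse
# options the script defines once Task.init() has been called, e.g.:
#   from clearml import Task
#   task = Task.init(project_name="examples", task_name="catboost demo")
parser = argparse.ArgumentParser()
parser.add_argument('--iterations', type=int, default=500)
parser.add_argument('--learning-rate', type=float, default=0.1)
args = parser.parse_args([])  # defaults stand in for real CLI input here

# These values are what would appear under CONFIGURATIONS > HYPERPARAMETERS > Args.
print(args.iterations, args.learning_rate)  # 500 0.1
```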

View File

@@ -13,7 +13,7 @@ The example script does the following:
 ## Scalars
-The scalars logged in the experiment can be visualized in a plot, which appears in the ClearML web UI, in the **experiment's page > SCALARS**.
+The scalars logged in the experiment can be visualized in a plot, which appears in the ClearML web UI, in the experiment's **SCALARS** tab.
 ![LightGBM scalars](../../../img/examples_lightgbm_scalars.png)

View File

@@ -28,7 +28,8 @@ on `Task.current_task` (the main Task). The dictionary contains the `dist.rank`
 ```python
 Task.current_task().upload_artifact(
-    'temp {:02d}'.format(dist.get_rank()), artifact_object={'worker_rank': dist.get_rank()})
+    'temp {:02d}'.format(dist.get_rank()), artifact_object={'worker_rank': dist.get_rank()}
+)
 ```
 All of these artifacts appear in the main Task, **ARTIFACTS** **>** **OTHER**.
@@ -43,7 +44,8 @@ same title (`loss`), but a different series name (containing the subprocess' `ra
 ```python
 Task.current_task().get_logger().report_scalar(
-    'loss', 'worker {:02d}'.format(dist.get_rank()), value=loss.item(), iteration=i)
+    'loss', 'worker {:02d}'.format(dist.get_rank()), value=loss.item(), iteration=i
+)
 ```
 The single scalar plot for loss appears in **SCALARS**.
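Both calls in the hunks above share one pattern: each worker derives a unique artifact or series name from its `torch.distributed` rank, so everything lands on the main Task without collisions. A minimal sketch of just the naming, with fixed ranks standing in for `dist.get_rank()`:

```python
# Each subprocess formats its rank into the name, so the main Task ends up
# with one artifact per worker, and one series per worker under the 'loss' title.
ranks = [0, 1, 2]  # stand-in for dist.get_rank() in three subprocesses
artifact_names = ['temp {:02d}'.format(r) for r in ranks]
series_names = ['worker {:02d}'.format(r) for r in ranks]
print(artifact_names)  # ['temp 00', 'temp 01', 'temp 02']
print(series_names)    # ['worker 00', 'worker 01', 'worker 02']
```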

View File

@@ -70,25 +70,29 @@ clearml_logger.attach(
 * Log metrics for training:
   ```python
-  clearml_logger.attach(train_evaluator,
+  clearml_logger.attach(
+      train_evaluator,
       log_handler=OutputHandler(
           tag="training",
           metric_names=["nll", "accuracy"],
           global_step_transform=global_step_from_engine(trainer)
       ),
-      event_name=Events.EPOCH_COMPLETED)
+      event_name=Events.EPOCH_COMPLETED
+  )
   ```
 * Log metrics for validation:
   ```python
-  clearml_logger.attach(evaluator,
+  clearml_logger.attach(
+      evaluator,
       log_handler=OutputHandler(
           tag="validation",
           metric_names=["nll", "accuracy"],
           global_step_transform=global_step_from_engine(trainer)
       ),
-      event_name=Events.EPOCH_COMPLETED)
+      event_name=Events.EPOCH_COMPLETED
+  )
   ```
 To log optimizer parameters, use the `attach_opt_params_handler` method:
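The `attach` pattern in the hunk above boils down to registering a handler that fires on an engine event, with the handler reading state off the engine and forwarding it to the logger. A pure-Python stand-in of that wiring (the class and event names here are hypothetical, not ignite's real implementation):

```python
# Conceptual stand-in for the attach pattern: a handler registered for an
# event is called with the engine when that event fires. ignite's real
# Engine.add_event_handler follows the same shape; ToyEngine is hypothetical.
class ToyEngine:
    def __init__(self):
        self._handlers = {}
        self.epoch = 0

    def add_event_handler(self, event, handler):
        self._handlers.setdefault(event, []).append(handler)

    def fire(self, event):
        for handler in self._handlers.get(event, []):
            handler(self)

logged = []
engine = ToyEngine()
# Stand-in for OutputHandler: read engine state, forward it to the logger backend.
engine.add_event_handler('EPOCH_COMPLETED', lambda e: logged.append(('nll', e.epoch)))
engine.epoch = 1
engine.fire('EPOCH_COMPLETED')
print(logged)  # [('nll', 1)]
```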

View File

@@ -29,7 +29,8 @@ tuner = kt.Hyperband(
     logger=ClearMLTunerLogger(),
     objective='val_accuracy',
     max_epochs=10,
-    hyperband_iterations=6)
+    hyperband_iterations=6
+)
 ```
 When the script runs, it logs: