Small edits (#691)

This commit is contained in:
pollfly 2023-10-15 10:59:07 +03:00 committed by GitHub
parent e6257d2843
commit a8be5b50c8
18 changed files with 27 additions and 26 deletions


@@ -642,7 +642,6 @@ logger.report_scatter2d(
xaxis="title x",
yaxis="title y"
)
```
## GIT and Storage


@@ -18,8 +18,7 @@ ClearML automatically captures scalars logged by CatBoost. These scalars can be
![Experiment scalars](../../../img/examples_catboost_scalars.png)
## Hyperparameters
ClearML automatically logs command line options defined with argparse. They appear in **CONFIGURATIONS > HYPERPARAMETERS > Args**.
![Experiment hyperparameters](../../../img/examples_catboost_configurations.png)


@@ -13,7 +13,7 @@ The example script does the following:
## Scalars
The scalars logged in the experiment can be visualized in a plot, which appears in the ClearML web UI, in the experiment's **SCALARS** tab.
![LightGBM scalars](../../../img/examples_lightgbm_scalars.png)


@@ -28,7 +28,8 @@ on `Task.current_task` (the main Task). The dictionary contains the `dist.rank`
```python
Task.current_task().upload_artifact(
'temp {:02d}'.format(dist.get_rank()), artifact_object={'worker_rank': dist.get_rank()}
)
```
All of these artifacts appear in the main Task, **ARTIFACTS** **>** **OTHER**.
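As a quick check of the naming scheme above, the format string alone shows how each subprocess's artifact gets a distinct name (plain Python; the rank range here is illustrative, not from the script):

```python
# Zero-padded worker ranks produce a distinct artifact name per subprocess,
# mirroring the 'temp {:02d}'.format(dist.get_rank()) call above.
names = ['temp {:02d}'.format(rank) for rank in range(4)]
print(names)  # ['temp 00', 'temp 01', 'temp 02', 'temp 03']
```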
@@ -43,7 +44,8 @@ same title (`loss`), but a different series name (containing the subprocess' `ra
```python
Task.current_task().get_logger().report_scalar(
'loss', 'worker {:02d}'.format(dist.get_rank()), value=loss.item(), iteration=i
)
```
The single scalar plot for loss appears in **SCALARS**.


@@ -70,25 +70,29 @@ clearml_logger.attach(
* Log metrics for training:
```python
clearml_logger.attach(
train_evaluator,
log_handler=OutputHandler(
tag="training",
metric_names=["nll", "accuracy"],
global_step_transform=global_step_from_engine(trainer)
),
event_name=Events.EPOCH_COMPLETED
)
```
* Log metrics for validation:
```python
clearml_logger.attach(
evaluator,
log_handler=OutputHandler(
tag="validation",
metric_names=["nll", "accuracy"],
global_step_transform=global_step_from_engine(trainer)
),
event_name=Events.EPOCH_COMPLETED
)
```
To log optimizer parameters, use the `attach_opt_params_handler` method:
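The snippet for that call is truncated in this view; a minimal sketch of how Ignite's `attach_opt_params_handler` is typically invoked (assuming the `trainer` engine and a torch `optimizer` from the surrounding script; by default it reports the learning rate):

```python
# Attach an optimizer-parameter handler so the current learning rate
# is reported to ClearML at the start of every training iteration.
# `trainer` and `optimizer` come from the surrounding example script.
clearml_logger.attach_opt_params_handler(
    trainer,
    event_name=Events.ITERATION_STARTED,
    optimizer=optimizer
)
```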


@@ -29,7 +29,8 @@ tuner = kt.Hyperband(
logger=ClearMLTunerLogger(),
objective='val_accuracy',
max_epochs=10,
hyperband_iterations=6
)
```
When the script runs, it logs:


@@ -23,7 +23,7 @@ The following search strategies can be used:
documentation.
* Random uniform sampling of hyperparameter strategy - [automation.RandomSearch](../../../references/sdk/hpo_optimization_randomsearch.md)
* Full grid sampling strategy of every hyperparameter combination - [automation.GridSearch](../../../references/sdk/hpo_optimization_gridsearch.md).
* Custom - Use a custom class and inherit from the ClearML automation base strategy class, `automation.optimization.SearchStrategy`.
The search strategy class that is chosen will be passed to the [automation.HyperParameterOptimizer](../../../references/sdk/hpo_optimization_hyperparameteroptimizer.md)
@@ -73,8 +73,8 @@ can be [reproduced](../../../webapp/webapp_exp_reproducing.md) and [tuned](../..
Set the Task type to `optimizer`, and create a new experiment (and Task object) each time the optimizer runs (`reuse_last_task_id=False`).
When the code runs, it creates an experiment named **Automatic Hyper-Parameter Optimization** in
the **Hyper-Parameter Optimization** project, which can be seen in the **ClearML Web UI**.
```python
# Connecting CLEARML
@@ -174,7 +174,6 @@ Specify the remaining parameters, including the time limit per Task (minutes), p
max_iteration_per_job=30,
) # done creating HyperParameterOptimizer
```
## Running as a Service


@@ -56,7 +56,6 @@ Logger.current_logger().report_media(
iteration=iteration,
local_path="bar_pandas_groupby_nested.html",
)
```
### Bokeh Graph HTML


@@ -74,7 +74,6 @@ parameters['new_param'] = 'this is new'
# changing the value of a parameter (new value will be stored instead of previous one)
parameters['float'] = '9.9'
```
Parameters from dictionaries connected to Tasks appear in **HYPERPARAMETERS** **>** **General**.
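For context, a minimal sketch of how such a dictionary gets connected in the first place (the project and task names here are illustrative, not from this page):

```python
from clearml import Task

# Task.init registers the script run with the ClearML server
task = Task.init(project_name='examples', task_name='dictionary example')

# connect a mutable dictionary; ClearML then tracks values
# that are added or changed after the connect call as well
parameters = {'float': 2.2, 'string': 'my string'}
parameters = task.connect(parameters)
```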


@@ -319,7 +319,6 @@ frame.meta['road_hazard'] = 'yes'
# update the SingleFrame
frames.append(frame)
myDatasetVersion.update_frames(frames)
```