Small edits (#621)

pollfly 2023-07-23 12:11:32 +03:00 committed by GitHub
parent 1adfc18696
commit f2491cf9f0
4 changed files with 18 additions and 15 deletions


@@ -121,14 +121,14 @@ optimization.
## Optimizer Execution Options
The `HyperParameterOptimizer` provides options to launch the optimization tasks locally or through a ClearML [queue](agents_and_queues.md#what-is-a-queue).
-Start a `HyperParameterOptimizer` instance using either [`HyperParameterOptimizer.start`](../references/sdk/hpo_optimization_hyperparameteroptimizer.md#start)
-or [`HyperParameterOptimizer.start_locally`](../references/sdk/hpo_optimization_hyperparameteroptimizer.md#start_locally).
-Both methods run the optimizer controller locally. The `start` method launches the base task clones through a queue
-specified when instantiating the controller, while `start_locally` runs the tasks locally.
+Start a `HyperParameterOptimizer` instance using either [`HyperParameterOptimizer.start()`](../references/sdk/hpo_optimization_hyperparameteroptimizer.md#start)
+or [`HyperParameterOptimizer.start_locally()`](../references/sdk/hpo_optimization_hyperparameteroptimizer.md#start_locally).
+Both methods run the optimizer controller locally. `start()` launches the base task clones through a queue
+specified when instantiating the controller, while `start_locally()` runs the tasks locally.
:::tip Remote Execution
-You can also launch the optimizer controller through a queue by using the [`Task.execute_remotely`](../references/sdk/task.md#execute_remotely)
-method before starting the optimizer.
+You can also launch the optimizer controller through a queue by using [`Task.execute_remotely()`](../references/sdk/task.md#execute_remotely)
+before starting the optimizer.
:::
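
For reference, a minimal sketch of the two launch modes described in this hunk. The base task ID, parameter name, metric names, and queue name below are placeholders for illustration, not values from this commit:

```python
from clearml.automation import (
    DiscreteParameterRange,
    HyperParameterOptimizer,
    RandomSearch,
)

# Placeholder base task and search space, for illustration only
optimizer = HyperParameterOptimizer(
    base_task_id="<base-task-id>",
    hyper_parameters=[
        DiscreteParameterRange("General/batch_size", values=[16, 32, 64]),
    ],
    objective_metric_title="validation",
    objective_metric_series="accuracy",
    objective_metric_sign="max",
    optimizer_class=RandomSearch,
    execution_queue="default",  # queue used by start()
)

optimizer.start()            # launch base task clones through the queue
# optimizer.start_locally()  # or run the clones on this machine instead
optimizer.wait()             # block until the optimization finishes
optimizer.stop()
```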
@@ -147,5 +147,9 @@ ClearML also provides `clearml-param-search`, a CLI utility for managing the hyp
## UI Application
+:::info Pro Plan Offering
+The ClearML HPO App is available under the ClearML Pro plan
+:::
ClearML provides the [Hyperparameter Optimization GUI application](../webapp/applications/apps_hpo.md) for launching and
managing the hyperparameter optimization process.


@@ -26,10 +26,10 @@ The agent executes the code with the modifications you made in the UI, even over
Clone your experiment, then modify your Hydra parameters via the UI in one of the following ways:
* Modify the OmegaConf directly:
  1. In the experiment's **CONFIGURATION > HYPERPARAMETERS > HYDRA** section, set `_allow_omegaconf_edit_` to `True`
  1. In the experiment's **CONFIGURATION > CONFIGURATION OBJECTS > OmegaConf** section, modify the OmegaConf values
* Add an experiment hyperparameter:
  1. In the experiment's **CONFIGURATION > HYPERPARAMETERS > HYDRA** section, make sure `_allow_omegaconf_edit_` is set
     to `False`
  1. In the same section, click `Edit`, which gives you the option to add parameters. Input parameters from the OmegaConf
     that you want to modify using dot notation. For example, if your OmegaConf looks like this:
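
The original example is cut off by the hunk boundary above. As a hypothetical stand-in (built with the OmegaConf API; the config keys are invented for illustration), a nested value would be overridden from the UI by adding a parameter keyed by its dot-notation path:

```python
from omegaconf import OmegaConf

# Hypothetical configuration, standing in for the truncated example
cfg = OmegaConf.create({
    "dataset": {"path": "./data"},
    "train": {"batch_size": 32, "lr": 0.001},
})

# To override a nested value from the UI, add a HYDRA hyperparameter
# keyed by its dot-notation path, e.g.:
#   train.batch_size = 64
print(OmegaConf.to_yaml(cfg))
```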


@@ -142,7 +142,7 @@ New dataset created id=<dataset-id>
```
### Run Training Using a ClearML Dataset
Now that you have a ClearML dataset, you can simply use it to train custom YOLOv5 models:
```commandline
python train.py --img 640 --batch 16 --epochs 3 --data clearml://<your_dataset_id> --weights yolov5s.pt --cache
```
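
If you need the dataset ID for the `--data clearml://` argument programmatically, a minimal sketch using the ClearML `Dataset` API (the dataset name and project below are placeholders):

```python
from clearml import Dataset

# Placeholder name/project; substitute your own dataset
dataset = Dataset.get(dataset_name="yolov5-data", dataset_project="datasets")
print(f"clearml://{dataset.id}")  # value to pass to train.py --data
```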


@@ -12,17 +12,16 @@ the instructions [here](https://github.com/allegroai/clearml/tree/master/docs/er
:::
**New Features**
-* Add `include_archive` parameter to `Dataset.list_datasets()`: include archived datasets in list [ClearML GitHub issue #1069](https://github.com/allegroai/clearml/issues/1069)
+* Add `include_archive` parameter to `Dataset.list_datasets()`: include archived datasets in list [ClearML GitHub issue #1067](https://github.com/allegroai/clearml/issues/1067)
* Add support to specify the multipart chunk size and threshold using the `aws.boto3.multipart_chunksize` and
-  `aws.boto3.multipart_threshold` configuration options in the clearml.conf [ClearML GitHub issue #1059](https://github.com/allegroai/clearml/issues/1059)
+  `aws.boto3.multipart_threshold` configuration options in the clearml.conf [ClearML GitHub issue #1058](https://github.com/allegroai/clearml/issues/1058)
* Add `PipelineController.get_pipeline()` for retrieving previously run pipelines.
**Bug Fixes**
-* Fix `continue_last_task=0` is ignored in pipelines run with `retry_on_failure` [ClearML GitHub issue #1054](https://github.com/allegroai/clearml/issues/1054)
-* Fix AWS driver issues: [ClearML GitHub issue #1000](https://github.com/allegroai/clearml/issues/1000)
+* Fix AWS driver issues: [ClearML GitHub PR #1000](https://github.com/allegroai/clearml/pull/1000)
  * Fix credential authentication failure when attempting to use token
  * Fix instantiation within VPC without AvailabilityZones
-* Fix Error accessing GCP artifacts when using special characters in task name [ClearML GitHub issue #1051](https://github.com/allegroai/clearml/issues/1051)
+* Fix `continue_last_task=0` is ignored in pipelines run with `retry_on_failure` [ClearML GitHub issue #1054](https://github.com/allegroai/clearml/issues/1054)
* Fix `Task.connect_configuration()` doesn't handle dictionaries with special characters
* Fix pipeline steps created with `PipelineDecorator` aren't cached
* Fix `Task.get_by_name()` doesn't return the most recent task when multiple tasks have same name
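
A sketch of the `include_archive` addition noted in the New Features hunk above, assuming the parameter name exactly as given in the release note; the dictionary keys on the returned entries are also assumptions:

```python
from clearml import Dataset

# include_archive=True also returns archived datasets (new in this release)
for ds in Dataset.list_datasets(include_archive=True):
    print(ds["id"], ds["name"])  # assumed dict keys on each entry
```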