Mirror of https://github.com/clearml/clearml-docs (synced 2025-04-02 12:21:08 +00:00)

Small edits (#621)
parent 1adfc18696
commit f2491cf9f0

@@ -121,14 +121,14 @@ optimization.

## Optimizer Execution Options

The `HyperParameterOptimizer` provides options to launch the optimization tasks locally or through a ClearML [queue](agents_and_queues.md#what-is-a-queue).

-Start a `HyperParameterOptimizer` instance using either [`HyperParameterOptimizer.start`](../references/sdk/hpo_optimization_hyperparameteroptimizer.md#start)
-or [`HyperParameterOptimizer.start_locally`](../references/sdk/hpo_optimization_hyperparameteroptimizer.md#start_locally).
-Both methods run the optimizer controller locally. The `start` method launches the base task clones through a queue
-specified when instantiating the controller, while `start_locally` runs the tasks locally.
+Start a `HyperParameterOptimizer` instance using either [`HyperParameterOptimizer.start()`](../references/sdk/hpo_optimization_hyperparameteroptimizer.md#start)
+or [`HyperParameterOptimizer.start_locally()`](../references/sdk/hpo_optimization_hyperparameteroptimizer.md#start_locally).
+Both methods run the optimizer controller locally. `start()` launches the base task clones through a queue
+specified when instantiating the controller, while `start_locally()` runs the tasks locally.
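
As a rough illustration of the two launch modes, a minimal sketch (the project name, queue, metric, and parameter range below are placeholder assumptions, not part of this commit):

```python
from clearml import Task
from clearml.automation import HyperParameterOptimizer, UniformParameterRange
from clearml.automation.optuna import OptimizerOptuna

# Placeholder values -- substitute your own base task ID, metric, and queue
task = Task.init(project_name="examples", task_name="HPO controller")

optimizer = HyperParameterOptimizer(
    base_task_id="<base_task_id>",  # the task to clone for each trial
    hyper_parameters=[
        UniformParameterRange("General/learning_rate", min_value=1e-4, max_value=1e-1),
    ],
    objective_metric_title="validation",
    objective_metric_series="accuracy",
    objective_metric_sign="max",
    optimizer_class=OptimizerOptuna,
    execution_queue="default",  # queue the clones are sent to by start()
)

optimizer.start()            # clones execute through the "default" queue
# optimizer.start_locally()  # alternative: clones execute on this machine
```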

:::tip Remote Execution
-You can also launch the optimizer controller through a queue by using the [`Task.execute_remotely`](../references/sdk/task.md#execute_remotely)
-method before starting the optimizer.
+You can also launch the optimizer controller through a queue by using [`Task.execute_remotely()`](../references/sdk/task.md#execute_remotely)
+before starting the optimizer.
:::
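
The tip above, sketched in the same vein (the `services` queue name is an assumption):

```python
# Send the controller itself to an agent queue; with exit_process=True the
# local process stops, and optimizer.start() runs only on the agent
task.execute_remotely(queue_name="services", exit_process=True)
optimizer.start()
```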

@@ -147,5 +147,9 @@ ClearML also provides `clearml-param-search`, a CLI utility for managing the hyp

## UI Application

+:::info Pro Plan Offering
+The ClearML HPO App is available under the ClearML Pro plan
+:::
+
ClearML provides the [Hyperparameter Optimization GUI application](../webapp/applications/apps_hpo.md) for launching and
managing the hyperparameter optimization process.

@@ -26,10 +26,10 @@ The agent executes the code with the modifications you made in the UI, even over

Clone your experiment, then modify your Hydra parameters via the UI in one of the following ways:
* Modify the OmegaConf directly:
  1. In the experiment’s **CONFIGURATION > HYPERPARAMETERS > HYDRA** section, set `_allow_omegaconf_edit_` to `True`
  1. In the experiment’s **CONFIGURATION > CONFIGURATION OBJECTS > OmegaConf** section, modify the OmegaConf values
* Add an experiment hyperparameter:
  1. In the experiment’s **CONFIGURATION > HYPERPARAMETERS > HYDRA** section, make sure `_allow_omegaconf_edit_` is set
     to `False`
  1. In the same section, click `Edit`, which gives you the option to add parameters. Input parameters from the OmegaConf
     that you want to modify using dot notation. For example, if your OmegaConf looks like this:
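
The hunk ends before the document's own example; a hypothetical OmegaConf for illustration (these keys are assumptions, not from the original doc):

```yaml
# Hypothetical OmegaConf -- keys are illustrative only
dataset:
  path: ./data
  batch_size: 32
```

Under this sketch, adding a hyperparameter named `dataset.batch_size` in the HYDRA section would override the batch size at runtime.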

@@ -142,7 +142,7 @@ New dataset created id=<dataset-id>

```

### Run Training Using a ClearML Dataset

Now that you have a ClearML dataset, you can very simply use it to train custom YOLOv5 models:

```commandline
python train.py --img 640 --batch 16 --epochs 3 --data clearml://<your_dataset_id> --weights yolov5s.pt --cache
```

@@ -12,17 +12,16 @@ the instructions [here](https://github.com/allegroai/clearml/tree/master/docs/er

:::

**New Features**
-* Add `include_archive` parameter to `Dataset.list_datasets()`: include archived datasets in list [ClearML GitHub issue #1069](https://github.com/allegroai/clearml/issues/1069)
+* Add `include_archive` parameter to `Dataset.list_datasets()`: include archived datasets in list [ClearML GitHub issue #1067](https://github.com/allegroai/clearml/issues/1067)
* Add support to specify the multipart chunk size and threshold using the `aws.boto3.multipart_chunksize` and
-  `aws.boto3.multipart_threshold` configuration options in the clearml.conf [ClearML GitHub issue #1059](https://github.com/allegroai/clearml/issues/1059)
+  `aws.boto3.multipart_threshold` configuration options in the clearml.conf [ClearML GitHub issue #1058](https://github.com/allegroai/clearml/issues/1058)
* Add `PipelineController.get_pipeline()` for retrieving previously run pipelines.
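
A sketch of how the multipart options above might appear in `clearml.conf` (the nesting under the `sdk` section and the byte values are assumptions):

```
# clearml.conf (HOCON) -- illustrative values, assuming the options live under sdk.aws.boto3
sdk {
    aws {
        boto3 {
            multipart_chunksize: 16777216   # 16 MiB per uploaded part
            multipart_threshold: 33554432   # switch to multipart uploads above 32 MiB
        }
    }
}
```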

**Bug Fixes**
* Fix `continue_last_task=0` is ignored in pipelines run with `retry_on_failure` [ClearML GitHub issue #1054](https://github.com/allegroai/clearml/issues/1054)
-* Fix AWS driver issues: [ClearML GitHub issue #1000](https://github.com/allegroai/clearml/issues/1000)
+* Fix AWS driver issues: [ClearML GitHub PR #1000](https://github.com/allegroai/clearml/pull/1000)
  * Fix credential authentication failure when attempting to use token
  * Fix instantiation within VPC without AvailabilityZones
* Fix Error accessing GCP artifacts when using special characters in task name [ClearML GitHub issue #1051](https://github.com/allegroai/clearml/issues/1051)
-* Fix `continue_last_task=0` is ignored in pipelines run with `retry_on_failure` [ClearML GitHub issue #1054](https://github.com/allegroai/clearml/issues/1054)
* Fix `Task.connect_configuration()` doesn't handle dictionaries with special characters
* Fix pipeline steps created with `PipelineDecorator` aren't cached
* Fix `Task.get_by_name()` doesn't return the most recent task when multiple tasks have same name