From f2491cf9f0eb243dc3731cbdb964bab6c677118d Mon Sep 17 00:00:00 2001
From: pollfly <75068813+pollfly@users.noreply.github.com>
Date: Sun, 23 Jul 2023 12:11:32 +0300
Subject: [PATCH] Small edits (#621)

---
 docs/fundamentals/hpo.md       | 16 ++++++++++------
 docs/integrations/hydra.md     |  6 +++---
 docs/integrations/yolov5.md    |  2 +-
 docs/release_notes/ver_1_12.md |  9 ++++-----
 4 files changed, 18 insertions(+), 15 deletions(-)

diff --git a/docs/fundamentals/hpo.md b/docs/fundamentals/hpo.md
index c6dadf70..1fcff191 100644
--- a/docs/fundamentals/hpo.md
+++ b/docs/fundamentals/hpo.md
@@ -121,14 +121,14 @@ optimization.
 
 ## Optimizer Execution Options
 
 The `HyperParameterOptimizer` provides options to launch the optimization tasks locally or through a ClearML [queue](agents_and_queues.md#what-is-a-queue).
-Start a `HyperParameterOptimizer` instance using either [`HyperParameterOptimizer.start`](../references/sdk/hpo_optimization_hyperparameteroptimizer.md#start)
-or [`HyperParameterOptimizer.start_locally`](../references/sdk/hpo_optimization_hyperparameteroptimizer.md#start_locally).
-Both methods run the optimizer controller locally. The `start` method launches the base task clones through a queue
-specified when instantiating the controller, while `start_locally` runs the tasks locally.
+Start a `HyperParameterOptimizer` instance using either [`HyperParameterOptimizer.start()`](../references/sdk/hpo_optimization_hyperparameteroptimizer.md#start)
+or [`HyperParameterOptimizer.start_locally()`](../references/sdk/hpo_optimization_hyperparameteroptimizer.md#start_locally).
+Both methods run the optimizer controller locally. `start()` launches the base task clones through a queue
+specified when instantiating the controller, while `start_locally()` runs the tasks locally.
 
 :::tip Remote Execution
-You can also launch the optimizer controller through a queue by using the [`Task.execute_remotely`](../references/sdk/task.md#execute_remotely)
-method before starting the optimizer.
+You can also launch the optimizer controller through a queue by using [`Task.execute_remotely()`](../references/sdk/task.md#execute_remotely)
+before starting the optimizer.
 :::
@@ -147,5 +147,9 @@ ClearML also provides `clearml-param-search`, a CLI utility for managing the hyp
 
 ## UI Application
 
+:::info Pro Plan Offering
+The ClearML HPO App is available under the ClearML Pro plan
+:::
+
 ClearML provides the [Hyperparameter Optimization GUI application](../webapp/applications/apps_hpo.md) for launching
 and managing the hyperparameter optimization process.
diff --git a/docs/integrations/hydra.md b/docs/integrations/hydra.md
index 5e7ff7fa..654ac831 100644
--- a/docs/integrations/hydra.md
+++ b/docs/integrations/hydra.md
@@ -26,10 +26,10 @@ The agent executes the code with the modifications you made in the UI, even over
 Clone your experiment, then modify your Hydra parameters via the UI in one of the following ways:
 * Modify the OmegaConf directly:
-  1. In the experiment’s **CONFIGURATION > HYPERPARAMETERS > HYDRA** section, set `_allow_omegaconf_edit_` to `True` 
-  1. In the experiment’s **CONFIGURATION > CONFIGURATION OBJECTS > OmegaConf** section, modify the OmegaConf values 
+  1. In the experiment’s **CONFIGURATION > HYPERPARAMETERS > HYDRA** section, set `_allow_omegaconf_edit_` to `True`
+  1. In the experiment’s **CONFIGURATION > CONFIGURATION OBJECTS > OmegaConf** section, modify the OmegaConf values
 * Add an experiment hyperparameter:
-  1. In the experiment’s **CONFIGURATION > HYPERPARAMETERS > HYDRA** section, make sure `_allow_omegaconf_edit_` is set 
+  1. In the experiment’s **CONFIGURATION > HYPERPARAMETERS > HYDRA** section, make sure `_allow_omegaconf_edit_` is set
      to `False`
 1. In the same section, click `Edit`, which gives you the option to add parameters. Input parameters from the
 OmegaConf that you want to modify using dot notation.
 For example, if your OmegaConf looks like this:
diff --git a/docs/integrations/yolov5.md b/docs/integrations/yolov5.md
index 0c198487..ca039725 100644
--- a/docs/integrations/yolov5.md
+++ b/docs/integrations/yolov5.md
@@ -142,7 +142,7 @@ New dataset created id=
 ```
 
 ### Run Training Using a ClearML Dataset
-Now that you have a ClearML dataset, you can very simply use it to train custom YOLOv5 models: 
+Now that you have a ClearML dataset, you can very simply use it to train custom YOLOv5 models:
 
 ```commandline
 python train.py --img 640 --batch 16 --epochs 3 --data clearml:// --weights yolov5s.pt --cache
diff --git a/docs/release_notes/ver_1_12.md b/docs/release_notes/ver_1_12.md
index 63193278..cec43b87 100644
--- a/docs/release_notes/ver_1_12.md
+++ b/docs/release_notes/ver_1_12.md
@@ -12,17 +12,16 @@ the instructions [here](https://github.com/allegroai/clearml/tree/master/docs/er
 :::
 
 **New Features**
-* Add `include_archive` parameter to `Dataset.list_datasets()`: include archived datasets in list [ClearML GitHub issue #1069](https://github.com/allegroai/clearml/issues/1069)
+* Add `include_archive` parameter to `Dataset.list_datasets()`: include archived datasets in list [ClearML GitHub issue #1067](https://github.com/allegroai/clearml/issues/1067)
 * Add support to specify the multipart chunk size and threshold using the `aws.boto3.multipart_chunksize` and
-`aws.boto3.multipart_threshold` configuration options in the clearml.conf [ClearML GitHub issue #1059](https://github.com/allegroai/clearml/issues/1059)
+`aws.boto3.multipart_threshold` configuration options in the clearml.conf [ClearML GitHub issue #1058](https://github.com/allegroai/clearml/issues/1058)
 * Add `PipelineController.get_pipeline()` for retrieving previously run pipelines.
 
 **Bug Fixes**
-* Fix `continue_last_task=0` is ignored in pipelines run with `retry_on_failure` [ClearML GitHub issue #1054](https://github.com/allegroai/clearml/issues/1054)
-* Fix AWS driver issues: [ClearML GitHub issue #1000](https://github.com/allegroai/clearml/issues/1000)
+* Fix AWS driver issues: [ClearML GitHub PR #1000](https://github.com/allegroai/clearml/pull/1000)
   * Fix credential authentication failure when attempting to use token
   * Fix instantiation within VPC without AvailabilityZones
-* Fix Error accessing GCP artifacts when using special characters in task name [ClearML GitHub issue #1051](https://github.com/allegroai/clearml/issues/1051)
+* Fix `continue_last_task=0` is ignored in pipelines run with `retry_on_failure` [ClearML GitHub issue #1054](https://github.com/allegroai/clearml/issues/1054)
 * Fix `Task.connect_configuration()` doesn't handle dictionaries with special characters
 * Fix pipeline steps created with `PipelineDecorator` aren't cached
 * Fix `Task.get_by_name()` doesn't return the most recent task when multiple tasks have same name