Small edits (#865)

pollfly 2024-07-01 10:07:19 +03:00 committed by GitHub
parent f4457456dd
commit d7a713d0be
16 changed files with 258 additions and 220 deletions

View File

@ -63,7 +63,9 @@ Use the following JSON format for each parameter:
}
```
The following are the parameter type options and their corresponding fields:
- `LogUniformParameterRange`
- `"min_value": float` - The minimum exponent sample to use for logarithmic uniform random sampling
- `"max_value": float` - The maximum exponent sample to use for logarithmic uniform random sampling
- `"base": Optional[float]` - The base used to raise the sampled exponent. Default: `10`

View File

@ -102,7 +102,7 @@ hyperparameters. Passing `alias=<dataset_alias_string>` stores the dataset's ID
`dataset_alias_string` parameter in the experiment's **CONFIGURATION > HYPERPARAMETERS > Datasets** section. This way
you can easily track which dataset the task is using.
[`Dataset.get_local_copy()`](../../references/sdk/dataset.md#get_local_copy) returns a path to the cached,
downloaded dataset. The dataset path is then passed to PyTorch's `datasets` object.
The script then trains a neural network to classify images using the dataset created above.
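The relevant calls look roughly like this (a minimal sketch; the dataset name is illustrative):

```python
from clearml import Dataset

# Passing alias= stores the dataset ID under the task's
# CONFIGURATION > HYPERPARAMETERS > Datasets section
dataset = Dataset.get(dataset_name="my_dataset", alias="my_dataset")
dataset_path = dataset.get_local_copy()  # path to the cached, downloaded copy
```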

View File

@ -53,29 +53,28 @@ Modify the data folder:
1. Add a file to the `data_samples` folder.<br/>
Run `echo "data data data" > data_samples/new_data.txt` (this creates the file `new_data.txt` and puts it in the `data_samples` folder).
Repeat the process of creating a new dataset with the previous one as its parent, and syncing the folder.
```bash
clearml-data sync --project datasets --name second_ds --parents a1ddc8b0711b4178828f6c6e6e994b7c --folder data_samples
```
Expected response:
```
clearml-data - Dataset Management & Versioning CLI
Creating a new dataset:
New dataset created id=0992dd6bae6144388e0f2ef131d9724a
Syncing dataset id 0992dd6bae6144388e0f2ef131d9724a to local folder data_samples
Generating SHA2 hash for 6 files
Hash generation completed
Sync completed: 0 files removed, 2 added / modified
Finalizing dataset
Pending uploads, starting dataset upload to https://files.community.clear.ml
Uploading compressed dataset changes (2 files, total 742 bytes) to https://files.community.clear.ml
Upload completed (742 bytes)
2021-05-04 10:05:42,353 - clearml.Task - INFO - Waiting to finish uploads
2021-05-04 10:05:43,106 - clearml.Task - INFO - Finished uploading
Dataset closed and finalized
```
See that 2 files were added or modified, just as expected!
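To double-check, you can list the new dataset version's contents (a sketch, using the dataset ID from the output above):

```bash
clearml-data list --id 0992dd6bae6144388e0f2ef131d9724a
```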

View File

@ -107,12 +107,13 @@ Using ClearML Data, you can create child datasets that inherit the content of ot
```bash
clearml-data create --project datasets --name HelloDataset-improved --parents 24d05040f3e14fbfbed8edb1bf08a88c
```
:::note
You'll need to input the Dataset ID you received when you created the dataset above.
:::
1. Add a new file.
* Create a new file: `echo "data data data" > new_data.txt`
* Now add the file to the dataset:
```bash

View File

@ -46,4 +46,4 @@ title: Windows
docker-compose -f c:\opt\clearml\docker-compose-win10.yml up -d
```
If issues arise during your upgrade, see the FAQ page, [How do I fix Docker upgrade errors?](../faq.md#common-docker-upgrade-errors)

View File

@ -117,12 +117,15 @@ output to the console, when a Python experiment script is run.
For example, when a new ClearML Python Package version is available, the notification is:
```
CLEARML new package available: UPGRADE to vX.Y.Z is recommended!
```
When a new ClearML Server version is available, the notification is:
```
CLEARML-SERVER new version available: upgrade to vX.Y is recommended!
```
<br/>
@ -183,8 +186,7 @@ For more information about `Task` class methods, see the [Task Class](fundamenta
#### Can I store the model configuration file as well? <a id="store-model-configuration"></a>
Yes! Use [`Task.connect_configuration()`](references/sdk/task.md#connect_configuration):
```python
Task.current_task().connect_configuration("a very long text with the configuration file's content")
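# connect_configuration() also accepts a dict or a configuration file path
# (a sketch; the path below is illustrative). When the task runs remotely,
# the returned path points at the configuration fetched from the server:
config_path = Task.current_task().connect_configuration("/path/to/config.yaml")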
@ -240,6 +242,7 @@ To replace the URL of each model, execute the following commands:
```
1. Create the following script inside the Docker shell (adjust the URL protocol as well if you aren't using `s3`):
```bash
cat <<EOT >> script.js
db.model.find({uri:{$regex:/^s3/}}).forEach(function(e,i) {
@ -248,11 +251,13 @@ To replace the URL of each model, execute the following commands:
EOT
```
Make sure to replace `<old-bucket-name>` and `<new-bucket-name>`.
1. Run the script against the backend DB:
```bash
mongo backend script.js
```
<br/>
#### Models are not accessible from the UI after I moved them (different bucket / server). How do I fix this? <a id="relocate_models"></a>
@ -342,7 +347,9 @@ ClearML monitors your Python process. When the process exits properly, ClearML c
This issue was resolved in Trains v0.9.2. Upgrade to ClearML by executing the following command:
```
pip install -U clearml
```
<a id="ssl-connection-error"></a>
@ -352,7 +359,7 @@ This issue was resolved in Trains v0.9.2. Upgrade to ClearML by executing the fo
Your firewall may be preventing the connection. Try one of the following solutions:
* Direct python "requests" to use the enterprise certificate file by setting the OS environment variables `CURL_CA_BUNDLE` or `REQUESTS_CA_BUNDLE`. For a detailed discussion of this topic, see [https://stackoverflow.com/questions/48391750/disable-python-requests-ssl-validation-for-an-imported-module](https://stackoverflow.com/questions/48391750/disable-python-requests-ssl-validation-for-an-imported-module).
* Disable certificate verification
:::warning
@ -725,30 +732,30 @@ To fix this, the registered URL of each debug image and/or artifact needs to be
}' \
```
* For **artifacts**, you can do the following:
1. Open bash in the mongo DB docker container:
```bash
sudo docker exec -it clearml-mongo /bin/bash
```
1. Inside the docker shell, create the following script. Make sure to replace `<old-bucket-name>` and `<new-bucket-name>`,
as well as the URL protocol prefixes if you aren't using `s3`.
```bash
cat <<EOT >> script.js
db.model.find({uri:{$regex:/^s3/}}).forEach(function(e,i) {
e.uri = e.uri.replace("s3://<old-bucket-name>/","s3://<new-bucket-name>/");
db.model.save(e);});
EOT
```
1. Run the script against the backend DB:
```bash
mongo backend script.js
```
## Jupyter
@ -763,22 +770,28 @@ Yes! You can run ClearML in Jupyter Notebooks using either of the following:
**Option 1: Install ClearML on your Jupyter host machine** <a id="opt1"></a>
1. Connect to your Jupyter host machine.
1. Install the ClearML Python Package:
```
pip install clearml
```
1. Run the ClearML setup wizard:
```
clearml-init
```
1. In your Jupyter Notebook, you can now use ClearML.
**Option 2: Install ClearML in your Jupyter Notebook** <a id="opt2"></a>
1. Install the ClearML Python Package:
```
pip install clearml
```
1. Get ClearML credentials. Open the ClearML Web UI in a browser. On the **SETTINGS > WORKSPACE** page, click **Create new credentials**.
The **JUPYTER NOTEBOOK** tab shows the commands required to configure your notebook (a copy to clipboard action is available on hover).
@ -822,7 +835,9 @@ To override the default configuration file location, set the `CLEARML_CONFIG_FIL
For example:
```
export CLEARML_CONFIG_FILE="/home/user/myclearml.conf"
```
<br/>
@ -830,9 +845,11 @@ For example:
To override your configuration file / defaults, set the following OS environment variables:
```
export CLEARML_API_ACCESS_KEY="key_here"
export CLEARML_API_SECRET_KEY="secret_here"
export CLEARML_API_HOST="http://localhost:8008"
```
<br/>
@ -864,9 +881,11 @@ Set the OS environment variable `CLEARML_LOG_ENVIRONMENT` with the variables you
If you joined the ClearML Hosted Service and ran a script, but your experiment does not appear in the Web UI, you may not have configured ClearML for the hosted service. Run the ClearML setup wizard. It will request your hosted service ClearML credentials and create the ClearML configuration you need:
```
pip install clearml
clearml-init
```
## ClearML Server Deployment
@ -913,7 +932,9 @@ see [Deploying ClearML Server: Kubernetes using Helm](deploying_clearml/clearml_
If you are using SELinux, run the following command (see this [discussion](https://stackoverflow.com/a/24334000)):
```
chcon -Rt svirt_sandbox_file_t /opt/clearml
```
## ClearML Server Configuration
@ -958,11 +979,15 @@ For example:
To resolve the Docker error:
```
... The container name "/trains-???" is already in use by ...
```
try removing the old containers:
```
$ docker rm -f $(docker ps -a -q)
```
<br/>
@ -1042,8 +1067,10 @@ Do the following:
1. Allow bypassing of your proxy server to `localhost`
using a system environment variable, for example:
```
NO_PROXY = localhost
```
1. If a ClearML configuration file (`clearml.conf`) exists, delete it.
1. Open a terminal session.
1. Set the system environment variable to `127.0.0.1` in the terminal session. For example:

View File

@ -64,72 +64,72 @@ optimization.
1. Import ClearML's automation modules:
```python
from clearml.automation import UniformParameterRange, UniformIntegerParameterRange
from clearml.automation import HyperParameterOptimizer
from clearml.automation.optuna import OptimizerOptuna
```
1. Initialize the Task, which will be stored in ClearML Server when the code runs. After the code runs at least once,
it can be reproduced, and the parameters can be tuned:
```python
from clearml import Task
task = Task.init(
project_name='Hyper-Parameter Optimization',
task_name='Automatic Hyper-Parameter Optimization',
task_type=Task.TaskTypes.optimizer,
reuse_last_task_id=False
)
```
1. Define the optimization configuration and resources budget:
```python
optimizer = HyperParameterOptimizer(
# specifying the task to be optimized, task must be in system already so it can be cloned
base_task_id=TEMPLATE_TASK_ID,
# setting the hyperparameters to optimize
hyper_parameters=[
UniformIntegerParameterRange('number_of_epochs', min_value=2, max_value=12, step_size=2),
UniformIntegerParameterRange('batch_size', min_value=2, max_value=16, step_size=2),
UniformParameterRange('dropout', min_value=0, max_value=0.5, step_size=0.05),
UniformParameterRange('base_lr', min_value=0.00025, max_value=0.01, step_size=0.00025),
],
# setting the objective metric we want to maximize/minimize
objective_metric_title='accuracy',
objective_metric_series='total',
objective_metric_sign='max',
# setting optimizer
optimizer_class=OptimizerOptuna,
# configuring optimization parameters
execution_queue='default',
max_number_of_concurrent_tasks=2,
optimization_time_limit=60.,
compute_time_limit=120,
total_max_jobs=20,
min_iteration_per_job=15000,
max_iteration_per_job=150000,
)
```
:::tip Locating Task ID
To locate the base task's ID, go to the task's info panel in the [WebApp](../webapp/webapp_overview.md). The ID appears
in the task header.
:::
:::tip Multi-objective Optimization
If you are using the Optuna framework (see [Supported Optimizers](#supported-optimizers)), you can list multiple optimization objectives.
When doing so, make sure the `objective_metric_title`, `objective_metric_series`, and `objective_metric_sign` lists
are the same length. Each title will be matched to its respective series and sign.
For example, the code below sets two objectives: to minimize the `validation/loss` metric and to maximize the `validation/accuracy` metric.
```python
objective_metric_title=["validation", "validation"]
objective_metric_series=["loss", "accuracy"]
objective_metric_sign=["min", "max"]
```
:::
## Optimizer Execution Options

View File

@ -16,8 +16,8 @@ The script does the following:
* Hyperparameters - Hyperparameters created in each subprocess Task are added to the main Task's hyperparameters.
Each Task in a subprocess references the main Task by calling [`Task.current_task()`](../../../references/sdk/task.md#taskcurrent_task),
which always returns the main Task.
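For instance, a subprocess might register its own values on the main Task (a minimal sketch; the rank value is illustrative):

```python
from clearml import Task

# Inside a subprocess spawned by the main experiment script:
main_task = Task.current_task()  # always returns the main process's Task
main_task.connect({"worker_rank": 0})  # added to the main Task's hyperparameters
```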
1. When the script runs, it creates an experiment named `test torch distributed` in the `examples` project in the **ClearML Web UI**.

View File

@ -25,23 +25,23 @@ Integrate ClearML with the following steps:
1. Create a `ClearMLLogger` object. When the code runs, it connects to the ClearML backend, and creates a task in ClearML
(see ClearMLLogger's parameters [below](#parameters)).
```python
from ignite.contrib.handlers.clearml_logger import ClearMLLogger
clearml_logger = ClearMLLogger(project_name="examples", task_name="ignite")
```
1. Attach helper handlers to the `ClearMLLogger` object.
For example, attach the `OutputHandler` to log training loss at each iteration:
```python
clearml_logger.attach(
trainer,
log_handler=OutputHandler(tag="training",
output_transform=lambda loss: {"loss": loss}),
event_name=Events.ITERATION_COMPLETED
)
```
### Parameters
The following are the `ClearMLLogger` parameters:

View File

@ -95,13 +95,16 @@ Now, let's execute some code in the remote session!
1. In the first cell of the notebook, clone the [ClearML repository](https://github.com/allegroai/clearml):
```
!git clone https://github.com/allegroai/clearml.git
```
1. In the second cell of the notebook, run this [script](https://github.com/allegroai/clearml/blob/master/examples/frameworks/keras/keras_tensorboard.py)
from the cloned repository:
```
%run clearml/examples/frameworks/keras/keras_tensorboard.py
```
Look in the script, and notice that it makes use of ClearML, Keras, and TensorFlow, but you don't need to install these
packages in Jupyter, because you specified them in the `--packages` flag of `clearml-session`.
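For reference, such a session could have been launched with those packages preinstalled, along these lines (a sketch; the package list is illustrative):

```bash
clearml-session --packages clearml keras tensorflow
```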

View File

@ -36,9 +36,9 @@ the function will be automatically logged as required packages for the pipeline
1. Set a default execution queue. Pipeline steps that do not explicitly specify an execution queue will be enqueued
   for execution in this queue.
```python
pipe.set_default_execution_queue('default')
```
1. Add a pipeline level parameter that can be referenced from any step in the pipeline (see `step_one` below).
```python

View File

@ -56,22 +56,22 @@ myDataset_2 = DatasetVersion.create_new_dataset(
To raise a `ValueError` exception if the Dataset exists, specify the `raise_if_exists` parameter as `True`.
* With `Dataset.create`:
```python
try:
myDataset = Dataset.create(dataset_name='myDataset One', raise_if_exists=True)
except ValueError:
print('Dataset exists.')
```
* Or with `DatasetVersion.create_new_dataset`:
```python
try:
myDataset = DatasetVersion.create_new_dataset(dataset_name='myDataset Two', raise_if_exists=True)
except ValueError:
print('Dataset exists.')
```
Additionally, you can create a Dataset with tags and a description.
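A sketch (reusing the `DatasetVersion` import from the examples above; tag values and description text are illustrative):

```python
myDataset = DatasetVersion.create_new_dataset(
    dataset_name='myDataset',
    tags=['One', 'Two'],               # dataset-level tags
    description='dataset description'  # free-text description
)
```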

View File

@ -324,7 +324,7 @@ myDatasetVersion.update_frames(frames)
### Deleting Frames
To delete a SingleFrame, use [`DatasetVersion.delete_frames()`](../references/hyperdataset/hyperdatasetversion.md#delete_frames):
```python
frames = []
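# A sketch of the rest of the call (the frame source is illustrative):
frame = SingleFrame(source='https://example.com/image.jpg')
frames.append(frame)
myDatasetVersion.delete_frames(frames)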

View File

@ -77,48 +77,48 @@ Integrate ClearML with the following steps:
)
```
1. Attach the `ClearMLLogger` object to helper handlers to log experiment outputs. Ignite supports the following helper handlers for ClearML:
* **ClearMLSaver** - Saves input snapshots as ClearML artifacts.
* **GradsHistHandler** and **WeightsHistHandler** - Logs the model's gradients and weights respectively as histograms.
* **GradsScalarHandler** and **WeightsScalarHandler** - Logs gradients and weights respectively as scalars.
* **OptimizerParamsHandler** - Logs optimizer parameters.
```python
# Attach the logger to the trainer to log model's weights norm
clearml_logger.attach(
trainer, log_handler=WeightsScalarHandler(model), event_name=Events.ITERATION_COMPLETED(every=100)
)
# Attach the logger to the trainer to log model's weights as a histogram
clearml_logger.attach(trainer, log_handler=WeightsHistHandler(model), event_name=Events.EPOCH_COMPLETED(every=100))
# Attach the logger to the trainer to log model's gradients as scalars
clearml_logger.attach(
trainer, log_handler=GradsScalarHandler(model), event_name=Events.ITERATION_COMPLETED(every=100)
)
# Attach the logger to the trainer to log model's gradients as a histogram
clearml_logger.attach(trainer, log_handler=GradsHistHandler(model), event_name=Events.EPOCH_COMPLETED(every=100))
handler = Checkpoint(
{"model": model},
ClearMLSaver(),
n_saved=1,
score_function=lambda e: e.state.metrics["accuracy"],
score_name="val_acc",
filename_prefix="best",
global_step_transform=global_step_from_engine(trainer),
)
validation_evaluator.add_event_handler(Events.EPOCH_COMPLETED, handler)
# Attach the logger to the trainer to log optimizer's parameters, e.g. learning rate at each iteration
clearml_logger.attach(
trainer,
log_handler=OptimizerParamsHandler(optimizer),
event_name=Events.ITERATION_STARTED
)
```
Visualize all the captured information in the experiment's page in ClearML's [WebApp](#webapp).

View File

@ -21,20 +21,20 @@ Integrate ClearML into your Keras Tuner optimization script by doing the followi
* Specify `ClearmlTunerLogger` as the Keras Tuner logger:
```python
from clearml.external.kerastuner import ClearmlTunerLogger
import keras_tuner as kt
# Create tuner object
tuner = kt.Hyperband(
build_model,
project_name='kt examples',
    logger=ClearmlTunerLogger(),  # specify ClearmlTunerLogger
objective='val_accuracy',
max_epochs=10,
hyperband_iterations=6
)
```
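You can then run the search as usual (a sketch; the training and validation variables are placeholders):

```python
tuner.search(x_train, y_train, epochs=10, validation_data=(x_val, y_val))
```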
And that's it! This creates a [ClearML Task](../fundamentals/task.md) which captures:
* Output Keras model

View File

@ -319,13 +319,17 @@ To create block code, use one of the following options:
```
* Surround code with "fences"--three backticks (<code>```</code>):
~~~
```
from clearml import Task
t = Task.init(project_name='My project', task_name='Base')
```
~~~
Both of these options will be rendered as:
```
@ -338,11 +342,13 @@ t = Task.init(project_name='My project', task_name='Base')
To display syntax highlighting, specify the coding language after the first fence (e.g. <code>\```python</code>, <code>\```json</code>, <code>\```js</code>, etc.):
~~~
```python
from clearml import Task
t = Task.init(project_name='My project', task_name='Base')
```
~~~
The rendered output should look like this: