Small edits (#865)

pollfly 2024-07-01 10:07:19 +03:00 committed by GitHub
parent f4457456dd
commit d7a713d0be
No known key found for this signature in database
GPG Key ID: B5690EEEBB952194
16 changed files with 258 additions and 220 deletions


@@ -63,7 +63,9 @@ Use the following JSON format for each parameter:
}
```
The following are the parameter type options and their corresponding fields:
- `LogUniformParameterRange`
  - `"min_value": float` - The minimum exponent sample to use for logarithmic uniform random sampling
  - `"max_value": float` - The maximum exponent sample to use for logarithmic uniform random sampling
  - `"base": Optional[float]` - The base used to raise the sampled exponent. Default: `10`
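As a rough illustration of what these fields mean (a sketch of logarithmic uniform sampling, not the application's implementation), an exponent is drawn uniformly between `min_value` and `max_value`, and `base` is raised to it:

```python
import random

def sample_log_uniform(min_value, max_value, base=10):
    # Draw an exponent uniformly between min_value and max_value,
    # then raise the base to it
    exponent = random.uniform(min_value, max_value)
    return base ** exponent

# e.g. sampling a learning rate between 10**-4 and 10**-1
lr = sample_log_uniform(-4, -1)
```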


@@ -102,7 +102,7 @@ hyperparameters. Passing `alias=<dataset_alias_string>` stores the dataset's ID
`dataset_alias_string` parameter in the experiment's **CONFIGURATION > HYPERPARAMETERS > Datasets** section. This way
you can easily track which dataset the task is using.
[`Dataset.get_local_copy()`](../../references/sdk/dataset.md#get_local_copy) returns a path to the cached,
downloaded dataset. The dataset path is then passed to PyTorch's `datasets` object.
The script then trains a neural network to classify images using the dataset created above.


@@ -53,29 +53,28 @@ Modify the data folder:
1. Add a file to the `data_samples` folder.<br/>
   Run `echo "data data data" > data_samples/new_data.txt` (this will create the file `new_data.txt` and put it in the `data_samples` folder)
1. Repeat the process of creating a new dataset with the previous one as its parent, and syncing the folder.
   ```bash
   clearml-data sync --project datasets --name second_ds --parents a1ddc8b0711b4178828f6c6e6e994b7c --folder data_samples
   ```
   Expected response:
   ```
   clearml-data - Dataset Management & Versioning CLI
   Creating a new dataset:
   New dataset created id=0992dd6bae6144388e0f2ef131d9724a
   Syncing dataset id 0992dd6bae6144388e0f2ef131d9724a to local folder data_samples
   Generating SHA2 hash for 6 files
   Hash generation completed
   Sync completed: 0 files removed, 2 added / modified
   Finalizing dataset
   Pending uploads, starting dataset upload to https://files.community.clear.ml
   Uploading compressed dataset changes (2 files, total 742 bytes) to https://files.community.clear.ml
   Upload completed (742 bytes)
   2021-05-04 10:05:42,353 - clearml.Task - INFO - Waiting to finish uploads
   2021-05-04 10:05:43,106 - clearml.Task - INFO - Finished uploading
   Dataset closed and finalized
   ```
   See that 2 files were added or modified, just as expected!
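The summary line `Sync completed: 0 files removed, 2 added / modified` comes from comparing the synced folder against the parent dataset's file list. A toy sketch of that comparison over hypothetical file-to-hash maps (illustrative only, not `clearml-data`'s actual implementation):

```python
def sync_summary(parent_files, local_files):
    # Files whose content hash is new or changed relative to the parent dataset
    added_or_modified = [f for f, h in local_files.items() if parent_files.get(f) != h]
    # Files the parent tracked that are gone from the local folder
    removed = [f for f in parent_files if f not in local_files]
    return len(removed), len(added_or_modified)

# Hypothetical states: one file modified, one file added, none removed
parent = {"a.jpg": "h1", "data.csv": "h2"}
local = {"a.jpg": "h1", "data.csv": "h2-changed", "new_data.txt": "h3"}
removed, changed = sync_summary(parent, local)
```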


@@ -107,12 +107,13 @@ Using ClearML Data, you can create child datasets that inherit the content of ot
   ```bash
   clearml-data create --project datasets --name HelloDataset-improved --parents 24d05040f3e14fbfbed8edb1bf08a88c
   ```
   :::note
   You'll need to input the Dataset ID you received when you created the dataset above.
   :::
1. Add a new file.
   * Create a new file: `echo "data data data" > new_data.txt`
   * Now add the file to the dataset:
   ```bash


@@ -46,4 +46,4 @@ title: Windows
docker-compose -f c:\opt\clearml\docker-compose-win10.yml up -d
```
If issues arise during your upgrade, see the FAQ page, [How do I fix Docker upgrade errors?](../faq.md#common-docker-upgrade-errors).


@@ -117,12 +117,15 @@ output to the console, when a Python experiment script is run.
For example, when a new ClearML Python Package version is available, the notification is:
```
CLEARML new package available: UPGRADE to vX.Y.Z is recommended!
```
When a new ClearML Server version is available, the notification is:
```
CLEARML-SERVER new version available: upgrade to vX.Y is recommended!
```
<br/>
@@ -183,8 +186,7 @@ For more information about `Task` class methods, see the [Task Class](fundamenta
#### Can I store the model configuration file as well? <a id="store-model-configuration"></a>
Yes! Use [`Task.connect_configuration()`](references/sdk/task.md#connect_configuration):
```python
Task.current_task().connect_configuration("a very long text with the configuration file's content")
@@ -240,6 +242,7 @@ To replace the URL of each model, execute the following commands:
```
1. Create the following script inside the Docker shell (make sure to also replace the URL protocol prefixes if you aren't using `s3`):
   ```bash
   cat <<'EOT' >> script.js
   db.model.find({uri:{$regex:/^s3/}}).forEach(function(e,i) {
@@ -248,11 +251,13 @@ To replace the URL of each model, execute the following commands:
   EOT
   ```
   Make sure to replace `<old-bucket-name>` and `<new-bucket-name>`.
1. Run the script against the backend DB:
   ```bash
   mongo backend script.js
   ```
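The script's core operation is a plain string rewrite of each stored model URI; sketched in Python for clarity (the helper below is illustrative, not part of ClearML or MongoDB):

```python
def relocate_uri(uri, old_bucket, new_bucket):
    # Mirrors e.uri.replace("s3://<old-bucket-name>/", "s3://<new-bucket-name>/")
    return uri.replace(f"s3://{old_bucket}/", f"s3://{new_bucket}/")

new_uri = relocate_uri("s3://my-old-bucket/models/model.pkl", "my-old-bucket", "my-new-bucket")
```

URIs with a different protocol prefix are left untouched, which is why the instructions say to also adjust the protocol if you aren't using `s3`.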
<br/>
#### Models are not accessible from the UI after I moved them (different bucket / server). How do I fix this? <a id="relocate_models"></a>
@@ -342,7 +347,9 @@ ClearML monitors your Python process. When the process exits properly, ClearML c
This issue was resolved in Trains v0.9.2. Upgrade to ClearML by executing the following command:
```
pip install -U clearml
```
<a id="ssl-connection-error"></a>
@@ -352,7 +359,7 @@ This issue was resolved in Trains v0.9.2. Upgrade to ClearML by executing the fo
Your firewall may be preventing the connection. Try one of the following solutions:
* Direct Python `requests` to use the enterprise certificate file by setting the OS environment variables `CURL_CA_BUNDLE` or `REQUESTS_CA_BUNDLE`. For a detailed discussion of this topic, see [https://stackoverflow.com/questions/48391750/disable-python-requests-ssl-validation-for-an-imported-module](https://stackoverflow.com/questions/48391750/disable-python-requests-ssl-validation-for-an-imported-module).
* Disable certificate verification
  :::warning
@@ -729,26 +736,26 @@ To fix this, the registered URL of each debug image and/or artifact needs to be
1. Open bash in the mongo DB docker container:
   ```bash
   sudo docker exec -it clearml-mongo /bin/bash
   ```
1. Inside the docker shell, create the following script. Make sure to replace `<old-bucket-name>` and `<new-bucket-name>`,
   as well as the URL protocol prefixes if you aren't using `s3`.
   ```bash
   cat <<'EOT' >> script.js
   db.model.find({uri:{$regex:/^s3/}}).forEach(function(e,i) {
     e.uri = e.uri.replace("s3://<old-bucket-name>/","s3://<new-bucket-name>/");
     db.model.save(e);});
   EOT
   ```
1. Run the script against the backend DB:
   ```bash
   mongo backend script.js
   ```
## Jupyter
@@ -763,21 +770,27 @@ Yes! You can run ClearML in Jupyter Notebooks using either of the following:
**Option 1: Install ClearML on your Jupyter host machine** <a id="opt1"></a>
1. Connect to your Jupyter host machine.
1. Install the ClearML Python Package:
   ```
   pip install clearml
   ```
1. Run the ClearML setup wizard:
   ```
   clearml-init
   ```
1. In your Jupyter Notebook, you can now use ClearML.
**Option 2: Install ClearML in your Jupyter Notebook** <a id="opt2"></a>
1. Install the ClearML Python Package:
   ```
   pip install clearml
   ```
1. Get ClearML credentials. Open the ClearML Web UI in a browser. On the **SETTINGS > WORKSPACE** page, click **Create new credentials**.
   The **JUPYTER NOTEBOOK** tab shows the commands required to configure your notebook (a copy-to-clipboard action is available on hover).
@@ -822,7 +835,9 @@ To override the default configuration file location, set the `CLEARML_CONFIG_FIL
For example:
```
export CLEARML_CONFIG_FILE="/home/user/myclearml.conf"
```
<br/>
@@ -830,9 +845,11 @@ For example:
To override your configuration file / defaults, set the following OS environment variables:
```
export CLEARML_API_ACCESS_KEY="key_here"
export CLEARML_API_SECRET_KEY="secret_here"
export CLEARML_API_HOST="http://localhost:8008"
```
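The precedence these variables create can be sketched as: use the environment value when it is set, otherwise fall back to the configuration-file default (an illustration of the override rule, not the SDK's actual lookup code):

```python
import os

def effective_setting(env_var, file_default):
    # An OS environment variable overrides the value from clearml.conf
    return os.environ.get(env_var, file_default)

os.environ["CLEARML_API_HOST"] = "http://localhost:8008"
api_host = effective_setting("CLEARML_API_HOST", "https://api.clear.ml")
```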
<br/>
@@ -864,9 +881,11 @@ Set the OS environment variable `CLEARML_LOG_ENVIRONMENT` with the variables you
If you joined the ClearML Hosted Service and ran a script, but your experiment does not appear in the Web UI, you may not have configured ClearML for the hosted service. Run the ClearML setup wizard. It will request your hosted service ClearML credentials and create the ClearML configuration you need.
```
pip install clearml
clearml-init
```
## ClearML Server Deployment
@@ -913,7 +932,9 @@ see [Deploying ClearML Server: Kubernetes using Helm](deploying_clearml/clearml_
If you are using SELinux, run the following command (see this [discussion](https://stackoverflow.com/a/24334000)):
```
chcon -Rt svirt_sandbox_file_t /opt/clearml
```
## ClearML Server Configuration
@@ -958,11 +979,15 @@ For example:
To resolve the Docker error:
```
... The container name "/trains-???" is already in use by ...
```
try removing deprecated images:
```
docker rm -f $(docker ps -a -q)
```
<br/>
@@ -1042,7 +1067,9 @@ Do the following:
1. Allow bypassing of your proxy server to `localhost`
   using a system environment variable, for example:
   ```
   NO_PROXY=localhost
   ```
1. If a ClearML configuration file (`clearml.conf`) exists, delete it.
1. Open a terminal session.


@@ -64,72 +64,72 @@ optimization.
1. Import ClearML's automation modules:
   ```python
   from clearml.automation import UniformParameterRange, UniformIntegerParameterRange
   from clearml.automation import HyperParameterOptimizer
   from clearml.automation.optuna import OptimizerOptuna
   ```
1. Initialize the Task, which will be stored in ClearML Server when the code runs. After the code runs at least once,
   it can be reproduced, and the parameters can be tuned:
   ```python
   from clearml import Task
   task = Task.init(
       project_name='Hyper-Parameter Optimization',
       task_name='Automatic Hyper-Parameter Optimization',
       task_type=Task.TaskTypes.optimizer,
       reuse_last_task_id=False
   )
   ```
1. Define the optimization configuration and resources budget:
   ```python
   optimizer = HyperParameterOptimizer(
       # specifying the task to be optimized, task must be in system already so it can be cloned
       base_task_id=TEMPLATE_TASK_ID,
       # setting the hyperparameters to optimize
       hyper_parameters=[
           UniformIntegerParameterRange('number_of_epochs', min_value=2, max_value=12, step_size=2),
           UniformIntegerParameterRange('batch_size', min_value=2, max_value=16, step_size=2),
           UniformParameterRange('dropout', min_value=0, max_value=0.5, step_size=0.05),
           UniformParameterRange('base_lr', min_value=0.00025, max_value=0.01, step_size=0.00025),
       ],
       # setting the objective metric we want to maximize/minimize
       objective_metric_title='accuracy',
       objective_metric_series='total',
       objective_metric_sign='max',
       # setting optimizer
       optimizer_class=OptimizerOptuna,
       # configuring optimization parameters
       execution_queue='default',
       max_number_of_concurrent_tasks=2,
       optimization_time_limit=60.,
       compute_time_limit=120,
       total_max_jobs=20,
       min_iteration_per_job=15000,
       max_iteration_per_job=150000,
   )
   ```
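To get a feel for the search space defined above, each stepped range enumerates a finite set of candidate values. A sketch of that enumeration (illustrative only, not ClearML's internal sampling code):

```python
def enumerate_range(min_value, max_value, step_size):
    # Candidates from min_value to max_value (inclusive) in step_size increments
    count = int(round((max_value - min_value) / step_size)) + 1
    return [min_value + i * step_size for i in range(count)]

epochs = enumerate_range(2, 12, 2)        # 6 candidates: 2, 4, 6, 8, 10, 12
dropouts = enumerate_range(0, 0.5, 0.05)  # 11 candidates: 0.0, 0.05, ..., 0.5
```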
:::tip Locating Task ID
To locate the base task's ID, go to the task's info panel in the [WebApp](../webapp/webapp_overview.md). The ID appears
in the task header.
:::
:::tip Multi-objective Optimization
If you are using the Optuna framework (see [Supported Optimizers](#supported-optimizers)), you can list multiple optimization objectives.
When doing so, make sure the `objective_metric_title`, `objective_metric_series`, and `objective_metric_sign` lists
are the same length. Each title will be matched to its respective series and sign.
For example, the code below sets two objectives: to minimize the `validation/loss` metric and to maximize the `validation/accuracy` metric.
```python
objective_metric_title=["validation", "validation"]
objective_metric_series=["loss", "accuracy"]
objective_metric_sign=["min", "max"]
```
:::
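The length-matching rule above is positional, like `zip` over the three lists; a quick sketch of how the objectives pair up:

```python
objective_metric_title = ["validation", "validation"]
objective_metric_series = ["loss", "accuracy"]
objective_metric_sign = ["min", "max"]

# Each title pairs with the series and sign at the same index
objectives = list(zip(objective_metric_title, objective_metric_series, objective_metric_sign))
```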
## Optimizer Execution Options


@@ -16,8 +16,8 @@ The script does the following:
* Hyperparameters - Hyperparameters created in each subprocess Task are added to the main Task's hyperparameters.
  Each Task in a subprocess references the main Task by calling [`Task.current_task()`](../../../references/sdk/task.md#taskcurrent_task),
  which always returns the main Task.
1. When the script runs, it creates an experiment named `test torch distributed` in the `examples` project in the **ClearML Web UI**.


@@ -25,23 +25,23 @@ Integrate ClearML with the following steps:
1. Create a `ClearMLLogger` object. When the code runs, it connects to the ClearML backend, and creates a task in ClearML
   (see ClearMLLogger's parameters [below](#parameters)).
   ```python
   from ignite.contrib.handlers.clearml_logger import ClearMLLogger
   clearml_logger = ClearMLLogger(project_name="examples", task_name="ignite")
   ```
1. Attach helper handlers to the `ClearMLLogger` object.
   For example, attach the `OutputHandler` to log training loss at each iteration:
   ```python
   clearml_logger.attach(
       trainer,
       log_handler=OutputHandler(tag="training",
                                 output_transform=lambda loss: {"loss": loss}),
       event_name=Events.ITERATION_COMPLETED
   )
   ```
### Parameters
The following are the `ClearMLLogger` parameters:


@@ -95,13 +95,16 @@ Now, let's execute some code in the remote session!
1. In the first cell of the notebook, clone the [ClearML repository](https://github.com/allegroai/clearml):
   ```
   !git clone https://github.com/allegroai/clearml.git
   ```
1. In the second cell of the notebook, run this [script](https://github.com/allegroai/clearml/blob/master/examples/frameworks/keras/keras_tensorboard.py)
   from the cloned repository:
   ```
   %run clearml/examples/frameworks/keras/keras_tensorboard.py
   ```
   Look in the script, and notice that it makes use of ClearML, Keras, and TensorFlow, but you don't need to install these
   packages in Jupyter, because you specified them in the `--packages` flag of `clearml-session`.


@@ -36,9 +36,9 @@ the function will be automatically logged as required packages for the pipeline
1. Set an execution queue through which pipeline steps that did not explicitly specify an execution queue will be
   executed. These pipeline steps will be enqueued for execution in this queue.
   ```python
   pipe.set_default_execution_queue('default')
   ```
1. Add a pipeline level parameter that can be referenced from any step in the pipeline (see `step_one` below).
   ```python


@@ -56,22 +56,22 @@ myDataset_2 = DatasetVersion.create_new_dataset(
To raise a `ValueError` exception if the Dataset exists, specify the `raise_if_exists` parameter as `True`.
* With `Dataset.create`:
  ```python
  try:
      myDataset = Dataset.create(dataset_name='myDataset One', raise_if_exists=True)
  except ValueError:
      print('Dataset exists.')
  ```
* Or with `DatasetVersion.create_new_dataset`:
  ```python
  try:
      myDataset = DatasetVersion.create_new_dataset(dataset_name='myDataset Two', raise_if_exists=True)
  except ValueError:
      print('Dataset exists.')
  ```
Additionally, create a Dataset with tags and a description.


@@ -324,7 +324,7 @@ myDatasetVersion.update_frames(frames)
### Deleting Frames
To delete a SingleFrame, use [`DatasetVersion.delete_frames()`](../references/hyperdataset/hyperdatasetversion.md#delete_frames):
```python
frames = []


@@ -77,48 +77,48 @@ Integrate ClearML with the following steps:
   )
   ```
1. Attach the `ClearMLLogger` object to helper handlers to log experiment outputs. Ignite supports the following helper handlers for ClearML:
   * **ClearMLSaver** - Saves input snapshots as ClearML artifacts.
   * **GradsHistHandler** and **WeightsHistHandler** - Logs the model's gradients and weights respectively as histograms.
   * **GradsScalarHandler** and **WeightsScalarHandler** - Logs gradients and weights respectively as scalars.
   * **OptimizerParamsHandler** - Logs optimizer parameters.
   ```python
   # Attach the logger to the trainer to log model's weights norm
   clearml_logger.attach(
       trainer, log_handler=WeightsScalarHandler(model), event_name=Events.ITERATION_COMPLETED(every=100)
   )
   # Attach the logger to the trainer to log model's weights as a histogram
   clearml_logger.attach(trainer, log_handler=WeightsHistHandler(model), event_name=Events.EPOCH_COMPLETED(every=100))
   # Attach the logger to the trainer to log model's gradients as scalars
   clearml_logger.attach(
       trainer, log_handler=GradsScalarHandler(model), event_name=Events.ITERATION_COMPLETED(every=100)
   )
   # Attach the logger to the trainer to log model's gradients as a histogram
   clearml_logger.attach(trainer, log_handler=GradsHistHandler(model), event_name=Events.EPOCH_COMPLETED(every=100))
   handler = Checkpoint(
       {"model": model},
       ClearMLSaver(),
       n_saved=1,
       score_function=lambda e: e.state.metrics["accuracy"],
       score_name="val_acc",
       filename_prefix="best",
       global_step_transform=global_step_from_engine(trainer),
   )
   validation_evaluator.add_event_handler(Events.EPOCH_COMPLETED, handler)
   # Attach the logger to the trainer to log optimizer's parameters, e.g. learning rate at each iteration
   clearml_logger.attach(
       trainer,
       log_handler=OptimizerParamsHandler(optimizer),
       event_name=Events.ITERATION_STARTED
   )
   ```
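The `score_function` above receives the evaluator engine and returns the value used to rank checkpoints. A minimal mock (using `SimpleNamespace` to stand in for an Ignite engine; this is an illustration of the call shape, not Ignite itself) shows what it does:

```python
from types import SimpleNamespace

# The same score_function passed to Checkpoint above
score_function = lambda e: e.state.metrics["accuracy"]

# A stand-in for the evaluator engine Ignite would pass in
engine = SimpleNamespace(state=SimpleNamespace(metrics={"accuracy": 0.91}))
val_acc = score_function(engine)
```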
Visualize all the captured information in the experiment's page in ClearML's [WebApp](#webapp).


@@ -21,20 +21,20 @@ Integrate ClearML into your Keras Tuner optimization script by doing the followi
* Specify `ClearmlTunerLogger` as the Keras Tuner logger:
  ```python
  from clearml.external.kerastuner import ClearmlTunerLogger
  import keras_tuner as kt
  # Create tuner object
  tuner = kt.Hyperband(
      build_model,
      project_name='kt examples',
      logger=ClearmlTunerLogger(),  # specify ClearmlTunerLogger
      objective='val_accuracy',
      max_epochs=10,
      hyperband_iterations=6
  )
  ```
And that's it! This creates a [ClearML Task](../fundamentals/task.md) which captures:
* Output Keras model


@@ -320,11 +320,15 @@ To create block code, use one of the following options:
* Surround code with "fences"--three backticks (<code>```</code>):
  ~~~
  ```
  from clearml import Task
  t = Task.init(project_name='My project', task_name='Base')
  ```
  ~~~
Both of these options will be rendered as:
@@ -338,11 +342,13 @@ t = Task.init(project_name='My project', task_name='Base')
To display syntax highlighting, specify the coding language after the first fence (e.g. <code>\```python</code>, <code>\```json</code>, <code>\```js</code>, etc.):
~~~
```python
from clearml import Task
t = Task.init(project_name='My project', task_name='Base')
```
~~~
The rendered output should look like this: