diff --git a/docs/clearml_data/data_management_examples/data_man_simple.md b/docs/clearml_data/data_management_examples/data_man_simple.md index 20a9c621..b9733c09 100644 --- a/docs/clearml_data/data_management_examples/data_man_simple.md +++ b/docs/clearml_data/data_management_examples/data_man_simple.md @@ -7,7 +7,7 @@ In this example we'll create a simple dataset and demonstrate basic actions on i ## Prerequisites 1. First, make sure that you have cloned the [clearml](https://github.com/allegroai/clearml) repository. It contains all the needed files. -1. Open terminal and change directory to the cloned repository's examples folder +1. Open a terminal and change directory to the cloned repository's examples folder: ``` cd clearml/examples/reporting @@ -140,7 +140,7 @@ Using ClearML Data, you can create child datasets that inherit the content of ot 1 files removed ``` -1. Close and finalize the dataset +1. Close and finalize the dataset: ```bash clearml-data close diff --git a/docs/faq.md b/docs/faq.md index e2af06c0..1606771a 100644 --- a/docs/faq.md +++ b/docs/faq.md @@ -241,7 +241,8 @@ To replace the URL of each model, execute the following commands: sudo docker exec -it clearml-mongo /bin/bash ``` -1. Create the following script inside the Docker shell (as well as the URL protocol if you aren't using `s3`): +1. Create the following script inside the Docker shell. + Make sure to replace `` and `` (as well as the URL protocol if you aren't using `s3`). ```bash cat <<EOT >> script.js @@ -250,7 +251,6 @@ To replace the URL of each model, execute the following commands: db.model.save(e);}); EOT ``` - Make sure to replace `` and ``. 1. Run the script against the backend DB: @@ -273,7 +273,7 @@ To fix this, the registered URL of each model needs to be replaced with its curr sudo docker exec -it clearml-mongo /bin/bash ``` -1. Create the following script inside the Docker shell: +1. 
Create the following script inside the Docker shell (make sure to replace `` and ``, as well as the URL protocol prefixes if you aren't using S3): ```bash cat <<EOT >> script.js db.model.find({uri:{$regex:/^s3/}}).forEach(function(e,i) { @@ -281,7 +281,6 @@ To fix this, the registered URL of each model needs to be replaced with its curr db.model.save(e);}); EOT ``` - Make sure to replace `` and ``, as well as the URL protocol prefixes if you aren't using S3. 1. Run the script against the backend DB: ```bash diff --git a/docs/hyperdatasets/dataset.md b/docs/hyperdatasets/dataset.md index f5b6f13d..547d91c2 100644 --- a/docs/hyperdatasets/dataset.md +++ b/docs/hyperdatasets/dataset.md @@ -94,37 +94,37 @@ myDataset = DatasetVersion.get_current(dataset_name='myDataset') ### Deleting Datasets -Use the [`Dataset.delete`](../references/hyperdataset/hyperdataset.md#datasetdelete) method to delete a Dataset. +Use the [`Dataset.delete`](../references/hyperdataset/hyperdataset.md#datasetdelete) class method to delete a Dataset: -Delete an empty Dataset (no versions). +* Delete an empty Dataset (no versions): -```python -Dataset.delete(dataset_name='MyDataset', delete_all_versions=False, force=False) -``` + ```python + Dataset.delete(dataset_name='MyDataset', delete_all_versions=False, force=False) + ``` -Delete a Dataset containing only versions whose status is *Draft*. +* Delete a Dataset containing only versions whose status is *Draft*: -```python -Dataset.delete(dataset_name='MyDataset', delete_all_versions=True, force=False) -``` + ```python + Dataset.delete(dataset_name='MyDataset', delete_all_versions=True, force=False) + ``` -Delete a Dataset even if it contains versions whose status is *Published*. 
+* Delete a Dataset even if it contains versions whose status is *Published*: -```python -Dataset.delete(dataset_name='MyDataset', delete_all_versions=True, force=True) -``` + ```python + Dataset.delete(dataset_name='MyDataset', delete_all_versions=True, force=True) + ``` -Delete a Dataset and the sources associated with its deleted frames: +* Delete a Dataset and the sources associated with its deleted frames: -```python -Dataset.delete( - dataset_name='MyDataset', delete_all_versions=True, force=True, delete_sources=True -) -``` + ```python + Dataset.delete( + dataset_name='MyDataset', delete_all_versions=True, force=True, delete_sources=True + ) + ``` -This supports deleting sources located in AWS S3, GCP, and Azure Storage (not local storage). The `delete_sources` -parameter is ignored if `delete_all_versions` is `False`. You can view the deletion process' progress by passing -`show_progress=True` (`tqdm` required). + This supports deleting sources located in AWS S3, GCP, and Azure Storage (not local storage). The `delete_sources` + parameter is ignored if `delete_all_versions` is `False`. You can view the deletion process' progress by passing + `show_progress=True` (`tqdm` required). ### Tagging Datasets diff --git a/docs/integrations/ignite.md b/docs/integrations/ignite.md index b4e51705..87b0c29f 100644 --- a/docs/integrations/ignite.md +++ b/docs/integrations/ignite.md @@ -38,22 +38,22 @@ during training and validation. Integrate ClearML with the following steps: 1. Create a `ClearMLLogger` object: - ```python - from ignite.contrib.handlers.clearml_logger import * + ```python + from ignite.contrib.handlers.clearml_logger import * - clearml_logger = ClearMLLogger(task_name="ignite", project_name="examples") - ``` - - This creates a [ClearML Task](../fundamentals/task.md) called `ignite` in the `examples` project, which captures your - script's information, including Git details, uncommitted code, python environment. 
+    clearml_logger = ClearMLLogger(task_name="ignite", project_name="examples")
+    ```
+    
+    This creates a [ClearML Task](../fundamentals/task.md) called `ignite` in the `examples` project, which captures your
+    script's information, including Git details, uncommitted code, and Python environment.

-    You can also pass the following parameters to the `ClearMLLogger` object:
-    * `task_type` – The type of experiment (see [task types](../fundamentals/task.md#task-types)).
-    * `report_freq` – The histogram processing frequency (handles histogram values every X calls to the handler). Affects
-    `GradsHistHandler` and `WeightsHistHandler` (default: 100).
-    * `histogram_update_freq_multiplier` – The histogram report frequency (report first X histograms and once every X
-    reports afterwards) (default: 10).
-    * `histogram_granularity` - Histogram sampling granularity (default: 50).
+    You can also pass the following parameters to the `ClearMLLogger` object:
+    * `task_type` – The type of experiment (see [task types](../fundamentals/task.md#task-types)).
+    * `report_freq` – The histogram processing frequency (handles histogram values every X calls to the handler). Affects
+    `GradsHistHandler` and `WeightsHistHandler` (default: 100).
+    * `histogram_update_freq_multiplier` – The histogram report frequency (report first X histograms and once every X
+    reports afterwards) (default: 10).
+    * `histogram_granularity` – Histogram sampling granularity (default: 50).

1. Attach the `ClearMLLogger` to output handlers to log metrics:
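As context for reviewing the `docs/faq.md` hunks above: the `script.js` snippets rewrite each model's registered URI by plain string substitution, touching only documents whose `uri` matches `^s3`. A rough plain-Python sketch of that transformation (the bucket names below are hypothetical stand-ins for the deployment-specific placeholders, not values from the docs):

```python
import re

def rewrite_model_uri(uri: str, old_prefix: str, new_prefix: str) -> str:
    # Mirrors the mongo script's e.uri.replace(...): substitute the first
    # occurrence of the old address inside the stored URI.
    return uri.replace(old_prefix, new_prefix, 1)

# Hypothetical URIs; the real placeholders in the FAQ are deployment-specific.
uris = [
    "s3://old-bucket/models/model_1.pkl",
    "s3://old-bucket/models/model_2.pkl",
    "file:///tmp/model_3.pkl",
]

# Like db.model.find({uri:{$regex:/^s3/}}), only rewrite URIs starting with s3.
rewritten = [
    rewrite_model_uri(u, "old-bucket", "new-bucket") if re.match(r"^s3", u) else u
    for u in uris
]
```

In the FAQ itself, this logic runs inside the `clearml-mongo` container when the script is executed against the backend DB.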