diff --git a/docs/configs/clearml_conf.md b/docs/configs/clearml_conf.md
index f8721345..c1bc3a87 100644
--- a/docs/configs/clearml_conf.md
+++ b/docs/configs/clearml_conf.md
@@ -43,11 +43,13 @@ which supports environment variable reference. For example:
 
 ```editorconfig
-google.storage {
-    # # Default project and credentials file
-    # # Will be used when no bucket configuration is found
-    project: "clearml"
-    credentials_json: ${GOOGLE_APPLICATION_CREDENTIALS}
+sdk {
+    google.storage {
+        # # Default project and credentials file
+        # # Will be used when no bucket configuration is found
+        project: "clearml"
+        credentials_json: ${GOOGLE_APPLICATION_CREDENTIALS}
+    }
 }
 ```
 
diff --git a/docs/fundamentals/hyperparameters.md b/docs/fundamentals/hyperparameters.md
index 4dcaba99..888dfad1 100644
--- a/docs/fundamentals/hyperparameters.md
+++ b/docs/fundamentals/hyperparameters.md
@@ -97,7 +97,7 @@ ClearML provides methods to directly access a task's logged parameters.
 
 To get all of a task's parameters and properties (hyperparameters, configuration objects, and user properties), use
 the [`Task.get_parameters`](../references/sdk/task.md#get_parameters) method, which will return a dictionary with the parameters,
-including their sub-sections (see [WebApp sections](#webapp-interface) below).
+including their subsections (see [WebApp sections](#webapp-interface) below).
 
 ## WebApp Interface
 
@@ -108,7 +108,7 @@ The configuration panel is split into three sections according to type:
 - **Hyperparameters** - Individual parameters for configuration
 - **Configuration Objects** - Usually configuration files (JSON / YAML) or Python objects.
 
-These sections are further broken down into sub-sections based on how the parameters were logged (General / Args / TF_Define / Environment).
+These sections are further broken down into subsections based on how the parameters were logged (General / Args / TF_Define / Environment).
 
 ![Task hyperparameters sections](../img/hyperparameters_sections.png)
 
diff --git a/docs/getting_started/ds/best_practices.md b/docs/getting_started/ds/best_practices.md
index a6ba891d..291782db 100644
--- a/docs/getting_started/ds/best_practices.md
+++ b/docs/getting_started/ds/best_practices.md
@@ -7,7 +7,7 @@ While ClearML was designed to fit into any workflow, the practices described bel
 to preparing it to scale in the long term.
 
 :::important
-The below is only our opinion. ClearML was designed to fit into any workflow whether it conforms to our way or not!
+The following is only an opinion. ClearML is designed to accommodate any workflow whether it conforms to our way or not!
 :::
 
 ## Develop Locally
 
@@ -16,9 +16,9 @@ The below is only our opinion. ClearML was designed to fit into any workflow whe
 During early stages of model development, while code is still being modified heavily, this is the usual setup we'd expect to see used by data scientists:
 
-  - A local development machine, usually a laptop (and usually using only CPU) with a fraction of the dataset for faster
-    iterations - Use a local machine for writing, training, and debugging pipeline code.
-  - A workstation with a GPU, usually with a limited amount of memory for small batch-sizes - Use this workstation to train
+  - **Local development machine**, usually a laptop (and usually using only CPU) with a fraction of the dataset for faster
+    iterations. Use a local machine for writing, training, and debugging pipeline code.
+  - **Workstation with a GPU**, usually with a limited amount of memory for small batch-sizes. Use this workstation to train
     the model and ensure that you choose a model that makes sense, and the training procedure works. Can be used to provide initial
     models for testing.
 
 The abovementioned setups might be folded into each other and that's great! If you have a GPU machine for each researcher, that's awesome!
 
diff --git a/docs/getting_started/mlops/mlops_second_steps.md b/docs/getting_started/mlops/mlops_second_steps.md
index ca6a3c61..be1616dc 100644
--- a/docs/getting_started/mlops/mlops_second_steps.md
+++ b/docs/getting_started/mlops/mlops_second_steps.md
@@ -51,8 +51,8 @@ new_dataset = Dataset.create(
     dataset_project='data',
     dataset_name='dataset_v2',
     parent_datasets=[dataset],
-    use_current_task=True,
-    # this will make sure we have the creation code and the actual dataset artifacts on the same Task
+    # this will make sure we have the creation code and the actual dataset artifacts on the same Task
+    use_current_task=True,
 )
 new_dataset.sync_folder(local_path=dataset_folder)
 new_dataset.upload()
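A note on the hyperparameters.md hunk above: the "subsections" wording refers to the way `Task.get_parameters` returns a flat dictionary whose keys carry a `Section/name` prefix matching the WebApp subsections (General / Args / TF_Define / Environment). A minimal sketch of that key layout, using a plain dict in place of a real task (no ClearML server needed; the parameter names and values are hypothetical):

```python
# Hypothetical flat dict standing in for the return value of
# Task.get_parameters(): keys are "<section>/<name>", mirroring the
# WebApp subsections named in the patch.
params = {
    "Args/batch_size": "32",   # hypothetical names/values for illustration
    "Args/lr": "0.001",
    "General/epochs": "10",
    "TF_Define/seed": "42",
}

def by_section(flat):
    """Group a flat {'Section/name': value} dict back into subsections."""
    nested = {}
    for key, value in flat.items():
        section, _, name = key.partition("/")
        nested.setdefault(section, {})[name] = value
    return nested

grouped = by_section(params)
```

After grouping, `grouped["Args"]` holds only the command-line-style parameters, which is a convenient shape for comparing two tasks section by section.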