Small edits (#812)

pollfly 2024-03-27 11:56:21 +02:00 committed by GitHub
parent 57be45d2a8
commit fb6270ff23
No known key found for this signature in database
GPG Key ID: B5690EEEBB952194
4 changed files with 15 additions and 13 deletions


@@ -43,11 +43,13 @@ which supports environment variable reference.
 For example:
 ```editorconfig
-google.storage {
-    # # Default project and credentials file
-    # # Will be used when no bucket configuration is found
-    project: "clearml"
-    credentials_json: ${GOOGLE_APPLICATION_CREDENTIALS}
+sdk {
+    google.storage {
+        # # Default project and credentials file
+        # # Will be used when no bucket configuration is found
+        project: "clearml"
+        credentials_json: ${GOOGLE_APPLICATION_CREDENTIALS}
+    }
 }
 ```
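The `${GOOGLE_APPLICATION_CREDENTIALS}` reference above is substituted from the environment when the configuration file is loaded. A minimal sketch of that substitution (a hypothetical helper for illustration only, not ClearML's actual HOCON parser):

```python
import os
import re


def resolve_env_refs(text):
    """Replace ${VAR} references with values from the environment.

    Hypothetical helper illustrating the substitution that happens at
    config-load time; unset variables resolve to an empty string here.
    """
    return re.sub(r"\$\{(\w+)\}", lambda m: os.environ.get(m.group(1), ""), text)


# Illustrative value; point this at a real service-account key in practice
os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = "/home/user/gcp-key.json"
line = "credentials_json: ${GOOGLE_APPLICATION_CREDENTIALS}"
print(resolve_env_refs(line))  # credentials_json: /home/user/gcp-key.json
```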


@@ -97,7 +97,7 @@ ClearML provides methods to directly access a task's logged parameters.
 To get all of a task's parameters and properties (hyperparameters, configuration objects, and user properties), use the
 [`Task.get_parameters`](../references/sdk/task.md#get_parameters) method, which will return a dictionary with the parameters,
-including their sub-sections (see [WebApp sections](#webapp-interface) below).
+including their subsections (see [WebApp sections](#webapp-interface) below).
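The dictionary returned by `Task.get_parameters` is flat, with each key carrying its section prefix (e.g. `Args/...`). A short sketch of regrouping that flat result by section — the parameter names and values below are illustrative placeholders, not output from a real task:

```python
# Illustrative shape of the dictionary returned by Task.get_parameters():
# keys are "<section>/<name>" strings, values are strings.
params = {
    "Args/batch_size": "32",
    "Args/epochs": "10",
    "General/seed": "42",
}

# Regroup the flat dictionary by section for easier lookup
by_section = {}
for key, value in params.items():
    section, _, name = key.partition("/")
    by_section.setdefault(section, {})[name] = value

print(by_section["Args"]["batch_size"])  # 32
```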
 ## WebApp Interface
@@ -108,7 +108,7 @@ The configuration panel is split into three sections according to type:
 - **Hyperparameters** - Individual parameters for configuration
 - **Configuration Objects** - Usually configuration files (JSON / YAML) or Python objects.
-These sections are further broken down into sub-sections based on how the parameters were logged (General / Args / TF_Define / Environment).
+These sections are further broken down into subsections based on how the parameters were logged (General / Args / TF_Define / Environment).
 ![Task hyperparameters sections](../img/hyperparameters_sections.png)


@@ -7,7 +7,7 @@ While ClearML was designed to fit into any workflow, the practices described bel
 to preparing it to scale in the long term.
 :::important
-The below is only our opinion. ClearML was designed to fit into any workflow whether it conforms to our way or not!
+The following is only an opinion. ClearML is designed to accommodate any workflow whether it conforms to our way or not!
 :::
 ## Develop Locally
@@ -16,9 +16,9 @@ The below is only our opinion. ClearML was designed to fit into any workflow whe
 During early stages of model development, while code is still being modified heavily, this is the usual setup we'd expect to see used by data scientists:
-- A local development machine, usually a laptop (and usually using only CPU) with a fraction of the dataset for faster
-  iterations - Use a local machine for writing, training, and debugging pipeline code.
-- A workstation with a GPU, usually with a limited amount of memory for small batch-sizes - Use this workstation to train
+- **Local development machine**, usually a laptop (and usually using only CPU) with a fraction of the dataset for faster
+  iterations. Use a local machine for writing, training, and debugging pipeline code.
+- **Workstation with a GPU**, usually with a limited amount of memory for small batch-sizes. Use this workstation to train
   the model and ensure that you choose a model that makes sense, and the training procedure works. Can be used to provide initial models for testing.
 The abovementioned setups might be folded into each other and that's great! If you have a GPU machine for each researcher, that's awesome!


@@ -51,8 +51,8 @@ new_dataset = Dataset.create(
     dataset_project='data',
     dataset_name='dataset_v2',
     parent_datasets=[dataset],
-    use_current_task=True,
-    # this will make sure we have the creation code and the actual dataset artifacts on the same Task
+    # this will make sure we have the creation code and the actual dataset artifacts on the same Task
+    use_current_task=True,
 )
 new_dataset.sync_folder(local_path=dataset_folder)
 new_dataset.upload()