Small edits (#174)

pollfly 2022-01-24 15:42:17 +02:00 committed by GitHub
parent be9761012e
commit c7591a3a08
9 changed files with 39 additions and 25 deletions


@@ -384,15 +384,15 @@ A single agent can listen to multiple queues. The priority is set by their order
 ```bash
 clearml-agent daemon --detached --queue high_q low_q --gpus 0
 ```
-This ensures the agent first tries to pull a Task from the “hiqh_q” queue, and only if it is empty, the agent will try to pull
-from the “low_q” queue.
+This ensures the agent first tries to pull a Task from the `high_q` queue, and only if it is empty, the agent will try to pull
+from the `low_q` queue.
 To make sure an agent pulls from all queues equally, add the `--order-fairness` flag.
 ```bash
 clearml-agent daemon --detached --queue group_a group_b --order-fairness --gpus 0
 ```
-It will make sure the agent will pull from the “group_a” queue, then from “group_b”, then back to “group_a”, etc. This ensures
-that “group A” or ”group_b” will not be able to starve one another of resources.
+It will make sure the agent will pull from the `group_a` queue, then from `group_b`, then back to `group_a`, etc. This ensures
+that `group_a` or `group_b` will not be able to starve one another of resources.
 ### Explicit Task Execution
@@ -713,8 +713,8 @@ Currently, these runtime properties can only be set using a ClearML REST API ca
 endpoint, as follows:
 * The body of the request must contain the `worker-id`, and the runtime property to add.
-* An expiry date is optional. Use the format `”expiry”:<time>`. For example, `”expiry”:86400` will set an expiry of 24 hours.
-* To delete the property, set the expiry date to zero, `'expiry:0'`.
+* An expiry date is optional. Use the format `"expiry":<time>`. For example, `"expiry":86400` will set an expiry of 24 hours.
+* To delete the property, set the expiry date to zero, `"expiry":0`.
 For example, to force a worker on for 24 hours:
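The request body itself falls outside this hunk. For illustration, a hedged sketch of what such a call might look like, modeled on the `queues.update` curl call later in this same file — the `workers.set_runtime_properties` endpoint name and the payload field names are assumptions, not taken from this diff:

```bash
curl --user <key>:<secret> --header "Content-Type: application/json" --data '{"worker":"<worker_id>","runtime_properties":[{"key":"force","value":"on","expiry":86400}]}' http://<api-server-hostname-or-ip>:8008/workers.set_runtime_properties
```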
@@ -736,10 +736,12 @@ APIClient. The body of the call must contain the ``queue-id`` and the tags to ad
 For example, force workers on for a queue using the APIClient:
-from trains.backend_api.session.client import APIClient
+from clearml.backend_api.session.client import APIClient
 client = APIClient()
-client.queues.update(queue=<queue_id>, tags=["force_workers:on"]
+client.queues.update(queue="<queue_id>", tags=["force_workers:on"])
 Or, force workers on for a queue using the REST API:
+```bash
 curl --user <key>:<secret> --header "Content-Type: application/json" --data '{"queue":"<queue_id>","tags":["force_workers:on"]}' http://<api-server-hostname-or-ip>:8008/queues.update
+```
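Pieced together, the APIClient lines from this hunk run as a short standalone script; a minimal sketch, assuming ClearML credentials are already configured in the local `clearml.conf` and `"<queue_id>"` is replaced with a real queue ID:

```python
from clearml.backend_api.session.client import APIClient

# authenticated client (reads credentials from the local clearml.conf)
client = APIClient()

# the "force_workers:on" tag forces workers on for this queue, per the text above
client.queues.update(queue="<queue_id>", tags=["force_workers:on"])
```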


@@ -13,7 +13,7 @@ The following page provides a reference to `clearml-data`'s CLI commands.
 ### Creating a Dataset
 ```bash
-clearml-data create --project <project_name> --name <dataset_name> --parents <existing_dataset_id>`
+clearml-data create --project <project_name> --name <dataset_name> --parents <existing_dataset_id>
 ```
 Creates a new dataset. <br/>
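As a concrete illustration of the command above (the project and dataset names are just examples borrowed from elsewhere in this commit):

```bash
clearml-data create --project "dataset examples" --name cifar_dataset
```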
@@ -132,7 +132,7 @@ Once a dataset is finalized, it can no longer be modified.
 ### Syncing Local Storage
 ```
-clearml-data sync [--id <dataset_id] --folder <folder_location> [--parents '<parent_id>']`
+clearml-data sync [--id <dataset_id>] --folder <folder_location> [--parents '<parent_id>']
 ```
 This option syncs a folder's content with ClearML. It is useful in case a user has a single point of truth (i.e. a folder) which
 updates from time to time.
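For instance, to sync a local working folder into an existing dataset lineage (the folder name and parent ID are placeholders):

```bash
clearml-data sync --folder ./work_dataset --parents '<parent_id>'
```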


@@ -36,7 +36,10 @@ This script downloads the data and `dataset_path` contains the path to the downl
 ```python
 from clearml import Dataset
-dataset = Dataset.create(dataset_name="cifar_dataset", dataset_project="dataset examples" )
+dataset = Dataset.create(
+    dataset_name="cifar_dataset",
+    dataset_project="dataset examples"
+)
 ```
 This creates a data processing task called `cifar_dataset` in the `dataset examples` project, which
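The hunk ends here; for context, a plausible continuation of the snippet using standard `Dataset` methods (`add_files`, `upload`, `finalize`) — these calls are not part of this diff:

```python
# add the downloaded files to the new dataset version, then publish it
dataset.add_files(path=dataset_path)
dataset.upload()
dataset.finalize()
```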


@@ -1096,10 +1096,10 @@ For example, to get the metrics for an experiment and to print metrics as a hist
 1. Send a request for a metrics histogram for experiment (task) ID `11` using the `events` service `ScalarMetricsIterHistogramRequest` method and print the histogram.
 ```python
-# Import Session from the trains backend_api
-from trains.backend_api import Session
+# Import Session from the clearml backend_api
+from clearml.backend_api import Session
 # Import the services for tasks, events, and projects
-from trains.backend_api.services import tasks, events, projects
+from clearml.backend_api.services import tasks, events, projects
 # Create an authenticated session
 session = Session()
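The hunk cuts off before the request itself; a hedged sketch of the step the list item describes, reusing the `session` and `events` imports above (the exact response fields may differ):

```python
# request the scalar metrics histogram for task ID "11" and print it
res = session.send(events.ScalarMetricsIterHistogramRequest(task="11"))
print(res.response_data)
```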


@@ -99,7 +99,11 @@ we need to pass a storage location for the model files to be uploaded to.
 For example, upload all snapshots to an S3 bucket:
 ```python
-task = Task.init(project_name='examples', task_name='storing model', output_uri='s3://my_models/')
+task = Task.init(
+    project_name='examples',
+    task_name='storing model',
+    output_uri='s3://my_models/'
+)
 ```
 Now, whenever the framework (TF/Keras/PyTorch etc.) stores a snapshot, the model file is automatically uploaded to the bucket to a specific folder for the experiment.
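To make the auto-upload concrete, a minimal sketch assuming PyTorch is installed and valid S3 credentials are configured; the model here is a stand-in:

```python
import torch
from clearml import Task

task = Task.init(
    project_name='examples',
    task_name='storing model',
    output_uri='s3://my_models/'
)

model = torch.nn.Linear(4, 2)  # stand-in model
# ClearML hooks the framework's save call, so this snapshot
# is uploaded to the S3 bucket under the experiment's folder
torch.save(model.state_dict(), 'model.pt')
```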


@@ -43,7 +43,7 @@ dataset_folder = dataset.get_mutable_local_copy(
     overwrite=True
 )
 # change some files in the `./work_dataset` folder
+...
 # create a new version of the dataset with the pickle file
 new_dataset = Dataset.create(
     dataset_project='data', dataset_name='dataset_v2',
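The `Dataset.create` call is cut off by the hunk boundary; a hedged sketch of how this versioning flow typically completes — the `parent_datasets` argument and the follow-up calls are assumptions, not shown in this diff:

```python
# create the child version, pointing at the previous dataset as parent
new_dataset = Dataset.create(
    dataset_project='data', dataset_name='dataset_v2',
    parent_datasets=[dataset.id]
)
# pick up the edited files from the working folder, then publish
new_dataset.sync_folder(local_path=dataset_folder)
new_dataset.upload()
new_dataset.finalize()
```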


@@ -36,7 +36,10 @@ This script downloads the data and `dataset_path` contains the path to the downl
 ```python
 from clearml import Dataset
-dataset = Dataset.create(dataset_name="cifar_dataset", dataset_project="dataset examples" )
+dataset = Dataset.create(
+    dataset_name="cifar_dataset",
+    dataset_project="dataset examples"
+)
 ```
 This creates a data processing task called `cifar_dataset` in the `dataset examples` project, which


@@ -19,17 +19,19 @@ where a `clearml-agent` will run and spin an instance of the remote session.
 ### Step 1: Launch `clearml-session`
-Execute the `clearml-session` command with the following command line options:
+Execute the following command:
 ```bash
 clearml-session --docker nvidia/cuda:10.1-cudnn7-runtime-ubuntu18.04 --packages "clearml" "tensorflow>=2.2" "keras" --queue default
 ```
-* Enter a docker image `--docker nvidia/cuda:10.1-cudnn7-runtime-ubuntu18.04`
-* Enter required python packages `--packages "clearml" "tensorflow>=2.2" "keras"`
-* Specify the resource queue `--queue default`.
+This sets the following arguments:
+* `--docker nvidia/cuda:10.1-cudnn7-runtime-ubuntu18.04` - Docker image
+* `--packages "clearml" "tensorflow>=2.2" "keras"` - Required Python packages
+* `--queue default` - Selected queue to launch the session from
 :::note
 Enter a project name using `--project <name>`. If no project is input, the default project
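For completeness, a hedged example of launching with an explicit project, per the note above (the project name is illustrative):

```bash
clearml-session --project "DevOps" --docker nvidia/cuda:10.1-cudnn7-runtime-ubuntu18.04 --packages "clearml" "tensorflow>=2.2" "keras" --queue default
```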