Small edits (#174)

pollfly 2022-01-24 15:42:17 +02:00 committed by GitHub
parent be9761012e
commit c7591a3a08
9 changed files with 39 additions and 25 deletions

View File

@ -384,15 +384,15 @@ A single agent can listen to multiple queues. The priority is set by their order
```bash
clearml-agent daemon --detached --queue high_q low_q --gpus 0
```
This ensures the agent first tries to pull a Task from the “hiqh_q” queue, and only if it is empty, the agent will try to pull
from the “low_q” queue.
This ensures the agent first tries to pull a Task from the `high_q` queue, and only if it is empty, the agent will try to pull
from the `low_q` queue.
To make sure an agent pulls from all queues equally, add the `--order-fairness` flag.
```bash
clearml-agent daemon --detached --queue group_a group_b --order-fairness --gpus 0
```
It will make sure the agent will pull from the “group_a” queue, then from “group_b”, then back to “group_a”, etc. This ensures
that “group A” or ”group_b” will not be able to starve one another of resources.
This makes sure the agent pulls from the `group_a` queue, then from `group_b`, then back to `group_a`, and so on. This ensures
that `group_a` and `group_b` will not be able to starve one another of resources.
### Explicit Task Execution
@ -713,8 +713,8 @@ Currently, these runtime properties can only be set using a ClearML REST API ca
endpoint, as follows:
* The body of the request must contain the `worker-id`, and the runtime property to add.
* An expiry date is optional. Use the format `”expiry”:<time>`. For example, `”expiry”:86400` will set an expiry of 24 hours.
* To delete the property, set the expiry date to zero, `'expiry:0'`.
* An expiry date is optional. Use the format `"expiry":<time>`. For example, `"expiry":86400` will set an expiry of 24 hours.
* To delete the property, set the expiry date to zero, `"expiry":0`.
For example, to force a worker on for 24 hours:
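A rough sketch of such a call in Python (assumptions: the endpoint name `workers.set_runtime_properties`, the payload field names, and the `force`/`on` property key/value are not confirmed here; `<key>`, `<secret>`, `<worker_id>`, and the server address are placeholders, as in the other REST examples on this page):

```python
import requests

# hedged sketch: the endpoint and payload layout are assumptions, not the
# documented call
response = requests.post(
    "http://<api-server-hostname-or-ip>:8008/workers.set_runtime_properties",
    auth=("<key>", "<secret>"),
    json={
        "worker": "<worker_id>",
        "runtime_properties": [
            # "expiry": 86400 clears the property after 24 hours
            {"key": "force", "value": "on", "expiry": 86400}
        ],
    },
)
response.raise_for_status()
```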
@ -736,10 +736,12 @@ APIClient. The body of the call must contain the ``queue-id`` and the tags to ad
For example, force workers on for a queue using the APIClient:
from trains.backend_api.session.client import APIClient
from clearml.backend_api.session.client import APIClient
client = APIClient()
client.queues.update(queue=<queue_id>, tags=["force_workers:on"]
client.queues.update(queue="<queue_id>", tags=["force_workers:on"])
Or, force workers on for a queue using the REST API:
curl --user <key>:<secret> --header "Content-Type: application/json" --data '{"queue":"<queue_id>","tags":["force_workers:on"]}' http://<api-server-hostname-or-ip>:8008/queues.update
```bash
curl --user <key>:<secret> --header "Content-Type: application/json" --data '{"queue":"<queue_id>","tags":["force_workers:on"]}' http://<api-server-hostname-or-ip>:8008/queues.update
```
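A small follow-up sketch (assumption: `queues.get_by_id` is exposed through the same `APIClient` wrapper and returns the queue entity directly) to confirm the tag was applied:

```python
from clearml.backend_api.session.client import APIClient

client = APIClient()

# apply the tag that forces workers on for this queue
client.queues.update(queue="<queue_id>", tags=["force_workers:on"])

# read the queue back and print its tags to verify the update
queue = client.queues.get_by_id(queue="<queue_id>")
print(queue.tags)
```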

View File

@ -13,7 +13,7 @@ The following page provides a reference to `clearml-data`'s CLI commands.
### Creating a Dataset
```bash
clearml-data create --project <project_name> --name <dataset_name> --parents <existing_dataset_id>`
clearml-data create --project <project_name> --name <dataset_name> --parents <existing_dataset_id>
```
Creates a new dataset. <br/>
@ -132,7 +132,7 @@ Once a dataset is finalized, it can no longer be modified.
### Syncing Local Storage
```
clearml-data sync [--id <dataset_id] --folder <folder_location> [--parents '<parent_id>']`
clearml-data sync [--id <dataset_id>] --folder <folder_location> [--parents '<parent_id>']
```
This option syncs a folder's content with ClearML. It is useful when a user has a single point of truth (i.e. a folder) that is
updated from time to time.

View File

@ -36,7 +36,10 @@ This script downloads the data and `dataset_path` contains the path to the downl
```python
from clearml import Dataset
dataset = Dataset.create(dataset_name="cifar_dataset", dataset_project="dataset examples" )
dataset = Dataset.create(
dataset_name="cifar_dataset",
dataset_project="dataset examples"
)
```
This creates a data processing task called `cifar_dataset` in the `dataset examples` project, which can be viewed in the WebApp.
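A minimal sketch of the typical follow-up steps (assuming `dataset_path` from the download step above points at the extracted CIFAR files, and reusing the `dataset` object just created):

```python
# add the downloaded files to this dataset version
dataset.add_files(path=dataset_path)

# upload the files to the default storage and close this version
dataset.upload()
dataset.finalize()
```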

View File

@ -1096,10 +1096,10 @@ For example, to get the metrics for an experiment and to print metrics as a hist
1. Send a request for a metrics histogram for experiment (task) ID `11` using the `events` service `ScalarMetricsIterHistogramRequest` method and print the histogram.
```python
# Import Session from the trains backend_api
from trains.backend_api import Session
# Import Session from the clearml backend_api
from clearml.backend_api import Session
# Import the services for tasks, events, and projects
from trains.backend_api.services import tasks, events, projects
from clearml.backend_api.services import tasks, events, projects
# Create an authenticated session
session = Session()
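# A possible continuation (sketch): request the scalar-metrics histogram for
# task ID "11" and print the raw response payload (assumes Session.send() and
# the response_data attribute behave as in other backend_api examples)
histogram_request = events.ScalarMetricsIterHistogramRequest(task="11")
result = session.send(histogram_request)
print(result.response_data)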

View File

@ -99,7 +99,11 @@ we need to pass a storage location for the model files to be uploaded to.
For example, upload all snapshots to an S3 bucket:
```python
task = Task.init(project_name='examples', task_name='storing model', output_uri='s3://my_models/')
task = Task.init(
project_name='examples',
task_name='storing model',
output_uri='s3://my_models/'
)
```
Now, whenever the framework (TF/Keras/PyTorch etc.) stores a snapshot, the model file is automatically uploaded to the bucket, under a folder dedicated to the experiment.
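As an illustrative sketch (assuming PyTorch is one of the frameworks in use), any standard checkpoint save made after this `Task.init` call is picked up by ClearML's automatic framework logging and uploaded to the configured bucket:

```python
import torch
import torch.nn as nn

from clearml import Task

# same initialization as above: snapshots default to the S3 bucket
task = Task.init(
    project_name='examples',
    task_name='storing model',
    output_uri='s3://my_models/'
)

model = nn.Linear(10, 2)
# a plain framework save; the resulting model.pt is registered as an output
# model and uploaded automatically to the experiment's folder in the bucket
torch.save(model.state_dict(), 'model.pt')
```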

View File

@ -43,7 +43,7 @@ dataset_folder = dataset.get_mutable_local_copy(
overwrite=True
)
# change some files in the `./work_dataset` folder
...
# create a new version of the dataset with the pickle file
new_dataset = Dataset.create(
dataset_project='data', dataset_name='dataset_v2',

View File

@ -53,7 +53,7 @@ For this example, use a local version of [this script](https://github.com/allegr
1. Go to the root folder of the cloned repository
1. Run the following command:
``` bash
```bash
clearml-task --project keras --name local_test --script webinar-0620/keras_mnist.py --requirements webinar-0620/requirements.txt --args epochs=1 --queue default
```

View File

@ -36,8 +36,11 @@ This script downloads the data and `dataset_path` contains the path to the downl
```python
from clearml import Dataset
dataset = Dataset.create(dataset_name="cifar_dataset", dataset_project="dataset examples" )
```
dataset = Dataset.create(
dataset_name="cifar_dataset",
dataset_project="dataset examples"
)
```
This creates a data processing task called `cifar_dataset` in the `dataset examples` project, which
can be viewed in the WebApp.
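A brief sketch of how another task could later consume this dataset (`Dataset.get` with the same project and name, followed by a cached local copy):

```python
from clearml import Dataset

# retrieve the latest version of the dataset created above
dataset = Dataset.get(
    dataset_project="dataset examples",
    dataset_name="cifar_dataset"
)

# download (or reuse a cached copy of) the dataset files locally
local_path = dataset.get_local_copy()
print(local_path)
```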

View File

@ -19,17 +19,19 @@ where a `clearml-agent` will run and spin an instance of the remote session.
### Step 1: Launch `clearml-session`
Execute the `clearml-session` command with the following command line options:
Execute the following command:
```bash
clearml-session --docker nvidia/cuda:10.1-cudnn7-runtime-ubuntu18.04 --packages "clearml" "tensorflow>=2.2" "keras" --queue default
```
* Enter a docker image `--docker nvidia/cuda:10.1-cudnn7-runtime-ubuntu18.04`
This sets the following arguments:
* `--docker nvidia/cuda:10.1-cudnn7-runtime-ubuntu18.04` - Docker image
* Enter required python packages `--packages "clearml" "tensorflow>=2.2" "keras"`
* `--packages "clearml" "tensorflow>=2.2" "keras"` - Required Python packages
* Specify the resource queue `--queue default`.
* `--queue default` - Selected queue to launch the session from
:::note
Enter a project name using `--project <name>`. If no project is input, the default project