Small edits (#174)

commit c7591a3a08 (parent be9761012e)
Author: pollfly
Date:   2022-01-24 15:42:17 +02:00
Committed by: GitHub

9 changed files with 39 additions and 25 deletions


@@ -53,7 +53,7 @@ For this example, use a local version of [this script](https://github.com/allegr
1. Go to the root folder of the cloned repository
1. Run the following command:
-``` bash
+```bash
clearml-task --project keras --name local_test --script webinar-0620/keras_mnist.py --requirements webinar-0620/requirements.txt --args epochs=1 --queue default
```
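The single long `clearml-task` invocation in this hunk is easy to mistype. As a sketch (not part of the docs change), the same command line can be assembled from an argument list and safely quoted with Python's standard `shlex` module; the values below are copied from the command above:

```python
import shlex

# Argument values taken verbatim from the clearml-task command in the hunk above.
args = [
    "clearml-task",
    "--project", "keras",
    "--name", "local_test",
    "--script", "webinar-0620/keras_mnist.py",
    "--requirements", "webinar-0620/requirements.txt",
    "--args", "epochs=1",
    "--queue", "default",
]
# shlex.join quotes any argument that needs it, so the result is safe to paste into bash.
print(shlex.join(args))
```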


@@ -36,8 +36,11 @@ This script downloads the data and `dataset_path` contains the path to the downl
```python
from clearml import Dataset
-dataset = Dataset.create(dataset_name="cifar_dataset", dataset_project="dataset examples" )
-```
+dataset = Dataset.create(
+    dataset_name="cifar_dataset",
+    dataset_project="dataset examples"
+)
+```
This creates a data processing task called `cifar_dataset` in the `dataset examples` project, which
can be viewed in the WebApp.
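The change in this hunk is purely cosmetic: Python parses a call split across lines exactly the same as a one-liner. A quick stand-alone check, using a hypothetical `create` stand-in (the real `Dataset.create` needs a reachable ClearML server):

```python
def create(dataset_name, dataset_project):
    # Stand-in for clearml.Dataset.create: just echo the arguments back.
    return {"name": dataset_name, "project": dataset_project}

# One-line form, as in the old docs snippet.
one_liner = create(dataset_name="cifar_dataset", dataset_project="dataset examples")

# Multi-line form, as in the new docs snippet.
multi_line = create(
    dataset_name="cifar_dataset",
    dataset_project="dataset examples",
)

assert one_liner == multi_line  # identical call, different formatting
```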


@@ -19,17 +19,19 @@ where a `clearml-agent` will run and spin an instance of the remote session.
### Step 1: Launch `clearml-session`
-Execute the `clearml-session` command with the following command line options:
+Execute the following command:
```bash
clearml-session --docker nvidia/cuda:10.1-cudnn7-runtime-ubuntu18.04 --packages "clearml" "tensorflow>=2.2" "keras" --queue default
```
-* Enter a docker image `--docker nvidia/cuda:10.1-cudnn7-runtime-ubuntu18.04`
-* Enter required python packages `--packages "clearml" "tensorflow>=2.2" "keras"`
-* Specify the resource queue `--queue default`.
+This sets the following arguments:
+* `--docker nvidia/cuda:10.1-cudnn7-runtime-ubuntu18.04` - Docker image
+* `--packages "clearml" "tensorflow>=2.2" "keras"` - Required Python packages
+* `--queue default` - Selected queue to launch the session from
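One detail worth noting about the `--packages` arguments in this hunk: the quotes around `"tensorflow>=2.2"` matter, because an unquoted `>` would be taken by the shell as output redirection. As a sketch, Python's standard `shlex.split` (which mimics POSIX word-splitting, without performing redirection) shows that the quoted form yields each package spec as a single token:

```python
import shlex

# How a POSIX shell tokenizes the quoted package arguments from the command above;
# quoting keeps "tensorflow>=2.2" one argument instead of a redirection.
tokens = shlex.split('--packages "clearml" "tensorflow>=2.2" "keras"')
print(tokens)
```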
:::note
Enter a project name using `--project <name>`. If no project is input, the default project