Small edits (#797)

This commit is contained in:
pollfly 2024-03-12 11:25:00 +02:00 committed by GitHub
parent 67cfbb1ef6
commit e4cde447aa
GPG Key ID: B5690EEEBB952194
5 changed files with 12 additions and 8 deletions


@@ -45,7 +45,7 @@ of the optimization results in table and graph forms.
|`--args`| List of `<argument>=<value>` strings to pass to the remote execution. Currently only argparse/click/hydra/fire arguments are supported. Example: `--args lr=0.003 batch_size=64`|<img src="/docs/latest/icons/ico-optional-yes.svg" alt="Yes" className="icon size-md center-md" />|
|`--local`| If set, run the experiments locally. Notice that no new python environment will be created. The `--script` parameter must point to a local file entry point and all arguments must be passed with `--args`| <img src="/docs/latest/icons/ico-optional-yes.svg" alt="Yes" className="icon size-md center-md" />|
|`--save-top-k-tasks-only`| Keep only the top \<k\> performing tasks, and archive the rest of the experiments. Input `-1` to keep all tasks. Default: `10`.|<img src="/docs/latest/icons/ico-optional-yes.svg" alt="Yes" className="icon size-md center-md" />|
-|`--time-limit-per-job`|Maximum execution time per single job in minutes. When time limit is exceeded, the job is aborted. Default: no time limit.|<img src="/docs/latest/icons/ico-optional-yes.svg" alt="Yes" className="icon size-md center-md" />|
+|`--time-limit-per-job`|Maximum execution time per single job in minutes. When the time limit is exceeded, the job is aborted. Default: no time limit.|<img src="/docs/latest/icons/ico-optional-yes.svg" alt="Yes" className="icon size-md center-md" />|
</div>
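The `--save-top-k-tasks-only` behavior described above can be sketched as: rank finished tasks by their objective value, keep the best k, and archive the rest. The snippet below is a pure-Python illustration of that semantics, not ClearML's implementation; note that `-1` keeps everything:

```python
def split_top_k(tasks, k, maximize=True):
    """Return (kept, archived) from a list of (task_id, objective) pairs.

    k == -1 keeps all tasks, matching `--save-top-k-tasks-only -1`.
    """
    if k == -1:
        return list(tasks), []
    # Rank by objective; best tasks first when maximizing.
    ranked = sorted(tasks, key=lambda t: t[1], reverse=maximize)
    return ranked[:k], ranked[k:]
```

For example, with `k=2` over three tasks scoring 0.9, 0.5, and 0.7, the two highest scorers are kept and the third is archived.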


@@ -48,13 +48,13 @@ As can be seen, the `clearml-data sync` command creates the dataset, then upload
## Modifying Synced Folder
-Now we'll modify the folder:
+Modify the data folder:
1. Add another line to one of the files in the `data_samples` folder.
1. Add a file to the `data_samples` folder.<br/>
Run `echo "data data data" > data_samples/new_data.txt` (this will create the file `new_data.txt` and put it in the `data_samples` folder)
-We'll repeat the process of creating a new dataset with the previous one as its parent, and syncing the folder.
+Repeat the process of creating a new dataset with the previous one as its parent, and syncing the folder.
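Conceptually, syncing compares the folder's current contents against the previous dataset version to decide which files were added, modified, or removed. A rough pure-Python sketch of that comparison (illustrative only, not ClearML's implementation):

```python
import hashlib
from pathlib import Path

def folder_state(folder):
    """Map each file's relative path to a hash of its contents."""
    return {
        str(p.relative_to(folder)): hashlib.sha256(p.read_bytes()).hexdigest()
        for p in Path(folder).rglob("*")
        if p.is_file()
    }

def diff_states(old, new):
    """Return (added, modified, removed) file sets between two states."""
    added = set(new) - set(old)
    removed = set(old) - set(new)
    modified = {f for f in set(old) & set(new) if old[f] != new[f]}
    return added, modified, removed
```

Applied to the steps above, such a comparison would report `new_data.txt` as added and the edited sample file as modified.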
```bash
clearml-data sync --project datasets --name second_ds --parents a1ddc8b0711b4178828f6c6e6e994b7c --folder data_samples


@@ -126,7 +126,7 @@ You'll need to input the Dataset ID you received when you created the dataset above
1 file added
```
-1. Remove a file. We'll need to specify the file's full path (within the dataset, not locally) to remove it.
+1. Remove a file. You need to specify the file's full path (within the dataset, not locally) to remove it.
```bash
clearml-data remove --files data_samples/dancing.jpg


@@ -31,6 +31,8 @@ The second step is to preprocess the data. First access the data, then modify it
and lastly create a new version of the data.
```python
from clearml import Task, Dataset
# create a task for the data processing part
task = Task.init(project_name='data', task_name='create', task_type='data_processing')
@@ -93,6 +95,8 @@ will first run the first and then run the second.
It is important to remember that pipelines are Tasks by themselves and can also be automated by other pipelines (i.e. pipelines of pipelines).
```python
from clearml import PipelineController
pipe = PipelineController(
project='data',
name='pipeline demo',
@@ -112,6 +116,6 @@ pipe.add_step(
)
```
-We can also pass the parameters from one step to the other (for example `Task.id`).
-In addition to pipelines made up of Task steps, ClearML also supports pipelines consisting of function steps. See more in the
-full pipeline documentation [here](../../pipelines/pipelines.md).
+You can also pass the parameters from one step to the other (for example `Task.id`).
+In addition to pipelines made up of Task steps, ClearML also supports pipelines consisting of function steps. For more
+information, see the [full pipeline documentation](../../pipelines/pipelines.md).
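The idea of running steps in order and passing one step's output (such as a `Task.id`) into the next can be sketched with plain functions. This is illustrative only; in ClearML itself, `PipelineController` resolves references in `parameter_override` at runtime:

```python
def run_pipeline(steps):
    """Run steps in order; each step sees all prior steps' results.

    steps: list of (name, fn) pairs, where fn takes a dict of prior
    results and returns this step's result (standing in for a Task.id).
    """
    results = {}
    for name, fn in steps:
        results[name] = fn(results)
    return results

# Hypothetical two-step chain: the second step consumes the first step's
# output, analogous to passing the creating task's ID downstream.
pipeline = [
    ("create", lambda prior: "dataset-task-id-123"),
    ("train", lambda prior: "trained-on:" + prior["create"]),
]
```

Calling `run_pipeline(pipeline)` runs `create` first and hands its result to `train`, mirroring how a downstream pipeline step consumes an upstream step's ID.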


@@ -105,7 +105,7 @@ Switch on the **Show row extremes** toggle to highlight each variant's maximum a
The **Hyperparameters** tab's **Parallel Coordinates** comparison shows experiments' hyperparameter impact on specified
metrics:
-1. Under **Performance Metrics**, select a metrics to compare for
+1. Under **Performance Metrics**, select metrics to compare for
1. Select the values to use for each metric in the plot (can select multiple):
* LAST - The final value, or the most recent value, for currently running experiments
* MIN - Minimal value