small edits

revital 2023-07-31 15:34:36 +03:00
parent 84cf462de1
commit 5b4d0fca02
4 changed files with 4 additions and 4 deletions


@@ -51,7 +51,7 @@ As can be seen, the `clearml-data sync` command creates the dataset, then upload
 Now we'll modify the folder:
 1. Add another line to one of the files in the `data_samples` folder.
 1. Add a file to the sample_data folder.<br/>
-Run`echo "data data data" > data_samples/new_data.txt` (this will create the file `new_data.txt` and put it in the `data_samples` folder)
+Run `echo "data data data" > data_samples/new_data.txt` (this will create the file `new_data.txt` and put it in the `data_samples` folder)
 We'll repeat the process of creating a new dataset with the previous one as its parent, and syncing the folder.
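For reference, a minimal sketch of that repeat step using the ClearML Python SDK instead of the `clearml-data` CLI; the project and dataset names are placeholders, and it assumes the parent dataset created earlier was finalized:

```python
from clearml import Dataset

# Placeholder project/dataset names; assumes the parent dataset
# created in the previous step has already been finalized.
parent = Dataset.get(dataset_project="datasets", dataset_name="data_samples")

# Create a new dataset version with the previous one as its parent,
# then sync the modified local folder into it.
child = Dataset.create(
    dataset_project="datasets",
    dataset_name="data_samples",
    parent_datasets=[parent.id],
)
child.sync_folder(local_path="data_samples")
child.upload()
child.finalize()
```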


@@ -645,7 +645,7 @@ See more details in the [Artifacts Reporting example](../guides/reporting/artifa
 A task's artifacts are accessed through the tasks *artifact* property which lists the artifacts locations.
 The artifacts can subsequently be retrieved from their respective locations by using:
-* `get_local_copy()`- Downloads the artifact and caches it for later use, returning the path to the cached copy.
+* `get_local_copy()` - Downloads the artifact and caches it for later use, returning the path to the cached copy.
 * `get()` - Returns a Python object constructed from the downloaded artifact file.
 The code below demonstrates how to access a file artifact using the previously generated preprocessed data:
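As an illustration of that access pattern, a sketch assuming the preprocessing task stored an artifact named `preprocessed data` (the task ID is a placeholder):

```python
from clearml import Task

# Placeholder ID; use the task that produced the artifact.
preprocess_task = Task.get_task(task_id="<preprocessing_task_id>")

# Download the artifact file and cache it, returning the local path.
local_path = preprocess_task.artifacts["preprocessed data"].get_local_copy()

# Or load it directly as a Python object constructed from the file.
data = preprocess_task.artifacts["preprocessed data"].get()
```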


@@ -83,7 +83,7 @@ logged as required packages for the pipeline execution step.
 ```
 The third step in the pipeline uses the `step_three` function and uses as its input the second steps output. This
-reference implicitly defines the pipeline structure, making `step_two`the parent step of `step_three`.
+reference implicitly defines the pipeline structure, making `step_two` the parent step of `step_three`.
 Its return object will be stored as an artifact under the name `model`:
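To make the implicit parent/child wiring concrete, a sketch of such a step using ClearML's decorator-based pipeline syntax; the training body is illustrative only:

```python
from clearml import PipelineDecorator

@PipelineDecorator.component(return_values=["model"])
def step_three(data):
    # Taking step_two's output as input implicitly makes step_two
    # the parent of step_three in the pipeline structure.
    from sklearn.linear_model import LogisticRegression
    X, y = data
    model = LogisticRegression().fit(X, y)
    # The returned object is stored as an artifact named "model".
    return model
```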


@@ -28,7 +28,7 @@ StorageManager.get_local_copy(remote_url="s3://MyBucket/MyFolder/file.zip")
 ```
 :::note
-Zip and tar.gz files will be automatically extracted to cache. This can be controlled with the`extract_archive` flag.
+Zip and tar.gz files will be automatically extracted to cache. This can be controlled with the `extract_archive` flag.
 :::
 To download a file to a specific context in cache, specify the name of the context as the `cache_context` argument:
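Both flags in one sketch, reusing the placeholder URL from above; the context name is hypothetical:

```python
from clearml import StorageManager

# Keep the archive as-is instead of auto-extracting it into the cache.
StorageManager.get_local_copy(
    remote_url="s3://MyBucket/MyFolder/file.zip",
    extract_archive=False,
)

# Download into a named cache context.
StorageManager.get_local_copy(
    remote_url="s3://MyBucket/MyFolder/file.zip",
    cache_context="my_context",
)
```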