Small edits (#433)

pollfly 2023-01-10 10:29:40 +02:00 committed by GitHub
parent 6452838046
commit d0e4d14573
4 changed files with 20 additions and 21 deletions


@@ -94,7 +94,7 @@ continue running. When set to `true`, the agent crashes when encountering an exc
 **`agent.disable_ssh_mount`** (*bool*)
-* Set to `true` to disables the auto `.ssh` mount into the docker. The environment variable `CLEARML_AGENT_DISABLE_SSH_MOUNT`
+* Set to `true` to disable the auto `.ssh` mount into the docker. The environment variable `CLEARML_AGENT_DISABLE_SSH_MOUNT`
 overrides this configuration option.
 ___
@@ -340,8 +340,8 @@ ___
 **`agent.worker_name`** (*string*)
-* Use to replace the hostname when creating a worker, if `agent.worker_id` is not specified. For example, if `worker_name`
-is `MyMachine` and the process_id is `12345`, then the worker is name `MyMachine.12345`.
+* Use to replace the hostname when creating a worker if `agent.worker_id` is not specified. For example, if `worker_name`
+is `MyMachine` and the `process_id` is `12345`, then the worker is named `MyMachine.12345`.
 Alternatively, specify the environment variable `CLEARML_WORKER_ID` to override this worker name.
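The naming rule in the hunk above amounts to the following sketch (a hypothetical helper for illustration, not the agent's actual code):

```python
import os
import socket

def resolve_worker_name(configured_name=None):
    # Hypothetical illustration of the documented naming rule, not agent source.
    # CLEARML_WORKER_ID, when set, overrides any configured worker name.
    override = os.environ.get("CLEARML_WORKER_ID")
    if override:
        return override
    # Otherwise use agent.worker_name (or fall back to the hostname) plus the
    # process id, e.g. "MyMachine.12345".
    base = configured_name or socket.gethostname()
    return "{}.{}".format(base, os.getpid())
```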
@@ -420,7 +420,7 @@ match_rules: [
 **`agent.package_manager.conda_channels`** (*[string]*)
-* If conda is used, then this is list of conda channels to use when installing Python packages.
+* If conda is used, then this is the list of conda channels to use when installing Python packages.
 ---
@@ -875,13 +875,13 @@ and limitations on bucket naming.
 **`sdk.azure.storage.containers.account_name`** (*string*)
-* For Azure Storage, this is account name.
+* For Azure Storage, this is the account name.
 ---
 **`sdk.azure.storage.containers.container_name`** (*string*)
-* For Azure Storage, this the container name.
+* For Azure Storage, this is the container name.
 <br/>
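Once the Azure `account_name` and `container_name` settings above are configured, addressing the container from the SDK looks roughly like this sketch (the `azure://` URL scheme follows ClearML's convention; the account and container names are placeholders):

```python
from clearml import StorageManager

# Placeholders: "myaccount" and "mycontainer" must match the
# sdk.azure.storage.containers.account_name / container_name values configured above.
local_path = StorageManager.get_local_copy(
    remote_url="azure://myaccount.blob.core.windows.net/mycontainer/data/file.zip"
)
```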


@@ -114,11 +114,11 @@ Deploying the server requires a minimum of 4 GB of memory, 8 GB is recommended.
 * Linux:
   sudo chown -R 1000:1000 /opt/clearml
 * macOS:
   sudo chown -R $(whoami):staff /opt/clearml
 1. Download the ClearML Server docker-compose YAML file.
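For that download step, a minimal Python sketch (the compose-file URL is an assumption based on where the ClearML Server repository publishes it; verify it against the docs before use):

```python
import urllib.request

# Assumed location of the ClearML Server compose file; confirm against the docs.
COMPOSE_URL = "https://raw.githubusercontent.com/allegroai/clearml-server/master/docker/docker-compose.yml"

# Writing to /opt/clearml typically requires elevated permissions.
urllib.request.urlretrieve(COMPOSE_URL, "/opt/clearml/docker-compose.yml")
```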


@@ -21,11 +21,11 @@ Some legacy **Trains Server** AMIs provided an auto-upgrade on restart capabilit
 1. Shutdown the ClearML Server executing the following command (which assumes the configuration file is in the environment path).
   docker-compose -f /opt/clearml/docker-compose.yml down
 If you are upgrading from **Trains Server**, use this command:
   docker-compose -f /opt/trains/docker-compose.yml down
 1. [Backing up your data](clearml_server_aws_ec2_ami.md#backing-up-and-restoring-data-and-configuration) is recommended,
 and if your configuration folder is not empty, backing up your configuration.
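The backup step referenced above can be approximated in Python as follows (a sketch assuming the default `/opt/clearml/data` and `/opt/clearml/config` layout; adjust the paths to your deployment):

```python
import tarfile
import time

# Archive the default data and config folders before upgrading (sketch only).
stamp = time.strftime("%Y%m%d-%H%M%S")
with tarfile.open(f"clearml_backup_{stamp}.tgz", "w:gz") as tar:
    tar.add("/opt/clearml/data", arcname="data")
    tar.add("/opt/clearml/config", arcname="config")
```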


@@ -19,13 +19,12 @@ class. The storage examples include:
 ### Downloading a File
 To download a ZIP file from storage to the `global` cache context, call the [StorageManager.get_local_copy](../../references/sdk/storage.md#storagemanagerget_local_copy)
-method, and specify the destination location as the `remote_url` argument:
+class method, and specify the destination location as the `remote_url` argument:
 ```python
-# create a StorageManager instance
-manager = StorageManager()
-manager.get_local_copy(remote_url="s3://MyBucket/MyFolder/file.zip")
+from clearml import StorageManager
+StorageManager.get_local_copy(remote_url="s3://MyBucket/MyFolder/file.zip")
 ```
 :::note
@@ -35,13 +34,13 @@ Zip and tar.gz files will be automatically extracted to cache. This can be contr
 To download a file to a specific context in cache, specify the name of the context as the `cache_context` argument:
 ```python
-manager.get_local_copy(remote_url="s3://MyBucket/MyFolder/file.ext", cache_context="test")
+StorageManager.get_local_copy(remote_url="s3://MyBucket/MyFolder/file.ext", cache_context="test")
 ```
 To download a non-compressed file, set the `extract_archive` argument to `False`.
 ```python
-manager.get_local_copy(remote_url="s3://MyBucket/MyFolder/file.ext", extract_archive=False)
+StorageManager.get_local_copy(remote_url="s3://MyBucket/MyFolder/file.ext", extract_archive=False)
 ```
 By default, the `StorageManager` reports its download progress to the console every 5MB. You can change this using the
@@ -51,11 +50,11 @@ class method, and specifying the chunk size in MB (not supported for Azure and G
 ### Uploading a File
 To upload a file to storage, call the [StorageManager.upload_file](../../references/sdk/storage.md#storagemanagerupload_file)
-method. Specify the full path of the local file as the `local_file` argument, and the remote URL as the `remote_url`
+class method. Specify the full path of the local file as the `local_file` argument, and the remote URL as the `remote_url`
 argument.
 ```python
-manager.upload_file(
+StorageManager.upload_file(
     local_file="/mnt/data/also_file.ext", remote_url="s3://MyBucket/MyFolder"
 )
 ```
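The sentences truncated by the hunk boundaries above refer to the progress-report chunk size. Assuming the classmethods the clearml SDK exposes for this (the names and the `chunk_size_mb` parameter are recalled from the SDK and should be verified against your version), usage would look like:

```python
from clearml import StorageManager

# Assumed SDK classmethods for the report granularity; sizes are in MB.
# Per the surrounding docs, the download setting is not supported for Azure and GS downloads.
StorageManager.set_report_download_chunk_size(chunk_size_mb=10)
StorageManager.set_report_upload_chunk_size(chunk_size_mb=10)
```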
@@ -70,9 +69,9 @@ class method, and specifying the chunk size in MB (not supported for Azure and G
 ### Setting Cache Limits
 To set a limit on the number of files cached, call the [StorageManager.set_cache_file_limit](../../references/sdk/storage.md#storagemanagerset_cache_file_limit)
-method and specify the `cache_file_limit` argument as the maximum number of files. This does not limit the cache size,
+class method and specify the `cache_file_limit` argument as the maximum number of files. This does not limit the cache size,
 only the number of files.
 ```python
-new_cache_limit = manager.set_cache_file_limit(cache_file_limit=100)
+new_cache_limit = StorageManager.set_cache_file_limit(cache_file_limit=100)
 ```
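Putting the calls from this file together, a minimal end-to-end sketch (the bucket, folder, and local paths are placeholders, as in the examples above):

```python
from clearml import StorageManager

# Cap the number of cached files before downloading, then round-trip a file.
StorageManager.set_cache_file_limit(cache_file_limit=100)
local_zip = StorageManager.get_local_copy(
    remote_url="s3://MyBucket/MyFolder/file.zip", cache_context="test"
)
StorageManager.upload_file(
    local_file="/mnt/data/also_file.ext", remote_url="s3://MyBucket/MyFolder"
)
```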