From d0e4d145734671c841cd01f60c26dbe9a1b8454e Mon Sep 17 00:00:00 2001
From: pollfly <75068813+pollfly@users.noreply.github.com>
Date: Tue, 10 Jan 2023 10:29:40 +0200
Subject: [PATCH] Small edits (#433)

---
 docs/configs/clearml_conf.md                  | 12 +++++------
 .../clearml_server_linux_mac.md               |  4 ++--
 .../upgrade_server_aws_ec2_ami.md             |  4 ++--
 docs/guides/storage/examples_storagehelper.md | 21 +++++++++----------
 4 files changed, 20 insertions(+), 21 deletions(-)

diff --git a/docs/configs/clearml_conf.md b/docs/configs/clearml_conf.md
index 48e937d4..cfd6e4cd 100644
--- a/docs/configs/clearml_conf.md
+++ b/docs/configs/clearml_conf.md
@@ -94,7 +94,7 @@ continue running. When set to `true`, the agent crashes when encountering an exc
 
 **`agent.disable_ssh_mount`** (*bool*)
 
-* Set to `true` to disables the auto `.ssh` mount into the docker. The environment variable `CLEARML_AGENT_DISABLE_SSH_MOUNT`
+* Set to `true` to disable the auto `.ssh` mount into the docker. The environment variable `CLEARML_AGENT_DISABLE_SSH_MOUNT`
   overrides this configuration option.
 
 ___
@@ -340,8 +340,8 @@ ___
 
 **`agent.worker_name`** (*string*)
 
-* Use to replace the hostname when creating a worker, if `agent.worker_id` is not specified. For example, if `worker_name`
-  is `MyMachine` and the process_id is `12345`, then the worker is name `MyMachine.12345`.
+* Use to replace the hostname when creating a worker if `agent.worker_id` is not specified. For example, if `worker_name`
+  is `MyMachine` and the `process_id` is `12345`, then the worker is named `MyMachine.12345`.
 
   Alternatively, specify the environment variable `CLEARML_WORKER_ID` to override this worker name.
 
@@ -420,7 +420,7 @@ match_rules: [
 
 **`agent.package_manager.conda_channels`** (*[string]*)
 
-* If conda is used, then this is list of conda channels to use when installing Python packages.
+* If conda is used, then this is the list of conda channels to use when installing Python packages.
 
 ---
@@ -875,13 +875,13 @@ and limitations on bucket naming.
 
 **`sdk.azure.storage.containers.account_name`** (*string*)
 
-* For Azure Storage, this is account name.
+* For Azure Storage, this is the account name.
 
 ---
 
 **`sdk.azure.storage.containers.container_name`** (*string*)
 
-* For Azure Storage, this the container name.
+* For Azure Storage, this is the container name.
 
diff --git a/docs/deploying_clearml/clearml_server_linux_mac.md b/docs/deploying_clearml/clearml_server_linux_mac.md
index aabdbc93..a134ef9d 100644
--- a/docs/deploying_clearml/clearml_server_linux_mac.md
+++ b/docs/deploying_clearml/clearml_server_linux_mac.md
@@ -114,11 +114,11 @@ Deploying the server requires a minimum of 4 GB of memory, 8 GB is recommended.
 
    * Linux:
 
-        sudo chown -R 1000:1000 /opt/clearml
+        sudo chown -R 1000:1000 /opt/clearml
 
    * macOS:
 
-        sudo chown -R $(whoami):staff /opt/clearml
+        sudo chown -R $(whoami):staff /opt/clearml
 
 1. Download the ClearML Server docker-compose YAML file.
 
diff --git a/docs/deploying_clearml/upgrade_server_aws_ec2_ami.md b/docs/deploying_clearml/upgrade_server_aws_ec2_ami.md
index ccbf6d0b..dfe7929e 100644
--- a/docs/deploying_clearml/upgrade_server_aws_ec2_ami.md
+++ b/docs/deploying_clearml/upgrade_server_aws_ec2_ami.md
@@ -21,11 +21,11 @@ Some legacy **Trains Server** AMIs provided an auto-upgrade on restart capabilit
 
 1. Shutdown the ClearML Server executing the following command (which assumes the configuration file is in the environment path).
 
-        docker-compose -f /opt/clearml/docker-compose.yml down
+        docker-compose -f /opt/clearml/docker-compose.yml down
 
    If you are upgrading from **Trains Server**, use this command:
 
-        docker-compose -f /opt/trains/docker-compose.yml down
+        docker-compose -f /opt/trains/docker-compose.yml down
 
 1. [Backing up your data](clearml_server_aws_ec2_ami.md#backing-up-and-restoring-data-and-configuration) is recommended, and if your configuration folder is not empty, backing up your configuration.
 
diff --git a/docs/guides/storage/examples_storagehelper.md b/docs/guides/storage/examples_storagehelper.md
index a335abe0..5f6b18dd 100644
--- a/docs/guides/storage/examples_storagehelper.md
+++ b/docs/guides/storage/examples_storagehelper.md
@@ -19,13 +19,12 @@ class.
 
 The storage examples include:
 
 ### Downloading a File
 
 To download a ZIP file from storage to the `global` cache context, call the [StorageManager.get_local_copy](../../references/sdk/storage.md#storagemanagerget_local_copy)
-method, and specify the destination location as the `remote_url` argument:
+class method, and specify the destination location as the `remote_url` argument:
 
 ```python
-# create a StorageManager instance
-manager = StorageManager()
-
-manager.get_local_copy(remote_url="s3://MyBucket/MyFolder/file.zip")
+from clearml import StorageManager
+
+StorageManager.get_local_copy(remote_url="s3://MyBucket/MyFolder/file.zip")
 ```
 
 :::note
@@ -35,13 +34,13 @@ Zip and tar.gz files will be automatically extracted to cache. This can be contr
 
 To download a file to a specific context in cache, specify the name of the context as the `cache_context` argument:
 
 ```python
-manager.get_local_copy(remote_url="s3://MyBucket/MyFolder/file.ext", cache_context="test")
+StorageManager.get_local_copy(remote_url="s3://MyBucket/MyFolder/file.ext", cache_context="test")
 ```
 
 To download a non-compressed file, set the `extract_archive` argument to `False`.
 
 ```python
-manager.get_local_copy(remote_url="s3://MyBucket/MyFolder/file.ext", extract_archive=False)
+StorageManager.get_local_copy(remote_url="s3://MyBucket/MyFolder/file.ext", extract_archive=False)
 ```
 
 By default, the `StorageManager` reports its download progress to the console every 5MB. You can change this using the
@@ -51,11 +50,11 @@ class method, and specifying the chunk size in MB (not supported for Azure and G
 
 ### Uploading a File
 
 To upload a file to storage, call the [StorageManager.upload_file](../../references/sdk/storage.md#storagemanagerupload_file)
-method. Specify the full path of the local file as the `local_file` argument, and the remote URL as the `remote_url`
+class method. Specify the full path of the local file as the `local_file` argument, and the remote URL as the `remote_url`
 argument.
 
 ```python
-manager.upload_file(
+StorageManager.upload_file(
     local_file="/mnt/data/also_file.ext", remote_url="s3://MyBucket/MyFolder"
 )
 ```
 
@@ -70,9 +69,9 @@ class method, and specifying the chunk size in MB (not supported for Azure and G
 
 ### Setting Cache Limits
 
 To set a limit on the number of files cached, call the [StorageManager.set_cache_file_limit](../../references/sdk/storage.md#storagemanagerset_cache_file_limit)
-method and specify the `cache_file_limit` argument as the maximum number of files. This does not limit the cache size,
+class method and specify the `cache_file_limit` argument as the maximum number of files. This does not limit the cache size,
 only the number of files.
 
 ```python
-new_cache_limit = manager.set_cache_file_limit(cache_file_limit=100)
+new_cache_limit = StorageManager.set_cache_file_limit(cache_file_limit=100)
 ```
\ No newline at end of file