Small edits (#796)
Commit 67cfbb1ef6 (parent: dce8b12932)
@@ -122,7 +122,7 @@ When `clearml-session` is launched, it initializes a task with a unique ID in th

To connect to an existing session:
1. Go to the web UI, find the interactive session task (by default, it's in project "DevOps").
-1. Click on the ID button in the task page's header, and copy the unique ID.
+1. Click the `ID` button in the task page's header to copy the unique ID.
1. Run the following command: `clearml-session --attach <session_id>`.
1. Click on the JupyterLab / VS Code link that is outputted, or connect directly to the SSH session
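For instance, with a copied ID of `aabbcc112233` (a made-up placeholder), the attach command would be:

```console
clearml-session --attach aabbcc112233
```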
@@ -251,6 +251,6 @@ This feature is available under the ClearML Enterprise plan

The ClearML Enterprise Server provides GUI applications for setting up remote sessions in VS Code and JupyterLab. These
apps provide local links to access JupyterLab or VS Code on a remote machine over a secure and encrypted SSH connection,
-letting you use the IDE as if you're running on the target machine itself
+letting you use the IDE as if you're running on the target machine itself.

For more information, see [JupyterLab](../webapp/applications/apps_jupyter_lab.md) and/or [VS Code](../webapp/applications/apps_vscode.md).
@@ -218,7 +218,7 @@ deploy an agent with a different value to what is specified for `agent.default_d
:::note NOTES
* Since configuration fields may contain JSON-parsable values, make sure to always quote strings (otherwise the agent
might fail to parse them)
-* In order to comply with environment variables standards, it is recommended to use only upper-case characters in
+* To comply with environment variables standards, it is recommended to use only upper-case characters in
environment variable keys. For this reason, ClearML Agent will always convert the configuration path specified in the
dynamic environment variable's key to lower-case before overriding configuration values with the environment variable
value.
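Such a dynamic override might look like the sketch below, assuming the `CLEARML_AGENT__<CONFIG_PATH>` key format with `__` separating configuration path segments (the image name is illustrative):

```console
# quote the value so the agent does not JSON-parse it;
# the path in the key is converted to lower-case (agent.default_docker.image)
export CLEARML_AGENT__AGENT__DEFAULT_DOCKER__IMAGE="nvidia/cuda:11.8.0-base-ubuntu20.04"
```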
@@ -321,7 +321,7 @@ Agents can be deployed bare-metal or as dockers in a Kubernetes cluster. ClearML
capabilities to Kubernetes, allows for more flexible automation from code, and gives access to all of ClearML Agent's
features.

-ClearML Agent is deployed onto a Kubernetes cluster through its Kubernetes-Glue which maps ClearML jobs directly to K8s
+ClearML Agent is deployed onto a Kubernetes cluster through its Kubernetes-Glue which maps ClearML jobs directly to K8s
jobs:
* Use the [ClearML Agent Helm Chart](https://github.com/allegroai/clearml-helm-charts/tree/main/charts/clearml-agent) to
spin an agent pod acting as a controller. Alternatively (less recommended) run a [k8s glue script](https://github.com/allegroai/clearml-agent/blob/master/examples/k8s_glue_example.py)
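Spinning up the controller pod with the Helm chart might look like this sketch, assuming the chart is published from the linked `clearml-helm-charts` repository and that credentials and queue settings live in a prepared `values.yaml`:

```console
helm repo add allegroai https://allegroai.github.io/clearml-helm-charts
helm repo update
helm install clearml-agent allegroai/clearml-agent -f values.yaml
```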
@@ -670,7 +670,7 @@ Let's say a server has three queues:
* `quad_gpu`
* `opportunistic`

-An agent can be spun on multiple GPUs (e.g. 8 GPUs, `--gpus 0-7`), and then attached to multiple
+An agent can be spun on multiple GPUs (for example: 8 GPUs, `--gpus 0-7`), and then attached to multiple
queues that are configured to run with a certain amount of resources:

```console
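# (hunk truncated here -- a hedged sketch of the kind of command the block
#  likely contains, assuming the `--dynamic-gpus` option; queue GPU shares
#  are illustrative)
clearml-agent daemon --dynamic-gpus --gpus 0-7 --queue quad_gpu=4 opportunistic=1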
@@ -702,7 +702,7 @@ clearml-agent daemon --services-mode --queue services --create-queue --docker <d
```

To limit the number of simultaneous tasks run in services mode, pass the maximum number immediately after the
-`--services-mode` option (e.g. `--services-mode 5`)
+`--services-mode` option (for example: `--services-mode 5`).

:::note Notes
* `services-mode` currently only supports Docker mode. Each service spins on its own Docker image.
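Putting the full command together with a concurrency cap (the Docker image is an arbitrary placeholder):

```console
clearml-agent daemon --services-mode 5 --queue services --create-queue --docker ubuntu:20.04
```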
@@ -93,7 +93,7 @@ clearml-data remove [-h] [--id ID] [--files [FILES [FILES ...]]]
|Name|Description|Optional|
|---|---|---|
|`--id` | Dataset's ID. Default: previously created / accessed dataset| <img src="/docs/latest/icons/ico-optional-yes.svg" alt="Yes" className="icon size-md center-md" /> |
-|`--files` | Files / folders to remove (wildcard selection is supported, for example: `~/data/*.jpg ~/data/json`). Notice: file path is the path within the dataset, not the local path. For links, you can specify their URL (e.g. `s3://bucket/data`) | <img src="/docs/latest/icons/ico-optional-no.svg" alt="No" className="icon size-md center-md" /> |
+|`--files` | Files / folders to remove (wildcard selection is supported, for example: `~/data/*.jpg ~/data/json`). Notice: file path is the path within the dataset, not the local path. For links, you can specify their URL (for example, `s3://bucket/data`) | <img src="/docs/latest/icons/ico-optional-no.svg" alt="No" className="icon size-md center-md" /> |
|`--non-recursive` | Disable recursive scan of files | <img src="/docs/latest/icons/ico-optional-yes.svg" alt="Yes" className="icon size-md center-md" /> |
|`--verbose` | Verbose reporting | <img src="/docs/latest/icons/ico-optional-yes.svg" alt="Yes" className="icon size-md center-md" />|
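For instance, removing a file by its in-dataset path together with a link by URL (both paths are illustrative):

```console
clearml-data remove --id <dataset_id> --files data/images/001.jpg s3://bucket/data/002.jpg
```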
@@ -107,7 +107,7 @@ Upload the local dataset changes to the server. By default, it's uploaded to the
medium by entering an upload destination. For example:
* A shared folder: `/mnt/shared/folder`
* S3: `s3://bucket/folder`
-* Non-AWS S3-like services (e.g. MinIO): `s3://host_addr:port/bucket`
+* Non-AWS S3-like services (such as MinIO): `s3://host_addr:port/bucket`
* Google Cloud Storage: `gs://bucket-name/folder`
* Azure Storage: `azure://<account name>.blob.core.windows.net/path/to/file`
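Specifying the destination from the command line might look like this, assuming the `--storage` option selects the upload target:

```console
clearml-data upload --id <dataset_id> --storage s3://bucket/folder
```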
@@ -253,7 +253,7 @@ Deletes dataset(s). Pass any of the attributes of the dataset(s) you want to del
request will raise an exception, unless you pass `--entire-dataset` and `--force`. In this case, all matching datasets
will be deleted.

-If a dataset is a parent to a dataset(s), you must pass `--force` in order to delete it.
+If a dataset is a parent to a dataset(s), you must pass `--force` to delete it.

:::caution
Deleting a parent dataset may cause child datasets to lose data!
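For instance, force-deleting a parent dataset by ID (placeholder ID):

```console
clearml-data delete --id <parent_dataset_id> --force
```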
@@ -56,7 +56,7 @@ For datasets created with earlier versions of `clearml`, or if using an earlier
:::

:::info Dataset Version
-Input the dataset's version using the [semantic versioning](https://semver.org) scheme (e.g. `1.0.1`, `2.0`). If a version
+Input the dataset's version using the [semantic versioning](https://semver.org) scheme (for example: `1.0.1`, `2.0`). If a version
is not input, the method tries finding the latest dataset version with the specified `dataset_name` and `dataset_project`
and auto-increments the version number.
:::
@@ -65,7 +65,7 @@ Use the `output_uri` parameter to specify a network storage target to upload the
(such as previews) to. For example:
* A shared folder: `/mnt/share/folder`
* S3: `s3://bucket/folder`
-* Non-AWS S3-like services (e.g. MinIO): `s3://host_addr:port/bucket`
+* Non-AWS S3-like services (such as MinIO): `s3://host_addr:port/bucket`
* Google Cloud Storage: `gs://bucket-name/folder`
* Azure Storage: `azure://<account name>.blob.core.windows.net/path/to/file`
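Tying the two hunks above together, dataset creation might look like this sketch (names and bucket are placeholders):

```python
from clearml import Dataset

dataset = Dataset.create(
    dataset_name="my_dataset",
    dataset_project="datasets",
    dataset_version="1.0.1",          # semantic versioning, as described above
    output_uri="s3://bucket/folder",  # where dataset contents (and previews) are uploaded
)
```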
@@ -209,12 +209,12 @@ dataset.add_external_files(
```

### remove_files()
-To remove files from a current dataset, use the [`Dataset.remove_files`](../references/sdk/dataset.md#remove_files) method.
+To remove files from a current dataset, use [`Dataset.remove_files()`](../references/sdk/dataset.md#remove_files).
Input the path to the folder or file to be removed in the `dataset_path` parameter. The path is relative to the dataset.
-To remove links, specify their URL (e.g. `s3://bucket/file`).
+To remove links, specify their URL (for example, `s3://bucket/file`).

-You can also input a wildcard into `dataset_path` in order to remove a set of files matching the wildcard.
-Set the `recursive` parameter to `True` in order to match all wildcard files recursively
+You can also input a wildcard into `dataset_path` to remove a set of files matching the wildcard.
+Set the `recursive` parameter to `True` to match all wildcard files recursively

For example:
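The truncated example presumably resembles this sketch (`dataset` is the object created earlier; the wildcard is illustrative):

```python
# remove every JPEG under the dataset's data/ folder, matching recursively
dataset.remove_files(dataset_path="data/*.jpg", recursive=True)
```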
@@ -257,7 +257,7 @@ To upload the dataset files to network storage, use the [`Dataset.upload`](../re
Use the `output_url` parameter to specify storage target, such as S3 / GS / Azure. For example:
* A shared folder: `/mnt/share/folder`
* S3: `s3://bucket/folder`
-* Non-AWS S3-like services (e.g. MinIO): `s3://host_addr:port/bucket`
+* Non-AWS S3-like services (such as MinIO): `s3://host_addr:port/bucket`
* Google Cloud Storage: `gs://bucket-name/folder`
* Azure Storage: `azure://<account name>.blob.core.windows.net/path/to/file`
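For instance (bucket is a placeholder, `dataset` the object from earlier):

```python
dataset.upload(output_url="s3://bucket/folder")
```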
@@ -369,7 +369,7 @@ ClearML Task: created new task id=offline-372657bb04444c25a31bc6af86552cc9
ClearML Task: Offline session stored in /home/user/.clearml/cache/offline/b786845decb14eecadf2be24affc7418.zip
```

-Note that in offline mode, any methods that require communicating with the server have no effect (e.g. `squash()`,
+Note that in offline mode, any methods that require communicating with the server have no effect (such as `squash()`,
`finalize()`, `get_local_copy()`, `get()`, `move_to_project()`, etc.).

Upload the offline dataset to the ClearML Server using [`Dataset.import_offline_session()`](../references/sdk/dataset.md#datasetimport_offline_session).
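Importing the stored session afterwards might look like this (the zip path is taken from the log above; the parameter name is an assumption mirroring the task-side API):

```python
from clearml import Dataset

Dataset.import_offline_session(
    session_folder_zip="/home/user/.clearml/cache/offline/b786845decb14eecadf2be24affc7418.zip"
)
```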
@@ -106,7 +106,7 @@ and [example](https://github.com/allegroai/clearml/blob/master/examples/schedule

### TriggerScheduler
The `TriggerScheduler` class facilitates triggering task execution in the case that specific events occur in the system
-(e.g. model publication, dataset creation, task failure). See [code](https://github.com/allegroai/clearml/blob/master/clearml/automation/trigger.py#L148)
+(such as model publication, dataset creation, task failure). See [code](https://github.com/allegroai/clearml/blob/master/clearml/automation/trigger.py#L148)
and [usage example](https://github.com/allegroai/clearml/blob/master/examples/scheduler/trigger_example.py).

## Examples
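A sketch of the trigger flow, assuming the parameter names used in the linked `trigger_example.py` (IDs and names are placeholders):

```python
from clearml.automation import TriggerScheduler

trigger = TriggerScheduler(pooling_frequency_minutes=3)
trigger.add_model_trigger(
    schedule_task_id="<task_to_clone_and_enqueue>",
    schedule_queue="default",
    trigger_project="examples",
    trigger_on_publish=True,  # fire when a model in the project is published
)
trigger.start()
```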
@@ -181,7 +181,7 @@ The default operator for a query is `or`, unless `and` is placed at the beginnin
### Retrieving Models
Retrieve a local copy of a ClearML model through a `Model`/`InputModel` object's [`get_local_copy()`](../references/sdk/model_model.md#get_local_copy).
The method returns a path to a cached local copy of the model. In the case that the model is already cached, you can set
-`force_download` to `True` in order to download a fresh version.
+`force_download` to `True` to download a fresh version.

## Logging Metrics and Plots
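For instance (the model ID is a placeholder):

```python
from clearml import InputModel

model = InputModel(model_id="<model_id>")
local_path = model.get_local_copy(force_download=True)  # bypass the cached copy
```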
@@ -19,7 +19,7 @@ To ensure every run will provide the same results, ClearML controls the determin
:::

:::note
-ClearML object (e.g. task, project) names are required to be at least 3 characters long
+ClearML object (such as task, project) names are required to be at least 3 characters long
:::

```python
@@ -100,8 +100,8 @@ By default, when ClearML is integrated into your script, it automatically captur
and parameters from supported argument parsers. But, you may want to have more control over what your experiment logs.

#### Frameworks
-To control a task's framework logging, use the `auto_connect_frameworks` parameter of the [`Task.init`](../references/sdk/task.md#taskinit)
-method. Turn off all automatic logging by setting the parameter to `False`. For finer grained control of logged frameworks,
+To control a task's framework logging, use the `auto_connect_frameworks` parameter of [`Task.init()`](../references/sdk/task.md#taskinit).
+Turn off all automatic logging by setting the parameter to `False`. For finer grained control of logged frameworks,
input a dictionary, with framework-boolean pairs.

For example:
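The truncated example presumably passes framework-boolean pairs along these lines (the chosen frameworks are illustrative):

```python
from clearml import Task

task = Task.init(
    project_name="example",
    task_name="framework logging control",
    auto_connect_frameworks={"matplotlib": True, "tensorflow": False, "pytorch": True},
)
```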
@@ -165,7 +165,7 @@ auto_connect_arg_parser={}

### Task Reuse
Every `Task.init` call will create a new task for the current execution.
-In order to mitigate the clutter that a multitude of debugging tasks might create, a task will be reused if:
+To mitigate the clutter that a multitude of debugging tasks might create, a task will be reused if:
* The last time it was executed (on this machine) was under 24 hours ago (configurable, see
[`sdk.development.task_reuse_time_window_in_hours`](../configs/clearml_conf.md#task_reuse) in
the ClearML configuration reference)
@@ -183,7 +183,7 @@ The task will continue reporting its outputs based on the iteration in which it
train/loss scalar reported was for iteration 100, when continued, the next report will be as iteration 101.

:::note Reproducibility
-Continued tasks may not be reproducible. In order to guarantee task reproducibility, you must ensure that all steps are
+Continued tasks may not be reproducible. To guarantee task reproducibility, you must ensure that all steps are
done in the same order (e.g. maintaining learning rate profile, ensuring data is fed in the same order).
:::
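Continuation itself is enabled at initialization, a sketch assuming the `continue_last_task` parameter of `Task.init()`:

```python
from clearml import Task

task = Task.init(
    project_name="example",
    task_name="training",
    continue_last_task=True,  # resume reporting from the previous run's last iteration
)
```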
@@ -786,7 +786,7 @@ task = Task.init(
Specify the model storage URI location using the relevant format:
* A shared folder: `/mnt/share/folder`
* S3: `s3://bucket/folder`
-* Non-AWS S3-like services (e.g. MinIO): `s3://host_addr:port/bucket`
+* Non-AWS S3-like services (such as MinIO): `s3://host_addr:port/bucket`
* Google Cloud Storage: `gs://bucket-name/folder`
* Azure Storage: `azure://<account name>.blob.core.windows.net/path/to/file`
:::
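The `Task.init` snippet this hunk belongs to presumably looks like:

```python
from clearml import Task

task = Task.init(
    project_name="example",
    task_name="model storage",
    output_uri="s3://bucket/folder",  # any of the formats listed above
)
```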
@@ -63,7 +63,7 @@ different queue. When a queue detects a task, the autoscaler spins up the approp

![Autoscaler diagram](../img/autoscaler_diagram.png)

The diagram above demonstrates an example where an autoscaler app instance is attached to two queues. Each queue is
-associated with a different resource, CPU and GPU, and each queue has two enqueued tasks. In order to execute the tasks,
+associated with a different resource, CPU and GPU, and each queue has two enqueued tasks. To execute the tasks,
the autoscaler spins up four machines, two CPU machines to execute the tasks in the CPU queue and two GPU machines to
execute the tasks in the GPU queue.
@@ -1248,7 +1248,7 @@ This configuration is deprecated. This plot behavior is now controlled via the U

**`sdk.metrics.images.format`** (*string*)

-* The image file format for generated debug images (e.g., JPEG).
+* The image file format for generated debug images (such as "JPEG").

---
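In `clearml.conf` this might be set as follows (a sketch using the HOCON nesting the file uses throughout):

```
sdk {
    metrics {
        images {
            format: JPEG  # file format for generated debug images
        }
    }
}
```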
@@ -1329,10 +1329,10 @@ This configuration is deprecated. This plot behavior is now controlled via the U
:::important Enterprise features
The ClearML Enterprise plan also supports the following configuration options under `sdk.storage.cache`:
* `size.max_used_bytes` (*str*) - Maximum size of the local cache directory. If set to `-1`, the directory can use
-the available disk space. Specified in storage units (e.g. `1GB`, `2TB`, `500MB`).
+the available disk space. Specified in storage units (for example: `1GB`, `2TB`, `500MB`).
* `size.min_free_bytes` (*str*) - Minimum amount of free disk space that should be left. If `size.max_used_bytes` is
set to `-1`, this configuration will limit the cache directory maximum size to `free disk space - size.min_free_bytes`.
-Specified in storage units (e.g. `1GB`, `2TB`, `500MB`).
+Specified in storage units (for example: `1GB`, `2TB`, `500MB`).
* `zero_file_size_check` (*bool*)- If set to `True`, each cache hit will also check the cached file size, making sure
it is not zero (default `False`)
* `secondary` (*dict*) - Set up a secondary cache (acts as an L2 cache). When a request is made, the primary cache is
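A sketch of how these Enterprise cache options might look in `clearml.conf` (the sizes are arbitrary):

```
sdk {
    storage {
        cache {
            size {
                max_used_bytes: "10GB"  # cap on the cache directory, -1 = use available disk space
                min_free_bytes: "5GB"   # always leave this much disk free
            }
        }
    }
}
```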
@@ -1403,7 +1403,7 @@ base64-encoded contents string, otherwise ignored
* `path` - Target file's path, may include `~` and inplace env vars
* `target_format` - Format used to encode contents before writing into the target file. Supported values are `json`, `yaml`,
`yml`, and `bytes` (in which case the file will be written in binary mode). Default is text mode.
-* `mode` - File-system mode (permissions) to apply to the file after its creation. The mode string will be parsed into an integer (e.g. `"0o777"` for `-rwxrwxrwx`)
+* `mode` - File-system mode (permissions) to apply to the file after its creation. The mode string will be parsed into an integer (for example: `"0o777"` for `-rwxrwxrwx`)
* `overwrite` - Overwrite the target file in case it exists. Default is `true`.

Example:
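The truncated example presumably resembles the following sketch, assuming the enclosing block is the `files` section this page documents (file name, contents, and path are illustrative):

```
files {
  my_file {
    contents: "hello world"
    path: "~/hello.txt"
    mode: "0o644"     # -rw-r--r--
    overwrite: true
  }
}
```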
@@ -69,7 +69,7 @@ By default, ClearML Server launches with unrestricted access. To restrict ClearM
instructions in the [Security](clearml_server_security.md) page.
:::

-To launch ClearML Server using a GCP Custom Image, see the [Manually importing virtual disks](https://cloud.google.com/compute/docs/import/import-existing-image#overview) in the "Google Cloud Storage" documentation, [Compute Engine documentation](https://cloud.google.com/compute/docs). For more information on Custom Images, see [Custom Images](https://cloud.google.com/compute/docs/images#custom_images) in the "Compute Engine documentation".
+To launch ClearML Server using a GCP Custom Image, see the [Manually importing virtual disks](https://cloud.google.com/compute/docs/import/import-existing-image#overview) in the "Google Cloud Storage" documentation, [Compute Engine documentation](https://cloud.google.com/compute/docs). For more information about Custom Images, see [Custom Images](https://cloud.google.com/compute/docs/images#custom_images) in the "Compute Engine documentation".

The minimum requirements for ClearML Server are:
docs/faq.md (13 changes)
@@ -129,7 +129,10 @@ When a new ClearML Server version is available, the notification is:
#### How do I find out ClearML version information? <a id="versions"></a>

ClearML server version information is available in the ClearML WebApp **Settings** page. On the bottom right of the page,
-it says **Version**, followed by three numbers: the web application version, the API server version, and the API version.
+the following numbers are displayed:
+* Web application version
+* API server version
+* API version
@@ -576,8 +579,8 @@ coordinates plot:

#### I want to add more graphs, not just with TensorBoard. Is this supported? <a id="more-graph-types"></a>

-Yes! The [Logger](fundamentals/logger.md) module includes methods for explicit reporting. For examples of explicit reporting, see the [Explicit Reporting](guides/reporting/explicit_reporting.md)
-tutorial, which includes a list of methods for explicit reporting.
+Yes! The [`Logger`](fundamentals/logger.md) module includes methods for explicit reporting. For examples of explicit reporting, see the [Explicit Reporting](guides/reporting/explicit_reporting.md)
+tutorial.

<br/>
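For instance, a minimal explicit-reporting call:

```python
from clearml import Task

task = Task.init(project_name="example", task_name="explicit reporting")
logger = task.get_logger()
logger.report_scalar(title="loss", series="train", value=0.42, iteration=0)
```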
@@ -638,7 +641,7 @@ the experiment's ID. If the experiment's ID is `6ea4f0b56d994320a713aeaf13a86d9d

ClearML supports other storage types for `output_uri`:
* S3: `s3://bucket/folder`
-* Non-AWS S3-like services (e.g. MinIO): `s3://host_addr:port/bucket`
+* Non-AWS S3-like services (such as MinIO): `s3://host_addr:port/bucket`
* Google Cloud Storage: `gs://bucket-name/folder`
* Azure Storage: `azure://<account name>.blob.core.windows.net/path/to/file`
@@ -767,7 +770,7 @@ If the thread does not complete, it times out.

This can occur for scripts that do not import any packages, for example short test scripts.

-To fix this issue, you could import the `time` package and add a `time.sleep(20)` statement to the end of your script.
+To fix this issue, you can import the `time` package and add a `time.sleep(20)` statement to the end of your script.

## scikit-learn
@@ -33,7 +33,7 @@ the following types of parameters:
as well as values overridden during runtime.

:::tip Disabling Automatic Logging
-Automatic logging can be disabled. See this [FAQ](../faq.md#controlling_logging).
+Automatic logging can be disabled. See [Control Automatic Logging](../clearml_sdk/task_sdk.md#control-automatic-logging).
:::

### Environment Variables
@@ -28,7 +28,7 @@ on model performance, saving and comparing these between experiments is sometime

ClearML supports logging `argparse` module arguments out of the box, so once ClearML is integrated into the code, it automatically logs all parameters provided to the argument parser.

-You can also log parameter dictionaries (very useful when parsing an external config file and storing as a dict object),
+You can also log parameter dictionaries (very useful when parsing an external configuration file and storing as a dict object),
whole configuration files, or even custom objects or [Hydra](https://hydra.cc/docs/intro/) configurations!

```python
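# (hunk truncated here -- the original block presumably connects a parameter
#  dict along these lines; `task` is assumed to be an initialized Task)
params = {"batch_size": 64, "learning_rate": 0.001}
params = task.connect(params)  # logs the dict and keeps it synced with the task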
@@ -21,7 +21,7 @@ keywords: [mlops, components, Experiment Manager]

<Collapsible type="info" title="Video Transcript">
Welcome to ClearML. In this video, we’ll go deeper into some of the best practices and advanced tricks you can use while working with ClearML experiment management.

<br/>
The first thing to know is that the Task object is the central pillar of both the experiment manager and the orchestration and automation components. This means that if you manage the task well in the experiment phase, it will be much easier to scale to production later down the line.

So let’s take a look at the task object in more detail. We have inputs called hyperparameters and configuration objects for external config files. Outputs can be anything like we saw in the last video. Things like debug images, plots and console output kind of speak for themselves, so the ones we’ll focus on here are scalars and artifacts.
@@ -76,7 +76,7 @@ Make use of the container you've just built by having a ClearML agent make use o
:::

This agent will pull the enqueued task and run it using the `new_docker` image to create the execution environment.
-In the task's **CONSOLE** tab, one of the first logs should be:
+In the task's **CONSOLE** tab, one of the first logs displays the following:

```console
Executing: ['docker', 'run', ..., 'CLEARML_DOCKER_IMAGE=new_docker', ...].
@@ -12,7 +12,7 @@ and running, users can send Tasks to be executed on Google Colab's hardware.

## Prerequisites
* Be signed up for ClearML (or have a server deployed).
-* Have a Google account to access Google Colab
+* Have a Google account to access Google Colab.


## Steps
@@ -20,7 +20,7 @@ example script from ClearML's GitHub repo:
## Before Starting

Make a copy of [pytorch_mnist.py](https://github.com/allegroai/clearml/blob/master/examples/frameworks/pytorch/pytorch_mnist.py)
-in order to add explicit reporting to it.
+to add explicit reporting to it.

```bash
cp pytorch_mnist.py pytorch_mnist_tutorial.py
@@ -87,7 +87,7 @@ Auto refresh allows monitoring the progress of experiments in real time. It is e

**To enable / disable auto refresh:**

-* Hover over refresh and then check / uncheck the **Auto Refresh** checkbox.
+* Hover over refresh and then check / clear the **Auto Refresh** checkbox.

## Step 6: Save the Tracking Leaderboard
@@ -223,7 +223,7 @@ frame = SingleFrame(
For the ClearML UI to be able to show frames stored in non-AWS S3-like services (e.g. MinIO), make sure the `preview_uri` link
uses the `s3://` prefix and explicitly specifies the port number in the URL (e.g. `s3://my_address.com:80/bucket/my_image.png`).

-Additionally, make sure to provide cloud storage access in the WebApp [**Settings > Web App Cloud Access**](../webapp/webapp_profile.md#browser-cloud-storage-access).
+Additionally, make sure to provide cloud storage access in the WebApp [**Settings > Configuration > Web App Cloud Access**](../webapp/webapp_profile.md#browser-cloud-storage-access).
Input `<host_address>:<port_number>` in the **Host** field.
:::
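The surrounding `SingleFrame(` snippet presumably sets `preview_uri` along these lines (address and bucket are illustrative; the exact parameter set is an assumption):

```python
frame = SingleFrame(
    source="s3://my_address.com:80/bucket/my_image.png",
    preview_uri="s3://my_address.com:80/bucket/my_image.png",  # s3:// prefix + explicit port
)
```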
@@ -24,7 +24,7 @@ And that's it! This creates a [ClearML Task](../fundamentals/task.md) which capt
* Scalars (loss, learning rates)
* Console output
* General details such as machine details, runtime, creation date etc.
-* Hyperparameters created with standard python packages (e.g. argparse, click, Python Fire, etc.)
+* Hyperparameters created with standard python packages (such as argparse, click, Python Fire, etc.)
* And more

You can view all the task details in the [WebApp](../webapp/webapp_exp_track_visual.md).
@@ -28,7 +28,7 @@ ClearML logs the OmegaConf as a blob and can be viewed in the
## Modifying Hydra Values

### Via Command Line
-You can use Hydra's command line syntax to modify your OmegaConf: override, append, or remove config values:
+You can use Hydra's command line syntax to modify your OmegaConf: override, append, or remove configuration values:
* Override config value: `foo.bar=value`
* Append config value: `+foo.bar=value`
* Remove config value: `~foo.bar` or `~foo.bar=value`
@@ -46,7 +46,7 @@ ClearML uses `/` as a delimiter for subprojects: using `example/sample` as a nam
task within the `example` project.
:::

-In order to log the models created during training, set the `CLEARML_LOG_MODEL` environment variable to `True`.
+To log the models created during training, set the `CLEARML_LOG_MODEL` environment variable to `True`.

You can see all the captured data in the task's page of the ClearML [WebApp](../webapp/webapp_exp_track_visual.md).
@@ -74,7 +74,7 @@ See [Automatic Logging](clearml_sdk/task_sdk.md#automatic-logging) for more info

### Manual Logging

-You can explicitly specify an experiment’s models using ClearML InputModel and OutputModel classes.
+You can explicitly specify an experiment’s models using ClearML `InputModel` and `OutputModel` classes.

#### InputModel
@@ -38,7 +38,7 @@ def main(pickle_url, mock_parameter='mock'):

* `name` - The name for the pipeline controller task
* `project` - The ClearML project where the pipeline controller task is stored
-* `version` - Numbered version string (e.g. `1.2.3`). If not set, find the pipeline's latest version and increment
+* `version` - Numbered version string (for example, `1.2.3`). If not set, find the pipeline's latest version and increment
it. If no such version is found, defaults to `1.0.0`
* `default_queue` - The default [ClearML Queue](../fundamentals/agents_and_queues.md#what-is-a-queue) in which to enqueue all pipeline steps (unless otherwise specified in the pipeline step).
* `args_map` - Map arguments to their [configuration section](../fundamentals/hyperparameters.md#webapp-interface) in
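The decorator call above the `main` function presumably resembles this sketch (names and values are illustrative):

```python
from clearml.automation.controller import PipelineDecorator

@PipelineDecorator.pipeline(
    name="pipeline demo",
    project="examples",
    version="1.2.3",          # numbered version string, as described above
    default_queue="default",
)
def main(pickle_url, mock_parameter="mock"):
    ...
```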
@@ -16,7 +16,7 @@ pipe = PipelineController(

* `name` - The name for the pipeline controller task
* `project` - The ClearML project where the pipeline tasks will be created.
-* `version` - Numbered version string (e.g. `1.2.3`). If not set, find the pipeline's latest version and increment
+* `version` - Numbered version string (for example, `1.2.3`). If not set, find the pipeline's latest version and increment
it. If no such version is found, defaults to `1.0.0`

See [PipelineController](../references/sdk/automation_controller_pipelinecontroller.md) for all arguments.
@@ -109,7 +109,7 @@ See [`PipelineController.add_step`](../references/sdk/automation_controller_pipe
#### parameter_override
Use the `parameter_override` argument to modify the step's parameter values. The `parameter_override` dictionary key is
the task parameter's full path, which includes the parameter section's name and the parameter name separated by a slash
-(e.g. `'General/dataset_url'`). Passing `"${}"` in the argument value lets you reference input/output configurations
+(for example, `'General/dataset_url'`). Passing `"${}"` in the argument value lets you reference input/output configurations
from other pipeline steps. For example: `"${<step_name>.id}"` will be converted to the Task ID of the referenced pipeline
step.
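A sketch of an override that references another step's output (`pipe` is the controller from earlier; step and task names follow the standard pipeline example):

```python
pipe.add_step(
    name="stage_process",
    parents=["stage_data"],
    base_task_project="examples",
    base_task_name="pipeline step 2 process dataset",
    parameter_override={
        # full path: section name / parameter name
        "General/dataset_url": "${stage_data.artifacts.dataset.url}",
    },
)
```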
@@ -189,7 +189,7 @@ ClearmlJob.

If the callback returned value is False, the step is skipped and so is any step in the pipeline that relies on this step.

-Notice the parameters are already parsed (e.g. `${step1.parameters.Args/param}` is replaced with relevant value).
+Notice the parameters are already parsed (for example, `${step1.parameters.Args/param}` is replaced with relevant value).

```python
def step_created_callback(
@@ -19,7 +19,7 @@ on completed/failed Tasks via Slack integration.
* Entire workspace - Monitor all projects in your workspace

:::caution
-If your workspace or specified project contains a large number of experiments, the dashboard could take a while to update
+If your workspace or specified project contains a large number of experiments, the dashboard can take a while to update.
:::

* **Monitored Metric** - Specify a metric for the app instance to monitor. The dashboard will present an aggregated view
@@ -135,7 +135,7 @@ used.
```
src="<web_server>/widgets/?objectType=task&xaxis=iter&type=scalar&metrics=<metric_name>&variants=<variant>&project=<project_id>&page_size=1&page=0&order_by[]=-last_update
```
-Notice that the `project` parameter is specified. In order to get the most recent single experiment,
+Notice that the `project` parameter is specified. To get the most recent single experiment,
`page_size=1&page=0&order_by[]=-last_update` is added. `page_size` specifies how many results are returned in each
page, and `page` specifies which page to return (in this case the first page)--this way you can specify how many
experiments you want in your graph. `order_by[]=-last_update` orders the results by update time in descending order
@@ -43,7 +43,7 @@ const features = [
imageUrl: 'img/ico-data-management.svg',
description: (
<>
-<code>ClearML-Data</code> enables you to <b>abstract the Data from your Code</b>.
+<code>ClearML-Data</code> lets you <b>abstract the Data from your Code</b>.
CLI / programmatic interface easily create datasets from anywhere.
ClearML-Data is a fully differentiable solution on top of object-storage / http / NAS layer.
<b> We solve your data localization problem, so you can process it anywhere.</b>