Merge branch 'main' of https://github.com/allegroai/clearml-docs
@@ -27,9 +27,9 @@ of the optimization results in table and graph forms.
 |`--args`| List of `<argument>=<value>` strings to pass to the remote execution. Currently only argparse/click/hydra/fire arguments are supported. Example: `--args lr=0.003 batch_size=64`|<img src="/docs/latest/icons/ico-optional-no.svg" alt="No" className="icon size-md center-md" />|
 |`--compute-time-limit`|The maximum compute time in minutes that a task can consume. If this time limit is exceeded, all jobs are aborted.|<img src="/docs/latest/icons/ico-optional-no.svg" alt="No" className="icon size-md center-md" />|
 |`--max-iteration-per-job`|The maximum iterations (of the objective metric) per single job. When iteration maximum is exceeded, the job is aborted.|<img src="/docs/latest/icons/ico-optional-no.svg" alt="No" className="icon size-md center-md" />|
-|`--max-number-of-concurrent-tasks`|The maximum number of concurrent Tasks running at the same time|<img src="/docs/latest/icons/ico-optional-no.svg" alt="No" className="icon size-md center-md" />|
+|`--max-number-of-concurrent-tasks`|The maximum number of concurrent Tasks running at the same time.|<img src="/docs/latest/icons/ico-optional-no.svg" alt="No" className="icon size-md center-md" />|
 |`--min-iteration-per-job`|The minimum iterations (of the objective metric) per single job.|<img src="/docs/latest/icons/ico-optional-no.svg" alt="No" className="icon size-md center-md" />|
-|`--local`| If set, run the tasks locally. Notice that no new python environment will be created. The `--script` parameter must point to a local file entry point and all arguments must be passed with `--args`| <img src="/docs/latest/icons/ico-optional-no.svg" alt="No" className="icon size-md center-md" />|
+|`--local`| If set, run the tasks locally. Notice that no new Python environment will be created. The `--script` parameter must point to a local file entry point and all arguments must be passed with `--args`.| <img src="/docs/latest/icons/ico-optional-no.svg" alt="No" className="icon size-md center-md" />|
 |`--objective-metric-series`| Objective metric series to maximize/minimize (e.g. 'loss').|<img src="/docs/latest/icons/ico-optional-yes.svg" alt="Yes" className="icon size-md center-md" />|
 |`--objective-metric-sign`| Optimization target, whether to maximize or minimize the value of the objective metric specified. Possible values: "min", "max", "min_global", "max_global". For more information, see [Optimization Objective](#optimization-objective). |<img src="/docs/latest/icons/ico-optional-yes.svg" alt="Yes" className="icon size-md center-md" />|
 |`--objective-metric-title`| Objective metric title to maximize/minimize (e.g. 'validation').|<img src="/docs/latest/icons/ico-optional-yes.svg" alt="Yes" className="icon size-md center-md" />|
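The `--args` values in the table above are plain `<argument>=<value>` strings. As a rough illustration of the format only (a simplification, not ClearML's actual parser), such strings can be viewed as key/value overrides:

```python
# Rough illustration of the `<argument>=<value>` format accepted by `--args`.
# The split logic here is a simplification for clarity, not ClearML's parser.
arg_strings = ["lr=0.003", "batch_size=64"]  # as in `--args lr=0.003 batch_size=64`
overrides = dict(s.split("=", 1) for s in arg_strings)
print(overrides)
```

Each value arrives as a string; the remote script's own argparse/click/hydra/fire definitions determine the final types.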
@@ -101,7 +101,7 @@ When `clearml-session` is launched, it initializes a task with a unique ID in th
 
 To connect to an existing session:
 
 1. Go to the web UI, find the interactive session task (by default, it's in project "DevOps").
-1. Click the `ID` button in the task page's header to copy the unique ID.
+1. Copy the unique ID by clicking the `ID` button in the task page's header.
 1. Run the following command: `clearml-session --attach <session_id>`.
 1. Click on the JupyterLab / VS Code link that is outputted, or connect directly to the SSH session
 
@@ -179,7 +179,7 @@ The Task must be connected to a git repository, since currently single script de
 :::
 
 1. In the **ClearML web UI**, find the task that needs debugging.
-1. Click the `ID` button next to the Task name, and copy the unique ID.
+1. Copy the unique ID by clicking the `ID` button in the task page's header.
 1. Enter the following command: `clearml-session --debugging-session <task_id>`
 1. Click on the JupyterLab / VS Code link, or connect directly to the SSH session.
 1. In JupyterLab / VS Code, access the task's repository in the `environment/task_repository` folder.
@@ -253,9 +253,9 @@ clearml-session --continue-session <session_id> --store-workspace ~/workspace
 | `--username`| Set your own SSH username for the interactive session | `root` or a previously used username |
 | `--verbose` | Increase verbosity of logging | `none` |
 | `--version`| Display the clearml-session utility version| N/A|
-| `--vscode-extensions` |Install additional VSCode extensions and VSCode python extensions (example: `ms-python.python,ms-python.black-formatter,ms-python.pylint,ms-python.flake8`)|`none`|
+| `--vscode-extensions` |Install additional VSCode extensions and VSCode Python extensions (example: `ms-python.python,ms-python.black-formatter,ms-python.pylint,ms-python.flake8`)|`none`|
 | `--vscode-server` | Install VSCode on interactive session | `true` |
-| `--vscode-version` | Set VSCode server (code-server) version, as well as VSCode python extension version `<vscode:python-ext>` (example: "3.7.4:2020.10.332292344")| `4.14.1:2023.12.0`|
+| `--vscode-version` | Set VSCode server (code-server) version, as well as VSCode Python extension version `<vscode:python-ext>` (example: "3.7.4:2020.10.332292344")| `4.14.1:2023.12.0`|
 | `--yes`, `-y`| Automatic yes to prompts; assume "yes" as answer to all prompts and run non-interactively |N/A|
 
 </div>
 
@@ -35,7 +35,7 @@ The preceding diagram demonstrates a typical flow where an agent executes a task
 1. Install any required system packages.
 1. Clone the code from a git repository.
 1. Apply any uncommitted changes recorded.
-1. Set up the python environment and required packages.
+1. Set up the Python environment and required packages.
 1. The task's script/code is executed.
 
 :::note Python Version
@@ -38,7 +38,7 @@ but can be overridden by command-line arguments.
 |**CLEARML_AGENT_EXTRA_DOCKER_ARGS** | Overrides extra docker args configuration |
 |**CLEARML_AGENT_EXTRA_DOCKER_LABELS** | List of labels to add to docker container. See [Docker documentation](https://docs.docker.com/config/labels-custom-metadata/). |
 |**CLEARML_EXTRA_PIP_INSTALL_FLAGS**| List of additional flags to use when the agent installs packages. For example: `CLEARML_EXTRA_PIP_INSTALL_FLAGS=--use-deprecated=legacy-resolver` for a single flag or `CLEARML_EXTRA_PIP_INSTALL_FLAGS="--use-deprecated=legacy-resolver --no-warn-conflicts"` for multiple flags|
-|**CLEARML_AGENT_EXTRA_PYTHON_PATH** | Sets extra python path |
+|**CLEARML_AGENT_EXTRA_PYTHON_PATH** | Sets extra Python path |
 |**CLEARML_AGENT_INITIAL_CONNECT_RETRY_OVERRIDE** | Overrides initial server connection behavior (true by default), allows explicit number to specify number of connect retries |
 |**CLEARML_AGENT_NO_UPDATE** | Boolean. Set to `1` to skip agent update in the k8s pod container before the agent executes the task |
 |**CLEARML_AGENT_K8S_HOST_MOUNT / CLEARML_AGENT_DOCKER_HOST_MOUNT** | Specifies Agent's mount point for Docker / K8s |
@@ -47,7 +47,7 @@ but can be overridden by command-line arguments.
 |**CLEARML_AGENT_PACKAGE_PYTORCH_RESOLVE**|Sets the PyTorch resolving mode. The options are: <ul><li>`none` - No resolving. Install PyTorch like any other package</li><li>`pip` (default) - Sets extra index based on cuda and lets pip resolve</li><li>`direct` - Resolve a direct link to the PyTorch wheel by parsing the pytorch.org pip repository, and matching the automatically detected cuda version with the required PyTorch wheel. If the exact cuda version is not found for the required PyTorch wheel, it will try a lower cuda version until a match is found</li></ul> |
 |**CLEARML_AGENT_DEBUG_INFO** | Provide additional debug information for a specific context (currently only the `docker` value is supported) |
 |**CLEARML_AGENT_CHILD_AGENTS_COUNT_CMD** | Provide an alternate bash command to list child agents while working in services mode |
-|**CLEARML_AGENT_SKIP_PIP_VENV_INSTALL** | Instead of creating a new virtual environment inheriting from the system packages, use an existing virtual environment and install missing packages directly to it. Specify the python binary of the existing virtual environment. For example: `CLEARML_AGENT_SKIP_PIP_VENV_INSTALL=/home/venv/bin/python` |
-|**CLEARML_AGENT_SKIP_PYTHON_ENV_INSTALL** | If set to `1`, the agent will not install any required python packages and will just use the preexisting python environment to run the task. |
+|**CLEARML_AGENT_SKIP_PIP_VENV_INSTALL** | Instead of creating a new virtual environment inheriting from the system packages, use an existing virtual environment and install missing packages directly to it. Specify the Python binary of the existing virtual environment. For example: `CLEARML_AGENT_SKIP_PIP_VENV_INSTALL=/home/venv/bin/python` |
+|**CLEARML_AGENT_SKIP_PYTHON_ENV_INSTALL** | If set to `1`, the agent will not install any required Python packages and will just use the preexisting Python environment to run the task. |
 |**CLEARML_AGENT_VENV_CACHE_PATH** | Overrides venv cache folder configuration |
 |**CLEARML_MULTI_NODE_SINGLE_TASK**| Control how multi-node resource monitoring is reported. The options are: <ul><li>`-1` - Only master node's (rank zero) console/resources are reported</li><li>`1` - Graph per node i.e. machine/GPU graph for every node (console output prefixed with RANK)</li><li>`2` - Series per node under a unified machine resource graph, graph per type of resource e.g. CPU/GPU utilization (console output prefixed with RANK)</li></ul>|
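As a sketch of how the table's flag variables are combined, `CLEARML_EXTRA_PIP_INSTALL_FLAGS` holds a plain space-separated string, so multiple flags are simply quoted together (the flag values below come from the table's own example):

```shell
# Space-separated pip flags, quoted so the agent receives them as one variable.
export CLEARML_EXTRA_PIP_INSTALL_FLAGS="--use-deprecated=legacy-resolver --no-warn-conflicts"
echo "$CLEARML_EXTRA_PIP_INSTALL_FLAGS"
```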
@@ -36,14 +36,14 @@ lineage and content information. See [dataset UI](../webapp/datasets/webapp_data
 
 ## Setup
 
-`clearml-data` comes built-in with the `clearml` python package! Check out the [Getting Started](../getting_started/ds/ds_first_steps.md)
+`clearml-data` comes built-in with the `clearml` Python package! Check out the [Getting Started](../getting_started/ds/ds_first_steps.md)
 guide for more info!
 
 ## Using ClearML Data
 
 ClearML Data supports two interfaces:
 - `clearml-data` - A CLI utility for creating, uploading, and managing datasets. See [CLI](clearml_data_cli.md) for a reference of `clearml-data` commands.
-- `clearml.Dataset` - A python interface for creating, retrieving, managing, and using datasets. See [SDK](clearml_data_sdk.md) for an overview of the basic methods of the `Dataset` module.
+- `clearml.Dataset` - A Python interface for creating, retrieving, managing, and using datasets. See [SDK](clearml_data_sdk.md) for an overview of the basic methods of the `Dataset` module.
 
 For an overview of recommendations for ClearML Data workflows and practices, see [Best Practices](best_practices.md).
 
@@ -7,7 +7,7 @@ This page covers `clearml-data`, ClearML's file-based data management solution.
 See [Hyper-Datasets](../hyperdatasets/overview.md) for ClearML's advanced queryable dataset management solution.
 :::
 
-`clearml-data` is a data management CLI tool that comes as part of the `clearml` python package. Use `clearml-data` to
+`clearml-data` is a data management CLI tool that comes as part of the `clearml` Python package. Use `clearml-data` to
 create, modify, and manage your datasets. You can upload your dataset to any storage service of your choice (S3 / GS /
 Azure / Network Storage) by setting the dataset's upload destination (see [`--storage`](#upload)). Once you have uploaded
 your dataset, you can access it from any machine.
@@ -7,7 +7,7 @@ This page covers `clearml-data`, ClearML's file-based data management solution.
 See [Hyper-Datasets](../hyperdatasets/overview.md) for ClearML's advanced queryable dataset management solution.
 :::
 
-Datasets can be created, modified, and managed with ClearML Data's python interface. You can upload your dataset to any
+Datasets can be created, modified, and managed with ClearML Data's Python interface. You can upload your dataset to any
 storage service of your choice (S3 / GS / Azure / Network Storage) by setting the dataset's upload destination (see
 [`output_url`](#uploading-files) parameter of `Dataset.upload()`). Once you have uploaded your dataset, you can access
 it from any machine.
@@ -10,7 +10,7 @@ class to ingest the data.
 ### Downloading the Data
 Before registering the CIFAR dataset with `clearml-data`, you need to obtain a local copy of it.
 
-Execute this python script to download the data:
+Execute this Python script to download the data:
 ```python
 from clearml import StorageManager
 
@@ -60,7 +60,7 @@ Nesting projects works on multiple levels. For example: `project_name=main_proje
 ### Automatic Logging
 After invoking `Task.init` in a script, ClearML starts its automagical logging, which includes the following elements:
 * **Hyperparameters** - ClearML logs the following types of hyperparameters:
-  * Command Line Parsing - ClearML captures any command line parameters passed when invoking code that uses standard python packages, including:
+  * Command Line Parsing - ClearML captures any command line parameters passed when invoking code that uses standard Python packages, including:
     * [click](../integrations/click.md)
     * [argparse](../guides/reporting/hyper_parameters.md#argparse-command-line-options)
     * [Python Fire](../integrations/python_fire.md)
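As a minimal sketch of the command-line capture described in the hunk above: once `Task.init` runs, a standard `argparse` parser's values are picked up automatically. The project and task names below are hypothetical placeholders, and the `Task.init` call is commented out because it needs a configured ClearML environment:

```python
import argparse

# from clearml import Task
# task = Task.init(project_name="examples", task_name="argparse demo")  # hypothetical names

parser = argparse.ArgumentParser()
parser.add_argument("--lr", type=float, default=0.003)
parser.add_argument("--batch_size", type=int, default=64)

# Simulate `python train.py --lr 0.01`; with Task.init active, ClearML would
# log both the passed value and the untouched default.
args = parser.parse_args(["--lr", "0.01"])
print(args.lr, args.batch_size)
```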
@@ -89,7 +89,7 @@ After invoking `Task.init` in a script, ClearML starts its automagical logging,
 
 * **Execution details** including:
   * Git information
-  * Uncommitted code modifications - In cases where no git repository is detected (e.g. when a single python script is
+  * Uncommitted code modifications - In cases where no git repository is detected (e.g. when a single Python script is
     executed outside a git repository, or when running from a Jupyter Notebook), ClearML logs the contents
     of the executed script
   * Python environment
@@ -257,7 +257,7 @@ task's status. If a task failed or was aborted, you can view how much progress i
 
 </div>
 
-Additionally, you can view a task's progress in its [INFO](../webapp/webapp_exp_track_visual.md#general-information) tab
+Additionally, you can view a task's progress in its [INFO](../webapp/webapp_exp_track_visual.md#info) tab
 in the WebApp.
 
@@ -16,7 +16,7 @@ solution.
 * Flexible
   * On-line model deployment
   * On-line endpoint model/version deployment (i.e. no need to take the service down)
-  * Per model standalone preprocessing and postprocessing python code
+  * Per model standalone preprocessing and postprocessing Python code
 * Scalable
   * Multi model per container
   * Multi models per serving service
@@ -84,7 +84,7 @@ project (default: "DevOps" project).
 
 ## Registering and Deploying New Models Manually
 
-Uploading an existing model file into the model repository can be done via the `clearml` RestAPI, the python interface,
+Uploading an existing model file into the model repository can be done via the `clearml` RestAPI, the Python interface,
 or with the `clearml-serving` CLI.
 
 1. Upload the model file to the `clearml-server` file storage and register it. The `--path` parameter is used to input
@@ -339,13 +339,13 @@ optional shell script executes inside the Docker on startup, before the task sta
 
 **`agent.ignore_requested_python_version`** (*bool*)
 
-* Indicates whether to ignore any requested python version
+* Indicates whether to ignore any requested Python version
 
 * The values are:
 
-  * `true` - ignore any requested python version
-  * `false` - if a task was using a specific python version, and the system supports multiple versions, the agent will
-    use the requested python version (default)
+  * `true` - ignore any requested Python version
+  * `false` - if a task was using a specific Python version, and the system supports multiple versions, the agent will
+    use the requested Python version (default)
 
 ___
 
docs/faq.md
@@ -139,7 +139,7 @@ the following numbers are displayed:
 
 
 
-ClearML python package information can be obtained by using `pip freeze`.
+ClearML Python package information can be obtained by using `pip freeze`.
 
 For example:
 
@@ -161,7 +161,7 @@ clearml-session==0.3.2
 #### How can I sort models by a certain metric? <a id="custom-columns"></a>
 
 To sort models by a metric, in the ClearML Web UI,
-add a [custom column](webapp/webapp_model_table.md#customizing-the-models-table) in the models table and sort by
+add a [custom column](webapp/webapp_model_table.md#customizing-the-models-table) to the model table and sort by
 that metric column. Available custom column options depend upon the models in the table and the metrics that have been
 attached to them (see [Logging Metrics and Plots](clearml_sdk/model_sdk.md#logging-metrics-and-plots)).
@@ -324,7 +324,7 @@ For more task configuration options, see [Hyperparameters](fundamentals/hyperpar
 
 <br/>
 
-#### I noticed that all of my tasks appear as "Training". Are there other options? <a id="other-experiment-types"></a>
+#### I noticed that all of my tasks appear as "Training". Are there other options? <a id="other-task-types"></a>
 
 Yes! ClearML supports [multiple task types](fundamentals/task.md#task-types). When creating tasks and
 calling [`Task.init()`](references/sdk/task.md#taskinit), you can provide a task type. For example:
@@ -336,7 +336,7 @@ task = Task.init(project_name, task_name, Task.TaskTypes.testing)
 
 <br/>
 
-#### Sometimes I see tasks as running when in fact they are not. What's going on? <a id="experiment-running-but-stopped"></a>
+#### Sometimes I see tasks as running when in fact they are not. What's going on? <a id="task-running-but-stopped"></a>
 
 ClearML monitors your Python process. When the process exits properly, ClearML closes the task. When the process crashes and terminates abnormally, it sometimes misses the stop signal. In this case, you can safely right-click the task in the WebApp and abort it.
@@ -358,7 +358,7 @@ pip install -U clearml
 
 Your firewall may be preventing the connection. Try one of the following solutions:
 
-* Direct python "requests" to use the enterprise certificate file by setting the OS environment variables `CURL_CA_BUNDLE` or `REQUESTS_CA_BUNDLE`. For a detailed discussion of this topic, see [https://stackoverflow.com/questions/48391750/disable-python-requests-ssl-validation-for-an-imported-module](https://stackoverflow.com/questions/48391750/disable-python-requests-ssl-validation-for-an-imported-module).
+* Direct Python "requests" to use the enterprise certificate file by setting the OS environment variables `CURL_CA_BUNDLE` or `REQUESTS_CA_BUNDLE`. For a detailed discussion of this topic, see [https://stackoverflow.com/questions/48391750/disable-python-requests-ssl-validation-for-an-imported-module](https://stackoverflow.com/questions/48391750/disable-python-requests-ssl-validation-for-an-imported-module).
 * Disable certificate verification
 
 :::warning
@@ -48,7 +48,7 @@ The diagram above demonstrates a typical flow where an agent executes a task:
 1. Install any required system packages.
 1. Clone the code from a git repository.
 1. Apply any uncommitted changes recorded.
-1. Set up the python environment and required packages.
+1. Set up the Python environment and required packages.
 1. The task's script/code is executed.
 
 While the agent is running, it continuously reports system metrics to the ClearML Server. You can monitor these metrics
@@ -21,7 +21,7 @@ and tracks hyperparameters of various types, supporting automatic logging and ex
 ### Automatic Logging
 Once a ClearML Task has been [initialized](../references/sdk/task.md#taskinit) in a script, ClearML automatically captures and tracks
 the following types of parameters:
-* Command line parsing - command line parameters passed when invoking code that uses standard python packages, including:
+* Command line parsing - command line parameters passed when invoking code that uses standard Python packages, including:
   * [click](../integrations/click.md)
   * [argparse](../guides/reporting/hyper_parameters.md#argparse-command-line-options)
   * [Python Fire](../integrations/python_fire.md)
@@ -21,7 +21,7 @@ the project are executed, the model checkpoints (snapshots) and artifacts are st
 Users can create and modify projects, and see project details in the [WebApp](../webapp/webapp_home.md).
 A project's description can be edited in its [overview](../webapp/webapp_project_overview.md) page. Each project's tasks,
 models, and dataviews, can be viewed in the project's [task table](../webapp/webapp_exp_table.md),
-[models table](../webapp/webapp_model_table.md), and [dataviews table](../hyperdatasets/webapp/webapp_dataviews.md).
+[model table](../webapp/webapp_model_table.md), and [dataview table](../hyperdatasets/webapp/webapp_dataviews.md).
 
 ## Usage
 
@@ -69,7 +69,7 @@ allows tasks to be reproduced, and their hyperparameters and results can be save
 understanding model behavior.
 
 Hyperparameters can be added from anywhere in your code, and ClearML provides multiple ways to log them. If you specify
-your parameters using popular python packages, such as [argparse](https://docs.python.org/3/library/argparse.html) and
+your parameters using popular Python packages, such as [argparse](https://docs.python.org/3/library/argparse.html) and
 [click](https://click.palletsprojects.com/), all you need to do is [initialize](../references/sdk/task.md#taskinit) a task, and
 ClearML will automatically log the parameters. ClearML also provides methods to explicitly report parameters.
 
@@ -27,7 +27,7 @@ The goal of this phase is to get a code, dataset, and environment set up, so you
 - [ClearML SDK](../../clearml_sdk/clearml_sdk.md) should be integrated into your code (check out [Getting Started](ds_first_steps.md)).
 This helps visualizing the results and tracking progress.
 - [ClearML Agent](../../clearml_agent.md) helps moving your work to other machines without the hassle of rebuilding the environment every time,
-while also creating an easy queue interface that easily lets you drop your experiments to be executed one by one
+while also creating an easy queue interface that easily lets you drop your tasks to be executed one by one
 (great for ensuring that the GPUs are churning during the weekend).
 - [ClearML Session](../../apps/clearml_session.md) helps with developing on remote machines, in the same way that you'd develop on your local laptop!
@@ -38,7 +38,7 @@ yields the best performing model for your task!
 
 - The real training (usually) should **not** be executed on your development machine.
 - Training sessions should be launched and monitored from a web UI.
-- You should continue coding while experiments are being executed without interrupting them.
+- You should continue coding while tasks are being executed without interrupting them.
 - Stop optimizing your code because your machine struggles, and run it on a beefier machine (cloud / on-prem).
 
 Visualization and comparison dashboards keep your sanity at bay! At this stage you usually have a docker container with all the binaries
@@ -58,23 +58,23 @@ that you need.
 Track everything--from obscure parameters to weird metrics, it's impossible to know what will end up
 improving your results later on!
 
-- Make sure experiments are reproducible! ClearML logs code, parameters, and environment in a single, easily searchable place.
+- Make sure tasks are reproducible! ClearML logs code, parameters, and environment in a single, easily searchable place.
 - Development is not linear. Configuration / Parameters should not be stored in your git, as
 they are temporary and constantly changing. They still need to be logged because who knows, one day...
 - Uncommitted changes to your code should be stored for later forensics in case that magic number actually saved the day. Not every line change should be committed.
-- Mark potentially good experiments, make them the new baseline for comparison.
+- Mark potentially good tasks, make them the new baseline for comparison.
 
 ## Visibility Matters
 
-While you can track experiments with one tool, and pipeline them with another, having
+While you can track tasks with one tool, and pipeline them with another, having
 everything under the same roof has its benefits!
 
-Being able to track experiment progress and compare experiments, and, based on that, send experiments to execution on remote
+Being able to track task progress and compare tasks, and, based on that, send tasks to execution on remote
 machines (that also build the environment themselves) has tremendous benefits in terms of visibility and ease of integration.
 
-Being able to have visibility in your pipeline, while using experiments already defined in the platform,
+Being able to have visibility in your pipeline, while using tasks already defined in the platform,
 enables users to have a clearer picture of the pipeline's status
 and makes it easier to start using pipelines earlier in the process by simplifying chaining tasks.
 
-Managing datasets with the same tools and APIs that manage the experiments also lowers the barrier of entry into
-experiment and data provenance.
+Managing datasets with the same tools and APIs that manage the tasks also lowers the barrier of entry into
+task and data provenance.
@@ -8,7 +8,7 @@ title: First Steps
 
 First, [sign up for free](https://app.clear.ml).
 
-Install the `clearml` python package:
+Install the `clearml` Python package:
 ```bash
 pip install clearml
 ```
@@ -99,7 +99,7 @@ Now you can use ClearML in your notebook!
 
 In ClearML, experiments are organized as [Tasks](../../fundamentals/task.md).
 
-ClearML automatically logs your experiment and code, including outputs and parameters from popular ML frameworks,
+ClearML automatically logs your task and code, including outputs and parameters from popular ML frameworks,
 once you integrate the ClearML [SDK](../../clearml_sdk/clearml_sdk.md) with your code. To control what ClearML automatically logs, see this [FAQ](../../faq.md#controlling_logging).
 
 At the beginning of your code, import the `clearml` package:
@@ -115,7 +115,7 @@ To ensure full automatic logging, it is recommended to import the `clearml` pack
 Then initialize the Task object in your `main()` function, or the beginning of the script.
 
 ```python
-task = Task.init(project_name='great project', task_name='best experiment')
+task = Task.init(project_name='great project', task_name='best task')
 ```
 
 If the project does not already exist, a new one is created automatically.
@@ -2,7 +2,7 @@
 title: Next Steps
 ---
 
-So, you've already [installed ClearML's python package](ds_first_steps.md) and run your first experiment!
+So, you've already [installed ClearML's Python package](ds_first_steps.md) and run your first experiment!
 
 Now, you'll learn how to track Hyperparameters, Artifacts, and Metrics!
 
@@ -151,14 +151,14 @@ Once everything is neatly logged and displayed, use the [comparison tool](../../
 
 ## Track Experiments
 
-The experiments table is a powerful tool for creating dashboards and views of your own projects, your team's projects, or the entire development.
+The task table is a powerful tool for creating dashboards and views of your own projects, your team's projects, or the entire development.
 
 
-
+
 
 ### Creating Leaderboards
-Customize the [experiments table](../../webapp/webapp_exp_table.md) to fit your own needs, adding desired views of parameters, metrics, and tags.
+Customize the [task table](../../webapp/webapp_exp_table.md) to fit your own needs, adding desired views of parameters, metrics, and tags.
 You can filter and sort based on parameters and metrics, so creating custom views is simple and flexible.
 
 Create a dashboard for a project, presenting the latest Models and their accuracy scores, for immediate insights.
@@ -12,7 +12,7 @@ If you are afraid of clutter, use the archive option, and set up your own [clean
 - Track the code base. There is no reason not to add metrics to any process in your workflow, even if it is not directly ML. Visibility is key to iterative improvement of your code / workflow.
 - Create per-project [leaderboards](../../guides/ui/building_leader_board.md) based on custom columns
 (hyperparameters and performance accuracy), and bookmark them (full URL will always reproduce the same view and table).
-- Share experiments with your colleagues and team-leaders.
+- Share tasks with your colleagues and team-leaders.
 Invite more people to see how your project is progressing, and suggest they add metric reporting for their own.
 These metrics can later be part of your own in-house monitoring solution, don't let good data go to waste :)
@@ -26,10 +26,10 @@ Once you have a Task in ClearML, you can clone and edit its definitions in the U
 ## Advanced Automation
 - Create daily / weekly cron jobs for retraining best performing models on.
 - Create data monitoring & scheduling and launch inference jobs to test performance on any new coming dataset.
-- Once there are two or more experiments that run after another, group them together into a [pipeline](../../pipelines/pipelines.md).
+- Once there are two or more tasks that run after another, group them together into a [pipeline](../../pipelines/pipelines.md).
 
 ## Manage Your Data
-Use [ClearML Data](../../clearml_data/clearml_data.md) to version your data, then link it to running experiments for easy reproduction.
+Use [ClearML Data](../../clearml_data/clearml_data.md) to version your data, then link it to running tasks for easy reproduction.
 Make datasets machine agnostic (i.e. store original dataset in a shared storage location, e.g. shared-folder / S3 / Gs / Azure).
 ClearML Data supports efficient Dataset storage and caching, differentiable and compressed.
@@ -9,7 +9,7 @@ This tutorial assumes that you've already [signed up](https://app.clear.ml) to C
|
||||
ClearML provides tools for **automation**, **orchestration**, and **tracking**, all key in performing effective MLOps and LLMOps.
|
||||
|
||||
Effective MLOps and LLMOps rely on the ability to scale work beyond one's own computer. Moving from your own machine can be time-consuming.
|
||||
Even assuming that you have all the drivers and applications installed, you still need to manage multiple python environments
|
||||
Even assuming that you have all the drivers and applications installed, you still need to manage multiple Python environments
|
||||
for different packages / package versions, or worse - manage different Dockers for different package versions.
|
||||
|
||||
Not to mention, when working on remote machines, executing experiments, tracking what's running where, and making sure machines
|
||||
@@ -17,11 +17,11 @@ are fully utilized at all times become daunting tasks.
|
||||
|
||||
This can create overhead that derails you from your core work!
|
||||
|
||||
ClearML Agent was designed to deal with such issues and more! It is a tool responsible for executing experiments on remote machines: on-premises or in the cloud! ClearML Agent provides the means to reproduce and track experiments in your
|
||||
ClearML Agent was designed to deal with such issues and more! It is a tool responsible for executing tasks on remote machines: on-premises or in the cloud! ClearML Agent provides the means to reproduce and track tasks in your
|
||||
machine of choice through the ClearML WebApp with no need for additional code.
|
||||
|
||||
The agent will set up the environment for a specific Task's execution (inside a Docker, or bare-metal), install the
|
||||
required python packages, and execute and monitor the process.
|
||||
required Python packages, and execute and monitor the process.
|
||||
|
||||
|
||||
## Set up an Agent
|
||||
@@ -54,40 +54,40 @@ required python packages, and execute and monitor the process.
|
||||
|
||||
:::tip Agent Deployment Modes
|
||||
ClearML Agents can be deployed in:
|
||||
* [Virtual environment mode](../../clearml_agent/clearml_agent_execution_env.md): Agent creates a new venv to execute an experiment.
|
||||
* [Docker mode](../../clearml_agent/clearml_agent_execution_env.md#docker-mode): Agent executes an experiment inside a
|
||||
* [Virtual environment mode](../../clearml_agent/clearml_agent_execution_env.md): Agent creates a new venv to execute a task.
|
||||
* [Docker mode](../../clearml_agent/clearml_agent_execution_env.md#docker-mode): Agent executes a task inside a
|
||||
Docker container.
|
||||
|
||||
For more information, see [Running Modes](../../fundamentals/agents_and_queues.md#running-modes).
|
||||
:::
|
||||
|
||||
## Clone an Experiment
|
||||
Experiments can be reproduced (cloned) for validation or as a baseline for further experimentation.
|
||||
## Clone a Task
|
||||
Tasks can be reproduced (cloned) for validation or as a baseline for further experimentation.
|
||||
Cloning a task duplicates the task's configuration, but not its outputs.
|
||||
|
||||
**To clone an experiment in the ClearML WebApp:**
|
||||
1. Click on any project card to open its [experiments table](../../webapp/webapp_exp_table.md).
|
||||
1. Right-click one of the experiments on the table.
|
||||
1. Click **Clone** in the context menu, which will open a **CLONE EXPERIMENT** window.
|
||||
**To clone a task in the ClearML WebApp:**
|
||||
1. Click on any project card to open its [task table](../../webapp/webapp_exp_table.md).
|
||||
1. Right-click one of the tasks on the table.
|
||||
1. Click **Clone** in the context menu, which will open a **CLONE TASK** window.
|
||||
1. Click **CLONE** in the window.
|
||||
|
||||
The newly cloned experiment will appear and its info panel will slide open. The cloned experiment is in draft mode, so
|
||||
it can be modified. You can edit the Git / code references, control the python packages to be installed, specify the
|
||||
Docker container image to be used, or change the hyperparameters and configuration files. See [Modifying Tasks](../../webapp/webapp_exp_tuning.md#modifying-experiments) for more information about editing experiments in the UI.
|
||||
The newly cloned task will appear and its info panel will slide open. The cloned task is in draft mode, so
|
||||
it can be modified. You can edit the Git / code references, control the Python packages to be installed, specify the
|
||||
Docker container image to be used, or change the hyperparameters and configuration files. See [Modifying Tasks](../../webapp/webapp_exp_tuning.md#modifying-tasks) for more information about editing tasks in the UI.
|
||||
|
||||
## Enqueue an Experiment
|
||||
Once you have set up an experiment, it is now time to execute it.
|
||||
## Enqueue a Task
|
||||
Once you have set up a task, it is now time to execute it.
|
||||
|
||||
**To execute an experiment through the ClearML WebApp:**
|
||||
1. Right-click your draft experiment (the context menu is also available through the <img src="/docs/latest/icons/ico-bars-menu.svg" alt="Menu" className="icon size-md space-sm" />
|
||||
button on the top right of the experiment's info panel)
|
||||
1. Click **ENQUEUE,** which will open the **ENQUEUE EXPERIMENT** window
|
||||
**To execute a task through the ClearML WebApp:**
|
||||
1. Right-click your draft task (the context menu is also available through the <img src="/docs/latest/icons/ico-bars-menu.svg" alt="Menu" className="icon size-md space-sm" />
|
||||
button on the top right of the task's info panel)
|
||||
1. Click **ENQUEUE**, which will open the **ENQUEUE TASK** window
|
||||
1. In the window, select `default` in the queue menu
|
||||
1. Click **ENQUEUE**
|
||||
|
||||
This action pushes the experiment into the `default` queue. The experiment's status becomes *Pending* until an agent
|
||||
assigned to the queue fetches it, at which time the experiment's status becomes *Running*. The agent executes the
|
||||
experiment, and the experiment can be [tracked and its results visualized](../../webapp/webapp_exp_track_visual.md).
|
||||
This action pushes the task into the `default` queue. The task's status becomes *Pending* until an agent
|
||||
assigned to the queue fetches it, at which time the task's status becomes *Running*. The agent executes the
|
||||
task, and the task can be [tracked and its results visualized](../../webapp/webapp_exp_track_visual.md).
|
||||
|
||||
|
||||
## Programmatic Interface
|
||||
@@ -95,7 +95,7 @@ experiment, and the experiment can be [tracked and its results visualized](../..
|
||||
The cloning, modifying, and enqueuing actions described above can also be performed programmatically.
|
||||
|
||||
### First Steps
|
||||
#### Access Previously Executed Experiments
|
||||
#### Access Previously Executed Tasks
|
||||
All Tasks in the system can be accessed through their unique Task ID, or based on their properties using the [`Task.get_task`](../../references/sdk/task.md#taskget_task)
|
||||
method. For example:
|
||||
```python
|
||||
@@ -106,15 +106,15 @@ executed_task = Task.get_task(task_id='aabbcc')
|
||||
|
||||
Once a specific Task object has been obtained, it can be cloned, modified, and more. See [Advanced Usage](#advanced-usage).
|
||||
|
||||
#### Clone an Experiment
|
||||
#### Clone a Task
|
||||
|
||||
To duplicate an experiment, use the [`Task.clone`](../../references/sdk/task.md#taskclone) method, and input either a
|
||||
To duplicate a task, use the [`Task.clone`](../../references/sdk/task.md#taskclone) method, and input either a
|
||||
Task object or the Task's ID as the `source_task` argument.
|
||||
```python
|
||||
cloned_task = Task.clone(source_task=executed_task)
|
||||
```
|
||||
|
||||
#### Enqueue an Experiment
|
||||
#### Enqueue a Task
|
||||
To enqueue the task, use the [`Task.enqueue`](../../references/sdk/task.md#taskenqueue) method, and input the Task object
|
||||
with the `task` argument, and the queue to push the task into with `queue_name`.
|
||||
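Putting the pieces together, a minimal sketch of the clone-and-enqueue flow (the task ID and queue name are placeholders):

```python
from clearml import Task

# Placeholders: an existing task's ID, and a queue that an agent is serving
executed_task = Task.get_task(task_id='aabbcc')
cloned_task = Task.clone(source_task=executed_task)

# Push the clone into the "default" queue for an agent to pick up
Task.enqueue(task=cloned_task, queue_name='default')
```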
|
||||
@@ -129,7 +129,7 @@ Before execution, use a variety of programmatic methods to manipulate a task obj
|
||||
[Hyperparameters](../../fundamentals/hyperparameters.md) are an integral part of Machine Learning code as they let you
|
||||
control the code without directly modifying it. Hyperparameters can be added from anywhere in your code, and ClearML supports multiple ways to obtain them!
|
||||
|
||||
Users can programmatically change cloned experiments' parameters.
|
||||
Users can programmatically change cloned tasks' parameters.
|
||||
|
||||
For example:
|
||||
```python
|
||||
@@ -200,7 +200,7 @@ min_max_values = executed_task.get_last_scalar_metrics()
|
||||
full_scalars = executed_task.get_reported_scalars()
|
||||
```
|
||||
|
||||
#### Query Experiments
|
||||
#### Query Tasks
|
||||
You can also search and query Tasks in the system. Use the [`Task.get_tasks`](../../references/sdk/task.md#taskget_tasks)
|
||||
class method to retrieve Task objects and filter based on the specific values of the Task - status, parameters, metrics and more!
|
||||
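For example, a sketch of such a query (project name, task name, and filter values are illustrative):

```python
from clearml import Task

# Retrieve completed tasks from a project whose name contains "training"
tasks = Task.get_tasks(
    project_name='examples',
    task_name='training',  # matched as a substring
    task_filter={'status': ['completed']},
)
for task in tasks:
    print(task.id, task.name)
```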
|
||||
@@ -219,7 +219,7 @@ Data is probably one of the biggest factors that determines the success of a pro
|
||||
the model's configuration, code, and results (such as accuracy) is key to deducing meaningful insights into model behavior.
|
||||
|
||||
[ClearML Data](../../clearml_data/clearml_data.md) lets you version your data, so it's never lost, fetch it from every
|
||||
machine with minimal code changes, and associate data to experiment results.
|
||||
machine with minimal code changes, and associate data to task results.
|
||||
|
||||
Logging data can be done via command line, or programmatically. If any preprocessing code is involved, ClearML logs it
|
||||
as well! Once data is logged, it can be used by other experiments.
|
||||
as well! Once data is logged, it can be used by other tasks.
|
||||
|
||||
@@ -36,13 +36,13 @@ The most important difference is that you’ll also be asked for your git inform
|
||||
|
||||
Before we run the agent though, let's take a quick look at what will happen when we spin it up.
|
||||
|
||||
Our server hosts one or more queues in which we can put our tasks. And then we have our agent. By default, it will be running in pip mode, or virtual environment mode. Once an agent pulls a new task from the queue to be executed, it will create a new python virtual environment for it. It will then clone the code itself and install all required python packages in the new virtual environment. It then runs the code and injects any new hyperparameters we changed in the UI.
|
||||
Our server hosts one or more queues in which we can put our tasks. And then we have our agent. By default, it will be running in pip mode, or virtual environment mode. Once an agent pulls a new task from the queue to be executed, it will create a new Python virtual environment for it. It will then clone the code itself and install all required Python packages in the new virtual environment. It then runs the code and injects any new hyperparameters we changed in the UI.
|
||||
|
||||
PIP mode is really handy and efficient. It will create a new python virtual environment for every task it pulls and will use smart caching so packages or even whole environments can be reused over multiple tasks.
|
||||
PIP mode is really handy and efficient. It will create a new Python virtual environment for every task it pulls and will use smart caching so packages or even whole environments can be reused over multiple tasks.
|
||||
|
||||
You can also run the agent in conda mode or poetry mode, which essentially do the same thing as pip mode, only with a conda or poetry environment instead.
|
||||
|
||||
However, there’s also docker mode. In this case the agent will run every incoming task in its own docker container instead of just a virtual environment. This makes things much easier if your tasks have system package dependencies for example, or when not every task uses the same python version. For our example, we’ll be using docker mode.
|
||||
However, there’s also docker mode. In this case the agent will run every incoming task in its own docker container instead of just a virtual environment. This makes things much easier if your tasks have system package dependencies for example, or when not every task uses the same Python version. For our example, we’ll be using docker mode.
|
||||
|
||||
Now that our configuration is ready, we can start our agent in docker mode by running the command `clearml-agent daemon --docker`.
|
||||
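As a sketch, serving a specific queue in docker mode might look like this (the queue name and base image are assumptions):

```shell
# Serve the "default" queue; run each pulled task inside the given base image
clearml-agent daemon --queue default --docker nvidia/cuda:11.8.0-base-ubuntu22.04
```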
|
||||
|
||||
@@ -20,13 +20,13 @@ keywords: [mlops, components, ClearML data]
|
||||
<br/>
|
||||
|
||||
<Collapsible type="info" title="Video Transcript">
|
||||
Hello and welcome to ClearML. In this video we'll take a look at both the command line and python interfaces of our data versioning tool called `clearml-data`.
|
||||
Hello and welcome to ClearML. In this video we'll take a look at both the command line and Python interfaces of our data versioning tool called `clearml-data`.
|
||||
|
||||
In the world of machine learning, you are very likely dealing with large amounts of data that you need to put into a dataset. ClearML Data solves 2 important challenges that occur in this situation:
|
||||
|
||||
One is accessibility, making sure the data can be accessed from every machine you use. And two is versioning, linking which dataset version was used in which task. This helps to make experiments more reproducible. Moreover, versioning systems like git were never really designed for the size and number of files in machine learning datasets. We're going to need something else.
|
||||
|
||||
ClearML Data comes built-in with the `clearml` python package and has both a command line interface for easy and quick operations and a python interface if you want more flexibility. Both interfaces are quite similar, so we'll address both of them in the video.
|
||||
ClearML Data comes built-in with the `clearml` Python package and has both a command line interface for easy and quick operations and a Python interface if you want more flexibility. Both interfaces are quite similar, so we'll address both of them in the video.
|
||||
|
||||
Let's start with an example. Say I have some files here that I want to put into a dataset and start to keep track of.
|
||||
|
||||
@@ -36,13 +36,13 @@ We can do that by using the `clearml-data add` command and providing the path to
|
||||
|
||||
Now we need to tell the server that we're done here. We can call `clearml-data close` to upload the files and change the dataset status to done, which finalizes this version of the dataset.
|
||||
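In code, the CLI flow described so far might look like this (project name and paths are hypothetical):

```shell
# Create a new dataset version, stage some files, then finalize and upload
clearml-data create --project "Example Project" --name "my_dataset"
clearml-data add --files ./data
clearml-data close
```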
|
||||
The process of doing this with the python interface is very similar.
|
||||
The process of doing this with the Python interface is very similar.
|
||||
|
||||
You can create a new Dataset by importing the Dataset object from the `clearml` pip package and calling its `create` method. Now we have to give the dataset a name and a project just like with the command line tool. The create method returns a dataset instance which we will use to do all of our operations on.
|
||||
|
||||
To add some files to this newly created dataset version, call the `add_files` method on the dataset object and provide a path to a local file or folder. Bear in mind that nothing is uploaded just yet; we're simply instructing the dataset object what it should do when we eventually *do* want to upload.
|
||||
|
||||
A really useful thing we can do with the python interface is adding some interesting statistics about the dataset itself, such as a plot for example. Here we simply report a histogram on the amount of files in the train and test folders. You can add anything to a dataset that you can add to a ClearML task, so go nuts!
|
||||
A really useful thing we can do with the Python interface is adding some interesting statistics about the dataset itself, such as a plot for example. Here we simply report a histogram on the number of files in the train and test folders. You can add anything to a dataset that you can add to a ClearML task, so go nuts!
|
||||
|
||||
Finally, upload the dataset and then finalize it, or just set `auto_upload` to `true` to make it a one-liner.
|
||||
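Put together, the Python flow described above might look like this (the project/dataset names, paths, and histogram values are placeholders):

```python
from clearml import Dataset

dataset = Dataset.create(dataset_project='Example Project', dataset_name='my_dataset')
dataset.add_files(path='./data')  # nothing is uploaded yet

# Attach a plot to the dataset, e.g. file counts per split
dataset.get_logger().report_histogram(
    title='Dataset Statistics',
    series='file count',
    values=[520, 80],
    xlabels=['train', 'test'],
)

dataset.finalize(auto_upload=True)  # upload and finalize in one call
```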
|
||||
@@ -56,7 +56,7 @@ Using the command line tool, you can download a dataset version locally by using
|
||||
|
||||
That path will be a local cached folder, which means that if you try to get the same dataset again, or any other dataset that's based on this one, it will check which files are already on your system, and it will not download these again.
|
||||
|
||||
The python interface is similar, with one major difference. You can also get a dataset using any combination of name, project, ID or tags, but _getting_ the dataset does not mean it is downloaded, we simply got all of the metadata, which we can now access from the dataset object. This is important, as it means you don't have to download the dataset to make changes to it, or to add files. More on that in just a moment.
|
||||
The Python interface is similar, with one major difference. You can also get a dataset using any combination of name, project, ID, or tags, but _getting_ the dataset does not mean it is downloaded; we simply get all of the metadata, which we can now access from the dataset object. This is important, as it means you don't have to download the dataset to make changes to it, or to add files. More on that in just a moment.
|
||||
|
||||
If you do want to download a local copy of the dataset, it has to be done explicitly, by calling `get_local_copy` which will return the path to which the data was downloaded for you.
|
||||
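As a sketch (the project and dataset names are placeholders):

```python
from clearml import Dataset

# Getting the dataset only fetches its metadata
dataset = Dataset.get(dataset_project='Example Project', dataset_name='my_dataset')

# Downloading is explicit, and uses the local cache
local_path = dataset.get_local_copy()
print(local_path)
```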
|
||||
@@ -70,7 +70,7 @@ Let's say we found an issue with the hamburgers here, so we remove them from the
|
||||
|
||||
Now we can tell ClearML that the changes we made to this folder should become a new version of the previous dataset. We start by creating a new dataset just like we saw before, but now, we add the previous dataset ID as a parent. This tells ClearML that this new dataset version we're creating is based on the previous one and so our dataset object here will already contain all the files that the parent contained.
|
||||
|
||||
Now we can manually remove and add the files that we want, even without actually downloading the dataset. It will just change the metadata inside the python object and sync everything when it's finalized.
|
||||
Now we can manually remove and add the files that we want, even without actually downloading the dataset. It will just change the metadata inside the Python object and sync everything when it's finalized.
|
||||
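A sketch of creating a child version (the parent ID, wildcard, and paths are placeholders):

```python
from clearml import Dataset

new_version = Dataset.create(
    dataset_project='Example Project',
    dataset_name='my_dataset',
    parent_datasets=['<parent_dataset_id>'],
)

# Metadata-only changes - no download required
new_version.remove_files('hamburgers/*')
new_version.add_files(path='./extra_samples')

new_version.finalize(auto_upload=True)
```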
|
||||
That said, we do have a local copy of the dataset in this case, so we have a better option.
|
||||
|
||||
|
||||
@@ -25,7 +25,7 @@ ClearML is designed to get you up and running in less than 10 minutes and 2 magi
|
||||
|
||||
At the heart of ClearML lies the experiment manager. It consists of the `clearml` pip package and the ClearML Server.
|
||||
|
||||
After running `pip install clearml` we can add 2 simple lines of python code to your existing codebase. These 2 lines will capture all the output that your code produces: logs, source code, hyperparameters, plots, images, you name it.
|
||||
After running `pip install clearml` we can add 2 simple lines of Python code to your existing codebase. These 2 lines will capture all the output that your code produces: logs, source code, hyperparameters, plots, images, you name it.
|
||||
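Those 2 lines (the project and task names are up to you):

```python
from clearml import Task

# Creates a task on the server and starts auto-logging everything the code produces
task = Task.init(project_name='Example Project', task_name='my first experiment')
```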
|
||||
The pip package also includes `clearml-data`. It can help you keep track of your ever-changing datasets and provides an easy way to store, track and version control your data. It's also an easy way to share your dataset with colleagues over multiple machines while keeping track of who has which version. ClearML Data can even keep track of your data's ancestry, making sure you can always figure out where specific parts of your data came from.
|
||||
|
||||
|
||||
@@ -26,7 +26,7 @@ This is the experiment manager's UI, and every row you can see here, is a single
|
||||
|
||||
We’re currently in our project folder. As you can see, we have our very basic toy example here that we want to keep track of by using ClearML’s experiment manager.
|
||||
|
||||
The first thing to do is to install the `clearml` python package in our virtual environment. Installing the package itself, will add 3 commands for you. We’ll cover the `clearml-data` and `clearml-task` commands later. For now the one we need is `clearml-init`.
|
||||
The first thing to do is to install the `clearml` Python package in our virtual environment. Installing the package itself will add 3 commands for you. We’ll cover the `clearml-data` and `clearml-task` commands later. For now the one we need is `clearml-init`.
|
||||
|
||||
If you paid attention in the first video of this series, you’d remember that we need to connect to a ClearML Server to save all our tracked data. The server is where we saw the list of experiments earlier. This connection is what `clearml-init` will set up for us. When running the command it’ll ask for your server API credentials.
|
||||
|
||||
|
||||
@@ -36,7 +36,7 @@ We can see that no code was used to log the scalar. It's done automatically beca
|
||||
|
||||
We are using a training script as our task in our example here, but the optimizer doesn’t actually care what’s in our task, it just wants inputs and outputs. So you can optimize basically anything you want.
|
||||
|
||||
The only thing we have to do to start optimizing this model is to write a small python file detailing what exactly we want our optimizer to do.
|
||||
The only thing we have to do to start optimizing this model is to write a small Python file detailing what exactly we want our optimizer to do.
|
||||
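A minimal sketch of such a file (the base task ID, parameter name, and metric names are assumptions):

```python
from clearml.automation import HyperParameterOptimizer, UniformIntegerParameterRange

optimizer = HyperParameterOptimizer(
    base_task_id='<base_task_id>',  # the training task to clone and optimize
    hyper_parameters=[
        UniformIntegerParameterRange('Args/batch_size', min_value=32, max_value=128, step_size=32),
    ],
    objective_metric_title='validation',
    objective_metric_series='accuracy',
    objective_metric_sign='max',
    max_number_of_concurrent_tasks=2,
)
optimizer.start()
optimizer.wait()  # block until the optimization finishes
optimizer.stop()
```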
|
||||
When you’re a ClearML Pro user, you can just start the optimizer straight from the UI, but more on that later.
|
||||
|
||||
|
||||
@@ -34,7 +34,7 @@ One is you can easily chain existing ClearML tasks together to create a single p
|
||||
|
||||
Let's say we have some functions that we already use to run ETL and another function that trains a model on the preprocessed data. We already have a main function too, that orchestrates when and how these other components should be run.
|
||||
|
||||
If we want to make this code into a pipeline, the first thing we have to do is to tell ClearML that these functions are supposed to become steps in our pipeline. We can do that by using a python decorator! For each function we want as a step, we can decorate it with `PipelineDecorator.component`.
|
||||
If we want to make this code into a pipeline, the first thing we have to do is to tell ClearML that these functions are supposed to become steps in our pipeline. We can do that by using a Python decorator! For each function we want as a step, we can decorate it with `PipelineDecorator.component`.
|
||||
|
||||
The component call will fully automatically transform this function into a ClearML task, with all the benefits that come with that. It will also make it clear that this task will be part of a larger pipeline.
|
||||
|
||||
|
||||
@@ -60,7 +60,7 @@ clearml-task --project keras --name local_test --script webinar-0620/keras_mnist
|
||||
This sets the following arguments:
|
||||
* `--project keras --name local_test` - The project and task names
|
||||
* `--script /webinar-0620/keras_mnist.py` - The local script to be executed
|
||||
* `-requirements webinar-0620/requirements.txt` - The local python package requirements file
|
||||
* `--requirements webinar-0620/requirements.txt` - The local Python package requirements file
|
||||
* `--args batch_size=64 epochs=1` - Arguments passed to the script. This uses the argparse object to capture CLI parameters
|
||||
* `--queue default` - Selected queue to send the task to
|
||||
|
||||
|
||||
@@ -6,7 +6,7 @@ The [pipeline_from_decorator.py](https://github.com/allegroai/clearml/blob/maste
|
||||
example demonstrates the creation of a pipeline in ClearML using the [`PipelineDecorator`](../../references/sdk/automation_controller_pipelinecontroller.md#class-automationcontrollerpipelinedecorator)
|
||||
class.
|
||||
|
||||
This example creates a pipeline incorporating four tasks, each of which is created from a python function using a custom decorator:
|
||||
This example creates a pipeline incorporating four tasks, each of which is created from a Python function using a custom decorator:
|
||||
* `executing_pipeline` - Implements the pipeline controller which defines the pipeline structure and execution logic.
|
||||
* `step_one` - Downloads and processes data.
|
||||
* `step_two` - Further processes the data from `step_one`.
|
||||
|
||||
@@ -2,7 +2,7 @@
|
||||
title: Code Examples
|
||||
---
|
||||
|
||||
The following examples demonstrate registering, retrieving, and ingesting your data through the Hyper-Datasets python
|
||||
The following examples demonstrate registering, retrieving, and ingesting your data through the Hyper-Datasets Python
|
||||
interface.
|
||||
|
||||
## Registering your Data
|
||||
|
||||
@@ -515,7 +515,7 @@ class method.
|
||||
my_dataview = DataView.get(dataview_id='<dataview_id>')
|
||||
```
|
||||
|
||||
Access the Dataview's frames as a python list, dictionary, or through a pythonic iterator.
|
||||
Access the Dataview's frames as a Python list, dictionary, or through a Pythonic iterator.
|
||||
|
||||
[`DataView.to_list()`](../references/hyperdataset/dataview.md#to_list) returns the Dataview's query results as a Python list.
|
||||
|
||||
|
||||
@@ -6,7 +6,7 @@ Hyper-Datasets extend the ClearML [**Task**](../fundamentals/task.md) with [Data
|
||||
|
||||
## Usage
|
||||
|
||||
Hyper-Datasets are supported by the `allegroai` python package.
|
||||
Hyper-Datasets are supported by the `allegroai` Python package.
|
||||
|
||||
### Connecting Dataviews to a Task
|
||||
|
||||
|
||||
@@ -132,7 +132,7 @@ You can add labels which describe the whole frame, with no specific coordinates.
|
||||
## Frame Metadata
|
||||
|
||||
**To edit frame metadata:**
|
||||
1. Expand the **FRAME METADATA** area
|
||||
1. Expand the **FRAMEGROUP METADATA** area
|
||||
1. Click edit <img src="/docs/latest/icons/ico-metadata.svg" alt="edit metadata" className="icon size-md space-sm" />
|
||||
which will open an editing window
|
||||
1. Modify the metadata dictionary in JSON format
|
||||
|
||||
@@ -31,12 +31,12 @@ Use frame viewer controls to navigate between frames in a Hyper-Dataset Version,
|
||||
|<img src="/docs/latest/icons/ico-revert.svg" alt="Reload frame icon" className="icon size-md space-sm" />|Reload the frame.| <img src="/docs/latest/icons/ico-optional-no.svg" alt="Not applicable" className="icon size-md center-md" /> |
|
||||
|<img src="/docs/latest/icons/ico-undo.svg" alt="Undo icon" className="icon size-md space-sm" />|Undo changes.|Ctrl + Z|
|
||||
|<img src="/docs/latest/icons/ico-redo.svg" alt="Redo icon" className="icon size-md space-sm" />|Redo changes.|Ctrl + Y|
|
||||
|<img src="/docs/latest/icons/ico-reset_1.svg" alt="Autofit icon" className="icon size-md space-sm" />|Autofit| <img src="/docs/latest/icons/ico-optional-no.svg" alt="Not applicable" className="icon size-md center-md" /> |
|
||||
|<img src="/docs/latest/icons/ico-zoom-to-fit.svg" alt="Autofit icon" className="icon size-md space-sm" />|Autofit| <img src="/docs/latest/icons/ico-optional-no.svg" alt="Not applicable" className="icon size-md center-md" /> |
|
||||
|<img src="/docs/latest/icons/ico-zoom-1-to-1.svg" alt="Return to original size" className="icon size-md space-sm" />|View image in original size |<img src="/docs/latest/icons/ico-optional-no.svg" alt="Not applicable" className="icon size-md center-md" />|
|
||||
|<img src="/docs/latest/icons/ico-zoom-in.svg" alt="Zoom in icon" className="icon size-md space-sm" />|Zoom in| **+** or Ctrl + Mouse wheel|
|
||||
|<img src="/docs/latest/icons/ico-zoom-out.svg" alt="Zoom out icon" className="icon size-md space-sm" />|Zoom out| **-** or Ctrl + Mouse wheel |
|
||||
|Percentage textbox|Zoom percentage| <img src="/docs/latest/icons/ico-optional-no.svg" alt="Not applicable" className="icon size-md center-md" /> |
|
||||
|<img src="/docs/latest/icons/ico-shared-item.svg" alt="Copy URL" className="icon size-md space-sm" />| Copy frame URL. A direct link to view the current frame|<img src="/docs/latest/icons/ico-optional-no.svg" alt="Not applicable" className="icon size-md center-md" /> |
|
||||
|<img src="/docs/latest/icons/ico-reset.svg" alt="Refresh" className="icon size-md space-sm" />|Refresh version preview|<img src="/docs/latest/icons/ico-optional-no.svg" alt="Not applicable" className="icon size-md center-md" /> |
|
||||
|
||||
#### Additional Keyboard Shortcuts
|
||||
|
||||
@@ -226,7 +226,7 @@ You can add labels which describe the whole frame, with no specific coordinates.
|
||||
## Frame Metadata
|
||||
|
||||
**To edit frame metadata:**
|
||||
1. Expand the **FRAME METADATA** area
|
||||
1. Expand the **FRAMEGROUP METADATA** area
|
||||
1. Click edit <img src="/docs/latest/icons/ico-metadata.svg" alt="edit metadata" className="icon size-md space-sm" />
|
||||
which will open an editing window
|
||||
1. Modify the metadata dictionary in JSON format
|
||||
|
||||
@@ -317,7 +317,7 @@ The **Metadata** tab presents any additional metadata that has been attached to
|
||||
|
||||
**To edit a version's metadata,**
|
||||
|
||||
1. Hover over the metadata box and click on the **EDIT** button
|
||||
1. Hover over the metadata box and click **EDIT**
|
||||
1. Edit the section contents (JSON format)
|
||||
1. Click **OK**
|
||||
|
||||
|
||||
@@ -1,33 +1,33 @@
|
||||
---
|
||||
title: The Dataviews Table
|
||||
title: The Dataview Table
|
||||
---
|
||||
|
||||
The **Dataviews table** is a [customizable](#customizing-the-dataviews-table) list of Dataviews associated with a project.
|
||||
The **Dataview table** is a [customizable](#customizing-the-dataviews-table) list of Dataviews associated with a project.
|
||||
Use it to view and create Dataviews, and access their info panels.
|
||||
|
||||
The table lists independent Dataview objects. To see Dataviews logged by a task, go
|
||||
to the specific task's **DATAVIEWS** tab (see [Task Dataviews](webapp_exp_track_visual.md)).
|
||||
|
||||
View the Dataviews table in table view <img src="/docs/latest/icons/ico-table-view.svg" alt="Table view" className="icon size-md space-sm" />
|
||||
View the Dataview table in table view <img src="/docs/latest/icons/ico-table-view.svg" alt="Table view" className="icon size-md space-sm" />
|
||||
or in details view <img src="/docs/latest/icons/ico-split-view.svg" alt="Details view" className="icon size-md space-sm" />,
|
||||
using the buttons on the top left of the page. Use the table view for a comparative view of your Dataviews according to
|
||||
columns of interest. Use the details view to access a selected Dataview's details, while keeping the Dataview list in view.
|
||||
The details view can also be accessed by double-clicking a specific Dataview in the table view.
|
||||
|
||||
You can archive Dataviews so the Dataviews table doesn't get too cluttered. Click **OPEN ARCHIVE** on the top of the
|
||||
You can archive Dataviews so the Dataview table doesn't get too cluttered. Click **OPEN ARCHIVE** on the top of the
|
||||
table to open the archive and view all archived Dataviews. From the archive, you can restore
|
||||
Dataviews to remove them from the archive. You can also permanently delete Dataviews.
|
||||
|
||||
You can download the Dataviews table as a CSV file by clicking <img src="/docs/latest/icons/ico-download.svg" alt="Download" className="icon size-md space-sm" />
|
||||
You can download the Dataview table as a CSV file by clicking <img src="/docs/latest/icons/ico-download.svg" alt="Download" className="icon size-md space-sm" />
|
||||
and choosing one of these options:
|
||||
* **Download onscreen items** - Download the values for Dataviews currently visible on screen
|
||||
* **Download all items** - Download the values for all Dataviews in this project that match the current active filters
|
||||
|
||||
The downloaded data consists of the currently displayed table columns.
|
||||
|
||||

|
||||

|
||||
|
||||
The Dataviews table includes the following columns:
|
||||
The Dataview table includes the following columns:
|
||||
|
||||
|Column|Description|Type|
|
||||
|--|--|--|
|
||||
@@ -41,9 +41,9 @@ The Dataviews table includes the following columns:
Dynamically order the columns by dragging a column heading
to a new position.

## Customizing the Dataview Table

The Dataview table can be customized. Changes are persistent (cached in the browser), and represented in the URL.
Save customized settings in a browser bookmark, and share the URL with teammates.

Customize the table using any of the following:
@@ -70,17 +70,17 @@ all the Dataviews in the project. The customizations of these two views are save

## Dataview Actions

The following table describes the actions that can be performed from the Dataview table.

Access these actions with the context menu in any of the following ways:
* In the Dataview table, right-click a Dataview, or hover over a Dataview and click <img src="/docs/latest/icons/ico-dots-v-menu.svg" alt="Dot menu" className="icon size-md space-sm" />
* In a Dataview info panel, click the menu button <img src="/docs/latest/icons/ico-bars-menu.svg" alt="Bar menu" className="icon size-md space-sm" />

| ClearML Action | Description |
|---|---|
| Details | View Dataview details, including input datasets, label mapping, and iteration control. Can also be accessed by double-clicking a Dataview in the Dataview table. |
| Archive | Move Dataview to the Dataview's archive. |
| Restore | Action available in the archive. Restore a Dataview to the active Dataview table. |
| Delete | Action available in the archive. Permanently delete a Dataview. |
| Clone | Make an exact copy of a Dataview that is editable. |
| Move to Project | Move a Dataview to another project. |
@@ -97,11 +97,11 @@ Select multiple Dataviews, then use either the context menu, or the batch action
operations on the selected Dataviews. The context menu shows the number of Dataviews that can be affected by each action.
The same information can be found in the batch action bar, in a tooltip that appears when hovering over an action icon.



## Creating a Dataview

Create a Dataview by clicking **+ NEW DATAVIEW**, which opens a
**NEW DATAVIEW** window.


@@ -24,9 +24,9 @@ tasks are highlighted. Obscure identical fields by switching on the `Hide Identi
The task on the left is used as the base task, to which the other tasks are compared. You can set a
new base task
in one of the following ways:
* Hover and click <img src="/docs/latest/icons/ico-arrow-from-right.svg" alt="Switch base task" className="icon size-md space-sm" />
on the task that will be the new base.
* Hover and click <img src="/docs/latest/icons/ico-drag.svg" alt="Pan icon" className="icon size-md space-sm" /> on the new base task and drag it all the way to the left


@@ -111,7 +111,7 @@ To augment its automatic logging, ClearML also provides an explicit logging inte
See more information about explicitly logging information to a ClearML Task:
* [Models](../clearml_sdk/model_sdk.md#manually-logging-models)
* [Configuration](../clearml_sdk/task_sdk.md#configuration) (e.g. parameters, configuration files)
* [Artifacts](../clearml_sdk/task_sdk.md#artifacts) (e.g. output files or Python objects created by a task)
* [Scalars](../clearml_sdk/task_sdk.md#scalars)
* [Text/Plots/Debug Samples](../fundamentals/logger.md#manual-reporting)

@@ -24,7 +24,7 @@ And that's it! This creates a [ClearML Task](../fundamentals/task.md) which capt
* Scalars (loss, learning rates)
* Console output
* General details such as machine details, runtime, creation date etc.
* Hyperparameters created with standard Python packages (such as argparse, click, Python Fire, etc.)
* And more

You can view all the task details in the [WebApp](../webapp/webapp_exp_track_visual.md).
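The automatic hyperparameter capture above can be illustrated with a hypothetical `argparse` script; the ClearML lines are commented out so the sketch runs stand-alone, and the project and task names are placeholders:

```python
import argparse

# The two lines a real script would add (commented out so this sketch runs
# without a ClearML server; project/task names are placeholders):
# from clearml import Task
# task = Task.init(project_name="examples", task_name="argparse demo")

parser = argparse.ArgumentParser()
parser.add_argument("--lr", type=float, default=0.003, help="learning rate")
parser.add_argument("--batch-size", type=int, default=64)
args = parser.parse_args(["--lr", "0.01"])  # simulates: python train.py --lr 0.01

# Once Task.init() runs, ClearML's automatic logging records these values
# (lr=0.01, batch_size=64) under the task's hyperparameters.
print(f"lr={args.lr}, batch_size={args.batch_size}")  # lr=0.01, batch_size=64
```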
@@ -70,7 +70,7 @@ To augment its automatic logging, ClearML also provides an explicit logging inte
See more information about explicitly logging information to a ClearML Task:
* [Models](../clearml_sdk/model_sdk.md#manually-logging-models)
* [Configuration](../clearml_sdk/task_sdk.md#configuration) (e.g. parameters, configuration files)
* [Artifacts](../clearml_sdk/task_sdk.md#artifacts) (e.g. output files or Python objects created by a task)
* [Scalars](../clearml_sdk/task_sdk.md#scalars)
* [Text/Plots/Debug Samples](../fundamentals/logger.md#manual-reporting)

@@ -24,7 +24,7 @@ And that's it! This creates a [ClearML Task](../fundamentals/task.md) which capt
* Scalars (loss, learning rates)
* Console output
* General details such as machine details, runtime, creation date etc.
* Hyperparameters created with standard Python packages (e.g. argparse, click, Python Fire, etc.)
* And more

You can view all the task details in the [WebApp](../webapp/webapp_overview.md).
@@ -68,7 +68,7 @@ To augment its automatic logging, ClearML also provides an explicit logging inte
See more information about explicitly logging information to a ClearML Task:
* [Models](../clearml_sdk/model_sdk.md#manually-logging-models)
* [Configuration](../clearml_sdk/task_sdk.md#configuration) (e.g. parameters, configuration files)
* [Artifacts](../clearml_sdk/task_sdk.md#artifacts) (e.g. output files or Python objects created by a task)
* [Scalars](../clearml_sdk/task_sdk.md#scalars)
* [Text/Plots/Debug Samples](../fundamentals/logger.md#manual-reporting)

@@ -7,7 +7,7 @@ If you are not already using ClearML, see [Getting Started](../getting_started/d
instructions.
:::

[`click`](https://click.palletsprojects.com) is a Python package for creating command-line interfaces. ClearML integrates
seamlessly with `click` and automatically logs its command-line parameters.

All you have to do is add two lines of code:

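The two added lines are the `clearml` import and the `Task.init()` call. A minimal sketch of a hypothetical `click` script (the ClearML lines are commented out so it runs without a ClearML server; the project and task names, command, and options are made up for illustration):

```python
import click
from click.testing import CliRunner

# The two lines this page refers to (commented out so the sketch runs
# stand-alone; project/task names are placeholders):
# from clearml import Task
# task = Task.init(project_name="examples", task_name="click demo")

@click.command()
@click.option("--count", default=1, help="Number of greetings.")
@click.option("--name", default="world", help="Who to greet.")
def hello(count, name):
    """Once Task.init() runs, ClearML logs --count and --name automatically."""
    for _ in range(count):
        click.echo(f"Hello, {name}!")

# Exercise the command in-process instead of via the shell:
result = CliRunner().invoke(hello, ["--count", "2", "--name", "ClearML"])
print(result.output, end="")
```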
@@ -24,7 +24,7 @@ And that's it! This creates a [ClearML Task](../fundamentals/task.md) which capt
* Scalars (loss, learning rates)
* Console output
* General details such as machine details, runtime, creation date etc.
* Hyperparameters created with standard Python packages (e.g. argparse, click, Python Fire, etc.)
* And more

You can view all the task details in the [WebApp](../webapp/webapp_overview.md).
@@ -68,7 +68,7 @@ To augment its automatic logging, ClearML also provides an explicit logging inte
See more information about explicitly logging information to a ClearML Task:
* [Models](../clearml_sdk/model_sdk.md#manually-logging-models)
* [Configuration](../clearml_sdk/task_sdk.md#configuration) (e.g. parameters, configuration files)
* [Artifacts](../clearml_sdk/task_sdk.md#artifacts) (e.g. output files or Python objects created by a task)
* [Scalars](../clearml_sdk/task_sdk.md#scalars)
* [Text/Plots/Debug Samples](../fundamentals/logger.md#manual-reporting)

@@ -25,7 +25,7 @@ task = Task.init(task_name="<task_name>", project_name="<project_name>")
```

This will create a [ClearML Task](../fundamentals/task.md) that captures your script's information, including Git details,
uncommitted code, Python environment, all information logged through `TensorboardLogger`, and more.

Visualize all the captured information in the task's page in ClearML's [WebApp](#webapp).
@@ -45,7 +45,7 @@ Integrate ClearML with the following steps:
```

This creates a [ClearML Task](../fundamentals/task.md) called `ignite` in the `examples` project, which captures your
script's information, including Git details, uncommitted code, and Python environment.

You can also pass the following parameters to the `ClearMLLogger` object:
* `task_type` – The type of task (see [task types](../fundamentals/task.md#task-types)).
@@ -70,7 +70,7 @@ To augment its automatic logging, ClearML also provides an explicit logging inte
See more information about explicitly logging information to a ClearML Task:
* [Models](../clearml_sdk/model_sdk.md#manually-logging-models)
* [Configuration](../clearml_sdk/task_sdk.md#configuration) (e.g. parameters, configuration files)
* [Artifacts](../clearml_sdk/task_sdk.md#artifacts) (e.g. output files or Python objects created by a task)
* [Scalars](../clearml_sdk/task_sdk.md#scalars)
* [Text/Plots/Debug Samples](../fundamentals/logger.md#manual-reporting)

@@ -14,7 +14,7 @@ class is used to create a ClearML Task to log LangChain assets and metrics.
Integrate ClearML with the following steps:
1. Set up the `ClearMLCallbackHandler`. The following code creates a [ClearML Task](../fundamentals/task.md) called
`llm` in the `langchain_callback_demo` project, which captures your script's information, including Git details,
uncommitted code, and Python environment:
```python
from langchain.callbacks import ClearMLCallbackHandler
from langchain_openai import OpenAI
@@ -60,7 +60,7 @@ To augment its automatic logging, ClearML also provides an explicit logging inte
See more information about explicitly logging information to a ClearML Task:
* [Models](../clearml_sdk/model_sdk.md#manually-logging-models)
* [Configuration](../clearml_sdk/task_sdk.md#configuration) (e.g. parameters, configuration files)
* [Artifacts](../clearml_sdk/task_sdk.md#artifacts) (e.g. output files or Python objects created by a task)
* [Scalars](../clearml_sdk/task_sdk.md#scalars)
* [Text/Plots/Debug Samples](../fundamentals/logger.md#manual-reporting)

@@ -69,7 +69,7 @@ To augment its automatic logging, ClearML also provides an explicit logging inte
See more information about explicitly logging information to a ClearML Task:
* [Models](../clearml_sdk/model_sdk.md#manually-logging-models)
* [Configuration](../clearml_sdk/task_sdk.md#configuration) (e.g. parameters, configuration files)
* [Artifacts](../clearml_sdk/task_sdk.md#artifacts) (e.g. output files or Python objects created by a task)
* [Scalars](../clearml_sdk/task_sdk.md#scalars)
* [Text/Plots/Debug Samples](../fundamentals/logger.md#manual-reporting)

@@ -21,7 +21,7 @@ And that's it! This creates a [ClearML Task](../fundamentals/task.md) which capt
* Source code and uncommitted changes
* Installed packages
* MegEngine model files
* Hyperparameters created with standard Python packages (e.g. argparse, click, Python Fire, etc.)
* Scalars logged to popular frameworks like TensorBoard
* Console output
* General details such as machine details, runtime, creation date etc.
@@ -65,7 +65,7 @@ To augment its automatic logging, ClearML also provides an explicit logging inte
See more information about explicitly logging information to a ClearML Task:
* [Models](../clearml_sdk/model_sdk.md#manually-logging-models)
* [Configuration](../clearml_sdk/task_sdk.md#configuration) (e.g. parameters, configuration files)
* [Artifacts](../clearml_sdk/task_sdk.md#artifacts) (e.g. output files or Python objects created by a task)
* [Scalars](../clearml_sdk/task_sdk.md#scalars)
* [Text/Plots/Debug Samples](../fundamentals/logger.md#manual-reporting)

@@ -65,7 +65,7 @@ To augment its automatic logging, ClearML also provides an explicit logging inte
See more information about explicitly logging information to a ClearML Task:
* [Models](../clearml_sdk/model_sdk.md#manually-logging-models)
* [Configuration](../clearml_sdk/task_sdk.md#configuration) (e.g. parameters, configuration files)
* [Artifacts](../clearml_sdk/task_sdk.md#artifacts) (e.g. output files or Python objects created by a task)
* [Scalars](../clearml_sdk/task_sdk.md#scalars)
* [Text/Plots/Debug Samples](../fundamentals/logger.md#manual-reporting)

@@ -95,7 +95,7 @@ To augment its automatic logging, ClearML also provides an explicit logging inte
See more information about explicitly logging information to a ClearML Task:
* [Models](../clearml_sdk/model_sdk.md#manually-logging-models)
* [Configuration](../clearml_sdk/task_sdk.md#configuration) (e.g. parameters, configuration files)
* [Artifacts](../clearml_sdk/task_sdk.md#artifacts) (e.g. output files or Python objects created by a task)
* [Scalars](../clearml_sdk/task_sdk.md#scalars)
* [Text/Plots/Debug Samples](../fundamentals/logger.md#manual-reporting)

@@ -24,7 +24,7 @@ And that's it! This creates a [ClearML Task](../fundamentals/task.md) which capt
* Joblib model files
* Console output
* General details such as machine details, runtime, creation date etc.
* Hyperparameters created with standard Python packages (e.g. argparse, click, Python Fire, etc.)
* And more

You can view all the task details in the [WebApp](../webapp/webapp_exp_track_visual.md).
@@ -63,7 +63,7 @@ To augment its automatic logging, ClearML also provides an explicit logging inte
See more information about explicitly logging information to a ClearML Task:
* [Models](../clearml_sdk/model_sdk.md#manually-logging-models)
* [Configuration](../clearml_sdk/task_sdk.md#configuration) (e.g. parameters, configuration files)
* [Artifacts](../clearml_sdk/task_sdk.md#artifacts) (e.g. output files or Python objects created by a task)
* [Scalars](../clearml_sdk/task_sdk.md#scalars)
* [Text/Plots/Debug Samples](../fundamentals/logger.md#manual-reporting)

@@ -18,7 +18,7 @@ task = Task.init(task_name="<task_name>", project_name="<project_name>")
```

This will create a [ClearML Task](../fundamentals/task.md) that captures your script's information, including Git details,
uncommitted code, Python environment, your `seaborn` plots, and more. View the seaborn plots in the [WebApp](../webapp/webapp_overview.md),
in the task's **Plots** tab.


@@ -8,7 +8,7 @@ logging metrics, model files, plots, debug samples, and more, so you can gain mo

## Setup

1. Install the `clearml` Python package:

```commandline
pip install clearml
@@ -17,7 +17,7 @@ task = Task.init(task_name="<task_name>", project_name="<project_name>")
```

This will create a [ClearML Task](../fundamentals/task.md) that captures your script's information, including Git details,
uncommitted code, Python environment, your TensorBoard metrics, plots, images, and text.

View the TensorBoard outputs in the [WebApp](../webapp/webapp_overview.md), in the task's page.
@@ -52,7 +52,7 @@ To augment its automatic logging, ClearML also provides an explicit logging inte
See more information about explicitly logging information to a ClearML Task:
* [Models](../clearml_sdk/model_sdk.md#manually-logging-models)
* [Configuration](../clearml_sdk/task_sdk.md#configuration) (e.g. parameters, configuration files)
* [Artifacts](../clearml_sdk/task_sdk.md#artifacts) (e.g. output files or Python objects created by a task)
* [Scalars](../clearml_sdk/task_sdk.md#scalars)
* [Text/Plots/Debug Samples](../fundamentals/logger.md#manual-reporting)

@@ -18,7 +18,7 @@ task = Task.init(task_name="<task_name>", project_name="<project_name>")
```

This will create a [ClearML Task](../fundamentals/task.md) that captures your script's information, including Git details,
uncommitted code, Python environment, your TensorboardX metrics, plots, images, and text.

View the TensorboardX outputs in the [WebApp](../webapp/webapp_overview.md), in the task's page.
@@ -51,7 +51,7 @@ To augment its automatic logging, ClearML also provides an explicit logging inte
See more information about explicitly logging information to a ClearML Task:
* [Models](../clearml_sdk/model_sdk.md#manually-logging-models)
* [Configuration](../clearml_sdk/task_sdk.md#configuration) (e.g. parameters, configuration files)
* [Artifacts](../clearml_sdk/task_sdk.md#artifacts) (e.g. output files or Python objects created by a task)
* [Scalars](../clearml_sdk/task_sdk.md#scalars)
* [Text/Plots/Debug Samples](../fundamentals/logger.md#manual-reporting)

@@ -68,7 +68,7 @@ To augment its automatic logging, ClearML also provides an explicit logging inte
See more information about explicitly logging information to a ClearML Task:
* [Models](../clearml_sdk/model_sdk.md#manually-logging-models)
* [Configuration](../clearml_sdk/task_sdk.md#configuration) (e.g. parameters, configuration files)
* [Artifacts](../clearml_sdk/task_sdk.md#artifacts) (e.g. output files or Python objects created by a task)
* [Scalars](../clearml_sdk/task_sdk.md#scalars)
* [Text/Plots/Debug Samples](../fundamentals/logger.md#manual-reporting)

@@ -9,7 +9,7 @@ ClearML automatically logs Transformer's models, parameters, scalars, and more.

All you have to do is install and set up ClearML:

1. Install the `clearml` Python package:

```commandline
pip install clearml
@@ -25,7 +25,7 @@ And that's it! This creates a [ClearML Task](../fundamentals/task.md) which capt
* Scalars (loss, learning rates)
* Console output
* General details such as machine details, runtime, creation date etc.
* Hyperparameters created with standard Python packages (e.g. argparse, click, Python Fire, etc.)
* And more

:::tip Logging Plots
@@ -89,7 +89,7 @@ To augment its automatic logging, ClearML also provides an explicit logging inte
See more information about explicitly logging information to a ClearML Task:
* [Models](../clearml_sdk/model_sdk.md#manually-logging-models)
* [Configuration](../clearml_sdk/task_sdk.md#configuration) (e.g. parameters, configuration files)
* [Artifacts](../clearml_sdk/task_sdk.md#artifacts) (e.g. output files or Python objects created by a task)
* [Scalars](../clearml_sdk/task_sdk.md#scalars)
* [Text/Plots/Debug Samples](../fundamentals/logger.md#manual-reporting)

@@ -11,7 +11,7 @@ built in logger:
* Turn your newly trained YOLOv5 model into an API with just a few commands using [ClearML Serving](../clearml_serving/clearml_serving.md)

## Setup
1. Install the `clearml` Python package:

```commandline
pip install clearml
@@ -22,7 +22,7 @@ segmentation, and classification. Get the most out of YOLOv8 with ClearML:

## Setup

1. Install the `clearml` Python package:

```commandline
pip install clearml
@@ -20,7 +20,7 @@ for more details.

ClearML pipelines are created from code using one of the following:
* [PipelineController](pipelines_sdk_tasks.md) class - A Pythonic interface for defining and configuring the pipeline
controller and its steps. The controller and steps can be functions in your Python code, or existing [ClearML tasks](../fundamentals/task.md).
* [PipelineDecorator](pipelines_sdk_function_decorators.md) class - A set of Python decorators which transform your
functions into the pipeline controller and steps
@@ -35,7 +35,7 @@ example of a pipeline with concurrent steps.
ClearML supports multiple modes for pipeline execution:
* **Remote Mode** (default) - In this mode, the pipeline controller logic is executed through a designated queue, and all
the pipeline steps are launched remotely through their respective queues. Since each task is executed independently,
it can have control over its git repository (if needed), required Python packages, and the specific container to use.
* **Local Mode** - In this mode, the pipeline is executed locally, and the steps are executed as sub-processes. Each
subprocess uses the exact same Python environment as the main pipeline logic.
* **Debugging Mode** (for PipelineDecorator) - In this mode, the entire pipeline is executed locally, with the pipeline
@@ -224,7 +224,7 @@ You can run the pipeline logic locally, while keeping the pipeline components ex
:::

#### Debugging Mode
In debugging mode, the pipeline controller and all components are treated as regular Python functions, with components
called synchronously. This mode is great for debugging the components and designing the pipeline, as the entire pipeline is
executed on the developer machine with full ability to debug each function call. Call [`PipelineDecorator.debug_pipeline`](../references/sdk/automation_controller_pipelinecontroller.md#pipelinedecoratordebug_pipeline)
before the main pipeline logic function call.
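As an illustration of what debugging mode reduces to, here is a minimal sketch; the ClearML decorators and calls are shown as comments so the sketch runs without `clearml` installed, and the decorator arguments are assumptions for illustration:

```python
# Sketch of debugging mode: the decorated functions below run as plain,
# synchronous Python calls in the current process. The clearml lines are
# commented out so the sketch runs stand-alone.
# from clearml.automation.controller import PipelineDecorator

# @PipelineDecorator.component(return_values=["doubled"])
def step_one(x):
    return x * 2

# @PipelineDecorator.component(return_values=["result"])
def step_two(doubled):
    return doubled + 1

# @PipelineDecorator.pipeline(name="demo", project="examples", version="0.1")
def pipeline_logic(x):
    # In debugging mode these are ordinary function calls, so a debugger
    # can step straight into each component.
    return step_two(step_one(x))

# PipelineDecorator.debug_pipeline()  # call before invoking the pipeline logic
print(pipeline_logic(3))  # prints 7: everything executes in this process
```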
@@ -241,7 +241,7 @@ if __name__ == '__main__':
In local mode, the pipeline controller creates Tasks for each component, and component function calls are translated
into sub-processes running on the same machine. Notice that the data is passed between the components and the logic with
the exact same mechanism as in the remote mode (i.e. hyperparameters / artifacts), with the exception that the execution
itself is local. Each subprocess uses the exact same Python environment as the main pipeline logic. Call
[`PipelineDecorator.run_locally`](../references/sdk/automation_controller_pipelinecontroller.md#pipelinedecoratorrun_locally)
before the main pipeline logic function.
@@ -39,7 +39,7 @@ ClearML k8s glue default pod label was changed to `CLEARML=agent` (instead of `T
- Update task `status_message` for non-responsive or hanging pods
- Support the `agent.docker_force_pull` configuration option for scheduled pods
- Add docker example for running the k8s glue as a pod in a k8s cluster
- Add `agent.ignore_requested_python_version` configuration option to ignore any requested Python version (default false, see [here](https://github.com/allegroai/clearml-agent/blob/db57441c5dda43d8e38f01d7f52f047913e95ba5/docs/clearml.conf#L45))
- Add `agent.docker_internal_mounts` configuration option to control containers internal mounts (non-root containers, see [here](https://github.com/allegroai/clearml-agent/blob/db57441c5dda43d8e38f01d7f52f047913e95ba5/docs/clearml.conf#L184))
- Add support for `-r requirements.txt` in the Installed Packages section
- Add support for `CLEARML_AGENT_INITIAL_CONNECT_RETRY_OVERRIDE` environment variable to override initial server connection behavior (defaults to true, allows boolean value or an explicit number specifying the number of connect retries)
@@ -15,7 +15,7 @@ title: Version 1.2

**Bug Fixes**

- Fix `CLEARML_AGENT_SKIP_PIP_VENV_INSTALL` fails to find Python executable
- Fix `apt-get update` failure causes `apt-get install` not to be executed

### ClearML Agent 1.2.1
@@ -38,7 +38,7 @@ title: Version 1.2

**Bug Fixes**

- Fix virtualenv Python interpreter used ([ClearML Agent GitHub PR #98](https://github.com/allegroai/clearml-agent/pull/98))
- Fix typing package incorrectly required for Python>3.5 ([ClearML Agent GitHub PR #103](https://github.com/allegroai/clearml-agent/pull/103))
- Fix symbolic links not copied from cached VCS into working copy (windows platform will result with default copy content instead of original symbolic link) ([ClearML Agent GitHub PR #89](https://github.com/allegroai/clearml-agent/pull/89))
- Fix agent fails to check out code from main branch when branch/commit is not explicitly specified ([ClearML GitHub issue #551](https://github.com/allegroai/clearml/issues/551))
@@ -40,7 +40,7 @@ those matching these filters to be used when running containers
 **New Features and Improvements**
 * Add `NO_DOCKER` flag to `clearml-agent-services` entrypoint ([ClearML Agent GitHub PR #206](https://github.com/allegroai/clearml-agent/pull/206))
 * Use `venv` module if `virtualenv` is not supported
-* Find the correct python version when using a pre-installed python environment
+* Find the correct Python version when using a pre-installed python environment
 * Add `/bin/bash` support in the task's `script.binary` property
 * Add support for `.ipynb` script entry files (install nbconvert in runtime, convert file to python and execute the
 python script). Includes `CLEARML_AGENT_FORCE_TASK_INIT` patching of `.ipynb` files (post-python conversion)

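The `.ipynb` bullet above says the agent installs nbconvert at runtime to turn a notebook entry file into a Python script before executing it. Since a notebook is plain JSON, the gist of that conversion can be sketched with the standard library alone — this helper is illustrative, not the agent's or nbconvert's code:

```python
import json

def notebook_to_script(ipynb_text):
    # A .ipynb file is JSON; code cells carry their source as a list of lines.
    # Markdown and raw cells are dropped, as they are not executable.
    nb = json.loads(ipynb_text)
    chunks = []
    for cell in nb.get("cells", []):
        if cell.get("cell_type") == "code":
            chunks.append("".join(cell.get("source", [])))
    return "\n\n".join(chunks)
```

The real conversion (via `jupyter nbconvert --to script`) also handles magics and cell markers; this sketch only shows why a post-conversion patching step (such as the `CLEARML_AGENT_FORCE_TASK_INIT` patching mentioned above) operates on ordinary Python source.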
@@ -36,7 +36,7 @@ title: Version 3.20
 * Add Administrator identity provider management UI: administrators can add and manage multiple identity providers
 * New UI experiment table comparative view: compare plots and scalars of all selected experiments
 * Add UI project metric snapshot support for multiple metrics
-* Add UI experiment display of original python requirements along with actual packages used.
+* Add UI experiment display of original Python requirements along with actual packages used.
 * Add compressed UI experiment table info panel mode displaying only experiment name and status
 * Add "x unified" hover mode to UI plots
 * Add option to view metadata of published dataset versions in UI Hyper-Dataset list view

@@ -20,7 +20,7 @@ title: Version 1.14
 
 **New Features and Improvements**
 * New UI experiment table comparative view: compare plots and scalars of all selected experiments
-* Add UI experiment display of original python requirements along with actual packages used ([ClearML GitHub issue #793](https://github.com/allegroai/clearml/issues/793))
+* Add UI experiment display of original Python requirements along with actual packages used ([ClearML GitHub issue #793](https://github.com/allegroai/clearml/issues/793))
 * Add UI project metric snapshot support for multiple metrics
 * Add compressed UI experiment table info panel mode displaying only experiment name and status
 * Add "x unified" hover mode to UI plots

@@ -208,7 +208,7 @@ title: Version 1.1
 - Add `Task.get_configuration_object_as_dict()`
 - Add `docker_image` argument to `Task.set_base_docker()` (deprecate `docker_cmd`)
 - Add `auto_version_bump` argument to `PipelineController`
-- Add `sdk.development.detailed_import_report` configuration option to provide a detailed report of all python package imports
+- Add `sdk.development.detailed_import_report` configuration option to provide a detailed report of all Python package imports
 - Set current Task as Dataset parent when creating dataset
 - Add support for deferred configuration
 - Examples

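The `sdk.development.detailed_import_report` option above reports the Python packages a run actually imported. The core idea — inspecting the interpreter's loaded modules — reduces to a short stdlib sketch (illustrative only; the SDK's actual report includes versions and filtering logic this sketch omits):

```python
import sys

def imported_top_level_packages():
    # Collapse every loaded module (e.g. "urllib.request") to its
    # top-level package name, skipping private/internal entries.
    return sorted({name.split(".")[0] for name in sys.modules
                   if not name.startswith("_")})
```

Calling this at the end of a run yields the top-level names that were imported at any point, which is the raw material a detailed import report is built from.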
@@ -51,7 +51,7 @@ title: Version 1.6
 * Fix error when connecting an input model
 * Fix deadlocks, including:
 * Change thread Event/Lock to a process fork safe threading objects
-* Use file lock instead of process lock to avoid future deadlocks since python process lock is not process safe
+* Use file lock instead of process lock to avoid future deadlocks since Python process lock is not process safe
 (killing a process holding a lock will Not release the lock)
 * Fix `StorageManager.list()` on a local Windows path
 * Fix model not created in the current project

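The file-lock fix above exists because a Python process lock dies with its holder: kill the process and the lock stays held, deadlocking everyone else. An OS-level file lock avoids this, since the kernel releases it when the holding process exits. A minimal POSIX-only sketch using `flock` (illustrative; ClearML's actual locking implementation differs):

```python
import fcntl
import os

class FileLock:
    """Sketch of an advisory file lock. Unlike a multiprocessing lock,
    an flock-style lock is released by the OS if the holder dies."""

    def __init__(self, path):
        # Each instance gets its own open file description on the lock file.
        self.fd = os.open(path, os.O_CREAT | os.O_RDWR)

    def acquire(self, blocking=True):
        flags = fcntl.LOCK_EX | (0 if blocking else fcntl.LOCK_NB)
        try:
            fcntl.flock(self.fd, flags)
            return True
        except BlockingIOError:
            return False  # someone else holds the lock (non-blocking mode)

    def release(self):
        fcntl.flock(self.fd, fcntl.LOCK_UN)
```

This is POSIX-specific; on Windows an equivalent would go through `msvcrt.locking`. The design point matches the release note: the lock's lifetime is tied to the file descriptor, not to a lock object inherited across forks.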
@@ -107,7 +107,7 @@ Access these actions with the context menu by right-clicking a version on the da
 |-----|----|
 |Add Tag |User-defined labels added to versions for grouping and organization. |
 |Archive| Move dataset versions to the dataset's archive. |
-|Restore|Action available in the archive. Restore a version to the active dataset versions table.|
+|Restore|Action available in the archive. Restore a version to the active dataset version table.|
 |Delete| Delete an archived version and its artifacts. This action is available only from the dataset's archive. |
 
 

@@ -1,5 +1,5 @@
 ---
-title: The Pipeline Runs Table
+title: The Pipeline Run Table
 ---
 
 The pipeline runs table is a [customizable](#customizing-the-runs-table) list of the pipeline's runs. Use it to
@@ -31,7 +31,7 @@ The downloaded data consists of the currently displayed table columns.
 
 ## Run Table Columns
 
-The models table contains the following columns:
+The pipeline run table contains the following columns:
 
 | Column | Description | Type |
 |---|---|---|

@@ -90,12 +90,12 @@ information).
 
 You can edit the labels of credentials in your own workspace, or credentials that you created in other workspaces.
 
-**To edit the credentials label:** hover over the desired credentials, and click <img src="/docs/latest/icons/ico-edit.svg" alt="Edit Pencil" className="icon size-md" />
+**To edit the credentials label:** hover over the desired credentials, and click <img src="/docs/latest/icons/ico-edit.svg" alt="Edit Pencil" className="icon size-md" /> .
 
 You can revoke any credentials in your own workspace, or credentials that you created in other workspaces. Once revoked,
 these credentials cannot be recovered.
 
-**To revoke ClearML credentials:** hover over the desired credentials, and click <img src="/docs/latest/icons/ico-trash.svg" alt="Trash can" className="icon size-md" />
+**To revoke ClearML credentials:** hover over the desired credentials, and click <img src="/docs/latest/icons/ico-trash.svg" alt="Trash can" className="icon size-md" /> .
 
 ### AI Application Gateway Tokens
 

@@ -71,15 +71,15 @@ The comparison pages provide the following views:
 ### Side-by-side Textual Comparison
 
 In the **Details** and **Hyperparameters** (Values view) tabs, you can view differences in the tasks' parameters' nominal
-values. The **Details** tab displays the tasks' execution details (source code, uncommitted changes, python packages),
+values. The **Details** tab displays the tasks' execution details (source code, uncommitted changes, Python packages),
 models, artifacts, configuration objects, and additional general information. **Hyperparameters** (Values view) displays the
 tasks' hyperparameter and their values.
 
 The tasks are laid out in vertical cards, so each field is lined up side-by-side. The task on the
 left is used as the base task, to which the other tasks are compared. You can set a new base task in
 one of the following ways:
-* Hover and click <img src="/docs/latest/icons/ico-switch-base.svg" alt="Switch base task" className="icon size-md space-sm" /> on the task that will be the new base.
-* Hover and click <img src="/docs/latest/icons/ico-pan.svg" alt="Pan" className="icon size-md space-sm" /> on the new base task and drag it all the way to the left
+* Hover and click <img src="/docs/latest/icons/ico-arrow-from-right.svg" alt="Switch base task" className="icon size-md space-sm" /> on the task that will be the new base.
+* Hover and click <img src="/docs/latest/icons/ico-drag.svg" alt="Pan" className="icon size-md space-sm" /> on the new base task and drag it all the way to the left
 
 The differences between the tasks are highlighted. Easily locate
 value differences by clicking click <img src="/docs/latest/icons/ico-previous-diff.svg" alt="Up arrow" className="icon size-md" />

@@ -32,7 +32,7 @@ the original task to become the clone's parent.
 ## Resetting
 
 To reset a task:
-1. In the tasks table, right-click the relevant task and click **Reset**.
+1. In the task table, right-click the relevant task and click **Reset**.
 1. In the `Reset Task` modal, if you want the task's artifacts and debug samples to be deleted from the
 ClearML file server, click the checkbox
 1. Click **Reset**

@@ -37,7 +37,7 @@ You can create tasks by:
 * Running code instrumented with ClearML (see [Task Creation](../clearml_sdk/task_sdk.md#task-creation))
 * [Cloning an existing task](webapp_exp_reproducing.md)
 * Via CLI using [`clearml-task`](../apps/clearml_task.md)
-* Through the UI interface: Input the task's details, including its source code and python requirements, and then
+* Through the UI interface: Input the task's details, including its source code and Python requirements, and then
 run it through a [ClearML Queue](../fundamentals/agents_and_queues.md#what-is-a-queue) or save it as a *draft*.
 
 To create a task through the UI interface:
@@ -57,7 +57,7 @@ To create a task through the UI interface:
 * Binary - The binary executing the script (e.g. python3, bash etc).
 * Type – How the code is provided
 * Script - The name of the file to run using the above specified binary
-* Module - The name of a python module to run (Python only, see [Python module specification](https://docs.python.org/3/using/cmdline.html#cmdoption-m))
+* Module - The name of a Python module to run (Python only, see [Python module specification](https://docs.python.org/3/using/cmdline.html#cmdoption-m))
 * Custom code - Directly provide the code to run. Write code, or upload a file:
 * File name - The script in which your code is stored. Click `Upload` to upload an existing file.
 * Content - The actual code. Click `Edit` to modify the script’s contents.
@@ -66,7 +66,7 @@ To create a task through the UI interface:
 * **Arguments** (*optional*) - Add [hyperparameter](../fundamentals/hyperparameters.md) values.
 * **Environment** (*optional*) - Set up the task’s execution environment
 * Python - Python environment settings
-* Use Poetry - Force Poetry instead of pip package manager. Disables additional python settings.
+* Use Poetry - Force Poetry instead of pip package manager. Disables additional Python settings.
 * Preinstalled venv - The name of a virtual environment available in the task’s execution environment to use when
 running the task. Additionally, specify how to use the virtual environment:
 * Skip - Try to automatically detect an available virtual environment, and use it as is.

@@ -7,7 +7,7 @@ You can view the differences in model details, configuration, scalar values, and
 
 ## Selecting Models to Compare
 To select models to compare:
-1. Go to a models table that includes the models to be compared.
+1. Go to a model table that includes the models to be compared.
 1. Select the models to compare. Once multiple models are selected, the batch action bar appears.
 1. In the batch action bar, click **COMPARE**.
 
@@ -57,9 +57,9 @@ information is displayed in a column, so each field is lined up side-by-side.
 
 The model on the left is used as the base model, to which the other models are compared. You can set a new base model
 in one of the following ways:
-* Hover and click <img src="/docs/latest/icons/ico-switch-base.svg" alt="Switch base task" className="icon size-md space-sm" />
+* Hover and click <img src="/docs/latest/icons/ico-arrow-from-right.svg" alt="Switch base task" className="icon size-md space-sm" />
 on the model that will be the new base.
-* Hover and click <img src="/docs/latest/icons/ico-pan.svg" alt="Pan icon" className="icon size-md space-sm" /> on the new base model and drag it all the way to the left
+* Hover and click <img src="/docs/latest/icons/ico-drag.svg" alt="Pan icon" className="icon size-md space-sm" /> on the new base model and drag it all the way to the left
 
 The differences between the models are highlighted. You can obscure identical fields by switching on the
 **Hide Identical Fields** toggle.

@@ -2,7 +2,7 @@
 title: Model Endpoints
 ---
 
-The Model Endpoints table lists all currently live (active, and being brought up) model endpoints, allowing you to view
+The Model Endpoint table lists all currently live (active, and being brought up) model endpoints, allowing you to view
 endpoint details and monitor status over time. Whenever you deploy a model through the [ClearML Deploy UI applications](applications/apps_overview.md#deploy),
 it will be listed in the table.
 