Small edits (#1033)

pollfly 2025-02-09 19:46:40 +02:00 committed by GitHub
parent e99e033b06
commit 79eff642ff
87 changed files with 104 additions and 104 deletions


@ -27,9 +27,9 @@ of the optimization results in table and graph forms.
|`--args`| List of `<argument>=<value>` strings to pass to the remote execution. Currently only argparse/click/hydra/fire arguments are supported. Example: `--args lr=0.003 batch_size=64`|<img src="/docs/latest/icons/ico-optional-no.svg" alt="No" className="icon size-md center-md" />|
|`--compute-time-limit`|The maximum compute time in minutes that a task can consume. If this time limit is exceeded, all jobs are aborted.|<img src="/docs/latest/icons/ico-optional-no.svg" alt="No" className="icon size-md center-md" />|
|`--max-iteration-per-job`|The maximum number of iterations (of the objective metric) per single job. When the iteration maximum is exceeded, the job is aborted.|<img src="/docs/latest/icons/ico-optional-no.svg" alt="No" className="icon size-md center-md" />|
|`--max-number-of-concurrent-tasks`|The maximum number of concurrent Tasks running at the same time|<img src="/docs/latest/icons/ico-optional-no.svg" alt="No" className="icon size-md center-md" />|
|`--max-number-of-concurrent-tasks`|The maximum number of concurrent Tasks running at the same time.|<img src="/docs/latest/icons/ico-optional-no.svg" alt="No" className="icon size-md center-md" />|
|`--min-iteration-per-job`|The minimum number of iterations (of the objective metric) per single job.|<img src="/docs/latest/icons/ico-optional-no.svg" alt="No" className="icon size-md center-md" />|
|`--local`| If set, run the tasks locally. Notice that no new python environment will be created. The `--script` parameter must point to a local file entry point and all arguments must be passed with `--args`| <img src="/docs/latest/icons/ico-optional-no.svg" alt="No" className="icon size-md center-md" />|
|`--local`| If set, run the tasks locally. Notice that no new Python environment will be created. The `--script` parameter must point to a local file entry point and all arguments must be passed with `--args`.| <img src="/docs/latest/icons/ico-optional-no.svg" alt="No" className="icon size-md center-md" />|
|`--objective-metric-series`| Objective metric series to maximize/minimize (e.g. 'loss').|<img src="/docs/latest/icons/ico-optional-yes.svg" alt="Yes" className="icon size-md center-md" />|
|`--objective-metric-sign`| Optimization target, whether to maximize or minimize the value of the objective metric specified. Possible values: "min", "max", "min_global", "max_global". For more information, see [Optimization Objective](#optimization-objective). |<img src="/docs/latest/icons/ico-optional-yes.svg" alt="Yes" className="icon size-md center-md" />|
|`--objective-metric-title`| Objective metric title to maximize/minimize (e.g. 'validation').|<img src="/docs/latest/icons/ico-optional-yes.svg" alt="Yes" className="icon size-md center-md" />|
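For reference, the same search can also be configured programmatically. The following is a minimal sketch using `clearml.automation.HyperParameterOptimizer`; the base task ID, queue, and parameter range are hypothetical placeholders, not values from this page.
```python
from clearml.automation import GridSearch, HyperParameterOptimizer, UniformParameterRange

optimizer = HyperParameterOptimizer(
    base_task_id="<base_task_id>",            # placeholder: task to clone and optimize
    hyper_parameters=[
        UniformParameterRange("General/lr", min_value=0.001, max_value=0.1),
    ],
    objective_metric_title="validation",      # --objective-metric-title
    objective_metric_series="loss",           # --objective-metric-series
    objective_metric_sign="min",              # --objective-metric-sign
    optimizer_class=GridSearch,
    max_number_of_concurrent_tasks=2,         # --max-number-of-concurrent-tasks
    min_iteration_per_job=100,                # --min-iteration-per-job
    max_iteration_per_job=1000,               # --max-iteration-per-job
    compute_time_limit=120,                   # --compute-time-limit (minutes)
    execution_queue="default",
)
optimizer.start()
optimizer.wait()
optimizer.stop()
```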


@ -101,7 +101,7 @@ When `clearml-session` is launched, it initializes a task with a unique ID in th
To connect to an existing session:
1. Go to the web UI, find the interactive session task (by default, it's in project "DevOps").
1. Click the `ID` button in the task page's header to copy the unique ID.
1. Copy the unique ID by clicking the `ID` button in the task page's header.
1. Run the following command: `clearml-session --attach <session_id>`.
1. Click on the JupyterLab / VS Code link that is output, or connect directly to the SSH session.
@ -179,7 +179,7 @@ The Task must be connected to a git repository, since currently single script de
:::
1. In the **ClearML web UI**, find the task that needs debugging.
1. Click the `ID` button next to the Task name, and copy the unique ID.
1. Copy the unique ID by clicking the `ID` button in the task page's header.
1. Enter the following command: `clearml-session --debugging-session <task_id>`
1. Click on the JupyterLab / VS Code link, or connect directly to the SSH session.
1. In JupyterLab / VS Code, access the task's repository in the `environment/task_repository` folder.
@ -253,9 +253,9 @@ clearml-session --continue-session <session_id> --store-workspace ~/workspace
| `--username`| Set your own SSH username for the interactive session | `root` or a previously used username |
| `--verbose` | Increase verbosity of logging | `none` |
| `--version`| Display the clearml-session utility version| N/A|
| `--vscode-extensions` |Install additional VSCode extensions and VSCode python extensions (example: `ms-python.python,ms-python.black-formatter,ms-python.pylint,ms-python.flake8`)|`none`|
| `--vscode-extensions` |Install additional VSCode extensions and VSCode Python extensions (example: `ms-python.python,ms-python.black-formatter,ms-python.pylint,ms-python.flake8`)|`none`|
| `--vscode-server` | Install VSCode on interactive session | `true` |
| `--vscode-version` | Set VSCode server (code-server) version, as well as VSCode python extension version `<vscode:python-ext>` (example: "3.7.4:2020.10.332292344")| `4.14.1:2023.12.0`|
| `--vscode-version` | Set VSCode server (code-server) version, as well as VSCode Python extension version `<vscode:python-ext>` (example: "3.7.4:2020.10.332292344")| `4.14.1:2023.12.0`|
| `--yes`, `-y`| Automatic yes to prompts; assume "yes" as answer to all prompts and run non-interactively |N/A|
</div>


@ -35,7 +35,7 @@ The preceding diagram demonstrates a typical flow where an agent executes a task
1. Install any required system packages.
1. Clone the code from a git repository.
1. Apply any uncommitted changes recorded.
1. Set up the python environment and required packages.
1. Set up the Python environment and required packages.
1. The task's script/code is executed.
:::note Python Version


@ -38,7 +38,7 @@ but can be overridden by command-line arguments.
|**CLEARML_AGENT_EXTRA_DOCKER_ARGS** | Overrides extra docker args configuration |
|**CLEARML_AGENT_EXTRA_DOCKER_LABELS** | List of labels to add to docker container. See [Docker documentation](https://docs.docker.com/config/labels-custom-metadata/). |
|**CLEARML_EXTRA_PIP_INSTALL_FLAGS**| List of additional flags to use when the agent installs packages. For example: `CLEARML_EXTRA_PIP_INSTALL_FLAGS=--use-deprecated=legacy-resolver` for a single flag or `CLEARML_EXTRA_PIP_INSTALL_FLAGS="--use-deprecated=legacy-resolver --no-warn-conflicts"` for multiple flags|
|**CLEARML_AGENT_EXTRA_PYTHON_PATH** | Sets extra python path |
|**CLEARML_AGENT_EXTRA_PYTHON_PATH** | Sets extra Python path |
|**CLEARML_AGENT_INITIAL_CONNECT_RETRY_OVERRIDE** | Overrides initial server connection behavior (`true` by default); allows an explicit number to specify the number of connection retries |
|**CLEARML_AGENT_NO_UPDATE** | Boolean. Set to `1` to skip agent update in the k8s pod container before the agent executes the task |
|**CLEARML_AGENT_K8S_HOST_MOUNT / CLEARML_AGENT_DOCKER_HOST_MOUNT** | Specifies Agent's mount point for Docker / K8s |
@ -47,7 +47,7 @@ but can be overridden by command-line arguments.
|**CLEARML_AGENT_PACKAGE_PYTORCH_RESOLVE**|Sets the PyTorch resolving mode. The options are: <ul><li>`none` - No resolving. Install PyTorch like any other package</li><li>`pip` (default) - Sets the extra index based on the CUDA version and lets pip resolve</li><li>`direct` - Resolves a direct link to the PyTorch wheel by parsing the pytorch.org pip repository, and matching the automatically detected CUDA version with the required PyTorch wheel. If the exact CUDA version is not found for the required PyTorch wheel, it will try a lower CUDA version until a match is found</li></ul> |
|**CLEARML_AGENT_DEBUG_INFO** | Provide additional debug information for a specific context (currently only the `docker` value is supported) |
|**CLEARML_AGENT_CHILD_AGENTS_COUNT_CMD** | Provide an alternate bash command to list child agents while working in services mode |
|**CLEARML_AGENT_SKIP_PIP_VENV_INSTALL** | Instead of creating a new virtual environment inheriting from the system packages, use an existing virtual environment and install missing packages directly to it. Specify the python binary of the existing virtual environment. For example: `CLEARML_AGENT_SKIP_PIP_VENV_INSTALL=/home/venv/bin/python` |
|**CLEARML_AGENT_SKIP_PYTHON_ENV_INSTALL** | If set to `1`, the agent will not install any required python packages and will just use the preexisting python environment to run the task. |
|**CLEARML_AGENT_SKIP_PIP_VENV_INSTALL** | Instead of creating a new virtual environment inheriting from the system packages, use an existing virtual environment and install missing packages directly to it. Specify the Python binary of the existing virtual environment. For example: `CLEARML_AGENT_SKIP_PIP_VENV_INSTALL=/home/venv/bin/python` |
|**CLEARML_AGENT_SKIP_PYTHON_ENV_INSTALL** | If set to `1`, the agent will not install any required Python packages and will just use the preexisting Python environment to run the task. |
|**CLEARML_AGENT_VENV_CACHE_PATH** | Overrides venv cache folder configuration |
|**CLEARML_MULTI_NODE_SINGLE_TASK**| Control how multi-node resource monitoring is reported. The options are: <ul><li>`-1` - Only master node's (rank zero) console/resources are reported</li><li>`1` - Graph per node i.e. machine/GPU graph for every node (console output prefixed with RANK)</li><li>`2` - Series per node under a unified machine resource graph, graph per type of resource e.g. CPU/GPU utilization (console output prefixed with RANK)</li></ul>|


@ -36,14 +36,14 @@ lineage and content information. See [dataset UI](../webapp/datasets/webapp_data
## Setup
`clearml-data` comes built-in with the `clearml` python package! Check out the [Getting Started](../getting_started/ds/ds_first_steps.md)
`clearml-data` comes built-in with the `clearml` Python package! Check out the [Getting Started](../getting_started/ds/ds_first_steps.md)
guide for more info!
## Using ClearML Data
ClearML Data supports two interfaces:
- `clearml-data` - A CLI utility for creating, uploading, and managing datasets. See [CLI](clearml_data_cli.md) for a reference of `clearml-data` commands.
- `clearml.Dataset` - A python interface for creating, retrieving, managing, and using datasets. See [SDK](clearml_data_sdk.md) for an overview of the basic methods of the `Dataset` module.
- `clearml.Dataset` - A Python interface for creating, retrieving, managing, and using datasets. See [SDK](clearml_data_sdk.md) for an overview of the basic methods of the `Dataset` module.
For an overview of recommendations for ClearML Data workflows and practices, see [Best Practices](best_practices.md).
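As a quick illustration of the SDK flow (dataset and project names here are placeholders, not values from this page):
```python
from clearml import Dataset

# Create a dataset version, stage files, upload, and finalize.
ds = Dataset.create(dataset_name="my_dataset", dataset_project="my_project")
ds.add_files(path="data/")  # stage a local file or folder
ds.upload()                 # upload the staged files
ds.finalize()               # close this dataset version
```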


@ -7,7 +7,7 @@ This page covers `clearml-data`, ClearML's file-based data management solution.
See [Hyper-Datasets](../hyperdatasets/overview.md) for ClearML's advanced queryable dataset management solution.
:::
`clearml-data` is a data management CLI tool that comes as part of the `clearml` python package. Use `clearml-data` to
`clearml-data` is a data management CLI tool that comes as part of the `clearml` Python package. Use `clearml-data` to
create, modify, and manage your datasets. You can upload your dataset to any storage service of your choice (S3 / GS /
Azure / Network Storage) by setting the dataset's upload destination (see [`--storage`](#upload)). Once you have uploaded
your dataset, you can access it from any machine.


@ -7,7 +7,7 @@ This page covers `clearml-data`, ClearML's file-based data management solution.
See [Hyper-Datasets](../hyperdatasets/overview.md) for ClearML's advanced queryable dataset management solution.
:::
Datasets can be created, modified, and managed with ClearML Data's python interface. You can upload your dataset to any
Datasets can be created, modified, and managed with ClearML Data's Python interface. You can upload your dataset to any
storage service of your choice (S3 / GS / Azure / Network Storage) by setting the dataset's upload destination (see
[`output_url`](#uploading-files) parameter of `Dataset.upload()`). Once you have uploaded your dataset, you can access
it from any machine.
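For example, a minimal sketch of setting the upload destination and retrieving the dataset elsewhere (the bucket path and names are illustrative placeholders):
```python
from clearml import Dataset

ds = Dataset.create(dataset_name="my_dataset", dataset_project="my_project")
ds.add_files(path="data/")
ds.upload(output_url="s3://my-bucket/datasets")  # placeholder storage destination
ds.finalize()

# On any other machine: fetch the metadata, then download a cached local copy
local_path = Dataset.get(dataset_name="my_dataset", dataset_project="my_project").get_local_copy()
```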


@ -10,7 +10,7 @@ class to ingest the data.
### Downloading the Data
Before registering the CIFAR dataset with `clearml-data`, you need to obtain a local copy of it.
Execute this python script to download the data:
Execute this Python script to download the data:
```python
from clearml import StorageManager
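# The rest of this snippet is truncated in the hunk; below is a hedged completion.
# The URL is illustrative -- substitute the actual CIFAR archive location.
dataset_path = StorageManager.get_local_copy(
    remote_url="https://<dataset-host>/cifar-10-python.tar.gz"
)
print(dataset_path)  # local cached path; archives are extracted by default
```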


@ -60,7 +60,7 @@ Nesting projects works on multiple levels. For example: `project_name=main_proje
### Automatic Logging
After invoking `Task.init` in a script, ClearML starts its automagical logging, which includes the following elements:
* **Hyperparameters** - ClearML logs the following types of hyperparameters:
* Command Line Parsing - ClearML captures any command line parameters passed when invoking code that uses standard python packages, including:
* Command Line Parsing - ClearML captures any command line parameters passed when invoking code that uses standard Python packages, including:
* [click](../integrations/click.md)
* [argparse](../guides/reporting/hyper_parameters.md#argparse-command-line-options)
* [Python Fire](../integrations/python_fire.md)
@ -89,7 +89,7 @@ After invoking `Task.init` in a script, ClearML starts its automagical logging,
* **Execution details** including:
* Git information
* Uncommitted code modifications - In cases where no git repository is detected (e.g. when a single python script is
* Uncommitted code modifications - In cases where no git repository is detected (e.g. when a single Python script is
executed outside a git repository, or when running from a Jupyter Notebook), ClearML logs the contents
of the executed script
* Python environment
@ -257,7 +257,7 @@ task's status. If a task failed or was aborted, you can view how much progress i
</div>
Additionally, you can view a task's progress in its [INFO](../webapp/webapp_exp_track_visual.md#general-information) tab
Additionally, you can view a task's progress in its [INFO](../webapp/webapp_exp_track_visual.md#info) tab
in the WebApp.
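Progress itself is reported from code; a minimal sketch, assuming a simple epoch loop (project and task names are placeholders):
```python
from clearml import Task

task = Task.init(project_name="examples", task_name="progress demo")  # placeholders

epochs = 10
for epoch in range(epochs):
    # ... training step ...
    task.set_progress(int((epoch + 1) / epochs * 100))  # report progress as 0-100
```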


@ -16,7 +16,7 @@ solution.
* Flexible
* On-line model deployment
* On-line endpoint model/version deployment (i.e. no need to take the service down)
* Per model standalone preprocessing and postprocessing python code
* Per model standalone preprocessing and postprocessing Python code
* Scalable
* Multiple models per container
* Multiple models per serving service


@ -84,7 +84,7 @@ project (default: "DevOps" project).
## Registering and Deploying New Models Manually
Uploading an existing model file into the model repository can be done via the `clearml` RestAPI, the python interface,
Uploading an existing model file into the model repository can be done via the `clearml` RestAPI, the Python interface,
or with the `clearml-serving` CLI.
1. Upload the model file to the `clearml-server` file storage and register it. The `--path` parameter is used to input


@ -339,13 +339,13 @@ optional shell script executes inside the Docker on startup, before the task sta
**`agent.ignore_requested_python_version`** (*bool*)
* Indicates whether to ignore any requested python version
* Indicates whether to ignore any requested Python version
* The values are:
* `true` - ignore any requested python version
* `false` - if a task was using a specific python version, and the system supports multiple versions, the agent will
use the requested python version (default)
* `true` - ignore any requested Python version
* `false` - if a task was using a specific Python version, and the system supports multiple versions, the agent will
use the requested Python version (default)
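For example, in `clearml.conf` (a minimal illustrative snippet):
```
agent {
    # ignore any Python version requested by the task
    ignore_requested_python_version: true
}
```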
___


@ -139,7 +139,7 @@ the following numbers are displayed:
![Server version information](img/faq_server_versions.png)
ClearML python package information can be obtained by using `pip freeze`.
ClearML Python package information can be obtained by using `pip freeze`.
For example:
@ -324,7 +324,7 @@ For more task configuration options, see [Hyperparameters](fundamentals/hyperpar
<br/>
#### I noticed that all of my tasks appear as "Training". Are there other options? <a id="other-experiment-types"></a>
#### I noticed that all of my tasks appear as "Training". Are there other options? <a id="other-task-types"></a>
Yes! ClearML supports [multiple task types](fundamentals/task.md#task-types). When creating tasks and
calling [`Task.init()`](references/sdk/task.md#taskinit), you can provide a task type. For example:
@ -336,7 +336,7 @@ task = Task.init(project_name, task_name, Task.TaskTypes.testing)
<br/>
#### Sometimes I see tasks as running when in fact they are not. What's going on? <a id="experiment-running-but-stopped"></a>
#### Sometimes I see tasks as running when in fact they are not. What's going on? <a id="task-running-but-stopped"></a>
ClearML monitors your Python process. When the process exits properly, ClearML closes the task. When the process crashes and terminates abnormally, it sometimes misses the stop signal. In this case, you can safely right-click the task in the WebApp and abort it.
@ -358,7 +358,7 @@ pip install -U clearml
Your firewall may be preventing the connection. Try one of the following solutions:
* Direct python "requests" to use the enterprise certificate file by setting the OS environment variables `CURL_CA_BUNDLE` or `REQUESTS_CA_BUNDLE`. For a detailed discussion of this topic, see [https://stackoverflow.com/questions/48391750/disable-python-requests-ssl-validation-for-an-imported-module](https://stackoverflow.com/questions/48391750/disable-python-requests-ssl-validation-for-an-imported-module).
* Direct Python "requests" to use the enterprise certificate file by setting the OS environment variables `CURL_CA_BUNDLE` or `REQUESTS_CA_BUNDLE`. For a detailed discussion of this topic, see [https://stackoverflow.com/questions/48391750/disable-python-requests-ssl-validation-for-an-imported-module](https://stackoverflow.com/questions/48391750/disable-python-requests-ssl-validation-for-an-imported-module).
* Disable certificate verification
:::warning


@ -48,7 +48,7 @@ The diagram above demonstrates a typical flow where an agent executes a task:
1. Install any required system packages.
1. Clone the code from a git repository.
1. Apply any uncommitted changes recorded.
1. Set up the python environment and required packages.
1. Set up the Python environment and required packages.
1. The task's script/code is executed.
While the agent is running, it continuously reports system metrics to the ClearML Server. You can monitor these metrics


@ -21,7 +21,7 @@ and tracks hyperparameters of various types, supporting automatic logging and ex
### Automatic Logging
Once a ClearML Task has been [initialized](../references/sdk/task.md#taskinit) in a script, ClearML automatically captures and tracks
the following types of parameters:
* Command line parsing - command line parameters passed when invoking code that uses standard python packages, including:
* Command line parsing - command line parameters passed when invoking code that uses standard Python packages, including:
* [click](../integrations/click.md)
* [argparse](../guides/reporting/hyper_parameters.md#argparse-command-line-options)
* [Python Fire](../integrations/python_fire.md)
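As a brief sketch of the argparse case (script and parameter names are illustrative), calling `Task.init()` is all that is needed for the parsed values to be captured:
```python
from argparse import ArgumentParser
from clearml import Task

task = Task.init(project_name="examples", task_name="argparse demo")  # placeholders

parser = ArgumentParser()
parser.add_argument("--lr", type=float, default=0.001)
parser.add_argument("--batch-size", type=int, default=64)
args = parser.parse_args()  # parsed values are logged automatically
```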


@ -69,7 +69,7 @@ allows tasks to be reproduced, and their hyperparameters and results can be save
understanding model behavior.
Hyperparameters can be added from anywhere in your code, and ClearML provides multiple ways to log them. If you specify
your parameters using popular python packages, such as [argparse](https://docs.python.org/3/library/argparse.html) and
your parameters using popular Python packages, such as [argparse](https://docs.python.org/3/library/argparse.html) and
[click](https://click.palletsprojects.com/), all you need to do is [initialize](../references/sdk/task.md#taskinit) a task, and
ClearML will automatically log the parameters. ClearML also provides methods to explicitly report parameters.
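For example, a minimal sketch of explicit reporting with `Task.connect()` (the parameter dictionary is illustrative):
```python
from clearml import Task

task = Task.init(project_name="examples", task_name="explicit params")  # placeholders

params = {"lr": 0.001, "epochs": 10}
params = task.connect(params)  # logged; values can be overridden on remote runs
```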


@ -8,7 +8,7 @@ title: First Steps
First, [sign up for free](https://app.clear.ml).
Install the `clearml` python package:
Install the `clearml` Python package:
```bash
pip install clearml
```


@ -2,7 +2,7 @@
title: Next Steps
---
So, you've already [installed ClearML's python package](ds_first_steps.md) and run your first experiment!
So, you've already [installed ClearML's Python package](ds_first_steps.md) and run your first experiment!
Now, you'll learn how to track Hyperparameters, Artifacts, and Metrics!


@ -9,7 +9,7 @@ This tutorial assumes that you've already [signed up](https://app.clear.ml) to C
ClearML provides tools for **automation**, **orchestration**, and **tracking**, all key in performing effective MLOps and LLMOps.
Effective MLOps and LLMOps rely on the ability to scale work beyond one's own computer. Moving from your own machine can be time-consuming.
Even assuming that you have all the drivers and applications installed, you still need to manage multiple python environments
Even assuming that you have all the drivers and applications installed, you still need to manage multiple Python environments
for different packages / package versions, or worse - manage different Dockers for different package versions.
Not to mention, when working on remote machines, executing experiments, tracking what's running where, and making sure machines
@ -21,7 +21,7 @@ ClearML Agent was designed to deal with such issues and more! It is a tool respo
machine of choice through the ClearML WebApp with no need for additional code.
The agent will set up the environment for a specific Task's execution (inside a Docker, or bare-metal), install the
required python packages, and execute and monitor the process.
required Python packages, and execute and monitor the process.
## Set up an Agent
@ -72,7 +72,7 @@ Cloning a task duplicates the task's configuration, but not its outputs.
1. Click **CLONE** in the window.
The newly cloned task will appear and its info panel will slide open. The cloned task is in draft mode, so
it can be modified. You can edit the Git / code references, control the python packages to be installed, specify the
it can be modified. You can edit the Git / code references, control the Python packages to be installed, specify the
Docker container image to be used, or change the hyperparameters and configuration files. See [Modifying Tasks](../../webapp/webapp_exp_tuning.md#modifying-tasks) for more information about editing tasks in the UI.
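The same clone-and-modify flow can also be scripted; a minimal sketch using `Task.clone()` and `Task.enqueue()` (the task ID, parameter, and queue name are placeholders):
```python
from clearml import Task

source = Task.get_task(task_id="<task_id>")              # placeholder task ID
cloned = Task.clone(source_task=source, name="cloned task")
cloned.set_parameter("General/lr", 0.01)                 # tweak a hyperparameter
Task.enqueue(task=cloned, queue_name="default")          # hand the draft to an agent
```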
## Enqueue a Task


@ -36,13 +36,13 @@ The most important difference is that you'll also be asked for your git inform
Before we run the agent though, let's take a quick look at what will happen when we spin it up.
Our server hosts one or more queues in which we can put our tasks. And then we have our agent. By default, it will be running in pip mode, or virtual environment mode. Once an agent pulls a new task from the queue to be executed, it will create a new python virtual environment for it. It will then clone the code itself and install all required python packages in the new virtual environment. It then runs the code and injects any new hyperparameters we changed in the UI.
Our server hosts one or more queues in which we can put our tasks. And then we have our agent. By default, it will be running in pip mode, or virtual environment mode. Once an agent pulls a new task from the queue to be executed, it will create a new Python virtual environment for it. It will then clone the code itself and install all required Python packages in the new virtual environment. It then runs the code and injects any new hyperparameters we changed in the UI.
PIP mode is really handy and efficient. It will create a new python virtual environment for every task it pulls and will use smart caching so packages or even whole environments can be reused over multiple tasks.
PIP mode is really handy and efficient. It will create a new Python virtual environment for every task it pulls and will use smart caching so packages or even whole environments can be reused over multiple tasks.
You can also run the agent in conda mode or poetry mode, which essentially do the same thing as pip mode, only with a conda or poetry environment instead.
However, there's also docker mode. In this case the agent will run every incoming task in its own docker container instead of just a virtual environment. This makes things much easier if your tasks have system package dependencies for example, or when not every task uses the same python version. For our example, we'll be using docker mode.
However, there's also docker mode. In this case the agent will run every incoming task in its own docker container instead of just a virtual environment. This makes things much easier if your tasks have system package dependencies for example, or when not every task uses the same Python version. For our example, we'll be using docker mode.
Now that our configuration is ready, we can start our agent in docker mode by running the command `clearml-agent daemon docker`.


@ -20,13 +20,13 @@ keywords: [mlops, components, ClearML data]
<br/>
<Collapsible type="info" title="Video Transcript">
Hello and welcome to ClearML. In this video we'll take a look at both the command line and python interfaces of our data versioning tool called `clearml-data`.
Hello and welcome to ClearML. In this video we'll take a look at both the command line and Python interfaces of our data versioning tool called `clearml-data`.
In the world of machine learning, you are very likely dealing with large amounts of data that you need to put into a dataset. ClearML Data solves 2 important challenges that occur in this situation:
One is accessibility, making sure the data can be accessed from every machine you use. And two is versioning, linking which dataset version was used in which task. This helps to make experiments more reproducible. Moreover, versioning systems like git were never really designed for the size and number of files in machine learning datasets. We're going to need something else.
ClearML Data comes built-in with the `clearml` python package and has both a command line interface for easy and quick operations and a python interface if you want more flexibility. Both interfaces are quite similar, so we'll address both of them in the video.
ClearML Data comes built-in with the `clearml` Python package and has both a command line interface for easy and quick operations and a Python interface if you want more flexibility. Both interfaces are quite similar, so we'll address both of them in the video.
Let's start with an example. Say I have some files here that I want to put into a dataset and start to keep track of.
@ -36,13 +36,13 @@ We can do that by using the `clearml-data add` command and providing the path to
Now we need to tell the server that we're done here. We can call `clearml-data close` to upload the files and change the dataset status to done, which finalizes this version of the dataset.
The process of doing this with the python interface is very similar.
The process of doing this with the Python interface is very similar.
You can create a new Dataset by importing the Dataset object from the `clearml` pip package and calling its `create` method. Now we have to give the dataset a name and a project just like with the command line tool. The create method returns a dataset instance which we will use to do all of our operations on.
To add some files to this newly created dataset version, call the `add_files` method on the dataset object and provide a path to a local file or folder. Bear in mind that nothing is uploaded just yet, we're simply instructing the dataset object what it should do when we eventually *do* want to upload.
A really useful thing we can do with the python interface is adding some interesting statistics about the dataset itself, such as a plot for example. Here we simply report a histogram on the amount of files in the train and test folders. You can add anything to a dataset that you can add to a ClearML task, so go nuts!
A really useful thing we can do with the Python interface is adding some interesting statistics about the dataset itself, such as a plot for example. Here we simply report a histogram on the amount of files in the train and test folders. You can add anything to a dataset that you can add to a ClearML task, so go nuts!
Finally, upload the dataset and then finalize it, or just set `auto_upload` to `true` to make it a one-liner.
@ -56,7 +56,7 @@ Using the command line tool, you can download a dataset version locally by using
That path will be a local cached folder, which means that if you try to get the same dataset again, or any other dataset that's based on this one, it will check which files are already on your system, and it will not download these again.
The python interface is similar, with one major difference. You can also get a dataset using any combination of name, project, ID or tags, but _getting_ the dataset does not mean it is downloaded, we simply got all of the metadata, which we can now access from the dataset object. This is important, as it means you don't have to download the dataset to make changes to it, or to add files. More on that in just a moment.
The Python interface is similar, with one major difference. You can also get a dataset using any combination of name, project, ID or tags, but _getting_ the dataset does not mean it is downloaded, we simply got all of the metadata, which we can now access from the dataset object. This is important, as it means you don't have to download the dataset to make changes to it, or to add files. More on that in just a moment.
If you do want to download a local copy of the dataset, it has to be done explicitly, by calling `get_local_copy` which will return the path to which the data was downloaded for you.
@ -70,7 +70,7 @@ Let's say we found an issue with the hamburgers here, so we remove them from the
Now we can tell ClearML that the changes we made to this folder should become a new version of the previous dataset. We start by creating a new dataset just like we saw before, but now, we add the previous dataset ID as a parent. This tells ClearML that this new dataset version we're creating is based on the previous one and so our dataset object here will already contain all the files that the parent contained.
Now we can manually remove and add the files that we want, even without actually downloading the dataset. It will just change the metadata inside the python object and sync everything when it's finalized.
Now we can manually remove and add the files that we want, even without actually downloading the dataset. It will just change the metadata inside the Python object and sync everything when it's finalized.
That said, we do have a local copy of the dataset in this case, so we have a better option.


@ -25,7 +25,7 @@ ClearML is designed to get you up and running in less than 10 minutes and 2 magi
At the heart of ClearML lies the experiment manager. It consists of the `clearml` pip package and the ClearML Server.
After running `pip install clearml` we can add 2 simple lines of python code to your existing codebase. These 2 lines will capture all the output that your code produces: logs, source code, hyperparameters, plots, images, you name it.
After running `pip install clearml` we can add 2 simple lines of Python code to your existing codebase. These 2 lines will capture all the output that your code produces: logs, source code, hyperparameters, plots, images, you name it.
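Those two lines, for reference (project and task names are placeholders):
```python
from clearml import Task
task = Task.init(project_name="my_project", task_name="my_experiment")
```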
The pip package also includes `clearml-data`. It can help you keep track of your ever-changing datasets and provides an easy way to store, track and version control your data. It's also an easy way to share your dataset with colleagues over multiple machines while keeping track of who has which version. ClearML Data can even keep track of your data's ancestry, making sure you can always figure out where specific parts of your data came from.


@ -26,7 +26,7 @@ This is the experiment manager's UI, and every row you can see here, is a single
We're currently in our project folder. As you can see, we have our very basic toy example here that we want to keep track of by using ClearML's experiment manager.
The first thing to do is to install the `clearml` python package in our virtual environment. Installing the package itself will add 3 commands for you. We'll cover the `clearml-data` and `clearml-task` commands later. For now the one we need is `clearml-init`.
The first thing to do is to install the `clearml` Python package in our virtual environment. Installing the package itself will add 3 commands for you. We'll cover the `clearml-data` and `clearml-task` commands later. For now the one we need is `clearml-init`.
If you paid attention in the first video of this series, you'd remember that we need to connect to a ClearML Server to save all our tracked data. The server is where we saw the list of experiments earlier. This connection is what `clearml-init` will set up for us. When running the command, it'll ask for your server API credentials.


@ -36,7 +36,7 @@ We can see that no code was used to log the scalar. It's done automatically beca
We are using a training script as our task in our example here, but the optimizer doesn't actually care what's in our task, it just wants inputs and outputs. So you can optimize basically anything you want.
The only thing we have to do to start optimizing this model is to write a small python file detailing what exactly we want our optimizer to do.
The only thing we have to do to start optimizing this model is to write a small Python file detailing what exactly we want our optimizer to do.
When you're a ClearML Pro user, you can just start the optimizer straight from the UI, but more on that later.


@ -34,7 +34,7 @@ One is you can easily chain existing ClearML tasks together to create a single p
Let's say we have some functions that we already use to run ETL and another function that trains a model on the preprocessed data. We already have a main function too, that orchestrates when and how these other components should be run.
If we want to make this code into a pipeline, the first thing we have to do is to tell ClearML that these functions are supposed to become steps in our pipeline. We can do that by using a python decorator! For each function we want as a step, we can decorate it with `PipelineDecorator.component`.
If we want to make this code into a pipeline, the first thing we have to do is to tell ClearML that these functions are supposed to become steps in our pipeline. We can do that by using a Python decorator! For each function we want as a step, we can decorate it with `PipelineDecorator.component`.
The component call will fully automatically transform this function into a ClearML task, with all the benefits that come with that. It will also make it clear that this task will be part of a larger pipeline.


@ -60,7 +60,7 @@ clearml-task --project keras --name local_test --script webinar-0620/keras_mnist
This sets the following arguments:
* `--project keras --name local_test` - The project and task names
* `--script /webinar-0620/keras_mnist.py` - The local script to be executed
* `--requirements webinar-0620/requirements.txt` - The local python package requirements file
* `--requirements webinar-0620/requirements.txt` - The local Python package requirements file
* `--args batch_size=64 epochs=1` - Arguments passed to the script. This uses the argparse object to capture CLI parameters
* `--queue default` - Selected queue to send the task to


@ -6,7 +6,7 @@ The [pipeline_from_decorator.py](https://github.com/allegroai/clearml/blob/maste
example demonstrates the creation of a pipeline in ClearML using the [`PipelineDecorator`](../../references/sdk/automation_controller_pipelinecontroller.md#class-automationcontrollerpipelinedecorator)
class.
This example creates a pipeline incorporating four tasks, each of which is created from a python function using a custom decorator:
This example creates a pipeline incorporating four tasks, each of which is created from a Python function using a custom decorator:
* `executing_pipeline` - Implements the pipeline controller which defines the pipeline structure and execution logic.
* `step_one` - Downloads and processes data.
* `step_two` - Further processes the data from `step_one`.
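A minimal sketch of the decorator pattern (function bodies and names here are illustrative stand-ins, not the example's actual code):
```python
from clearml.automation.controller import PipelineDecorator

@PipelineDecorator.component(return_values=["data"])
def step_one():
    return [1, 2, 3]  # stand-in for downloading and processing data

@PipelineDecorator.component(return_values=["result"])
def step_two(data):
    return sum(data)  # stand-in for further processing

@PipelineDecorator.pipeline(name="pipeline demo", project="examples", version="0.1")
def executing_pipeline():
    data = step_one()
    print(step_two(data))

if __name__ == "__main__":
    PipelineDecorator.run_locally()  # debug locally; remove to run through queues
    executing_pipeline()
```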


@ -2,7 +2,7 @@
title: Code Examples
---
The following examples demonstrate registering, retrieving, and ingesting your data through the Hyper-Datasets python
The following examples demonstrate registering, retrieving, and ingesting your data through the Hyper-Datasets Python
interface.
## Registering your Data


@ -515,7 +515,7 @@ class method.
```python
my_dataview = DataView.get(dataview_id='<dataview_id>')
```
Access the Dataview's frames as a python list, dictionary, or through a pythonic iterator.
Access the Dataview's frames as a Python list, dictionary, or through a pythonic iterator.
[`DataView.to_list()`](../references/hyperdataset/dataview.md#to_list) returns the Dataview queries result as a Python list.


@ -6,7 +6,7 @@ Hyper-Datasets extend the ClearML [**Task**](../fundamentals/task.md) with [Data
## Usage
Hyper-Datasets are supported by the `allegroai` python package.
Hyper-Datasets are supported by the `allegroai` Python package.
### Connecting Dataviews to a Task



@ -111,7 +111,7 @@ To augment its automatic logging, ClearML also provides an explicit logging inte
See more information about explicitly logging information to a ClearML Task:
* [Models](../clearml_sdk/model_sdk.md#manually-logging-models)
* [Configuration](../clearml_sdk/task_sdk.md#configuration) (e.g. parameters, configuration files)
* [Artifacts](../clearml_sdk/task_sdk.md#artifacts) (e.g. output files or python objects created by a task)
* [Artifacts](../clearml_sdk/task_sdk.md#artifacts) (e.g. output files or Python objects created by a task)
* [Scalars](../clearml_sdk/task_sdk.md#scalars)
* [Text/Plots/Debug Samples](../fundamentals/logger.md#manual-reporting)
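A compact sketch of the explicit interface (names and values are illustrative):
```python
from clearml import Task

task = Task.init(project_name="examples", task_name="explicit logging")  # placeholders

# Artifact: store a Python object (or file) alongside the task
task.upload_artifact(name="predictions", artifact_object={"accuracy": 0.91})

# Scalar: report a metric point explicitly
task.get_logger().report_scalar(title="accuracy", series="val", value=0.91, iteration=1)
```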


@ -24,7 +24,7 @@ And that's it! This creates a [ClearML Task](../fundamentals/task.md) which capt
* Scalars (loss, learning rates)
* Console output
* General details such as machine details, runtime, creation date etc.
* Hyperparameters created with standard python packages (such as argparse, click, Python Fire, etc.)
* Hyperparameters created with standard Python packages (such as argparse, click, Python Fire, etc.)
* And more
You can view all the task details in the [WebApp](../webapp/webapp_exp_track_visual.md).
@ -70,7 +70,7 @@ To augment its automatic logging, ClearML also provides an explicit logging inte
See more information about explicitly logging information to a ClearML Task:
* [Models](../clearml_sdk/model_sdk.md#manually-logging-models)
* [Configuration](../clearml_sdk/task_sdk.md#configuration) (e.g. parameters, configuration files)
* [Artifacts](../clearml_sdk/task_sdk.md#artifacts) (e.g. output files or python objects created by a task)
* [Artifacts](../clearml_sdk/task_sdk.md#artifacts) (e.g. output files or Python objects created by a task)
* [Scalars](../clearml_sdk/task_sdk.md#scalars)
* [Text/Plots/Debug Samples](../fundamentals/logger.md#manual-reporting)


@ -24,7 +24,7 @@ And that's it! This creates a [ClearML Task](../fundamentals/task.md) which capt
* Scalars (loss, learning rates)
* Console output
* General details such as machine details, runtime, creation date etc.
* Hyperparameters created with standard python packages (e.g. argparse, click, Python Fire, etc.)
* Hyperparameters created with standard Python packages (e.g. argparse, click, Python Fire, etc.)
* And more
You can view all the task details in the [WebApp](../webapp/webapp_overview.md).
@ -68,7 +68,7 @@ To augment its automatic logging, ClearML also provides an explicit logging inte
See more information about explicitly logging information to a ClearML Task:
* [Models](../clearml_sdk/model_sdk.md#manually-logging-models)
* [Configuration](../clearml_sdk/task_sdk.md#configuration) (e.g. parameters, configuration files)
* [Artifacts](../clearml_sdk/task_sdk.md#artifacts) (e.g. output files or python objects created by a task)
* [Artifacts](../clearml_sdk/task_sdk.md#artifacts) (e.g. output files or Python objects created by a task)
* [Scalars](../clearml_sdk/task_sdk.md#scalars)
* [Text/Plots/Debug Samples](../fundamentals/logger.md#manual-reporting)


@ -7,7 +7,7 @@ If you are not already using ClearML, see [Getting Started](../getting_started/d
instructions.
:::
[`click`](https://click.palletsprojects.com) is a python package for creating command-line interfaces. ClearML integrates
[`click`](https://click.palletsprojects.com) is a Python package for creating command-line interfaces. ClearML integrates
seamlessly with `click` and automatically logs its command-line parameters.
All you have to do is add two lines of code:
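Those two lines are the standard ClearML setup (project and task names are placeholders):
```python
from clearml import Task
task = Task.init(project_name="examples", task_name="click demo")
```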


@ -24,7 +24,7 @@ And that's it! This creates a [ClearML Task](../fundamentals/task.md) which capt
* Scalars (loss, learning rates)
* Console output
* General details such as machine details, runtime, creation date etc.
* Hyperparameters created with standard python packages (e.g. argparse, click, Python Fire, etc.)
* Hyperparameters created with standard Python packages (e.g. argparse, click, Python Fire, etc.)
* And more
You can view all the task details in the [WebApp](../webapp/webapp_overview.md).
@ -68,7 +68,7 @@ To augment its automatic logging, ClearML also provides an explicit logging inte
See more information about explicitly logging information to a ClearML Task:
* [Models](../clearml_sdk/model_sdk.md#manually-logging-models)
* [Configuration](../clearml_sdk/task_sdk.md#configuration) (e.g. parameters, configuration files)
* [Artifacts](../clearml_sdk/task_sdk.md#artifacts) (e.g. output files or python objects created by a task)
* [Artifacts](../clearml_sdk/task_sdk.md#artifacts) (e.g. output files or Python objects created by a task)
* [Scalars](../clearml_sdk/task_sdk.md#scalars)
* [Text/Plots/Debug Samples](../fundamentals/logger.md#manual-reporting)


@ -25,7 +25,7 @@ task = Task.init(task_name="<task_name>", project_name="<project_name>")
This will create a [ClearML Task](../fundamentals/task.md) that captures your script's information, including Git details,
uncommitted code, python environment, all information logged through `TensorboardLogger`, and more.
uncommitted code, Python environment, all information logged through `TensorboardLogger`, and more.
Visualize all the captured information in the task's page in ClearML's [WebApp](#webapp).
@ -45,7 +45,7 @@ Integrate ClearML with the following steps:
This creates a [ClearML Task](../fundamentals/task.md) called `ignite` in the `examples` project, which captures your
script's information, including Git details, uncommitted code, and python environment.
script's information, including Git details, uncommitted code, and Python environment.
You can also pass the following parameters to the `ClearMLLogger` object:
* `task_type` - The type of task (see [task types](../fundamentals/task.md#task-types)).


@ -70,7 +70,7 @@ To augment its automatic logging, ClearML also provides an explicit logging inte
See more information about explicitly logging information to a ClearML Task:
* [Models](../clearml_sdk/model_sdk.md#manually-logging-models)
* [Configuration](../clearml_sdk/task_sdk.md#configuration) (e.g. parameters, configuration files)
* [Artifacts](../clearml_sdk/task_sdk.md#artifacts) (e.g. output files or python objects created by a task)
* [Artifacts](../clearml_sdk/task_sdk.md#artifacts) (e.g. output files or Python objects created by a task)
* [Scalars](../clearml_sdk/task_sdk.md#scalars)
* [Text/Plots/Debug Samples](../fundamentals/logger.md#manual-reporting)


@ -14,7 +14,7 @@ class is used to create a ClearML Task to log LangChain assets and metrics.
Integrate ClearML with the following steps:
1. Set up the `ClearMLCallbackHandler`. The following code creates a [ClearML Task](../fundamentals/task.md) called
`llm` in the `langchain_callback_demo` project, which captures your script's information, including Git details,
uncommitted code, and python environment:
uncommitted code, and Python environment:
```python
from langchain.callbacks import ClearMLCallbackHandler
from langchain_openai import OpenAI
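# A hedged completion of this setup; the handler arguments mirror the
# description above, and the model/temperature choice is illustrative.
clearml_callback = ClearMLCallbackHandler(
    project_name="langchain_callback_demo",
    task_name="llm",
)
llm = OpenAI(temperature=0, callbacks=[clearml_callback])
```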
@ -60,7 +60,7 @@ To augment its automatic logging, ClearML also provides an explicit logging inte
See more information about explicitly logging information to a ClearML Task:
* [Models](../clearml_sdk/model_sdk.md#manually-logging-models)
* [Configuration](../clearml_sdk/task_sdk.md#configuration) (e.g. parameters, configuration files)
* [Artifacts](../clearml_sdk/task_sdk.md#artifacts) (e.g. output files or python objects created by a task)
* [Artifacts](../clearml_sdk/task_sdk.md#artifacts) (e.g. output files or Python objects created by a task)
* [Scalars](../clearml_sdk/task_sdk.md#scalars)
* [Text/Plots/Debug Samples](../fundamentals/logger.md#manual-reporting)


@ -69,7 +69,7 @@ To augment its automatic logging, ClearML also provides an explicit logging inte
See more information about explicitly logging information to a ClearML Task:
* [Models](../clearml_sdk/model_sdk.md#manually-logging-models)
* [Configuration](../clearml_sdk/task_sdk.md#configuration) (e.g. parameters, configuration files)
* [Artifacts](../clearml_sdk/task_sdk.md#artifacts) (e.g. output files or python objects created by a task)
* [Artifacts](../clearml_sdk/task_sdk.md#artifacts) (e.g. output files or Python objects created by a task)
* [Scalars](../clearml_sdk/task_sdk.md#scalars)
* [Text/Plots/Debug Samples](../fundamentals/logger.md#manual-reporting)


@ -21,7 +21,7 @@ And that's it! This creates a [ClearML Task](../fundamentals/task.md) which capt
* Source code and uncommitted changes
* Installed packages
* MegEngine model files
* Hyperparameters created with standard python packages (e.g. argparse, click, Python Fire, etc.)
* Hyperparameters created with standard Python packages (e.g. argparse, click, Python Fire, etc.)
* Scalars logged to popular frameworks like TensorBoard
* Console output
* General details such as machine details, runtime, creation date etc.
@ -65,7 +65,7 @@ To augment its automatic logging, ClearML also provides an explicit logging inte
See more information about explicitly logging information to a ClearML Task:
* [Models](../clearml_sdk/model_sdk.md#manually-logging-models)
* [Configuration](../clearml_sdk/task_sdk.md#configuration) (e.g. parameters, configuration files)
* [Artifacts](../clearml_sdk/task_sdk.md#artifacts) (e.g. output files or python objects created by a task)
* [Artifacts](../clearml_sdk/task_sdk.md#artifacts) (e.g. output files or Python objects created by a task)
* [Scalars](../clearml_sdk/task_sdk.md#scalars)
* [Text/Plots/Debug Samples](../fundamentals/logger.md#manual-reporting)


@ -65,7 +65,7 @@ To augment its automatic logging, ClearML also provides an explicit logging inte
See more information about explicitly logging information to a ClearML Task:
* [Models](../clearml_sdk/model_sdk.md#manually-logging-models)
* [Configuration](../clearml_sdk/task_sdk.md#configuration) (e.g. parameters, configuration files)
* [Artifacts](../clearml_sdk/task_sdk.md#artifacts) (e.g. output files or python objects created by a task)
* [Artifacts](../clearml_sdk/task_sdk.md#artifacts) (e.g. output files or Python objects created by a task)
* [Scalars](../clearml_sdk/task_sdk.md#scalars)
* [Text/Plots/Debug Samples](../fundamentals/logger.md#manual-reporting)


@ -95,7 +95,7 @@ To augment its automatic logging, ClearML also provides an explicit logging inte
See more information about explicitly logging information to a ClearML Task:
* [Models](../clearml_sdk/model_sdk.md#manually-logging-models)
* [Configuration](../clearml_sdk/task_sdk.md#configuration) (e.g. parameters, configuration files)
* [Artifacts](../clearml_sdk/task_sdk.md#artifacts) (e.g. output files or python objects created by a task)
* [Artifacts](../clearml_sdk/task_sdk.md#artifacts) (e.g. output files or Python objects created by a task)
* [Scalars](../clearml_sdk/task_sdk.md#scalars)
* [Text/Plots/Debug Samples](../fundamentals/logger.md#manual-reporting)


@ -24,7 +24,7 @@ And that's it! This creates a [ClearML Task](../fundamentals/task.md) which capt
* Joblib model files
* Console output
* General details such as machine details, runtime, creation date etc.
* Hyperparameters created with standard python packages (e.g. argparse, click, Python Fire, etc.)
* Hyperparameters created with standard Python packages (e.g. argparse, click, Python Fire, etc.)
* And more
You can view all the task details in the [WebApp](../webapp/webapp_exp_track_visual.md).
@ -63,7 +63,7 @@ To augment its automatic logging, ClearML also provides an explicit logging inte
See more information about explicitly logging information to a ClearML Task:
* [Models](../clearml_sdk/model_sdk.md#manually-logging-models)
* [Configuration](../clearml_sdk/task_sdk.md#configuration) (e.g. parameters, configuration files)
* [Artifacts](../clearml_sdk/task_sdk.md#artifacts) (e.g. output files or python objects created by a task)
* [Artifacts](../clearml_sdk/task_sdk.md#artifacts) (e.g. output files or Python objects created by a task)
* [Scalars](../clearml_sdk/task_sdk.md#scalars)
* [Text/Plots/Debug Samples](../fundamentals/logger.md#manual-reporting)


@ -18,7 +18,7 @@ task = Task.init(task_name="<task_name>", project_name="<project_name>")
This will create a [ClearML Task](../fundamentals/task.md) that captures your script's information, including Git details,
uncommitted code, python environment, your `seaborn` plots, and more. View the seaborn plots in the [WebApp](../webapp/webapp_overview.md),
uncommitted code, Python environment, your `seaborn` plots, and more. View the seaborn plots in the [WebApp](../webapp/webapp_overview.md),
in the task's **Plots** tab.
![Seaborn plot](../img/integrations_seaborn_plots.png)


@ -8,7 +8,7 @@ logging metrics, model files, plots, debug samples, and more, so you can gain mo
## Setup
1. Install the `clearml` python package:
1. Install the `clearml` Python package:
```commandline
pip install clearml
```


@ -17,7 +17,7 @@ task = Task.init(task_name="<task_name>", project_name="<project_name>")
This will create a [ClearML Task](../fundamentals/task.md) that captures your script's information, including Git details,
uncommitted code, python environment, your TensorBoard metrics, plots, images, and text.
uncommitted code, Python environment, your TensorBoard metrics, plots, images, and text.
View the TensorBoard outputs in the [WebApp](../webapp/webapp_overview.md), in the task's page.
@ -52,7 +52,7 @@ To augment its automatic logging, ClearML also provides an explicit logging inte
See more information about explicitly logging information to a ClearML Task:
* [Models](../clearml_sdk/model_sdk.md#manually-logging-models)
* [Configuration](../clearml_sdk/task_sdk.md#configuration) (e.g. parameters, configuration files)
* [Artifacts](../clearml_sdk/task_sdk.md#artifacts) (e.g. output files or python objects created by a task)
* [Artifacts](../clearml_sdk/task_sdk.md#artifacts) (e.g. output files or Python objects created by a task)
* [Scalars](../clearml_sdk/task_sdk.md#scalars)
* [Text/Plots/Debug Samples](../fundamentals/logger.md#manual-reporting)


@ -18,7 +18,7 @@ task = Task.init(task_name="<task_name>", project_name="<project_name>")
This will create a [ClearML Task](../fundamentals/task.md) that captures your script's information, including Git details,
uncommitted code, python environment, your TensorboardX metrics, plots, images, and text.
uncommitted code, Python environment, your TensorboardX metrics, plots, images, and text.
View the TensorboardX outputs in the [WebApp](../webapp/webapp_overview.md), in the task's page.
@ -51,7 +51,7 @@ To augment its automatic logging, ClearML also provides an explicit logging inte
See more information about explicitly logging information to a ClearML Task:
* [Models](../clearml_sdk/model_sdk.md#manually-logging-models)
* [Configuration](../clearml_sdk/task_sdk.md#configuration) (e.g. parameters, configuration files)
* [Artifacts](../clearml_sdk/task_sdk.md#artifacts) (e.g. output files or python objects created by a task)
* [Artifacts](../clearml_sdk/task_sdk.md#artifacts) (e.g. output files or Python objects created by a task)
* [Scalars](../clearml_sdk/task_sdk.md#scalars)
* [Text/Plots/Debug Samples](../fundamentals/logger.md#manual-reporting)


@ -68,7 +68,7 @@ To augment its automatic logging, ClearML also provides an explicit logging inte
See more information about explicitly logging information to a ClearML Task:
* [Models](../clearml_sdk/model_sdk.md#manually-logging-models)
* [Configuration](../clearml_sdk/task_sdk.md#configuration) (e.g. parameters, configuration files)
* [Artifacts](../clearml_sdk/task_sdk.md#artifacts) (e.g. output files or python objects created by a task)
* [Artifacts](../clearml_sdk/task_sdk.md#artifacts) (e.g. output files or Python objects created by a task)
* [Scalars](../clearml_sdk/task_sdk.md#scalars)
* [Text/Plots/Debug Samples](../fundamentals/logger.md#manual-reporting)


@ -9,7 +9,7 @@ ClearML automatically logs Transformer's models, parameters, scalars, and more.
All you have to do is install and set up ClearML:
1. Install the `clearml` python package:
1. Install the `clearml` Python package:
```commandline
pip install clearml
```


@ -25,7 +25,7 @@ And that's it! This creates a [ClearML Task](../fundamentals/task.md) which capt
* Scalars (loss, learning rates)
* Console output
* General details such as machine details, runtime, creation date etc.
* Hyperparameters created with standard python packages (e.g. argparse, click, Python Fire, etc.)
* Hyperparameters created with standard Python packages (e.g. argparse, click, Python Fire, etc.)
* And more
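As an illustration of the hyperparameter capture mentioned above, a sketch assuming a plain argparse script:

```python
import argparse
from clearml import Task

# Initializing the task first lets ClearML patch argparse, so the parsed
# arguments are logged under the task's Hyperparameters section
task = Task.init(project_name="examples", task_name="argparse capture")

parser = argparse.ArgumentParser()
parser.add_argument("--epochs", type=int, default=3)
parser.add_argument("--lr", type=float, default=5e-5)
args = parser.parse_args()
```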
:::tip Logging Plots
@ -89,7 +89,7 @@ To augment its automatic logging, ClearML also provides an explicit logging inte
See more information about explicitly logging information to a ClearML Task:
* [Models](../clearml_sdk/model_sdk.md#manually-logging-models)
* [Configuration](../clearml_sdk/task_sdk.md#configuration) (e.g. parameters, configuration files)
* [Artifacts](../clearml_sdk/task_sdk.md#artifacts) (e.g. output files or python objects created by a task)
* [Artifacts](../clearml_sdk/task_sdk.md#artifacts) (e.g. output files or Python objects created by a task)
* [Scalars](../clearml_sdk/task_sdk.md#scalars)
* [Text/Plots/Debug Samples](../fundamentals/logger.md#manual-reporting)

View File

@ -11,7 +11,7 @@ built in logger:
* Turn your newly trained YOLOv5 model into an API with just a few commands using [ClearML Serving](../clearml_serving/clearml_serving.md)
## Setup
1. Install the clearml python package:
1. Install the clearml Python package:
```commandline
pip install clearml

View File

@ -22,7 +22,7 @@ segmentation, and classification. Get the most out of YOLOv8 with ClearML:
## Setup
1. Install the `clearml` python package:
1. Install the `clearml` Python package:
```commandline
pip install clearml
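With the package installed, a training run can then be kicked off as usual — a sketch assuming the `ultralytics` package, whose trainer reports to ClearML when `clearml` is importable:

```python
from clearml import Task
from ultralytics import YOLO

# Explicit initialization keeps control over the project and task names
task = Task.init(project_name="examples", task_name="yolov8 training")

model = YOLO("yolov8n.pt")  # pretrained nano checkpoint
model.train(data="coco128.yaml", epochs=3, imgsz=640)  # metrics flow to the task
```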

View File

@ -20,7 +20,7 @@ for more details.
ClearML pipelines are created from code using one of the following:
* [PipelineController](pipelines_sdk_tasks.md) class - A pythonic interface for defining and configuring the pipeline
controller and its steps. The controller and steps can be functions in your python code, or existing [ClearML tasks](../fundamentals/task.md).
controller and its steps. The controller and steps can be functions in your Python code, or existing [ClearML tasks](../fundamentals/task.md).
* [PipelineDecorator](pipelines_sdk_function_decorators.md) class - A set of Python decorators which transform your
functions into the pipeline controller and steps
@ -35,7 +35,7 @@ example of a pipeline with concurrent steps.
ClearML supports multiple modes for pipeline execution:
* **Remote Mode** (default) - In this mode, the pipeline controller logic is executed through a designated queue, and all
the pipeline steps are launched remotely through their respective queues. Since each task is executed independently,
it can have control over its git repository (if needed), required python packages, and the specific container to use.
it can have control over its git repository (if needed), required Python packages, and the specific container to use.
* **Local Mode** - In this mode, the pipeline is executed locally, and the steps are executed as sub-processes. Each
subprocess uses the exact same Python environment as the main pipeline logic.
* **Debugging Mode** (for PipelineDecorator) - In this mode, the entire pipeline is executed locally, with the pipeline
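To make the decorator interface concrete, a minimal sketch (project and step names are placeholders; by default, running this launches in remote mode):

```python
from clearml import PipelineDecorator

@PipelineDecorator.component(return_values=["data"])
def load_data():
    return [1, 2, 3]

@PipelineDecorator.component(return_values=["total"])
def process(data):
    return sum(data)

@PipelineDecorator.pipeline(name="demo pipeline", project="examples", version="1.0")
def pipeline_logic():
    data = load_data()    # becomes a pipeline step with its own task
    print(process(data))  # step outputs are passed back as artifacts

if __name__ == "__main__":
    pipeline_logic()
```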

View File

@ -224,7 +224,7 @@ You can run the pipeline logic locally, while keeping the pipeline components ex
:::
#### Debugging Mode
In debugging mode, the pipeline controller and all components are treated as regular python functions, with components
In debugging mode, the pipeline controller and all components are treated as regular Python functions, with components
called synchronously. This mode is great to debug the components and design the pipeline as the entire pipeline is
executed on the developer machine with full ability to debug each function call. Call [`PipelineDecorator.debug_pipeline`](../references/sdk/automation_controller_pipelinecontroller.md#pipelinedecoratordebug_pipeline)
before the main pipeline logic function call.
@ -241,7 +241,7 @@ if __name__ == '__main__':
In local mode, the pipeline controller creates Tasks for each component, and component functions calls are translated
into sub-processes running on the same machine. Notice that the data is passed between the components and the logic with
the exact same mechanism as in the remote mode (i.e. hyperparameters / artifacts), with the exception that the execution
itself is local. Notice that each subprocess is using the exact same python environment as the main pipeline logic. Call
itself is local. Notice that each subprocess is using the exact same Python environment as the main pipeline logic. Call
[`PipelineDecorator.run_locally`](../references/sdk/automation_controller_pipelinecontroller.md#pipelinedecoratorrun_locally)
before the main pipeline logic function.
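A sketch of selecting either mode before invoking a decorator-defined pipeline:

```python
from clearml import PipelineDecorator

# Debugging mode: everything runs synchronously as plain function calls
PipelineDecorator.debug_pipeline()

# ...or local mode: components run as sub-processes on this machine
# PipelineDecorator.run_locally()

pipeline_logic()  # the @PipelineDecorator.pipeline function from your script
```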

View File

@ -39,7 +39,7 @@ ClearML k8s glue default pod label was changed to `CLEARML=agent` (instead of `T
- Update task `status_message` for non-responsive or hanging pods
- Support the `agent.docker_force_pull` configuration option for scheduled pods
- Add docker example for running the k8s glue as a pod in a k8s cluster
- Add `agent.ignore_requested_python_version` configuration option to ignore any requested python version (default false, see [here](https://github.com/allegroai/clearml-agent/blob/db57441c5dda43d8e38f01d7f52f047913e95ba5/docs/clearml.conf#L45))
- Add `agent.ignore_requested_python_version` configuration option to ignore any requested Python version (default false, see [here](https://github.com/allegroai/clearml-agent/blob/db57441c5dda43d8e38f01d7f52f047913e95ba5/docs/clearml.conf#L45))
- Add `agent.docker_internal_mounts` configuration option to control containers internal mounts (non-root containers, see [here](https://github.com/allegroai/clearml-agent/blob/db57441c5dda43d8e38f01d7f52f047913e95ba5/docs/clearml.conf#L184))
- Add support for `-r requirements.txt` in the Installed Packages section
- Add support for `CLEARML_AGENT_INITIAL_CONNECT_RETRY_OVERRIDE` environment variable to override initial server connection behavior (defaults to true, allows boolean value or an explicit number specifying the number of connect retries)

View File

@ -15,7 +15,7 @@ title: Version 1.2
**Bug Fixes**
- Fix `CLEARML_AGENT_SKIP_PIP_VENV_INSTALL` fails to find python executable
- Fix `CLEARML_AGENT_SKIP_PIP_VENV_INSTALL` fails to find Python executable
- Fix `apt-get update` failure causes `apt-get install` not to be executed
### ClearML Agent 1.2.1
@ -38,7 +38,7 @@ title: Version 1.2
**Bug Fixes**
- Fix virtualenv python interpreter used ([ClearML Agent GitHub PR #98](https://github.com/allegroai/clearml-agent/pull/98))
- Fix virtualenv Python interpreter used ([ClearML Agent GitHub PR #98](https://github.com/allegroai/clearml-agent/pull/98))
- Fix typing package incorrectly required for Python>3.5 ([ClearML Agent GitHub PR #103](https://github.com/allegroai/clearml-agent/pull/103))
- Fix symbolic links not copied from cached VCS into working copy (windows platform will result with default copy content instead of original symbolic link) ([ClearML Agent GitHub PR #89](https://github.com/allegroai/clearml-agent/pull/89))
- Fix agent fails to check out code from main branch when branch/commit is not explicitly specified ([ClearML GitHub issue #551](https://github.com/allegroai/clearml/issues/551))

View File

@ -40,7 +40,7 @@ those matching these filters to be used when running containers
**New Features and Improvements**
* Add `NO_DOCKER` flag to `clearml-agent-services` entrypoint ([ClearML Agent GitHub PR #206](https://github.com/allegroai/clearml-agent/pull/206))
* Use `venv` module if `virtualenv` is not supported
* Find the correct python version when using a pre-installed python environment
* Find the correct Python version when using a pre-installed python environment
* Add `/bin/bash` support in the task's `script.binary` property
* Add support for `.ipynb` script entry files (install nbconvert in runtime, convert file to python and execute the
python script). Includes `CLEARML_AGENT_FORCE_TASK_INIT` patching of `.ipynb` files (post-python conversion)

View File

@ -36,7 +36,7 @@ title: Version 3.20
* Add Administrator identity provider management UI: administrators can add and manage multiple identity providers
* New UI experiment table comparative view: compare plots and scalars of all selected experiments
* Add UI project metric snapshot support for multiple metrics
* Add UI experiment display of original python requirements along with actual packages used.
* Add UI experiment display of original Python requirements along with actual packages used.
* Add compressed UI experiment table info panel mode displaying only experiment name and status
* Add "x unified" hover mode to UI plots
* Add option to view metadata of published dataset versions in UI Hyper-Dataset list view

View File

@ -20,7 +20,7 @@ title: Version 1.14
**New Features and Improvements**
* New UI experiment table comparative view: compare plots and scalars of all selected experiments
* Add UI experiment display of original python requirements along with actual packages used ([ClearML GitHub issue #793](https://github.com/allegroai/clearml/issues/793))
* Add UI experiment display of original Python requirements along with actual packages used ([ClearML GitHub issue #793](https://github.com/allegroai/clearml/issues/793))
* Add UI project metric snapshot support for multiple metrics
* Add compressed UI experiment table info panel mode displaying only experiment name and status
* Add "x unified" hover mode to UI plots

View File

@ -208,7 +208,7 @@ title: Version 1.1
- Add `Task.get_configuration_object_as_dict()`
- Add `docker_image` argument to `Task.set_base_docker()` (deprecate `docker_cmd`)
- Add `auto_version_bump` argument to `PipelineController`
- Add `sdk.development.detailed_import_report` configuration option to provide a detailed report of all python package imports
- Add `sdk.development.detailed_import_report` configuration option to provide a detailed report of all Python package imports
- Set current Task as Dataset parent when creating dataset
- Add support for deferred configuration
- Examples

View File

@ -51,7 +51,7 @@ title: Version 1.6
* Fix error when connecting an input model
* Fix deadlocks, including:
* Change thread Event/Lock to a process fork safe threading objects
* Use file lock instead of process lock to avoid future deadlocks since python process lock is not process safe
* Use file lock instead of process lock to avoid future deadlocks since Python process lock is not process safe
(killing a process holding a lock will Not release the lock)
* Fix `StorageManager.list()` on a local Windows path
* Fix model not created in the current project

View File

@ -90,12 +90,12 @@ information).
You can edit the labels of credentials in your own workspace, or credentials that you created in other workspaces.
**To edit the credentials label:** hover over the desired credentials, and click <img src="/docs/latest/icons/ico-edit.svg" alt="Edit Pencil" className="icon size-md" />
**To edit the credentials label:** hover over the desired credentials, and click <img src="/docs/latest/icons/ico-edit.svg" alt="Edit Pencil" className="icon size-md" /> .
You can revoke any credentials in your own workspace, or credentials that you created in other workspaces. Once revoked,
these credentials cannot be recovered.
**To revoke ClearML credentials:** hover over the desired credentials, and click <img src="/docs/latest/icons/ico-trash.svg" alt="Trash can" className="icon size-md" />
**To revoke ClearML credentials:** hover over the desired credentials, and click <img src="/docs/latest/icons/ico-trash.svg" alt="Trash can" className="icon size-md" /> .
### AI Application Gateway Tokens

View File

@ -71,7 +71,7 @@ The comparison pages provide the following views:
### Side-by-side Textual Comparison
In the **Details** and **Hyperparameters** (Values view) tabs, you can view differences in the tasks' parameters' nominal
values. The **Details** tab displays the tasks' execution details (source code, uncommitted changes, python packages),
values. The **Details** tab displays the tasks' execution details (source code, uncommitted changes, Python packages),
models, artifacts, configuration objects, and additional general information. **Hyperparameters** (Values view) displays the
tasks' hyperparameter and their values.

View File

@ -37,7 +37,7 @@ You can create tasks by:
* Running code instrumented with ClearML (see [Task Creation](../clearml_sdk/task_sdk.md#task-creation))
* [Cloning an existing task](webapp_exp_reproducing.md)
* Via CLI using [`clearml-task`](../apps/clearml_task.md)
* Through the UI interface: Input the task's details, including its source code and python requirements, and then
* Through the UI interface: Input the task's details, including its source code and Python requirements, and then
run it through a [ClearML Queue](../fundamentals/agents_and_queues.md#what-is-a-queue) or save it as a *draft*.
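For the CLI route listed above, an example `clearml-task` invocation (project, script, and queue names are placeholders):

```commandline
clearml-task --project examples --name remote-run --script train.py --queue default --args lr=0.003 batch_size=64
```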
To create a task through the UI interface:
@ -57,7 +57,7 @@ To create a task through the UI interface:
* Binary - The binary executing the script (e.g. python3, bash etc).
* Type - How the code is provided
* Script - The name of the file to run using the above specified binary
* Module - The name of a python module to run (Python only, see [Python module specification](https://docs.python.org/3/using/cmdline.html#cmdoption-m))
* Module - The name of a Python module to run (Python only, see [Python module specification](https://docs.python.org/3/using/cmdline.html#cmdoption-m))
* Custom code - Directly provide the code to run. Write code, or upload a file:
* File name - The script in which your code is stored. Click `Upload` to upload an existing file.
* Content - The actual code. Click `Edit` to modify the script's contents.
@ -66,7 +66,7 @@ To create a task through the UI interface:
* **Arguments** (*optional*) - Add [hyperparameter](../fundamentals/hyperparameters.md) values.
* **Environment** (*optional*) - Set up the task's execution environment
* Python - Python environment settings
* Use Poetry - Force Poetry instead of pip package manager. Disables additional python settings.
* Use Poetry - Force Poetry instead of pip package manager. Disables additional Python settings.
* Preinstalled venv - The name of a virtual environment available in the task's execution environment to use when
running the task. Additionally, specify how to use the virtual environment:
* Skip - Try to automatically detect an available virtual environment, and use it as is.