Getting Started Refactor part 1

This commit is contained in:
revital
2025-02-20 15:34:07 +02:00
parent ef47124282
commit 1bc295cd86
50 changed files with 379 additions and 536 deletions

View File

@@ -23,7 +23,7 @@ VS Code remote sessions use ports 8878 and 8898 respectively.
## Prerequisites
-* `clearml` installed and configured. See [Getting Started](../getting_started/ds/ds_first_steps.md) for details.
+* `clearml` installed and configured. See [ClearML Setup](../clearml_sdk/clearml_sdk_setup) for details.
* At least one `clearml-agent` running on a remote host. See [installation](../clearml_agent/clearml_agent_setup.md#installation) for details.
* An SSH client installed on your machine. To verify, open your terminal and execute `ssh`. If you did not receive an
error, you are good to go.

View File

@@ -37,7 +37,7 @@ lineage and content information. See [dataset UI](../webapp/datasets/webapp_data
## Setup
-`clearml-data` comes built-in with the `clearml` Python package! Check out the [Getting Started](../getting_started/ds/ds_first_steps.md)
+`clearml-data` comes built-in with the `clearml` Python package! Check out the [ClearML Setup](../clearml_sdk/clearml_sdk_setup)
guide for more info!
## Using ClearML Data

View File

@@ -7,7 +7,7 @@ tasks for you, and an extensive set of powerful features and functionality you c
and other workflows.
:::tip Installation
-For installation instructions, see [Getting Started](../getting_started/ds/ds_first_steps.md#install-clearml).
+For installation instructions, see [ClearML Setup](../clearml_sdk/clearml_sdk_setup#install-clearml).
:::
The ClearML Python Package collects the scripts' entire execution information, including:

View File

@@ -1,7 +1,9 @@
---
-title: First Steps
+title: ClearML Python Package
---
This is a step-by-step guide for installing the `clearml` Python package and connecting it to the ClearML Server. Once done,
you can integrate `clearml` into your code.
## Install ClearML
@@ -44,8 +46,8 @@ pip install clearml
CLEARML_CONFIG_FILE = MyOtherClearML.conf
```
-For more information about running tasks inside Docker containers, see [ClearML Agent Deployment](../../clearml_agent/clearml_agent_deployment.md)
-and [ClearML Agent Reference](../../clearml_agent/clearml_agent_ref.md).
+For more information about running tasks inside Docker containers, see [ClearML Agent Deployment](../clearml_agent/clearml_agent_deployment.md)
+and [ClearML Agent Reference](../clearml_agent/clearml_agent_ref.md).
</Collapsible>
@@ -83,7 +85,7 @@ pip install clearml
CLEARML setup completed successfully.
```
-Now you can integrate ClearML into your code! Continue [here](#auto-log-experiment).
+Now you can integrate ClearML into your code! Continue [here](../clearml_sdk/clearml_sdk_setup#auto-log-experiment).
### Jupyter Notebook
To use ClearML with Jupyter Notebook, you need to configure ClearML Server access credentials for your notebook.
@@ -94,49 +96,3 @@ To use ClearML with Jupyter Notebook, you need to configure ClearML Server acces
1. Add these commands to your notebook
Now you can use ClearML in your notebook!
## Auto-log Experiment
In ClearML, experiments are organized as [Tasks](../../fundamentals/task.md).
ClearML automatically logs your task and code, including outputs and parameters from popular ML frameworks,
once you integrate the ClearML [SDK](../../clearml_sdk/clearml_sdk.md) with your code. To control what ClearML automatically logs, see this [FAQ](../../faq.md#controlling_logging).
At the beginning of your code, import the `clearml` package:
```python
from clearml import Task
```
:::tip Full Automatic Logging
To ensure full automatic logging, it is recommended to import the `clearml` package at the top of your entry script.
:::
Then initialize the Task object in your `main()` function, or at the beginning of the script.
```python
task = Task.init(project_name='great project', task_name='best task')
```
If the project does not already exist, a new one is created automatically.
The console should display the following output:
```
ClearML Task: created new task id=1ca59ef1f86d44bd81cb517d529d9e5a
2021-07-25 13:59:09
ClearML results page: https://app.clear.ml/projects/4043a1657f374e9298649c6ba72ad233/experiments/1ca59ef1f86d44bd81cb517d529d9e5a/output/log
2021-07-25 13:59:16
```
**That's it!** You are done integrating ClearML with your code :)
Now, [command-line arguments](../../fundamentals/hyperparameters.md#tracking-hyperparameters), [console output](../../fundamentals/logger.md#types-of-logged-results) as well as Tensorboard and Matplotlib will automatically be logged in the UI under the created Task.
Sit back, relax, and watch your models converge :) or continue to see what else can be done with ClearML [here](ds_second_steps.md).
## YouTube Playlist
Or watch the **Getting Started** playlist on ClearML's YouTube Channel!
[![Watch the video](https://img.youtube.com/vi/bjWwZAzDxTY/hqdefault.jpg)](https://www.youtube.com/watch?v=bjWwZAzDxTY&list=PLMdIlCuMqSTnoC45ME5_JnsJX0zWqDdlO&index=2)

View File

@@ -13,7 +13,7 @@ The following page goes over how to set up and upgrade `clearml-serving`.
## Initial Setup
1. Set up your [ClearML Server](../deploying_clearml/clearml_server.md) or use the
[free hosted service](https://app.clear.ml)
-1. Connect `clearml` SDK to the server, see instructions [here](../getting_started/ds/ds_first_steps.md#install-clearml)
+1. Connect the `clearml` SDK to the server, see instructions [here](../clearml_sdk/clearml_sdk_setup#install-clearml)
1. Install clearml-serving CLI:

View File

@@ -49,7 +49,7 @@ authentication, subdomains, and load balancers, and use any of its many configur
1. Optionally, [configure ClearML Server](clearml_server_config.md) for additional features, including subdomains and load balancers,
web login authentication, and the non-responsive task watchdog.
-1. [Connect the ClearML SDK to the ClearML Server](../getting_started/ds/ds_first_steps.md)
+1. [Connect the ClearML SDK to the ClearML Server](../clearml_sdk/clearml_sdk_setup)
## Updating

View File

@@ -150,4 +150,4 @@ The following section contains a list of AMI Image IDs per-region for the latest
## Next Step
To keep track of your experiments and/or data, the `clearml` package needs to communicate with your server.
-For instruction to connect the ClearML SDK to the server, see [Getting Started: First Steps](../getting_started/ds/ds_first_steps.md).
+For instructions on connecting the ClearML SDK to the server, see [ClearML Setup](../clearml_sdk/clearml_sdk_setup).

View File

@@ -7,7 +7,7 @@ provides custom images for each released version of ClearML Server. For a list o
[ClearML Server GCP Custom Image](#clearml-server-gcp-custom-image).
To keep track of your experiments and/or data, the `clearml` package needs to communicate with the server you have deployed.
-For instruction to connect the ClearML SDK to the server, see [Getting Started: First Steps](../getting_started/ds/ds_first_steps.md).
+For instructions on connecting the ClearML SDK to the server, see [ClearML Setup](../clearml_sdk/clearml_sdk_setup).
:::info
In order for `clearml` to work with a ClearML Server on GCP, set `CLEARML_API_DEFAULT_REQ_METHOD=PUT` or
@@ -155,4 +155,4 @@ The following section contains a list of Custom Image URLs (exported in differen
## Next Step
To keep track of your experiments and/or data, the `clearml` package needs to communicate with your server.
-For instruction to connect the ClearML SDK to the server, see [Getting Started: First Steps](../getting_started/ds/ds_first_steps.md).
+For instructions on connecting the ClearML SDK to the server, see [ClearML Setup](../clearml_sdk/clearml_sdk_setup).

View File

@@ -32,4 +32,4 @@ instructions in the [Security](clearml_server_security.md) page.
## Next Step
To keep track of your experiments and/or data, the `clearml` package needs to communicate with your server.
-For instruction to connect the ClearML SDK to the server, see [Getting Started: First Steps](../getting_started/ds/ds_first_steps.md).
+For instructions on connecting the ClearML SDK to the server, see [ClearML Setup](../clearml_sdk/clearml_sdk_setup).

View File

@@ -227,4 +227,4 @@ If needed, restore data and configuration by doing the following:
## Next Step
To keep track of your experiments and/or data, the `clearml` package needs to communicate with your server.
-For instruction to connect the ClearML SDK to the server, see [Getting Started: First Steps](../getting_started/ds/ds_first_steps.md).
+For instructions on connecting the ClearML SDK to the server, see [ClearML Setup](../clearml_sdk/clearml_sdk_setup).

View File

@@ -89,4 +89,4 @@ After deploying ClearML Server, the services expose the following node ports:
## Next Step
To keep track of your experiments and/or data, the `clearml` package needs to communicate with your server.
-For instruction to connect the ClearML SDK to the server, see [Getting Started: First Steps](../getting_started/ds/ds_first_steps.md).
+For instructions on connecting the ClearML SDK to the server, see [ClearML Setup](../clearml_sdk/clearml_sdk_setup).

View File

@@ -2,7 +2,7 @@
title: ClearML Modules
---
-- [**ClearML Python Package**](../getting_started/ds/ds_first_steps.md#install-clearml) (`clearml`) for integrating ClearML into your existing code-base.
+- [**ClearML Python Package**](auto_log_exp#install-clearml) (`clearml`) for integrating ClearML into your existing code-base.
- [**ClearML Server**](../deploying_clearml/clearml_server.md) (`clearml-server`) for storing task, model, and workflow data, and supporting the Web UI experiment manager. It is also the control plane for the MLOps.
- [**ClearML Agent**](../clearml_agent.md) (`clearml-agent`), the MLOps orchestration agent. Enabling task and workflow reproducibility, and scalability.
- [**ClearML Data**](../clearml_data/clearml_data.md) (`clearml-data`) data management and versioning on top of file-systems/object-storage.

View File

@@ -0,0 +1,59 @@
---
title: Auto-log Experiments
---
In ClearML, experiments are organized as [Tasks](../fundamentals/task.md).
When you integrate the ClearML SDK with your code, the ClearML task manager automatically captures:
* Source code and uncommitted changes
* Installed packages
* General information such as machine details, runtime, creation date etc.
* Model files, parameters, scalars, and plots from popular ML frameworks such as TensorFlow and PyTorch (see list of [supported frameworks](../clearml_sdk/task_sdk.md#automatic-logging))
* Console output
:::tip Automatic logging control
To control what ClearML automatically logs, see this [FAQ](../faq.md#controlling_logging).
:::
## To Auto-log Your Experiments
1. Install `clearml` and connect it to the ClearML Server (see [instructions](../clearml_sdk/clearml_sdk.md))
1. At the beginning of your code, import the `clearml` package:
```python
from clearml import Task
```
:::tip Full Automatic Logging
To ensure full automatic logging, it is recommended to import the `clearml` package at the top of your entry script.
:::
1. Initialize the Task object in your `main()` function, or at the beginning of the script.
```python
task = Task.init(project_name='great project', task_name='best task')
```
If the project does not already exist, a new one is created automatically.
The console should display the following output:
```
ClearML Task: created new task id=1ca59ef1f86d44bd81cb517d529d9e5a
2021-07-25 13:59:09
ClearML results page: https://app.clear.ml/projects/4043a1657f374e9298649c6ba72ad233/experiments/1ca59ef1f86d44bd81cb517d529d9e5a/output/log
2021-07-25 13:59:16
```
1. Click the results page link to go to the [task's detail page in the ClearML WebApp](../webapp/webapp_exp_track_visual.md),
where you can monitor the task's status, view all its logged data, visualize its results, and more!
![Info panel](../img/webapp_tracking_40.png#light-mode-only)
![Info panel](../img/webapp_tracking_40_dark.png#dark-mode-only)
**That's it!** You are done integrating ClearML with your code :)
Now, [command-line arguments](../fundamentals/hyperparameters.md#tracking-hyperparameters), [console output](../fundamentals/logger.md#types-of-logged-results), TensorBoard and Matplotlib, and much more will automatically be
logged in the UI under the created Task.
Sit back, relax, and watch your models converge :)
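As a sketch of the command-line argument capture mentioned above (the `clearml` calls are commented out so the snippet runs standalone; the project and task names are illustrative):

```python
import argparse

# With ClearML integrated, Task.init() automatically captures every
# argument registered on the parser -- no extra logging calls are needed.
# from clearml import Task
# task = Task.init(project_name='great project', task_name='best task')

parser = argparse.ArgumentParser(description='training entry point')
parser.add_argument('--epochs', type=int, default=3)
parser.add_argument('--lr', type=float, default=0.4)
args = parser.parse_args([])  # empty list: use the defaults, for illustration only

print(f'epochs={args.epochs} lr={args.lr}')
```

Every argument shows up in the WebApp under the Task's hyperparameters once `Task.init()` runs in the same script.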

View File

@@ -24,7 +24,7 @@ During early stages of model development, while code is still being modified hea
These setups can be folded into each other and that's great! If you have a GPU machine for each researcher, that's awesome!
The goal of this phase is to get a code, dataset, and environment set up, so you can start digging to find the best model!
-- [ClearML SDK](../../clearml_sdk/clearml_sdk.md) should be integrated into your code (check out [Getting Started](ds_first_steps.md)).
+- [ClearML SDK](../../clearml_sdk/clearml_sdk.md) should be integrated into your code (check out [ClearML Setup](../../clearml_sdk/clearml_sdk_setup.md)).
This helps visualizing the results and tracking progress.
- [ClearML Agent](../../clearml_agent.md) helps moving your work to other machines without the hassle of rebuilding the environment every time,
while also creating an easy queue interface that easily lets you drop your tasks to be executed one by one

View File

@@ -1,193 +0,0 @@
---
title: Next Steps
---
So, you've already [installed ClearML's Python package](ds_first_steps.md) and run your first task!
Now, you'll learn how to track Hyperparameters, Artifacts, and Metrics!
## Accessing Tasks
Every previously executed experiment is stored as a Task.
A Task's project and name can be changed after it has been executed.
A Task is also assigned an auto-generated unique identifier (a UUID string) that cannot be changed and always locates the same Task in the system.
Retrieve a Task object programmatically by querying the system based on either the Task ID,
or project and name combination. You can also query tasks based on their properties, like tags (see [Querying Tasks](../../clearml_sdk/task_sdk.md#querying--searching-tasks)).
```python
prev_task = Task.get_task(task_id='123456deadbeef')
```
Once you have a Task object you can query the state of the Task, get its model(s), scalars, parameters, etc.
## Log Hyperparameters
For full reproducibility, it's paramount to save each task's hyperparameters. Since hyperparameters can have substantial impact
on model performance, saving and comparing them between tasks is sometimes the key to understanding model behavior.
ClearML supports logging `argparse` module arguments out of the box, so once ClearML is integrated into the code, it automatically logs all parameters provided to the argument parser.
You can also log parameter dictionaries (very useful when parsing an external configuration file and storing as a dict object),
whole configuration files, or even custom objects or [Hydra](https://hydra.cc/docs/intro/) configurations!
```python
params_dictionary = {'epochs': 3, 'lr': 0.4}
task.connect(params_dictionary)
```
See [Configuration](../../clearml_sdk/task_sdk.md#configuration) for all hyperparameter logging options.
## Log Artifacts
ClearML lets you easily store the output products of a task: model snapshots / weights files, preprocessed data, feature representations, and more!
Essentially, artifacts are files (or Python objects) uploaded from a script and stored alongside the Task.
These artifacts can be easily accessed through the web UI or programmatically.
Artifacts can be stored anywhere, either on the ClearML server, or any object storage solution or shared folder.
See all [storage capabilities](../../integrations/storage.md).
### Adding Artifacts
Upload a local file containing the preprocessed results of the data:
```python
task.upload_artifact(name='data', artifact_object='/path/to/preprocess_data.csv')
```
You can also upload an entire folder with all its content by passing the folder path (the folder will be zipped and uploaded as a single zip file).
```python
task.upload_artifact(name='folder', artifact_object='/path/to/folder/')
```
Lastly, you can upload an instance of an object; NumPy/Pandas/PIL images are supported with `npz`/`csv.gz`/`jpg` formats respectively.
If the object type is unknown, ClearML pickles it and uploads the pickle file.
```python
numpy_object = np.eye(100, 100)
task.upload_artifact(name='features', artifact_object=numpy_object)
```
For more artifact logging options, see [Artifacts](../../clearml_sdk/task_sdk.md#artifacts).
### Using Artifacts
Logged artifacts can be used by other Tasks, whether it's a pre-trained Model or processed data.
To use an artifact, first you have to get an instance of the Task that originally created it,
then you either download it and get its path, or get the artifact object directly.
For example, using a previously generated preprocessed data.
```python
preprocess_task = Task.get_task(task_id='preprocessing_task_id')
local_csv = preprocess_task.artifacts['data'].get_local_copy()
```
`task.artifacts` is a dictionary where the keys are the artifact names, and the returned object is the artifact object.
Calling `get_local_copy()` returns a local cached copy of the artifact. Therefore, next time you execute the code, you don't
need to download the artifact again.
Calling `get()` gets a deserialized pickled object.
Check out the [artifacts retrieval](https://github.com/clearml/clearml/blob/master/examples/reporting/artifacts_retrieval.py) example code.
### Models
Models are a special kind of artifact.
Models created by popular frameworks (such as PyTorch, TensorFlow, and scikit-learn) are automatically logged by ClearML.
All snapshots are automatically logged. To make sure model snapshots are also automatically uploaded (instead of only their local paths being saved),
pass a storage location for the model files to be uploaded to.
For example, upload all snapshots to an S3 bucket:
```python
task = Task.init(
project_name='examples',
task_name='storing model',
output_uri='s3://my_models/'
)
```
Now, whenever the framework (TensorFlow/Keras/PyTorch etc.) stores a snapshot, the model file is automatically uploaded to a task-specific folder in the bucket.
Loading models by a framework is also logged by the system; these models appear in a task's **Artifacts** tab,
under the "Input Models" section.
Check out model snapshots examples for [TensorFlow](https://github.com/clearml/clearml/blob/master/examples/frameworks/tensorflow/tensorflow_mnist.py),
[PyTorch](https://github.com/clearml/clearml/blob/master/examples/frameworks/pytorch/pytorch_mnist.py),
[Keras](https://github.com/clearml/clearml/blob/master/examples/frameworks/keras/keras_tensorboard.py),
[scikit-learn](https://github.com/clearml/clearml/blob/master/examples/frameworks/scikit-learn/sklearn_joblib_example.py).
#### Loading Models
Loading a previously trained model is quite similar to loading artifacts.
```python
prev_task = Task.get_task(task_id='the_training_task')
last_snapshot = prev_task.models['output'][-1]
local_weights_path = last_snapshot.get_local_copy()
```
Like before, you have to get the instance of the task that trained the original weights, then you can query the task for its output models (a list of snapshots) and get the latest snapshot.
:::note
Using TensorFlow, the snapshots are stored in a folder, meaning the `local_weights_path` will point to a folder containing your requested snapshot.
:::
As with artifacts, all models are cached, meaning the next time you run this code, no model needs to be downloaded.
Once one of the frameworks loads the weights file, the running task is automatically updated, with its "Input Model" pointing directly to the original training Task's model.
This feature lets you easily trace the full genealogy of every model trained and used by your system!
## Log Metrics
Full metrics logging is the key to finding the best performing model!
By default, ClearML automatically captures and logs everything reported to TensorBoard and Matplotlib.
Since not all metrics are tracked that way, you can also manually report metrics using a [`Logger`](../../fundamentals/logger.md) object.
You can log everything, from time series data and confusion matrices to HTML, Audio, and Video, to custom plotly graphs! Everything goes!
![Experiment plots](../../img/report_plotly.png#light-mode-only)
![Experiment plots](../../img/report_plotly_dark.png#dark-mode-only)
Once everything is neatly logged and displayed, use the [comparison tool](../../webapp/webapp_exp_comparing.md) to find the best configuration!
## Track Tasks
The task table is a powerful tool for creating dashboards and views of your own projects, your team's projects, or the entire development.
![Task table](../../img/webapp_experiment_table.png#light-mode-only)
![Task table](../../img/webapp_experiment_table_dark.png#dark-mode-only)
### Creating Leaderboards
Customize the [task table](../../webapp/webapp_exp_table.md) to fit your own needs, adding desired views of parameters, metrics, and tags.
You can filter and sort based on parameters and metrics, so creating custom views is simple and flexible.
Create a dashboard for a project, presenting the latest Models and their accuracy scores, for immediate insights.
It can also be used as a live leaderboard, showing the best performing tasks' status, updated in real time.
This is helpful to monitor your projects' progress, and to share it across the organization.
Any page is sharable by copying the URL from the address bar, allowing you to bookmark leaderboards or to send an exact view of a specific task or a comparison page.
You can also tag Tasks for visibility and filtering, letting you add more information about a task's execution.
Later, you can search by task name in the search bar, and filter tasks based on their tags, parameters, status, and more.
## What's Next?
This covers the basics of ClearML! Running through this guide you've learned how to log Parameters, Artifacts and Metrics!
If you want to learn more look at how we see the data science process in our [best practices](best_practices.md) page,
or check these pages out:
- Scale your work and deploy [ClearML Agents](../../clearml_agent.md)
- Develop on remote machines with [ClearML Session](../../apps/clearml_session.md)
- Structure your work and put it into [Pipelines](../../pipelines/pipelines.md)
- Improve your tasks with [Hyperparameter Optimization](../../hpo.md)
- Check out ClearML's integrations with your favorite ML frameworks like [TensorFlow](../../integrations/tensorflow.md),
[PyTorch](../../integrations/pytorch.md), [Keras](../../integrations/keras.md),
and more
## YouTube Playlist
All these tips and tricks are also covered in ClearML's **Getting Started** series on YouTube. Go check it out :)
[![Watch the video](https://img.youtube.com/vi/kyOfwVg05EM/hqdefault.jpg)](https://www.youtube.com/watch?v=kyOfwVg05EM&list=PLMdIlCuMqSTnoC45ME5_JnsJX0zWqDdlO&index=3)

View File

@@ -0,0 +1,120 @@
---
title: Logging and Using Task Artifacts
---
:::note
This tutorial assumes that you've already set up [ClearML](../clearml_sdk/clearml_sdk_setup.md)
:::
ClearML lets you easily store a task's output products, or **Artifacts**:
* [Model](#models) snapshot / weights file
* Preprocessing of your data
* Feature representation of data
* And more!
**Artifacts** are files or Python objects that are uploaded and stored alongside the Task.
These artifacts can be easily accessed by the web UI or programmatically.
Artifacts can be stored anywhere, either on the ClearML Server, or any object storage solution or shared folder.
See all [storage capabilities](../integrations/storage.md).
## Adding Artifacts
Let's create a [Task](../fundamentals/task.md) and add some artifacts to it.
1. Create a task using [`Task.init()`](../references/sdk/task.md#taskinit)
```python
from clearml import Task
task = Task.init(project_name='great project', task_name='task with artifacts')
```
1. Upload a local **file** using [`Task.upload_artifact()`](../references/sdk/task.md#upload_artifact), specifying the artifact's
name and its path:
```python
task.upload_artifact(name='data', artifact_object='/path/to/preprocess_data.csv')
```
1. Upload an **entire folder** with all its content by passing the folder path (the folder will be zipped and uploaded as a single zip file).
```python
task.upload_artifact(name='folder', artifact_object='/path/to/folder/')
```
1. Upload an instance of an object. NumPy/Pandas/PIL images are supported with `npz`/`csv.gz`/`jpg` formats respectively.
If the object type is unknown, ClearML pickles it and uploads the pickle file.
```python
numpy_object = np.eye(100, 100)
task.upload_artifact(name='features', artifact_object=numpy_object)
```
For more artifact logging options, see [Artifacts](../clearml_sdk/task_sdk.md#artifacts).
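The three upload styles above can be bundled into one hypothetical helper (a sketch only; the artifact names `data`, `folder`, and `features` are illustrative, and the `task` argument is any initialized ClearML Task):

```python
def upload_run_outputs(task, csv_path, folder_path, features):
    """Attach a file, a folder, and an in-memory object to a task.

    Hypothetical convenience wrapper around Task.upload_artifact();
    the artifact names used here are illustrative.
    """
    task.upload_artifact(name='data', artifact_object=csv_path)       # single file
    task.upload_artifact(name='folder', artifact_object=folder_path)  # zipped folder
    task.upload_artifact(name='features', artifact_object=features)   # Python object
```

In real code you would pass the object returned by `Task.init()` as `task`.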
### Using Artifacts
Logged artifacts can be used by other Tasks, whether it's a pre-trained Model or processed data.
To use an artifact, first you have to get an instance of the Task that originally created it,
then you either download it and get its path, or get the artifact object directly.
For example, using a previously generated preprocessed data.
```python
preprocess_task = Task.get_task(task_id='preprocessing_task_id')
local_csv = preprocess_task.artifacts['data'].get_local_copy()
```
`task.artifacts` is a dictionary where the keys are the artifact names, and the returned object is the artifact object.
Calling `get_local_copy()` returns a local cached copy of the artifact. Therefore, next time you execute the code, you don't
need to download the artifact again.
Calling `get()` gets a deserialized pickled object.
Check out the [artifacts retrieval](https://github.com/clearml/clearml/blob/master/examples/reporting/artifacts_retrieval.py) example code.
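As background for the pickling behavior described above, here is a stdlib-only sketch of the round trip ClearML performs for unknown object types (the file name is illustrative; real uploads go to the ClearML Server or your object storage):

```python
import os
import pickle
import tempfile

features = {'dims': (100, 100), 'note': 'stand-in for an unknown object type'}

# Upload side: an unknown object type is pickled to a file and stored as an artifact.
path = os.path.join(tempfile.mkdtemp(), 'features.pkl')
with open(path, 'wb') as f:
    pickle.dump(features, f)

# Retrieval side: artifact.get() deserializes the pickle back into a Python object.
with open(path, 'rb') as f:
    restored = pickle.load(f)

print(restored == features)  # → True
```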
### Models
Models are a special kind of artifact.
Models created by popular frameworks (such as PyTorch, TensorFlow, and scikit-learn) are automatically logged by ClearML.
All snapshots are automatically logged. To make sure model snapshots are also automatically uploaded (instead of only their local paths being saved),
pass a storage location for the model files to be uploaded to.
For example, upload all snapshots to an S3 bucket:
```python
task = Task.init(
project_name='examples',
task_name='storing model',
output_uri='s3://my_models/'
)
```
Now, whenever the framework (TensorFlow/Keras/PyTorch etc.) stores a snapshot, the model file is automatically uploaded to a task-specific folder in the bucket.
Loading models by a framework is also logged by the system; these models appear in a task's **Artifacts** tab,
under the "Input Models" section.
Check out model snapshots examples for [TensorFlow](https://github.com/clearml/clearml/blob/master/examples/frameworks/tensorflow/tensorflow_mnist.py),
[PyTorch](https://github.com/clearml/clearml/blob/master/examples/frameworks/pytorch/pytorch_mnist.py),
[Keras](https://github.com/clearml/clearml/blob/master/examples/frameworks/keras/keras_tensorboard.py),
[scikit-learn](https://github.com/clearml/clearml/blob/master/examples/frameworks/scikit-learn/sklearn_joblib_example.py).
#### Loading Models
Loading a previously trained model is quite similar to loading artifacts.
```python
prev_task = Task.get_task(task_id='the_training_task')
last_snapshot = prev_task.models['output'][-1]
local_weights_path = last_snapshot.get_local_copy()
```
Like before, you have to get the instance of the task that trained the original weights, then you can query the task for its output models (a list of snapshots) and get the latest snapshot.
:::note
Using TensorFlow, the snapshots are stored in a folder, meaning the `local_weights_path` will point to a folder containing your requested snapshot.
:::
As with artifacts, all models are cached, meaning the next time you run this code, no model needs to be downloaded.
Once one of the frameworks loads the weights file, the running task is automatically updated, with its "Input Model" pointing directly to the original training Task's model.
This feature lets you easily trace the full genealogy of every model trained and used by your system!

View File

@@ -1,225 +0,0 @@
---
title: First Steps
---
:::note
This tutorial assumes that you've already [signed up](https://app.clear.ml) to ClearML
:::
ClearML provides tools for **automation**, **orchestration**, and **tracking**, all key in performing effective MLOps and LLMOps.
Effective MLOps and LLMOps rely on the ability to scale work beyond one's own computer. Moving from your own machine can be time-consuming.
Even assuming that you have all the drivers and applications installed, you still need to manage multiple Python environments
for different packages / package versions, or worse - manage different Docker containers for different package versions.
Not to mention, when working on remote machines, executing experiments, tracking what's running where, and making sure machines
are fully utilized at all times become daunting tasks.
This can create overhead that derails you from your core work!
ClearML Agent was designed to deal with such issues and more! It is a tool responsible for executing tasks on remote machines: on-premises or in the cloud! ClearML Agent provides the means to reproduce and track tasks in your
machine of choice through the ClearML WebApp with no need for additional code.
The agent will set up the environment for a specific Task's execution (inside a Docker container, or on bare metal), install the
required Python packages, and execute and monitor the process.
## Set up an Agent
1. Install the agent:
```bash
pip install clearml-agent
```
1. Connect the agent to the server by [creating credentials](https://app.clear.ml/settings/workspace-configuration), then run this:
```bash
clearml-agent init
```
:::note
If you've already created credentials, you can copy-paste the default agent section from [here](https://github.com/clearml/clearml-agent/blob/master/docs/clearml.conf#L15) (this is optional; if the section is not provided, the default values will be used)
:::
1. Start the agent's daemon and assign it to a [queue](../../fundamentals/agents_and_queues.md#what-is-a-queue):
```bash
clearml-agent daemon --queue default
```
A queue is an ordered list of Tasks that are scheduled for execution. The agent will pull Tasks from its assigned
queue (`default` in this case), and execute them one after the other. Multiple agents can listen to the same queue
(or even multiple queues), but only a single agent will pull a Task to be executed.
:::tip Agent Deployment Modes
ClearML Agents can be deployed in:
* [Virtual environment mode](../../clearml_agent/clearml_agent_execution_env.md): Agent creates a new venv to execute a task.
* [Docker mode](../../clearml_agent/clearml_agent_execution_env.md#docker-mode): Agent executes a task inside a
Docker container.
For more information, see [Running Modes](../../fundamentals/agents_and_queues.md#running-modes).
:::
## Clone a Task
Tasks can be reproduced (cloned) for validation or as a baseline for further experimentation.
Cloning a task duplicates the task's configuration, but not its outputs.
**To clone a task in the ClearML WebApp:**
1. Click on any project card to open its [task table](../../webapp/webapp_exp_table.md).
1. Right-click one of the tasks on the table.
1. Click **Clone** in the context menu, which will open a **CLONE TASK** window.
1. Click **CLONE** in the window.
The newly cloned task will appear and its info panel will slide open. The cloned task is in draft mode, so
it can be modified. You can edit the Git / code references, control the Python packages to be installed, specify the
Docker container image to be used, or change the hyperparameters and configuration files. See [Modifying Tasks](../../webapp/webapp_exp_tuning.md#modifying-tasks) for more information about editing tasks in the UI.
## Enqueue a Task
Once you have set up a task, it is now time to execute it.
**To execute a task through the ClearML WebApp:**
1. Right-click your draft task (the context menu is also available through the <img src="/docs/latest/icons/ico-bars-menu.svg" alt="Menu" className="icon size-md space-sm" />
button on the top right of the task's info panel)
1. Click **ENQUEUE**, which will open the **ENQUEUE TASK** window
1. In the window, select `default` in the queue menu
1. Click **ENQUEUE**
This action pushes the task into the `default` queue. The task's status becomes *Pending* until an agent
assigned to the queue fetches it, at which time the task's status becomes *Running*. The agent executes the
task, and the task can be [tracked and its results visualized](../../webapp/webapp_exp_track_visual.md).
## Programmatic Interface
The cloning, modifying, and enqueuing actions described above can also be performed programmatically.
### First Steps
#### Access Previously Executed Tasks
All Tasks in the system can be accessed through their unique Task ID, or based on their properties using the [`Task.get_task`](../../references/sdk/task.md#taskget_task)
method. For example:
```python
from clearml import Task
executed_task = Task.get_task(task_id='aabbcc')
```
Once a specific Task object has been obtained, it can be cloned, modified, and more. See [Advanced Usage](#advanced-usage).
#### Clone a Task
To duplicate a task, use the [`Task.clone`](../../references/sdk/task.md#taskclone) method, and input either a
Task object or the Task's ID as the `source_task` argument.
```python
cloned_task = Task.clone(source_task=executed_task)
```
#### Enqueue a Task
To enqueue the task, use the [`Task.enqueue`](../../references/sdk/task.md#taskenqueue) method, and input the Task object
with the `task` argument, and the queue to push the task into with `queue_name`.
```python
Task.enqueue(task=cloned_task, queue_name='default')
```
### Advanced Usage
Before execution, you can use a variety of programmatic methods to modify a task object.
#### Modify Hyperparameters
[Hyperparameters](../../fundamentals/hyperparameters.md) are an integral part of Machine Learning code as they let you
control the code without directly modifying it. Hyperparameters can be added from anywhere in your code, and ClearML supports multiple ways to obtain them!
Users can programmatically change cloned tasks' parameters.
For example:
```python
from clearml import Task
cloned_task = Task.clone(source_task='aabbcc')
cloned_task.set_parameter(name='internal/magic', value=42)
```
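Conceptually, the override works like updating a single entry in the task's stored parameter dictionary, which the agent then injects back into the code at runtime. A minimal pure-Python sketch of that semantics (illustrative only, not ClearML internals):

```python
# Illustrative sketch of parameter override semantics (not ClearML internals).
# The script's defaults are overridden by values stored on the cloned task.
defaults = {"internal/magic": 10, "internal/seed": 1337}

def resolve_params(defaults, task_overrides):
    """Effective parameters seen by the remotely executed code."""
    effective = dict(defaults)
    effective.update(task_overrides)
    return effective

# After cloned_task.set_parameter(name='internal/magic', value=42), the agent
# would hand the code the overridden value while leaving other defaults intact:
params = resolve_params(defaults, {"internal/magic": 42})
assert params == {"internal/magic": 42, "internal/seed": 1337}
```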
#### Report Artifacts
Artifacts are files created by your task. You can upload [multiple types of data](../../clearml_sdk/task_sdk.md#logging-artifacts),
objects, and files to a task from anywhere in your code.
```python
import numpy as np
from clearml import Task
Task.current_task().upload_artifact(name='a_file', artifact_object='local_file.bin')
Task.current_task().upload_artifact(name='numpy', artifact_object=np.ones((4, 4)))
```
Artifacts serve as a great way to pass and reuse data between tasks. Artifacts can be [retrieved](../../clearml_sdk/task_sdk.md#using-artifacts)
by accessing the Task that created them. These artifacts can be modified and uploaded to other tasks.
```python
from clearml import Task
executed_task = Task.get_task(task_id='aabbcc')
# artifact as a file
local_file = executed_task.artifacts['a_file'].get_local_copy()
# artifact as object
a_numpy = executed_task.artifacts['numpy'].get()
```
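The round trip can be pictured as serializing an object to shared storage on upload and loading it back on another machine. A self-contained sketch of that flow, using a local temp directory as a stand-in for the ClearML file server:

```python
import pickle
import tempfile
from pathlib import Path

# Illustrative sketch of the artifact round trip (upload -> retrieve), with a
# local temp directory standing in for the ClearML file server.
store = Path(tempfile.mkdtemp())

def upload_artifact(name, obj):
    """Serialize an object under an artifact name (stand-in for Task.upload_artifact)."""
    (store / f"{name}.pkl").write_bytes(pickle.dumps(obj))

def get_artifact(name):
    """Deserialize an artifact back into an object (stand-in for artifacts[name].get())."""
    return pickle.loads((store / f"{name}.pkl").read_bytes())

upload_artifact("stats", {"accuracy": 0.93, "epochs": 10})
restored = get_artifact("stats")
assert restored == {"accuracy": 0.93, "epochs": 10}
```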
By facilitating the communication of complex objects between tasks, artifacts serve as the foundation of ClearML's [Data Management](../../clearml_data/clearml_data.md)
and [pipeline](../../pipelines/pipelines.md) solutions.
#### Log Models
Logging models into the model repository is the easiest way to integrate the development process directly with production.
Any model stored by a supported framework (Keras, TensorFlow, PyTorch, joblib, etc.) is automatically logged into ClearML.
ClearML also supports methods to explicitly log models. Models can be automatically stored on a preferred storage medium
(S3 bucket, Google storage, etc.).
#### Log Metrics
Log as many metrics as you want from your processes using the [Logger](../../fundamentals/logger.md) module. This
improves the visibility of your processes' progress.
```python
from clearml import Logger
Logger.current_logger().report_scalar(
    title='metric',
    series='variant',
    value=13.37,
    iteration=counter  # counter is your current step or epoch number
)
```
You can also retrieve reported scalars for programmatic analysis:
```python
from clearml import Task
executed_task = Task.get_task(task_id='aabbcc')
# get a summary of the min/max/last value of all reported scalars
min_max_values = executed_task.get_last_scalar_metrics()
# get detailed graphs of all scalars
full_scalars = executed_task.get_reported_scalars()
```
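The summary returned by `get_last_scalar_metrics()` is keyed by metric and variant. A pure-Python sketch of how such a min/max/last summary is built from a stream of reported values (illustrative only; the exact return structure may differ):

```python
# Sketch of a min/max/last scalar summary (illustrative only, not ClearML internals).
# Reported points are (iteration, value) pairs per (title, series) key.
reported = {("metric", "variant"): [(0, 13.37), (1, 12.50), (2, 12.80)]}

def summarize(scalars):
    summary = {}
    for (title, series), points in scalars.items():
        values = [v for _, v in points]
        summary.setdefault(title, {})[series] = {
            "min": min(values),
            "max": max(values),
            "last": values[-1],   # value at the highest iteration
        }
    return summary

summary = summarize(reported)
assert summary["metric"]["variant"] == {"min": 12.50, "max": 13.37, "last": 12.80}
```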
#### Query Tasks
You can also search and query Tasks in the system. Use the [`Task.get_tasks`](../../references/sdk/task.md#taskget_tasks)
class method to retrieve Task objects, filtering by specific Task values: status, parameters, metrics, and more.
```python
from clearml import Task
tasks = Task.get_tasks(
    project_name='examples',
    task_name='partial_name_match',
    task_filter={'status': ['in_progress']}
)
```
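The filtering semantics can be sketched in plain Python (illustrative only; the real filtering happens server-side, and `task_filter` supports many more fields):

```python
# Illustrative sketch of Task.get_tasks filtering semantics (not ClearML internals):
# partial match on the task name, exact match on status.
tasks = [
    {"id": "aa11", "name": "partial_name_match_1", "status": "in_progress"},
    {"id": "bb22", "name": "partial_name_match_2", "status": "completed"},
    {"id": "cc33", "name": "unrelated_task",       "status": "in_progress"},
]

def get_tasks(tasks, task_name=None, status=None):
    """Filter a list of task records the way Task.get_tasks filters server-side."""
    result = tasks
    if task_name is not None:
        result = [t for t in result if task_name in t["name"]]
    if status is not None:
        result = [t for t in result if t["status"] == status]
    return result

matches = get_tasks(tasks, task_name="partial_name_match", status="in_progress")
assert [t["id"] for t in matches] == ["aa11"]
```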
#### Manage Your Data
Data is probably one of the biggest factors determining the success of a project. Associating a model's data with
the model's configuration, code, and results (such as accuracy) is key to deducing meaningful insights into model behavior.
[ClearML Data](../../clearml_data/clearml_data.md) lets you version your data, so it's never lost, fetch it from every
machine with minimal code changes, and associate data to task results.
Logging data can be done via the command line or programmatically. If any preprocessing code is involved, ClearML logs it
as well! Once data is logged, it can be used by other tasks.

@@ -0,0 +1,82 @@
---
title: Reproduce Tasks
---
:::note
This tutorial assumes that you've already set up [ClearML](../clearml_sdk/clearml_sdk_setup.md) and [ClearML Agent](../clearml_agent/clearml_agent_setup.md).
:::
Tasks can be reproduced (**cloned**) for validation or as a baseline for further experimentation. When you initialize a task in your
code, ClearML logs everything needed to reproduce your task and its environment:
* Uncommitted changes
* Used packages and their versions
* Parameters
* and more
Cloning a task duplicates the task's configuration, but not its outputs.
ClearML offers two ways to clone your task:
* [Via the WebApp](#via-the-webapp), with no further code required
* [Via the programmatic interface](#via-programmatic-interface), using the `clearml` Python package
Once you have cloned your task, you can modify its setup, and then execute it remotely on a machine of your choice using a ClearML Agent.
## Via the WebApp
**To clone a task in the ClearML WebApp:**
1. Click on any project card to open its [task table](../webapp/webapp_exp_table.md).
1. Right-click the task you want to reproduce.
1. Click **Clone** in the context menu, which will open a **CLONE TASK** window.
1. Click **CLONE** in the window.
The newly cloned task's details page will open up. The cloned task is in *draft* mode, which means
it can be modified. You can edit any of the Task's setup details, including:
* Git and/or code references
* Python packages to be installed
* Container image to be used
You can adjust the values of the task's hyperparameters and configuration files. See [Modifying Tasks](../webapp/webapp_exp_tuning.md#modifying-tasks) for more
information about editing tasks in the UI.
### Enqueue a Task
Once you have set up a task, it is time to execute it.
**To execute a task through the ClearML WebApp:**
1. In the task's details page, click "Menu" <img src="/docs/latest/icons/ico-bars-menu.svg" alt="Menu" className="icon size-md space-sm" />
1. Click **ENQUEUE** to open the **ENQUEUE TASK** window
1. In the window, select `default` in the `Queue` menu
1. Click **ENQUEUE**
This action pushes the task into the `default` queue. The task's status becomes *Pending* until an agent
assigned to the queue fetches it, at which time the task's status becomes *Running*. The agent executes the
task, and the task can be [tracked and its results visualized](../webapp/webapp_exp_track_visual.md).
## Via Programmatic Interface
The cloning, modifying, and enqueuing actions described above can also be performed programmatically using `clearml`.
### Clone the Task
To duplicate the task, use [`Task.clone()`](../references/sdk/task.md#taskclone), and input either a
Task object or the Task's ID as the `source_task` argument.
```python
from clearml import Task

cloned_task = Task.clone(source_task='qw03485je3hap903ere54')
```
The cloned task is in *draft* mode, which means it can be modified. For modification options, such as setting new parameter
values, see [Task SDK](../clearml_sdk/task_sdk.md).
### Enqueue the Task
To enqueue the task, use [`Task.enqueue()`](../references/sdk/task.md#taskenqueue), and input the Task object
with the `task` argument, and the queue to push the task into with `queue_name`.
```python
Task.enqueue(task=cloned_task, queue_name='default')
```
This action pushes the task into the `default` queue. The task's status becomes *Pending* until an agent
assigned to the queue fetches it, at which time the task's status becomes *Running*. The agent executes the
task, and the task can be [tracked and its results visualized](../webapp/webapp_exp_track_visual.md).

@@ -0,0 +1,46 @@
---
title: Track Tasks
---
Every ClearML [task](../fundamentals/task.md) you create can be found in the **All Tasks** table and in its project's
task table.
The task table is a powerful tool for creating dashboards and views of your own projects, your team's projects, or
your entire organization's projects.
![Task table](../img/webapp_experiment_table.png#light-mode-only)
![Task table](../img/webapp_experiment_table_dark.png#dark-mode-only)
Customize the [task table](../webapp/webapp_exp_table.md) to fit your needs by adding views of parameters, metrics, and tags.
Filter and sort by various criteria, such as parameters and metrics, to create custom views. This allows you to:
* Create a dashboard for a project, presenting the latest model accuracy scores for immediate insights
* Create a live leaderboard displaying the best-performing tasks, updated in real time
* Monitor a project's progress and share it across the organization
## Creating Leaderboards
To create a leaderboard:
1. Select a project in the ClearML WebApp and go to its task table
1. Customize the column selection. Click "Settings" <img src="/docs/latest/icons/ico-settings.svg" alt="Setting Gear" className="icon size-md" />
to view and select columns to display.
1. Filter tasks by name using the search bar to find tasks containing any search term
1. Filter by other categories by clicking "Filter" <img src="/docs/latest/icons/ico-filter-off.svg" alt="Filter" className="icon size-md" />
on the relevant column. There are a few types of filters:
* Value set - Choose which values to include from a list of all values in the column
* Numerical ranges - Insert minimum and/or maximum value
* Date ranges - Insert starting and/or ending date and time
* Tags - Choose which tags to filter by from a list of all tags used in the column.
* Filter by multiple tag values using the **ANY** or **ALL** options, which correspond to the logical "OR" and "AND" respectively. These
options appear at the top of the tag list.
* Filter by the absence of a tag (logical "NOT") by clicking its checkbox twice. An `X` will appear in the tag's checkbox.
1. Enable auto-refresh for real-time monitoring
For more detailed instructions, see the [Tracking Leaderboards Tutorial](../guides/ui/building_leader_board.md).
## Sharing Leaderboards
Bookmark the URL of your customized leaderboard to save and share your view. The URL contains all parameters and values
for your specific leaderboard view.

@@ -7,7 +7,7 @@ on a remote or local machine, from a remote repository and your local machine.
### Prerequisites
- [`clearml`](../../getting_started/ds/ds_first_steps.md) Python package installed and configured
- [`clearml`](../../clearml_sdk/clearml_sdk_setup) Python package installed and configured
- [`clearml-agent`](../../clearml_agent/clearml_agent_setup.md#installation) running on at least one machine (to execute the task), configured to listen to `default` queue
### Executing Code from a Remote Repository

@@ -9,7 +9,7 @@ script.
## Prerequisites
* [`clearml-agent`](../../clearml_agent/clearml_agent_setup.md#installation) installed and configured
* [`clearml`](../../getting_started/ds/ds_first_steps.md#install-clearml) installed and configured
* [`clearml`](../../clearml_sdk/clearml_sdk_setup#install-clearml) installed and configured
* [clearml](https://github.com/clearml/clearml) repo cloned (`git clone https://github.com/clearml/clearml.git`)
## Creating the ClearML Task

@@ -11,7 +11,7 @@ be used when running optimization tasks.
## Prerequisites
* [`clearml-agent`](../../clearml_agent/clearml_agent_setup.md#installation) installed and configured
* [`clearml`](../../getting_started/ds/ds_first_steps.md#install-clearml) installed and configured
* [`clearml`](../../clearml_sdk/clearml_sdk_setup#install-clearml) installed and configured
* [clearml](https://github.com/clearml/clearml) repo cloned (`git clone https://github.com/clearml/clearml.git`)
## Creating the ClearML Task

@@ -3,10 +3,10 @@ title: Keras Tuner
---
:::tip
If you are not already using ClearML, see [Getting Started](../../../getting_started/ds/ds_first_steps.md) for setup
instructions.
If you are not already using ClearML, see [ClearML Setup instructions](../clearml_sdk/clearml_sdk_setup).
:::
Integrate ClearML into code that uses [Keras Tuner](https://www.tensorflow.org/tutorials/keras/keras_tuner). By
specifying `ClearMLTunerLogger` (see [kerastuner.py](https://github.com/clearml/clearml/blob/master/clearml/external/kerastuner.py))
as the Keras Tuner logger, ClearML automatically logs scalars and hyperparameter optimization.

@@ -9,7 +9,7 @@ such as required packages and uncommitted changes, and supports reporting scalar
## Setup
To use Accelerate's ClearML tracker, make sure that `clearml` is [installed and set up](../getting_started/ds/ds_first_steps.md#install-clearml)
To use Accelerate's ClearML tracker, make sure that `clearml` is [installed and set up](../clearml_sdk/clearml_sdk_setup#install-clearml)
in your environment, and use the `log_with` parameter when instantiating an `Accelerator`:
@@ -3,7 +3,7 @@ title: AutoKeras
---
:::tip
If you are not already using ClearML, see [Getting Started](../getting_started/ds/ds_first_steps.md) for setup
If you are not already using ClearML, see [Getting Started](../clearml_sdk/clearml_sdk_setup) for setup
instructions.
:::

@@ -3,7 +3,7 @@ title: CatBoost
---
:::tip
If you are not already using ClearML, see [Getting Started](../getting_started/ds/ds_first_steps.md) for setup
If you are not already using ClearML, see [ClearML Setup](../clearml_sdk/clearml_sdk_setup) for setup
instructions.
:::

@@ -3,7 +3,7 @@ title: Click
---
:::tip
If you are not already using ClearML, see [Getting Started](../getting_started/ds/ds_first_steps.md) for setup
If you are not already using ClearML, see [ClearML Setup](../clearml_sdk/clearml_sdk_setup) for setup
instructions.
:::

@@ -3,8 +3,7 @@ title: Fast.ai
---
:::tip
If you are not already using ClearML, see [Getting Started](../getting_started/ds/ds_first_steps.md) for setup
instructions.
If you are not already using ClearML, see [ClearML Setup instructions](../clearml_sdk/clearml_sdk_setup).
:::
ClearML integrates seamlessly with [fast.ai](https://www.fast.ai/), automatically logging its models and scalars.

@@ -3,8 +3,7 @@ title: Hydra
---
:::tip
If you are not already using ClearML, see [Getting Started](../getting_started/ds/ds_first_steps.md) for setup
instructions.
If you are not already using ClearML, see [ClearML Setup instructions](../clearml_sdk/clearml_sdk_setup).
:::

@@ -3,8 +3,7 @@ title: PyTorch Ignite
---
:::tip
If you are not already using ClearML, see [Getting Started](../getting_started/ds/ds_first_steps.md) for setup
instructions.
If you are not already using ClearML, see [ClearML Setup instructions](../clearml_sdk/clearml_sdk_setup).
:::
[PyTorch Ignite](https://pytorch.org/ignite/index.html) is a library for training and evaluating neural networks in

@@ -3,11 +3,11 @@ title: jsonargparse
---
:::tip
If you are not already using ClearML, see [Getting Started](../getting_started/ds/ds_first_steps.md) for setup
instructions.
If you are not already using ClearML, see [ClearML Setup instructions](../clearml_sdk/clearml_sdk_setup).
:::
[jsonargparse](https://github.com/omni-us/jsonargparse) is a Python package for creating command-line interfaces.
ClearML integrates seamlessly with `jsonargparse` and automatically logs its command-line parameters and connected
configuration files.

@@ -3,10 +3,10 @@ title: Keras
---
:::tip
If you are not already using ClearML, see [Getting Started](../getting_started/ds/ds_first_steps.md) for setup
instructions.
If you are not already using ClearML, see [ClearML Setup instructions](../clearml_sdk/clearml_sdk_setup).
:::
ClearML integrates with [Keras](https://keras.io/) out-of-the-box, automatically logging its models, scalars,
TensorFlow definitions, and TensorBoard outputs.

@@ -3,10 +3,10 @@ title: Keras Tuner
---
:::tip
If you are not already using ClearML, see [Getting Started](../getting_started/ds/ds_first_steps.md) for setup
instructions.
If you are not already using ClearML, see [ClearML Setup instructions](../clearml_sdk/clearml_sdk_setup).
:::
[Keras Tuner](https://www.tensorflow.org/tutorials/keras/keras_tuner) is a library that helps you pick the optimal set
of hyperparameters for training your models. ClearML integrates seamlessly with `kerastuner` and automatically logs
task scalars, the output model, and hyperparameter optimization summary.

@@ -3,10 +3,10 @@ title: LangChain
---
:::tip
If you are not already using ClearML, see [Getting Started](../getting_started/ds/ds_first_steps.md) for setup
instructions.
If you are not already using ClearML, see [ClearML Setup instructions](../clearml_sdk/clearml_sdk_setup).
:::
[LangChain](https://github.com/langchain-ai/langchain) is a popular framework for developing applications powered by
language models. You can integrate ClearML into your LangChain code using the built-in `ClearMLCallbackHandler`. This
class is used to create a ClearML Task to log LangChain assets and metrics.

@@ -3,10 +3,10 @@ title: LightGBM
---
:::tip
If you are not already using ClearML, see [Getting Started](../getting_started/ds/ds_first_steps.md) for setup
instructions.
If you are not already using ClearML, see [ClearML Setup instructions](../clearml_sdk/clearml_sdk_setup).
:::
ClearML integrates seamlessly with [LightGBM](https://github.com/microsoft/LightGBM), automatically logging its models,
metric plots, and parameters.

@@ -3,10 +3,10 @@ title: Matplotlib
---
:::tip
If you are not already using ClearML, see [Getting Started](../getting_started/ds/ds_first_steps.md) for setup
instructions.
If you are not already using ClearML, see [ClearML Setup instructions](../clearml_sdk/clearml_sdk_setup).
:::
[Matplotlib](https://matplotlib.org/) is a Python library for data visualization. ClearML automatically captures plots
and images created with `matplotlib`.

@@ -3,10 +3,10 @@ title: MegEngine
---
:::tip
If you are not already using ClearML, see [Getting Started](../getting_started/ds/ds_first_steps.md) for setup
instructions.
If you are not already using ClearML, see [ClearML Setup instructions](../clearml_sdk/clearml_sdk_setup).
:::
ClearML integrates seamlessly with [MegEngine](https://github.com/MegEngine/MegEngine), automatically logging its models.
All you have to do is simply add two lines of code to your MegEngine script:

@@ -7,10 +7,10 @@ title: MMCV v1.x
:::
:::tip
If you are not already using ClearML, see [Getting Started](../getting_started/ds/ds_first_steps.md) for setup
instructions.
If you are not already using ClearML, see [ClearML Setup instructions](../clearml_sdk/clearml_sdk_setup).
:::
[MMCV](https://github.com/open-mmlab/mmcv/tree/1.x) is a computer vision framework developed by OpenMMLab. You can integrate ClearML into your
code using the `mmcv` package's [`ClearMLLoggerHook`](https://mmcv.readthedocs.io/en/master/_modules/mmcv/runner/hooks/logger/clearml.html)
class. This class is used to create a ClearML Task and to automatically log metrics.

@@ -3,10 +3,10 @@ title: MMEngine
---
:::tip
If you are not already using ClearML, see [Getting Started](../getting_started/ds/ds_first_steps.md) for setup
instructions.
If you are not already using ClearML, see [ClearML Setup instructions](../clearml_sdk/clearml_sdk_setup).
:::
[MMEngine](https://github.com/open-mmlab/mmengine) is a library for training deep learning models based on PyTorch.
MMEngine supports ClearML through a builtin logger: It automatically logs task environment information, such as
required packages and uncommitted changes, and supports reporting scalars, parameters, and debug samples.

@@ -3,10 +3,10 @@ title: MONAI
---
:::tip
If you are not already using ClearML, see [Getting Started](../getting_started/ds/ds_first_steps.md) for setup
instructions.
If you are not already using ClearML, see [ClearML Setup instructions](../clearml_sdk/clearml_sdk_setup).
:::
[MONAI](https://github.com/Project-MONAI/MONAI) is a PyTorch-based, open-source framework for deep learning in healthcare
imaging. You can integrate ClearML into your code using MONAI's built-in handlers: [`ClearMLImageHandler`, `ClearMLStatsHandler`](#clearmlimagehandler-and-clearmlstatshandler),
and [`ModelCheckpoint`](#modelcheckpoint).

@@ -3,10 +3,10 @@ title: PyTorch
---
:::tip
If you are not already using ClearML, see [Getting Started](../getting_started/ds/ds_first_steps.md) for setup
instructions.
If you are not already using ClearML, see [ClearML Setup instructions](../clearml_sdk/clearml_sdk_setup).
:::
ClearML integrates seamlessly with [PyTorch](https://pytorch.org/), automatically logging its models.
All you have to do is simply add two lines of code to your PyTorch script:

@@ -3,10 +3,10 @@ title: PyTorch Lightning
---
:::tip
If you are not already using ClearML, see [Getting Started](../getting_started/ds/ds_first_steps.md) for setup
instructions.
If you are not already using ClearML, see [ClearML Setup instructions](../clearml_sdk/clearml_sdk_setup).
:::
[PyTorch Lightning](https://github.com/Lightning-AI/lightning) is a framework that simplifies the process of training and deploying PyTorch models. ClearML seamlessly
integrates with PyTorch Lightning, automatically logging PyTorch models, parameters supplied by [LightningCLI](https://lightning.ai/docs/pytorch/stable/cli/lightning_cli.html),
and more.

@@ -3,10 +3,10 @@ title: scikit-learn
---
:::tip
If you are not already using ClearML, see [Getting Started](../getting_started/ds/ds_first_steps.md) for setup
instructions.
If you are not already using ClearML, see [ClearML Setup instructions](../clearml_sdk/clearml_sdk_setup).
:::
ClearML integrates seamlessly with [scikit-learn](https://scikit-learn.org/stable/), automatically logging models created
with `joblib`.

@@ -3,10 +3,10 @@ title: Seaborn
---
:::tip
If you are not already using ClearML, see [Getting Started](../getting_started/ds/ds_first_steps.md) for setup
instructions.
If you are not already using ClearML, see [ClearML Setup instructions](../clearml_sdk/clearml_sdk_setup).
:::
[seaborn](https://seaborn.pydata.org/) is a Python library for data visualization.
ClearML automatically captures plots created using `seaborn`. All you have to do is add two
lines of code to your script:

@@ -3,9 +3,10 @@ title: TensorBoard
---
:::tip
If you are not already using ClearML, see [Getting Started](../getting_started/ds/ds_first_steps.md).
If you are not already using ClearML, see [ClearML Setup instructions](../clearml_sdk/clearml_sdk_setup).
:::
[TensorBoard](https://www.tensorflow.org/tensorboard) is TensorFlow's data visualization toolkit.
ClearML automatically captures all data logged to TensorBoard. All you have to do is add two
lines of code to your script:

@@ -3,7 +3,7 @@ title: TensorboardX
---
:::tip
If you are not already using ClearML, see [Getting Started](../getting_started/ds/ds_first_steps.md).
If you are not already using ClearML, see [ClearML Setup instructions](../clearml_sdk/clearml_sdk_setup).
:::
[TensorboardX](https://tensorboardx.readthedocs.io/en/latest/tutorial.html#what-is-tensorboard-x) is a data

@@ -3,10 +3,10 @@ title: TensorFlow
---
:::tip
If you are not already using ClearML, see [Getting Started](../getting_started/ds/ds_first_steps.md) for setup
instructions.
If you are not already using ClearML, see [ClearML Setup instructions](../clearml_sdk/clearml_sdk_setup).
:::
ClearML integrates with [TensorFlow](https://www.tensorflow.org/) out-of-the-box, automatically logging its models,
definitions, scalars, as well as TensorBoard outputs.

@@ -3,8 +3,7 @@ title: XGBoost
---
:::tip
If you are not already using ClearML, see [Getting Started](../getting_started/ds/ds_first_steps.md) for setup
instructions.
If you are not already using ClearML, see [ClearML Setup instructions](../clearml_sdk/clearml_sdk_setup).
:::
ClearML integrates seamlessly with [XGBoost](https://xgboost.readthedocs.io/en/stable/), automatically logging its models,

@@ -92,7 +92,7 @@ module.exports = {
position: 'left'
},
{
to: '/docs/getting_started/ds/ds_first_steps',
to: '/docs/getting_started/auto_log_exp',
label: 'Using ClearML',
position: 'left'
},

@@ -96,12 +96,12 @@ module.exports = {
collapsible: true,
label: 'Where do I start?',
items: [
{'Data Scientists': [
'getting_started/ds/ds_first_steps',
'getting_started/ds/ds_second_steps',
]},
'getting_started/auto_log_exp',
'getting_started/track_tasks',
'getting_started/reproduce_tasks',
'getting_started/logging_using_artifacts',
{'MLOps and LLMOps': [
'getting_started/mlops/mlops_first_steps',
'getting_started/mlops/mlops_second_steps',
]}
],
@@ -615,6 +615,7 @@ module.exports = {
'hyperdatasets/code_examples'
],
installationSidebar: [
'clearml_sdk/clearml_sdk_setup',
{
type: 'category',
collapsible: true,