(ex: mongo_backup.sh mybucket/path/in/bucket)
+ ```
+
+3. It is recommended to add this script to the crontab.
+
+:::note
+The MongoDB script does not deal with deletion of old backups. It's recommended to create an S3 lifecycle rule for
+deletion beyond the company's required retention period.
+:::
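
For example, a crontab entry running the backup nightly might look like the following (the script path, schedule, and log location are illustrative):

```
0 2 * * * root /opt/scripts/mongo_backup.sh mybucket/path/in/bucket >> /var/log/mongo_backup.log 2>&1
```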
+
+## Monitoring
+### Hardware Monitoring
+
+#### CPU
+
+CPU usage varies with system load. We recommend monitoring CPU usage and alerting when it is higher than
+normal. Recommended starting alert thresholds are 5-minute CPU load levels of 5 and 10, adjusted according to
+observed performance.
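
As a minimal sketch, the 5-minute load average can be read with Python's standard library and mapped to the thresholds above (the function name and return values are illustrative):

```python
import os

def cpu_load_alert(warn: float = 5.0, critical: float = 10.0) -> str:
    """Classify the 5-minute load average against the suggested alert thresholds."""
    _, load5, _ = os.getloadavg()  # 1-, 5-, and 15-minute load averages
    if load5 >= critical:
        return "critical"
    if load5 >= warn:
        return "warning"
    return "ok"
```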
+
+#### RAM
+
+Available memory also varies with system load. Due to spikes in usage when performing certain tasks, 6-8 GB
+of available RAM is recommended as the standard baseline, and some use cases may require more. Thus, we recommend having
+8 GB of available memory on top of regular system usage, and alerting when available memory drops below normal.
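
On Linux, for example, the available memory can be read from `/proc/meminfo`; a minimal sketch (the 8 GB threshold follows the baseline above):

```python
def available_memory_gb(meminfo_path: str = "/proc/meminfo") -> float:
    """Return the MemAvailable value from /proc/meminfo, converted to GB."""
    with open(meminfo_path) as f:
        for line in f:
            if line.startswith("MemAvailable:"):
                return int(line.split()[1]) / 1024 ** 2  # the value is reported in kB
    raise RuntimeError("MemAvailable not found in " + meminfo_path)

# e.g. alert when the 8 GB baseline is not available:
# if available_memory_gb() < 8: send_alert(...)
```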
+
+#### Disk Usage
+
+There are several disks used by the system. We recommend monitoring all of them. Standard alert levels are 20%, 10% and
+5% of free disk space.
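
A sketch of mapping the free-space percentage to the 20% / 10% / 5% alert levels, using the standard library (the level names are illustrative):

```python
import shutil

def disk_alert_level(path: str = "/") -> str:
    """Map free disk space on the given mount to the suggested alert levels."""
    usage = shutil.disk_usage(path)
    free_pct = usage.free / usage.total * 100
    if free_pct <= 5:
        return "critical"
    if free_pct <= 10:
        return "major"
    if free_pct <= 20:
        return "warning"
    return "ok"
```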
+
+### Service Availability
+
+The following services should be monitored periodically for availability and for response time:
+
+* `apiserver` - [http://localhost:10000/api/debug.ping](http://localhost:10000/api/debug.ping) should return HTTP 200
+* `webserver` - [http://localhost:10000](http://localhost:10000/) should return HTTP 200
+* `fileserver` - [http://localhost:10000/files/](http://localhost:10000/files/) should return HTTP 405 ("method not allowed")
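
A minimal probe for these checks, using only the standard library (the URLs and expected status codes follow the list above):

```python
import urllib.request
import urllib.error

def check_endpoint(url: str, expected_status: int, timeout: float = 5.0) -> bool:
    """Return True if the endpoint responds with the expected HTTP status code."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == expected_status
    except urllib.error.HTTPError as e:
        return e.code == expected_status  # e.g. 405 is the expected answer for the fileserver
    except OSError:
        return False  # connection refused / timeout: the service is down

# checks = [
#     ("http://localhost:10000/api/debug.ping", 200),
#     ("http://localhost:10000", 200),
#     ("http://localhost:10000/files/", 405),
# ]
```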
+
+### API Server Docker Memory Usage
+
+A usage spike can happen during normal operation, but very high spikes (above 6 GB) are not expected. We recommend using
+`docker stats` to get this information.
+
+For example, the following command retrieves the API server's information from the Docker daemon:
+
+```
+sudo curl -s --unix-socket /var/run/docker.sock http://localhost/containers/allegro-apiserver/stats?stream=false
+```
+
+We recommend monitoring the API server memory in addition to the system's available RAM. Alerts should be triggered
+when memory usage of the API server exceeds the normal behavior. A starting value can be 6 GB.
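
As an illustrative sketch, the memory usage can be pulled out of the stats JSON returned by the call above and compared against the 6 GB starting value (the field names follow the Docker stats API):

```python
def apiserver_memory_alert(stats: dict, threshold_gb: float = 6.0) -> bool:
    """Return True if the container's memory usage exceeds the threshold."""
    usage_bytes = stats["memory_stats"]["usage"]  # bytes, as reported by the Docker stats API
    return usage_bytes / 1024 ** 3 > threshold_gb

# stubbed payload for illustration; in practice, parse the JSON returned by the
# /containers/<name>/stats?stream=false call shown above
sample = {"memory_stats": {"usage": 7 * 1024 ** 3}}
print(apiserver_memory_alert(sample))  # True
```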
+
+### Backup Failures
+
+All provided scripts exit with code 0 when the backup completes successfully. Any other exit code indicates a problem,
+and the log usually indicates the reason for the failure.
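
A monitoring wrapper therefore only needs to check the exit status. A minimal sketch (the script path in the comment is illustrative):

```python
import subprocess

def run_backup(cmd: list) -> bool:
    """Run a backup command; exit code 0 means success, anything else is a failure."""
    result = subprocess.run(cmd)
    return result.returncode == 0

# e.g. run_backup(["/opt/scripts/mongo_backup.sh", "mybucket/path/in/bucket"])
# if it returns False, inspect the log for the failure reason and raise an alert
```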
+
+## Maintenance
+
+### Removing app containers
+
+To remove old application containers, add the following to the cron:
+
+```
+0 0 * * * root docker container prune --force --filter "until=96h"
+```
diff --git a/docs/deploying_models.md b/docs/deploying_models.md
new file mode 100644
index 00000000..5afb240b
--- /dev/null
+++ b/docs/deploying_models.md
@@ -0,0 +1,34 @@
+---
+title: Model Deployment
+---
+
+Model deployment makes trained models accessible for real-world applications. ClearML provides a comprehensive suite of
+tools for seamless model deployment, which supports
+features including:
+* Version control
+* Automatic updates
+* Performance monitoring
+
+ClearML's offerings optimize the deployment process
+while ensuring scalability and security. The solutions include:
+* **Model Deployment UI Applications** (available under the Enterprise Plan) - The UI applications simplify deploying models
+ as network services through secure endpoints, providing an interface for managing deployments--no code required.
+ See more information about the following applications:
+ * [vLLM Deployment](webapp/applications/apps_model_deployment.md)
+ * [Embedding Model Deployment](webapp/applications/apps_embed_model_deployment.md)
+ * [Llama.cpp Model Deployment](webapp/applications/apps_llama_deployment.md)
+* **Command-line Interface** - `clearml-serving` is a CLI for model deployment and orchestration.
+ It supports integration with Kubernetes clusters or custom container-based
+ solutions, offering flexibility for diverse infrastructure setups.
+ For more information, see [ClearML Serving](clearml_serving/clearml_serving.md).
+
+## Model Endpoint Monitoring
+All deployed models are displayed in a unified **Model Endpoints** list in the UI. This
+allows users to monitor endpoint activity and manage deployments from a single location.
+
+For more information, see [Model Endpoints](webapp/webapp_model_endpoints.md).
+
+
+
+
+
diff --git a/docs/fundamentals/agents_and_queues.md b/docs/fundamentals/agents_and_queues.md
index 50509f53..f246bc84 100644
--- a/docs/fundamentals/agents_and_queues.md
+++ b/docs/fundamentals/agents_and_queues.md
@@ -17,7 +17,7 @@ from installing required packages to setting environment variables,
all leading to executing the code (supporting both virtual environment or flexible docker container configurations).
The agent also supports overriding parameter values on-the-fly without code modification, thus enabling no-code experimentation (this is also the foundation on which
-ClearML [Hyperparameter Optimization](hpo.md) is implemented).
+ClearML [Hyperparameter Optimization](../hpo.md) is implemented).
An agent can be associated with specific GPUs, enabling workload distribution. For example, on a machine with 8 GPUs you
can allocate several GPUs to an agent and use the rest for a different workload, even through another agent (see [Dynamic GPU Allocation](../clearml_agent/clearml_agent_dynamic_gpus.md)).
diff --git a/docs/fundamentals/hyperparameters.md b/docs/fundamentals/hyperparameters.md
index f91da4a4..428011d2 100644
--- a/docs/fundamentals/hyperparameters.md
+++ b/docs/fundamentals/hyperparameters.md
@@ -6,7 +6,7 @@ Hyperparameters are a script's configuration options. Since hyperparameters can
model performance, it is crucial to efficiently track and manage them.
ClearML supports tracking and managing hyperparameters in each task and provides a dedicated [hyperparameter
-optimization module](hpo.md). With ClearML's logging and tracking capabilities, tasks can be reproduced, and their
+optimization module](../hpo.md). With ClearML's logging and tracking capabilities, tasks can be reproduced, and their
hyperparameters and results can be saved and compared, which is key to understanding model behavior.
ClearML lets you easily try out different hyperparameter values without changing your original code. ClearML's [execution
diff --git a/docs/fundamentals/task.md b/docs/fundamentals/task.md
index afdb8633..64d8505a 100644
--- a/docs/fundamentals/task.md
+++ b/docs/fundamentals/task.md
@@ -124,7 +124,7 @@ Available task types are:
* *inference* - Model inference job (e.g. offline / batch model execution)
* *controller* - A task that lays out the logic for other tasks' interactions, manual or automatic (e.g. a pipeline
controller)
-* *optimizer* - A specific type of controller for optimization tasks (e.g. [hyperparameter optimization](hpo.md))
+* *optimizer* - A specific type of controller for optimization tasks (e.g. [hyperparameter optimization](../hpo.md))
* *service* - Long lasting or recurring service (e.g. server cleanup, auto ingress, sync services etc.)
* *monitor* - A specific type of service for monitoring
* *application* - A task implementing custom applicative logic, like [autoscaler](../guides/services/aws_autoscaler.md)
diff --git a/docs/getting_started/architecture.md b/docs/getting_started/architecture.md
index 7f7a7e35..42d2fc2d 100644
--- a/docs/getting_started/architecture.md
+++ b/docs/getting_started/architecture.md
@@ -2,7 +2,7 @@
title: ClearML Modules
---
-- [**ClearML Python Package**](../getting_started/ds/ds_first_steps.md#install-clearml) (`clearml`) for integrating ClearML into your existing code-base.
+- [**ClearML Python Package**](auto_log_exp#install-clearml) (`clearml`) for integrating ClearML into your existing code-base.
- [**ClearML Server**](../deploying_clearml/clearml_server.md) (`clearml-server`) for storing task, model, and workflow data, and supporting the Web UI experiment manager. It is also the control plane for the MLOps.
- [**ClearML Agent**](../clearml_agent.md) (`clearml-agent`), the MLOps orchestration agent. Enabling task and workflow reproducibility, and scalability.
- [**ClearML Data**](../clearml_data/clearml_data.md) (`clearml-data`) data management and versioning on top of file-systems/object-storage.
diff --git a/docs/getting_started/auto_log_exp.md b/docs/getting_started/auto_log_exp.md
new file mode 100644
index 00000000..2f3b44d8
--- /dev/null
+++ b/docs/getting_started/auto_log_exp.md
@@ -0,0 +1,59 @@
+---
+title: Auto-logging Experiments
+---
+
+In ClearML, experiments are organized as [Tasks](../fundamentals/task.md).
+
+When you integrate the ClearML SDK with your code, the ClearML task manager automatically captures:
+* Source code and uncommitted changes
+* Installed packages
+* General information such as machine details, runtime, creation date etc.
+* Model files, parameters, scalars, and plots from popular ML frameworks such as TensorFlow and PyTorch (see list of [supported frameworks](../clearml_sdk/task_sdk.md#automatic-logging))
+* Console output
+
+:::tip Automatic logging control
+To control what ClearML automatically logs, see this [FAQ](../faq.md#controlling_logging).
+:::
+
+## To Auto-log Your Experiments
+
+1. Install `clearml` and connect it to the ClearML Server (see [instructions](../clearml_sdk/clearml_sdk.md))
+1. At the beginning of your code, import the `clearml` package:
+
+ ```python
+ from clearml import Task
+ ```
+
+ :::tip Full Automatic Logging
+ To ensure full automatic logging, it is recommended to import the `clearml` package at the top of your entry script.
+ :::
+
+1. Initialize the Task object in your `main()` function, or at the beginning of the script.
+
+ ```python
+ task = Task.init(project_name='great project', task_name='best task')
+ ```
+
+ If the project does not already exist, a new one is created automatically.
+
+ The console should display the following output:
+
+ ```
+ ClearML Task: created new task id=1ca59ef1f86d44bd81cb517d529d9e5a
+ 2021-07-25 13:59:09
+ ClearML results page: https://app.clear.ml/projects/4043a1657f374e9298649c6ba72ad233/experiments/1ca59ef1f86d44bd81cb517d529d9e5a/output/log
+ 2021-07-25 13:59:16
+ ```
+
+1. Click the results page link to go to the [task's detail page in the ClearML WebApp](../webapp/webapp_exp_track_visual.md),
+ where you can monitor the task's status, view all its logged data, visualize its results, and more!
+
+ 
+ 
+
+**That's it!** You are done integrating ClearML with your code :)
+
+Now, [command-line arguments](../fundamentals/hyperparameters.md#tracking-hyperparameters), [console output](../fundamentals/logger.md#types-of-logged-results), TensorBoard and Matplotlib, and much more will automatically be
+logged in the UI under the created Task.
+
+Sit back, relax, and watch your models converge :)
\ No newline at end of file
diff --git a/docs/getting_started/building_pipelines.md b/docs/getting_started/building_pipelines.md
new file mode 100644
index 00000000..a6a7466d
--- /dev/null
+++ b/docs/getting_started/building_pipelines.md
@@ -0,0 +1,25 @@
+---
+title: Building Pipelines
+---
+
+
+Pipelines are a way to streamline and connect multiple processes, plugging the output of one process as the input of another.
+
+ClearML Pipelines are implemented by a Controller Task that holds the logic of the pipeline steps' interactions. The
+execution logic controls which step to launch based on parent steps completing their execution. Depending on the
+specifications laid out in the controller task, a step's parameters can be overridden, enabling users to leverage other
+steps' execution products such as artifacts and parameters.
+
+When run, the controller will sequentially launch the pipeline steps. Pipelines can be executed locally or
+on any machine using the [clearml-agent](../clearml_agent.md).
+
+ClearML pipelines are created from code using one of the following:
+* [PipelineController class](../pipelines/pipelines_sdk_tasks.md) - A pythonic interface for defining and configuring the
+ pipeline controller and its steps. The controller and steps can be functions in your Python code or existing ClearML tasks.
+* [PipelineDecorator class](../pipelines/pipelines_sdk_function_decorators.md) - A set of Python decorators which transform
+ your functions into the pipeline controller and steps
+
+For more information, see [ClearML Pipelines](../pipelines/pipelines.md).
+
+
+
\ No newline at end of file
diff --git a/docs/getting_started/data_management.md b/docs/getting_started/data_management.md
new file mode 100644
index 00000000..3064a51f
--- /dev/null
+++ b/docs/getting_started/data_management.md
@@ -0,0 +1,131 @@
+---
+title: Managing Your Data
+---
+
+Data is probably one of the biggest factors that determine the success of a project. Associating a model's data with
+the model's configuration, code, and results (such as accuracy) is key to deducing meaningful insights into model behavior.
+
+[ClearML Data](../clearml_data/clearml_data.md) lets you:
+* Version your data
+* Fetch your data from every machine with minimal code changes
+* Use the data with any other task
+* Associate data with task results
+
+ClearML offers the following data management solutions:
+
+* `clearml.Dataset` - A Python interface for creating, retrieving, managing, and using datasets. See [SDK](../clearml_data/clearml_data_sdk.md)
+ for an overview of the basic methods of the Dataset module.
+* `clearml-data` - A CLI utility for creating, uploading, and managing datasets. See [CLI](../clearml_data/clearml_data_cli.md)
+ for a reference of `clearml-data` commands.
+* Hyper-Datasets - ClearML's advanced queryable dataset management solution. For more information, see [Hyper-Datasets](../hyperdatasets/overview.md)
+
+This guide uses both the `clearml-data` CLI and the `Dataset` class to do the following:
+1. Create a ClearML dataset
+2. Access the dataset from a ClearML Task in order to preprocess the data
+3. Create a new version of the dataset with the modified data
+4. Use the new version of the dataset to train a model
+
+## Creating a Dataset
+
+Let's assume you have some code that extracts data from a production database into a local folder.
+Your goal is to create an immutable copy of the data to be used by further steps.
+
+1. Create the dataset using the `clearml-data create` command and passing the dataset's project and name. You can add a
+ `latest` tag, making it easier to find it later.
+
+ ```bash
+ clearml-data create --project chatbot_data --name dataset_v1 --latest
+ ```
+
+1. Add data to the dataset using `clearml-data sync` and passing the path of the folder to be added to the dataset.
+ This command also uploads the data and finalizes the dataset automatically.
+
+ ```bash
+ clearml-data sync --folder ./work_dataset
+ ```
+
+
+## Preprocessing Data
+The second step is to preprocess the data. First access the data, then modify it,
+and lastly create a new version of the data.
+
+1. Create a task for your data preprocessing (optional):
+
+ ```python
+ from clearml import Task, Dataset
+
+ # create a task for the data processing
+ task = Task.init(project_name='data', task_name='create', task_type='data_processing')
+ ```
+
+1. Access a dataset using [`Dataset.get()`](../references/sdk/dataset.md#datasetget):
+
+ ```python
+ # get the v1 dataset
+ dataset = Dataset.get(dataset_project='data', dataset_name='dataset_v1')
+ ```
+1. Get a local mutable copy of the dataset using [`Dataset.get_mutable_local_copy`](../references/sdk/dataset.md#get_mutable_local_copy). \
+ This downloads the dataset to a specified `target_folder` (non-cached). If the folder already has contents, specify
+ whether to overwrite its contents with the dataset contents using the `overwrite` parameter.
+
+ ```python
+ # get a local mutable copy of the dataset
+ dataset_folder = dataset.get_mutable_local_copy(
+ target_folder='work_dataset',
+ overwrite=True
+ )
+ ```
+
+1. Preprocess the data, including modifying some files in the `./work_dataset` folder.
+
+1. Create a new version of the dataset:
+
+    ```python
+    # create a new version of the dataset, with the v1 dataset as its parent
+    new_dataset = Dataset.create(
+        dataset_project='data',
+        dataset_name='dataset_v2',
+        parent_datasets=[dataset],
+        # this will make sure we have the creation code and the actual dataset artifacts on the same Task
+        use_current_task=True,
+    )
+    ```
+
+1. Add the modified data to the dataset:
+
+ ```python
+ new_dataset.sync_folder(local_path=dataset_folder)
+ new_dataset.upload()
+ new_dataset.finalize()
+ ```
+
+1. Remove the `latest` tag from the previous dataset and add the tag to the new dataset:
+ ```python
+ # now let's remove the previous dataset tag
+ dataset.tags = []
+ new_dataset.tags = ['latest']
+ ```
+
+The new dataset inherits the contents of the datasets specified in `Dataset.create`'s `parent_datasets` argument.
+This not only helps trace back dataset changes with full genealogy, but also makes storage more efficient,
+since only the changed and/or added files relative to the parent versions are stored.
+When you access the dataset, the files from all parent versions are merged transparently,
+as if they had always been part of the requested dataset.
+
+## Training
+You can now train your model with the **latest** dataset in the system by retrieving the Dataset instance
+based on the `latest` tag (if two datasets share the same tag, the newest one is returned).
+Once you have the dataset, you can request a local copy of the data. All local copy requests are cached,
+so accessing the same dataset multiple times incurs no unnecessary downloads.
+
+```python
+# create a task for the model training
+task = Task.init(project_name='data', task_name='ingest', task_type='training')
+
+# get the latest dataset with the tag `latest`
+dataset = Dataset.get(dataset_tags='latest')
+
+# get a cached copy of the Dataset files
+dataset_folder = dataset.get_local_copy()
+
+# train model here
+```
\ No newline at end of file
diff --git a/docs/getting_started/ds/best_practices.md b/docs/getting_started/ds/best_practices.md
index 1952e3c4..ff5dcb4d 100644
--- a/docs/getting_started/ds/best_practices.md
+++ b/docs/getting_started/ds/best_practices.md
@@ -24,7 +24,7 @@ During early stages of model development, while code is still being modified hea
These setups can be folded into each other and that's great! If you have a GPU machine for each researcher, that's awesome!
The goal of this phase is to get a code, dataset, and environment set up, so you can start digging to find the best model!
-- [ClearML SDK](../../clearml_sdk/clearml_sdk.md) should be integrated into your code (check out [Getting Started](ds_first_steps.md)).
+- [ClearML SDK](../../clearml_sdk/clearml_sdk.md) should be integrated into your code (check out [ClearML Setup](../../clearml_sdk/clearml_sdk_setup.md)).
This helps visualizing the results and tracking progress.
- [ClearML Agent](../../clearml_agent.md) helps moving your work to other machines without the hassle of rebuilding the environment every time,
while also creating an easy queue interface that easily lets you drop your tasks to be executed one by one
@@ -47,7 +47,7 @@ that you need.
accessed, [compared](../../webapp/webapp_exp_comparing.md) and [tracked](../../webapp/webapp_exp_track_visual.md).
- [ClearML Agent](../../clearml_agent.md) does the heavy lifting. It reproduces the execution environment, clones your code,
applies code patches, manages parameters (including overriding them on the fly), executes the code, and queues multiple tasks.
- It can even [build](../../clearml_agent/clearml_agent_docker.md#exporting-a-task-into-a-standalone-docker-container) the docker container for you!
+ It can even [build](../../clearml_agent/clearml_agent_docker_exec#exporting-a-task-into-a-standalone-docker-container) the docker container for you!
- [ClearML Pipelines](../../pipelines/pipelines.md) ensure that steps run in the same order,
programmatically chaining tasks together, while giving an overview of the execution pipeline's status.
diff --git a/docs/getting_started/ds/ds_second_steps.md b/docs/getting_started/ds/ds_second_steps.md
deleted file mode 100644
index 075a679c..00000000
--- a/docs/getting_started/ds/ds_second_steps.md
+++ /dev/null
@@ -1,193 +0,0 @@
----
-title: Next Steps
----
-
-So, you've already [installed ClearML's Python package](ds_first_steps.md) and run your first task!
-
-Now, you'll learn how to track Hyperparameters, Artifacts, and Metrics!
-
-## Accessing Tasks
-
-Every previously executed experiment is stored as a Task.
-A Task's project and name can be changed after it has been executed.
-A Task is also automatically assigned an auto-generated unique identifier (UUID string) that cannot be changed and always locates the same Task in the system.
-
-Retrieve a Task object programmatically by querying the system based on either the Task ID,
-or project and name combination. You can also query tasks based on their properties, like tags (see [Querying Tasks](../../clearml_sdk/task_sdk.md#querying--searching-tasks)).
-
-```python
-prev_task = Task.get_task(task_id='123456deadbeef')
-```
-
-Once you have a Task object you can query the state of the Task, get its model(s), scalars, parameters, etc.
-
-## Log Hyperparameters
-
-For full reproducibility, it's paramount to save each task's hyperparameters. Since hyperparameters can have substantial impact
-on model performance, saving and comparing them between tasks is sometimes the key to understanding model behavior.
-
-ClearML supports logging `argparse` module arguments out of the box, so once ClearML is integrated into the code, it automatically logs all parameters provided to the argument parser.
-
-You can also log parameter dictionaries (very useful when parsing an external configuration file and storing as a dict object),
-whole configuration files, or even custom objects or [Hydra](https://hydra.cc/docs/intro/) configurations!
-
-```python
-params_dictionary = {'epochs': 3, 'lr': 0.4}
-task.connect(params_dictionary)
-```
-
-See [Configuration](../../clearml_sdk/task_sdk.md#configuration) for all hyperparameter logging options.
-
-## Log Artifacts
-
-ClearML lets you easily store the output products of a task: Model snapshot / weights file, a preprocessing of your data, feature representation of data and more!
-
-Essentially, artifacts are files (or Python objects) uploaded from a script and are stored alongside the Task.
-These artifacts can be easily accessed by the web UI or programmatically.
-
-Artifacts can be stored anywhere, either on the ClearML server, or any object storage solution or shared folder.
-See all [storage capabilities](../../integrations/storage.md).
-
-
-### Adding Artifacts
-
-Upload a local file containing the preprocessed results of the data:
-```python
-task.upload_artifact(name='data', artifact_object='/path/to/preprocess_data.csv')
-```
-
-You can also upload an entire folder with all its content by passing the folder (the folder will be zipped and uploaded as a single zip file).
-```python
-task.upload_artifact(name='folder', artifact_object='/path/to/folder/')
-```
-
-Lastly, you can upload an instance of an object; Numpy/Pandas/PIL Images are supported with `npz`/`csv.gz`/`jpg` formats accordingly.
-If the object type is unknown, ClearML pickles it and uploads the pickle file.
-
-```python
-numpy_object = np.eye(100, 100)
-task.upload_artifact(name='features', artifact_object=numpy_object)
-```
-
-For more artifact logging options, see [Artifacts](../../clearml_sdk/task_sdk.md#artifacts).
-
-### Using Artifacts
-
-Logged artifacts can be used by other Tasks, whether it's a pre-trained Model or processed data.
-To use an artifact, first you have to get an instance of the Task that originally created it,
-then you either download it and get its path, or get the artifact object directly.
-
-For example, using a previously generated preprocessed data.
-
-```python
-preprocess_task = Task.get_task(task_id='preprocessing_task_id')
-local_csv = preprocess_task.artifacts['data'].get_local_copy()
-```
-
-`task.artifacts` is a dictionary where the keys are the artifact names, and the returned object is the artifact object.
-Calling `get_local_copy()` returns a local cached copy of the artifact. Therefore, next time you execute the code, you don't
-need to download the artifact again.
-Calling `get()` gets a deserialized pickled object.
-
-Check out the [artifacts retrieval](https://github.com/clearml/clearml/blob/master/examples/reporting/artifacts_retrieval.py) example code.
-
-### Models
-
-Models are a special kind of artifact.
-Models created by popular frameworks (such as PyTorch, TensorFlow, Scikit-learn) are automatically logged by ClearML.
-All snapshots are automatically logged. In order to make sure you also automatically upload the model snapshot (instead of saving its local path),
-pass a storage location for the model files to be uploaded to.
-
-For example, upload all snapshots to an S3 bucket:
-```python
-task = Task.init(
- project_name='examples',
- task_name='storing model',
- output_uri='s3://my_models/'
-)
-```
-
-Now, whenever the framework (TensorFlow/Keras/PyTorch etc.) stores a snapshot, the model file is automatically uploaded to the bucket to a specific folder for the task.
-
-Loading models by a framework is also logged by the system; these models appear in a task's **Artifacts** tab,
-under the "Input Models" section.
-
-Check out model snapshots examples for [TensorFlow](https://github.com/clearml/clearml/blob/master/examples/frameworks/tensorflow/tensorflow_mnist.py),
-[PyTorch](https://github.com/clearml/clearml/blob/master/examples/frameworks/pytorch/pytorch_mnist.py),
-[Keras](https://github.com/clearml/clearml/blob/master/examples/frameworks/keras/keras_tensorboard.py),
-[scikit-learn](https://github.com/clearml/clearml/blob/master/examples/frameworks/scikit-learn/sklearn_joblib_example.py).
-
-#### Loading Models
-Loading a previously trained model is quite similar to loading artifacts.
-
-```python
-prev_task = Task.get_task(task_id='the_training_task')
-last_snapshot = prev_task.models['output'][-1]
-local_weights_path = last_snapshot.get_local_copy()
-```
-
-Like before, you have to get the instance of the task training the original weights files, then you can query the task for its output models (a list of snapshots), and get the latest snapshot.
-:::note
-Using TensorFlow, the snapshots are stored in a folder, meaning the `local_weights_path` will point to a folder containing your requested snapshot.
-:::
-As with artifacts, all models are cached, meaning the next time you run this code, no model needs to be downloaded.
-Once one of the frameworks will load the weights file, the running task will be automatically updated with "Input Model" pointing directly to the original training Task's Model.
-This feature lets you easily get a full genealogy of every trained and used model by your system!
-
-## Log Metrics
-
-Full metrics logging is the key to finding the best performing model!
-By default, ClearML automatically captures and logs everything reported to TensorBoard and Matplotlib.
-
-Since not all metrics are tracked that way, you can also manually report metrics using a [`Logger`](../../fundamentals/logger.md) object.
-
-You can log everything, from time series data and confusion matrices to HTML, Audio, and Video, to custom plotly graphs! Everything goes!
-
-
-
-
-Once everything is neatly logged and displayed, use the [comparison tool](../../webapp/webapp_exp_comparing.md) to find the best configuration!
-
-
-## Track Tasks
-
-The task table is a powerful tool for creating dashboards and views of your own projects, your team's projects, or the entire development.
-
-
-
-
-
-### Creating Leaderboards
-Customize the [task table](../../webapp/webapp_exp_table.md) to fit your own needs, adding desired views of parameters, metrics, and tags.
-You can filter and sort based on parameters and metrics, so creating custom views is simple and flexible.
-
-Create a dashboard for a project, presenting the latest Models and their accuracy scores, for immediate insights.
-
-It can also be used as a live leaderboard, showing the best performing tasks' status, updated in real time.
-This is helpful to monitor your projects' progress, and to share it across the organization.
-
-Any page is sharable by copying the URL from the address bar, allowing you to bookmark leaderboards or to send an exact view of a specific task or a comparison page.
-
-You can also tag Tasks for visibility and filtering allowing you to add more information on the execution of the task.
-Later you can search based on task name in the search bar, and filter tasks based on their tags, parameters, status, and more.
-
-## What's Next?
-
-This covers the basics of ClearML! Running through this guide you've learned how to log Parameters, Artifacts and Metrics!
-
-If you want to learn more look at how we see the data science process in our [best practices](best_practices.md) page,
-or check these pages out:
-
-- Scale you work and deploy [ClearML Agents](../../clearml_agent.md)
-- Develop on remote machines with [ClearML Session](../../apps/clearml_session.md)
-- Structure your work and put it into [Pipelines](../../pipelines/pipelines.md)
-- Improve your tasks with [Hyperparameter Optimization](../../fundamentals/hpo.md)
-- Check out ClearML's integrations with your favorite ML frameworks like [TensorFlow](../../integrations/tensorflow.md),
- [PyTorch](../../integrations/pytorch.md), [Keras](../../integrations/keras.md),
- and more
-
-## YouTube Playlist
-
-All these tips and tricks are also covered in ClearML's **Getting Started** series on YouTube. Go check it out :)
-
-[](https://www.youtube.com/watch?v=kyOfwVg05EM&list=PLMdIlCuMqSTnoC45ME5_JnsJX0zWqDdlO&index=3)
\ No newline at end of file
diff --git a/docs/getting_started/logging_using_artifacts.md b/docs/getting_started/logging_using_artifacts.md
new file mode 100644
index 00000000..27cafc98
--- /dev/null
+++ b/docs/getting_started/logging_using_artifacts.md
@@ -0,0 +1,122 @@
+---
+title: Logging and Using Task Artifacts
+---
+
+:::note
+This tutorial assumes that you've already set up [ClearML](../clearml_sdk/clearml_sdk_setup.md)
+:::
+
+
+ClearML lets you easily store a task's output products--or **Artifacts**:
+* [Model](#models) snapshot / weights file
+* Preprocessing of your data
+* Feature representation of data
+* And more!
+
+**Artifacts** are files or Python objects that are uploaded and stored alongside the Task.
+These artifacts can be easily accessed by the web UI or programmatically.
+
+Artifacts can be stored anywhere, either on the ClearML Server, or any object storage solution or shared folder.
+See all [storage capabilities](../integrations/storage.md).
+
+
+## Adding Artifacts
+
+Let's create a [Task](../fundamentals/task.md) and add some artifacts to it.
+
+1. Create a task using [`Task.init()`](../references/sdk/task.md#taskinit)
+
+ ```python
+ from clearml import Task
+
+ task = Task.init(project_name='great project', task_name='task with artifacts')
+ ```
+
+1. Upload a local **file** using [`Task.upload_artifact()`](../references/sdk/task.md#upload_artifact) and specifying the artifact's
+   name and its path:
+
+ ```python
+ task.upload_artifact(name='data', artifact_object='/path/to/preprocess_data.csv')
+ ```
+
+1. Upload an **entire folder** with all its content by passing the folder path (the folder will be zipped and uploaded as a single zip file).
+
+ ```python
+ task.upload_artifact(name='folder', artifact_object='/path/to/folder/')
+ ```
+
+1. Upload an instance of an object. Numpy/Pandas/PIL Images are supported with `npz`/`csv.gz`/`jpg` formats accordingly.
+ If the object type is unknown, ClearML pickles it and uploads the pickle file.
+
+ ```python
+ import numpy as np
+
+ numpy_object = np.eye(100, 100)
+ task.upload_artifact(name='features', artifact_object=numpy_object)
+ ```
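When an artifact's object type is unknown, ClearML falls back to pickling it, as described above. The behavior is conceptually a pickle round-trip to a file; a minimal sketch in plain Python (no ClearML calls):

```python
import os
import pickle
import tempfile

# An "unknown" object type: ClearML would pickle it and upload the pickle file
payload = {"features": [1, 2, 3], "label": "cat"}

# Serialize to a file, as if uploading the artifact
path = os.path.join(tempfile.mkdtemp(), "artifact.pkl")
with open(path, "wb") as f:
    pickle.dump(payload, f)

# Deserialize, as the consumer side would when retrieving the artifact
with open(path, "rb") as f:
    restored = pickle.load(f)

print(restored == payload)  # True
```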
+
+For more artifact logging options, see [Artifacts](../clearml_sdk/task_sdk.md#artifacts).
+
+### Using Artifacts
+
+Logged artifacts can be used by other Tasks, whether it's a pre-trained Model or processed data.
+To use an artifact, first you have to get an instance of the Task that originally created it,
+then you either download it and get its path, or get the artifact object directly.
+
+For example, use a previously generated preprocessed dataset:
+
+```python
+preprocess_task = Task.get_task(task_id='preprocessing_task_id')
+local_csv = preprocess_task.artifacts['data'].get_local_copy()
+```
+
+`task.artifacts` is a dictionary whose keys are the artifact names and whose values are the artifact objects.
+Calling `get_local_copy()` returns a locally cached copy of the artifact, so the next time you execute the code, the
+artifact does not need to be downloaded again.
+Calling `get()` returns the deserialized object (for example, unpickling a pickled artifact).
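The caching behavior of `get_local_copy()` can be sketched as a lookup keyed by artifact identity, downloading only on a cache miss. This is an illustration of the idea, not the ClearML implementation (`fake_download` is a made-up stand-in):

```python
import os
import tempfile

CACHE_DIR = tempfile.mkdtemp()
downloads = []  # track how many real "downloads" happen

def fake_download(artifact_id: str, dest: str) -> None:
    # Stand-in for fetching the artifact from storage
    downloads.append(artifact_id)
    with open(dest, "w") as f:
        f.write(f"contents of {artifact_id}")

def get_local_copy(artifact_id: str) -> str:
    # Return the cached path; download only on a cache miss
    path = os.path.join(CACHE_DIR, artifact_id)
    if not os.path.exists(path):
        fake_download(artifact_id, path)
    return path

first = get_local_copy("data")
second = get_local_copy("data")  # cache hit: no second download
print(first == second, len(downloads))  # True 1
```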
+
+Check out the [artifacts retrieval](https://github.com/clearml/clearml/blob/master/examples/reporting/artifacts_retrieval.py) example code.
+
+## Models
+
+Models are a special kind of artifact.
+Models created by popular frameworks (such as PyTorch, TensorFlow, Scikit-learn) are automatically logged by ClearML.
+All snapshots are automatically logged. To make sure the model snapshots themselves are uploaded (rather than just their local paths being recorded),
+pass a storage location for the model files to be uploaded to.
+
+For example, upload all snapshots to an S3 bucket:
+```python
+task = Task.init(
+ project_name='examples',
+ task_name='storing model',
+ output_uri='s3://my_models/'
+)
+```
+
+Now, whenever the framework (TensorFlow/Keras/PyTorch etc.) stores a snapshot, the model file is automatically uploaded to a task-specific folder in the bucket.
+
+Models loaded by a framework are also logged by the system; these models appear in a task's **Artifacts** tab,
+under the "Input Models" section.
+
+Check out model snapshots examples for [TensorFlow](https://github.com/clearml/clearml/blob/master/examples/frameworks/tensorflow/tensorflow_mnist.py),
+[PyTorch](https://github.com/clearml/clearml/blob/master/examples/frameworks/pytorch/pytorch_mnist.py),
+[Keras](https://github.com/clearml/clearml/blob/master/examples/frameworks/keras/keras_tensorboard.py),
+[scikit-learn](https://github.com/clearml/clearml/blob/master/examples/frameworks/scikit-learn/sklearn_joblib_example.py).
+
+### Loading Models
+Loading a previously trained model is quite similar to loading artifacts.
+
+```python
+from clearml import Task
+
+prev_task = Task.get_task(task_id='the_training_task')
+last_snapshot = prev_task.models['output'][-1]
+local_weights_path = last_snapshot.get_local_copy()
+```
+
+Like before, you first get an instance of the task that trained the original weights, then query the task for its output models (a list of snapshots), and get the latest snapshot.
+
+:::note
+With TensorFlow, snapshots are stored in a folder, so `local_weights_path` will point to a folder containing your requested snapshot.
+:::
+
+As with artifacts, all models are cached, meaning the next time you run this code, no model needs to be downloaded.
+Once a framework loads the weights file, the running task is automatically updated, with its "Input Model" pointing directly to the original training Task's model.
+This feature lets you easily trace the full genealogy of every model trained and used by your system!
+
diff --git a/docs/getting_started/main.md b/docs/getting_started/main.md
index baf51f90..aad86cd8 100644
--- a/docs/getting_started/main.md
+++ b/docs/getting_started/main.md
@@ -1,8 +1,4 @@
----
-id: main
-title: What is ClearML?
-slug: /
----
+# What is ClearML?
ClearML is an open-source, end-to-end AI Platform designed to streamline AI adoption and the entire development lifecycle.
It supports every phase of AI development, from research to production, allowing users to
@@ -116,7 +112,7 @@ alert you whenever your model improves in accuracy)
- Automatically scale cloud instances according to your resource needs with ClearML's
[AWS Autoscaler](../webapp/applications/apps_aws_autoscaler.md) and [GCP Autoscaler](../webapp/applications/apps_gcp_autoscaler.md)
GUI applications
-- Run [hyperparameter optimization](../fundamentals/hpo.md)
+- Run [hyperparameter optimization](../hpo.md)
- Build [pipelines](../pipelines/pipelines.md) from code
- Much more!
diff --git a/docs/getting_started/mlops/mlops_first_steps.md b/docs/getting_started/mlops/mlops_first_steps.md
deleted file mode 100644
index 34635cd3..00000000
--- a/docs/getting_started/mlops/mlops_first_steps.md
+++ /dev/null
@@ -1,225 +0,0 @@
----
-title: First Steps
----
-
-:::note
-This tutorial assumes that you've already [signed up](https://app.clear.ml) to ClearML
-:::
-
-ClearML provides tools for **automation**, **orchestration**, and **tracking**, all key in performing effective MLOps and LLMOps.
-
-Effective MLOps and LLMOps rely on the ability to scale work beyond one's own computer. Moving from your own machine can be time-consuming.
-Even assuming that you have all the drivers and applications installed, you still need to manage multiple Python environments
-for different packages / package versions, or worse - manage different Dockers for different package versions.
-
-Not to mention, when working on remote machines, executing experiments, tracking what's running where, and making sure machines
-are fully utilized at all times become daunting tasks.
-
-This can create overhead that derails you from your core work!
-
-ClearML Agent was designed to deal with such issues and more! It is a tool responsible for executing tasks on remote machines: on-premises or in the cloud! ClearML Agent provides the means to reproduce and track tasks in your
-machine of choice through the ClearML WebApp with no need for additional code.
-
-The agent will set up the environment for a specific Task's execution (inside a Docker, or bare-metal), install the
-required Python packages, and execute and monitor the process.
-
-
-## Set up an Agent
-
-1. Install the agent:
-
- ```bash
- pip install clearml-agent
- ```
-
-1. Connect the agent to the server by [creating credentials](https://app.clear.ml/settings/workspace-configuration), then run this:
-
- ```bash
- clearml-agent init
- ```
-
- :::note
- If you've already created credentials, you can copy-paste the default agent section from [here](https://github.com/clearml/clearml-agent/blob/master/docs/clearml.conf#L15) (this is optional. If the section is not provided the default values will be used)
- :::
-
-1. Start the agent's daemon and assign it to a [queue](../../fundamentals/agents_and_queues.md#what-is-a-queue):
-
- ```bash
- clearml-agent daemon --queue default
- ```
-
- A queue is an ordered list of Tasks that are scheduled for execution. The agent will pull Tasks from its assigned
- queue (`default` in this case), and execute them one after the other. Multiple agents can listen to the same queue
- (or even multiple queues), but only a single agent will pull a Task to be executed.
-
-:::tip Agent Deployment Modes
-ClearML Agents can be deployed in:
-* [Virtual environment mode](../../clearml_agent/clearml_agent_execution_env.md): Agent creates a new venv to execute a task.
-* [Docker mode](../../clearml_agent/clearml_agent_execution_env.md#docker-mode): Agent executes a task inside a
-Docker container.
-
-For more information, see [Running Modes](../../fundamentals/agents_and_queues.md#running-modes).
-:::
-
-## Clone a Task
-Tasks can be reproduced (cloned) for validation or as a baseline for further experimentation.
-Cloning a task duplicates the task's configuration, but not its outputs.
-
-**To clone a task in the ClearML WebApp:**
-1. Click on any project card to open its [task table](../../webapp/webapp_exp_table.md).
-1. Right-click one of the tasks on the table.
-1. Click **Clone** in the context menu, which will open a **CLONE TASK** window.
-1. Click **CLONE** in the window.
-
-The newly cloned task will appear and its info panel will slide open. The cloned task is in draft mode, so
-it can be modified. You can edit the Git / code references, control the Python packages to be installed, specify the
-Docker container image to be used, or change the hyperparameters and configuration files. See [Modifying Tasks](../../webapp/webapp_exp_tuning.md#modifying-tasks) for more information about editing tasks in the UI.
-
-## Enqueue a Task
-Once you have set up a task, it is now time to execute it.
-
-**To execute a task through the ClearML WebApp:**
-1. Right-click your draft task (the context menu is also available through the
- button on the top right of the task's info panel)
-1. Click **ENQUEUE,** which will open the **ENQUEUE TASK** window
-1. In the window, select `default` in the queue menu
-1. Click **ENQUEUE**
-
-This action pushes the task into the `default` queue. The task's status becomes *Pending* until an agent
-assigned to the queue fetches it, at which time the task's status becomes *Running*. The agent executes the
-task, and the task can be [tracked and its results visualized](../../webapp/webapp_exp_track_visual.md).
-
-
-## Programmatic Interface
-
-The cloning, modifying, and enqueuing actions described above can also be performed programmatically.
-
-### First Steps
-#### Access Previously Executed Tasks
-All Tasks in the system can be accessed through their unique Task ID, or based on their properties using the [`Task.get_task`](../../references/sdk/task.md#taskget_task)
-method. For example:
-```python
-from clearml import Task
-
-executed_task = Task.get_task(task_id='aabbcc')
-```
-
-Once a specific Task object has been obtained, it can be cloned, modified, and more. See [Advanced Usage](#advanced-usage).
-
-#### Clone a Task
-
-To duplicate a task, use the [`Task.clone`](../../references/sdk/task.md#taskclone) method, and input either a
-Task object or the Task's ID as the `source_task` argument.
-```python
-cloned_task = Task.clone(source_task=executed_task)
-```
-
-#### Enqueue a Task
-To enqueue the task, use the [`Task.enqueue`](../../references/sdk/task.md#taskenqueue) method, and input the Task object
-with the `task` argument, and the queue to push the task into with `queue_name`.
-
-```python
-Task.enqueue(task=cloned_task, queue_name='default')
-```
-
-### Advanced Usage
-Before execution, use a variety of programmatic methods to manipulate a task object.
-
-#### Modify Hyperparameters
-[Hyperparameters](../../fundamentals/hyperparameters.md) are an integral part of Machine Learning code as they let you
-control the code without directly modifying it. Hyperparameters can be added from anywhere in your code, and ClearML supports multiple ways to obtain them!
-
-Users can programmatically change cloned tasks' parameters.
-
-For example:
-```python
-from clearml import Task
-
-cloned_task = Task.clone(task_id='aabbcc')
-cloned_task.set_parameter(name='internal/magic', value=42)
-```
-
-#### Report Artifacts
-Artifacts are files created by your task. Users can upload [multiple types of data](../../clearml_sdk/task_sdk.md#logging-artifacts),
-objects and files to a task anywhere from code.
-
-```python
-import numpy as np
-from clearml import Task
-
-Task.current_task().upload_artifact(name='a_file', artifact_object='local_file.bin')
-Task.current_task().upload_artifact(name='numpy', artifact_object=np.ones(4,4))
-```
-
-Artifacts serve as a great way to pass and reuse data between tasks. Artifacts can be [retrieved](../../clearml_sdk/task_sdk.md#using-artifacts)
-by accessing the Task that created them. These artifacts can be modified and uploaded to other tasks.
-
-```python
-from clearml import Task
-
-executed_task = Task.get_task(task_id='aabbcc')
-# artifact as a file
-local_file = executed_task.artifacts['file'].get_local_copy()
-# artifact as object
-a_numpy = executed_task.artifacts['numpy'].get()
-```
-
-By facilitating the communication of complex objects between tasks, artifacts serve as the foundation of ClearML's [Data Management](../../clearml_data/clearml_data.md)
-and [pipeline](../../pipelines/pipelines.md) solutions.
-
-#### Log Models
-Logging models into the model repository is the easiest way to integrate the development process directly with production.
-Any model stored by a supported framework (Keras / TensorFlow / PyTorch / Joblib etc.) will be automatically logged into ClearML.
-
-ClearML also supports methods to explicitly log models. Models can be automatically stored on a preferred storage medium
-(S3 bucket, Google storage, etc.).
-
-#### Log Metrics
-Log as many metrics as you want from your processes using the [Logger](../../fundamentals/logger.md) module. This
-improves the visibility of your processes' progress.
-
-```python
-from clearml import Logger
-
-Logger.current_logger().report_scalar(
- graph='metric',
- series='variant',
- value=13.37,
- iteration=counter
-)
-```
-
-You can also retrieve reported scalars for programmatic analysis:
-```python
-from clearml import Task
-
-executed_task = Task.get_task(task_id='aabbcc')
-# get a summary of the min/max/last value of all reported scalars
-min_max_values = executed_task.get_last_scalar_metrics()
-# get detailed graphs of all scalars
-full_scalars = executed_task.get_reported_scalars()
-```
-
-#### Query Tasks
-You can also search and query Tasks in the system. Use the [`Task.get_tasks`](../../references/sdk/task.md#taskget_tasks)
-class method to retrieve Task objects and filter based on the specific values of the Task - status, parameters, metrics and more!
-
-```python
-from clearml import Task
-
-tasks = Task.get_tasks(
- project_name='examples',
- task_name='partial_name_match',
- task_filter={'status': 'in_progress'}
-)
-```
-
-#### Manage Your Data
-Data is probably one of the biggest factors that determines the success of a project. Associating a model's data with
-the model's configuration, code, and results (such as accuracy) is key to deducing meaningful insights into model behavior.
-
-[ClearML Data](../../clearml_data/clearml_data.md) lets you version your data, so it's never lost, fetch it from every
-machine with minimal code changes, and associate data to task results.
-
-Logging data can be done via command line, or programmatically. If any preprocessing code is involved, ClearML logs it
-as well! Once data is logged, it can be used by other tasks.
diff --git a/docs/getting_started/mlops/mlops_second_steps.md b/docs/getting_started/mlops/mlops_second_steps.md
deleted file mode 100644
index aa56772b..00000000
--- a/docs/getting_started/mlops/mlops_second_steps.md
+++ /dev/null
@@ -1,121 +0,0 @@
----
-title: Next Steps
----
-
-Once Tasks are defined and in the ClearML system, they can be chained together to create Pipelines.
-Pipelines provide users with a greater level of abstraction and automation, with Tasks running one after the other.
-
-Tasks can interface with other Tasks in the pipeline and leverage other Tasks' work products.
-
-The sections below describe the following scenarios:
-* [Dataset creation](#dataset-creation)
-* Data [processing](#preprocessing-data) and [consumption](#training)
-* [Pipeline building](#building-the-pipeline)
-
-
-## Building Tasks
-### Dataset Creation
-
-Let's assume you have some code that extracts data from a production database into a local folder.
-Your goal is to create an immutable copy of the data to be used by further steps:
-
-```bash
-clearml-data create --project data --name dataset
-clearml-data sync --folder ./from_production
-```
-
-You can add a tag `latest` to the Dataset, marking it as the latest version.
-
-### Preprocessing Data
-The second step is to preprocess the data. First access the data, then modify it,
-and lastly create a new version of the data.
-
-```python
-from clearml import Task, Dataset
-
-# create a task for the data processing part
-task = Task.init(project_name='data', task_name='create', task_type='data_processing')
-
-# get the v1 dataset
-dataset = Dataset.get(dataset_project='data', dataset_name='dataset_v1')
-
-# get a local mutable copy of the dataset
-dataset_folder = dataset.get_mutable_local_copy(
- target_folder='work_dataset',
- overwrite=True
-)
-# change some files in the `./work_dataset` folder
-
-# create a new version of the dataset with the pickle file
-new_dataset = Dataset.create(
- dataset_project='data',
- dataset_name='dataset_v2',
- parent_datasets=[dataset],
- # this will make sure we have the creation code and the actual dataset artifacts on the same Task
- use_current_task=True,
-)
-new_dataset.sync_folder(local_path=dataset_folder)
-new_dataset.upload()
-new_dataset.finalize()
-# now let's remove the previous dataset tag
-dataset.tags = []
-new_dataset.tags = ['latest']
-```
-
-The new dataset inherits the contents of the datasets specified in `Dataset.create`'s `parent_datasets` argument.
-This not only helps trace back dataset changes with full genealogy, but also makes the storage more efficient,
-since it only stores the changed and/or added files from the parent versions.
-When you access the Dataset, it automatically merges the files from all parent versions
-in a fully automatic and transparent process, as if the files were always part of the requested Dataset.
-
-### Training
-You can now train your model with the **latest** Dataset you have in the system, by getting the instance of the Dataset
-based on the `latest` tag
-(if by any chance you have two Datasets with the same tag you will get the newest).
-Once you have the dataset you can request a local copy of the data. All local copy requests are cached,
-which means that if you access the same dataset multiple times you will not have any unnecessary downloads.
-
-```python
-# create a task for the model training
-task = Task.init(project_name='data', task_name='ingest', task_type='training')
-
-# get the latest dataset with the tag `latest`
-dataset = Dataset.get(dataset_tags='latest')
-
-# get a cached copy of the Dataset files
-dataset_folder = dataset.get_local_copy()
-
-# train our model here
-```
-
-## Building the Pipeline
-
-Now that you have the data creation step, and the data training step, create a pipeline that when executed,
-will first run the first and then run the second.
-It is important to remember that pipelines are Tasks by themselves and can also be automated by other pipelines (i.e. pipelines of pipelines).
-
-```python
-from clearml import PipelineController
-
-pipe = PipelineController(
- project='data',
- name='pipeline demo',
- version="1.0"
-)
-
-pipe.add_step(
- name='step 1 data',
- base_project_name='data',
- base_task_name='create'
-)
-pipe.add_step(
- name='step 2 train',
- parents=['step 1 data', ],
- base_project_name='data',
- base_task_name='ingest'
-)
-```
-
-You can also pass the parameters from one step to the other (for example `Task.id`).
-In addition to pipelines made up of Task steps, ClearML also supports pipelines consisting of function steps. For more
-information, see the [full pipeline documentation](../../pipelines/pipelines.md).
diff --git a/docs/getting_started/remote_execution.md b/docs/getting_started/remote_execution.md
new file mode 100644
index 00000000..3f7fab5f
--- /dev/null
+++ b/docs/getting_started/remote_execution.md
@@ -0,0 +1,84 @@
+---
+title: Remote Execution
+---
+
+:::note
+This guide assumes that you've already set up [ClearML](../clearml_sdk/clearml_sdk_setup.md) and [ClearML Agent](../clearml_agent/clearml_agent_setup.md).
+:::
+
+ClearML Agent enables seamless remote execution by offloading computations from a local development environment to a more
+powerful remote machine. This is useful for:
+
+* Running an initial process (a task or function) locally before scaling up.
+* Offloading resource-intensive tasks to dedicated compute nodes.
+* Managing execution through ClearML's queue system.
+
+This guide focuses on transitioning a locally executed process to a remote machine for scalable execution. To learn how
+to reproduce a previously executed process on a remote machine, see [Reproducing Tasks](reproduce_tasks.md).
+
+## Running a Task Remotely
+
+A compelling workflow is:
+
+1. Run code on a development machine for a few iterations, or just set up the environment.
+1. Move the execution to a beefier remote machine for the actual training.
+
+Use [`Task.execute_remotely()`](../references/sdk/task.md#execute_remotely) to implement this workflow. This method stops the current manual execution, and then
+re-runs it on a remote machine.
+
+1. Launch a `clearml-agent` on your remote machine and assign it to the `default` queue:
+
+ ```commandline
+ clearml-agent daemon --queue default
+ ```
+
+1. Run the local code to send to the remote machine for execution:
+
+ ```python
+ from clearml import Task
+
+ task = Task.init(project_name="myProject", task_name="myTask")
+
+ # training code
+
+ task.execute_remotely(
+ queue_name='default',
+ clone=False,
+ exit_process=True
+ )
+ ```
+
+Once `execute_remotely()` is called on the machine, it stops the local process and enqueues the current task into the `default`
+queue. From there, an agent assigned to the queue can pull and launch it.
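The enqueue-and-pull mechanics described above can be sketched as a minimal worker loop. This illustrates the pull model only (an ordered queue, one agent per pulled task), not ClearML internals:

```python
from collections import deque

queue = deque()  # the "default" queue: an ordered list of pending tasks

def enqueue(task_name):
    queue.append(task_name)

def agent_pull():
    # An agent pulls the oldest pending task; each task goes to one agent only
    return queue.popleft() if queue else None

enqueue("myTask")            # execute_remotely() enqueues the current task
executed = []
while (task := agent_pull()) is not None:
    executed.append(task)    # the agent sets up the environment and runs it

print(executed)  # ['myTask']
```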
+
+## Running a Function Remotely
+
+You can execute a specific function remotely using [`Task.create_function_task()`](../references/sdk/task.md#create_function_task).
+This method creates a ClearML Task from a Python function and runs it on a remote machine.
+
+For example:
+
+```python
+from clearml import Task
+
+task = Task.init(project_name="myProject", task_name="Remote function")
+
+def run_me_remotely(some_argument):
+ print(some_argument)
+
+a_func_task = task.create_function_task(
+ func=run_me_remotely,
+ func_name='func_id_run_me_remotely',
+ task_name='a func task',
+ # everything below will be passed directly to our function as arguments
+ some_argument=123
+)
+```
+
+:::important Function Task Creation
+Function tasks must be created from within a regular task, created by calling `Task.init`.
+:::
+
+Arguments passed to the function are automatically logged in the task's **CONFIGURATION** tab under the **HYPERPARAMETERS > Function** section.
+Like any other arguments, they can be changed from the UI or programmatically.
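The way function arguments become an editable configuration can be sketched with a plain wrapper. This is a hypothetical illustration (`make_function_task` is made up, not the ClearML implementation):

```python
def make_function_task(func, **kwargs):
    # Record the arguments as a configuration dict -- in ClearML these
    # would appear under HYPERPARAMETERS > Function and could be edited
    config = dict(kwargs)

    def run():
        # The (possibly edited) configuration is what actually gets passed
        return func(**config)

    return config, run

def run_me_remotely(some_argument):
    return some_argument * 2

config, run = make_function_task(run_me_remotely, some_argument=123)
config["some_argument"] = 200  # simulate editing the value in the UI
print(run())  # 400
```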
+
diff --git a/docs/getting_started/reproduce_tasks.md b/docs/getting_started/reproduce_tasks.md
new file mode 100644
index 00000000..57bb1a98
--- /dev/null
+++ b/docs/getting_started/reproduce_tasks.md
@@ -0,0 +1,82 @@
+---
+title: Reproducing Tasks
+---
+
+:::note
+This tutorial assumes that you've already set up [ClearML](../clearml_sdk/clearml_sdk_setup.md) and [ClearML Agent](../clearml_agent/clearml_agent_setup.md).
+:::
+
+Tasks can be reproduced--or **Cloned**--for validation or as a baseline for further experimentation. When you initialize a task in your
+code, ClearML logs everything needed to reproduce your task and its environment:
+* Uncommitted changes
+* Used packages and their versions
+* Parameters
+* And more
+
+Cloning a task duplicates the task's configuration, but not its outputs.
+
+ClearML offers two ways to clone your task:
+* [Via the WebApp](#via-the-webapp)--no further code required
+* [Via programmatic interface](#via-programmatic-interface) using the `clearml` Python package
+
+Once you have cloned your task, you can modify its setup, and then execute it remotely on a machine of your choice using a ClearML Agent.
+
+## Via the WebApp
+
+**To clone a task in the ClearML WebApp:**
+1. Click on any project card to open its [task table](../webapp/webapp_exp_table.md).
+1. Right-click the task you want to reproduce.
+1. Click **Clone** in the context menu, which will open a **CLONE TASK** window.
+1. Click **CLONE** in the window.
+
+The newly cloned task's details page will open up. The cloned task is in *draft* mode, which means
+it can be modified. You can edit any of the Task's setup details, including:
+* Git and/or code references
+* Python packages to be installed
+* Container image to be used
+
+You can adjust the values of the task's hyperparameters and configuration files. See [Modifying Tasks](../webapp/webapp_exp_tuning.md#modifying-tasks) for more
+information about editing tasks in the UI.
+
+### Enqueue a Task
+Once you have set up a task, it is now time to execute it.
+
+**To execute a task through the ClearML WebApp:**
+1. In the task's details page, click "Menu"
+1. Click **ENQUEUE** to open the **ENQUEUE TASK** window
+1. In the window, select `default` in the `Queue` menu
+1. Click **ENQUEUE**
+
+This action pushes the task into the `default` queue. The task's status becomes *Pending* until an agent
+assigned to the queue fetches it, at which time the task's status becomes *Running*. The agent executes the
+task, and the task can be [tracked and its results visualized](../webapp/webapp_exp_track_visual.md).
+
+
+## Via Programmatic Interface
+
+The cloning, modifying, and enqueuing actions described above can also be performed programmatically using `clearml`.
+
+
+### Clone the Task
+
+To duplicate the task, use [`Task.clone()`](../references/sdk/task.md#taskclone), and input either a
+Task object or the Task's ID as the `source_task` argument.
+
+```python
+cloned_task = Task.clone(source_task='qw03485je3hap903ere54')
+```
+
+The cloned task is in *draft* mode, which means it can be modified. For modification options, such as setting new parameter
+values, see [Task SDK](../clearml_sdk/task_sdk.md).
+
+### Enqueue the Task
+To enqueue the task, use [`Task.enqueue()`](../references/sdk/task.md#taskenqueue), and input the Task object
+with the `task` argument, and the queue to push the task into with `queue_name`.
+
+```python
+Task.enqueue(task=cloned_task, queue_name='default')
+```
+
+This action pushes the task into the `default` queue. The task's status becomes *Pending* until an agent
+assigned to the queue fetches it, at which time the task's status becomes *Running*. The agent executes the
+task, and the task can be [tracked and its results visualized](../webapp/webapp_exp_track_visual.md).
\ No newline at end of file
diff --git a/docs/getting_started/track_tasks.md b/docs/getting_started/track_tasks.md
new file mode 100644
index 00000000..0b8223f6
--- /dev/null
+++ b/docs/getting_started/track_tasks.md
@@ -0,0 +1,46 @@
+---
+title: Tracking Tasks
+---
+
+Every ClearML [task](../fundamentals/task.md) you create can be found in the **All Tasks** table and in its project's
+task table.
+
+The task table is a powerful tool for creating dashboards and views of your own projects, your team's projects, or your
+entire development effort.
+
+Customize the [task table](../webapp/webapp_exp_table.md) to fit your own needs by adding views of parameters, metrics, and tags.
+Filter and sort based on various criteria, such as parameters and metrics, making it simple to create custom
+views. This allows you to:
+
+* Create a dashboard for a project, presenting the latest model accuracy scores, for immediate insights.
+* Create a live leaderboard displaying the best-performing tasks, updated in real time.
+* Monitor a project's progress and share it across the organization.
+
+## Creating Leaderboards
+
+To create a leaderboard:
+
+1. Select a project in the ClearML WebApp and go to its task table
+1. Customize the column selection. Click "Settings"
+ to view and select columns to display.
+1. Filter tasks by name using the search bar to find tasks containing any search term
+1. Filter by other categories by clicking "Filter"
+ on the relevant column. There are a few types of filters:
+ * Value set - Choose which values to include from a list of all values in the column
+ * Numerical ranges - Insert minimum and/or maximum value
+ * Date ranges - Insert starting and/or ending date and time
+ * Tags - Choose which tags to filter by from a list of all tags used in the column.
+ * Filter by multiple tag values using the **ANY** or **ALL** options, which correspond to the logical "OR" and "AND" respectively. These
+ options appear on the top of the tag list.
+ * Filter by the absence of a tag (logical "NOT") by clicking its checkbox twice. An `X` will appear in the tag's checkbox.
+1. Enable auto-refresh for real-time monitoring
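The ANY / ALL / NOT tag-filter semantics described above can be sketched with Python sets. This is an illustration of the logic, not the WebApp implementation (`matches` is a hypothetical helper):

```python
def matches(task_tags, include, mode="ANY", exclude=()):
    """Return True if a task's tags pass the filter.

    mode="ANY": at least one selected tag present (logical OR)
    mode="ALL": every selected tag present (logical AND)
    exclude:    tags that must be absent (logical NOT)
    """
    tags = set(task_tags)
    if any(tag in tags for tag in exclude):
        return False
    if not include:
        return True
    if mode == "ALL":
        return set(include) <= tags
    return bool(set(include) & tags)

print(matches(["prod", "v2"], include=["prod", "dev"], mode="ANY"))       # True
print(matches(["prod", "v2"], include=["prod", "dev"], mode="ALL"))       # False
print(matches(["prod", "v2"], include=["prod"], exclude=["deprecated"]))  # True
```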
+
+For more detailed instructions, see the [Tracking Leaderboards Tutorial](../guides/ui/building_leader_board.md).
+
+## Sharing Leaderboards
+
+Bookmark the URL of your customized leaderboard to save and share your view. The URL contains all parameters and values
+for your specific leaderboard view.
\ No newline at end of file
diff --git a/docs/guides/clearml-task/clearml_task_tutorial.md b/docs/guides/clearml-task/clearml_task_tutorial.md
index 085f352c..99b86e0f 100644
--- a/docs/guides/clearml-task/clearml_task_tutorial.md
+++ b/docs/guides/clearml-task/clearml_task_tutorial.md
@@ -7,7 +7,7 @@ on a remote or local machine, from a remote repository and your local machine.
### Prerequisites
-- [`clearml`](../../getting_started/ds/ds_first_steps.md) Python package installed and configured
+- [`clearml`](../../clearml_sdk/clearml_sdk_setup) Python package installed and configured
- [`clearml-agent`](../../clearml_agent/clearml_agent_setup.md#installation) running on at least one machine (to execute the task), configured to listen to `default` queue
### Executing Code from a Remote Repository
diff --git a/docs/guides/clearml_agent/executable_exp_containers.md b/docs/guides/clearml_agent/executable_exp_containers.md
index 35cd57da..884bc53a 100644
--- a/docs/guides/clearml_agent/executable_exp_containers.md
+++ b/docs/guides/clearml_agent/executable_exp_containers.md
@@ -9,7 +9,7 @@ script.
## Prerequisites
* [`clearml-agent`](../../clearml_agent/clearml_agent_setup.md#installation) installed and configured
-* [`clearml`](../../getting_started/ds/ds_first_steps.md#install-clearml) installed and configured
+* [`clearml`](../../clearml_sdk/clearml_sdk_setup#install-clearml) installed and configured
* [clearml](https://github.com/clearml/clearml) repo cloned (`git clone https://github.com/clearml/clearml.git`)
## Creating the ClearML Task
diff --git a/docs/guides/clearml_agent/exp_environment_containers.md b/docs/guides/clearml_agent/exp_environment_containers.md
index 0398e017..388d932e 100644
--- a/docs/guides/clearml_agent/exp_environment_containers.md
+++ b/docs/guides/clearml_agent/exp_environment_containers.md
@@ -11,7 +11,7 @@ be used when running optimization tasks.
## Prerequisites
* [`clearml-agent`](../../clearml_agent/clearml_agent_setup.md#installation) installed and configured
-* [`clearml`](../../getting_started/ds/ds_first_steps.md#install-clearml) installed and configured
+* [`clearml`](../../clearml_sdk/clearml_sdk_setup#install-clearml) installed and configured
* [clearml](https://github.com/clearml/clearml) repo cloned (`git clone https://github.com/clearml/clearml.git`)
## Creating the ClearML Task
diff --git a/docs/guides/frameworks/tensorflow/integration_keras_tuner.md b/docs/guides/frameworks/tensorflow/integration_keras_tuner.md
index 4635afd9..5db4d120 100644
--- a/docs/guides/frameworks/tensorflow/integration_keras_tuner.md
+++ b/docs/guides/frameworks/tensorflow/integration_keras_tuner.md
@@ -3,10 +3,10 @@ title: Keras Tuner
---
:::tip
-If you are not already using ClearML, see [Getting Started](../../../getting_started/ds/ds_first_steps.md) for setup
-instructions.
+If you are not already using ClearML, see [ClearML Setup instructions](../../../clearml_sdk/clearml_sdk_setup).
:::
+
Integrate ClearML into code that uses [Keras Tuner](https://www.tensorflow.org/tutorials/keras/keras_tuner). By
specifying `ClearMLTunerLogger` (see [kerastuner.py](https://github.com/clearml/clearml/blob/master/clearml/external/kerastuner.py))
as the Keras Tuner logger, ClearML automatically logs scalars and hyperparameter optimization.
diff --git a/docs/guides/main.md b/docs/guides/main.md
index 89143186..202eaa40 100644
--- a/docs/guides/main.md
+++ b/docs/guides/main.md
@@ -1,6 +1,6 @@
---
id: guidemain
-title: Examples
+title: ClearML Tutorials
slug: /guides
---
diff --git a/docs/hpo.md b/docs/hpo.md
new file mode 100644
index 00000000..5d648698
--- /dev/null
+++ b/docs/hpo.md
@@ -0,0 +1,34 @@
+---
+title: Hyperparameter Optimization
+---
+
+## What is Hyperparameter Optimization?
+Hyperparameters are variables that directly control the behaviors of training algorithms, and have a significant effect on
+the performance of the resulting machine learning models. Hyperparameter optimization (HPO) is crucial for improving
+model performance and generalization.
+
+Finding the hyperparameter values that yield the best performing models can be complicated. Manually adjusting
+hyperparameters over the course of many training trials can be slow and tedious. Luckily, ClearML offers automated
+solutions to boost hyperparameter optimization efficiency.
+
+## Workflow
+
+
+
+The preceding diagram demonstrates the typical flow of hyperparameter optimization where the parameters of a base task are optimized:
+
+1. Configure an Optimization Task with a base task whose parameters will be optimized, optimization targets, and a set of parameter values to
+ test
+1. Clone the base task. Each clone's parameters are overridden with values from the optimization task
+1. Enqueue each clone for execution by a ClearML Agent
+1. The Optimization Task records and monitors the cloned tasks' configuration and execution details, and returns a
+ summary of the optimization results.
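
The clone-override-execute loop above can be sketched in plain Python (an illustrative toy, with a stand-in `train()` playing the role of the base task; this is not the ClearML API, whose actual interfaces are listed in the next section):

```python
from itertools import product

# Base task: fixed code plus default hyperparameters.
base_params = {"lr": 0.01, "batch_size": 32}

def train(params):
    # Stand-in for executing one cloned task; returns the metric to optimize.
    return 1.0 / (1.0 + abs(params["lr"] - 0.1) + abs(params["batch_size"] - 64) / 64)

# 1. The optimization task defines the set of parameter values to test.
space = {"lr": [0.001, 0.01, 0.1], "batch_size": [32, 64]}

# 2. Clone the base task once per combination, overriding its parameters.
clones = [dict(base_params, lr=lr, batch_size=bs)
          for lr, bs in product(space["lr"], space["batch_size"])]

# 3. "Enqueue" and execute each clone, recording its result.
results = [(train(p), p) for p in clones]

# 4. Summarize: report the best-performing parameter set.
best_score, best_params = max(results, key=lambda r: r[0])
print(best_params)  # -> {'lr': 0.1, 'batch_size': 64}
```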
+
+## ClearML Solutions
+
+ClearML offers three solutions for hyperparameter optimization:
+* [GUI application](webapp/applications/apps_hpo.md): The Hyperparameter Optimization app allows you to run and manage the optimization tasks
+ directly from the web interface--no code necessary (available under the ClearML Pro plan).
+* [Command-Line Interface (CLI)](apps/clearml_param_search.md): The `clearml-param-search` CLI tool enables you to configure and launch the optimization process from your terminal.
+* [Python Interface](clearml_sdk/hpo_sdk.md): The `HyperParameterOptimizer` class within the ClearML SDK allows you to
+ configure and launch optimization tasks, and seamlessly integrate them in your existing model training tasks.
diff --git a/docs/hyperdatasets/task.md b/docs/hyperdatasets/task.md
index ea6a9063..5543acaf 100644
--- a/docs/hyperdatasets/task.md
+++ b/docs/hyperdatasets/task.md
@@ -1,6 +1,10 @@
---
-title: Tasks
+title: Dataviews
---
+
+:::important ENTERPRISE FEATURE
+Dataviews are available under the ClearML Enterprise plan.
+:::
Hyper-Datasets extend the ClearML [**Task**](../fundamentals/task.md) with [Dataviews](dataviews.md).
diff --git a/docs/hyperdatasets/webapp/webapp_annotator.md b/docs/hyperdatasets/webapp/webapp_annotator.md
index fb48de89..3a52547f 100644
--- a/docs/hyperdatasets/webapp/webapp_annotator.md
+++ b/docs/hyperdatasets/webapp/webapp_annotator.md
@@ -2,6 +2,10 @@
title: Annotation Tasks
---
+:::important ENTERPRISE FEATURE
+Annotation tasks are available under the ClearML Enterprise plan.
+:::
+
Use the Annotations page to access and manage annotation Tasks.
Use annotation tasks to efficiently organize the annotation of frames in Dataset versions and manage the work of annotators
diff --git a/docs/hyperdatasets/webapp/webapp_datasets.md b/docs/hyperdatasets/webapp/webapp_datasets.md
index cddbe574..5cc3d06f 100644
--- a/docs/hyperdatasets/webapp/webapp_datasets.md
+++ b/docs/hyperdatasets/webapp/webapp_datasets.md
@@ -2,6 +2,10 @@
title: Hyper-Datasets Page
---
+:::important ENTERPRISE FEATURE
+Hyper-Datasets are available under the ClearML Enterprise plan.
+:::
+
Use the Hyper-Datasets Page to navigate between and manage hyper-datasets.
You can view the Hyper-Datasets page in Project view
diff --git a/docs/hyperdatasets/webapp/webapp_datasets_frames.md b/docs/hyperdatasets/webapp/webapp_datasets_frames.md
index ca92d2c8..ee4037b2 100644
--- a/docs/hyperdatasets/webapp/webapp_datasets_frames.md
+++ b/docs/hyperdatasets/webapp/webapp_datasets_frames.md
@@ -2,6 +2,10 @@
title: Working with Frames
---
+:::important ENTERPRISE FEATURE
+Hyper-Datasets are available under the ClearML Enterprise plan.
+:::
+
View and edit SingleFrames in the Dataset page. After selecting a Hyper-Dataset version, the **Version Browser** shows a sample
of frames and enables viewing SingleFrames and FramesGroups, and editing SingleFrames, in the [frame viewer](#frame-viewer).
Before opening the frame viewer, you can filter the frames by applying [simple](webapp_datasets_versioning.md#simple-frame-filtering) or [advanced](webapp_datasets_versioning.md#advanced-frame-filtering)
diff --git a/docs/hyperdatasets/webapp/webapp_datasets_versioning.md b/docs/hyperdatasets/webapp/webapp_datasets_versioning.md
index dfa64503..f40d44a3 100644
--- a/docs/hyperdatasets/webapp/webapp_datasets_versioning.md
+++ b/docs/hyperdatasets/webapp/webapp_datasets_versioning.md
@@ -2,6 +2,10 @@
title: Dataset Versions
---
+:::important ENTERPRISE FEATURE
+Hyper-Datasets are available under the ClearML Enterprise plan.
+:::
+
Use the Dataset versioning WebApp (UI) features for viewing, creating, modifying, and
deleting [Dataset versions](../dataset.md#dataset-versioning).
diff --git a/docs/hyperdatasets/webapp/webapp_dataviews.md b/docs/hyperdatasets/webapp/webapp_dataviews.md
index 73e1d821..9722528b 100644
--- a/docs/hyperdatasets/webapp/webapp_dataviews.md
+++ b/docs/hyperdatasets/webapp/webapp_dataviews.md
@@ -2,6 +2,10 @@
title: The Dataview Table
---
+:::important ENTERPRISE FEATURE
+Dataviews are available under the ClearML Enterprise plan.
+:::
+
The **Dataview table** is a [customizable](#customizing-the-dataview-table) list of Dataviews associated with a project.
Use it to view and create Dataviews, and access their info panels.
diff --git a/docs/hyperdatasets/webapp/webapp_exp_comparing.md b/docs/hyperdatasets/webapp/webapp_exp_comparing.md
index 8a5b2707..333ba0cb 100644
--- a/docs/hyperdatasets/webapp/webapp_exp_comparing.md
+++ b/docs/hyperdatasets/webapp/webapp_exp_comparing.md
@@ -2,6 +2,10 @@
title: Comparing Dataviews
---
+:::important ENTERPRISE FEATURE
+Dataviews are available under the ClearML Enterprise plan.
+:::
+
In addition to [ClearML's comparison features](../../webapp/webapp_exp_comparing.md), the ClearML Enterprise WebApp
supports comparing input data selection criteria of task [Dataviews](../dataviews.md), enabling to easily locate, visualize, and analyze differences.
diff --git a/docs/hyperdatasets/webapp/webapp_exp_modifying.md b/docs/hyperdatasets/webapp/webapp_exp_modifying.md
index 1c616ae2..bbb57e62 100644
--- a/docs/hyperdatasets/webapp/webapp_exp_modifying.md
+++ b/docs/hyperdatasets/webapp/webapp_exp_modifying.md
@@ -2,6 +2,10 @@
title: Modifying Dataviews
---
+:::important ENTERPRISE FEATURE
+Dataviews are available under the ClearML Enterprise plan.
+:::
+
A task that has been executed can be [cloned](../../webapp/webapp_exp_reproducing.md), then the cloned task's
execution details can be modified, and the modified task can be executed.
diff --git a/docs/hyperdatasets/webapp/webapp_exp_track_visual.md b/docs/hyperdatasets/webapp/webapp_exp_track_visual.md
index 978b613b..569d1fff 100644
--- a/docs/hyperdatasets/webapp/webapp_exp_track_visual.md
+++ b/docs/hyperdatasets/webapp/webapp_exp_track_visual.md
@@ -2,6 +2,10 @@
title: Task Dataviews
---
+:::important ENTERPRISE FEATURE
+Dataviews are available under the ClearML Enterprise plan.
+:::
+
While a task is running, and any time after it finishes, results are tracked and can be visualized in the ClearML
Enterprise WebApp (UI).
diff --git a/docs/img/app_bool_choice.png b/docs/img/app_bool_choice.png
new file mode 100644
index 00000000..d0df5dd8
Binary files /dev/null and b/docs/img/app_bool_choice.png differ
diff --git a/docs/img/app_bool_choice_dark.png b/docs/img/app_bool_choice_dark.png
new file mode 100644
index 00000000..5e28c914
Binary files /dev/null and b/docs/img/app_bool_choice_dark.png differ
diff --git a/docs/img/app_cond_str.png b/docs/img/app_cond_str.png
new file mode 100644
index 00000000..7ac43ae4
Binary files /dev/null and b/docs/img/app_cond_str.png differ
diff --git a/docs/img/app_cond_str_dark.png b/docs/img/app_cond_str_dark.png
new file mode 100644
index 00000000..8b26acbe
Binary files /dev/null and b/docs/img/app_cond_str_dark.png differ
diff --git a/docs/img/app_group.png b/docs/img/app_group.png
new file mode 100644
index 00000000..9d377d5a
Binary files /dev/null and b/docs/img/app_group.png differ
diff --git a/docs/img/app_group_dark.png b/docs/img/app_group_dark.png
new file mode 100644
index 00000000..116fec04
Binary files /dev/null and b/docs/img/app_group_dark.png differ
diff --git a/docs/img/app_html_elements.png b/docs/img/app_html_elements.png
new file mode 100644
index 00000000..67769ac1
Binary files /dev/null and b/docs/img/app_html_elements.png differ
diff --git a/docs/img/app_html_elements_dark.png b/docs/img/app_html_elements_dark.png
new file mode 100644
index 00000000..f9eb9eca
Binary files /dev/null and b/docs/img/app_html_elements_dark.png differ
diff --git a/docs/img/app_log.png b/docs/img/app_log.png
new file mode 100644
index 00000000..272def23
Binary files /dev/null and b/docs/img/app_log.png differ
diff --git a/docs/img/app_log_dark.png b/docs/img/app_log_dark.png
new file mode 100644
index 00000000..16c90163
Binary files /dev/null and b/docs/img/app_log_dark.png differ
diff --git a/docs/img/app_plot.png b/docs/img/app_plot.png
new file mode 100644
index 00000000..26907fce
Binary files /dev/null and b/docs/img/app_plot.png differ
diff --git a/docs/img/app_plot_dark.png b/docs/img/app_plot_dark.png
new file mode 100644
index 00000000..840e772a
Binary files /dev/null and b/docs/img/app_plot_dark.png differ
diff --git a/docs/img/app_proj_selection.png b/docs/img/app_proj_selection.png
new file mode 100644
index 00000000..3b125b91
Binary files /dev/null and b/docs/img/app_proj_selection.png differ
diff --git a/docs/img/app_proj_selection_dark.png b/docs/img/app_proj_selection_dark.png
new file mode 100644
index 00000000..8a3dc9e3
Binary files /dev/null and b/docs/img/app_proj_selection_dark.png differ
diff --git a/docs/img/gif/ai_dev_center.gif b/docs/img/gif/ai_dev_center.gif
new file mode 100644
index 00000000..7a76737a
Binary files /dev/null and b/docs/img/gif/ai_dev_center.gif differ
diff --git a/docs/img/gif/ai_dev_center_dark.gif b/docs/img/gif/ai_dev_center_dark.gif
new file mode 100644
index 00000000..ab5a4efb
Binary files /dev/null and b/docs/img/gif/ai_dev_center_dark.gif differ
diff --git a/docs/img/gif/genai_engine.gif b/docs/img/gif/genai_engine.gif
new file mode 100644
index 00000000..ecca8a5e
Binary files /dev/null and b/docs/img/gif/genai_engine.gif differ
diff --git a/docs/img/gif/genai_engine_dark.gif b/docs/img/gif/genai_engine_dark.gif
new file mode 100644
index 00000000..6af30d0f
Binary files /dev/null and b/docs/img/gif/genai_engine_dark.gif differ
diff --git a/docs/img/gif/infra_control_plane.gif b/docs/img/gif/infra_control_plane.gif
new file mode 100644
index 00000000..bb5b524b
Binary files /dev/null and b/docs/img/gif/infra_control_plane.gif differ
diff --git a/docs/img/gif/infra_control_plane_dark.gif b/docs/img/gif/infra_control_plane_dark.gif
new file mode 100644
index 00000000..92e3bc10
Binary files /dev/null and b/docs/img/gif/infra_control_plane_dark.gif differ
diff --git a/docs/integrations/accelerate.md b/docs/integrations/accelerate.md
index 6be0f9ab..8d5d685e 100644
--- a/docs/integrations/accelerate.md
+++ b/docs/integrations/accelerate.md
@@ -9,7 +9,7 @@ such as required packages and uncommitted changes, and supports reporting scalar
## Setup
-To use Accelerate's ClearML tracker, make sure that `clearml` is [installed and set up](../getting_started/ds/ds_first_steps.md#install-clearml)
+To use Accelerate's ClearML tracker, make sure that `clearml` is [installed and set up](../clearml_sdk/clearml_sdk_setup#install-clearml)
in your environment, and use the `log_with` parameter when instantiating an `Accelerator`:
```python
diff --git a/docs/integrations/autokeras.md b/docs/integrations/autokeras.md
index a92eb852..dd90106f 100644
--- a/docs/integrations/autokeras.md
+++ b/docs/integrations/autokeras.md
@@ -3,7 +3,7 @@ title: AutoKeras
---
:::tip
-If you are not already using ClearML, see [Getting Started](../getting_started/ds/ds_first_steps.md) for setup
+If you are not already using ClearML, see [ClearML Setup](../clearml_sdk/clearml_sdk_setup) for setup
instructions.
:::
diff --git a/docs/integrations/catboost.md b/docs/integrations/catboost.md
index 50c41700..77476e6b 100644
--- a/docs/integrations/catboost.md
+++ b/docs/integrations/catboost.md
@@ -3,7 +3,7 @@ title: CatBoost
---
:::tip
-If you are not already using ClearML, see [Getting Started](../getting_started/ds/ds_first_steps.md) for setup
+If you are not already using ClearML, see [ClearML Setup](../clearml_sdk/clearml_sdk_setup) for setup
instructions.
:::
@@ -117,5 +117,5 @@ task.execute_remotely(queue_name='default', exit_process=True)
## Hyperparameter Optimization
Use ClearML's [`HyperParameterOptimizer`](../references/sdk/hpo_optimization_hyperparameteroptimizer.md) class to find
-the hyperparameter values that yield the best performing models. See [Hyperparameter Optimization](../fundamentals/hpo.md)
+the hyperparameter values that yield the best performing models. See [Hyperparameter Optimization](../hpo.md)
for more information.
diff --git a/docs/integrations/click.md b/docs/integrations/click.md
index cf9298bd..c1169615 100644
--- a/docs/integrations/click.md
+++ b/docs/integrations/click.md
@@ -3,7 +3,7 @@ title: Click
---
:::tip
-If you are not already using ClearML, see [Getting Started](../getting_started/ds/ds_first_steps.md) for setup
+If you are not already using ClearML, see [ClearML Setup](../clearml_sdk/clearml_sdk_setup) for setup
instructions.
:::
diff --git a/docs/integrations/fastai.md b/docs/integrations/fastai.md
index e8fd03e5..62bc5e16 100644
--- a/docs/integrations/fastai.md
+++ b/docs/integrations/fastai.md
@@ -3,8 +3,7 @@ title: Fast.ai
---
:::tip
-If you are not already using ClearML, see [Getting Started](../getting_started/ds/ds_first_steps.md) for setup
-instructions.
+If you are not already using ClearML, see [ClearML Setup instructions](../clearml_sdk/clearml_sdk_setup).
:::
ClearML integrates seamlessly with [fast.ai](https://www.fast.ai/), automatically logging its models and scalars.
diff --git a/docs/integrations/hydra.md b/docs/integrations/hydra.md
index d8a05c04..faaa41b0 100644
--- a/docs/integrations/hydra.md
+++ b/docs/integrations/hydra.md
@@ -3,8 +3,7 @@ title: Hydra
---
:::tip
-If you are not already using ClearML, see [Getting Started](../getting_started/ds/ds_first_steps.md) for setup
-instructions.
+If you are not already using ClearML, see [ClearML Setup instructions](../clearml_sdk/clearml_sdk_setup).
:::
diff --git a/docs/integrations/ignite.md b/docs/integrations/ignite.md
index 9b2de832..683292ab 100644
--- a/docs/integrations/ignite.md
+++ b/docs/integrations/ignite.md
@@ -3,8 +3,7 @@ title: PyTorch Ignite
---
:::tip
-If you are not already using ClearML, see [Getting Started](../getting_started/ds/ds_first_steps.md) for setup
-instructions.
+If you are not already using ClearML, see [ClearML Setup instructions](../clearml_sdk/clearml_sdk_setup).
:::
[PyTorch Ignite](https://pytorch.org/ignite/index.html) is a library for training and evaluating neural networks in
diff --git a/docs/integrations/integrations.md b/docs/integrations/integrations.md
new file mode 100644
index 00000000..589d30fa
--- /dev/null
+++ b/docs/integrations/integrations.md
@@ -0,0 +1,40 @@
+# ClearML Integrations
+
+ClearML seamlessly integrates with a wide range of popular machine learning frameworks, tools, and platforms to enhance your ML development workflow. Our integrations enable automatic experiment tracking, model management, and pipeline orchestration across your preferred tools.
+
+### Deep Learning Frameworks
+* [PyTorch](pytorch.md)
+* [TensorFlow](tensorflow.md)
+* [Keras](keras.md)
+* [YOLO v5](yolov5.md)
+* [YOLO v8](yolov8.md)
+* [MMEngine](mmengine.md)
+* [MMCV](mmcv.md)
+* [MONAI](monai.md)
+* [Nvidia TAO](tao.md)
+* [MegEngine](megengine.md)
+* [Fast.ai](fastai.md)
+
+### ML Frameworks
+* [scikit-learn](scikit_learn.md)
+* [XGBoost](xgboost.md)
+* [LightGBM](lightgbm.md)
+* [CatBoost](catboost.md)
+* [Seaborn](seaborn.md)
+
+### Configuration and Optimization
+* [AutoKeras](autokeras.md)
+* [Keras Tuner](keras_tuner.md)
+* [Optuna](optuna.md)
+* [Hydra](hydra.md)
+* [Click](click.md)
+* [Python Fire](python_fire.md)
+* [jsonargparse](jsonargparse.md)
+
+### MLOps and Visualization
+* [TensorBoard](tensorboard.md)
+* [TensorBoardX](tensorboardx.md)
+* [Matplotlib](matplotlib.md)
+* [LangChain](langchain.md)
+* [PyTorch Ignite](ignite.md)
+* [PyTorch Lightning](pytorch_lightning.md)
diff --git a/docs/integrations/jsonargparse.md b/docs/integrations/jsonargparse.md
index 8f348e45..42cc2fa2 100644
--- a/docs/integrations/jsonargparse.md
+++ b/docs/integrations/jsonargparse.md
@@ -3,11 +3,11 @@ title: jsonargparse
---
:::tip
-If you are not already using ClearML, see [Getting Started](../getting_started/ds/ds_first_steps.md) for setup
-instructions.
+If you are not already using ClearML, see [ClearML Setup instructions](../clearml_sdk/clearml_sdk_setup).
:::
+
[jsonargparse](https://github.com/omni-us/jsonargparse) is a Python package for creating command-line interfaces.
ClearML integrates seamlessly with `jsonargparse` and automatically logs its command-line parameters and connected
configuration files.
diff --git a/docs/integrations/keras.md b/docs/integrations/keras.md
index 52f6f487..87c1c140 100644
--- a/docs/integrations/keras.md
+++ b/docs/integrations/keras.md
@@ -3,10 +3,10 @@ title: Keras
---
:::tip
-If you are not already using ClearML, see [Getting Started](../getting_started/ds/ds_first_steps.md) for setup
-instructions.
+If you are not already using ClearML, see [ClearML Setup instructions](../clearml_sdk/clearml_sdk_setup).
:::
+
ClearML integrates with [Keras](https://keras.io/) out-of-the-box, automatically logging its models, scalars,
TensorFlow definitions, and TensorBoard outputs.
@@ -129,5 +129,5 @@ task.execute_remotely(queue_name='default', exit_process=True)
## Hyperparameter Optimization
Use ClearML's [`HyperParameterOptimizer`](../references/sdk/hpo_optimization_hyperparameteroptimizer.md) class to find
-the hyperparameter values that yield the best performing models. See [Hyperparameter Optimization](../fundamentals/hpo.md)
+the hyperparameter values that yield the best performing models. See [Hyperparameter Optimization](../hpo.md)
for more information.
diff --git a/docs/integrations/keras_tuner.md b/docs/integrations/keras_tuner.md
index d75cffc1..705526b8 100644
--- a/docs/integrations/keras_tuner.md
+++ b/docs/integrations/keras_tuner.md
@@ -3,10 +3,10 @@ title: Keras Tuner
---
:::tip
-If you are not already using ClearML, see [Getting Started](../getting_started/ds/ds_first_steps.md) for setup
-instructions.
+If you are not already using ClearML, see [ClearML Setup instructions](../clearml_sdk/clearml_sdk_setup).
:::
+
[Keras Tuner](https://www.tensorflow.org/tutorials/keras/keras_tuner) is a library that helps you pick the optimal set
of hyperparameters for training your models. ClearML integrates seamlessly with `kerastuner` and automatically logs
task scalars, the output model, and hyperparameter optimization summary.
diff --git a/docs/integrations/langchain.md b/docs/integrations/langchain.md
index f4fef37d..c85f7551 100644
--- a/docs/integrations/langchain.md
+++ b/docs/integrations/langchain.md
@@ -3,10 +3,10 @@ title: LangChain
---
:::tip
-If you are not already using ClearML, see [Getting Started](../getting_started/ds/ds_first_steps.md) for setup
-instructions.
+If you are not already using ClearML, see [ClearML Setup instructions](../clearml_sdk/clearml_sdk_setup).
:::
+
[LangChain](https://github.com/langchain-ai/langchain) is a popular framework for developing applications powered by
language models. You can integrate ClearML into your LangChain code using the built-in `ClearMLCallbackHandler`. This
class is used to create a ClearML Task to log LangChain assets and metrics.
diff --git a/docs/integrations/lightgbm.md b/docs/integrations/lightgbm.md
index cce9887e..19e4eb23 100644
--- a/docs/integrations/lightgbm.md
+++ b/docs/integrations/lightgbm.md
@@ -3,10 +3,10 @@ title: LightGBM
---
:::tip
-If you are not already using ClearML, see [Getting Started](../getting_started/ds/ds_first_steps.md) for setup
-instructions.
+If you are not already using ClearML, see [ClearML Setup instructions](../clearml_sdk/clearml_sdk_setup).
:::
+
ClearML integrates seamlessly with [LightGBM](https://github.com/microsoft/LightGBM), automatically logging its models,
metric plots, and parameters.
@@ -118,5 +118,5 @@ task.execute_remotely(queue_name='default', exit_process=True)
## Hyperparameter Optimization
Use ClearML's [`HyperParameterOptimizer`](../references/sdk/hpo_optimization_hyperparameteroptimizer.md) class to find
-the hyperparameter values that yield the best performing models. See [Hyperparameter Optimization](../fundamentals/hpo.md)
+the hyperparameter values that yield the best performing models. See [Hyperparameter Optimization](../hpo.md)
for more information.
diff --git a/docs/integrations/matplotlib.md b/docs/integrations/matplotlib.md
index 06714ff8..dde8e0cd 100644
--- a/docs/integrations/matplotlib.md
+++ b/docs/integrations/matplotlib.md
@@ -3,10 +3,10 @@ title: Matplotlib
---
:::tip
-If you are not already using ClearML, see [Getting Started](../getting_started/ds/ds_first_steps.md) for setup
-instructions.
+If you are not already using ClearML, see [ClearML Setup instructions](../clearml_sdk/clearml_sdk_setup).
:::
+
[Matplotlib](https://matplotlib.org/) is a Python library for data visualization. ClearML automatically captures plots
and images created with `matplotlib`.
diff --git a/docs/integrations/megengine.md b/docs/integrations/megengine.md
index 77cad702..af734eb8 100644
--- a/docs/integrations/megengine.md
+++ b/docs/integrations/megengine.md
@@ -3,10 +3,10 @@ title: MegEngine
---
:::tip
-If you are not already using ClearML, see [Getting Started](../getting_started/ds/ds_first_steps.md) for setup
-instructions.
+If you are not already using ClearML, see [ClearML Setup instructions](../clearml_sdk/clearml_sdk_setup).
:::
+
ClearML integrates seamlessly with [MegEngine](https://github.com/MegEngine/MegEngine), automatically logging its models.
All you have to do is simply add two lines of code to your MegEngine script:
@@ -114,5 +114,5 @@ task.execute_remotely(queue_name='default', exit_process=True)
## Hyperparameter Optimization
Use ClearML's [`HyperParameterOptimizer`](../references/sdk/hpo_optimization_hyperparameteroptimizer.md) class to find
-the hyperparameter values that yield the best performing models. See [Hyperparameter Optimization](../fundamentals/hpo.md)
+the hyperparameter values that yield the best performing models. See [Hyperparameter Optimization](../hpo.md)
for more information.
diff --git a/docs/integrations/mmcv.md b/docs/integrations/mmcv.md
index 8c77ca70..b9833820 100644
--- a/docs/integrations/mmcv.md
+++ b/docs/integrations/mmcv.md
@@ -7,10 +7,10 @@ title: MMCV v1.x
:::
:::tip
-If you are not already using ClearML, see [Getting Started](../getting_started/ds/ds_first_steps.md) for setup
-instructions.
+If you are not already using ClearML, see [ClearML Setup instructions](../clearml_sdk/clearml_sdk_setup).
:::
+
[MMCV](https://github.com/open-mmlab/mmcv/tree/1.x) is a computer vision framework developed by OpenMMLab. You can integrate ClearML into your
code using the `mmcv` package's [`ClearMLLoggerHook`](https://mmcv.readthedocs.io/en/master/_modules/mmcv/runner/hooks/logger/clearml.html)
class. This class is used to create a ClearML Task and to automatically log metrics.
diff --git a/docs/integrations/mmengine.md b/docs/integrations/mmengine.md
index 09d64256..733625f6 100644
--- a/docs/integrations/mmengine.md
+++ b/docs/integrations/mmengine.md
@@ -3,10 +3,10 @@ title: MMEngine
---
:::tip
-If you are not already using ClearML, see [Getting Started](../getting_started/ds/ds_first_steps.md) for setup
-instructions.
+If you are not already using ClearML, see [ClearML Setup instructions](../clearml_sdk/clearml_sdk_setup).
:::
+
[MMEngine](https://github.com/open-mmlab/mmengine) is a library for training deep learning models based on PyTorch.
MMEngine supports ClearML through a builtin logger: It automatically logs task environment information, such as
required packages and uncommitted changes, and supports reporting scalars, parameters, and debug samples.
diff --git a/docs/integrations/monai.md b/docs/integrations/monai.md
index 3dc98233..8b82e036 100644
--- a/docs/integrations/monai.md
+++ b/docs/integrations/monai.md
@@ -3,10 +3,10 @@ title: MONAI
---
:::tip
-If you are not already using ClearML, see [Getting Started](../getting_started/ds/ds_first_steps.md) for setup
-instructions.
+If you are not already using ClearML, see [ClearML Setup instructions](../clearml_sdk/clearml_sdk_setup).
:::
+
[MONAI](https://github.com/Project-MONAI/MONAI) is a PyTorch-based, open-source framework for deep learning in healthcare
imaging. You can integrate ClearML into your code using MONAI's built-in handlers: [`ClearMLImageHandler`, `ClearMLStatsHandler`](#clearmlimagehandler-and-clearmlstatshandler),
and [`ModelCheckpoint`](#modelcheckpoint).
diff --git a/docs/integrations/optuna.md b/docs/integrations/optuna.md
index f660f78b..5b895ac4 100644
--- a/docs/integrations/optuna.md
+++ b/docs/integrations/optuna.md
@@ -2,7 +2,7 @@
title: Optuna
---
-[Optuna](https://optuna.readthedocs.io/en/latest) is a [hyperparameter optimization](../fundamentals/hpo.md) framework,
+[Optuna](https://optuna.readthedocs.io/en/latest) is a [hyperparameter optimization](../hpo.md) framework,
which makes use of different samplers such as grid search, random, bayesian, and evolutionary algorithms. You can integrate
Optuna into ClearML's automated hyperparameter optimization.
diff --git a/docs/integrations/pytorch.md b/docs/integrations/pytorch.md
index 59191fc9..7f5ed5a0 100644
--- a/docs/integrations/pytorch.md
+++ b/docs/integrations/pytorch.md
@@ -3,10 +3,10 @@ title: PyTorch
---
:::tip
-If you are not already using ClearML, see [Getting Started](../getting_started/ds/ds_first_steps.md) for setup
-instructions.
+If you are not already using ClearML, see [ClearML Setup instructions](../clearml_sdk/clearml_sdk_setup).
:::
+
ClearML integrates seamlessly with [PyTorch](https://pytorch.org/), automatically logging its models.
All you have to do is simply add two lines of code to your PyTorch script:
diff --git a/docs/integrations/pytorch_lightning.md b/docs/integrations/pytorch_lightning.md
index d01f5cb2..68432d5b 100644
--- a/docs/integrations/pytorch_lightning.md
+++ b/docs/integrations/pytorch_lightning.md
@@ -3,10 +3,10 @@ title: PyTorch Lightning
---
:::tip
-If you are not already using ClearML, see [Getting Started](../getting_started/ds/ds_first_steps.md) for setup
-instructions.
+If you are not already using ClearML, see [ClearML Setup instructions](../clearml_sdk/clearml_sdk_setup).
:::
+
[PyTorch Lightning](https://github.com/Lightning-AI/lightning) is a framework that simplifies the process of training and deploying PyTorch models. ClearML seamlessly
integrates with PyTorch Lightning, automatically logging PyTorch models, parameters supplied by [LightningCLI](https://lightning.ai/docs/pytorch/stable/cli/lightning_cli.html),
and more.
@@ -144,6 +144,6 @@ task.execute_remotely(queue_name='default', exit_process=True)
## Hyperparameter Optimization
Use ClearML's [`HyperParameterOptimizer`](../references/sdk/hpo_optimization_hyperparameteroptimizer.md) class to find
-the hyperparameter values that yield the best performing models. See [Hyperparameter Optimization](../fundamentals/hpo.md)
+the hyperparameter values that yield the best performing models. See [Hyperparameter Optimization](../hpo.md)
for more information.
diff --git a/docs/integrations/scikit_learn.md b/docs/integrations/scikit_learn.md
index 5a6afbab..3af78f22 100644
--- a/docs/integrations/scikit_learn.md
+++ b/docs/integrations/scikit_learn.md
@@ -3,10 +3,10 @@ title: scikit-learn
---
:::tip
-If you are not already using ClearML, see [Getting Started](../getting_started/ds/ds_first_steps.md) for setup
-instructions.
+If you are not already using ClearML, see [ClearML Setup instructions](../clearml_sdk/clearml_sdk_setup).
:::
+
ClearML integrates seamlessly with [scikit-learn](https://scikit-learn.org/stable/), automatically logging models created
with `joblib`.
diff --git a/docs/integrations/seaborn.md b/docs/integrations/seaborn.md
index ca2e1a2c..54b65583 100644
--- a/docs/integrations/seaborn.md
+++ b/docs/integrations/seaborn.md
@@ -3,10 +3,10 @@ title: Seaborn
---
:::tip
-If you are not already using ClearML, see [Getting Started](../getting_started/ds/ds_first_steps.md) for setup
-instructions.
+If you are not already using ClearML, see [ClearML Setup instructions](../clearml_sdk/clearml_sdk_setup).
:::
+
[seaborn](https://seaborn.pydata.org/) is a Python library for data visualization.
ClearML automatically captures plots created using `seaborn`. All you have to do is add two
lines of code to your script:
diff --git a/docs/integrations/tensorboard.md b/docs/integrations/tensorboard.md
index a0921c3b..317c983f 100644
--- a/docs/integrations/tensorboard.md
+++ b/docs/integrations/tensorboard.md
@@ -3,9 +3,10 @@ title: TensorBoard
---
:::tip
-If you are not already using ClearML, see [Getting Started](../getting_started/ds/ds_first_steps.md).
+If you are not already using ClearML, see [ClearML Setup instructions](../clearml_sdk/clearml_sdk_setup).
:::
+
[TensorBoard](https://www.tensorflow.org/tensorboard) is TensorFlow's data visualization toolkit.
ClearML automatically captures all data logged to TensorBoard. All you have to do is add two
lines of code to your script:
diff --git a/docs/integrations/tensorboardx.md b/docs/integrations/tensorboardx.md
index c8bf97bf..673b2c7b 100644
--- a/docs/integrations/tensorboardx.md
+++ b/docs/integrations/tensorboardx.md
@@ -3,7 +3,7 @@ title: TensorboardX
---
:::tip
-If you are not already using ClearML, see [Getting Started](../getting_started/ds/ds_first_steps.md).
+If you are not already using ClearML, see [ClearML Setup instructions](../clearml_sdk/clearml_sdk_setup).
:::
[TensorboardX](https://tensorboardx.readthedocs.io/en/latest/tutorial.html#what-is-tensorboard-x) is a data
diff --git a/docs/integrations/tensorflow.md b/docs/integrations/tensorflow.md
index 3bdaee58..b87358bb 100644
--- a/docs/integrations/tensorflow.md
+++ b/docs/integrations/tensorflow.md
@@ -3,10 +3,10 @@ title: TensorFlow
---
:::tip
-If you are not already using ClearML, see [Getting Started](../getting_started/ds/ds_first_steps.md) for setup
-instructions.
+If you are not already using ClearML, see [ClearML Setup instructions](../clearml_sdk/clearml_sdk_setup).
:::
+
ClearML integrates with [TensorFlow](https://www.tensorflow.org/) out-of-the-box, automatically logging its models,
definitions, scalars, as well as TensorBoard outputs.
@@ -131,5 +131,5 @@ task.execute_remotely(queue_name='default', exit_process=True)
## Hyperparameter Optimization
Use ClearML's [`HyperParameterOptimizer`](../references/sdk/hpo_optimization_hyperparameteroptimizer.md) class to find
-the hyperparameter values that yield the best performing models. See [Hyperparameter Optimization](../fundamentals/hpo.md)
+the hyperparameter values that yield the best performing models. See [Hyperparameter Optimization](../hpo.md)
for more information.
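As a rough illustration of the `HyperParameterOptimizer` class (the base task ID, hyperparameter names, metric title/series, and queue name below are placeholder assumptions):

```python
from clearml import Task
from clearml.automation import (
    HyperParameterOptimizer,
    GridSearch,
    UniformIntegerParameterRange,
    DiscreteParameterRange,
)

# The controller itself is a ClearML task
task = Task.init(project_name="examples", task_name="HPO controller",
                 task_type=Task.TaskTypes.optimizer)

optimizer = HyperParameterOptimizer(
    base_task_id="<base_task_id>",  # template task whose hyperparameters are tuned
    hyper_parameters=[
        UniformIntegerParameterRange("General/epochs", min_value=2, max_value=10),
        DiscreteParameterRange("General/batch_size", values=[32, 64, 128]),
    ],
    # Scalar reported by the base task, used to rank candidates
    objective_metric_title="validation",
    objective_metric_series="accuracy",
    objective_metric_sign="max",
    optimizer_class=GridSearch,
    execution_queue="default",          # queue serviced by a clearml-agent
    max_number_of_concurrent_tasks=2,
)

optimizer.start()
optimizer.wait()   # block until the search budget is exhausted
optimizer.stop()
```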
diff --git a/docs/integrations/transformers.md b/docs/integrations/transformers.md
index 754fd07f..5bf1d27e 100644
--- a/docs/integrations/transformers.md
+++ b/docs/integrations/transformers.md
@@ -90,5 +90,5 @@ The ClearML Agent executing the task will use the new values to [override any ha
## Hyperparameter Optimization
Use ClearML's [`HyperParameterOptimizer`](../references/sdk/hpo_optimization_hyperparameteroptimizer.md) class to find
-the hyperparameter values that yield the best performing models. See [Hyperparameter Optimization](../fundamentals/hpo.md)
+the hyperparameter values that yield the best performing models. See [Hyperparameter Optimization](../hpo.md)
for more information.
diff --git a/docs/integrations/xgboost.md b/docs/integrations/xgboost.md
index 7f230f81..8039bb68 100644
--- a/docs/integrations/xgboost.md
+++ b/docs/integrations/xgboost.md
@@ -3,8 +3,7 @@ title: XGBoost
---
:::tip
-If you are not already using ClearML, see [Getting Started](../getting_started/ds/ds_first_steps.md) for setup
-instructions.
+If you are not already using ClearML, see [ClearML Setup instructions](../clearml_sdk/clearml_sdk_setup).
:::
ClearML integrates seamlessly with [XGBoost](https://xgboost.readthedocs.io/en/stable/), automatically logging its models,
@@ -145,5 +144,5 @@ task.execute_remotely(queue_name='default', exit_process=True)
## Hyperparameter Optimization
Use ClearML's [`HyperParameterOptimizer`](../references/sdk/hpo_optimization_hyperparameteroptimizer.md) class to find
-the hyperparameter values that yield the best performing models. See [Hyperparameter Optimization](../fundamentals/hpo.md)
+the hyperparameter values that yield the best performing models. See [Hyperparameter Optimization](../hpo.md)
for more information.
diff --git a/docs/integrations/yolov5.md b/docs/integrations/yolov5.md
index 6690cf75..2629b791 100644
--- a/docs/integrations/yolov5.md
+++ b/docs/integrations/yolov5.md
@@ -7,7 +7,7 @@ built in logger:
* Track every YOLOv5 training run in ClearML
* Version and easily access your custom training data with [ClearML Data](../clearml_data/clearml_data.md)
* Remotely train and monitor your YOLOv5 training runs using [ClearML Agent](../clearml_agent.md)
-* Get the very best mAP using ClearML [Hyperparameter Optimization](../fundamentals/hpo.md)
+* Get the very best mAP using ClearML [Hyperparameter Optimization](../hpo.md)
* Turn your newly trained YOLOv5 model into an API with just a few commands using [ClearML Serving](../clearml_serving/clearml_serving.md)
## Setup
diff --git a/docs/overview.md b/docs/overview.md
new file mode 100644
index 00000000..12cb5402
--- /dev/null
+++ b/docs/overview.md
@@ -0,0 +1,82 @@
+---
+id: overview
+title: What is ClearML?
+slug: /
+---
+
+# ClearML Documentation
+
+## Overview
+Welcome to the documentation for ClearML, the end-to-end platform for streamlining AI development and deployment. ClearML consists of three essential layers:
+1. [**Infrastructure Control Plane**](#infrastructure-control-plane) (Cloud/On-Prem Agnostic)
+2. [**AI Development Center**](#ai-development-center)
+3. [**GenAI App Engine**](#genai-app-engine)
+
+Each layer provides distinct functionality to ensure an efficient and scalable AI workflow from development to deployment.
+
+
+
+
+---
+
+## Infrastructure Control Plane
+The Infrastructure Control Plane is the foundation of the ClearML platform. It provides compute resource provisioning and management, enabling administrators to expose compute as GPU-as-a-Service (GPUaaS) with minimal configuration.
+Using the Infrastructure Control Plane, DevOps and IT teams can manage and optimize GPU resources to ensure high performance and cost efficiency.
+
+#### Features
+- **Resource Management:** Automates the allocation and management of GPU resources.
+- **Workload Autoscaling:** Scales GPU resources automatically based on workload demands.
+- **Monitoring and Logging:** Provides comprehensive monitoring and logging of GPU utilization and performance.
+- **Cost Optimization:** Consolidates cloud and on-premises compute into a seamless GPUaaS offering.
+- **Deployment Flexibility:** Runs workloads on both cloud and on-premises compute.
+
+
+
+
+---
+
+## AI Development Center
+The AI Development Center offers a robust environment for developing, training, and testing AI models. It is designed to be cloud and on-premises agnostic, providing flexibility in deployment.
+
+#### Features
+- **Integrated Development Environment:** A comprehensive IDE for training, testing, and debugging AI models.
+- **Model Training:** Scalable and distributed model training and hyperparameter optimization.
+- **Data Management:** Tools for data preprocessing, management, and versioning.
+- **Experiment Tracking:** Track metrics, artifacts, and logs; manage versions; and compare results.
+- **Workflow Automation:** Build pipelines to formalize your workflows.
+
+
+
+
+---
+
+## GenAI App Engine
+The GenAI App Engine is designed to deploy large language models (LLMs) into GPU clusters and manage various AI workloads, including Retrieval-Augmented Generation (RAG) tasks. This layer also handles networking, authentication, and role-based access control (RBAC) for deployed services.
+
+#### Features
+- **LLM Deployment:** Seamlessly deploy LLMs into GPU clusters.
+- **RAG Workloads:** Efficiently manage and execute RAG workloads.
+- **Networking and Authentication:** Deploy GenAI applications through secure, authenticated network endpoints.
+- **RBAC:** Implement RBAC to control access to deployed services.
+
+
+
+
+---
+
+## Getting Started
+To begin using ClearML, follow these steps:
+1. **Set Up Infrastructure Control Plane:** Allocate and manage your GPU resources.
+2. **Develop AI Models:** Use the AI Development Center to develop and train your models.
+3. **Deploy AI Models:** Deploy your models using the GenAI App Engine.
+
+For detailed instructions on each step, refer to the respective sections in this documentation.
+
+---
+
+## Support
+For feature requests or bug reports, see ClearML on [GitHub](https://github.com/clearml/clearml/issues).
+
+If you have any questions, join the discussion on the **ClearML** [Slack channel](https://joinslack.clear.ml), or tag your questions on [Stack Overflow](https://stackoverflow.com/questions/tagged/clearml) with the **clearml** tag.
+
+Lastly, you can always find us at [support@clearml.ai](mailto:support@clearml.ai?subject=ClearML).
\ No newline at end of file
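Steps 2 and 3 of the Getting Started list can be sketched in a few lines of SDK code (the project and queue names are assumptions; `task.execute_remotely(queue_name='default', exit_process=True)` is the pattern the integration pages above use to hand work to agent-managed compute):

```python
from clearml import Task

# Develop: initialize tracking; ClearML auto-logs supported frameworks
task = Task.init(project_name="demo", task_name="first experiment")

# ... your training code goes here ...

# Deploy to compute: re-queue this same task onto a clearml-agent worker
# managed by the Infrastructure Control Plane
task.execute_remotely(queue_name="default", exit_process=True)
```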
diff --git a/docs/pipelines/pipelines.md b/docs/pipelines/pipelines.md
index 1785c34f..2c0e742d 100644
--- a/docs/pipelines/pipelines.md
+++ b/docs/pipelines/pipelines.md
@@ -12,7 +12,8 @@ products such as artifacts and parameters.
When run, the controller will sequentially launch the pipeline steps. The pipeline logic and steps
can be executed locally, or on any machine using the [clearml-agent](../clearml_agent.md).
-
+
+
The [Pipeline Run](../webapp/pipelines/webapp_pipeline_viewing.md) page in the web UI displays the pipeline's structure
in terms of executed steps and their status, as well as the run's configuration parameters and output. See [pipeline UI](../webapp/pipelines/webapp_pipeline_page.md)
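A rough sketch of a controller with two sequential steps (all project, task, and step names below are placeholder assumptions):

```python
from clearml import PipelineController

# The controller sequentially launches steps cloned from existing tasks
pipe = PipelineController(name="demo pipeline", project="examples", version="1.0")

pipe.add_step(name="prepare",
              base_task_project="examples", base_task_name="prepare data")
pipe.add_step(name="train", parents=["prepare"],  # runs after "prepare" completes
              base_task_project="examples", base_task_name="train model")

# Run the pipeline logic on this machine; steps can also be queued
# for remote execution by clearml-agent workers
pipe.start_locally()
```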
diff --git a/docs/remote_session.md b/docs/remote_session.md
index b6c2fc85..8d104534 100644
--- a/docs/remote_session.md
+++ b/docs/remote_session.md
@@ -16,7 +16,7 @@ meets resource needs:
* [ClearML Session CLI](apps/clearml_session.md) - Launch an interactive JupyterLab, VS Code, or SSH session on a remote machine:
* Automatically store and sync your [interactive session workspace](apps/clearml_session.md#storing-and-synchronizing-workspace)
* Replicate a previously executed task's execution environment and [interactively execute and debug](apps/clearml_session.md#starting-a-debugging-session) it on a remote session
- * Develop directly inside your Kubernetes pods ([see ClearML Agent](clearml_agent/clearml_agent_deployment.md#kubernetes))
+ * Develop directly inside your Kubernetes pods ([see ClearML Agent](clearml_agent/clearml_agent_deployment_k8s.md))
* And more!
* GUI Applications (available under ClearML Enterprise Plan) - These apps provide access to remote machines over a
secure and encrypted SSH connection, allowing you to work in a remote environment using your preferred development
diff --git a/docs/webapp/applications/apps_llama_deployment.md b/docs/webapp/applications/apps_llama_deployment.md
index 1f965d1e..596586b3 100644
--- a/docs/webapp/applications/apps_llama_deployment.md
+++ b/docs/webapp/applications/apps_llama_deployment.md
@@ -81,6 +81,6 @@ values from the file, which can be modified before launching the app instance

-
+
\ No newline at end of file
diff --git a/docs/webapp/webapp_exp_track_visual.md b/docs/webapp/webapp_exp_track_visual.md
index 496daa47..010e68ce 100644
--- a/docs/webapp/webapp_exp_track_visual.md
+++ b/docs/webapp/webapp_exp_track_visual.md
@@ -93,7 +93,7 @@ using to set up an environment (`pip` or `conda`) are available. Select which re
### Container
The Container section lists the following information:
-* Image - a pre-configured container that ClearML Agent will use to remotely execute this task (see [Building Docker containers](../clearml_agent/clearml_agent_docker.md))
+* Image - a pre-configured container that ClearML Agent will use to remotely execute this task (see [Building Docker containers](../clearml_agent/clearml_agent_docker_exec))
* Arguments - add container arguments
* Setup shell script - a bash script to be executed inside the container before setting up the task's environment
diff --git a/docs/webapp/webapp_exp_tuning.md b/docs/webapp/webapp_exp_tuning.md
index 6c6ddd96..bf592b2b 100644
--- a/docs/webapp/webapp_exp_tuning.md
+++ b/docs/webapp/webapp_exp_tuning.md
@@ -72,7 +72,7 @@ and/or Reset functions.
#### Default Container
-Select a pre-configured container that the [ClearML Agent](../clearml_agent.md) will use to remotely execute this task (see [Building Docker containers](../clearml_agent/clearml_agent_docker.md)).
+Select a pre-configured container that the [ClearML Agent](../clearml_agent.md) will use to remotely execute this task (see [Building Docker containers](../clearml_agent/clearml_agent_docker_exec)).
**To add, change, or delete a default container:**
diff --git a/docusaurus.config.js b/docusaurus.config.js
index d78ff414..3ba29c30 100644
--- a/docusaurus.config.js
+++ b/docusaurus.config.js
@@ -82,54 +82,59 @@ module.exports = {
},
items: [
{
- to: '/docs',
- label: 'Docs',
+ to: '/docs/',
+ label: 'Overview',
position: 'left',
},
{
- to:'/docs/hyperdatasets/overview',
- label: 'Hyper-Datasets',
- position: 'left',
+ to: '/docs/deploying_clearml/clearml_server',
+ label: 'Setup',
+ position: 'left'
},
- // {to: 'tutorials', label: 'Tutorials', position: 'left'},
- // Please keep GitHub link to the right for consistency.
- {to: '/docs/guides', label: 'Examples', position: 'left'},
- //{to: '/docs/references', label: 'API', position: 'left'},
{
- label: 'References',
+ to: '/docs/getting_started/auto_log_exp',
+ label: 'Using ClearML',
+ position: 'left'
+ },
+ {
+ label: 'Developer Center',
position: 'left', // or 'right'
items: [
{
- label: 'SDK',
+ label: 'ClearML Basics',
+ to: '/docs/fundamentals/projects',
+ },
+ {
+ label: 'References',
to: '/docs/references/sdk/task',
},
{
- label: 'ClearML Agent',
- to: '/docs/clearml_agent/clearml_agent_ref',
+ label: 'Best Practices',
+ to: '/docs/getting_started/ds/best_practices'
},
{
- label: 'Server API',
- to: '/docs/references/api',
+ label: 'Tutorials',
+ to: '/docs/guides',
},
{
- label: 'Hyper-Datasets',
- to: '/docs/references/hyperdataset',
+ label: 'Integrations',
+ to: '/docs/integrations'
+ },
+ {
+ label: 'FAQ',
+ to: '/docs/faq',
},
-
{
label: 'Release Notes',
to: '/docs/release_notes/clearml_server/open_source/ver_2_0',
},
- {
- label: 'Community Resources',
- to: '/docs/community',
- }
+
],
},
{
- label: 'FAQ',
+ label: 'Community Resources',
position: 'left', // or 'right'
- to: '/docs/faq'
+ to: '/docs/community',
},
{
href: 'https://joinslack.clear.ml',
diff --git a/sidebars.js b/sidebars.js
index 7ae2d5a8..6a4e5ede 100644
--- a/sidebars.js
+++ b/sidebars.js
@@ -9,293 +9,125 @@
module.exports = {
mainSidebar: [
- {'Getting Started': ['getting_started/main', {
- 'Where do I start?': [{'Data Scientists': ['getting_started/ds/ds_first_steps', 'getting_started/ds/ds_second_steps', 'getting_started/ds/best_practices']},
- {'MLOps and LLMOps': ['getting_started/mlops/mlops_first_steps','getting_started/mlops/mlops_second_steps','getting_started/mlops/mlops_best_practices']}]
- }, 'getting_started/architecture', {'Video Tutorials':
- [
- 'getting_started/video_tutorials/quick_introduction',
- 'getting_started/video_tutorials/core_component_overview',
- 'getting_started/video_tutorials/experiment_manager_hands-on',
- 'getting_started/video_tutorials/experiment_management_best_practices',
- 'getting_started/video_tutorials/agent_remote_execution_and_automation',
- 'getting_started/video_tutorials/hyperparameter_optimization',
- 'getting_started/video_tutorials/pipelines_from_code',
- 'getting_started/video_tutorials/pipelines_from_tasks',
- 'getting_started/video_tutorials/clearml-data',
- 'getting_started/video_tutorials/the_clearml_autoscaler',
- 'getting_started/video_tutorials/hyperdatasets_data_versioning',
+ {
+ type: 'doc',
+ id: 'overview',
+ label: 'ClearML at a Glance',
+ },
+ {
+ type: 'category',
+ collapsible: true,
+ label: 'Infrastructure Control Plane (GPUaaS)',
+ items: [
+ 'fundamentals/agents_and_queues',
+ 'clearml_agent',
+ 'clearml_agent/clearml_agent_dynamic_gpus',
+ 'clearml_agent/clearml_agent_fractional_gpus',
+ 'cloud_autoscaling/autoscaling_overview',
+ 'remote_session'
+ ]
+ },
+ {
+ type: 'category',
+ collapsible: true,
+ label: 'AI Development Center',
+ items: [
+ 'clearml_sdk/clearml_sdk',
+ 'pipelines/pipelines',
+ 'clearml_data/clearml_data',
+ 'hyper_datasets',
+ 'model_registry',
+ ]
+ },
+ {
+ type: 'category',
+ collapsible: true,
+ label: 'GenAI App Engine',
+ items: [
+ 'deploying_clearml/enterprise_deploy/appgw',
+ 'build_interactive_models',
+ 'deploying_models',
+ 'custom_apps'
+ ]
+ },
+ ],
+ usecaseSidebar: [
+ /*'getting_started/main',*/
+ 'getting_started/auto_log_exp',
+ 'getting_started/track_tasks',
+ 'getting_started/reproduce_tasks',
+ 'getting_started/logging_using_artifacts',
+ 'getting_started/data_management',
+ 'getting_started/remote_execution',
+ 'getting_started/building_pipelines',
+ 'hpo',
+ 'clearml_agent/clearml_agent_docker_exec',
+ 'clearml_agent/clearml_agent_base_docker',
+ 'clearml_agent/clearml_agent_scheduling',
+ {"Deploying Model Endpoints": [
{
- 'Hands-on MLOps Tutorials':[
- 'getting_started/video_tutorials/hands-on_mlops_tutorials/how_clearml_is_used_by_a_data_scientist',
- 'getting_started/video_tutorials/hands-on_mlops_tutorials/how_clearml_is_used_by_an_mlops_engineer',
- 'getting_started/video_tutorials/hands-on_mlops_tutorials/ml_ci_cd_using_github_actions_and_clearml'
+ type: 'category',
+ collapsible: true,
+ collapsed: true,
+ label: 'ClearML Serving',
+ link: {type: 'doc', id: 'clearml_serving/clearml_serving'},
+ items: ['clearml_serving/clearml_serving_setup', 'clearml_serving/clearml_serving_cli', 'clearml_serving/clearml_serving_tutorial']
+ },
+ {
+ type: 'category',
+ collapsible: true,
+ collapsed: true,
+ label: 'Model Launchers',
+ items: [
+ 'webapp/applications/apps_embed_model_deployment',
+ 'webapp/applications/apps_model_deployment',
+ 'webapp/applications/apps_llama_deployment'
]
- }
- ]}]},
- {'ClearML Fundamentals': [
- 'fundamentals/projects', 'fundamentals/task', 'fundamentals/hyperparameters',
- 'fundamentals/artifacts', 'fundamentals/models', 'fundamentals/logger', 'fundamentals/agents_and_queues',
- 'fundamentals/hpo'
- ]
- },
- {
- type: 'category',
- collapsible: true,
- collapsed: true,
- label: 'ClearML SDK',
- link: {type: 'doc', id: 'clearml_sdk/clearml_sdk'},
- items: ['clearml_sdk/task_sdk', 'clearml_sdk/model_sdk', 'clearml_sdk/apiclient_sdk']
- },
- {
- type: 'category',
- collapsible: true,
- collapsed: true,
- label: 'ClearML Agent',
- link: {type: 'doc', id: 'clearml_agent'},
- items: ['clearml_agent/clearml_agent_setup', 'clearml_agent/clearml_agent_deployment',
- 'clearml_agent/clearml_agent_execution_env', 'clearml_agent/clearml_agent_env_caching',
- 'clearml_agent/clearml_agent_dynamic_gpus', 'clearml_agent/clearml_agent_fractional_gpus',
- 'clearml_agent/clearml_agent_services_mode', 'clearml_agent/clearml_agent_docker',
- 'clearml_agent/clearml_agent_scheduling']
- },
- {
- type: 'category',
- collapsible: true,
- collapsed: true,
- label: 'Cloud Autoscaling',
- link: {type: 'doc', id: 'cloud_autoscaling/autoscaling_overview'},
- items: [
- {'Autoscaler Apps': [
- 'webapp/applications/apps_aws_autoscaler',
- 'webapp/applications/apps_gcp_autoscaler',
- ]
- }
- ]
- },
- {
- type: 'category',
- collapsible: true,
- collapsed: true,
- label: 'ClearML Pipelines',
- link: {type: 'doc', id: 'pipelines/pipelines'},
- items: [{"Building Pipelines":
- ['pipelines/pipelines_sdk_tasks', 'pipelines/pipelines_sdk_function_decorators']
- }
- ]
- },
- {
- type: 'category',
- collapsible: true,
- collapsed: true,
- label: 'ClearML Data',
- link: {type: 'doc', id: 'clearml_data/clearml_data'},
- items: ['clearml_data/clearml_data_cli', 'clearml_data/clearml_data_sdk', 'clearml_data/best_practices',
- {
- type: 'category',
- collapsible: true,
- collapsed: true,
- label: 'Workflows',
- link: {type: 'doc', id: 'clearml_data/data_management_examples/workflows'},
- items: [
- 'clearml_data/data_management_examples/data_man_simple',
- 'clearml_data/data_management_examples/data_man_folder_sync',
- 'clearml_data/data_management_examples/data_man_cifar_classification',
- 'clearml_data/data_management_examples/data_man_python'
- ]
- },
- ]
- },
- 'hyper_datasets',
- 'model_registry',
- {
- type: 'category',
- collapsible: true,
- collapsed: true,
- label: 'Remote IDE',
- link: {type: 'doc', id: 'remote_session'},
- items: [
- 'apps/clearml_session',
- {type: 'ref', id: 'webapp/applications/apps_ssh_session'},
- {type: 'ref', id: 'webapp/applications/apps_jupyter_lab'},
- {type: 'ref', id: 'webapp/applications/apps_vscode'}
- ]
- },
- {
- type: 'category',
- collapsible: true,
- collapsed: true,
- label: 'ClearML Serving',
- link: {type: 'doc', id: 'clearml_serving/clearml_serving'},
- items: ['clearml_serving/clearml_serving_setup', 'clearml_serving/clearml_serving_cli', 'clearml_serving/clearml_serving_tutorial']
- },
- {'CLI Tools': [
- 'apps/clearml_task',
- {type: 'ref', id: 'clearml_agent/clearml_agent_ref'},
- {type: 'ref', id: 'clearml_data/clearml_data_cli'},
- 'apps/clearml_param_search',
- {type: 'ref', id: 'apps/clearml_session'},
- {type: 'ref', id: 'clearml_serving/clearml_serving_cli'},
- ]
- },
- {'Integrations': [
- 'integrations/autokeras',
- 'integrations/catboost',
- 'integrations/click',
- 'integrations/fastai',
- {"Hugging Face": ['integrations/transformers', 'integrations/accelerate']},
- 'integrations/hydra', 'integrations/jsonargparse',
- 'integrations/keras', 'integrations/keras_tuner',
- 'integrations/langchain',
- 'integrations/lightgbm', 'integrations/matplotlib',
- 'integrations/megengine', 'integrations/monai', 'integrations/tao',
- {"OpenMMLab":['integrations/mmcv', 'integrations/mmengine']},
- 'integrations/optuna',
- 'integrations/python_fire', 'integrations/pytorch',
- 'integrations/ignite',
- 'integrations/pytorch_lightning',
- 'integrations/scikit_learn', 'integrations/seaborn',
- 'integrations/splunk',
- 'integrations/tensorboard', 'integrations/tensorboardx', 'integrations/tensorflow',
- 'integrations/xgboost', 'integrations/yolov5', 'integrations/yolov8'
- ]
- },
- 'integrations/storage',
- {
- type: 'category',
- collapsible: true,
- collapsed: true,
- label: 'WebApp',
- link: {type: 'doc', id: 'webapp/webapp_overview'},
- items: [
- 'webapp/webapp_home',
- {
- 'Projects': [
- 'webapp/webapp_projects_page',
- 'webapp/webapp_project_overview',
- {
- 'Tasks': ['webapp/webapp_exp_table', 'webapp/webapp_exp_track_visual', 'webapp/webapp_exp_reproducing', 'webapp/webapp_exp_tuning',
- 'webapp/webapp_exp_comparing']
- },
- {
- 'Models': ['webapp/webapp_model_table', 'webapp/webapp_model_viewing', 'webapp/webapp_model_comparing']
- },
- 'webapp/webapp_exp_sharing'
- ]
- },
- {
- 'Datasets':[
- 'webapp/datasets/webapp_dataset_page', 'webapp/datasets/webapp_dataset_viewing'
- ]
- },
- {
- 'Pipelines':[
- 'webapp/pipelines/webapp_pipeline_page', 'webapp/pipelines/webapp_pipeline_table', 'webapp/pipelines/webapp_pipeline_viewing'
- ]
- },
- 'webapp/webapp_model_endpoints',
- 'webapp/webapp_reports',
- {
- type: 'category',
- collapsible: true,
- collapsed: true,
- label: 'Orchestration',
- link: {type: 'doc', id: 'webapp/webapp_workers_queues'},
- items: ['webapp/webapp_orchestration_dash', 'webapp/resource_policies']
- },
- {
- type: 'category',
- collapsible: true,
- collapsed: true,
- label: 'ClearML Applications',
- link: {type: 'doc', id: 'webapp/applications/apps_overview'},
- items: [
- {
- "General": [
- 'webapp/applications/apps_hpo',
- 'webapp/applications/apps_dashboard',
- 'webapp/applications/apps_task_scheduler',
- 'webapp/applications/apps_trigger_manager',
- ]
- },
- {
- "AI Dev": [
- 'webapp/applications/apps_ssh_session',
- 'webapp/applications/apps_jupyter_lab',
- 'webapp/applications/apps_vscode',
- ]
- },
- {
- "UI Dev": [
- 'webapp/applications/apps_gradio',
- 'webapp/applications/apps_streamlit'
- ]
- },
- {
- "Deploy": [
- 'webapp/applications/apps_embed_model_deployment',
- 'webapp/applications/apps_model_deployment',
- 'webapp/applications/apps_llama_deployment'
- ]
- },
- ]
- },
- {
- type: 'category',
- collapsible: true,
- collapsed: true,
- label: 'Settings',
- link: {type: 'doc', id: 'webapp/settings/webapp_settings_overview'},
- items: ['webapp/settings/webapp_settings_profile',
- 'webapp/settings/webapp_settings_admin_vaults', 'webapp/settings/webapp_settings_users',
- 'webapp/settings/webapp_settings_access_rules', 'webapp/settings/webapp_settings_id_providers',
- 'webapp/settings/webapp_settings_resource_configs', 'webapp/settings/webapp_settings_usage_billing',
- 'webapp/settings/webapp_settings_storage_credentials'
- ]
- },
- ]
- },
- {
- type: 'category',
- collapsible: true,
- collapsed: true,
- label: 'Configuring ClearML',
- link: {type: 'doc', id: 'configs/configuring_clearml'},
- items: ['configs/clearml_conf', 'configs/env_vars']
- },
- {'User Management': [
- 'user_management/user_groups',
- 'user_management/access_rules',
- 'user_management/admin_vaults',
- 'user_management/identity_providers'
- ]
- },
- {
- type: 'category',
- collapsible: true,
- collapsed: true,
- label: 'ClearML Server',
- link: {type: 'doc', id: 'deploying_clearml/clearml_server'},
- items: [
- {'Deploying ClearML Server':
- ['deploying_clearml/clearml_server_aws_ec2_ami', 'deploying_clearml/clearml_server_gcp',
- 'deploying_clearml/clearml_server_linux_mac', 'deploying_clearml/clearml_server_win',
- 'deploying_clearml/clearml_server_kubernetes_helm']
- },
- {'Upgrading ClearML Server':
- ['deploying_clearml/upgrade_server_aws_ec2_ami','deploying_clearml/upgrade_server_gcp',
- 'deploying_clearml/upgrade_server_linux_mac', 'deploying_clearml/upgrade_server_win',
- 'deploying_clearml/upgrade_server_kubernetes_helm',
- 'deploying_clearml/clearml_server_es7_migration', 'deploying_clearml/clearml_server_mongo44_migration']
- },
- 'deploying_clearml/clearml_server_config', 'deploying_clearml/clearml_server_security'
- ]
- },
-
- //'Comments': ['Notes'],
-
-
-
+ }
+ ]},
+ {"Launching a Remote IDE": [
+ 'apps/clearml_session',
+ {type: 'ref', id: 'webapp/applications/apps_ssh_session'},
+ {type: 'ref', id: 'webapp/applications/apps_jupyter_lab'},
+ {type: 'ref', id: 'webapp/applications/apps_vscode'}
+ ]},
+ {"Building Interactive Model Demos": [
+ {type: 'ref', id: 'webapp/applications/apps_gradio'},
+ {type: 'ref', id: 'webapp/applications/apps_streamlit'},
+ ]},
+ {"Automating Task Execution": [
+ {type: 'ref', id: 'webapp/applications/apps_task_scheduler'},
+ {type: 'ref', id: 'webapp/applications/apps_trigger_manager'},
+ ]},
+ {"Monitoring Project Progress": [
+ {type: 'ref', id: 'webapp/applications/apps_dashboard'},
+ ]},
+ ],
+ integrationsSidebar: [
+ {
+ type: 'doc',
+ label: 'Overview',
+ id: 'integrations/integrations',
+ },
+ 'integrations/autokeras',
+ 'integrations/catboost',
+ 'integrations/click',
+ 'integrations/fastai',
+ {"Hugging Face": ['integrations/transformers', 'integrations/accelerate']},
+ 'integrations/hydra', 'integrations/jsonargparse',
+ 'integrations/keras', 'integrations/keras_tuner',
+ 'integrations/langchain',
+ 'integrations/lightgbm', 'integrations/matplotlib',
+ 'integrations/megengine', 'integrations/monai', 'integrations/tao',
+ {"OpenMMLab":['integrations/mmcv', 'integrations/mmengine']},
+ 'integrations/optuna',
+ 'integrations/python_fire', 'integrations/pytorch',
+ 'integrations/ignite',
+ 'integrations/pytorch_lightning',
+ 'integrations/scikit_learn', 'integrations/seaborn',
+ 'integrations/splunk',
+ 'integrations/tensorboard', 'integrations/tensorboardx', 'integrations/tensorflow',
+ 'integrations/xgboost', 'integrations/yolov5', 'integrations/yolov8'
],
guidesSidebar: [
'guides/guidemain',
@@ -304,6 +136,7 @@ module.exports = {
{'ClearML Task': ['guides/clearml-task/clearml_task_tutorial']},
{'ClearML Agent': ['guides/clearml_agent/executable_exp_containers', 'guides/clearml_agent/exp_environment_containers', 'guides/clearml_agent/reproduce_exp']},
{'Datasets': ['clearml_data/data_management_examples/data_man_cifar_classification', 'clearml_data/data_management_examples/data_man_python']},
+ {id: 'hyperdatasets/code_examples', type: 'doc', label: 'Hyper-Datasets'},
{'Distributed': ['guides/distributed/distributed_pytorch_example', 'guides/distributed/subprocess_example']},
{'Docker': ['guides/docker/extra_docker_shell_script']},
{'Frameworks': [
@@ -342,7 +175,6 @@ module.exports = {
{'Offline Mode':['guides/set_offline']},
{'Optimization': ['guides/optimization/hyper-parameter-optimization/examples_hyperparam_opt']},
{'Pipelines': ['guides/pipeline/pipeline_controller', 'guides/pipeline/pipeline_decorator', 'guides/pipeline/pipeline_functions']},
-
{'Reporting': ['guides/reporting/explicit_reporting','guides/reporting/3d_plots_reporting', 'guides/reporting/artifacts', 'guides/reporting/using_artifacts', 'guides/reporting/clearml_logging_example', 'guides/reporting/html_reporting',
'guides/reporting/hyper_parameters', 'guides/reporting/image_reporting', 'guides/reporting/manual_matplotlib_reporting', 'guides/reporting/media_reporting',
'guides/reporting/model_config', 'guides/reporting/pandas_reporting', 'guides/reporting/plotly_reporting',
@@ -352,6 +184,112 @@ module.exports = {
{'Web UI': ['guides/ui/building_leader_board','guides/ui/tuning_exp']}
],
+ knowledgeSidebar: [
+ {'Fundamentals': [
+ 'fundamentals/projects',
+ 'fundamentals/task',
+ 'fundamentals/hyperparameters',
+ 'fundamentals/artifacts',
+ 'fundamentals/models',
+ 'fundamentals/logger',
+ ]},
+ {
+ type: 'category',
+ collapsible: true,
+ collapsed: true,
+ label: 'ClearML SDK',
+ link: {type: 'doc', id: 'clearml_sdk/clearml_sdk'},
+ items: [
+ 'clearml_sdk/task_sdk',
+ 'clearml_sdk/model_sdk',
+ 'hyperdatasets/task',
+ 'clearml_sdk/hpo_sdk',
+ 'clearml_sdk/apiclient_sdk'
+ ]
+ },
+ {
+ type: 'category',
+ collapsible: true,
+ collapsed: true,
+ label: 'ClearML Pipelines',
+ link: {type: 'doc', id: 'pipelines/pipelines'},
+ items: [{
+ "Building Pipelines": [
+ 'pipelines/pipelines_sdk_tasks',
+ 'pipelines/pipelines_sdk_function_decorators'
+ ]
+ }]
+ },
+ {
+ type: 'category',
+ collapsible: true,
+ collapsed: true,
+ label: 'ClearML Data',
+ link: {type: 'doc', id: 'clearml_data/clearml_data'},
+ items: [
+ 'clearml_data/clearml_data_cli',
+ 'clearml_data/clearml_data_sdk',
+ {
+ type: 'category',
+ collapsible: true,
+ collapsed: true,
+ label: 'Workflows',
+ link: {type: 'doc', id: 'clearml_data/data_management_examples/workflows'},
+ items: [
+ 'clearml_data/data_management_examples/data_man_simple',
+ 'clearml_data/data_management_examples/data_man_folder_sync',
+ 'clearml_data/data_management_examples/data_man_cifar_classification',
+ 'clearml_data/data_management_examples/data_man_python'
+ ]
+ },
+ ]
+ },
+ {
+ type: 'category',
+ collapsible: true,
+ collapsed: true,
+ label: 'Hyper-Datasets',
+ link: {type: 'doc', id: 'hyperdatasets/overview'},
+ items: [
+ 'hyperdatasets/dataset',
+ {
+ type: 'category',
+ collapsible: true,
+ collapsed: true,
+ label: 'Frames',
+ link: {type: 'doc', id: 'hyperdatasets/frames'},
+ items: [
+ 'hyperdatasets/single_frames',
+ 'hyperdatasets/frame_groups',
+ 'hyperdatasets/sources',
+ 'hyperdatasets/annotations',
+ 'hyperdatasets/masks',
+ 'hyperdatasets/previews',
+ 'hyperdatasets/custom_metadata'
+ ]
+ },
+ 'hyperdatasets/dataviews',
+ ]
+ },
+ {'Video Tutorials': [
+ 'getting_started/video_tutorials/quick_introduction',
+ 'getting_started/video_tutorials/core_component_overview',
+ 'getting_started/video_tutorials/experiment_manager_hands-on',
+ 'getting_started/video_tutorials/experiment_management_best_practices',
+ 'getting_started/video_tutorials/agent_remote_execution_and_automation',
+ 'getting_started/video_tutorials/hyperparameter_optimization',
+ 'getting_started/video_tutorials/pipelines_from_code',
+ 'getting_started/video_tutorials/pipelines_from_tasks',
+ 'getting_started/video_tutorials/clearml-data',
+ 'getting_started/video_tutorials/the_clearml_autoscaler',
+ 'getting_started/video_tutorials/hyperdatasets_data_versioning',
+ {'Hands-on MLOps Tutorials': [
+ 'getting_started/video_tutorials/hands-on_mlops_tutorials/how_clearml_is_used_by_a_data_scientist',
+ 'getting_started/video_tutorials/hands-on_mlops_tutorials/how_clearml_is_used_by_an_mlops_engineer',
+ 'getting_started/video_tutorials/hands-on_mlops_tutorials/ml_ci_cd_using_github_actions_and_clearml'
+ ]}
+ ]},
+ ],
rnSidebar: [
{'Server': [
{
@@ -383,7 +321,7 @@ module.exports = {
'release_notes/clearml_server/enterprise/ver_3_24',
{
'Older Versions': [
- 'release_notes/clearml_server/enterprise/ver_3_23','release_notes/clearml_server/enterprise/ver_3_22',
+ 'release_notes/clearml_server/enterprise/ver_3_23', 'release_notes/clearml_server/enterprise/ver_3_22',
'release_notes/clearml_server/enterprise/ver_3_21', 'release_notes/clearml_server/enterprise/ver_3_20'
]
}
@@ -456,15 +394,18 @@ module.exports = {
]
}
],
- sdkSidebar: [
+ referenceSidebar: [
+ {'SDK': [
'references/sdk/task',
'references/sdk/logger',
{'Model': ['references/sdk/model_model',
'references/sdk/model_inputmodel', 'references/sdk/model_outputmodel',]},
'references/sdk/storage',
'references/sdk/dataset',
- {'Pipeline': ['references/sdk/automation_controller_pipelinecontroller',
- 'references/sdk/automation_job_clearmljob']},
+ {'Pipeline': [
+ 'references/sdk/automation_controller_pipelinecontroller',
+ 'references/sdk/automation_job_clearmljob'
+ ]},
'references/sdk/scheduler',
'references/sdk/trigger',
{'HyperParameter Optimization': [
@@ -477,59 +418,294 @@ module.exports = {
'references/sdk/hpo_parameters_uniformintegerparameterrange',
'references/sdk/hpo_parameters_uniformparameterrange',
'references/sdk/hpo_parameters_parameterset',
- ]},
- ],
- clearmlAgentSidebar: [
- 'clearml_agent/clearml_agent_ref', 'clearml_agent/clearml_agent_env_var'
- ],
- hyperdatasetsSidebar: [
- 'hyperdatasets/overview',
- {'Frames': [
- 'hyperdatasets/frames',
- 'hyperdatasets/single_frames',
- 'hyperdatasets/frame_groups',
- 'hyperdatasets/sources',
- 'hyperdatasets/annotations',
- 'hyperdatasets/masks',
- 'hyperdatasets/previews',
- 'hyperdatasets/custom_metadata'
]},
- 'hyperdatasets/dataset',
- 'hyperdatasets/dataviews',
- 'hyperdatasets/task',
- {'WebApp': [
- {'Projects': [
- 'hyperdatasets/webapp/webapp_dataviews', 'hyperdatasets/webapp/webapp_exp_track_visual',
- 'hyperdatasets/webapp/webapp_exp_modifying', 'hyperdatasets/webapp/webapp_exp_comparing',
- ]
- },
- {'Datasets': [
- 'hyperdatasets/webapp/webapp_datasets',
- 'hyperdatasets/webapp/webapp_datasets_versioning',
- 'hyperdatasets/webapp/webapp_datasets_frames'
- ]
- },
- 'hyperdatasets/webapp/webapp_annotator'
+ {'Enterprise Hyper-Datasets': [
+ {'Hyper-Dataset': [
+ 'references/hyperdataset/hyperdataset',
+ 'references/hyperdataset/hyperdatasetversion'
+ ]},
+ {'DataFrame': [
+ 'references/hyperdataset/singleframe',
+ 'references/hyperdataset/framegroup',
+ 'references/hyperdataset/annotation',
+ ]},
+ 'references/hyperdataset/dataview',
+ ]},
+ ]},
+ {'CLI Tools': [
+ 'apps/clearml_task',
+ {type: 'ref', id: 'clearml_data/clearml_data_cli'},
+ 'apps/clearml_param_search',
+ {type: 'ref', id: 'apps/clearml_session'},
+ {type: 'ref', id: 'clearml_serving/clearml_serving_cli'},
+ ] },
+ {'ClearML Agent': [
+ 'clearml_agent/clearml_agent_ref', 'clearml_agent/clearml_agent_env_var'
+ ]},
+ {
+ type: 'category',
+ collapsible: true,
+ collapsed: true,
+ label: 'Client Configuration',
+ link: {type: 'doc', id: 'configs/configuring_clearml'},
+ items: [
+ 'configs/clearml_conf',
+ 'configs/env_vars'
+ ]
+ },
+ {'Server API': [
+ 'references/api/index',
+ 'references/api/definitions',
+ 'references/api/login',
+ 'references/api/debug',
+ 'references/api/projects',
+ 'references/api/queues',
+ 'references/api/workers',
+ 'references/api/events',
+ 'references/api/models',
+ 'references/api/tasks',
+ ]},
+ {
+ type: 'category',
+ collapsible: true,
+ collapsed: true,
+ label: 'WebApp',
+ link: {type: 'doc', id: 'webapp/webapp_overview'},
+ items: [
+ 'webapp/webapp_home',
+ {'Projects': [
+ 'webapp/webapp_projects_page',
+ 'webapp/webapp_project_overview',
+ {'Tasks': [
+ 'webapp/webapp_exp_table',
+ 'webapp/webapp_exp_track_visual',
+ 'webapp/webapp_exp_reproducing',
+ 'webapp/webapp_exp_tuning',
+ 'webapp/webapp_exp_comparing'
+ ]},
+ {'Models': [
+ 'webapp/webapp_model_table',
+ 'webapp/webapp_model_viewing',
+ 'webapp/webapp_model_comparing'
+ ]},
+ {'Dataviews': [
+ 'hyperdatasets/webapp/webapp_dataviews',
+ 'hyperdatasets/webapp/webapp_exp_track_visual',
+ 'hyperdatasets/webapp/webapp_exp_modifying',
+ 'hyperdatasets/webapp/webapp_exp_comparing'
+ ]},
+ 'webapp/webapp_exp_sharing'
+ ]},
+ {'Datasets': [
+ 'webapp/datasets/webapp_dataset_page',
+ 'webapp/datasets/webapp_dataset_viewing'
+ ]},
+ {'Hyper-Datasets': [
+ 'hyperdatasets/webapp/webapp_datasets',
+ 'hyperdatasets/webapp/webapp_datasets_versioning',
+ 'hyperdatasets/webapp/webapp_datasets_frames',
+ 'hyperdatasets/webapp/webapp_annotator'
+ ]},
+ {'Pipelines': [
+ 'webapp/pipelines/webapp_pipeline_page',
+ 'webapp/pipelines/webapp_pipeline_table',
+ 'webapp/pipelines/webapp_pipeline_viewing'
+ ]},
+ 'webapp/webapp_model_endpoints',
+ 'webapp/webapp_reports',
+ {
+ type: 'category',
+ collapsible: true,
+ collapsed: true,
+ label: 'Orchestration',
+ link: {type: 'doc', id: 'webapp/webapp_workers_queues'},
+ items: [
+ 'webapp/webapp_orchestration_dash',
+ {
+ type: 'category',
+ collapsible: true,
+ collapsed: true,
+ label: 'Autoscalers',
+ items: [
+ 'webapp/applications/apps_aws_autoscaler',
+ 'webapp/applications/apps_gcp_autoscaler',
+ ]
+ },
+ 'webapp/resource_policies'
+ ]
+ },
+ {
+ type: 'category',
+ collapsible: true,
+ collapsed: true,
+ label: 'ClearML Applications',
+ link: {type: 'doc', id: 'webapp/applications/apps_overview'},
+ items: [
+ {"General": [
+ 'webapp/applications/apps_hpo',
+ 'webapp/applications/apps_dashboard',
+ 'webapp/applications/apps_task_scheduler',
+ 'webapp/applications/apps_trigger_manager',
+ ]},
+ {"AI Dev": [
+ 'webapp/applications/apps_ssh_session',
+ 'webapp/applications/apps_jupyter_lab',
+ 'webapp/applications/apps_vscode',
+ ]},
+ {"UI Dev": [
+ 'webapp/applications/apps_gradio',
+ 'webapp/applications/apps_streamlit'
+ ]},
+ {"Deploy": [
+ 'webapp/applications/apps_embed_model_deployment',
+ 'webapp/applications/apps_model_deployment',
+ 'webapp/applications/apps_llama_deployment'
+ ]},
+ ]
+ },
+ {
+ type: 'category',
+ collapsible: true,
+ collapsed: true,
+ label: 'Settings',
+ link: {type: 'doc', id: 'webapp/settings/webapp_settings_overview'},
+ items: [
+ 'webapp/settings/webapp_settings_profile',
+ 'webapp/settings/webapp_settings_admin_vaults',
+ 'webapp/settings/webapp_settings_users',
+ 'webapp/settings/webapp_settings_access_rules',
+ 'webapp/settings/webapp_settings_id_providers',
+ 'webapp/settings/webapp_settings_resource_configs',
+ 'webapp/settings/webapp_settings_usage_billing',
+ 'webapp/settings/webapp_settings_storage_credentials'
+ ]
+ },
]
},
- 'hyperdatasets/code_examples'
],
- sdkHyperDataset: [
- {'Hyper-Dataset': ['references/hyperdataset/hyperdataset', 'references/hyperdataset/hyperdatasetversion']},
- {'DataFrame': ['references/hyperdataset/singleframe',
- 'references/hyperdataset/framegroup', 'references/hyperdataset/annotation',]},
- 'references/hyperdataset/dataview',
+ installationSidebar: [
+ 'clearml_sdk/clearml_sdk_setup',
+ {
+ type: 'category',
+ collapsible: true,
+ collapsed: true,
+ label: 'ClearML Agent',
+ items: [
+ 'clearml_agent/clearml_agent_setup',
+ {
+ 'Deployment': [
+ 'clearml_agent/clearml_agent_deployment_bare_metal',
+ 'clearml_agent/clearml_agent_deployment_k8s',
+ 'clearml_agent/clearml_agent_deployment_slurm',
+ ]
+ },
+ 'clearml_agent/clearml_agent_execution_env',
+ 'clearml_agent/clearml_agent_env_caching',
+ 'clearml_agent/clearml_agent_services_mode',
+ ]
+ },
+ {
+ type: 'doc',
+ label: 'Configuring Client Storage Access',
+ id: 'integrations/storage',
+ },
+ {
+ type: 'category',
+ collapsible: true,
+ collapsed: true,
+ label: 'Open Source Server',
+ link: {type: 'doc', id: 'deploying_clearml/clearml_server'},
+ items: [
+ {'Deployment Options': [
+ 'deploying_clearml/clearml_server_aws_ec2_ami',
+ 'deploying_clearml/clearml_server_gcp',
+ 'deploying_clearml/clearml_server_linux_mac',
+ 'deploying_clearml/clearml_server_win',
+ 'deploying_clearml/clearml_server_kubernetes_helm'
+ ]},
+ 'deploying_clearml/clearml_server_config',
+ 'deploying_clearml/clearml_server_security',
+ {'Server Upgrade Procedures': [
+ 'deploying_clearml/upgrade_server_aws_ec2_ami',
+ 'deploying_clearml/upgrade_server_gcp',
+ 'deploying_clearml/upgrade_server_linux_mac',
+ 'deploying_clearml/upgrade_server_win',
+ 'deploying_clearml/upgrade_server_kubernetes_helm',
+ 'deploying_clearml/clearml_server_es7_migration',
+ 'deploying_clearml/clearml_server_mongo44_migration'
+ ]},
+ ]
+ },
+/* {'Getting Started': [
+ 'getting_started/architecture',
+ ]},*/
+ {
+ 'Enterprise Server Deployment': [
+ 'deploying_clearml/enterprise_deploy/multi_tenant_k8s',
+ 'deploying_clearml/enterprise_deploy/vpc_aws',
+ 'deploying_clearml/enterprise_deploy/on_prem_ubuntu',
+ ]
+ },
+ {
+ type: 'category',
+ collapsible: true,
+ collapsed: true,
+ label: 'ClearML Application Gateway',
+ items: [
+ 'deploying_clearml/enterprise_deploy/appgw_install_compose',
+ 'deploying_clearml/enterprise_deploy/appgw_install_k8s',
+ ]
+ },
+ 'deploying_clearml/enterprise_deploy/delete_tenant',
+ {
+ 'Enterprise Applications': [
+ 'deploying_clearml/enterprise_deploy/app_install_ubuntu_on_prem',
+ 'deploying_clearml/enterprise_deploy/app_install_ex_server',
+ 'deploying_clearml/enterprise_deploy/app_custom',
+ ]
+ },
+ {
+ 'User Management': [
+ 'user_management/user_groups',
+ 'user_management/access_rules',
+ 'user_management/admin_vaults',
+ {
+ type: 'category',
+ collapsible: true,
+ collapsed: true,
+ label: 'Identity Provider Integration',
+ link: {type: 'doc', id: 'user_management/identity_providers'},
+ items: [
+ 'deploying_clearml/enterprise_deploy/sso_saml_k8s',
+ 'deploying_clearml/enterprise_deploy/sso_keycloak',
+ 'deploying_clearml/enterprise_deploy/sso_active_directory'
+ ]
+ },
+ ]
+ },
],
- apiSidebar: [
- 'references/api/index',
- 'references/api/definitions',
- 'references/api/login',
- 'references/api/debug',
- 'references/api/projects',
- 'references/api/queues',
- 'references/api/workers',
- 'references/api/events',
- 'references/api/models',
- 'references/api/tasks',
+ bestPracticesSidebar: [
+ {
+ type: 'category',
+ collapsible: true,
+ label: 'Best Practices',
+ items: [
+ {
+ type: 'doc',
+ label: 'Data Scientists',
+ id: 'getting_started/ds/best_practices'
+ },
+ {
+ type: 'doc',
+ label: 'MLOps and LLMOps',
+ id: 'getting_started/mlops/mlops_best_practices'
+ },
+ {
+ type: 'doc',
+ label: 'Data Management',
+ id: 'clearml_data/best_practices'
+ },
+ ],
+ },
]
};
diff --git a/src/css/custom.css b/src/css/custom.css
index e428974d..67fa2cf0 100644
--- a/src/css/custom.css
+++ b/src/css/custom.css
@@ -29,7 +29,7 @@ html {
--ifm-color-primary-light: #17c5a2;
--ifm-color-primary-lighter: #2edfbb;
- --ifm-color-primary-lightest: #51f1d1;
+ --ifm-color-primary-lightest: #AEFDED;
--ifm-toc-background-color: #141722;
--ifm-code-font-size: 95%;
@@ -47,13 +47,17 @@ html {
}
html[data-theme="dark"] {
- --ifm-background-color: #1a1e2c;
- --ifm-footer-background-color: #1a1e2c;
- --ifm-footer-link-color: #a4a5aa;
- --ifm-footer-link-hover-color: #14aa8c;
- --ifm-dropdown-background-color: #2c3246;
- --ifm-table-stripe-background: #141722;
- --ifm-link-color: var(--ifm-color-primary-light);
+ --ifm-background-color: #040506; /* body bg */
+ --ifm-header-background-color: #101418; /* section 1 */
+ --ifm-footer-background-color: #101418; /* section 1 */
+ --ifm-footer-link-color: #D8FFF0; /* specific footer link color */
+ --ifm-footer-link-hover-color: #ffffff; /* specific footer link hover color */
+ --ifm-dropdown-background-color: #242D37; /* section 2 */
+ --ifm-table-stripe-background: #101418; /* section 1 */
+ --ifm-toc-background-color: #242D37; /* section 2 */
+ --ifm-link-color: #6AD6C0; /* specific link color */
+ --ifm-link-hover-color: #AEFDED; /* specific link hover color */
+ --ifm-font-color-base: #E5E5E5; /* body text */
}
@media (min-width: 1400px) {
@@ -70,7 +74,7 @@ a {
}
html[data-theme="dark"] a:hover {
- color: var(--ifm-color-primary-lightest);
+ color: var(--ifm-color-primary-lightest);
}
.align-center {
@@ -151,12 +155,16 @@ html[data-theme="dark"] div[role="banner"] {
background-color: #09173C;
}
html[data-theme="dark"] .navbar--dark {
- background-color: #151722;
+ background-color: var(--ifm-header-background-color);
}
.navbar--dark.navbar .navbar__toggle {
color: white; /* opener icon color */
}
+html[data-theme="dark"] .navbar__link:hover,
+html[data-theme="dark"] .navbar__link--active {
+ color: var(--ifm-link-color);
+}
/* ===HEADER=== */
@@ -374,7 +382,7 @@ html[data-theme="light"] [class^="sidebarLogo"] > img {
html[data-theme="dark"] .menu__link--active {
- color: var(--ifm-color-primary-lighter);
+ color: var(--ifm-link-color);
}
html[data-theme="light"] .menu__link:not(.menu__link--active) {
color: #606a78;
@@ -464,7 +472,10 @@ html[data-theme="dark"] .table-of-contents {
box-shadow: 0 0 0 2px rgba(0,0,0,0.4) inset;
}
html[data-theme="dark"] a.table-of-contents__link--active {
- color: var(--ifm-color-primary-light);
+ color: var(--ifm-link-color);
+}
+html[data-theme="dark"] .table-of-contents a:hover {
+ color: var(--ifm-color-primary-lightest);
}
.table-of-contents__left-border {
border:none;
@@ -564,7 +575,7 @@ html[data-theme="light"] .footer__link-item[href*="stackoverflow"] {
html[data-theme="dark"] .footer__link-item:hover {
- color: var(--ifm-color-primary-lighter);
+ color: var(--ifm-footer-link-hover-color);
}
@@ -719,15 +730,37 @@ html[data-theme="light"] .icon {
/* md heading style */
+/* */
+html[data-theme="light"] h2 {
+ color: #0b2471;
+}
+html[data-theme="light"] h2 a.hash-link {
+ color: #0b2471;
+}
+
+html[data-theme="dark"] h2 {
+ color: #A8C5E6;
+}
+html[data-theme="dark"] h2 a.hash-link {
+ color: #A8C5E6;
+}
+
/* */
.markdown h3 {
font-size: 1.6rem;
}
html[data-theme="light"] h3 {
- color: var(--ifm-color-primary-darker);
+ color: #a335d5;
}
+html[data-theme="light"] h3 a.hash-link {
+ color: #a335d5;
+}
+
html[data-theme="dark"] h3 {
- color: var(--ifm-color-primary-lightest);
+ color: #DAA5BF;
+}
+html[data-theme="dark"] h3 a.hash-link {
+ color: #DAA5BF;
}
/* */
@@ -736,12 +769,21 @@ html[data-theme="dark"] h3 {
margin-bottom: 8px;
margin-top: 42px;
}
+
html[data-theme="light"] h4 {
- color: #62b00d;
+ color: #242D37;
}
+html[data-theme="light"] h4 a.hash-link {
+ color: #242D37;
+}
+
html[data-theme="dark"] h4 {
- color: #83de1f;
+ color: #c7cdd2;
}
+html[data-theme="dark"] h4 a.hash-link {
+ color: #c7cdd2;
+}
+
/*
*/