-```
-
-You can add the Docker container as the base Docker image to a task, using one of the following methods:
-
-- Using the **ClearML Web UI** - See [Default Container](../webapp/webapp_exp_tuning.md#default-container).
-- In the ClearML configuration file - Use the ClearML configuration file [`agent.default_docker`](../configs/clearml_conf.md#agentdefault_docker)
- options.
-
-Check out [this tutorial](../guides/clearml_agent/exp_environment_containers.md) for building a Docker container
-replicating the execution environment of an existing task.
\ No newline at end of file
diff --git a/docs/clearml_agent/clearml_agent_scheduling.md b/docs/getting_started/clearml_agent_scheduling.md
similarity index 99%
rename from docs/clearml_agent/clearml_agent_scheduling.md
rename to docs/getting_started/clearml_agent_scheduling.md
index 469dfe54..80d22df7 100644
--- a/docs/clearml_agent/clearml_agent_scheduling.md
+++ b/docs/getting_started/clearml_agent_scheduling.md
@@ -1,6 +1,7 @@
---
-title: Scheduling Working Hours
+title: Managing Agent Work Schedules
---
+
:::important Enterprise Feature
This feature is available under the ClearML Enterprise plan.
:::
diff --git a/docs/getting_started/data_management.md b/docs/getting_started/data_management.md
new file mode 100644
index 00000000..3064a51f
--- /dev/null
+++ b/docs/getting_started/data_management.md
@@ -0,0 +1,131 @@
+---
+title: Managing Your Data
+---
+
+Data is probably one of the biggest factors that determines the success of a project. Associating a model's data with
+the model's configuration, code, and results (such as accuracy) is key to deducing meaningful insights into model behavior.
+
+[ClearML Data](../clearml_data/clearml_data.md) lets you:
+* Version your data
+* Fetch your data from every machine with minimal code changes
+* Use the data with any other task
+* Associate data with task results
+
+ClearML offers the following data management solutions:
+
+* `clearml.Dataset` - A Python interface for creating, retrieving, managing, and using datasets. See [SDK](../clearml_data/clearml_data_sdk.md)
+ for an overview of the basic methods of the Dataset module.
+* `clearml-data` - A CLI utility for creating, uploading, and managing datasets. See [CLI](../clearml_data/clearml_data_cli.md)
+ for a reference of `clearml-data` commands.
+* Hyper-Datasets - ClearML's advanced queryable dataset management solution. For more information, see [Hyper-Datasets](../hyperdatasets/overview.md).
+
+This guide uses both the `clearml-data` CLI and the `Dataset` class to do the following:
+1. Create a ClearML dataset
+2. Access the dataset from a ClearML Task in order to preprocess the data
+3. Create a new version of the dataset with the modified data
+4. Use the new version of the dataset to train a model
+
+## Creating a Dataset
+
+Let's assume you have some code that extracts data from a production database into a local folder.
+Your goal is to create an immutable copy of the data to be used by further steps.
+
+1. Create the dataset using the `clearml-data create` command, passing the dataset's project and name. You can also add a
+   `latest` tag to make the dataset easier to find later.
+
+   ```bash
+   clearml-data create --project chatbot_data --name dataset_v1 --tags latest
+   ```
+
+1. Add data to the dataset using `clearml-data sync` and passing the path of the folder to be added to the dataset.
+ This command also uploads the data and finalizes the dataset automatically.
+
+ ```bash
+ clearml-data sync --folder ./work_dataset
+ ```
+
+
+## Preprocessing Data
+The second step is to preprocess the data. First access the data, then modify it,
+and lastly create a new version of the data.
+
+1. Create a task for your data preprocessing (optional):
+
+ ```python
+ from clearml import Task, Dataset
+
+ # create a task for the data processing
+ task = Task.init(project_name='data', task_name='create', task_type='data_processing')
+ ```
+
+1. Access a dataset using [`Dataset.get()`](../references/sdk/dataset.md#datasetget):
+
+ ```python
+ # get the v1 dataset
+ dataset = Dataset.get(dataset_project='data', dataset_name='dataset_v1')
+ ```
+1. Get a local mutable copy of the dataset using [`Dataset.get_mutable_local_copy`](../references/sdk/dataset.md#get_mutable_local_copy).
+ This downloads the dataset to a specified `target_folder` (non-cached). If the folder already has contents, specify
+ whether to overwrite its contents with the dataset contents using the `overwrite` parameter.
+
+ ```python
+ # get a local mutable copy of the dataset
+ dataset_folder = dataset.get_mutable_local_copy(
+ target_folder='work_dataset',
+ overwrite=True
+ )
+ ```
+
+1. Preprocess the data by modifying files in the `./work_dataset` folder.
+
+1. Create a new version of the dataset:
+
+ ```python
+    # create a new version of the dataset with the modified data
+ new_dataset = Dataset.create(
+ dataset_project='data',
+ dataset_name='dataset_v2',
+ parent_datasets=[dataset],
+ # this will make sure we have the creation code and the actual dataset artifacts on the same Task
+ use_current_task=True,
+    )
+    ```
+
+1. Add the modified data to the dataset:
+
+ ```python
+ new_dataset.sync_folder(local_path=dataset_folder)
+ new_dataset.upload()
+ new_dataset.finalize()
+ ```
+
+1. Remove the `latest` tag from the previous dataset and add the tag to the new dataset:
+ ```python
+ # now let's remove the previous dataset tag
+ dataset.tags = []
+ new_dataset.tags = ['latest']
+ ```
+
+The new dataset inherits the contents of the datasets specified in `Dataset.create`'s `parent_datasets` argument.
+This not only helps trace back dataset changes with full genealogy, but also makes the storage more efficient,
+since it only stores the changed and/or added files from the parent versions.
+When you access the dataset, ClearML transparently merges the files from all parent versions,
+as if they were always part of the requested dataset.
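To see the delta that a child version actually stores, you can inspect it against its parents. The following is a small sketch (assuming the `dataset_v2` created above and a configured ClearML setup):

```python
from clearml import Dataset

dataset = Dataset.get(dataset_project='data', dataset_name='dataset_v2')

# only these deltas are stored with this version; unchanged files live in the parents
print(dataset.list_added_files())     # files new in this version
print(dataset.list_modified_files())  # files changed relative to the parent versions
print(dataset.list_removed_files())   # files removed relative to the parent versions
```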
+
+## Training
+You can now train your model with the **latest** dataset you have in the system, by getting the dataset instance
+based on the `latest` tag (if two datasets share the tag, the newest one is returned).
+Once you have the dataset you can request a local copy of the data. All local copy requests are cached,
+which means that if you access the same dataset multiple times you will not have any unnecessary downloads.
+
+```python
+# create a task for the model training
+task = Task.init(project_name='data', task_name='ingest', task_type='training')
+
+# get the latest dataset with the tag `latest`
+dataset = Dataset.get(dataset_tags=['latest'])
+
+# get a cached copy of the Dataset files
+dataset_folder = dataset.get_local_copy()
+
+# train model here
+```
\ No newline at end of file
diff --git a/docs/getting_started/ds/ds_second_steps.md b/docs/getting_started/ds/ds_second_steps.md
deleted file mode 100644
index 3aaf3f87..00000000
--- a/docs/getting_started/ds/ds_second_steps.md
+++ /dev/null
@@ -1,193 +0,0 @@
----
-title: Next Steps
----
-
-So, you've already [installed ClearML's Python package](ds_first_steps.md) and run your first experiment!
-
-Now, you'll learn how to track Hyperparameters, Artifacts, and Metrics!
-
-## Accessing Experiments
-
-Every previously executed experiment is stored as a Task.
-A Task's project and name can be changed after the experiment has been executed.
-A Task is also automatically assigned an auto-generated unique identifier (UUID string) that cannot be changed and always locates the same Task in the system.
-
-Retrieve a Task object programmatically by querying the system based on either the Task ID,
-or project and name combination. You can also query tasks based on their properties, like tags (see [Querying Tasks](../../clearml_sdk/task_sdk.md#querying--searching-tasks)).
-
-```python
-prev_task = Task.get_task(task_id='123456deadbeef')
-```
-
-Once you have a Task object you can query the state of the Task, get its model(s), scalars, parameters, etc.
-
-## Log Hyperparameters
-
-For full reproducibility, it's paramount to save hyperparameters for each experiment. Since hyperparameters can have substantial impact
-on model performance, saving and comparing these between experiments is sometimes the key to understanding model behavior.
-
-ClearML supports logging `argparse` module arguments out of the box, so once ClearML is integrated into the code, it automatically logs all parameters provided to the argument parser.
-
-You can also log parameter dictionaries (very useful when parsing an external configuration file and storing as a dict object),
-whole configuration files, or even custom objects or [Hydra](https://hydra.cc/docs/intro/) configurations!
-
-```python
-params_dictionary = {'epochs': 3, 'lr': 0.4}
-task.connect(params_dictionary)
-```
-
-See [Configuration](../../clearml_sdk/task_sdk.md#configuration) for all hyperparameter logging options.
-
-## Log Artifacts
-
-ClearML lets you easily store the output products of an experiment - Model snapshot / weights file, a preprocessing of your data, feature representation of data and more!
-
-Essentially, artifacts are files (or Python objects) uploaded from a script and are stored alongside the Task.
-These artifacts can be easily accessed by the web UI or programmatically.
-
-Artifacts can be stored anywhere, either on the ClearML server, or any object storage solution or shared folder.
-See all [storage capabilities](../../integrations/storage.md).
-
-
-### Adding Artifacts
-
-Upload a local file containing the preprocessed results of the data:
-```python
-task.upload_artifact(name='data', artifact_object='/path/to/preprocess_data.csv')
-```
-
-You can also upload an entire folder with all its content by passing the folder (the folder will be zipped and uploaded as a single zip file).
-```python
-task.upload_artifact(name='folder', artifact_object='/path/to/folder/')
-```
-
-Lastly, you can upload an instance of an object; Numpy/Pandas/PIL Images are supported with `npz`/`csv.gz`/`jpg` formats accordingly.
-If the object type is unknown, ClearML pickles it and uploads the pickle file.
-
-```python
-numpy_object = np.eye(100, 100)
-task.upload_artifact(name='features', artifact_object=numpy_object)
-```
-
-For more artifact logging options, see [Artifacts](../../clearml_sdk/task_sdk.md#artifacts).
-
-### Using Artifacts
-
-Logged artifacts can be used by other Tasks, whether it's a pre-trained Model or processed data.
-To use an artifact, first you have to get an instance of the Task that originally created it,
-then you either download it and get its path, or get the artifact object directly.
-
-For example, using a previously generated preprocessed data.
-
-```python
-preprocess_task = Task.get_task(task_id='preprocessing_task_id')
-local_csv = preprocess_task.artifacts['data'].get_local_copy()
-```
-
-`task.artifacts` is a dictionary where the keys are the artifact names, and the returned object is the artifact object.
-Calling `get_local_copy()` returns a local cached copy of the artifact. Therefore, next time you execute the code, you don't
-need to download the artifact again.
-Calling `get()` gets a deserialized pickled object.
-
-Check out the [artifacts retrieval](https://github.com/clearml/clearml/blob/master/examples/reporting/artifacts_retrieval.py) example code.
-
-### Models
-
-Models are a special kind of artifact.
-Models created by popular frameworks (such as PyTorch, TensorFlow, Scikit-learn) are automatically logged by ClearML.
-All snapshots are automatically logged. In order to make sure you also automatically upload the model snapshot (instead of saving its local path),
-pass a storage location for the model files to be uploaded to.
-
-For example, upload all snapshots to an S3 bucket:
-```python
-task = Task.init(
- project_name='examples',
- task_name='storing model',
- output_uri='s3://my_models/'
-)
-```
-
-Now, whenever the framework (TensorFlow/Keras/PyTorch etc.) stores a snapshot, the model file is automatically uploaded to the bucket to a specific folder for the experiment.
-
-Loading models by a framework is also logged by the system; these models appear in an experiment's **Artifacts** tab,
-under the "Input Models" section.
-
-Check out model snapshots examples for [TensorFlow](https://github.com/clearml/clearml/blob/master/examples/frameworks/tensorflow/tensorflow_mnist.py),
-[PyTorch](https://github.com/clearml/clearml/blob/master/examples/frameworks/pytorch/pytorch_mnist.py),
-[Keras](https://github.com/clearml/clearml/blob/master/examples/frameworks/keras/keras_tensorboard.py),
-[scikit-learn](https://github.com/clearml/clearml/blob/master/examples/frameworks/scikit-learn/sklearn_joblib_example.py).
-
-#### Loading Models
-Loading a previously trained model is quite similar to loading artifacts.
-
-```python
-prev_task = Task.get_task(task_id='the_training_task')
-last_snapshot = prev_task.models['output'][-1]
-local_weights_path = last_snapshot.get_local_copy()
-```
-
-Like before, you have to get the instance of the task training the original weights files, then you can query the task for its output models (a list of snapshots), and get the latest snapshot.
-:::note
-Using TensorFlow, the snapshots are stored in a folder, meaning the `local_weights_path` will point to a folder containing your requested snapshot.
-:::
-As with artifacts, all models are cached, meaning the next time you run this code, no model needs to be downloaded.
-Once one of the frameworks will load the weights file, the running task will be automatically updated with "Input Model" pointing directly to the original training Task's Model.
-This feature lets you easily get a full genealogy of every trained and used model by your system!
-
-## Log Metrics
-
-Full metrics logging is the key to finding the best performing model!
-By default, ClearML automatically captures and logs everything reported to TensorBoard and Matplotlib.
-
-Since not all metrics are tracked that way, you can also manually report metrics using a [`Logger`](../../fundamentals/logger.md) object.
-
-You can log everything, from time series data and confusion matrices to HTML, Audio, and Video, to custom plotly graphs! Everything goes!
-
-
-
-
-Once everything is neatly logged and displayed, use the [comparison tool](../../webapp/webapp_exp_comparing.md) to find the best configuration!
-
-
-## Track Experiments
-
-The task table is a powerful tool for creating dashboards and views of your own projects, your team's projects, or the entire development.
-
-
-
-
-
-### Creating Leaderboards
-Customize the [task table](../../webapp/webapp_exp_table.md) to fit your own needs, adding desired views of parameters, metrics, and tags.
-You can filter and sort based on parameters and metrics, so creating custom views is simple and flexible.
-
-Create a dashboard for a project, presenting the latest Models and their accuracy scores, for immediate insights.
-
-It can also be used as a live leaderboard, showing the best performing experiments' status, updated in real time.
-This is helpful to monitor your projects' progress, and to share it across the organization.
-
-Any page is sharable by copying the URL from the address bar, allowing you to bookmark leaderboards or to send an exact view of a specific experiment or a comparison page.
-
-You can also tag Tasks for visibility and filtering allowing you to add more information on the execution of the experiment.
-Later you can search based on task name in the search bar, and filter experiments based on their tags, parameters, status, and more.
-
-## What's Next?
-
-This covers the basics of ClearML! Running through this guide you've learned how to log Parameters, Artifacts and Metrics!
-
-If you want to learn more look at how we see the data science process in our [best practices](best_practices.md) page,
-or check these pages out:
-
-- Scale you work and deploy [ClearML Agents](../../clearml_agent.md)
-- Develop on remote machines with [ClearML Session](../../apps/clearml_session.md)
-- Structure your work and put it into [Pipelines](../../pipelines/pipelines.md)
-- Improve your experiments with [Hyperparameter Optimization](../../fundamentals/hpo.md)
-- Check out ClearML's integrations with your favorite ML frameworks like [TensorFlow](../../integrations/tensorflow.md),
- [PyTorch](../../integrations/pytorch.md), [Keras](../../integrations/keras.md),
- and more
-
-## YouTube Playlist
-
-All these tips and tricks are also covered in ClearML's **Getting Started** series on YouTube. Go check it out :)
-
-[](https://www.youtube.com/watch?v=kyOfwVg05EM&list=PLMdIlCuMqSTnoC45ME5_JnsJX0zWqDdlO&index=3)
\ No newline at end of file
diff --git a/docs/getting_started/hpo.md b/docs/getting_started/hpo.md
new file mode 100644
index 00000000..aa388581
--- /dev/null
+++ b/docs/getting_started/hpo.md
@@ -0,0 +1,34 @@
+---
+title: Hyperparameter Optimization
+---
+
+## What is Hyperparameter Optimization?
+Hyperparameters are variables that directly control the behaviors of training algorithms, and have a significant effect on
+the performance of the resulting machine learning models. Hyperparameter optimization (HPO) is crucial for improving
+model performance and generalization.
+
+Finding the hyperparameter values that yield the best performing models can be complicated. Manually adjusting
+hyperparameters over the course of many training trials can be slow and tedious. Luckily, ClearML offers automated
+solutions to boost hyperparameter optimization efficiency.
+
+## Workflow
+
+
+
+The preceding diagram demonstrates the typical flow of hyperparameter optimization where the parameters of a base task are optimized:
+
+1. Configure an Optimization Task with a base task whose parameters will be optimized, optimization targets, and a set of parameter values to
+ test
+1. Clone the base task. Each clone's parameters are overridden with values from the optimization task
+1. Enqueue each clone for execution by a ClearML Agent
+1. The Optimization Task records and monitors the cloned tasks' configuration and execution details, and returns a
+ summary of the optimization results.
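The clone-override-enqueue loop in the steps above can be sketched in plain Python. This is an illustrative sketch only: the grid expansion is ordinary Python, and the commented-out lines mark where the ClearML calls (`Task.clone()`, `set_parameter()`, `Task.enqueue()`) would go against a real server and queue, both of which are assumptions here.

```python
import itertools

# example search space for the base task's hyperparameters
search_space = {
    'General/lr': [0.001, 0.01, 0.1],
    'General/batch_size': [32, 64],
}

def expand_grid(space):
    """Expand a dict of value lists into one override dict per combination."""
    keys = list(space)
    return [dict(zip(keys, combo)) for combo in itertools.product(*(space[k] for k in keys))]

overrides = expand_grid(search_space)
print(len(overrides))  # 3 * 2 = 6 clones, one per combination

# for params in overrides:
#     clone = Task.clone(source_task=base_task)        # step 2: clone the base task
#     for name, value in params.items():
#         clone.set_parameter(name, value)             # override the clone's parameters
#     Task.enqueue(task=clone, queue_name='default')   # step 3: enqueue for an agent
```

In practice, ClearML's optimization tools manage this loop (and smarter search strategies) for you.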
+
+## ClearML Solutions
+
+ClearML offers three solutions for hyperparameter optimization:
+* [GUI application](../webapp/applications/apps_hpo.md): The Hyperparameter Optimization app allows you to run and manage the optimization tasks
+ directly from the web interface--no code necessary (available under the ClearML Pro plan).
+* [Command-Line Interface (CLI)](../apps/clearml_param_search.md): The `clearml-param-search` CLI tool enables you to configure and launch the optimization process from your terminal.
+* [Python Interface](../clearml_sdk/hpo_sdk.md): The `HyperParameterOptimizer` class within the ClearML SDK allows you to
+ configure and launch optimization tasks, and seamlessly integrate them in your existing model training tasks.
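As a rough sketch of the Python interface (the project/task names, base task ID, metric names, and queue below are placeholders; running this requires a configured ClearML server plus an agent listening on the queue):

```python
from clearml import Task
from clearml.automation import (
    DiscreteParameterRange,
    HyperParameterOptimizer,
    RandomSearch,
    UniformParameterRange,
)

# the optimizer itself runs as a ClearML task
task = Task.init(
    project_name='examples',
    task_name='hpo sketch',
    task_type=Task.TaskTypes.optimizer,
)

optimizer = HyperParameterOptimizer(
    base_task_id='<base_task_id>',  # the task whose parameters will be optimized
    hyper_parameters=[
        UniformParameterRange('General/lr', min_value=1e-4, max_value=1e-1),
        DiscreteParameterRange('General/batch_size', values=[32, 64, 128]),
    ],
    # optimization target: maximize this reported scalar
    objective_metric_title='validation',
    objective_metric_series='accuracy',
    objective_metric_sign='max',
    optimizer_class=RandomSearch,
    execution_queue='default',  # clones are enqueued here for agents to run
    total_max_jobs=20,
)

optimizer.start()  # launches and monitors the cloned tasks
optimizer.wait()   # block until the optimization completes
optimizer.stop()
```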
diff --git a/docs/getting_started/logging_using_artifacts.md b/docs/getting_started/logging_using_artifacts.md
new file mode 100644
index 00000000..27cafc98
--- /dev/null
+++ b/docs/getting_started/logging_using_artifacts.md
@@ -0,0 +1,122 @@
+---
+title: Logging and Using Task Artifacts
+---
+
+:::note
+This tutorial assumes that you've already set up [ClearML](../clearml_sdk/clearml_sdk_setup.md).
+:::
+
+
+ClearML lets you easily store a task's output products--or **Artifacts**:
+* [Model](#models) snapshot / weights file
+* Preprocessing of your data
+* Feature representation of data
+* And more!
+
+**Artifacts** are files or Python objects that are uploaded and stored alongside the Task.
+These artifacts can be easily accessed by the web UI or programmatically.
+
+Artifacts can be stored anywhere, either on the ClearML Server, or any object storage solution or shared folder.
+See all [storage capabilities](../integrations/storage.md).
+
+
+## Adding Artifacts
+
+Let's create a [Task](../fundamentals/task.md) and add some artifacts to it.
+
+1. Create a task using [`Task.init()`](../references/sdk/task.md#taskinit)
+
+ ```python
+ from clearml import Task
+
+ task = Task.init(project_name='great project', task_name='task with artifacts')
+ ```
+
+1. Upload a local **file** using [`Task.upload_artifact()`](../references/sdk/task.md#upload_artifact), specifying the artifact's
+   name and its path:
+
+ ```python
+ task.upload_artifact(name='data', artifact_object='/path/to/preprocess_data.csv')
+ ```
+
+1. Upload an **entire folder** with all its content by passing the folder path (the folder will be zipped and uploaded as a single zip file).
+
+ ```python
+ task.upload_artifact(name='folder', artifact_object='/path/to/folder/')
+ ```
+
+1. Upload an instance of an object. NumPy arrays, Pandas DataFrames, and PIL Images are supported with `npz`/`csv.gz`/`jpg` formats respectively.
+ If the object type is unknown, ClearML pickles it and uploads the pickle file.
+
+ ```python
+    import numpy as np
+
+    numpy_object = np.eye(100, 100)
+ task.upload_artifact(name='features', artifact_object=numpy_object)
+ ```
+
+For more artifact logging options, see [Artifacts](../clearml_sdk/task_sdk.md#artifacts).
+
+## Using Artifacts
+
+Logged artifacts can be used by other Tasks, whether it's a pre-trained Model or processed data.
+To use an artifact, first you have to get an instance of the Task that originally created it,
+then you either download it and get its path, or get the artifact object directly.
+
+For example, use previously generated preprocessed data:
+
+```python
+preprocess_task = Task.get_task(task_id='preprocessing_task_id')
+local_csv = preprocess_task.artifacts['data'].get_local_copy()
+```
+
+`task.artifacts` is a dictionary whose keys are the artifact names and whose values are the artifact objects.
+Calling `get_local_copy()` returns a local cached copy of the artifact, so the next time you execute the code, you don't
+need to download the artifact again.
+Calling `get()` returns the artifact's deserialized object.
+
+Check out the [artifacts retrieval](https://github.com/clearml/clearml/blob/master/examples/reporting/artifacts_retrieval.py) example code.
+
+## Models
+
+Models are a special kind of artifact.
+Models created by popular frameworks (such as PyTorch, TensorFlow, and scikit-learn) are automatically logged by ClearML,
+including all their snapshots. To make sure model snapshots are also automatically uploaded (instead of just recording their local paths),
+pass a storage location for the model files to be uploaded to.
+
+For example, upload all snapshots to an S3 bucket:
+```python
+task = Task.init(
+ project_name='examples',
+ task_name='storing model',
+ output_uri='s3://my_models/'
+)
+```
+
+Now, whenever the framework (TensorFlow/Keras/PyTorch etc.) stores a snapshot, the model file is automatically uploaded to a task-specific folder in the bucket.
+
+Loading models by a framework is also logged by the system; these models appear in a task's **Artifacts** tab,
+under the "Input Models" section.
+
+Check out model snapshots examples for [TensorFlow](https://github.com/clearml/clearml/blob/master/examples/frameworks/tensorflow/tensorflow_mnist.py),
+[PyTorch](https://github.com/clearml/clearml/blob/master/examples/frameworks/pytorch/pytorch_mnist.py),
+[Keras](https://github.com/clearml/clearml/blob/master/examples/frameworks/keras/keras_tensorboard.py),
+[scikit-learn](https://github.com/clearml/clearml/blob/master/examples/frameworks/scikit-learn/sklearn_joblib_example.py).
+
+### Loading Models
+Loading a previously trained model is quite similar to loading artifacts.
+
+```python
+prev_task = Task.get_task(task_id='the_training_task')
+last_snapshot = prev_task.models['output'][-1]
+local_weights_path = last_snapshot.get_local_copy()
+```
+
+As before, first get the instance of the task that trained the original weights, then query it for its output models (a list of snapshots), and take the latest snapshot.
+
+:::note
+Using TensorFlow, the snapshots are stored in a folder, meaning the `local_weights_path` will point to a folder containing your requested snapshot.
+:::
+
+As with artifacts, all models are cached, meaning the next time you run this code, no model needs to be downloaded.
+Once a framework loads the weights file, the running task is automatically updated, with its "Input Model" pointing directly to the original training task's model.
+This feature lets you easily trace the full genealogy of every model trained and used by your system!
+
diff --git a/docs/getting_started/main.md b/docs/getting_started/main.md
index 26fffcd0..bc1b3b17 100644
--- a/docs/getting_started/main.md
+++ b/docs/getting_started/main.md
@@ -1,8 +1,4 @@
----
-id: main
-title: What is ClearML?
-slug: /
----
+# What is ClearML?
ClearML is an open-source, end-to-end AI Platform designed to streamline AI adoption and the entire development lifecycle.
It supports every phase of AI development, from research to production, allowing users to
@@ -109,14 +105,14 @@ Want a more in depth introduction to ClearML? Choose where you want to get start
- [Track and upload](../fundamentals/task.md) metrics and models with only 2 lines of code
- [Reproduce](../webapp/webapp_exp_reproducing.md) tasks with 3 mouse clicks
-- [Create bots](../guides/services/slack_alerts.md) that send you Slack messages based on experiment behavior (for example,
+- [Create bots](../guides/services/slack_alerts.md) that send you Slack messages based on task behavior (for example,
alert you whenever your model improves in accuracy)
- Manage your [data](../clearml_data/clearml_data.md) - store, track, and version control
-- Remotely execute experiments on any compute resource you have available with [ClearML Agent](../clearml_agent.md)
+- Remotely execute tasks on any compute resource you have available with [ClearML Agent](../clearml_agent.md)
- Automatically scale cloud instances according to your resource needs with ClearML's
[AWS Autoscaler](../webapp/applications/apps_aws_autoscaler.md) and [GCP Autoscaler](../webapp/applications/apps_gcp_autoscaler.md)
GUI applications
-- Run [hyperparameter optimization](../fundamentals/hpo.md)
+- Run [hyperparameter optimization](hpo.md)
- Build [pipelines](../pipelines/pipelines.md) from code
- Much more!
diff --git a/docs/getting_started/mlops/mlops_first_steps.md b/docs/getting_started/mlops/mlops_first_steps.md
deleted file mode 100644
index 34635cd3..00000000
--- a/docs/getting_started/mlops/mlops_first_steps.md
+++ /dev/null
@@ -1,225 +0,0 @@
----
-title: First Steps
----
-
-:::note
-This tutorial assumes that you've already [signed up](https://app.clear.ml) to ClearML
-:::
-
-ClearML provides tools for **automation**, **orchestration**, and **tracking**, all key in performing effective MLOps and LLMOps.
-
-Effective MLOps and LLMOps rely on the ability to scale work beyond one's own computer. Moving from your own machine can be time-consuming.
-Even assuming that you have all the drivers and applications installed, you still need to manage multiple Python environments
-for different packages / package versions, or worse - manage different Dockers for different package versions.
-
-Not to mention, when working on remote machines, executing experiments, tracking what's running where, and making sure machines
-are fully utilized at all times become daunting tasks.
-
-This can create overhead that derails you from your core work!
-
-ClearML Agent was designed to deal with such issues and more! It is a tool responsible for executing tasks on remote machines: on-premises or in the cloud! ClearML Agent provides the means to reproduce and track tasks in your
-machine of choice through the ClearML WebApp with no need for additional code.
-
-The agent will set up the environment for a specific Task's execution (inside a Docker, or bare-metal), install the
-required Python packages, and execute and monitor the process.
-
-
-## Set up an Agent
-
-1. Install the agent:
-
- ```bash
- pip install clearml-agent
- ```
-
-1. Connect the agent to the server by [creating credentials](https://app.clear.ml/settings/workspace-configuration), then run this:
-
- ```bash
- clearml-agent init
- ```
-
- :::note
- If you've already created credentials, you can copy-paste the default agent section from [here](https://github.com/clearml/clearml-agent/blob/master/docs/clearml.conf#L15) (this is optional. If the section is not provided the default values will be used)
- :::
-
-1. Start the agent's daemon and assign it to a [queue](../../fundamentals/agents_and_queues.md#what-is-a-queue):
-
- ```bash
- clearml-agent daemon --queue default
- ```
-
- A queue is an ordered list of Tasks that are scheduled for execution. The agent will pull Tasks from its assigned
- queue (`default` in this case), and execute them one after the other. Multiple agents can listen to the same queue
- (or even multiple queues), but only a single agent will pull a Task to be executed.
-
-:::tip Agent Deployment Modes
-ClearML Agents can be deployed in:
-* [Virtual environment mode](../../clearml_agent/clearml_agent_execution_env.md): Agent creates a new venv to execute a task.
-* [Docker mode](../../clearml_agent/clearml_agent_execution_env.md#docker-mode): Agent executes a task inside a
-Docker container.
-
-For more information, see [Running Modes](../../fundamentals/agents_and_queues.md#running-modes).
-:::
-
-## Clone a Task
-Tasks can be reproduced (cloned) for validation or as a baseline for further experimentation.
-Cloning a task duplicates the task's configuration, but not its outputs.
-
-**To clone a task in the ClearML WebApp:**
-1. Click on any project card to open its [task table](../../webapp/webapp_exp_table.md).
-1. Right-click one of the tasks on the table.
-1. Click **Clone** in the context menu, which will open a **CLONE TASK** window.
-1. Click **CLONE** in the window.
-
-The newly cloned task will appear and its info panel will slide open. The cloned task is in draft mode, so
-it can be modified. You can edit the Git / code references, control the Python packages to be installed, specify the
-Docker container image to be used, or change the hyperparameters and configuration files. See [Modifying Tasks](../../webapp/webapp_exp_tuning.md#modifying-tasks) for more information about editing tasks in the UI.
-
-## Enqueue a Task
-Once you have set up a task, it is now time to execute it.
-
-**To execute a task through the ClearML WebApp:**
-1. Right-click your draft task (the context menu is also available through the
- button on the top right of the task's info panel)
-1. Click **ENQUEUE,** which will open the **ENQUEUE TASK** window
-1. In the window, select `default` in the queue menu
-1. Click **ENQUEUE**
-
-This action pushes the task into the `default` queue. The task's status becomes *Pending* until an agent
-assigned to the queue fetches it, at which time the task's status becomes *Running*. The agent executes the
-task, and the task can be [tracked and its results visualized](../../webapp/webapp_exp_track_visual.md).
-
-
-## Programmatic Interface
-
-The cloning, modifying, and enqueuing actions described above can also be performed programmatically.
-
-### First Steps
-#### Access Previously Executed Tasks
-All Tasks in the system can be accessed through their unique Task ID, or based on their properties using the [`Task.get_task`](../../references/sdk/task.md#taskget_task)
-method. For example:
-```python
-from clearml import Task
-
-executed_task = Task.get_task(task_id='aabbcc')
-```
-
-Once a specific Task object has been obtained, it can be cloned, modified, and more. See [Advanced Usage](#advanced-usage).
-
-#### Clone a Task
-
-To duplicate a task, use the [`Task.clone`](../../references/sdk/task.md#taskclone) method, and input either a
-Task object or the Task's ID as the `source_task` argument.
-```python
-cloned_task = Task.clone(source_task=executed_task)
-```
-
-#### Enqueue a Task
-To enqueue the task, use the [`Task.enqueue`](../../references/sdk/task.md#taskenqueue) method, and input the Task object
-with the `task` argument, and the queue to push the task into with `queue_name`.
-
-```python
-Task.enqueue(task=cloned_task, queue_name='default')
-```
-
-### Advanced Usage
-Before execution, use a variety of programmatic methods to manipulate a task object.
-
-#### Modify Hyperparameters
-[Hyperparameters](../../fundamentals/hyperparameters.md) are an integral part of Machine Learning code as they let you
-control the code without directly modifying it. Hyperparameters can be added from anywhere in your code, and ClearML supports multiple ways to obtain them!
-
-Users can programmatically change cloned tasks' parameters.
-
-For example:
-```python
-from clearml import Task
-
-cloned_task = Task.clone(task_id='aabbcc')
-cloned_task.set_parameter(name='internal/magic', value=42)
-```
-
-#### Report Artifacts
-Artifacts are files created by your task. Users can upload [multiple types of data](../../clearml_sdk/task_sdk.md#logging-artifacts),
-objects and files to a task anywhere from code.
-
-```python
-import numpy as np
-from clearml import Task
-
-Task.current_task().upload_artifact(name='a_file', artifact_object='local_file.bin')
-Task.current_task().upload_artifact(name='numpy', artifact_object=np.ones(4,4))
-```
-
-Artifacts serve as a great way to pass and reuse data between tasks. Artifacts can be [retrieved](../../clearml_sdk/task_sdk.md#using-artifacts)
-by accessing the Task that created them. These artifacts can be modified and uploaded to other tasks.
-
-```python
-from clearml import Task
-
-executed_task = Task.get_task(task_id='aabbcc')
-# artifact as a file
-local_file = executed_task.artifacts['file'].get_local_copy()
-# artifact as object
-a_numpy = executed_task.artifacts['numpy'].get()
-```
-
-By facilitating the communication of complex objects between tasks, artifacts serve as the foundation of ClearML's [Data Management](../../clearml_data/clearml_data.md)
-and [pipeline](../../pipelines/pipelines.md) solutions.
-
-#### Log Models
-Logging models into the model repository is the easiest way to integrate the development process directly with production.
-Any model stored by a supported framework (Keras / TensorFlow / PyTorch / Joblib etc.) will be automatically logged into ClearML.
-
-ClearML also supports methods to explicitly log models. Models can be automatically stored on a preferred storage medium
-(S3 bucket, Google storage, etc.).
-
-#### Log Metrics
-Log as many metrics as you want from your processes using the [Logger](../../fundamentals/logger.md) module. This
-improves the visibility of your processes' progress.
-
-```python
-from clearml import Logger
-
-Logger.current_logger().report_scalar(
- graph='metric',
- series='variant',
- value=13.37,
- iteration=counter
-)
-```
-
-You can also retrieve reported scalars for programmatic analysis:
-```python
-from clearml import Task
-
-executed_task = Task.get_task(task_id='aabbcc')
-# get a summary of the min/max/last value of all reported scalars
-min_max_values = executed_task.get_last_scalar_metrics()
-# get detailed graphs of all scalars
-full_scalars = executed_task.get_reported_scalars()
-```
-
-#### Query Tasks
-You can also search and query Tasks in the system. Use the [`Task.get_tasks`](../../references/sdk/task.md#taskget_tasks)
-class method to retrieve Task objects and filter based on the specific values of the Task - status, parameters, metrics and more!
-
-```python
-from clearml import Task
-
-tasks = Task.get_tasks(
- project_name='examples',
- task_name='partial_name_match',
- task_filter={'status': 'in_progress'}
-)
-```
-
-#### Manage Your Data
-Data is probably one of the biggest factors that determines the success of a project. Associating a model's data with
-the model's configuration, code, and results (such as accuracy) is key to deducing meaningful insights into model behavior.
-
-[ClearML Data](../../clearml_data/clearml_data.md) lets you version your data, so it's never lost, fetch it from every
-machine with minimal code changes, and associate data to task results.
-
-Logging data can be done via command line, or programmatically. If any preprocessing code is involved, ClearML logs it
-as well! Once data is logged, it can be used by other tasks.
diff --git a/docs/getting_started/mlops/mlops_second_steps.md b/docs/getting_started/mlops/mlops_second_steps.md
deleted file mode 100644
index aa56772b..00000000
--- a/docs/getting_started/mlops/mlops_second_steps.md
+++ /dev/null
@@ -1,121 +0,0 @@
----
-title: Next Steps
----
-
-Once Tasks are defined and in the ClearML system, they can be chained together to create Pipelines.
-Pipelines provide users with a greater level of abstraction and automation, with Tasks running one after the other.
-
-Tasks can interface with other Tasks in the pipeline and leverage other Tasks' work products.
-
-The sections below describe the following scenarios:
-* [Dataset creation](#dataset-creation)
-* Data [processing](#preprocessing-data) and [consumption](#training)
-* [Pipeline building](#building-the-pipeline)
-
-
-## Building Tasks
-### Dataset Creation
-
-Let's assume you have some code that extracts data from a production database into a local folder.
-Your goal is to create an immutable copy of the data to be used by further steps:
-
-```bash
-clearml-data create --project data --name dataset
-clearml-data sync --folder ./from_production
-```
-
-You can add a tag `latest` to the Dataset, marking it as the latest version.
-
-### Preprocessing Data
-The second step is to preprocess the data. First access the data, then modify it,
-and lastly create a new version of the data.
-
-```python
-from clearml import Task, Dataset
-
-# create a task for the data processing part
-task = Task.init(project_name='data', task_name='create', task_type='data_processing')
-
-# get the v1 dataset
-dataset = Dataset.get(dataset_project='data', dataset_name='dataset_v1')
-
-# get a local mutable copy of the dataset
-dataset_folder = dataset.get_mutable_local_copy(
- target_folder='work_dataset',
- overwrite=True
-)
-# change some files in the `./work_dataset` folder
-
-# create a new version of the dataset with the pickle file
-new_dataset = Dataset.create(
- dataset_project='data',
- dataset_name='dataset_v2',
- parent_datasets=[dataset],
- # this will make sure we have the creation code and the actual dataset artifacts on the same Task
- use_current_task=True,
-)
-new_dataset.sync_folder(local_path=dataset_folder)
-new_dataset.upload()
-new_dataset.finalize()
-# now let's remove the previous dataset tag
-dataset.tags = []
-new_dataset.tags = ['latest']
-```
-
-The new dataset inherits the contents of the datasets specified in `Dataset.create`'s `parent_datasets` argument.
-This not only helps trace back dataset changes with full genealogy, but also makes the storage more efficient,
-since it only stores the changed and/or added files from the parent versions.
-When you access the Dataset, it automatically merges the files from all parent versions
-in a fully automatic and transparent process, as if the files were always part of the requested Dataset.
-
-### Training
-You can now train your model with the **latest** Dataset you have in the system, by getting the instance of the Dataset
-based on the `latest` tag
-(if by any chance you have two Datasets with the same tag you will get the newest).
-Once you have the dataset you can request a local copy of the data. All local copy requests are cached,
-which means that if you access the same dataset multiple times you will not have any unnecessary downloads.
-
-```python
-# create a task for the model training
-task = Task.init(project_name='data', task_name='ingest', task_type='training')
-
-# get the latest dataset with the tag `latest`
-dataset = Dataset.get(dataset_tags='latest')
-
-# get a cached copy of the Dataset files
-dataset_folder = dataset.get_local_copy()
-
-# train our model here
-```
-
-## Building the Pipeline
-
-Now that you have the data creation step, and the data training step, create a pipeline that when executed,
-will first run the first and then run the second.
-It is important to remember that pipelines are Tasks by themselves and can also be automated by other pipelines (i.e. pipelines of pipelines).
-
-```python
-from clearml import PipelineController
-
-pipe = PipelineController(
- project='data',
- name='pipeline demo',
- version="1.0"
-)
-
-pipe.add_step(
- name='step 1 data',
- base_project_name='data',
- base_task_name='create'
-)
-pipe.add_step(
- name='step 2 train',
- parents=['step 1 data', ],
- base_project_name='data',
- base_task_name='ingest'
-)
-```
-
-You can also pass the parameters from one step to the other (for example `Task.id`).
-In addition to pipelines made up of Task steps, ClearML also supports pipelines consisting of function steps. For more
-information, see the [full pipeline documentation](../../pipelines/pipelines.md).
diff --git a/docs/getting_started/project_progress.md b/docs/getting_started/project_progress.md
new file mode 100644
index 00000000..01f12893
--- /dev/null
+++ b/docs/getting_started/project_progress.md
@@ -0,0 +1,43 @@
+---
+title: Monitoring Project Progress
+---
+
+ClearML provides a comprehensive set of monitoring tools to help effectively track and manage machine learning projects.
+These tools offer both high-level overviews and detailed insights into task execution, resource
+utilization, and project performance.
+
+## Offerings
+
+### Project Dashboard
+
+:::info Pro Plan Offering
+The Project Dashboard app is available under the ClearML Pro plan.
+:::
+
+The [**Project Dashboard**](../webapp/applications/apps_dashboard.md) UI application provides a centralized
+view of project progress, task statuses, resource usage, and key performance metrics. It offers:
+* Comprehensive insights:
+ * Track task statuses and trends over time.
+ * Monitor GPU utilization and worker activity.
+ * Analyze performance metrics.
+* Proactive alerts - By integrating with Slack, the Dashboard can notify teams of task failures
+ and completions.
+
+For more information, see [Project Dashboard](../webapp/applications/apps_dashboard.md).
+
+
+
+
+### Project Overview
+
+A project's **OVERVIEW** tab in the UI presents a general picture of a project:
+* Metric Snapshot – A graphical representation of selected metric values across project tasks, offering a quick assessment of progress.
+* Task Status Tracking – When a single metric variant is selected for the snapshot, task status is color-coded (e.g.,
+Completed, Aborted, Published, Failed) for better visibility.
+
+Use the Metric Snapshot to track project progress and identify trends in task performance.
+
+For more information, see [Project Overview](../webapp/webapp_project_overview.md).
+
+
+
diff --git a/docs/getting_started/remote_execution.md b/docs/getting_started/remote_execution.md
new file mode 100644
index 00000000..3f7fab5f
--- /dev/null
+++ b/docs/getting_started/remote_execution.md
@@ -0,0 +1,84 @@
+---
+title: Remote Execution
+---
+
+:::note
+This guide assumes that you've already set up [ClearML](../clearml_sdk/clearml_sdk_setup.md) and [ClearML Agent](../clearml_agent/clearml_agent_setup.md).
+:::
+
+ClearML Agent enables seamless remote execution by offloading computations from a local development environment to a more
+powerful remote machine. This is useful for:
+
+* Running an initial process (a task or function) locally before scaling up.
+* Offloading resource-intensive tasks to dedicated compute nodes.
+* Managing execution through ClearML's queue system.
+
+This guide focuses on transitioning a locally executed process to a remote machine for scalable execution. To learn how
+to reproduce a previously executed process on a remote machine, see [Reproducing Tasks](reproduce_tasks.md).
+
+## Running a Task Remotely
+
+A compelling workflow is:
+
+1. Run code on a development machine for a few iterations, or just set up the environment.
+1. Move the execution to a beefier remote machine for the actual training.
+
+Use [`Task.execute_remotely()`](../references/sdk/task.md#execute_remotely) to implement this workflow. This method stops the current manual execution, and then
+re-runs it on a remote machine.
+
+1. Run a `clearml-agent` daemon on the beefier remote machine and assign it to the `default` queue:
+
+ ```commandline
+ clearml-agent daemon --queue default
+ ```
+
+1. Run the local code to send to the remote machine for execution:
+
+ ```python
+ from clearml import Task
+
+ task = Task.init(project_name="myProject", task_name="myTask")
+
+ # training code
+
+ task.execute_remotely(
+ queue_name='default',
+ clone=False,
+ exit_process=True
+ )
+ ```
+
+Once `execute_remotely()` is called, ClearML stops the local process and enqueues the current task into the `default`
+queue. From there, an agent assigned to the queue can pull and launch it.
+
+## Running a Function Remotely
+
+You can execute a specific function remotely using [`Task.create_function_task()`](../references/sdk/task.md#create_function_task).
+This method creates a ClearML Task from a Python function and runs it on a remote machine.
+
+For example:
+
+```python
+from clearml import Task
+
+task = Task.init(project_name="myProject", task_name="Remote function")
+
+def run_me_remotely(some_argument):
+ print(some_argument)
+
+a_func_task = task.create_function_task(
+ func=run_me_remotely,
+ func_name='func_id_run_me_remotely',
+ task_name='a func task',
+ # everything below will be passed directly to our function as arguments
+ some_argument=123
+)
+```
+
+:::important Function Task Creation
+Function tasks must be created from within a regular task, itself created by calling `Task.init()`.
+:::
+
+Arguments passed to the function will be automatically logged in the task's **CONFIGURATION** tab under the **HYPERPARAMETERS > Function section**.
+Like any other arguments, they can be changed from the UI or programmatically.
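As a minimal sketch (continuing the example above, with illustrative names and values), the logged argument could be overridden on the function task before it is enqueued for an agent:

```python
from clearml import Task

# Continuing the example above: `a_func_task` is the task created by
# create_function_task(). Its argument is logged under the "Function"
# hyperparameter section, per the convention described above.
a_func_task.set_parameter(name='Function/some_argument', value=456)

# Send the function task to an agent listening on the `default` queue
Task.enqueue(task=a_func_task, queue_name='default')
```

Since this requires a running ClearML server and agent, treat it as a sketch rather than a drop-in script.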
+
diff --git a/docs/getting_started/reproduce_tasks.md b/docs/getting_started/reproduce_tasks.md
new file mode 100644
index 00000000..57bb1a98
--- /dev/null
+++ b/docs/getting_started/reproduce_tasks.md
@@ -0,0 +1,82 @@
+---
+title: Reproducing Tasks
+---
+
+:::note
+This tutorial assumes that you've already set up [ClearML](../clearml_sdk/clearml_sdk_setup.md) and [ClearML Agent](../clearml_agent/clearml_agent_setup.md).
+:::
+
+Tasks can be reproduced--or **Cloned**--for validation or as a baseline for further experimentation. When you initialize a task in your
+code, ClearML logs everything needed to reproduce your task and its environment:
+* Uncommitted changes
+* Used packages and their versions
+* Parameters
+* And more
+
+Cloning a task duplicates the task's configuration, but not its outputs.
+
+ClearML offers two ways to clone your task:
+* [Via the WebApp](#via-the-webapp)--no further code required
+* [Via programmatic interface](#via-programmatic-interface) using the `clearml` Python package
+
+Once you have cloned your task, you can modify its setup, and then execute it remotely on a machine of your choice using a ClearML Agent.
+
+## Via the WebApp
+
+**To clone a task in the ClearML WebApp:**
+1. Click on any project card to open its [task table](../webapp/webapp_exp_table.md).
+1. Right-click the task you want to reproduce.
+1. Click **Clone** in the context menu, which will open a **CLONE TASK** window.
+1. Click **CLONE** in the window.
+
+The newly cloned task's details page will open up. The cloned task is in *draft* mode, which means
+it can be modified. You can edit any of the Task's setup details, including:
+* Git and/or code references
+* Python packages to be installed
+* Container image to be used
+
+You can adjust the values of the task's hyperparameters and configuration files. See [Modifying Tasks](../webapp/webapp_exp_tuning.md#modifying-tasks) for more
+information about editing tasks in the UI.
+
+### Enqueue a Task
+Once you have set up a task, it is now time to execute it.
+
+**To execute a task through the ClearML WebApp:**
+1. In the task's details page, click "Menu"
+1. Click **ENQUEUE** to open the **ENQUEUE TASK** window
+1. In the window, select `default` in the `Queue` menu
+1. Click **ENQUEUE**
+
+This action pushes the task into the `default` queue. The task's status becomes *Pending* until an agent
+assigned to the queue fetches it, at which time the task's status becomes *Running*. The agent executes the
+task, and the task can be [tracked and its results visualized](../webapp/webapp_exp_track_visual.md).
+
+
+## Via Programmatic Interface
+
+The cloning, modifying, and enqueuing actions described above can also be performed programmatically using `clearml`.
+
+
+### Clone the Task
+
+To duplicate the task, use [`Task.clone()`](../references/sdk/task.md#taskclone), and input either a
+Task object or the Task's ID as the `source_task` argument.
+
+```python
+cloned_task = Task.clone(source_task='qw03485je3hap903ere54')
+```
+
+The cloned task is in *draft* mode, which means it can be modified. For modification options, such as setting new parameter
+values, see [Task SDK](../clearml_sdk/task_sdk.md).
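For instance, a cloned draft's parameters can be overridden in place before execution. A minimal sketch, assuming a hypothetical task ID, parameter name, and new task name:

```python
from clearml import Task

# Clone a previously executed task (hypothetical ID)
cloned_task = Task.clone(source_task='qw03485je3hap903ere54')

# The clone is a draft, so its configuration can be modified.
# Override a hyperparameter (hypothetical section/name) and rename the draft.
cloned_task.set_parameter(name='General/learning_rate', value=0.001)
cloned_task.rename('cloned task - lr 0.001')
```

The modified draft can then be enqueued as shown in the next section.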
+
+### Enqueue the Task
+To enqueue the task, use [`Task.enqueue()`](../references/sdk/task.md#taskenqueue), and input the Task object
+with the `task` argument, and the queue to push the task into with `queue_name`.
+
+```python
+Task.enqueue(task=cloned_task, queue_name='default')
+```
+
+This action pushes the task into the `default` queue. The task's status becomes *Pending* until an agent
+assigned to the queue fetches it, at which time the task's status becomes *Running*. The agent executes the
+task, and the task can be [tracked and its results visualized](../webapp/webapp_exp_track_visual.md).
\ No newline at end of file
diff --git a/docs/getting_started/task_trigger_schedule.md b/docs/getting_started/task_trigger_schedule.md
new file mode 100644
index 00000000..f1822e22
--- /dev/null
+++ b/docs/getting_started/task_trigger_schedule.md
@@ -0,0 +1,41 @@
+---
+title: Scheduling and Triggering Task Execution
+---
+
+In ClearML, tasks can be scheduled and triggered automatically, enabling seamless workflow automation. This section
+provides an overview of the mechanisms available for managing task scheduling and event-based
+triggering.
+
+## Task Scheduling
+Task scheduling allows users to define one-shot or periodic executions at specified times and intervals. This
+is useful for:
+
+* Running routine operations such as periodic model training, evaluation jobs, backups, and reports.
+* Automating data ingestion and preprocessing workflows.
+* Ensuring regular execution of monitoring and reporting tasks.
+
+ClearML offers the following scheduling solutions:
+* [**UI Application**](../webapp/applications/apps_task_scheduler.md) (available under the Enterprise Plan) - The **Task Scheduler** app
+ provides a simple no-code interface for managing task schedules.
+
+* [**Python Interface**](../references/sdk/scheduler.md) - Use the `TaskScheduler` class to programmatically manage
+ task schedules.
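For example, a `TaskScheduler` sketch that clones and enqueues an existing task every day at 9:30 (the task ID and queue names are placeholders; check the `TaskScheduler` reference linked above for the full argument list):

```python
from clearml.automation import TaskScheduler

scheduler = TaskScheduler()

# Re-run an existing task (placeholder ID) every day at 9:30
scheduler.add_task(
    schedule_task_id='aabbcc',
    queue='default',
    minute=30,
    hour=9,
    day=1,
)

# Run the scheduler itself as a service on the `services` queue
scheduler.start_remotely(queue='services')
```

This is a sketch, not a drop-in script: it assumes a configured ClearML server and an agent serving the `services` queue.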
+
+## Task Execution Triggering
+
+ClearML's trigger manager enables you to automate task execution based on the occurrence of events in the ClearML system, such as:
+* Changes in task status (e.g. running, completed, etc.)
+* Publication, archiving, or tagging of tasks, models, or datasets
+* Task metrics crossing predefined thresholds
+
+This is useful for:
+* Triggering a training task when a dataset has been tagged as `latest` or any other tag
+* Running an inference task when a model has been published
+* Retraining a model when accuracy falls below a certain threshold
+* And more
+
+ClearML offers the following trigger management solutions:
+* [**UI Application**](../webapp/applications/apps_trigger_manager.md) (available under the Enterprise Plan) - The **Trigger Manager** app
+  provides a simple no-code interface for managing task triggers.
+* [**Python Interface**](../references/sdk/trigger.md) - Use the `TriggerScheduler` class to programmatically manage
+ task triggers.
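For example, a `TriggerScheduler` sketch that launches a task whenever a model in a given project is published (IDs, project, and queue names are placeholders; see the `TriggerScheduler` reference linked above for the full set of trigger arguments):

```python
from clearml.automation import TriggerScheduler

# Poll the system for matching events every few minutes
trigger = TriggerScheduler(pooling_frequency_minutes=3)

# Launch a task (placeholder ID) when a model in `examples` is published
trigger.add_model_trigger(
    schedule_task_id='aabbcc',
    schedule_queue='default',
    trigger_project='examples',
    trigger_on_publish=True,
)

# Run the trigger scheduler as a service
trigger.start_remotely(queue='services')
```

As with the scheduler, this assumes a configured ClearML server and an agent serving the `services` queue.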
diff --git a/docs/getting_started/track_tasks.md b/docs/getting_started/track_tasks.md
new file mode 100644
index 00000000..0b8223f6
--- /dev/null
+++ b/docs/getting_started/track_tasks.md
@@ -0,0 +1,46 @@
+---
+title: Tracking Tasks
+---
+
+Every ClearML [task](../fundamentals/task.md) you create can be found in the **All Tasks** table and in its project's
+task table.
+
+The task table is a powerful tool for creating dashboards and views of your own projects, your team's projects, or your
+entire organization's development efforts.
+
+
+
+
+Customize the [task table](../webapp/webapp_exp_table.md) to fit your own needs by adding views of parameters, metrics, and tags.
+Filter and sort based on various criteria, such as parameters and metrics, making it simple to create custom
+views. This allows you to:
+
+* Create a dashboard for a project, presenting the latest model accuracy scores for immediate insights.
+* Create a live leaderboard displaying the best-performing tasks, updated in real time.
+* Monitor a project's progress and share it across the organization.
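The leaderboard idea can also be sketched programmatically. Assuming metric summaries shaped like the mapping returned by the SDK's `Task.get_last_scalar_metrics()` (`{title: {series: {'last': ...}}}`), ranking reduces to a sort; the task IDs, metric names, and values below are hypothetical:

```python
def rank_by_metric(task_metrics, title='accuracy', series='total'):
    """Sort a {task_id: metric-summary} mapping by a metric's last value, descending."""
    def last_value(item):
        _task_id, metrics = item
        return metrics.get(title, {}).get(series, {}).get('last', float('-inf'))
    return [task_id for task_id, _ in sorted(task_metrics.items(), key=last_value, reverse=True)]

# Hypothetical metric summaries for two tasks
metrics = {
    'task_a': {'accuracy': {'total': {'last': 0.91}}},
    'task_b': {'accuracy': {'total': {'last': 0.87}}},
}
print(rank_by_metric(metrics))  # ['task_a', 'task_b']
```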
+
+## Creating Leaderboards
+
+To create a leaderboard:
+
+1. Select a project in the ClearML WebApp and go to its task table
+1. Customize the column selection. Click "Settings"
+ to view and select columns to display.
+1. Filter tasks by name using the search bar to find tasks whose names contain the search term
+1. Filter by other categories by clicking "Filter"
+ on the relevant column. There are a few types of filters:
+ * Value set - Choose which values to include from a list of all values in the column
+ * Numerical ranges - Insert minimum and/or maximum value
+ * Date ranges - Insert starting and/or ending date and time
+ * Tags - Choose which tags to filter by from a list of all tags used in the column.
+    * Filter by multiple tag values using the **ANY** or **ALL** options, which correspond to the logical "OR" and "AND" respectively. These
+    options appear at the top of the tag list.
+ * Filter by the absence of a tag (logical "NOT") by clicking its checkbox twice. An `X` will appear in the tag's checkbox.
+1. Enable auto-refresh for real-time monitoring
+
+For more detailed instructions, see the [Tracking Leaderboards Tutorial](../guides/ui/building_leader_board.md).
+
+## Sharing Leaderboards
+
+Bookmark the URL of your customized leaderboard to save and share your view. The URL contains all parameters and values
+for your specific leaderboard view.
\ No newline at end of file
diff --git a/docs/guides/clearml-task/clearml_task_tutorial.md b/docs/guides/clearml-task/clearml_task_tutorial.md
index 085f352c..99b86e0f 100644
--- a/docs/guides/clearml-task/clearml_task_tutorial.md
+++ b/docs/guides/clearml-task/clearml_task_tutorial.md
@@ -7,7 +7,7 @@ on a remote or local machine, from a remote repository and your local machine.
### Prerequisites
-- [`clearml`](../../getting_started/ds/ds_first_steps.md) Python package installed and configured
+- [`clearml`](../../clearml_sdk/clearml_sdk_setup.md) Python package installed and configured
- [`clearml-agent`](../../clearml_agent/clearml_agent_setup.md#installation) running on at least one machine (to execute the task), configured to listen to `default` queue
### Executing Code from a Remote Repository
diff --git a/docs/guides/clearml_agent/executable_exp_containers.md b/docs/guides/clearml_agent/executable_exp_containers.md
index 35cd57da..884bc53a 100644
--- a/docs/guides/clearml_agent/executable_exp_containers.md
+++ b/docs/guides/clearml_agent/executable_exp_containers.md
@@ -9,7 +9,7 @@ script.
## Prerequisites
* [`clearml-agent`](../../clearml_agent/clearml_agent_setup.md#installation) installed and configured
-* [`clearml`](../../getting_started/ds/ds_first_steps.md#install-clearml) installed and configured
+* [`clearml`](../../clearml_sdk/clearml_sdk_setup.md#install-clearml) installed and configured
* [clearml](https://github.com/clearml/clearml) repo cloned (`git clone https://github.com/clearml/clearml.git`)
## Creating the ClearML Task
diff --git a/docs/guides/clearml_agent/exp_environment_containers.md b/docs/guides/clearml_agent/exp_environment_containers.md
index 0398e017..388d932e 100644
--- a/docs/guides/clearml_agent/exp_environment_containers.md
+++ b/docs/guides/clearml_agent/exp_environment_containers.md
@@ -11,7 +11,7 @@ be used when running optimization tasks.
## Prerequisites
* [`clearml-agent`](../../clearml_agent/clearml_agent_setup.md#installation) installed and configured
-* [`clearml`](../../getting_started/ds/ds_first_steps.md#install-clearml) installed and configured
+* [`clearml`](../../clearml_sdk/clearml_sdk_setup.md#install-clearml) installed and configured
* [clearml](https://github.com/clearml/clearml) repo cloned (`git clone https://github.com/clearml/clearml.git`)
## Creating the ClearML Task
diff --git a/docs/guides/frameworks/tensorflow/integration_keras_tuner.md b/docs/guides/frameworks/tensorflow/integration_keras_tuner.md
index 4635afd9..5db4d120 100644
--- a/docs/guides/frameworks/tensorflow/integration_keras_tuner.md
+++ b/docs/guides/frameworks/tensorflow/integration_keras_tuner.md
@@ -3,10 +3,10 @@ title: Keras Tuner
---
:::tip
-If you are not already using ClearML, see [Getting Started](../../../getting_started/ds/ds_first_steps.md) for setup
-instructions.
+If you are not already using ClearML, see [ClearML setup instructions](../../../clearml_sdk/clearml_sdk_setup.md).
:::
+
Integrate ClearML into code that uses [Keras Tuner](https://www.tensorflow.org/tutorials/keras/keras_tuner). By
specifying `ClearMLTunerLogger` (see [kerastuner.py](https://github.com/clearml/clearml/blob/master/clearml/external/kerastuner.py))
as the Keras Tuner logger, ClearML automatically logs scalars and hyperparameter optimization.
diff --git a/docs/guides/main.md b/docs/guides/main.md
index 89143186..202eaa40 100644
--- a/docs/guides/main.md
+++ b/docs/guides/main.md
@@ -1,6 +1,6 @@
---
id: guidemain
-title: Examples
+title: ClearML Tutorials
slug: /guides
---
diff --git a/docs/hyperdatasets/task.md b/docs/hyperdatasets/task.md
index ea6a9063..5543acaf 100644
--- a/docs/hyperdatasets/task.md
+++ b/docs/hyperdatasets/task.md
@@ -1,6 +1,10 @@
---
-title: Tasks
+title: Dataviews
---
+
+:::important ENTERPRISE FEATURE
+Dataviews are available under the ClearML Enterprise plan.
+:::
+
Hyper-Datasets extend the ClearML [**Task**](../fundamentals/task.md) with [Dataviews](dataviews.md).
diff --git a/docs/hyperdatasets/webapp/webapp_annotator.md b/docs/hyperdatasets/webapp/webapp_annotator.md
index fb48de89..3a52547f 100644
--- a/docs/hyperdatasets/webapp/webapp_annotator.md
+++ b/docs/hyperdatasets/webapp/webapp_annotator.md
@@ -2,6 +2,10 @@
title: Annotation Tasks
---
+:::important ENTERPRISE FEATURE
+Annotation tasks are available under the ClearML Enterprise plan.
+:::
+
Use the Annotations page to access and manage annotation Tasks.
Use annotation tasks to efficiently organize the annotation of frames in Dataset versions and manage the work of annotators
diff --git a/docs/hyperdatasets/webapp/webapp_datasets.md b/docs/hyperdatasets/webapp/webapp_datasets.md
index cddbe574..5cc3d06f 100644
--- a/docs/hyperdatasets/webapp/webapp_datasets.md
+++ b/docs/hyperdatasets/webapp/webapp_datasets.md
@@ -2,6 +2,10 @@
title: Hyper-Datasets Page
---
+:::important ENTERPRISE FEATURE
+Hyper-Datasets are available under the ClearML Enterprise plan.
+:::
+
Use the Hyper-Datasets Page to navigate between and manage hyper-datasets.
You can view the Hyper-Datasets page in Project view
diff --git a/docs/hyperdatasets/webapp/webapp_datasets_frames.md b/docs/hyperdatasets/webapp/webapp_datasets_frames.md
index ca92d2c8..ee4037b2 100644
--- a/docs/hyperdatasets/webapp/webapp_datasets_frames.md
+++ b/docs/hyperdatasets/webapp/webapp_datasets_frames.md
@@ -2,6 +2,10 @@
title: Working with Frames
---
+:::important ENTERPRISE FEATURE
+Hyper-Datasets are available under the ClearML Enterprise plan.
+:::
+
View and edit SingleFrames in the Dataset page. After selecting a Hyper-Dataset version, the **Version Browser** shows a sample
of frames and enables viewing SingleFrames and FramesGroups, and editing SingleFrames, in the [frame viewer](#frame-viewer).
Before opening the frame viewer, you can filter the frames by applying [simple](webapp_datasets_versioning.md#simple-frame-filtering) or [advanced](webapp_datasets_versioning.md#advanced-frame-filtering)
diff --git a/docs/hyperdatasets/webapp/webapp_datasets_versioning.md b/docs/hyperdatasets/webapp/webapp_datasets_versioning.md
index dfa64503..f40d44a3 100644
--- a/docs/hyperdatasets/webapp/webapp_datasets_versioning.md
+++ b/docs/hyperdatasets/webapp/webapp_datasets_versioning.md
@@ -2,6 +2,10 @@
title: Dataset Versions
---
+:::important ENTERPRISE FEATURE
+Hyper-Datasets are available under the ClearML Enterprise plan.
+:::
+
Use the Dataset versioning WebApp (UI) features for viewing, creating, modifying, and
deleting [Dataset versions](../dataset.md#dataset-versioning).
diff --git a/docs/hyperdatasets/webapp/webapp_dataviews.md b/docs/hyperdatasets/webapp/webapp_dataviews.md
index 73e1d821..9722528b 100644
--- a/docs/hyperdatasets/webapp/webapp_dataviews.md
+++ b/docs/hyperdatasets/webapp/webapp_dataviews.md
@@ -2,6 +2,10 @@
title: The Dataview Table
---
+:::important ENTERPRISE FEATURE
+Dataviews are available under the ClearML Enterprise plan.
+:::
+
The **Dataview table** is a [customizable](#customizing-the-dataview-table) list of Dataviews associated with a project.
Use it to view and create Dataviews, and access their info panels.
diff --git a/docs/hyperdatasets/webapp/webapp_exp_comparing.md b/docs/hyperdatasets/webapp/webapp_exp_comparing.md
index 8a5b2707..333ba0cb 100644
--- a/docs/hyperdatasets/webapp/webapp_exp_comparing.md
+++ b/docs/hyperdatasets/webapp/webapp_exp_comparing.md
@@ -2,6 +2,10 @@
title: Comparing Dataviews
---
+:::important ENTERPRISE FEATURE
+Dataviews are available under the ClearML Enterprise plan.
+:::
+
In addition to [ClearML's comparison features](../../webapp/webapp_exp_comparing.md), the ClearML Enterprise WebApp
supports comparing input data selection criteria of task [Dataviews](../dataviews.md), enabling to easily locate, visualize, and analyze differences.
diff --git a/docs/hyperdatasets/webapp/webapp_exp_modifying.md b/docs/hyperdatasets/webapp/webapp_exp_modifying.md
index 1c616ae2..bbb57e62 100644
--- a/docs/hyperdatasets/webapp/webapp_exp_modifying.md
+++ b/docs/hyperdatasets/webapp/webapp_exp_modifying.md
@@ -2,6 +2,10 @@
title: Modifying Dataviews
---
+:::important ENTERPRISE FEATURE
+Dataviews are available under the ClearML Enterprise plan.
+:::
+
A task that has been executed can be [cloned](../../webapp/webapp_exp_reproducing.md), then the cloned task's
execution details can be modified, and the modified task can be executed.
diff --git a/docs/hyperdatasets/webapp/webapp_exp_track_visual.md b/docs/hyperdatasets/webapp/webapp_exp_track_visual.md
index 978b613b..569d1fff 100644
--- a/docs/hyperdatasets/webapp/webapp_exp_track_visual.md
+++ b/docs/hyperdatasets/webapp/webapp_exp_track_visual.md
@@ -2,6 +2,10 @@
title: Task Dataviews
---
+:::important ENTERPRISE FEATURE
+Dataviews are available under the ClearML Enterprise plan.
+:::
+
While a task is running, and any time after it finishes, results are tracked and can be visualized in the ClearML
Enterprise WebApp (UI).
diff --git a/docs/img/app_bool_choice.png b/docs/img/app_bool_choice.png
new file mode 100644
index 00000000..d0df5dd8
Binary files /dev/null and b/docs/img/app_bool_choice.png differ
diff --git a/docs/img/app_bool_choice_dark.png b/docs/img/app_bool_choice_dark.png
new file mode 100644
index 00000000..5e28c914
Binary files /dev/null and b/docs/img/app_bool_choice_dark.png differ
diff --git a/docs/img/app_cond_str.png b/docs/img/app_cond_str.png
new file mode 100644
index 00000000..7ac43ae4
Binary files /dev/null and b/docs/img/app_cond_str.png differ
diff --git a/docs/img/app_cond_str_dark.png b/docs/img/app_cond_str_dark.png
new file mode 100644
index 00000000..8b26acbe
Binary files /dev/null and b/docs/img/app_cond_str_dark.png differ
diff --git a/docs/img/app_group.png b/docs/img/app_group.png
new file mode 100644
index 00000000..9d377d5a
Binary files /dev/null and b/docs/img/app_group.png differ
diff --git a/docs/img/app_group_dark.png b/docs/img/app_group_dark.png
new file mode 100644
index 00000000..116fec04
Binary files /dev/null and b/docs/img/app_group_dark.png differ
diff --git a/docs/img/app_html_elements.png b/docs/img/app_html_elements.png
new file mode 100644
index 00000000..67769ac1
Binary files /dev/null and b/docs/img/app_html_elements.png differ
diff --git a/docs/img/app_html_elements_dark.png b/docs/img/app_html_elements_dark.png
new file mode 100644
index 00000000..f9eb9eca
Binary files /dev/null and b/docs/img/app_html_elements_dark.png differ
diff --git a/docs/img/app_log.png b/docs/img/app_log.png
new file mode 100644
index 00000000..272def23
Binary files /dev/null and b/docs/img/app_log.png differ
diff --git a/docs/img/app_log_dark.png b/docs/img/app_log_dark.png
new file mode 100644
index 00000000..16c90163
Binary files /dev/null and b/docs/img/app_log_dark.png differ
diff --git a/docs/img/app_plot.png b/docs/img/app_plot.png
new file mode 100644
index 00000000..26907fce
Binary files /dev/null and b/docs/img/app_plot.png differ
diff --git a/docs/img/app_plot_dark.png b/docs/img/app_plot_dark.png
new file mode 100644
index 00000000..840e772a
Binary files /dev/null and b/docs/img/app_plot_dark.png differ
diff --git a/docs/img/app_proj_selection.png b/docs/img/app_proj_selection.png
new file mode 100644
index 00000000..3b125b91
Binary files /dev/null and b/docs/img/app_proj_selection.png differ
diff --git a/docs/img/app_proj_selection_dark.png b/docs/img/app_proj_selection_dark.png
new file mode 100644
index 00000000..8a3dc9e3
Binary files /dev/null and b/docs/img/app_proj_selection_dark.png differ
diff --git a/docs/img/gif/ai_dev_center.gif b/docs/img/gif/ai_dev_center.gif
new file mode 100644
index 00000000..7a76737a
Binary files /dev/null and b/docs/img/gif/ai_dev_center.gif differ
diff --git a/docs/img/gif/ai_dev_center_dark.gif b/docs/img/gif/ai_dev_center_dark.gif
new file mode 100644
index 00000000..ab5a4efb
Binary files /dev/null and b/docs/img/gif/ai_dev_center_dark.gif differ
diff --git a/docs/img/gif/dataset.gif b/docs/img/gif/dataset.gif
index a83978a4..3063a288 100644
Binary files a/docs/img/gif/dataset.gif and b/docs/img/gif/dataset.gif differ
diff --git a/docs/img/gif/dataset_dark.gif b/docs/img/gif/dataset_dark.gif
new file mode 100644
index 00000000..85486974
Binary files /dev/null and b/docs/img/gif/dataset_dark.gif differ
diff --git a/docs/img/gif/genai_engine.gif b/docs/img/gif/genai_engine.gif
new file mode 100644
index 00000000..ecca8a5e
Binary files /dev/null and b/docs/img/gif/genai_engine.gif differ
diff --git a/docs/img/gif/genai_engine_dark.gif b/docs/img/gif/genai_engine_dark.gif
new file mode 100644
index 00000000..6af30d0f
Binary files /dev/null and b/docs/img/gif/genai_engine_dark.gif differ
diff --git a/docs/img/gif/infra_control_plane.gif b/docs/img/gif/infra_control_plane.gif
new file mode 100644
index 00000000..66e70c8d
Binary files /dev/null and b/docs/img/gif/infra_control_plane.gif differ
diff --git a/docs/img/gif/infra_control_plane_dark.gif b/docs/img/gif/infra_control_plane_dark.gif
new file mode 100644
index 00000000..3d25ef82
Binary files /dev/null and b/docs/img/gif/infra_control_plane_dark.gif differ
diff --git a/docs/img/gif/integrations_yolov5.gif b/docs/img/gif/integrations_yolov5.gif
index f332940c..0a0795bd 100644
Binary files a/docs/img/gif/integrations_yolov5.gif and b/docs/img/gif/integrations_yolov5.gif differ
diff --git a/docs/img/gif/integrations_yolov5_dark.gif b/docs/img/gif/integrations_yolov5_dark.gif
new file mode 100644
index 00000000..6dcfb4f2
Binary files /dev/null and b/docs/img/gif/integrations_yolov5_dark.gif differ
diff --git a/docs/integrations/accelerate.md b/docs/integrations/accelerate.md
index 6be0f9ab..8d5d685e 100644
--- a/docs/integrations/accelerate.md
+++ b/docs/integrations/accelerate.md
@@ -9,7 +9,7 @@ such as required packages and uncommitted changes, and supports reporting scalar
## Setup
-To use Accelerate's ClearML tracker, make sure that `clearml` is [installed and set up](../getting_started/ds/ds_first_steps.md#install-clearml)
+To use Accelerate's ClearML tracker, make sure that `clearml` is [installed and set up](../clearml_sdk/clearml_sdk_setup#install-clearml)
in your environment, and use the `log_with` parameter when instantiating an `Accelerator`:
```python
diff --git a/docs/integrations/autokeras.md b/docs/integrations/autokeras.md
index a92eb852..dcf38cff 100644
--- a/docs/integrations/autokeras.md
+++ b/docs/integrations/autokeras.md
@@ -3,7 +3,7 @@ title: AutoKeras
---
:::tip
-If you are not already using ClearML, see [Getting Started](../getting_started/ds/ds_first_steps.md) for setup
+If you are not already using ClearML, see [ClearML Setup](../clearml_sdk/clearml_sdk_setup) for setup
instructions.
:::
@@ -95,7 +95,8 @@ and shuts down instances as needed, according to a resource budget that you set.
### Cloning, Editing, and Enqueuing
-
+
+
Use ClearML's web interface to edit task details, like configuration parameters or input models, then execute the task
with the new configuration on a remote machine:
diff --git a/docs/integrations/catboost.md b/docs/integrations/catboost.md
index 50c41700..f3e60261 100644
--- a/docs/integrations/catboost.md
+++ b/docs/integrations/catboost.md
@@ -3,7 +3,7 @@ title: CatBoost
---
:::tip
-If you are not already using ClearML, see [Getting Started](../getting_started/ds/ds_first_steps.md) for setup
+If you are not already using ClearML, see [ClearML Setup](../clearml_sdk/clearml_sdk_setup) for setup
instructions.
:::
@@ -93,7 +93,8 @@ and shuts down instances as needed, according to a resource budget that you set.
### Cloning, Editing, and Enqueuing
-
+
+
Use ClearML's web interface to edit task details, like configuration parameters or input models, then execute the task
with the new configuration on a remote machine:
@@ -117,5 +118,5 @@ task.execute_remotely(queue_name='default', exit_process=True)
## Hyperparameter Optimization
Use ClearML's [`HyperParameterOptimizer`](../references/sdk/hpo_optimization_hyperparameteroptimizer.md) class to find
-the hyperparameter values that yield the best performing models. See [Hyperparameter Optimization](../fundamentals/hpo.md)
+the hyperparameter values that yield the best performing models. See [Hyperparameter Optimization](../getting_started/hpo.md)
for more information.
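To make the division of labor concrete, here is a framework-free toy grid search illustrating what `HyperParameterOptimizer` automates (the search space and scoring function below are made up for the sketch; the real class instead clones a base task per candidate and reads back the metric each run reports):

```python
import itertools

# Toy illustration of hyperparameter optimization: enumerate a search
# space, score every candidate, keep the best. ClearML's
# HyperParameterOptimizer does the analogous thing against real tasks.
search_space = {
    "learning_rate": [0.1, 0.01, 0.001],
    "batch_size": [16, 32],
}

def objective(params):
    # Stand-in for a training run's reported metric (higher is better).
    return -abs(params["learning_rate"] - 0.01) - params["batch_size"] / 1000

candidates = [
    dict(zip(search_space, combo))
    for combo in itertools.product(*search_space.values())
]
best = max(candidates, key=objective)
print(best)  # → {'learning_rate': 0.01, 'batch_size': 16}
```

With ClearML, the enumeration, scheduling, and metric collection happen server-side across agents, so the same search runs in parallel on remote workers instead of in a local loop.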
diff --git a/docs/integrations/click.md b/docs/integrations/click.md
index cf9298bd..c1169615 100644
--- a/docs/integrations/click.md
+++ b/docs/integrations/click.md
@@ -3,7 +3,7 @@ title: Click
---
:::tip
-If you are not already using ClearML, see [Getting Started](../getting_started/ds/ds_first_steps.md) for setup
+If you are not already using ClearML, see [ClearML Setup](../clearml_sdk/clearml_sdk_setup) for setup
instructions.
:::
diff --git a/docs/integrations/fastai.md b/docs/integrations/fastai.md
index e8fd03e5..f532be3a 100644
--- a/docs/integrations/fastai.md
+++ b/docs/integrations/fastai.md
@@ -3,8 +3,7 @@ title: Fast.ai
---
:::tip
-If you are not already using ClearML, see [Getting Started](../getting_started/ds/ds_first_steps.md) for setup
-instructions.
+If you are not already using ClearML, see [ClearML Setup instructions](../clearml_sdk/clearml_sdk_setup).
:::
ClearML integrates seamlessly with [fast.ai](https://www.fast.ai/), automatically logging its models and scalars.
@@ -93,7 +92,8 @@ and shuts down instances as needed, according to a resource budget that you set.
### Cloning, Editing, and Enqueuing
-
+
+
Use ClearML's web interface to edit task details, like configuration parameters or input models, then execute the task
with the new configuration on a remote machine:
diff --git a/docs/integrations/hydra.md b/docs/integrations/hydra.md
index d8a05c04..faaa41b0 100644
--- a/docs/integrations/hydra.md
+++ b/docs/integrations/hydra.md
@@ -3,8 +3,7 @@ title: Hydra
---
:::tip
-If you are not already using ClearML, see [Getting Started](../getting_started/ds/ds_first_steps.md) for setup
-instructions.
+If you are not already using ClearML, see [ClearML Setup instructions](../clearml_sdk/clearml_sdk_setup).
:::
diff --git a/docs/integrations/ignite.md b/docs/integrations/ignite.md
index 9b2de832..683292ab 100644
--- a/docs/integrations/ignite.md
+++ b/docs/integrations/ignite.md
@@ -3,8 +3,7 @@ title: PyTorch Ignite
---
:::tip
-If you are not already using ClearML, see [Getting Started](../getting_started/ds/ds_first_steps.md) for setup
-instructions.
+If you are not already using ClearML, see [ClearML Setup instructions](../clearml_sdk/clearml_sdk_setup).
:::
[PyTorch Ignite](https://pytorch.org/ignite/index.html) is a library for training and evaluating neural networks in
diff --git a/docs/integrations/integrations.md b/docs/integrations/integrations.md
new file mode 100644
index 00000000..589d30fa
--- /dev/null
+++ b/docs/integrations/integrations.md
@@ -0,0 +1,40 @@
+# ClearML Integrations
+
+ClearML seamlessly integrates with a wide range of popular machine learning frameworks, tools, and platforms to enhance your ML development workflow. These integrations enable automatic experiment tracking, model management, and pipeline orchestration across your preferred tools.
+
+## Deep Learning Frameworks
+* [PyTorch](pytorch.md)
+* [TensorFlow](tensorflow.md)
+* [Keras](keras.md)
+* [YOLO v5](yolov5.md)
+* [YOLO v8](yolov8.md)
+* [MMEngine](mmengine.md)
+* [MMCV](mmcv.md)
+* [MONAI](monai.md)
+* [NVIDIA TAO](tao.md)
+* [MegEngine](megengine.md)
+* [FastAI](fastai.md)
+
+## ML Frameworks
+* [scikit-learn](scikit_learn.md)
+* [XGBoost](xgboost.md)
+* [LightGBM](lightgbm.md)
+* [CatBoost](catboost.md)
+* [Seaborn](seaborn.md)
+
+## Configuration and Optimization
+* [AutoKeras](autokeras.md)
+* [Keras Tuner](keras_tuner.md)
+* [Optuna](optuna.md)
+* [Hydra](hydra.md)
+* [Click](click.md)
+* [Python Fire](python_fire.md)
+* [jsonargparse](jsonargparse.md)
+
+## MLOps and Visualization
+* [TensorBoard](tensorboard.md)
+* [TensorBoardX](tensorboardx.md)
+* [Matplotlib](matplotlib.md)
+* [LangChain](langchain.md)
+* [PyTorch Ignite](ignite.md)
+* [PyTorch Lightning](pytorch_lightning.md)
diff --git a/docs/integrations/jsonargparse.md b/docs/integrations/jsonargparse.md
index 8f348e45..42cc2fa2 100644
--- a/docs/integrations/jsonargparse.md
+++ b/docs/integrations/jsonargparse.md
@@ -3,11 +3,11 @@ title: jsonargparse
---
:::tip
-If you are not already using ClearML, see [Getting Started](../getting_started/ds/ds_first_steps.md) for setup
-instructions.
+If you are not already using ClearML, see [ClearML Setup instructions](../clearml_sdk/clearml_sdk_setup).
:::
+
[jsonargparse](https://github.com/omni-us/jsonargparse) is a Python package for creating command-line interfaces.
ClearML integrates seamlessly with `jsonargparse` and automatically logs its command-line parameters and connected
configuration files.
diff --git a/docs/integrations/keras.md b/docs/integrations/keras.md
index 52f6f487..d9ac7a0d 100644
--- a/docs/integrations/keras.md
+++ b/docs/integrations/keras.md
@@ -3,10 +3,10 @@ title: Keras
---
:::tip
-If you are not already using ClearML, see [Getting Started](../getting_started/ds/ds_first_steps.md) for setup
-instructions.
+If you are not already using ClearML, see [ClearML Setup instructions](../clearml_sdk/clearml_sdk_setup).
:::
+
ClearML integrates with [Keras](https://keras.io/) out-of-the-box, automatically logging its models, scalars,
TensorFlow definitions, and TensorBoard outputs.
@@ -105,7 +105,8 @@ and shuts down instances as needed, according to a resource budget that you set.
### Cloning, Editing, and Enqueuing
-
+
+
Use ClearML's web interface to edit task details, like configuration parameters or input models, then execute the task
with the new configuration on a remote machine:
@@ -129,5 +130,5 @@ task.execute_remotely(queue_name='default', exit_process=True)
## Hyperparameter Optimization
Use ClearML's [`HyperParameterOptimizer`](../references/sdk/hpo_optimization_hyperparameteroptimizer.md) class to find
-the hyperparameter values that yield the best performing models. See [Hyperparameter Optimization](../fundamentals/hpo.md)
+the hyperparameter values that yield the best performing models. See [Hyperparameter Optimization](../getting_started/hpo.md)
for more information.
diff --git a/docs/integrations/keras_tuner.md b/docs/integrations/keras_tuner.md
index d75cffc1..705526b8 100644
--- a/docs/integrations/keras_tuner.md
+++ b/docs/integrations/keras_tuner.md
@@ -3,10 +3,10 @@ title: Keras Tuner
---
:::tip
-If you are not already using ClearML, see [Getting Started](../getting_started/ds/ds_first_steps.md) for setup
-instructions.
+If you are not already using ClearML, see [ClearML Setup instructions](../clearml_sdk/clearml_sdk_setup).
:::
+
[Keras Tuner](https://www.tensorflow.org/tutorials/keras/keras_tuner) is a library that helps you pick the optimal set
of hyperparameters for training your models. ClearML integrates seamlessly with `kerastuner` and automatically logs
task scalars, the output model, and hyperparameter optimization summary.
diff --git a/docs/integrations/langchain.md b/docs/integrations/langchain.md
index f4fef37d..c85f7551 100644
--- a/docs/integrations/langchain.md
+++ b/docs/integrations/langchain.md
@@ -3,10 +3,10 @@ title: LangChain
---
:::tip
-If you are not already using ClearML, see [Getting Started](../getting_started/ds/ds_first_steps.md) for setup
-instructions.
+If you are not already using ClearML, see [ClearML Setup instructions](../clearml_sdk/clearml_sdk_setup).
:::
+
[LangChain](https://github.com/langchain-ai/langchain) is a popular framework for developing applications powered by
language models. You can integrate ClearML into your LangChain code using the built-in `ClearMLCallbackHandler`. This
class is used to create a ClearML Task to log LangChain assets and metrics.
diff --git a/docs/integrations/lightgbm.md b/docs/integrations/lightgbm.md
index cce9887e..7f6d2628 100644
--- a/docs/integrations/lightgbm.md
+++ b/docs/integrations/lightgbm.md
@@ -3,10 +3,10 @@ title: LightGBM
---
:::tip
-If you are not already using ClearML, see [Getting Started](../getting_started/ds/ds_first_steps.md) for setup
-instructions.
+If you are not already using ClearML, see [ClearML Setup instructions](../clearml_sdk/clearml_sdk_setup).
:::
+
ClearML integrates seamlessly with [LightGBM](https://github.com/microsoft/LightGBM), automatically logging its models,
metric plots, and parameters.
@@ -94,7 +94,8 @@ and shuts down instances as needed, according to a resource budget that you set.
### Cloning, Editing, and Enqueuing
-
+
+
Use ClearML's web interface to edit task details, like configuration parameters or input models, then execute the task
with the new configuration on a remote machine:
@@ -118,5 +119,5 @@ task.execute_remotely(queue_name='default', exit_process=True)
## Hyperparameter Optimization
Use ClearML's [`HyperParameterOptimizer`](../references/sdk/hpo_optimization_hyperparameteroptimizer.md) class to find
-the hyperparameter values that yield the best performing models. See [Hyperparameter Optimization](../fundamentals/hpo.md)
+the hyperparameter values that yield the best performing models. See [Hyperparameter Optimization](../getting_started/hpo.md)
for more information.
diff --git a/docs/integrations/matplotlib.md b/docs/integrations/matplotlib.md
index 06714ff8..dde8e0cd 100644
--- a/docs/integrations/matplotlib.md
+++ b/docs/integrations/matplotlib.md
@@ -3,10 +3,10 @@ title: Matplotlib
---
:::tip
-If you are not already using ClearML, see [Getting Started](../getting_started/ds/ds_first_steps.md) for setup
-instructions.
+If you are not already using ClearML, see [ClearML Setup instructions](../clearml_sdk/clearml_sdk_setup).
:::
+
[Matplotlib](https://matplotlib.org/) is a Python library for data visualization. ClearML automatically captures plots
and images created with `matplotlib`.
diff --git a/docs/integrations/megengine.md b/docs/integrations/megengine.md
index 77cad702..3ad13771 100644
--- a/docs/integrations/megengine.md
+++ b/docs/integrations/megengine.md
@@ -3,10 +3,10 @@ title: MegEngine
---
:::tip
-If you are not already using ClearML, see [Getting Started](../getting_started/ds/ds_first_steps.md) for setup
-instructions.
+If you are not already using ClearML, see [ClearML Setup instructions](../clearml_sdk/clearml_sdk_setup).
:::
+
ClearML integrates seamlessly with [MegEngine](https://github.com/MegEngine/MegEngine), automatically logging its models.
All you have to do is add two lines of code to your MegEngine script:
@@ -90,7 +90,8 @@ and shuts down instances as needed, according to a resource budget that you set.
### Cloning, Editing, and Enqueuing
-
+
+
Use ClearML's web interface to edit task details, like configuration parameters or input models, then execute the task
with the new configuration on a remote machine:
@@ -114,5 +115,5 @@ task.execute_remotely(queue_name='default', exit_process=True)
## Hyperparameter Optimization
Use ClearML's [`HyperParameterOptimizer`](../references/sdk/hpo_optimization_hyperparameteroptimizer.md) class to find
-the hyperparameter values that yield the best performing models. See [Hyperparameter Optimization](../fundamentals/hpo.md)
+the hyperparameter values that yield the best performing models. See [Hyperparameter Optimization](../getting_started/hpo.md)
for more information.
diff --git a/docs/integrations/mmcv.md b/docs/integrations/mmcv.md
index 8c77ca70..b9833820 100644
--- a/docs/integrations/mmcv.md
+++ b/docs/integrations/mmcv.md
@@ -7,10 +7,10 @@ title: MMCV v1.x
:::
:::tip
-If you are not already using ClearML, see [Getting Started](../getting_started/ds/ds_first_steps.md) for setup
-instructions.
+If you are not already using ClearML, see [ClearML Setup instructions](../clearml_sdk/clearml_sdk_setup).
:::
+
[MMCV](https://github.com/open-mmlab/mmcv/tree/1.x) is a computer vision framework developed by OpenMMLab. You can integrate ClearML into your
code using the `mmcv` package's [`ClearMLLoggerHook`](https://mmcv.readthedocs.io/en/master/_modules/mmcv/runner/hooks/logger/clearml.html)
class. This class is used to create a ClearML Task and to automatically log metrics.
diff --git a/docs/integrations/mmengine.md b/docs/integrations/mmengine.md
index 09d64256..733625f6 100644
--- a/docs/integrations/mmengine.md
+++ b/docs/integrations/mmengine.md
@@ -3,10 +3,10 @@ title: MMEngine
---
:::tip
-If you are not already using ClearML, see [Getting Started](../getting_started/ds/ds_first_steps.md) for setup
-instructions.
+If you are not already using ClearML, see [ClearML Setup instructions](../clearml_sdk/clearml_sdk_setup).
:::
+
[MMEngine](https://github.com/open-mmlab/mmengine) is a library for training deep learning models based on PyTorch.
MMEngine supports ClearML through a built-in logger: it automatically logs task environment information, such as
required packages and uncommitted changes, and supports reporting scalars, parameters, and debug samples.
diff --git a/docs/integrations/monai.md b/docs/integrations/monai.md
index 3dc98233..8b82e036 100644
--- a/docs/integrations/monai.md
+++ b/docs/integrations/monai.md
@@ -3,10 +3,10 @@ title: MONAI
---
:::tip
-If you are not already using ClearML, see [Getting Started](../getting_started/ds/ds_first_steps.md) for setup
-instructions.
+If you are not already using ClearML, see [ClearML Setup instructions](../clearml_sdk/clearml_sdk_setup).
:::
+
[MONAI](https://github.com/Project-MONAI/MONAI) is a PyTorch-based, open-source framework for deep learning in healthcare
imaging. You can integrate ClearML into your code using MONAI's built-in handlers: [`ClearMLImageHandler`, `ClearMLStatsHandler`](#clearmlimagehandler-and-clearmlstatshandler),
and [`ModelCheckpoint`](#modelcheckpoint).
diff --git a/docs/integrations/optuna.md b/docs/integrations/optuna.md
index f660f78b..2e4c821b 100644
--- a/docs/integrations/optuna.md
+++ b/docs/integrations/optuna.md
@@ -2,7 +2,7 @@
title: Optuna
---
-[Optuna](https://optuna.readthedocs.io/en/latest) is a [hyperparameter optimization](../fundamentals/hpo.md) framework,
+[Optuna](https://optuna.readthedocs.io/en/latest) is a [hyperparameter optimization](../getting_started/hpo.md) framework,
which makes use of different samplers such as grid search, random, Bayesian, and evolutionary algorithms. You can integrate
Optuna into ClearML's automated hyperparameter optimization.
diff --git a/docs/integrations/pytorch.md b/docs/integrations/pytorch.md
index 59191fc9..9373b4e7 100644
--- a/docs/integrations/pytorch.md
+++ b/docs/integrations/pytorch.md
@@ -3,10 +3,10 @@ title: PyTorch
---
:::tip
-If you are not already using ClearML, see [Getting Started](../getting_started/ds/ds_first_steps.md) for setup
-instructions.
+If you are not already using ClearML, see [ClearML Setup instructions](../clearml_sdk/clearml_sdk_setup).
:::
+
ClearML integrates seamlessly with [PyTorch](https://pytorch.org/), automatically logging its models.
All you have to do is add two lines of code to your PyTorch script:
@@ -114,7 +114,8 @@ and shuts down instances as needed, according to a resource budget that you set.
### Cloning, Editing, and Enqueuing
-
+
+
Use ClearML's web interface to edit task details, like configuration parameters or input models, then execute the task
with the new configuration on a remote machine:
diff --git a/docs/integrations/pytorch_lightning.md b/docs/integrations/pytorch_lightning.md
index d01f5cb2..41e95bba 100644
--- a/docs/integrations/pytorch_lightning.md
+++ b/docs/integrations/pytorch_lightning.md
@@ -3,10 +3,10 @@ title: PyTorch Lightning
---
:::tip
-If you are not already using ClearML, see [Getting Started](../getting_started/ds/ds_first_steps.md) for setup
-instructions.
+If you are not already using ClearML, see [ClearML Setup instructions](../clearml_sdk/clearml_sdk_setup).
:::
+
[PyTorch Lightning](https://github.com/Lightning-AI/lightning) is a framework that simplifies the process of training and deploying PyTorch models. ClearML seamlessly
integrates with PyTorch Lightning, automatically logging PyTorch models, parameters supplied by [LightningCLI](https://lightning.ai/docs/pytorch/stable/cli/lightning_cli.html),
and more.
@@ -120,7 +120,8 @@ and shuts down instances as needed, according to a resource budget that you set.
### Cloning, Editing, and Enqueuing
-
+
+
Use ClearML's web interface to edit task details, like configuration parameters or input models, then execute the task
with the new configuration on a remote machine:
@@ -144,6 +145,6 @@ task.execute_remotely(queue_name='default', exit_process=True)
## Hyperparameter Optimization
Use ClearML's [`HyperParameterOptimizer`](../references/sdk/hpo_optimization_hyperparameteroptimizer.md) class to find
-the hyperparameter values that yield the best performing models. See [Hyperparameter Optimization](../fundamentals/hpo.md)
+the hyperparameter values that yield the best performing models. See [Hyperparameter Optimization](../getting_started/hpo.md)
for more information.
diff --git a/docs/integrations/scikit_learn.md b/docs/integrations/scikit_learn.md
index 5a6afbab..c0fb490a 100644
--- a/docs/integrations/scikit_learn.md
+++ b/docs/integrations/scikit_learn.md
@@ -3,10 +3,10 @@ title: scikit-learn
---
:::tip
-If you are not already using ClearML, see [Getting Started](../getting_started/ds/ds_first_steps.md) for setup
-instructions.
+If you are not already using ClearML, see [ClearML Setup instructions](../clearml_sdk/clearml_sdk_setup).
:::
+
ClearML integrates seamlessly with [scikit-learn](https://scikit-learn.org/stable/), automatically logging models created
with `joblib`.
@@ -96,7 +96,8 @@ and shuts down instances as needed, according to a resource budget that you set.
### Cloning, Editing, and Enqueuing
-
+
+
Use ClearML's web interface to edit task details, like configuration parameters or input models, then execute the task
with the new configuration on a remote machine:
diff --git a/docs/integrations/seaborn.md b/docs/integrations/seaborn.md
index ca2e1a2c..54b65583 100644
--- a/docs/integrations/seaborn.md
+++ b/docs/integrations/seaborn.md
@@ -3,10 +3,10 @@ title: Seaborn
---
:::tip
-If you are not already using ClearML, see [Getting Started](../getting_started/ds/ds_first_steps.md) for setup
-instructions.
+If you are not already using ClearML, see [ClearML Setup instructions](../clearml_sdk/clearml_sdk_setup).
:::
+
[seaborn](https://seaborn.pydata.org/) is a Python library for data visualization.
ClearML automatically captures plots created using `seaborn`. All you have to do is add two
lines of code to your script:
diff --git a/docs/integrations/tao.md b/docs/integrations/tao.md
index ec80c93f..6a2376b2 100644
--- a/docs/integrations/tao.md
+++ b/docs/integrations/tao.md
@@ -113,7 +113,8 @@ and shuts down instances as needed, according to a resource budget that you set.
### Cloning, Editing, and Enqueuing
-
+
+
Use ClearML's web interface to edit task details, like configuration parameters or input models, then execute the task
with the new configuration on a remote machine:
diff --git a/docs/integrations/tensorboard.md b/docs/integrations/tensorboard.md
index a0921c3b..317c983f 100644
--- a/docs/integrations/tensorboard.md
+++ b/docs/integrations/tensorboard.md
@@ -3,9 +3,10 @@ title: TensorBoard
---
:::tip
-If you are not already using ClearML, see [Getting Started](../getting_started/ds/ds_first_steps.md).
+If you are not already using ClearML, see [ClearML Setup instructions](../clearml_sdk/clearml_sdk_setup).
:::
+
[TensorBoard](https://www.tensorflow.org/tensorboard) is TensorFlow's data visualization toolkit.
ClearML automatically captures all data logged to TensorBoard. All you have to do is add two
lines of code to your script:
diff --git a/docs/integrations/tensorboardx.md b/docs/integrations/tensorboardx.md
index c8bf97bf..673b2c7b 100644
--- a/docs/integrations/tensorboardx.md
+++ b/docs/integrations/tensorboardx.md
@@ -3,7 +3,7 @@ title: TensorboardX
---
:::tip
-If you are not already using ClearML, see [Getting Started](../getting_started/ds/ds_first_steps.md).
+If you are not already using ClearML, see [ClearML Setup instructions](../clearml_sdk/clearml_sdk_setup).
:::
[TensorboardX](https://tensorboardx.readthedocs.io/en/latest/tutorial.html#what-is-tensorboard-x) is a data
diff --git a/docs/integrations/tensorflow.md b/docs/integrations/tensorflow.md
index 3bdaee58..49040835 100644
--- a/docs/integrations/tensorflow.md
+++ b/docs/integrations/tensorflow.md
@@ -3,10 +3,10 @@ title: TensorFlow
---
:::tip
-If you are not already using ClearML, see [Getting Started](../getting_started/ds/ds_first_steps.md) for setup
-instructions.
+If you are not already using ClearML, see [ClearML Setup instructions](../clearml_sdk/clearml_sdk_setup).
:::
+
ClearML integrates with [TensorFlow](https://www.tensorflow.org/) out-of-the-box, automatically logging its models,
definitions, scalars, as well as TensorBoard outputs.
@@ -107,7 +107,8 @@ and shuts down instances as needed, according to a resource budget that you set.
### Cloning, Editing, and Enqueuing
-
+
+
Use ClearML's web interface to edit task details, like configuration parameters or input models, then execute the task
with the new configuration on a remote machine:
@@ -131,5 +132,5 @@ task.execute_remotely(queue_name='default', exit_process=True)
## Hyperparameter Optimization
Use ClearML's [`HyperParameterOptimizer`](../references/sdk/hpo_optimization_hyperparameteroptimizer.md) class to find
-the hyperparameter values that yield the best performing models. See [Hyperparameter Optimization](../fundamentals/hpo.md)
+the hyperparameter values that yield the best performing models. See [Hyperparameter Optimization](../getting_started/hpo.md)
for more information.
diff --git a/docs/integrations/transformers.md b/docs/integrations/transformers.md
index 754fd07f..74b8c69b 100644
--- a/docs/integrations/transformers.md
+++ b/docs/integrations/transformers.md
@@ -78,7 +78,8 @@ and shuts down instances as needed, according to a resource budget that you set.
### Cloning, Editing, and Enqueuing
-
+
+
Use ClearML's web interface to edit task details, like configuration parameters or input models, then execute the task
with the new configuration on a remote machine:
@@ -90,5 +91,5 @@ The ClearML Agent executing the task will use the new values to [override any ha
## Hyperparameter Optimization
Use ClearML's [`HyperParameterOptimizer`](../references/sdk/hpo_optimization_hyperparameteroptimizer.md) class to find
-the hyperparameter values that yield the best performing models. See [Hyperparameter Optimization](../fundamentals/hpo.md)
+the hyperparameter values that yield the best performing models. See [Hyperparameter Optimization](../getting_started/hpo.md)
for more information.
diff --git a/docs/integrations/xgboost.md b/docs/integrations/xgboost.md
index 7f230f81..876f5fb2 100644
--- a/docs/integrations/xgboost.md
+++ b/docs/integrations/xgboost.md
@@ -3,8 +3,7 @@ title: XGBoost
---
:::tip
-If you are not already using ClearML, see [Getting Started](../getting_started/ds/ds_first_steps.md) for setup
-instructions.
+If you are not already using ClearML, see [ClearML Setup instructions](../clearml_sdk/clearml_sdk_setup).
:::
ClearML integrates seamlessly with [XGBoost](https://xgboost.readthedocs.io/en/stable/), automatically logging its models,
@@ -121,7 +120,8 @@ and shuts down instances as needed, according to a resource budget that you set.
### Cloning, Editing, and Enqueuing
-
+
+
Use ClearML's web interface to edit task details, like configuration parameters or input models, then execute the task
with the new configuration on a remote machine:
@@ -145,5 +145,5 @@ task.execute_remotely(queue_name='default', exit_process=True)
## Hyperparameter Optimization
Use ClearML's [`HyperParameterOptimizer`](../references/sdk/hpo_optimization_hyperparameteroptimizer.md) class to find
-the hyperparameter values that yield the best performing models. See [Hyperparameter Optimization](../fundamentals/hpo.md)
+the hyperparameter values that yield the best performing models. See [Hyperparameter Optimization](../getting_started/hpo.md)
for more information.
diff --git a/docs/integrations/yolov5.md b/docs/integrations/yolov5.md
index 6690cf75..9818d8c9 100644
--- a/docs/integrations/yolov5.md
+++ b/docs/integrations/yolov5.md
@@ -7,7 +7,7 @@ built in logger:
* Track every YOLOv5 training run in ClearML
* Version and easily access your custom training data with [ClearML Data](../clearml_data/clearml_data.md)
* Remotely train and monitor your YOLOv5 training runs using [ClearML Agent](../clearml_agent.md)
-* Get the very best mAP using ClearML [Hyperparameter Optimization](../fundamentals/hpo.md)
+* Get the very best mAP using ClearML [Hyperparameter Optimization](../getting_started/hpo.md)
* Turn your newly trained YOLOv5 model into an API with just a few commands using [ClearML Serving](../clearml_serving/clearml_serving.md)
## Setup
@@ -169,7 +169,8 @@ and shuts down instances as needed, according to a resource budget that you set.
### Cloning, Editing, and Enqueuing
-
+
+
Use ClearML's web interface to edit task details, like configuration parameters or input models, then execute the task
with the new configuration on a remote machine:
diff --git a/docs/integrations/yolov8.md b/docs/integrations/yolov8.md
index 90d38321..f3080412 100644
--- a/docs/integrations/yolov8.md
+++ b/docs/integrations/yolov8.md
@@ -166,4 +166,5 @@ with the new configuration on a remote machine:
The ClearML Agent executing the task will use the new values to [override any hard coded values](../clearml_agent.md).
-
+
+
diff --git a/docs/overview.md b/docs/overview.md
new file mode 100644
index 00000000..12cb5402
--- /dev/null
+++ b/docs/overview.md
@@ -0,0 +1,82 @@
+---
+id: overview
+title: What is ClearML?
+slug: /
+---
+
+# ClearML Documentation
+
+## Overview
+Welcome to the documentation for ClearML, the end-to-end platform for streamlining AI development and deployment. ClearML consists of three essential layers:
+1. [**Infrastructure Control Plane**](#infrastructure-control-plane) (Cloud/On-Prem Agnostic)
+2. [**AI Development Center**](#ai-development-center)
+3. [**GenAI App Engine**](#genai-app-engine)
+
+Each layer provides distinct functionality to ensure an efficient and scalable AI workflow from development to deployment.
+
+
+
+
+---
+
+## Infrastructure Control Plane
+The Infrastructure Control Plane is the foundation of the ClearML platform. It provisions and manages compute resources, letting administrators expose them as GPU-as-a-Service (GPUaaS) with no-hassle configuration.
+Using the Infrastructure Control Plane, DevOps and IT teams can manage and optimize GPU resources for high performance and cost efficiency.
+
+#### Features
+- **Resource Management:** Automates the allocation and management of GPU resources.
+- **Workload Autoscaling:** Scales GPU resources seamlessly based on workload demands.
+- **Monitoring and Logging:** Provides comprehensive monitoring and logging for GPU utilization and performance.
+- **Cost Optimization:** Consolidates cloud and on-prem compute into a seamless GPUaaS offering.
+- **Deployment Flexibility:** Runs your workloads on both cloud and on-premises compute.
+
+
+
+
+---
+
+## AI Development Center
+The AI Development Center offers a robust environment for developing, training, and testing AI models. It is designed to be cloud and on-premises agnostic, providing flexibility in deployment.
+
+#### Features
+- **Integrated Development Environment:** A comprehensive IDE for training, testing, and debugging AI models.
+- **Model Training:** Scalable and distributed model training and hyperparameter optimization.
+- **Data Management:** Tools for data preprocessing, management, and versioning.
+- **Experiment Tracking:** Track metrics, artifacts, and logs; manage versions; and compare results.
+- **Workflow Automation:** Build pipelines to formalize your workflows.
+
+
+
+
+---
+
+## GenAI App Engine
+The GenAI App Engine is designed to deploy large language models (LLMs) into GPU clusters and manage various AI workloads, including Retrieval-Augmented Generation (RAG) tasks. This layer also handles networking, authentication, and role-based access control (RBAC) for deployed services.
+
+#### Features
+- **LLM Deployment:** Seamlessly deploy LLMs into GPU clusters.
+- **RAG Workloads:** Efficiently manage and execute RAG workloads.
+- **Networking and Authentication:** Deploy GenAI through secure, authenticated network endpoints.
+- **RBAC:** Implement RBAC to control access to deployed services.
+
+
+
+
+---
+
+## Getting Started
+To begin using ClearML, follow these steps:
+1. **Set Up Infrastructure Control Plane:** Allocate and manage your GPU resources.
+2. **Develop AI Models:** Use the AI Development Center to develop and train your models.
+3. **Deploy AI Models:** Deploy your models using the GenAI App Engine.
+
+For detailed instructions on each step, refer to the respective sections in this documentation.
+
+---
+
+## Support
+For feature requests or bug reports, open an issue on [GitHub](https://github.com/clearml/clearml/issues).
+
+If you have any questions, join the discussion on the **ClearML** [Slack channel](https://joinslack.clear.ml), or tag your questions on [Stack Overflow](https://stackoverflow.com/questions/tagged/clearml) with the **clearml** tag.
+
+Lastly, you can always find us at [support@clearml.ai](mailto:support@clearml.ai?subject=ClearML).
\ No newline at end of file
diff --git a/docs/pipelines/pipelines.md b/docs/pipelines/pipelines.md
index 1785c34f..2c0e742d 100644
--- a/docs/pipelines/pipelines.md
+++ b/docs/pipelines/pipelines.md
@@ -12,7 +12,8 @@ products such as artifacts and parameters.
When run, the controller will sequentially launch the pipeline steps. The pipeline logic and steps
can be executed locally, or on any machine using the [clearml-agent](../clearml_agent.md).
-
+
+
The [Pipeline Run](../webapp/pipelines/webapp_pipeline_viewing.md) page in the web UI displays the pipeline's structure
in terms of executed steps and their status, as well as the run's configuration parameters and output. See [pipeline UI](../webapp/pipelines/webapp_pipeline_page.md)
diff --git a/docs/remote_session.md b/docs/remote_session.md
index b6c2fc85..8d104534 100644
--- a/docs/remote_session.md
+++ b/docs/remote_session.md
@@ -16,7 +16,7 @@ meets resource needs:
* [Clearml Session CLI](apps/clearml_session.md) - Launch an interactive JupyterLab, VS Code, and SSH session on a remote machine:
* Automatically store and sync your [interactive session workspace](apps/clearml_session.md#storing-and-synchronizing-workspace)
* Replicate a previously executed task's execution environment and [interactively execute and debug](apps/clearml_session.md#starting-a-debugging-session) it on a remote session
- * Develop directly inside your Kubernetes pods ([see ClearML Agent](clearml_agent/clearml_agent_deployment.md#kubernetes))
+ * Develop directly inside your Kubernetes pods ([see ClearML Agent](clearml_agent/clearml_agent_deployment_k8s.md))
* And more!
* GUI Applications (available under ClearML Enterprise Plan) - These apps provide access to remote machines over a
secure and encrypted SSH connection, allowing you to work in a remote environment using your preferred development
diff --git a/docs/webapp/applications/apps_aws_autoscaler.md b/docs/webapp/applications/apps_aws_autoscaler.md
index cfab329f..3068db42 100644
--- a/docs/webapp/applications/apps_aws_autoscaler.md
+++ b/docs/webapp/applications/apps_aws_autoscaler.md
@@ -319,17 +319,10 @@ to an IAM user, and create credentials keys for that user to configure in the au
"ssm:GetParameters",
"ssm:GetParameter"
],
- "Resource": "arn:aws:ssm:*::parameter/aws/service/marketplace/*"
- },
- {
- "Sid": "AllowUsingDeeplearningAMIAliases",
- "Effect": "Allow",
- "Action": [
- "ssm:GetParametersByPath",
- "ssm:GetParameters",
- "ssm:GetParameter"
- ],
- "Resource": "arn:aws:ssm:*::parameter/aws/service/deeplearning/*"
+ "Resource": [
+ "arn:aws:ssm:*::parameter/aws/service/marketplace/*",
+ "arn:aws:ssm:*::parameter/aws/service/deeplearning/*"
+ ]
}
]
}
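The policy edit above folds two statements with identical `Effect` and `Action` into a single statement carrying a `Resource` list. A small check that such a consolidation preserves the granted permissions can be sketched in Python (statement contents reproduced from the hunk above; the first statement's action list is assumed to match the second's):

```python
def grants(statements):
    """Flatten a statement list into (effect, action, resource) triples."""
    out = set()
    for s in statements:
        actions = s["Action"] if isinstance(s["Action"], list) else [s["Action"]]
        resources = s["Resource"] if isinstance(s["Resource"], list) else [s["Resource"]]
        out |= {(s["Effect"], a, r) for a in actions for r in resources}
    return out

ssm_actions = ["ssm:GetParametersByPath", "ssm:GetParameters", "ssm:GetParameter"]

# The two original statements, each with a single Resource string.
original = [
    {"Effect": "Allow", "Action": ssm_actions,
     "Resource": "arn:aws:ssm:*::parameter/aws/service/marketplace/*"},
    {"Effect": "Allow", "Action": ssm_actions,
     "Resource": "arn:aws:ssm:*::parameter/aws/service/deeplearning/*"},
]

# The consolidated statement introduced by the diff.
merged = {
    "Effect": "Allow",
    "Action": ssm_actions,
    "Resource": [
        "arn:aws:ssm:*::parameter/aws/service/marketplace/*",
        "arn:aws:ssm:*::parameter/aws/service/deeplearning/*",
    ],
}

# Same permissions, fewer statements.
assert grants(original) == grants([merged])
```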
diff --git a/docs/webapp/applications/apps_dashboard.md b/docs/webapp/applications/apps_dashboard.md
index e122d018..306b80ff 100644
--- a/docs/webapp/applications/apps_dashboard.md
+++ b/docs/webapp/applications/apps_dashboard.md
@@ -28,13 +28,13 @@ of the chosen metric over time.
* Monitored Metric - Series - Metric series (variant) to track
* Monitored Metric - Trend - Choose whether to track the monitored metric's highest or lowest values
* **Slack Notification** (optional) - Set up Slack integration for notifications of task failure. Select the
-`Alert on completed experiments` under `Additional options` to set up alerts for task completions.
+`Alert on completed tasks` under `Additional options` to set up alerts for task completions.
* API Token - Slack workspace access token
* Channel Name - Slack channel to which task failure alerts will be posted
* Alert Iteration Threshold - Minimum number of task iterations to trigger Slack alerts (tasks that fail prior to the threshold will be ignored)
* **Additional options**
- * Track manual (non agent-run) experiments as well - Select to include in the dashboard tasks that were not executed by an agent
- * Alert on completed experiments - Select to include completed tasks in alerts: in the dashboard's Task Alerts section and in Slack Alerts.
+ * Track manual (non agent-run) tasks as well - Select to include in the dashboard tasks that were not executed by an agent
+ * Alert on completed tasks - Select to include completed tasks in alerts: in the dashboard's Task Alerts section and in Slack Alerts.
* **Export Configuration** - Export the app instance configuration as a JSON file, which you can later import to create
a new instance with the same configuration.
@@ -50,7 +50,7 @@ of the chosen metric over time.
Once a project dashboard instance is launched, its dashboard displays the following information about a project:
* Task Status Summary - Percentages of Tasks by status
* Task Type Summary - Percentages of local tasks vs. agent tasks
-* Experiments Summary - Number of tasks by status over time
+* Task Summary - Number of tasks by status over time
* Monitoring - GPU utilization and GPU memory usage
* Metric Monitoring - An aggregated view of the values of a metric over time
* Project's Active Workers - Number of workers currently executing tasks in the monitored project
diff --git a/docs/webapp/applications/apps_hpo.md b/docs/webapp/applications/apps_hpo.md
index 0238b3a6..e7b65f20 100644
--- a/docs/webapp/applications/apps_hpo.md
+++ b/docs/webapp/applications/apps_hpo.md
@@ -56,18 +56,18 @@ limits.
**CONFIGURATION > HYPERPARAMETERS > Hydra**).
:::
* **Optimization Job Title** (optional) - Name for the HPO instance. This will appear in the instance list
-* **Optimization Experiments Destination Project** (optional) - The project where optimization tasks will be saved.
+* **Optimization Tasks Destination Project** (optional) - The project where optimization tasks will be saved.
Leave empty to use the same project as the Initial task.
* **Maximum Concurrent Tasks** - The maximum number of simultaneously running optimization tasks
* **Advanced Configuration** (optional)
- * Limit Total HPO Experiments - Maximum total number of optimization tasks
- * Number of Top Experiments to Save - Number of best performing tasks to save (the rest are archived)
- * Limit Single Experiment Running Time (Minutes) - Time limit per optimization task. Tasks will be
+ * Limit Total HPO Tasks - Maximum total number of optimization tasks
+ * Number of Top Tasks to Save - Number of best performing tasks to save (the rest are archived)
+ * Limit Single Task Running Time (Minutes) - Time limit per optimization task. Tasks will be
stopped after the specified time elapsed
- * Minimal Number of Iterations Per Single Experiment - Some search methods, such as Optuna, prune underperforming
+ * Minimal Number of Iterations Per Single Task - Some search methods, such as Optuna, prune underperforming
tasks. This is the minimum number of iterations per task before it can be stopped. Iterations are
based on the tasks' own reporting (for example, if tasks report every epoch, then iterations=epochs)
- * Maximum Number of Iterations Per Single Experiment - Maximum iterations per task after which it will be
+ * Maximum Number of Iterations Per Single Task - Maximum iterations per task after which it will be
stopped. Iterations are based on the tasks' own reporting (for example, if tasks report every epoch,
then iterations=epochs)
* Limit Total Optimization Instance Time (Minutes) - Time limit for the whole optimization process (in minutes)
diff --git a/docs/webapp/applications/apps_llama_deployment.md b/docs/webapp/applications/apps_llama_deployment.md
index 1f965d1e..596586b3 100644
--- a/docs/webapp/applications/apps_llama_deployment.md
+++ b/docs/webapp/applications/apps_llama_deployment.md
@@ -81,6 +81,6 @@ values from the file, which can be modified before launching the app instance

-
+
\ No newline at end of file
diff --git a/docs/webapp/webapp_exp_track_visual.md b/docs/webapp/webapp_exp_track_visual.md
index 496daa47..8d8ef485 100644
--- a/docs/webapp/webapp_exp_track_visual.md
+++ b/docs/webapp/webapp_exp_track_visual.md
@@ -93,7 +93,7 @@ using to set up an environment (`pip` or `conda`) are available. Select which re
### Container
The Container section lists the following information:
-* Image - a pre-configured container that ClearML Agent will use to remotely execute this task (see [Building Docker containers](../clearml_agent/clearml_agent_docker.md))
+* Image - a pre-configured container that ClearML Agent will use to remotely execute this task (see [Building Docker containers](../getting_started/clearml_agent_docker_exec.md))
* Arguments - add container arguments
* Setup shell script - a bash script to be executed inside the container before setting up the task's environment
diff --git a/docs/webapp/webapp_exp_tuning.md b/docs/webapp/webapp_exp_tuning.md
index 6c6ddd96..b63dc423 100644
--- a/docs/webapp/webapp_exp_tuning.md
+++ b/docs/webapp/webapp_exp_tuning.md
@@ -72,7 +72,7 @@ and/or Reset functions.
#### Default Container
-Select a pre-configured container that the [ClearML Agent](../clearml_agent.md) will use to remotely execute this task (see [Building Docker containers](../clearml_agent/clearml_agent_docker.md)).
+Select a pre-configured container that the [ClearML Agent](../clearml_agent.md) will use to remotely execute this task (see [Building Docker containers](../getting_started/clearml_agent_docker_exec.md)).
**To add, change, or delete a default container:**
diff --git a/docs/webapp/webapp_model_comparing.md b/docs/webapp/webapp_model_comparing.md
index 07be1798..ee98d6d9 100644
--- a/docs/webapp/webapp_model_comparing.md
+++ b/docs/webapp/webapp_model_comparing.md
@@ -46,8 +46,7 @@ models update. The Enterprise Plan and Hosted Service support embedding resource
The comparison tabs provide the following views:
* [Side-by-side textual comparison](#side-by-side-textual-comparison)
* [Tabular scalar comparison](#tabular-scalar-comparison)
-* [Merged plot comparison](#plot-comparison)
-* [Side-by-side graphic comparison](#graphic-comparison)
+* [Plot comparison](#plot-comparison)
### Side-by-side Textual Comparison
diff --git a/docusaurus.config.js b/docusaurus.config.js
index d78ff414..ef4f8ead 100644
--- a/docusaurus.config.js
+++ b/docusaurus.config.js
@@ -68,7 +68,7 @@ module.exports = {
},
announcementBar: {
id: 'supportus',
- content: 'If you ❤️ ️ClearML, ⭐️ us on GitHub!',
+ content: 'If you ❤️ ️ClearML, ⭐️ us on GitHub!',
isCloseable: true,
},
navbar: {
@@ -82,54 +82,72 @@ module.exports = {
},
items: [
{
- to: '/docs',
- label: 'Docs',
+ to: '/docs/',
+ label: 'Overview',
position: 'left',
+ activeBaseRegex: '^/docs/latest/docs/(fundamentals/agents_and_queues|hyper_datasets|clearml_agent(/(clearml_agent_dynamic_gpus|clearml_agent_fractional_gpus)?|)?|cloud_autoscaling/autoscaling_overview|remote_session|model_registry|deploying_clearml/enterprise_deploy/appgw|build_interactive_models|deploying_models|custom_apps)?$',
},
{
- to:'/docs/hyperdatasets/overview',
- label: 'Hyper-Datasets',
+ to: '/docs/clearml_sdk/clearml_sdk_setup',
+ label: 'Setup',
position: 'left',
+ activeBaseRegex: '^/docs/latest/docs/(deploying_clearml(?!/enterprise_deploy/appgw(/.*)?$)(/.*)?$|clearml_sdk/clearml_sdk_setup|user_management(/.*)?|clearml_agent/(clearml_agent_setup|clearml_agent_deployment_bare_metal|clearml_agent_deployment_k8s|clearml_agent_deployment_slurm|clearml_agent_execution_env|clearml_agent_env_caching|clearml_agent_services_mode)|integrations/storage)/?$',
},
- // {to: 'tutorials', label: 'Tutorials', position: 'left'},
- // Please keep GitHub link to the right for consistency.
- {to: '/docs/guides', label: 'Examples', position: 'left'},
- //{to: '/docs/references', label: 'API', position: 'left'},
{
- label: 'References',
+ to: '/docs/getting_started/auto_log_exp',
+ label: 'Using ClearML',
+ position: 'left',
+ activeBaseRegex: '^/docs/latest/docs/(getting_started(?!/video_tutorials(/.*)?)|clearml_serving|apps/clearml_session)(/.*)?$',
+ },
+ {
+ label: 'Developer Center',
position: 'left', // or 'right'
+ to: '/docs/fundamentals/projects',
+ activeBaseRegex: '^/docs/latest/docs/(fundamentals(?!/agents_and_queues)(/.*)?|configs/configuring_clearml|getting_started/video_tutorials(/.*)?|clearml_sdk(?!/clearml_sdk_setup)(/.*)?|pipelines(/.*)?|hyperdatasets(/.*)?|clearml_data(/.*)?|hyperdatasets(/webapp)(/.*)?|references(/.*)?|webapp(/.*)?|clearml_agent/(clearml_agent_ref|clearml_agent_env_var)(/.*)?|configs/(clearml_conf|env_vars)(/.*)?|apps/(clearml_task|clearml_param_search)(/.*)?|best_practices(/.*)?|guides(/.*)?|integrations(/.*)?|faq|release_notes(/.*)?)$',
+ activeClassName: 'navbar__link--active',
items: [
{
- label: 'SDK',
+ label: 'ClearML Basics',
+ to: '/docs/fundamentals/projects',
+ activeBaseRegex: '^/docs/latest/docs/(fundamentals|getting_started/video_tutorials|clearml_sdk(/(?!clearml_sdk_setup).*|(?=/))?|pipelines|clearml_data|hyperdatasets/(?!webapp/).*)(/.*)?$',
+ },
+ {
+ label: 'References',
to: '/docs/references/sdk/task',
+ activeBaseRegex: '^/docs/latest/docs/(references/|webapp/.*|hyperdatasets/webapp/.*|clearml_agent/(clearml_agent_ref|clearml_agent_env_var)|configs/(clearml_conf|env_vars)|apps/(clearml_task|clearml_param_search))(/.*)?$',
},
{
- label: 'ClearML Agent',
- to: '/docs/clearml_agent/clearml_agent_ref',
+ label: 'Best Practices',
+ to: 'docs/best_practices/data_scientist_best_practices',
+ activeBaseRegex: '^/docs/latest/docs/best_practices/'
},
{
- label: 'Server API',
- to: '/docs/references/api',
+ label: 'Tutorials',
+ to: '/docs/guides',
+ activeBaseRegex: '^/docs/latest/docs/guides',
},
{
- label: 'Hyper-Datasets',
- to: '/docs/references/hyperdataset',
+ label: 'Code Integrations',
+ to: '/docs/integrations',
+ activeBaseRegex: '^/docs/latest/docs/integrations(?!/storage)',
+ },
+ {
+ label: 'FAQ',
+ to: '/docs/faq',
+ activeBaseRegex: '^/docs/latest/docs/faq$',
},
-
{
label: 'Release Notes',
to: '/docs/release_notes/clearml_server/open_source/ver_2_0',
+ activeBaseRegex: '^/docs/latest/docs/release_notes/',
},
- {
- label: 'Community Resources',
- to: '/docs/community',
- }
+
],
},
{
- label: 'FAQ',
+ label: 'Community Resources',
position: 'left', // or 'right'
- to: '/docs/faq'
+ to: '/docs/latest/docs/community',
},
{
href: 'https://joinslack.clear.ml',
@@ -150,7 +168,7 @@ module.exports = {
'aria-label': 'Twitter',
},
{
- href: 'https://github.com/allegroai/clearml',
+ href: 'https://github.com/clearml/clearml',
position: 'right',
className: 'header-ico header-ico--github',
'aria-label': 'GitHub repository',
@@ -197,7 +215,7 @@ module.exports = {
},
{
label: 'GitHub',
- href: 'https://github.com/allegroai/clearml',
+ href: 'https://github.com/clearml/clearml',
},
],
},
@@ -215,13 +233,13 @@ module.exports = {
// Please change this to your repo.
breadcrumbs: false,
editUrl:
- 'https://github.com/allegroai/clearml-docs/edit/main/',
+ 'https://github.com/clearml/clearml-docs/edit/main/',
},
// API: {
// sidebarPath: require.resolve('./sidebars.js'),
// // Please change this to your repo.
// editUrl:
- // 'https://github.com/allegroai/clearml-docs/edit/main/',
+ // 'https://github.com/clearml/clearml-docs/edit/main/',
// },
blog: {
blogTitle: 'ClearML Tutorials',
@@ -231,7 +249,7 @@ module.exports = {
showReadingTime: true,
// Please change this to your repo.
editUrl:
- 'https://github.com/allegroai/clearml-docs/edit/main/tutorials/',
+ 'https://github.com/clearml/clearml-docs/edit/main/tutorials/',
},
theme: {
customCss: require.resolve('./src/css/custom.css'),
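The navbar restructure above leans heavily on `activeBaseRegex` patterns to decide which top-level item is highlighted. A quick way to sanity-check such a pattern before committing it is to run it against sample paths with Python's `re` (here the Overview item's pattern, copied from the hunk above):

```python
import re

# activeBaseRegex for the "Overview" navbar item, copied from the config diff.
pattern = re.compile(
    r"^/docs/latest/docs/(fundamentals/agents_and_queues|hyper_datasets"
    r"|clearml_agent(/(clearml_agent_dynamic_gpus|clearml_agent_fractional_gpus)?|)?"
    r"|cloud_autoscaling/autoscaling_overview|remote_session|model_registry"
    r"|deploying_clearml/enterprise_deploy/appgw|build_interactive_models"
    r"|deploying_models|custom_apps)?$"
)

should_match = [
    "/docs/latest/docs/",
    "/docs/latest/docs/remote_session",
    "/docs/latest/docs/clearml_agent/clearml_agent_dynamic_gpus",
]
should_not_match = [
    "/docs/latest/docs/guides",
    "/docs/latest/docs/pipelines/pipelines",
]

for path in should_match:
    assert pattern.search(path), path  # Overview should highlight here
for path in should_not_match:
    assert not pattern.search(path), path  # another navbar item owns these
```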
diff --git a/package-lock.json b/package-lock.json
index d146110c..5a47cfd3 100644
--- a/package-lock.json
+++ b/package-lock.json
@@ -15,7 +15,7 @@
"@docusaurus/plugin-google-analytics": "^3.6.1",
"@docusaurus/plugin-google-gtag": "^3.6.1",
"@docusaurus/preset-classic": "^3.6.1",
- "@easyops-cn/docusaurus-search-local": "^0.48.0",
+ "@easyops-cn/docusaurus-search-local": "^0.48.5",
"@mdx-js/react": "^3.0.0",
"clsx": "^1.1.1",
"joi": "^17.4.0",
diff --git a/package.json b/package.json
index a9144041..27874081 100644
--- a/package.json
+++ b/package.json
@@ -23,7 +23,7 @@
"@docusaurus/plugin-google-analytics": "^3.6.1",
"@docusaurus/plugin-google-gtag": "^3.6.1",
"@docusaurus/preset-classic": "^3.6.1",
- "@easyops-cn/docusaurus-search-local": "^0.48.0",
+ "@easyops-cn/docusaurus-search-local": "^0.48.5",
"@mdx-js/react": "^3.0.0",
"clsx": "^1.1.1",
"medium-zoom": "^1.0.6",
diff --git a/sidebars.js b/sidebars.js
index 29abd359..b6414ef3 100644
--- a/sidebars.js
+++ b/sidebars.js
@@ -9,293 +9,120 @@
module.exports = {
mainSidebar: [
- {'Getting Started': ['getting_started/main', {
- 'Where do I start?': [{'Data Scientists': ['getting_started/ds/ds_first_steps', 'getting_started/ds/ds_second_steps', 'getting_started/ds/best_practices']},
- {'MLOps and LLMOps': ['getting_started/mlops/mlops_first_steps','getting_started/mlops/mlops_second_steps','getting_started/mlops/mlops_best_practices']}]
- }, 'getting_started/architecture', {'Video Tutorials':
- [
- 'getting_started/video_tutorials/quick_introduction',
- 'getting_started/video_tutorials/core_component_overview',
- 'getting_started/video_tutorials/experiment_manager_hands-on',
- 'getting_started/video_tutorials/experiment_management_best_practices',
- 'getting_started/video_tutorials/agent_remote_execution_and_automation',
- 'getting_started/video_tutorials/hyperparameter_optimization',
- 'getting_started/video_tutorials/pipelines_from_code',
- 'getting_started/video_tutorials/pipelines_from_tasks',
- 'getting_started/video_tutorials/clearml-data',
- 'getting_started/video_tutorials/the_clearml_autoscaler',
- 'getting_started/video_tutorials/hyperdatasets_data_versioning',
+ {
+ type: 'doc',
+ id: 'overview',
+ label: 'ClearML at a Glance',
+ },
+ {
+ type: 'category',
+ collapsible: true,
+ label: 'Infrastructure Control Plane (GPUaaS)',
+ items: [
+ 'fundamentals/agents_and_queues',
+ 'clearml_agent',
+ 'clearml_agent/clearml_agent_dynamic_gpus',
+ 'clearml_agent/clearml_agent_fractional_gpus',
+ 'cloud_autoscaling/autoscaling_overview',
+ 'remote_session'
+ ]
+ },
+ {
+ type: 'category',
+ collapsible: true,
+ label: 'AI Development Center',
+ items: [
+ 'clearml_sdk/clearml_sdk',
+ 'pipelines/pipelines',
+ 'clearml_data/clearml_data',
+ 'hyper_datasets',
+ 'model_registry',
+ ]
+ },
+ {
+ type: 'category',
+ collapsible: true,
+ label: 'GenAI App Engine',
+ items: [
+ 'deploying_clearml/enterprise_deploy/appgw',
+ 'build_interactive_models',
+ 'deploying_models',
+ 'custom_apps'
+ ]
+ },
+ ],
+ usecaseSidebar: [
+ /*'getting_started/main',*/
+ 'getting_started/auto_log_exp',
+ 'getting_started/track_tasks',
+ 'getting_started/reproduce_tasks',
+ 'getting_started/logging_using_artifacts',
+ 'getting_started/data_management',
+ 'getting_started/remote_execution',
+ 'getting_started/building_pipelines',
+ 'getting_started/hpo',
+ 'getting_started/clearml_agent_docker_exec',
+ 'getting_started/clearml_agent_base_docker',
+ 'getting_started/clearml_agent_scheduling',
+ {"Deploying Model Endpoints": [
{
- 'Hands-on MLOps Tutorials':[
- 'getting_started/video_tutorials/hands-on_mlops_tutorials/how_clearml_is_used_by_a_data_scientist',
- 'getting_started/video_tutorials/hands-on_mlops_tutorials/how_clearml_is_used_by_an_mlops_engineer',
- 'getting_started/video_tutorials/hands-on_mlops_tutorials/ml_ci_cd_using_github_actions_and_clearml'
+ type: 'category',
+ collapsible: true,
+ collapsed: true,
+ label: 'ClearML Serving',
+ link: {type: 'doc', id: 'clearml_serving/clearml_serving'},
+ items: ['clearml_serving/clearml_serving_setup', 'clearml_serving/clearml_serving_cli', 'clearml_serving/clearml_serving_tutorial']
+ },
+ {
+ type: 'category',
+ collapsible: true,
+ collapsed: true,
+ label: 'Model Launchers',
+ items: [
+ 'webapp/applications/apps_embed_model_deployment',
+ 'webapp/applications/apps_model_deployment',
+ 'webapp/applications/apps_llama_deployment'
]
- }
- ]}]},
- {'ClearML Fundamentals': [
- 'fundamentals/projects', 'fundamentals/task', 'fundamentals/hyperparameters',
- 'fundamentals/artifacts', 'fundamentals/models', 'fundamentals/logger', 'fundamentals/agents_and_queues',
- 'fundamentals/hpo'
- ]
- },
- {
- type: 'category',
- collapsible: true,
- collapsed: true,
- label: 'ClearML SDK',
- link: {type: 'doc', id: 'clearml_sdk/clearml_sdk'},
- items: ['clearml_sdk/task_sdk', 'clearml_sdk/model_sdk', 'clearml_sdk/apiclient_sdk']
- },
- {
- type: 'category',
- collapsible: true,
- collapsed: true,
- label: 'ClearML Agent',
- link: {type: 'doc', id: 'clearml_agent'},
- items: ['clearml_agent/clearml_agent_setup', 'clearml_agent/clearml_agent_deployment',
- 'clearml_agent/clearml_agent_execution_env', 'clearml_agent/clearml_agent_env_caching',
- 'clearml_agent/clearml_agent_dynamic_gpus', 'clearml_agent/clearml_agent_fractional_gpus',
- 'clearml_agent/clearml_agent_services_mode', 'clearml_agent/clearml_agent_docker',
- 'clearml_agent/clearml_agent_scheduling']
- },
- {
- type: 'category',
- collapsible: true,
- collapsed: true,
- label: 'Cloud Autoscaling',
- link: {type: 'doc', id: 'cloud_autoscaling/autoscaling_overview'},
- items: [
- {'Autoscaler Apps': [
- 'webapp/applications/apps_aws_autoscaler',
- 'webapp/applications/apps_gcp_autoscaler',
- ]
- }
- ]
- },
- {
- type: 'category',
- collapsible: true,
- collapsed: true,
- label: 'ClearML Pipelines',
- link: {type: 'doc', id: 'pipelines/pipelines'},
- items: [{"Building Pipelines":
- ['pipelines/pipelines_sdk_tasks', 'pipelines/pipelines_sdk_function_decorators']
- }
- ]
- },
- {
- type: 'category',
- collapsible: true,
- collapsed: true,
- label: 'ClearML Data',
- link: {type: 'doc', id: 'clearml_data/clearml_data'},
- items: ['clearml_data/clearml_data_cli', 'clearml_data/clearml_data_sdk', 'clearml_data/best_practices',
- {
- type: 'category',
- collapsible: true,
- collapsed: true,
- label: 'Workflows',
- link: {type: 'doc', id: 'clearml_data/data_management_examples/workflows'},
- items: [
- 'clearml_data/data_management_examples/data_man_simple',
- 'clearml_data/data_management_examples/data_man_folder_sync',
- 'clearml_data/data_management_examples/data_man_cifar_classification',
- 'clearml_data/data_management_examples/data_man_python'
- ]
- },
- ]
- },
- 'hyper_datasets',
- 'model_registry',
- {
- type: 'category',
- collapsible: true,
- collapsed: true,
- label: 'Remote IDE',
- link: {type: 'doc', id: 'remote_session'},
- items: [
- 'apps/clearml_session',
- {type: 'ref', id: 'webapp/applications/apps_ssh_session'},
- {type: 'ref', id: 'webapp/applications/apps_jupyter_lab'},
- {type: 'ref', id: 'webapp/applications/apps_vscode'}
- ]
- },
- {
- type: 'category',
- collapsible: true,
- collapsed: true,
- label: 'ClearML Serving',
- link: {type: 'doc', id: 'clearml_serving/clearml_serving'},
- items: ['clearml_serving/clearml_serving_setup', 'clearml_serving/clearml_serving_cli', 'clearml_serving/clearml_serving_tutorial']
- },
- {'CLI Tools': [
- 'apps/clearml_task',
- {type: 'ref', id: 'clearml_agent/clearml_agent_ref'},
- {type: 'ref', id: 'clearml_data/clearml_data_cli'},
- 'apps/clearml_param_search',
- {type: 'ref', id: 'apps/clearml_session'},
- {type: 'ref', id: 'clearml_serving/clearml_serving_cli'},
- ]
- },
- {'Integrations': [
- 'integrations/autokeras',
- 'integrations/catboost',
- 'integrations/click',
- 'integrations/fastai',
- {"Hugging Face": ['integrations/transformers', 'integrations/accelerate']},
- 'integrations/hydra', 'integrations/jsonargparse',
- 'integrations/keras', 'integrations/keras_tuner',
- 'integrations/langchain',
- 'integrations/lightgbm', 'integrations/matplotlib',
- 'integrations/megengine', 'integrations/monai', 'integrations/tao',
- {"OpenMMLab":['integrations/mmcv', 'integrations/mmengine']},
- 'integrations/optuna',
- 'integrations/python_fire', 'integrations/pytorch',
- 'integrations/ignite',
- 'integrations/pytorch_lightning',
- 'integrations/scikit_learn', 'integrations/seaborn',
- 'integrations/splunk',
- 'integrations/tensorboard', 'integrations/tensorboardx', 'integrations/tensorflow',
- 'integrations/xgboost', 'integrations/yolov5', 'integrations/yolov8'
- ]
- },
- 'integrations/storage',
- {
- type: 'category',
- collapsible: true,
- collapsed: true,
- label: 'WebApp',
- link: {type: 'doc', id: 'webapp/webapp_overview'},
- items: [
- 'webapp/webapp_home',
- {
- 'Projects': [
- 'webapp/webapp_projects_page',
- 'webapp/webapp_project_overview',
- {
- 'Tasks': ['webapp/webapp_exp_table', 'webapp/webapp_exp_track_visual', 'webapp/webapp_exp_reproducing', 'webapp/webapp_exp_tuning',
- 'webapp/webapp_exp_comparing']
- },
- {
- 'Models': ['webapp/webapp_model_table', 'webapp/webapp_model_viewing', 'webapp/webapp_model_comparing']
- },
- 'webapp/webapp_exp_sharing'
- ]
- },
- {
- 'Datasets':[
- 'webapp/datasets/webapp_dataset_page', 'webapp/datasets/webapp_dataset_viewing'
- ]
- },
- {
- 'Pipelines':[
- 'webapp/pipelines/webapp_pipeline_page', 'webapp/pipelines/webapp_pipeline_table', 'webapp/pipelines/webapp_pipeline_viewing'
- ]
- },
- 'webapp/webapp_model_endpoints',
- 'webapp/webapp_reports',
- {
- type: 'category',
- collapsible: true,
- collapsed: true,
- label: 'Orchestration',
- link: {type: 'doc', id: 'webapp/webapp_workers_queues'},
- items: ['webapp/webapp_orchestration_dash', 'webapp/resource_policies']
- },
- {
- type: 'category',
- collapsible: true,
- collapsed: true,
- label: 'ClearML Applications',
- link: {type: 'doc', id: 'webapp/applications/apps_overview'},
- items: [
- {
- "General": [
- 'webapp/applications/apps_hpo',
- 'webapp/applications/apps_dashboard',
- 'webapp/applications/apps_task_scheduler',
- 'webapp/applications/apps_trigger_manager',
- ]
- },
- {
- "AI Dev": [
- 'webapp/applications/apps_ssh_session',
- 'webapp/applications/apps_jupyter_lab',
- 'webapp/applications/apps_vscode',
- ]
- },
- {
- "UI Dev": [
- 'webapp/applications/apps_gradio',
- 'webapp/applications/apps_streamlit'
- ]
- },
- {
- "Deploy": [
- 'webapp/applications/apps_embed_model_deployment',
- 'webapp/applications/apps_model_deployment',
- 'webapp/applications/apps_llama_deployment'
- ]
- },
- ]
- },
- {
- type: 'category',
- collapsible: true,
- collapsed: true,
- label: 'Settings',
- link: {type: 'doc', id: 'webapp/settings/webapp_settings_overview'},
- items: ['webapp/settings/webapp_settings_profile',
- 'webapp/settings/webapp_settings_admin_vaults', 'webapp/settings/webapp_settings_users',
- 'webapp/settings/webapp_settings_access_rules', 'webapp/settings/webapp_settings_id_providers',
- 'webapp/settings/webapp_settings_resource_configs', 'webapp/settings/webapp_settings_usage_billing',
- 'webapp/settings/webapp_settings_storage_credentials'
- ]
- },
- ]
- },
- {
- type: 'category',
- collapsible: true,
- collapsed: true,
- label: 'Configuring ClearML',
- link: {type: 'doc', id: 'configs/configuring_clearml'},
- items: ['configs/clearml_conf', 'configs/env_vars']
- },
- {'User Management': [
- 'user_management/user_groups',
- 'user_management/access_rules',
- 'user_management/admin_vaults',
- 'user_management/identity_providers'
- ]
- },
- {
- type: 'category',
- collapsible: true,
- collapsed: true,
- label: 'ClearML Server',
- link: {type: 'doc', id: 'deploying_clearml/clearml_server'},
- items: [
- {'Deploying ClearML Server':
- ['deploying_clearml/clearml_server_aws_ec2_ami', 'deploying_clearml/clearml_server_gcp',
- 'deploying_clearml/clearml_server_linux_mac', 'deploying_clearml/clearml_server_win',
- 'deploying_clearml/clearml_server_kubernetes_helm']
- },
- {'Upgrading ClearML Server':
- ['deploying_clearml/upgrade_server_aws_ec2_ami','deploying_clearml/upgrade_server_gcp',
- 'deploying_clearml/upgrade_server_linux_mac', 'deploying_clearml/upgrade_server_win',
- 'deploying_clearml/upgrade_server_kubernetes_helm',
- 'deploying_clearml/clearml_server_es7_migration', 'deploying_clearml/clearml_server_mongo44_migration']
- },
- 'deploying_clearml/clearml_server_config', 'deploying_clearml/clearml_server_security'
- ]
- },
-
- //'Comments': ['Notes'],
-
-
-
+ }
+ ]},
+ {"Launching a Remote IDE": [
+ 'apps/clearml_session',
+ {type: 'ref', id: 'webapp/applications/apps_ssh_session'},
+ {type: 'ref', id: 'webapp/applications/apps_jupyter_lab'},
+ {type: 'ref', id: 'webapp/applications/apps_vscode'}
+ ]},
+ {"Building Interactive Model Demos": [
+ {type: 'ref', id: 'webapp/applications/apps_gradio'},
+ {type: 'ref', id: 'webapp/applications/apps_streamlit'},
+ ]},
+ 'getting_started/task_trigger_schedule',
+ 'getting_started/project_progress',
+ ],
+ integrationsSidebar: [
+ {
+ type: 'doc',
+ label: 'Overview',
+ id: 'integrations/integrations',
+ },
+ 'integrations/autokeras',
+ 'integrations/catboost',
+ 'integrations/click',
+ 'integrations/fastai',
+ {"Hugging Face": ['integrations/transformers', 'integrations/accelerate']},
+ 'integrations/hydra', 'integrations/jsonargparse',
+ 'integrations/keras', 'integrations/keras_tuner',
+ 'integrations/langchain',
+ 'integrations/lightgbm', 'integrations/matplotlib',
+ 'integrations/megengine', 'integrations/monai', 'integrations/tao',
+ {"OpenMMLab":['integrations/mmcv', 'integrations/mmengine']},
+ 'integrations/optuna',
+ 'integrations/python_fire', 'integrations/pytorch',
+ 'integrations/ignite',
+ 'integrations/pytorch_lightning',
+ 'integrations/scikit_learn', 'integrations/seaborn',
+ 'integrations/splunk',
+ 'integrations/tensorboard', 'integrations/tensorboardx', 'integrations/tensorflow',
+ 'integrations/xgboost', 'integrations/yolov5', 'integrations/yolov8'
],
guidesSidebar: [
'guides/guidemain',
@@ -304,6 +131,7 @@ module.exports = {
{'ClearML Task': ['guides/clearml-task/clearml_task_tutorial']},
{'ClearML Agent': ['guides/clearml_agent/executable_exp_containers', 'guides/clearml_agent/exp_environment_containers', 'guides/clearml_agent/reproduce_exp']},
{'Datasets': ['clearml_data/data_management_examples/data_man_cifar_classification', 'clearml_data/data_management_examples/data_man_python']},
+ {id: 'hyperdatasets/code_examples', type: 'doc', label: 'Hyper-Datasets'},
{'Distributed': ['guides/distributed/distributed_pytorch_example', 'guides/distributed/subprocess_example']},
{'Docker': ['guides/docker/extra_docker_shell_script']},
{'Frameworks': [
@@ -342,7 +170,6 @@ module.exports = {
{'Offline Mode':['guides/set_offline']},
{'Optimization': ['guides/optimization/hyper-parameter-optimization/examples_hyperparam_opt']},
{'Pipelines': ['guides/pipeline/pipeline_controller', 'guides/pipeline/pipeline_decorator', 'guides/pipeline/pipeline_functions']},
-
{'Reporting': ['guides/reporting/explicit_reporting','guides/reporting/3d_plots_reporting', 'guides/reporting/artifacts', 'guides/reporting/using_artifacts', 'guides/reporting/clearml_logging_example', 'guides/reporting/html_reporting',
'guides/reporting/hyper_parameters', 'guides/reporting/image_reporting', 'guides/reporting/manual_matplotlib_reporting', 'guides/reporting/media_reporting',
'guides/reporting/model_config', 'guides/reporting/pandas_reporting', 'guides/reporting/plotly_reporting',
@@ -352,6 +179,112 @@ module.exports = {
{'Web UI': ['guides/ui/building_leader_board','guides/ui/tuning_exp']}
],
+ knowledgeSidebar: [
+ {'Fundamentals': [
+ 'fundamentals/projects',
+ 'fundamentals/task',
+ 'fundamentals/hyperparameters',
+ 'fundamentals/artifacts',
+ 'fundamentals/models',
+ 'fundamentals/logger',
+ ]},
+ {
+ type: 'category',
+ collapsible: true,
+ collapsed: true,
+ label: 'ClearML SDK',
+ link: {type: 'doc', id: 'clearml_sdk/clearml_sdk'},
+ items: [
+ 'clearml_sdk/task_sdk',
+ 'clearml_sdk/model_sdk',
+ 'hyperdatasets/task',
+ 'clearml_sdk/hpo_sdk',
+ 'clearml_sdk/apiclient_sdk'
+ ]
+ },
+ {
+ type: 'category',
+ collapsible: true,
+ collapsed: true,
+ label: 'ClearML Pipelines',
+ link: {type: 'doc', id: 'pipelines/pipelines'},
+ items: [{
+ "Building Pipelines": [
+ 'pipelines/pipelines_sdk_tasks',
+ 'pipelines/pipelines_sdk_function_decorators'
+ ]
+ }]
+ },
+ {
+ type: 'category',
+ collapsible: true,
+ collapsed: true,
+ label: 'ClearML Data',
+ link: {type: 'doc', id: 'clearml_data/clearml_data'},
+ items: [
+ 'clearml_data/clearml_data_cli',
+ 'clearml_data/clearml_data_sdk',
+ {
+ type: 'category',
+ collapsible: true,
+ collapsed: true,
+ label: 'Workflows',
+ link: {type: 'doc', id: 'clearml_data/data_management_examples/workflows'},
+ items: [
+ 'clearml_data/data_management_examples/data_man_simple',
+ 'clearml_data/data_management_examples/data_man_folder_sync',
+ 'clearml_data/data_management_examples/data_man_cifar_classification',
+ 'clearml_data/data_management_examples/data_man_python'
+ ]
+ },
+ ]
+ },
+ {
+ type: 'category',
+ collapsible: true,
+ collapsed: true,
+ label: 'Hyper-Datasets',
+ link: {type: 'doc', id: 'hyperdatasets/overview'},
+ items: [
+ 'hyperdatasets/dataset',
+ {
+ type: 'category',
+ collapsible: true,
+ collapsed: true,
+ label: 'Frames',
+ link: {type: 'doc', id: 'hyperdatasets/frames'},
+ items: [
+ 'hyperdatasets/single_frames',
+ 'hyperdatasets/frame_groups',
+ 'hyperdatasets/sources',
+ 'hyperdatasets/annotations',
+ 'hyperdatasets/masks',
+ 'hyperdatasets/previews',
+ 'hyperdatasets/custom_metadata'
+ ]
+ },
+ 'hyperdatasets/dataviews',
+ ]
+ },
+ {'Video Tutorials': [
+ 'getting_started/video_tutorials/quick_introduction',
+ 'getting_started/video_tutorials/core_component_overview',
+ 'getting_started/video_tutorials/experiment_manager_hands-on',
+ 'getting_started/video_tutorials/experiment_management_best_practices',
+ 'getting_started/video_tutorials/agent_remote_execution_and_automation',
+ 'getting_started/video_tutorials/hyperparameter_optimization',
+ 'getting_started/video_tutorials/pipelines_from_code',
+ 'getting_started/video_tutorials/pipelines_from_tasks',
+ 'getting_started/video_tutorials/clearml-data',
+ 'getting_started/video_tutorials/the_clearml_autoscaler',
+ 'getting_started/video_tutorials/hyperdatasets_data_versioning',
+ {'Hands-on MLOps Tutorials': [
+ 'getting_started/video_tutorials/hands-on_mlops_tutorials/how_clearml_is_used_by_a_data_scientist',
+ 'getting_started/video_tutorials/hands-on_mlops_tutorials/how_clearml_is_used_by_an_mlops_engineer',
+ 'getting_started/video_tutorials/hands-on_mlops_tutorials/ml_ci_cd_using_github_actions_and_clearml'
+ ]}
+ ]},
+ ],
rnSidebar: [
{'Server': [
{
@@ -383,7 +316,7 @@ module.exports = {
'release_notes/clearml_server/enterprise/ver_3_24',
{
'Older Versions': [
- 'release_notes/clearml_server/enterprise/ver_3_23','release_notes/clearml_server/enterprise/ver_3_22',
+ 'release_notes/clearml_server/enterprise/ver_3_23', 'release_notes/clearml_server/enterprise/ver_3_22',
'release_notes/clearml_server/enterprise/ver_3_21', 'release_notes/clearml_server/enterprise/ver_3_20'
]
}
@@ -456,7 +389,8 @@ module.exports = {
]
}
],
- sdkSidebar: [
+ referenceSidebar: [
+ {'SDK': [
'references/sdk/task',
'references/sdk/logger',
{'Model': ['references/sdk/model_model',
@@ -481,59 +415,298 @@ module.exports = {
'references/sdk/hpo_parameters_uniformintegerparameterrange',
'references/sdk/hpo_parameters_uniformparameterrange',
'references/sdk/hpo_parameters_parameterset',
- ]},
- ],
- clearmlAgentSidebar: [
- 'clearml_agent/clearml_agent_ref', 'clearml_agent/clearml_agent_env_var'
- ],
- hyperdatasetsSidebar: [
- 'hyperdatasets/overview',
- {'Frames': [
- 'hyperdatasets/frames',
- 'hyperdatasets/single_frames',
- 'hyperdatasets/frame_groups',
- 'hyperdatasets/sources',
- 'hyperdatasets/annotations',
- 'hyperdatasets/masks',
- 'hyperdatasets/previews',
- 'hyperdatasets/custom_metadata'
]},
- 'hyperdatasets/dataset',
- 'hyperdatasets/dataviews',
- 'hyperdatasets/task',
- {'WebApp': [
- {'Projects': [
- 'hyperdatasets/webapp/webapp_dataviews', 'hyperdatasets/webapp/webapp_exp_track_visual',
- 'hyperdatasets/webapp/webapp_exp_modifying', 'hyperdatasets/webapp/webapp_exp_comparing',
- ]
- },
- {'Datasets': [
- 'hyperdatasets/webapp/webapp_datasets',
- 'hyperdatasets/webapp/webapp_datasets_versioning',
- 'hyperdatasets/webapp/webapp_datasets_frames'
- ]
- },
- 'hyperdatasets/webapp/webapp_annotator'
+ {'Enterprise Hyper-Datasets': [
+ {'Hyper-Dataset': [
+ 'references/hyperdataset/hyperdataset',
+ 'references/hyperdataset/hyperdatasetversion'
+ ]},
+ {'DataFrame': [
+ 'references/hyperdataset/singleframe',
+ 'references/hyperdataset/framegroup',
+ 'references/hyperdataset/annotation',
+ ]},
+ 'references/hyperdataset/dataview',
+ ]},
+ ]},
+ {'CLI Tools': [
+ 'apps/clearml_task',
+ {type: 'ref', id: 'clearml_data/clearml_data_cli'},
+ 'apps/clearml_param_search',
+ {type: 'ref', id: 'apps/clearml_session'},
+ {type: 'ref', id: 'clearml_serving/clearml_serving_cli'},
+    ]},
+ {'ClearML Agent': [
+ 'clearml_agent/clearml_agent_ref', 'clearml_agent/clearml_agent_env_var'
+ ]},
+ {
+ type: 'category',
+ collapsible: true,
+ collapsed: true,
+ label: 'Client Configuration',
+ link: {type: 'doc', id: 'configs/configuring_clearml'},
+ items: [
+ 'configs/clearml_conf',
+ 'configs/env_vars'
+ ]
+ },
+ {'Server API': [
+ 'references/api/index',
+ 'references/api/definitions',
+ 'references/api/login',
+ 'references/api/debug',
+ 'references/api/projects',
+ 'references/api/queues',
+ 'references/api/workers',
+ 'references/api/events',
+ 'references/api/models',
+ 'references/api/tasks',
+ ]},
+ {
+ type: 'category',
+ collapsible: true,
+ collapsed: true,
+ label: 'WebApp',
+ link: {type: 'doc', id: 'webapp/webapp_overview'},
+ items: [
+ 'webapp/webapp_home',
+ {'Projects': [
+ 'webapp/webapp_projects_page',
+ 'webapp/webapp_project_overview',
+ {'Tasks': [
+ 'webapp/webapp_exp_table',
+ 'webapp/webapp_exp_track_visual',
+ 'webapp/webapp_exp_reproducing',
+ 'webapp/webapp_exp_tuning',
+ 'webapp/webapp_exp_comparing'
+ ]},
+ {'Models': [
+ 'webapp/webapp_model_table',
+ 'webapp/webapp_model_viewing',
+ 'webapp/webapp_model_comparing'
+ ]},
+ {'Dataviews': [
+ 'hyperdatasets/webapp/webapp_dataviews',
+ 'hyperdatasets/webapp/webapp_exp_track_visual',
+ 'hyperdatasets/webapp/webapp_exp_modifying',
+ 'hyperdatasets/webapp/webapp_exp_comparing'
+ ]},
+ 'webapp/webapp_exp_sharing'
+ ]},
+ {'Datasets': [
+ 'webapp/datasets/webapp_dataset_page',
+ 'webapp/datasets/webapp_dataset_viewing'
+ ]},
+ {'Hyper-Datasets': [
+ 'hyperdatasets/webapp/webapp_datasets',
+ 'hyperdatasets/webapp/webapp_datasets_versioning',
+ 'hyperdatasets/webapp/webapp_datasets_frames',
+ 'hyperdatasets/webapp/webapp_annotator'
+ ]},
+ {'Pipelines': [
+ 'webapp/pipelines/webapp_pipeline_page',
+ 'webapp/pipelines/webapp_pipeline_table',
+ 'webapp/pipelines/webapp_pipeline_viewing'
+ ]},
+ 'webapp/webapp_model_endpoints',
+ 'webapp/webapp_reports',
+ {
+ type: 'category',
+ collapsible: true,
+ collapsed: true,
+ label: 'Orchestration',
+ link: {type: 'doc', id: 'webapp/webapp_workers_queues'},
+ items: [
+ 'webapp/webapp_orchestration_dash',
+ {
+ type: 'category',
+ collapsible: true,
+ collapsed: true,
+ label: 'Autoscalers',
+ items: [
+ 'webapp/applications/apps_aws_autoscaler',
+ 'webapp/applications/apps_gcp_autoscaler',
+ ]
+ },
+ 'webapp/resource_policies'
+ ]
+ },
+ {
+ type: 'category',
+ collapsible: true,
+ collapsed: true,
+ label: 'ClearML Applications',
+ link: {type: 'doc', id: 'webapp/applications/apps_overview'},
+ items: [
+ {"General": [
+ 'webapp/applications/apps_hpo',
+ 'webapp/applications/apps_dashboard',
+ 'webapp/applications/apps_task_scheduler',
+ 'webapp/applications/apps_trigger_manager',
+ ]},
+ {"AI Dev": [
+ 'webapp/applications/apps_ssh_session',
+ 'webapp/applications/apps_jupyter_lab',
+ 'webapp/applications/apps_vscode',
+ ]},
+ {"UI Dev": [
+ 'webapp/applications/apps_gradio',
+ 'webapp/applications/apps_streamlit'
+ ]},
+ {"Deploy": [
+ 'webapp/applications/apps_embed_model_deployment',
+ 'webapp/applications/apps_model_deployment',
+ 'webapp/applications/apps_llama_deployment'
+ ]},
+ ]
+ },
+ {
+ type: 'category',
+ collapsible: true,
+ collapsed: true,
+ label: 'Settings',
+ link: {type: 'doc', id: 'webapp/settings/webapp_settings_overview'},
+ items: [
+ 'webapp/settings/webapp_settings_profile',
+ 'webapp/settings/webapp_settings_admin_vaults',
+ 'webapp/settings/webapp_settings_users',
+ 'webapp/settings/webapp_settings_access_rules',
+ 'webapp/settings/webapp_settings_id_providers',
+ 'webapp/settings/webapp_settings_resource_configs',
+ 'webapp/settings/webapp_settings_usage_billing',
+ 'webapp/settings/webapp_settings_storage_credentials'
+ ]
+ },
]
},
- 'hyperdatasets/code_examples'
],
- sdkHyperDataset: [
- {'Hyper-Dataset': ['references/hyperdataset/hyperdataset', 'references/hyperdataset/hyperdatasetversion']},
- {'DataFrame': ['references/hyperdataset/singleframe',
- 'references/hyperdataset/framegroup', 'references/hyperdataset/annotation',]},
- 'references/hyperdataset/dataview',
+ installationSidebar: [
+ 'clearml_sdk/clearml_sdk_setup',
+ {
+ type: 'category',
+ collapsible: true,
+ collapsed: true,
+ label: 'ClearML Agent',
+ items: [
+ 'clearml_agent/clearml_agent_setup',
+ {
+ 'Deployment': [
+ 'clearml_agent/clearml_agent_deployment_bare_metal',
+ 'clearml_agent/clearml_agent_deployment_k8s',
+ 'clearml_agent/clearml_agent_deployment_slurm',
+ ]
+ },
+ 'clearml_agent/clearml_agent_execution_env',
+ 'clearml_agent/clearml_agent_env_caching',
+ 'clearml_agent/clearml_agent_services_mode',
+ ]
+ },
+ {
+ type: 'doc',
+ label: 'Configuring Client Storage Access',
+ id: 'integrations/storage',
+ },
+ {
+ type: 'category',
+ collapsible: true,
+ collapsed: true,
+ label: 'Open Source Server',
+ link: {type: 'doc', id: 'deploying_clearml/clearml_server'},
+ items: [
+ {'Deployment Options': [
+ 'deploying_clearml/clearml_server_aws_ec2_ami',
+ 'deploying_clearml/clearml_server_gcp',
+ 'deploying_clearml/clearml_server_linux_mac',
+ 'deploying_clearml/clearml_server_win',
+ 'deploying_clearml/clearml_server_kubernetes_helm'
+ ]},
+ 'deploying_clearml/clearml_server_config',
+ 'deploying_clearml/clearml_server_security',
+ {'Server Upgrade Procedures': [
+ 'deploying_clearml/upgrade_server_aws_ec2_ami',
+ 'deploying_clearml/upgrade_server_gcp',
+ 'deploying_clearml/upgrade_server_linux_mac',
+ 'deploying_clearml/upgrade_server_win',
+ 'deploying_clearml/upgrade_server_kubernetes_helm',
+ 'deploying_clearml/clearml_server_es7_migration',
+ 'deploying_clearml/clearml_server_mongo44_migration'
+ ]},
+ ]
+ },
+/* {'Getting Started': [
+ 'getting_started/architecture',
+ ]},*/
+ {
+ 'Enterprise Server Deployment': [
+ 'deploying_clearml/enterprise_deploy/multi_tenant_k8s',
+ 'deploying_clearml/enterprise_deploy/vpc_aws',
+ 'deploying_clearml/enterprise_deploy/on_prem_ubuntu',
+ ]
+ },
+ {
+ type: 'category',
+ collapsible: true,
+ collapsed: true,
+ label: 'ClearML Application Gateway',
+ items: [
+ 'deploying_clearml/enterprise_deploy/appgw_install_compose',
+ 'deploying_clearml/enterprise_deploy/appgw_install_k8s',
+ ]
+ },
+ 'deploying_clearml/enterprise_deploy/custom_billing',
+ 'deploying_clearml/enterprise_deploy/delete_tenant',
+ 'deploying_clearml/enterprise_deploy/import_projects',
+ 'deploying_clearml/enterprise_deploy/change_artifact_links',
+ {
+ 'Enterprise Applications': [
+ 'deploying_clearml/enterprise_deploy/app_install_ubuntu_on_prem',
+ 'deploying_clearml/enterprise_deploy/app_install_ex_server',
+ 'deploying_clearml/enterprise_deploy/app_custom',
+ ]
+ },
+ {
+ 'User Management': [
+ 'user_management/user_groups',
+ 'user_management/access_rules',
+ 'user_management/admin_vaults',
+ {
+ type: 'category',
+ collapsible: true,
+ collapsed: true,
+ label: 'Identity Provider Integration',
+ link: {type: 'doc', id: 'user_management/identity_providers'},
+ items: [
+ 'deploying_clearml/enterprise_deploy/sso_multi_tenant_login',
+ 'deploying_clearml/enterprise_deploy/sso_saml_k8s',
+ 'deploying_clearml/enterprise_deploy/sso_keycloak',
+ 'deploying_clearml/enterprise_deploy/sso_active_directory'
+ ]
+ },
+ ]
+ },
],
- apiSidebar: [
- 'references/api/index',
- 'references/api/definitions',
- 'references/api/login',
- 'references/api/debug',
- 'references/api/projects',
- 'references/api/queues',
- 'references/api/workers',
- 'references/api/events',
- 'references/api/models',
- 'references/api/tasks',
+ bestPracticesSidebar: [
+ {
+ type: 'category',
+ collapsible: true,
+ label: 'Best Practices',
+ items: [
+ {
+ type: 'doc',
+ label: 'Data Scientists',
+ id: 'best_practices/data_scientist_best_practices'
+ },
+ {
+ type: 'doc',
+ label: 'MLOps and LLMOps',
+ id: 'best_practices/mlops_best_practices'
+ },
+ {
+ type: 'doc',
+ label: 'Data Management',
+ id: 'best_practices/data_best_practices'
+ },
+ ],
+ },
]
};
diff --git a/src/css/custom.css b/src/css/custom.css
index e428974d..160fad9e 100644
--- a/src/css/custom.css
+++ b/src/css/custom.css
@@ -29,7 +29,7 @@ html {
--ifm-color-primary-light: #17c5a2;
--ifm-color-primary-lighter: #2edfbb;
- --ifm-color-primary-lightest: #51f1d1;
+ --ifm-color-primary-lightest: #AEFDED;
--ifm-toc-background-color: #141722;
--ifm-code-font-size: 95%;
@@ -46,16 +46,24 @@ html {
--ifm-code-padding-vertical: 0.2rem;
}
-html[data-theme="dark"] {
- --ifm-background-color: #1a1e2c;
- --ifm-footer-background-color: #1a1e2c;
- --ifm-footer-link-color: #a4a5aa;
- --ifm-footer-link-hover-color: #14aa8c;
- --ifm-dropdown-background-color: #2c3246;
- --ifm-table-stripe-background: #141722;
- --ifm-link-color: var(--ifm-color-primary-light);
+[data-theme=dark]:root {
+ --ifm-background-color: #040506; /* body bg */
+ --ifm-header-background-color: #101418; /* section 1 */
+ --ifm-footer-background-color: #101418; /* section 1 */
+ --ifm-footer-link-color: #D8FFF0; /* specific footer link color */
+ --ifm-footer-link-hover-color: #ffffff; /* specific footer link hover color */
+ --ifm-dropdown-background-color: #242D37; /* section 2 */
+ --ifm-table-stripe-background: #101418; /* section 1 */
+ --ifm-link-color: #6AD6C0; /* specific link color */
+ --ifm-link-hover-color: #AEFDED; /* specific link hover color */
+ --ifm-font-color-base: #E5E5E5; /* body text */
+  --ifm-hr-background-color: #242D37; /* section 2 */
+ --ifm-toc-link-color: #E5E5E5; /* body text */
+ --ifm-toc-background-color: #242D37; /* section 2 */
+ --ifm-code-background: #242D37; /* section 2 */
}
+
@media (min-width: 1400px) {
/* Expand sidebar width above 1400px */
html[data-theme="light"],
@@ -70,7 +78,7 @@ a {
}
html[data-theme="dark"] a:hover {
- color: var(--ifm-color-primary-lightest);
+ color: var(--ifm-color-primary-lightest);
}
.align-center {
@@ -151,12 +159,16 @@ html[data-theme="dark"] div[role="banner"] {
background-color: #09173C;
}
html[data-theme="dark"] .navbar--dark {
- background-color: #151722;
+ background-color: var(--ifm-header-background-color);
}
.navbar--dark.navbar .navbar__toggle {
color: white; /* opener icon color */
}
+html[data-theme="dark"] .navbar__link:hover,
+html[data-theme="dark"] .navbar__link--active {
+ color: var(--ifm-link-color);
+}
/* ===HEADER=== */
@@ -374,7 +386,7 @@ html[data-theme="light"] [class^="sidebarLogo"] > img {
html[data-theme="dark"] .menu__link--active {
- color: var(--ifm-color-primary-lighter);
+ color: var(--ifm-link-color);
}
html[data-theme="light"] .menu__link:not(.menu__link--active) {
color: #606a78;
@@ -460,11 +472,13 @@ html[data-theme="light"] .table-of-contents {
box-shadow: 0 0 0 2px rgba(0,0,0,0.1) inset;
}
html[data-theme="dark"] .table-of-contents {
- background-color: var(--ifm-toc-background-color);
box-shadow: 0 0 0 2px rgba(0,0,0,0.4) inset;
}
html[data-theme="dark"] a.table-of-contents__link--active {
- color: var(--ifm-color-primary-light);
+ color: var(--ifm-link-color);
+}
+html[data-theme="dark"] .table-of-contents a:hover {
+ color: var(--ifm-color-primary-lightest);
}
.table-of-contents__left-border {
border:none;
@@ -481,9 +495,6 @@ a.table-of-contents__link--active:before {
border-left: 6px solid var(--ifm-color-primary);
transform: translateY(5px);
}
-html[data-theme="light"] .table-of-contents__link:not(.table-of-contents__link--active) {
- color: rgba(0,0,0,0.9);
-}
/* toc: show "..." inside code tag */
.table-of-contents code {
@@ -564,7 +575,7 @@ html[data-theme="light"] .footer__link-item[href*="stackoverflow"] {
html[data-theme="dark"] .footer__link-item:hover {
- color: var(--ifm-color-primary-lighter);
+ color: var(--ifm-footer-link-hover-color);
}
@@ -719,15 +730,37 @@ html[data-theme="light"] .icon {
/* md heading style */
+/* */
+html[data-theme="light"] h2 {
+ color: #0b2471;
+}
+html[data-theme="light"] h2 a.hash-link {
+ color: #0b2471;
+}
+
+html[data-theme="dark"] h2 {
+ color: #A8C5E6;
+}
+html[data-theme="dark"] h2 a.hash-link {
+ color: #A8C5E6;
+}
+
/* */
.markdown h3 {
font-size: 1.6rem;
}
html[data-theme="light"] h3 {
- color: var(--ifm-color-primary-darker);
+ color: #a335d5;
}
+html[data-theme="light"] h3 a.hash-link {
+ color: #a335d5;
+}
+
html[data-theme="dark"] h3 {
- color: var(--ifm-color-primary-lightest);
+ color: #DAA5BF;
+}
+html[data-theme="dark"] h3 a.hash-link {
+ color: #DAA5BF;
}
/* */
@@ -736,20 +769,19 @@ html[data-theme="dark"] h3 {
margin-bottom: 8px;
margin-top: 42px;
}
+
html[data-theme="light"] h4 {
- color: #62b00d;
+ color: #242D37;
}
+html[data-theme="light"] h4 a.hash-link {
+ color: #242D37;
+}
+
html[data-theme="dark"] h4 {
- color: #83de1f;
+ color: #c7cdd2;
}
-
-
-/*
*/
-.markdown hr {
- border-bottom: none;
-}
-html[data-theme="dark"] .markdown hr {
- border-color: rgba(255,255,255,0.1);
+html[data-theme="dark"] h4 a.hash-link {
+ color: #c7cdd2;
}