Merge branch 'clearml:main' into main
@@ -56,7 +56,7 @@ error, you are good to go.
1. The session Task is enqueued in the selected queue, and a ClearML Agent pulls and executes it. The agent downloads the appropriate IDE(s) and
   launches it.
1. Once the agent finishes the initial setup of the interactive Task, the local `clearml-session` connects to the host
   machine via SSH, and tunnels both SSH and IDE over the SSH connection. If a container is specified, the
   IDE environment runs inside it.
@@ -9,7 +9,8 @@ See [Hyper-Datasets](../hyperdatasets/overview.md) for ClearML's advanced querya

The following are some recommendations for using ClearML Data.

## Versioning Datasets

@@ -18,7 +18,7 @@ If you are afraid of clutter, use the archive option, and set up your own [clean

## Clone Tasks
Define a ClearML Task with one of the following options:
- Run the actual code with the `Task.init()` call. This will create and auto-populate the Task in ClearML (including Git repo / Python packages / command line, etc.).
- Register a local / remote code repository with `clearml-task`. See [details](../apps/clearml_task.md).

Once you have a Task in ClearML, you can clone and edit its definitions in the UI, then launch it on one of your nodes with [ClearML Agent](../clearml_agent.md).
@@ -1,8 +1,9 @@
---
title: Dynamic GPU Allocation
---

:::important Enterprise Feature
Dynamic GPU allocation is available under the ClearML Enterprise plan.
:::

The ClearML Enterprise server supports dynamic allocation of GPUs based on queue properties.

@@ -414,7 +414,7 @@ These settings define which Docker image and arguments should be used unless [ex
* **`agent.default_docker.match_rules`** (*[dict]*)

:::important Enterprise Feature
The `match_rules` configuration option is available under the ClearML Enterprise plan.
:::

* Lookup table of rules that determine the default container and arguments when running a worker in Docker mode. The
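To make the idea concrete, a `match_rules` entry pairs a container image (and optional arguments) with conditions on the incoming task. The sketch below is illustrative only; the field names and condition keys are assumptions, and the authoritative schema is in the Enterprise configuration reference:

```
agent {
  default_docker {
    image: "python:3.9-bullseye"
    match_rules: [
      {
        image: "nvidia/cuda:11.8.0-base-ubuntu22.04"
        arguments: "--ipc=host"
        # hypothetical condition: use the CUDA image for tasks requiring torch
        match: {
          script: {
            requirements: {
              pip: { torch: ">=1.12" }
            }
          }
        }
      }
    ]
  }
}
```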
@@ -1599,7 +1599,7 @@ sdk {
## Configuration Vault

:::important Enterprise Feature
Configuration vaults are available under the ClearML Enterprise plan.
:::

The ClearML Enterprise Server includes the configuration vault. Users can add configuration sections to the vault and, once
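As an illustration, a vault section uses the same syntax as `clearml.conf`. A typical sketch (assuming the standard `sdk.aws.s3` settings; the placeholder values are yours to fill in) might hold shared storage credentials:

```
sdk {
  aws {
    s3 {
      key: "<S3 access key>"
      secret: "<S3 secret key>"
    }
  }
}
```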
@@ -422,7 +422,7 @@ options.
### Custom UI Context Menu Actions

:::important Enterprise Feature
Custom UI context menu actions are available under the ClearML Enterprise plan.
:::

Create custom UI context menu actions to be performed on ClearML objects (projects, tasks, models, dataviews, or queues)
@@ -2,6 +2,10 @@
title: Installing External Applications Server
---

:::important Enterprise Feature
UI application deployment is available under the ClearML Enterprise plan.
:::

ClearML supports applications, which are extensions that provide additional capabilities, such as cloud auto-scaling,
hyperparameter optimization, etc. For more information, see [ClearML Applications](../../webapp/applications/apps_overview.md).

@@ -2,6 +2,10 @@
title: Application Installation on On-Prem and VPC Servers
---

:::important Enterprise Feature
UI application deployment is available under the ClearML Enterprise plan.
:::

ClearML Applications are like plugins that allow you to manage ML workloads and automatically run recurring workflows
without any coding. Applications are installed on top of the ClearML Server.

@@ -3,7 +3,7 @@ title: AI Application Gateway
---

:::important Enterprise Feature
The AI Application Gateway is available under the ClearML Enterprise plan.
:::

Services running through a cluster orchestrator such as Kubernetes or a cloud hyperscaler require meticulous configuration
@@ -1,4 +1,10 @@
---
title: Docker-Compose Deployment
---

:::important Enterprise Feature
The Application Gateway is available under the ClearML Enterprise plan.
:::

## Requirements

@@ -1,4 +1,10 @@
---
title: Kubernetes Deployment
---

:::important Enterprise Feature
The Application Gateway is available under the ClearML Enterprise plan.
:::

This guide details the installation of the ClearML AI Application Gateway, specifically the ClearML Task Router Component.

@@ -0,0 +1,78 @@
---
title: Changing ClearML Artifacts Links
---

This guide describes how to update artifact references in the ClearML Enterprise server.

By default, artifacts are stored on the file server; however, external storage such as AWS S3, MinIO, Google Cloud
Storage, etc. may be used to store artifacts. References to these artifacts may exist in the ClearML databases: MongoDB and ElasticSearch.
This procedure should be used if external storage is being migrated to a different location or URL.

:::important
This procedure does not deal with the actual migration of the data--only with changing the references in ClearML that
point to the data.
:::

## Preparation

### Version Confirmation

To change the links, use the `fix_fileserver_urls.py` script, located inside the `allegro-apiserver`
Docker container. This script is executed from within the `apiserver` container. Make sure the `apiserver` version
is 3.20 or higher.

### Backup

It is highly recommended to back up the ClearML MongoDB and ElasticSearch databases before running the script, as the
script changes values in the databases, and this cannot be undone.

## Fixing MongoDB Links

1. Access the `apiserver` Docker container:
   * In `docker-compose`:

     ```commandline
     sudo docker exec -it allegro-apiserver /bin/bash
     ```

   * In Kubernetes:

     ```commandline
     kubectl exec -it -n clearml <clearml-apiserver-pod-name> -- bash
     ```

1. Navigate to the script location in the `upgrade` folder:

   ```commandline
   cd /opt/seematics/apiserver/server/upgrade
   ```

1. Run the following command:

   :::important
   Before running the script, verify that this is indeed the correct version (`apiserver` v3.20 or higher,
   or that the script provided by ClearML was copied into the container).
   :::

   ```commandline
   python3 fix_fileserver_urls.py \
     --mongo-host mongodb://mongo:27017 \
     --elastic-host elasticsearch:9200 \
     --host-source "<old fileserver host and/or port, as in artifact links>" \
     --host-target "<new fileserver host and/or port>" --datasets
   ```

:::note Notes
* If the MongoDB or ElasticSearch services are accessed from the `apiserver` container using custom addresses, update the
  `--mongo-host` and `--elastic-host` arguments accordingly.
* If ElasticSearch is set up to require authentication, pass the user and password with the following arguments:
  `--elastic-user <es_user> --elastic-password <es_pass>`
:::

The script fixes the links in MongoDB, and outputs `cURL` commands for updating the links in ElasticSearch.
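Conceptually, the host replacement applied to each stored URL amounts to a simple host swap. The sketch below illustrates that idea only; it is not the script's actual code:

```python
def rewrite_url(url: str, host_source: str, host_target: str) -> str:
    """Swap the old fileserver host for the new one in a stored artifact URL.

    Only the first occurrence is replaced, so the path portion stays intact.
    """
    return url.replace(host_source, host_target, 1)

print(rewrite_url("http://old-fs:8081/proj/task/artifact.bin", "old-fs:8081", "new-fs:8081"))
```

URLs that do not reference the old host (for example `s3://` links to external storage) are left unchanged by such a swap, which matches the scope of this procedure.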

## Fixing the ElasticSearch Links

Copy the `cURL` commands printed by the script in the previous stage, and run them one after the other. Make sure to
verify that a "success" result was returned by each command. Depending on the amount of data in ElasticSearch,
running these commands may take some time.
docs/deploying_clearml/enterprise_deploy/custom_billing.md (new file)
@@ -0,0 +1,122 @@
---
title: Custom Billing Events
---

:::important Enterprise Feature
Sending custom billing events is available under the ClearML Enterprise plan.
:::

ClearML supports sending custom events to selected Kafka topics. Event sending is triggered by API calls and
is available only for companies with the `custom_events` settings set.

## Enabling Custom Events in ClearML Server

:::important Prerequisite
The customer's Kafka service for custom events must be installed and reachable from the `apiserver`.
:::

Set the following environment variables in the ClearML Enterprise helm chart under `apiserver.extraEnv`:

* Enable custom events:

  ```
  - name: CLEARML__services__custom_events__enabled
    value: "true"
  ```
* Mount the custom message template files into the `/mnt/custom_events/templates` folder in the `apiserver` container and point
  the `apiserver` to it:

  ```
  - name: CLEARML__services__custom_events__template_folder
    value: "/mnt/custom_events/templates"
  ```
* Configure the Kafka host for sending events:

  ```
  - name: CLEARML__hosts__kafka__custom_events__host
    value: "[<KAFKA host address:port>]"
  ```
* Configure the Kafka security parameters. Below is an example for SASL plaintext security:

  ```
  - name: CLEARML__SECURE__KAFKA__CUSTOM_EVENTS__security_protocol
    value: "SASL_PLAINTEXT"
  - name: CLEARML__SECURE__KAFKA__CUSTOM_EVENTS__sasl_mechanism
    value: "SCRAM-SHA-512"
  - name: CLEARML__SECURE__KAFKA__CUSTOM_EVENTS__sasl_plain_username
    value: "<username>"
  - name: CLEARML__SECURE__KAFKA__CUSTOM_EVENTS__sasl_plain_password
    value: "<password>"
  ```
* Define the Kafka topics for lifecycle and inventory messages:

  ```
  - name: CLEARML__services__custom_events__channels__main__topics__service_instance_lifecycle
    value: "lifecycle"
  - name: CLEARML__services__custom_events__channels__main__topics__service_instance_inventory
    value: "inventory"
  ```
* For the desired companies, set up the custom events properties required by the event message templates:

  ```
  curl $APISERVER_URL/system.update_company_custom_events_config -H "Content-Type: application/json" -u $APISERVER_KEY:$APISERVER_SECRET -d'{
    "company": "<company_id>",
    "fields": {
      "service_instance_id": "<value>",
      "service_instance_name": "<value>",
      "service_instance_customer_tenant_name": "<value>",
      "service_instance_customer_space_name": "<value>",
      "service_instance_customer_space_id": "<value>",
      "parameters_connection_points": ["<value1>", "<value2>"]
    }}'
  ```

## Sending Custom Events to the API Server

:::important Prerequisite
A dedicated custom-events Redis instance must be installed and reachable from all the custom events deployments.
:::

Environment lifecycle events are sent directly by the `apiserver`. Other event types are emitted by the following helm charts:

* `clearml-pods-monitor-exporter` - Monitors running pods and sends container lifecycle events. One instance should run per cluster, with a unique identifier (a UUID is required for the installation):

  ```
  # -- Unique string identifying the Pods Monitor instance across worker clusters. It cannot be empty.
  # Uniqueness is required across different cluster installations to preserve the reported data status.
  podsMonitorUUID: "<Unique ID>"
  # Interval between checks
  checkIntervalSeconds: 60
  ```
* `clearml-pods-inventory` - Periodically sends inventory events about running pods.

  ```
  # Cron schedule - https://crontab.guru/
  cronJob:
    schedule: "@daily"
  ```
* `clearml-company-inventory` - Monitors ClearML companies and sends environment inventory events.

  ```
  # Cron schedule - https://crontab.guru/
  cronJob:
    schedule: "@daily"
  ```

For every script chart, add the configuration below to enable Redis access and connection to the `apiserver`:

```
clearml:
  apiServerUrlReference: "<APISERVER_URL>"
  apiServerKey: "<APISERVER_KEY>"
  apiServerSecret: "<APISERVER_SECRET>"
  redisConnection:
    host: "<REDIS_HOST>"
    port: <REDIS_PORT>
    password: "<REDIS_PWD>"
```

To see all other available options for customizing the `custom-events` charts, run:

```
helm show readme allegroai-enterprise/<CHART_NAME>
```
docs/deploying_clearml/enterprise_deploy/import_projects.md (new file)
@@ -0,0 +1,240 @@
---
title: Project Migration
---

When migrating from a ClearML Open Server to a ClearML Enterprise Server, you may need to transfer projects. This is done
using the `data_tool.py` script. This utility is available in the `apiserver` Docker image, and can be used for
exporting and importing ClearML project data for both open source and Enterprise versions.

This guide covers the following:
* Exporting data from Open Source and Enterprise servers
* Importing data into an Enterprise server
* Handling the artifacts stored on the file server

:::note
Export instructions differ for ClearML Open and Enterprise servers. Make sure you follow the guidelines that match your
server type.
:::

## Exporting Data

The export process is done by running the `data_tool` script, which generates a zip file containing project and task
data. This file should then be copied to the server on which the import will run.

Note that artifacts stored in the ClearML file server should be copied manually if required (see [Handling Artifacts](#handling-artifacts)).

### Exporting Data from ClearML Open Servers

#### Preparation

* Make sure the `apiserver` is at least Open Source server version 1.12.0.
* Note that any `pending` or `running` tasks will not be exported. If you wish to export them, make sure to stop/dequeue
  them before exporting.

#### Running the Data Tool

Execute the data tool from within the `apiserver` container.

Open a bash session inside the `apiserver` container of the server:
* In `docker-compose`:

  ```commandline
  sudo docker exec -it clearml-apiserver /bin/bash
  ```

* In Kubernetes:

  ```commandline
  kubectl exec -it -n <clearml-namespace> <clearml-apiserver-pod-name> -- bash
  ```

#### Export Commands

**To export specific projects:**

```commandline
python3 -m apiserver.data_tool export --projects <project_id1> <project_id2> \
  --statuses created stopped published failed completed --output <output-file-name>.zip
```

As a result, you should get a `<output-file-name>.zip` file that contains all the data from the specified projects and
their children.

**To export all the projects:**

```commandline
python3 -m apiserver.data_tool export \
  --all \
  --statuses created stopped published failed completed \
  --output <output-file-name>.zip
```

#### Optional Parameters

* `--experiments <list of experiment IDs>` - If not specified, all experiments from the specified projects are exported
* `--statuses <list of task statuses>` - Export tasks of specific statuses. If the parameter
  is omitted, only `published` tasks are exported
* `--no-events` - Do not export task events, i.e. logs and metrics (scalars, plots, debug samples)

Make sure to copy the generated zip file containing the exported data.

### Exporting Data from ClearML Enterprise Servers

#### Preparation

* Make sure the `apiserver` is at least Enterprise Server version 3.18.0.
* Note that any `pending` or `running` tasks will not be exported. If you wish to export them, make sure to stop/dequeue
  them before exporting.

#### Running the Data Tool

Execute the data tool from within the `apiserver` Docker container.

Open a bash session inside the `apiserver` container of the server:
* In `docker-compose`:

  ```commandline
  sudo docker exec -it allegro-apiserver /bin/bash
  ```

* In Kubernetes:

  ```commandline
  kubectl exec -it -n <clearml-namespace> <clearml-apiserver-pod-name> -- bash
  ```

#### Export Commands

**To export specific projects:**

```commandline
PYTHONPATH=/opt/seematics/apiserver/trains-server-repo python3 data_tool.py \
  export \
  --projects <project_id1> <project_id2> \
  --statuses created stopped published failed completed \
  --output <output-file-name>.zip
```

As a result, you should get a `<output-file-name>.zip` file that contains all the data from the specified projects and
their children.

**To export all the projects:**

```commandline
PYTHONPATH=/opt/seematics/apiserver/trains-server-repo python3 data_tool.py \
  export \
  --all \
  --statuses created stopped published failed completed \
  --output <output-file-name>.zip
```

#### Optional Parameters

* `--experiments <list of experiment IDs>` - If not specified, all experiments from the specified projects are exported
* `--statuses <list of task statuses>` - Export tasks of specific statuses. If the parameter is
  omitted, only `published` tasks are exported
* `--no-events` - Do not export task events, i.e. logs and metrics (scalars, plots, debug samples)

Make sure to copy the generated zip file containing the exported data.

## Importing Data

This section explains how to import the exported data into a ClearML Enterprise server.

### Preparation

* It is highly recommended to back up the ClearML databases before importing data, as the import injects data into the
  databases, and this cannot be undone.
* Make sure you are working with `apiserver` version 3.22.3 or higher.
* Make the zip file accessible from within the `apiserver` container by copying the exported data to the
  `apiserver` container or to a folder on the host that the `apiserver` is mounted to.

### Usage

The data tool should be executed from within the `apiserver` Docker container.

1. Open a bash session inside the `apiserver` container of the server:
   * In `docker-compose`:

     ```commandline
     sudo docker exec -it allegro-apiserver /bin/bash
     ```

   * In Kubernetes:

     ```commandline
     kubectl exec -it -n <clearml-namespace> <clearml-apiserver-pod-name> -- bash
     ```

1. Run the data tool script in *import* mode:

   ```commandline
   PYTHONPATH=/opt/seematics/apiserver/trains-server-repo python3 data_tool.py \
     import \
     <path to zip file> \
     --company <company_id> \
     --user <user_id>
   ```

   * `company_id` - The default company ID used in the target deployment. Inside the `apiserver` container you can
     usually get it from the environment variable `CLEARML__APISERVER__DEFAULT_COMPANY`.
     If you do not specify the `--company` parameter, all the data will be imported as `Examples` (read-only)
   * `user_id` - The ID of the user in the target deployment who will become the owner of the imported data

## Handling Artifacts

***Artifacts*** refers to any content that the ClearML server holds references to. This can include:
* Dataset or Hyper-Dataset frame URLs
* ClearML artifact URLs
* Model snapshots
* Debug samples

Artifacts may be stored in any external storage (e.g. AWS S3, MinIO, Google Cloud Storage) or in the ClearML file server.
* If the artifacts are **not** stored in the ClearML file server, they do not need to be moved during the export/import process,
  as the URLs registered in ClearML entities pointing to these artifacts will not change.
* If the artifacts are stored in the ClearML file server, then the file server content must also be moved, and the URLs
  in the ClearML databases must point to the new location. See instructions [below](#exporting-file-server-data-for-clearml-open-server).

### Exporting File Server Data for ClearML Open Server

Data in the file server is organized by project. For each project, all data referenced by entities in that project is
stored in a folder bearing the name of the project. This folder can be located in:

```
/opt/clearml/data/fileserver/<project name>
```

The entire content of the project folders should be copied to the target server (see [Importing File Server Data](#importing-file-server-data)).

### Exporting File Server Data for ClearML Enterprise Server

Data in the file server is organized by tenant and project. For each project, all data referenced by entities in that
project is stored in a folder bearing the name of the project. This folder can be located in:

```
/opt/allegro/data/fileserver/<company_id>/<project name>
```

The entire content of the project folders should be copied to the target server (see [Importing File Server Data](#importing-file-server-data)).

## Importing File Server Data

### Copying the Data

Place the content of the exported project folder(s) into the target file server's storage in the following folder:

```
/opt/allegro/data/fileserver/<company_id>/<project name>
```
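The folder mapping described above, from the open-server layout to the Enterprise layout, can be sketched as follows (a pure path illustration under the layouts stated above; the actual copy is done with your regular file tools):

```python
from pathlib import PurePosixPath

def enterprise_fileserver_path(open_server_path: str, company_id: str) -> str:
    """Map an open-server fileserver project folder to its Enterprise location.

    Assumes the layouts described above:
      open server: /opt/clearml/data/fileserver/<project name>
      Enterprise:  /opt/allegro/data/fileserver/<company_id>/<project name>
    """
    project = PurePosixPath(open_server_path).name
    return str(PurePosixPath("/opt/allegro/data/fileserver") / company_id / project)

print(enterprise_fileserver_path("/opt/clearml/data/fileserver/my_project", "abc123"))
```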
### Fixing Registered URLs

Since URLs pointing to the file server contain the file server's address, these need to be changed to the address of the
new file server.

Note that this is not required if the new file server is replacing the old file server and can be accessed using the
exact same address.

Once the project data has been copied to the target server, and the projects themselves have been imported, see
[Changing ClearML Artifacts Links](change_artifact_links.md) for information on how to fix the URLs.

@@ -0,0 +1,98 @@
---
title: Multi-Tenant Login Mode
---

In a multi-tenant setup, each external tenant can be represented by an SSO client defined in the customer's Identity Provider
(Keycloak). Each ClearML tenant can be associated with a particular external tenant. Currently, only one
ClearML tenant can be associated with a particular external tenant.

## Set Up the SSO Client in the Identity Provider

1. Add the following URL to "Valid redirect URIs": `<clearml_webapp_address>/callback_<client_id>`
2. Add the following URLs to "Valid post logout redirect URIs":

   ```
   <clearml_webapp_address>/login
   <clearml_webapp_address>/login/<external tenant ID>
   ```
3. Make sure the external tenant ID and groups are returned as claims for each user

## Configure ClearML to Use Multi-Tenant Mode

Set the following environment variables in the ClearML Enterprise helm chart under the `apiserver` section:
* To turn on the multi-tenant login mode:

  ```
  - name: CLEARML__services__login__sso__tenant_login
    value: "true"
  ```
* To hide any global IdP/SSO configuration that's not associated with a specific ClearML tenant:

  ```
  - name: CLEARML__services__login__sso__allow_settings_providers
    value: "false"
  ```

Enable `onlyPasswordLogin` by setting the following environment variable in the helm chart under the `webserver` section:

```
- name: WEBSERVER__onlyPasswordLogin
  value: "true"
```

## Set Up an IdP for a ClearML Tenant

To set an IdP client for a ClearML tenant, you need to set the ClearML tenant settings and define an identity provider:

1. Call the following API to set the ClearML tenant settings:

   ```
   curl $APISERVER_URL/system.update_company_sso_config -H "Content-Type: application/json" -u $APISERVER_KEY:$APISERVER_SECRET -d'{
     "company": "<company_id>",
     "sso": {
       "tenant": "<external tenant ID>",
       "group_mapping": {
         "IDP group name1": "Clearml group name1",
         "IDP group name2": "Clearml group name2"
       },
       "admin_groups": ["IDP admin group name1", "IDP admin group name2"]
     }}'
   ```
2. Call the following API to define the ClearML tenant identity provider:

   ```
   curl $APISERVER_URL/sso.save_provider_configuration -H "Content-Type: application/json" -u $APISERVER_KEY:$APISERVER_SECRET -d'{
     "provider": "keycloak",
     "company": "<company_id>",
     "configuration": {
       "id": "<some unique id here, you can use company_id>",
       "display_name": "<The text that you want to see on the login button>",
       "client_id": "<client_id from IDP>",
       "client_secret": "<client secret from IDP>",
       "authorization_endpoint": "<authorization_endpoint from IDP OpenID configuration>",
       "token_endpoint": "<token_endpoint from IDP OpenID configuration>",
       "revocation_endpoint": "<revocation_endpoint from IDP OpenID configuration>",
       "end_session_endpoint": "<end_session_endpoint from IDP OpenID configuration>",
       "logout_from_provider": true,
       "claim_tenant": "tenant_key",
       "claim_name": "name",
       "group_enabled": true,
       "claim_groups": "ad_groups_trusted",
       "group_prohibit_user_login_if_not_in_group": true
     }}'
   ```

The above configuration assumes the following:
* On logout from ClearML, the user is also logged out from the Identity Provider
* The external tenant ID for the user is returned under the `tenant_key` claim
* The user display name is returned under the `name` claim
* The user groups list is returned under the `ad_groups_trusted` claim
* Group integration is turned on, and a user is allowed to log in only if one of the groups they belong to in the
  IdP exists under the corresponding ClearML tenant (after group name translation is done according to the ClearML tenant settings)
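The group translation and login check described in the last bullet can be sketched in a few lines (an illustration of the described semantics with hypothetical helper names, not the server's actual implementation):

```python
def allowed_clearml_groups(idp_groups, group_mapping, tenant_groups):
    """Translate IdP group names via the tenant's group_mapping, then keep only
    those that exist under the ClearML tenant. An empty result means the user
    is refused login when group_prohibit_user_login_if_not_in_group is true.
    """
    translated = [group_mapping.get(g, g) for g in idp_groups]
    return [g for g in translated if g in tenant_groups]

print(allowed_clearml_groups(
    ["IDP group name1", "unmapped group"],
    {"IDP group name1": "Clearml group name1"},
    {"Clearml group name1"},
))
```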
## Webapp Login

When running in multi-tenant login mode, a user belonging to an external tenant should use the following link to log in:

```
<clearml_webapp_address>/login/<external tenant ID>
```
@@ -2,14 +2,21 @@
title: AWS EC2 AMIs
---

:::note
For upgrade purposes, the terms **Trains Server** and **ClearML Server** are interchangeable.
:::
<Collapsible title="Important: Upgrading to v2.x from v1.16.0 or older" type="info">

The MongoDB major version was upgraded from `v5.x` to `v6.x`. Note that if your current ClearML Server version is older than
`v1.17` (where MongoDB `v5.x` was first used), you first need to upgrade to ClearML Server v1.17.

First upgrade to ClearML Server v1.17 following the procedure below and using [this `docker-compose` file](https://github.com/clearml/clearml-server/blob/2976ce69cc91550a3614996e8a8d8cd799af2efd/upgrade/1_17_to_2_0/docker-compose.yml). Once successfully upgraded,
you can proceed to upgrade to v2.x.

</Collapsible>

The sections below contain the steps to upgrade ClearML Server on the [same AWS instance](#upgrading-on-the-same-aws-instance), and
to upgrade and migrate to a [new AWS instance](#upgrading-and-migrating-to-a-new-aws-instance).

## Upgrading on the Same AWS Instance

This section contains the steps to upgrade ClearML Server on the same AWS instance.
@@ -52,7 +59,7 @@ If upgrading from Trains Server version 0.15 or older, a data migration is requi
docker-compose -f docker-compose.yml up -d
```

## Upgrading and Migrating to a New AWS Instance

This section contains the steps to upgrade ClearML Server on the new AWS instance.
@@ -67,8 +74,9 @@ This section contains the steps to upgrade ClearML Server on the new AWS instanc
1. On the old AWS instance, [backup your data](clearml_server_aws_ec2_ami.md#backing-up-and-restoring-data-and-configuration)
   and, if your configuration folder is not empty, backup your configuration.

1. If upgrading from Trains Server version 0.15 or older, you need to migrate your data before upgrading your server. See instructions [here](clearml_server_es7_migration.md).

1. If upgrading from ClearML Server version 1.1 or older, you need to migrate your data before upgrading your server. See instructions [here](clearml_server_mongo44_migration.md).

1. On the new AWS instance, [restore your data](clearml_server_aws_ec2_ami.md#backing-up-and-restoring-data-and-configuration) and, if the configuration folder is not empty, restore the
   configuration.
@@ -19,11 +19,13 @@ you can proceed to upgrade to v2.x.
```
docker-compose -f docker-compose.yml down
```

1. [Backing up data](clearml_server_gcp.md#backing-up-and-restoring-data-and-configuration) is recommended and, if the configuration folder is
   not empty, backing up the configuration.

1. If upgrading from **Trains Server** version 0.15 or older to **ClearML Server**, do the following:

   1. Follow these [data migration instructions](clearml_server_es7_migration.md).

1. Rename `/opt/trains` and its subdirectories to `/opt/clearml`:
@@ -31,9 +33,7 @@ you can proceed to upgrade to v2.x.
|
||||
sudo mv /opt/trains /opt/clearml
|
||||
```
|
||||
|
||||
1. If upgrading from ClearML Server version older than 1.2, you need to migrate your data before upgrading your server. See instructions [here](clearml_server_mongo44_migration.md).
|
||||
1. [Backing up data](clearml_server_gcp.md#backing-up-and-restoring-data-and-configuration) is recommended, and if the configuration folder is
|
||||
not empty, backing up the configuration.
|
||||
1. If upgrading from ClearML Server version 1.1 or older, you need to migrate your data before upgrading your server. See instructions [here](clearml_server_mongo44_migration.md).
|
||||
|
||||
1. Download the latest `docker-compose.yml` file:
|
||||
|
||||
|
||||
@@ -40,19 +40,21 @@ For backwards compatibility, the environment variables ``TRAINS_HOST_IP``, ``TRA
```
docker-compose -f docker-compose.yml down
```

1. If upgrading from **Trains Server** version 0.15 or older, a data migration is required before continuing this upgrade. See instructions [here](clearml_server_es7_migration.md).

1. If upgrading from ClearML Server version older than 1.2, you need to migrate your data before upgrading your server. See instructions [here](clearml_server_mongo44_migration.md).

1. [Backing up data](clearml_server_linux_mac.md#backing-up-and-restoring-data-and-configuration) is recommended and, if the configuration folder is
not empty, backing up the configuration.

1. If upgrading from **Trains Server** version 0.15 or older to **ClearML Server**, do the following:

1. If upgrading from **Trains Server** to **ClearML Server**, rename `/opt/trains` and its subdirectories to `/opt/clearml`:
1. Follow these [data migration instructions](clearml_server_es7_migration.md).

1. Rename `/opt/trains` and its subdirectories to `/opt/clearml`:

```
sudo mv /opt/trains /opt/clearml
```

```
sudo mv /opt/trains /opt/clearml
```
1. If upgrading from ClearML Server version 1.1 or older, you need to migrate your data before upgrading your server. See instructions [here](clearml_server_mongo44_migration.md).

1. Download the latest `docker-compose.yml` file:
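The `/opt/trains` to `/opt/clearml` rename step above moves the whole directory tree, subdirectories included, in a single `mv`. A minimal sketch demonstrating this on a scratch path, so it runs without root:

```shell
# A scratch directory stands in for /opt; the real upgrade step is:
#   sudo mv /opt/trains /opt/clearml
ROOT=$(mktemp -d)
mkdir -p "$ROOT/trains/data" "$ROOT/trains/config"

mv "$ROOT/trains" "$ROOT/clearml"

ls "$ROOT/clearml"   # data and config subdirectories move with the parent
```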
@@ -29,10 +29,7 @@ you can proceed to upgrade to v2.x.
```
docker-compose -f c:\opt\trains\docker-compose-win10.yml down
```

1. If upgrading from **Trains Server** version 0.15 or older, a data migration is required before continuing this upgrade. See instructions [here](clearml_server_es7_migration.md).

1. If upgrading from ClearML Server version older than 1.2, you need to migrate your data before upgrading your server. See instructions [here](clearml_server_mongo44_migration.md).

1. Backing up data is recommended, and if the configuration folder is not empty, backing up the configuration.

@@ -40,9 +37,15 @@ you can proceed to upgrade to v2.x.
For example, if the configuration is in ``c:\opt\clearml``, then backup ``c:\opt\clearml\config`` and ``c:\opt\clearml\data``.
Before restoring, remove the old artifacts in ``c:\opt\clearml\config`` and ``c:\opt\clearml\data``, and then restore.
:::

1. If upgrading from **Trains Server** to **ClearML Server**, rename `/opt/trains` and its subdirectories to `/opt/clearml`.

1. If upgrading from **Trains Server** version 0.15 or older to **ClearML Server**, do the following:

1. Follow these [data migration instructions](clearml_server_es7_migration.md).

1. Rename `/opt/trains` and its subdirectories to `/opt/clearml`.

1. If upgrading from ClearML Server version 1.1 or older, you need to migrate your data before upgrading your server. See instructions [here](clearml_server_mongo44_migration.md).

1. Download the latest `docker-compose.yml` file:

```
@@ -3,7 +3,7 @@ title: Managing Agent Work Schedules
---

:::important Enterprise Feature
This feature is available under the ClearML Enterprise plan.
Agent work schedule management is available under the ClearML Enterprise plan.
:::

The Agent scheduler enables scheduling working hours for each Agent. During working hours, a worker will actively poll
(Binary image changes: docs/img/gif/dataset_dark.gif and docs/img/gif/integrations_yolov5_dark.gif added; four existing images updated.)
@@ -95,7 +95,8 @@ and shuts down instances as needed, according to a resource budget that you set.

### Cloning, Editing, and Enqueuing





Use ClearML's web interface to edit task details, like configuration parameters or input models, then execute the task
with the new configuration on a remote machine:

@@ -93,7 +93,8 @@ and shuts down instances as needed, according to a resource budget that you set.

### Cloning, Editing, and Enqueuing





Use ClearML's web interface to edit task details, like configuration parameters or input models, then execute the task
with the new configuration on a remote machine:

@@ -92,7 +92,8 @@ and shuts down instances as needed, according to a resource budget that you set.

### Cloning, Editing, and Enqueuing





Use ClearML's web interface to edit task details, like configuration parameters or input models, then execute the task
with the new configuration on a remote machine:

@@ -105,7 +105,8 @@ and shuts down instances as needed, according to a resource budget that you set.

### Cloning, Editing, and Enqueuing





Use ClearML's web interface to edit task details, like configuration parameters or input models, then execute the task
with the new configuration on a remote machine:

@@ -94,7 +94,8 @@ and shuts down instances as needed, according to a resource budget that you set.

### Cloning, Editing, and Enqueuing





Use ClearML's web interface to edit task details, like configuration parameters or input models, then execute the task
with the new configuration on a remote machine:

@@ -90,7 +90,8 @@ and shuts down instances as needed, according to a resource budget that you set.

### Cloning, Editing, and Enqueuing





Use ClearML's web interface to edit task details, like configuration parameters or input models, then execute the task
with the new configuration on a remote machine:

@@ -114,7 +114,8 @@ and shuts down instances as needed, according to a resource budget that you set.

### Cloning, Editing, and Enqueuing





Use ClearML's web interface to edit task details, like configuration parameters or input models, then execute the task
with the new configuration on a remote machine:

@@ -120,7 +120,8 @@ and shuts down instances as needed, according to a resource budget that you set.

### Cloning, Editing, and Enqueuing





Use ClearML's web interface to edit task details, like configuration parameters or input models, then execute the task
with the new configuration on a remote machine:

@@ -96,7 +96,8 @@ and shuts down instances as needed, according to a resource budget that you set.

### Cloning, Editing, and Enqueuing





Use ClearML's web interface to edit task details, like configuration parameters or input models, then execute the task
with the new configuration on a remote machine:

@@ -113,7 +113,8 @@ and shuts down instances as needed, according to a resource budget that you set.

### Cloning, Editing, and Enqueuing





Use ClearML's web interface to edit task details, like configuration parameters or input models, then execute the task
with the new configuration on a remote machine:

@@ -107,7 +107,8 @@ and shuts down instances as needed, according to a resource budget that you set.

### Cloning, Editing, and Enqueuing





Use ClearML's web interface to edit task details, like configuration parameters or input models, then execute the task
with the new configuration on a remote machine:

@@ -78,7 +78,8 @@ and shuts down instances as needed, according to a resource budget that you set.

### Cloning, Editing, and Enqueuing





Use ClearML's web interface to edit task details, like configuration parameters or input models, then execute the task
with the new configuration on a remote machine:

@@ -120,7 +120,8 @@ and shuts down instances as needed, according to a resource budget that you set.

### Cloning, Editing, and Enqueuing





Use ClearML's web interface to edit task details, like configuration parameters or input models, then execute the task
with the new configuration on a remote machine:

@@ -169,7 +169,8 @@ and shuts down instances as needed, according to a resource budget that you set.

### Cloning, Editing, and Enqueuing





Use ClearML's web interface to edit task details, like configuration parameters or input models, then execute the task
with the new configuration on a remote machine:

@@ -166,4 +166,5 @@ with the new configuration on a remote machine:

The ClearML Agent executing the task will use the new values to [override any hard coded values](../clearml_agent.md).




@@ -3,7 +3,7 @@ title: Identity Providers
---

:::important Enterprise Feature
This feature is available under the ClearML Enterprise plan.
Identity provider integration is available under the ClearML Enterprise plan.
:::

Administrators can seamlessly connect ClearML with their identity service providers to easily implement single sign-on
@@ -319,17 +319,10 @@ to an IAM user, and create credentials keys for that user to configure in the au
        "ssm:GetParameters",
        "ssm:GetParameter"
      ],
      "Resource": "arn:aws:ssm:*::parameter/aws/service/marketplace/*"
    },
    {
      "Sid": "AllowUsingDeeplearningAMIAliases",
      "Effect": "Allow",
      "Action": [
        "ssm:GetParametersByPath",
        "ssm:GetParameters",
        "ssm:GetParameter"
      ],
      "Resource": "arn:aws:ssm:*::parameter/aws/service/deeplearning/*"
      "Resource": [
        "arn:aws:ssm:*::parameter/aws/service/marketplace/*",
        "arn:aws:ssm:*::parameter/aws/service/deeplearning/*"
      ]
    }
  ]
}
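As this sketch reads the diff, the change above collapses the two SSM statements into a single one whose `Resource` lists both parameter-alias paths. The resulting statement would look roughly like the following (the `Sid` is illustrative; the actual one is defined above the hunk):

```json
{
  "Sid": "AllowUsingSSMParameterAliases",
  "Effect": "Allow",
  "Action": [
    "ssm:GetParametersByPath",
    "ssm:GetParameters",
    "ssm:GetParameter"
  ],
  "Resource": [
    "arn:aws:ssm:*::parameter/aws/service/marketplace/*",
    "arn:aws:ssm:*::parameter/aws/service/deeplearning/*"
  ]
}
```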
@@ -3,7 +3,7 @@ title: Resource Policies
---

:::important ENTERPRISE FEATURE
This feature is available under the ClearML Enterprise plan.
Resource Policies are available under the ClearML Enterprise plan.
:::

@@ -3,7 +3,7 @@ title: Access Rules
---

:::important Enterprise Feature
This feature is available under the ClearML Enterprise plan.
Access rules are available under the ClearML Enterprise plan.
:::

Workspace administrators can use the **Access Rules** page to manage workspace permissions, by specifying which users,

@@ -3,7 +3,7 @@ title: Administrator Vaults
---

:::info Enterprise Feature
This feature is available under the ClearML Enterprise plan.
Administrator vaults are available under the ClearML Enterprise plan.
:::

Administrators can define multiple [configuration vaults](webapp_settings_profile.md#configuration-vault) which will each be applied to designated

@@ -3,7 +3,7 @@ title: Identity Providers
---

:::important Enterprise Feature
This feature is available under the ClearML Enterprise plan.
Identity provider integration is available under the ClearML Enterprise plan.
:::

Administrators can connect identity service providers to the server: configure an identity connection, which allows

@@ -100,7 +100,7 @@ these credentials cannot be recovered.
### AI Application Gateway Tokens

:::important Enterprise Feature
This feature is available under the ClearML Enterprise plan.
The AI Application Gateway is available under the ClearML Enterprise plan.
:::

The AI Application Gateway enables external access to ClearML tasks and applications. The gateway is configured with an

@@ -146,7 +146,7 @@ in that workspace. You can rejoin the workspace only if you are re-invited.
### Configuration Vault

:::info Enterprise Feature
This feature is available under the ClearML Enterprise plan.
Configuration vaults are available under the ClearML Enterprise plan.
:::

Use the configuration vault to store global ClearML configuration entries that can extend the ClearML [configuration file](../../configs/clearml_conf.md)

@@ -42,7 +42,7 @@ user can only rejoin your workspace when you re-invite them.
## Service Accounts

:::important Enterprise Feature
This feature is available under the ClearML Enterprise plan.
Service accounts are available under the ClearML Enterprise plan.
:::

Service accounts are ClearML users that provide services with access to the ClearML API, but not the

@@ -155,7 +155,7 @@ To delete a service account:
## User Groups

:::important Enterprise Feature
This feature is available under the ClearML Enterprise plan, as part of the [Access Rules](webapp_settings_access_rules.md)
User groups are available under the ClearML Enterprise plan, as part of the [Access Rules](webapp_settings_access_rules.md)
feature.
:::
@@ -233,7 +233,7 @@ The **INFO** tab shows extended task information:
### Latest Events Log

:::important Enterprise Feature
This feature is available under the ClearML Enterprise plan.
The latest events log is available under the ClearML Enterprise plan.
:::

The Enterprise Server also displays a detailed history of task activity:

@@ -3,7 +3,7 @@ title: Orchestration Dashboard
---

:::important Enterprise Feature
This feature is available under the ClearML Enterprise plan.
The Orchestration Dashboard is available under the ClearML Enterprise plan.
:::

Use the orchestration dashboard to monitor all of your available and in-use compute resources:
@@ -127,7 +127,7 @@ module.exports = {
          activeBaseRegex: '^/docs/latest/docs/guides',
        },
        {
          label: 'Integrations',
          label: 'Code Integrations',
          to: '/docs/integrations',
          activeBaseRegex: '^/docs/latest/docs/integrations(?!/storage)',
        },
package-lock.json (generated, 2 changed lines)
@@ -15,7 +15,7 @@
    "@docusaurus/plugin-google-analytics": "^3.6.1",
    "@docusaurus/plugin-google-gtag": "^3.6.1",
    "@docusaurus/preset-classic": "^3.6.1",
    "@easyops-cn/docusaurus-search-local": "^0.48.0",
    "@easyops-cn/docusaurus-search-local": "^0.48.5",
    "@mdx-js/react": "^3.0.0",
    "clsx": "^1.1.1",
    "joi": "^17.4.0",

@@ -23,7 +23,7 @@
    "@docusaurus/plugin-google-analytics": "^3.6.1",
    "@docusaurus/plugin-google-gtag": "^3.6.1",
    "@docusaurus/preset-classic": "^3.6.1",
    "@easyops-cn/docusaurus-search-local": "^0.48.0",
    "@easyops-cn/docusaurus-search-local": "^0.48.5",
    "@mdx-js/react": "^3.0.0",
    "clsx": "^1.1.1",
    "medium-zoom": "^1.0.6",
sidebars.js (23 changed lines)
@@ -635,11 +635,19 @@ module.exports = {
        'getting_started/architecture',
      ]},*/
      {
        'Enterprise Server Deployment': [
          'deploying_clearml/enterprise_deploy/multi_tenant_k8s',
          'deploying_clearml/enterprise_deploy/vpc_aws',
          'deploying_clearml/enterprise_deploy/on_prem_ubuntu',
        ]
        'Enterprise Server': {
          'Deployment Options': [
            'deploying_clearml/enterprise_deploy/multi_tenant_k8s',
            'deploying_clearml/enterprise_deploy/vpc_aws',
            'deploying_clearml/enterprise_deploy/on_prem_ubuntu',
          ],
          'Maintenance': [
            'deploying_clearml/enterprise_deploy/import_projects',
            'deploying_clearml/enterprise_deploy/change_artifact_links',
            'deploying_clearml/enterprise_deploy/delete_tenant',
          ]
        }
      },
      {
        type: 'category',

@@ -651,9 +659,9 @@ module.exports = {
          'deploying_clearml/enterprise_deploy/appgw_install_k8s',
        ]
      },
      'deploying_clearml/enterprise_deploy/delete_tenant',
      'deploying_clearml/enterprise_deploy/custom_billing',
      {
        'Enterprise Applications': [
        'UI Applications': [
          'deploying_clearml/enterprise_deploy/app_install_ubuntu_on_prem',
          'deploying_clearml/enterprise_deploy/app_install_ex_server',
          'deploying_clearml/enterprise_deploy/app_custom',

@@ -671,6 +679,7 @@ module.exports = {
        label: 'Identity Provider Integration',
        link: {type: 'doc', id: 'user_management/identity_providers'},
        items: [
          'deploying_clearml/enterprise_deploy/sso_multi_tenant_login',
          'deploying_clearml/enterprise_deploy/sso_saml_k8s',
          'deploying_clearml/enterprise_deploy/sso_keycloak',
          'deploying_clearml/enterprise_deploy/sso_active_directory'