Merge branch 'main' of https://github.com/allegroai/clearml-docs into example_images
@@ -56,7 +56,7 @@ error, you are good to go.
1. The session Task is enqueued in the selected queue, and a ClearML Agent pulls and executes it. The agent downloads the appropriate IDE(s) and
   launches it.

-1. Once the agent finishes the initial setup of the interactive Task, the local `cleaml-session` connects to the host
+1. Once the agent finishes the initial setup of the interactive Task, the local `clearml-session` connects to the host
   machine via SSH, and tunnels both SSH and IDE over the SSH connection. If a container is specified, the
   IDE environment runs inside of it.
@@ -47,7 +47,7 @@ that you need.
accessed, [compared](../webapp/webapp_exp_comparing.md) and [tracked](../webapp/webapp_exp_track_visual.md).
- [ClearML Agent](../clearml_agent.md) does the heavy lifting. It reproduces the execution environment, clones your code,
  applies code patches, manages parameters (including overriding them on the fly), executes the code, and queues multiple tasks.
-It can even [build](../../clearml_agent/clearml_agent_docker_exec#exporting-a-task-into-a-standalone-docker-container) the docker container for you!
+It can even [build](../getting_started/clearml_agent_docker_exec.md#exporting-a-task-into-a-standalone-docker-container) the container for you!
- [ClearML Pipelines](../pipelines/pipelines.md) ensure that steps run in the same order,
  programmatically chaining tasks together, while giving an overview of the execution pipeline's status.
@@ -18,7 +18,7 @@ If you are afraid of clutter, use the archive option, and set up your own [clean

## Clone Tasks
Define a ClearML Task with one of the following options:
-- Run the actual code with the `Task.init()` call. This will create and auto-populate the Task in CleaML (including Git Repo / Python Packages / Command line etc.).
+- Run the actual code with the `Task.init()` call. This will create and auto-populate the Task in ClearML (including Git Repo / Python Packages / Command line etc.).
- Register local / remote code repository with `clearml-task`. See [details](../apps/clearml_task.md).

Once you have a Task in ClearML, you can clone and edit its definitions in the UI, then launch it on one of your nodes with [ClearML Agent](../clearml_agent.md).
@@ -1,8 +1,9 @@
---
title: Dynamic GPU Allocation
---

:::important Enterprise Feature
-This feature is available under the ClearML Enterprise plan.
+Dynamic GPU allocation is available under the ClearML Enterprise plan.
:::

The ClearML Enterprise server supports dynamic allocation of GPUs based on queue properties.
@@ -414,7 +414,7 @@ These settings define which Docker image and arguments should be used unless [ex
* **`agent.default_docker.match_rules`** (*[dict]*)

:::important Enterprise Feature
-This feature is available under the ClearML Enterprise plan.
+The `match_rules` configuration option is available under the ClearML Enterprise plan.
:::

* Lookup table of rules that determine the default container and arguments when running a worker in Docker mode. The
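The lookup-table behavior described here can be sketched in plain Python. This is purely an illustration — the rule schema, field names, and images below are hypothetical, not ClearML's actual `match_rules` format:

```python
# Hypothetical sketch of a first-match lookup table choosing a default
# container image; NOT ClearML's actual match_rules implementation.
def resolve_docker(task_props, rules, default_image):
    for rule in rules:
        requirements = rule.get("match", {})
        # A rule applies only if every required property matches.
        if all(task_props.get(k) == v for k, v in requirements.items()):
            return rule["image"]
    return default_image  # fall back to the default container

rules = [{"match": {"project": "nlp"},
          "image": "nvidia/cuda:12.2.0-runtime-ubuntu22.04"}]
print(resolve_docker({"project": "nlp"}, rules, "python:3.10"))
# → nvidia/cuda:12.2.0-runtime-ubuntu22.04
print(resolve_docker({"project": "vision"}, rules, "python:3.10"))
# → python:3.10
```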
@@ -1599,7 +1599,7 @@ sdk {
## Configuration Vault

:::important Enterprise Feature
-This feature is available under the ClearML Enterprise plan.
+Configuration vaults are available under the ClearML Enterprise plan.
:::

The ClearML Enterprise Server includes the configuration vault. Users can add configuration sections to the vault and, once
@@ -422,7 +422,7 @@ options.
### Custom UI Context Menu Actions

:::important Enterprise Feature
-This feature is available under the ClearML Enterprise plan.
+Custom UI context menu actions are available under the ClearML Enterprise plan.
:::

Create custom UI context menu actions to be performed on ClearML objects (projects, tasks, models, dataviews, or queues)
@@ -129,7 +129,7 @@ and ClearML Server needs to be installed.
1. Add the `clearml-server` repository to Helm client.

   ```
-   helm repo add allegroai https://allegroai.github.io/clearml-server-helm/
+   helm repo add clearml https://clearml.github.io/clearml-server-helm/
   ```

   Confirm the `clearml-server` repository is now in the Helm client.
@@ -136,7 +136,7 @@ Deploying the server requires a minimum of 8 GB of memory, 16 GB is recommended.

2. Download the ClearML Server docker-compose YAML file.
   ```
-   sudo curl https://raw.githubusercontent.com/allegroai/clearml-server/master/docker/docker-compose.yml -o /opt/clearml/docker-compose.yml
+   sudo curl https://raw.githubusercontent.com/clearml/clearml-server/master/docker/docker-compose.yml -o /opt/clearml/docker-compose.yml
   ```
1. For Linux only, configure the **ClearML Agent Services**:
@@ -57,7 +57,7 @@ Deploying the server requires a minimum of 8 GB of memory, 16 GB is recommended.
1. Save the ClearML Server docker-compose YAML file.

   ```
-   curl https://raw.githubusercontent.com/allegroai/clearml-server/master/docker/docker-compose-win10.yml -o c:\opt\clearml\docker-compose-win10.yml
+   curl https://raw.githubusercontent.com/clearml/clearml-server/master/docker/docker-compose-win10.yml -o c:\opt\clearml\docker-compose-win10.yml
   ```

1. Run `docker-compose`. In PowerShell, execute the following commands:
@@ -2,6 +2,10 @@
title: Installing External Applications Server
---

+:::important Enterprise Feature
+UI application deployment is available under the ClearML Enterprise plan.
+:::
+
ClearML supports applications, which are extensions that allow additional capabilities, such as cloud auto-scaling,
Hyperparameter Optimizations, etc. For more information, see [ClearML Applications](../../webapp/applications/apps_overview.md).
@@ -2,6 +2,10 @@
title: Application Installation on On-Prem and VPC Servers
---

+:::important Enterprise Feature
+UI application deployment is available under the ClearML Enterprise plan.
+:::
+
ClearML Applications are like plugins that allow you to manage ML workloads and automatically run recurring workflows
without any coding. Applications are installed on top of the ClearML Server.
@@ -3,7 +3,7 @@ title: AI Application Gateway
---

:::important Enterprise Feature
-This feature is available under the ClearML Enterprise plan.
+The AI Application Gateway is available under the ClearML Enterprise plan.
:::

Services running through a cluster orchestrator such as Kubernetes or cloud hyperscaler require meticulous configuration
@@ -1,4 +1,10 @@
-# Docker-Compose Deployment
+---
+title: Docker-Compose Deployment
+---
+
+:::important Enterprise Feature
+The Application Gateway is available under the ClearML Enterprise plan.
+:::

## Requirements
@@ -1,4 +1,10 @@
-# Kubernetes Deployment
+---
+title: Kubernetes Deployment
+---
+
+:::important Enterprise Feature
+The Application Gateway is available under the ClearML Enterprise plan.
+:::

This guide details the installation of the ClearML AI Application Gateway, specifically the ClearML Task Router Component.
@@ -6,8 +12,8 @@ This guide details the installation of the ClearML AI Application Gateway, speci

* Kubernetes cluster: `>= 1.21.0-0 < 1.32.0-0`
* Helm installed and configured
-* Helm token to access allegroai helm-chart repo
-* Credentials for allegroai docker repo
+* Helm token to access `allegroai` helm-chart repo
+* Credentials for `allegroai` docker repo
* A valid ClearML Server installation

## Optional for HTTPS
@@ -21,7 +27,7 @@ This guide details the installation of the ClearML AI Application Gateway, speci

   ```
   helm repo add allegroai-enterprise \
-   https://raw.githubusercontent.com/allegroai/clearml-enterprise-helm-charts/gh-pages \
+   https://raw.githubusercontent.com/clearml/clearml-enterprise-helm-charts/gh-pages \
   --username <GITHUB_TOKEN> \
   --password <GITHUB_TOKEN>
   ```
@@ -1,5 +1,5 @@
---
-title: Changing CleaML Artifacts Links
+title: Changing ClearML Artifacts Links
---

This guide describes how to update artifact references in the ClearML Enterprise server.
docs/deploying_clearml/enterprise_deploy/custom_billing.md (new file, 122 lines)
@@ -0,0 +1,122 @@
---
title: Custom Billing Events
---

:::important Enterprise Feature
Sending custom billing events is available under the ClearML Enterprise plan.
:::

ClearML supports sending custom events to selected Kafka topics. Event sending is triggered by API calls and
is available only for companies with the `custom_events` settings set.

## Enabling Custom Events in ClearML Server

:::important Prerequisite
The customer Kafka instance for custom events must be installed and reachable from the `apiserver`.
:::

Set the following environment variables in the ClearML Enterprise helm chart under `apiserver.extraEnv`:

* Enable custom events:

  ```
  - name: CLEARML__services__custom_events__enabled
    value: "true"
  ```
* Mount custom message template files into the `/mnt/custom_events/templates` folder in the `apiserver` container and point
  the `apiserver` to it:

  ```
  - name: CLEARML__services__custom_events__template_folder
    value: "/mnt/custom_events/templates"
  ```
* Configure the Kafka host for sending events:

  ```
  - name: CLEARML__hosts__kafka__custom_events__host
    value: "[<KAFKA host address:port>]"
  ```

Configure Kafka security parameters. Below is an example for SASL plaintext security:

  ```
  - name: CLEARML__SECURE__KAFKA__CUSTOM_EVENTS__security_protocol
    value: "SASL_PLAINTEXT"
  - name: CLEARML__SECURE__KAFKA__CUSTOM_EVENTS__sasl_mechanism
    value: "SCRAM-SHA-512"
  - name: CLEARML__SECURE__KAFKA__CUSTOM_EVENTS__sasl_plain_username
    value: "<username>"
  - name: CLEARML__SECURE__KAFKA__CUSTOM_EVENTS__sasl_plain_password
    value: "<password>"
  ```
* Define Kafka topics for lifecycle and inventory messages:

  ```
  - name: CLEARML__services__custom_events__channels__main__topics__service_instance_lifecycle
    value: "lifecycle"
  - name: CLEARML__services__custom_events__channels__main__topics__service_instance_inventory
    value: "inventory"
  ```
* For the desired companies, set up the custom events properties required by the event message templates:

  ```
  curl $APISERVER_URL/system.update_company_custom_events_config -H "Content-Type: application/json" -u $APISERVER_KEY:$APISERVER_SECRET -d'{
    "company": "<company_id>",
    "fields": {
      "service_instance_id": "<value>",
      "service_instance_name": "<value>",
      "service_instance_customer_tenant_name": "<value>",
      "service_instance_customer_space_name": "<value>",
      "service_instance_customer_space_id": "<value>",
      "parameters_connection_points": ["<value1>", "<value2>"]
    }}'
  ```
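The `CLEARML__…` variable names used above follow a simple convention: segments of the configuration path joined by double underscores. A small illustrative mapping of that convention (this is not ClearML's actual parsing code):

```python
# Illustration of the CLEARML__a__b__c naming convention: strip the
# CLEARML__ prefix and split on "__" to recover the config path.
def env_to_config_path(name):
    prefix = "CLEARML__"
    if not name.startswith(prefix):
        raise ValueError("not a ClearML configuration variable: %s" % name)
    return name[len(prefix):].split("__")

print(env_to_config_path("CLEARML__services__custom_events__enabled"))
# → ['services', 'custom_events', 'enabled']
```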
## Sending Custom Events to the API Server

:::important Prerequisite
A dedicated custom-events Redis instance must be installed and reachable from all the custom events deployments.
:::

Environment lifecycle events are sent directly by the `apiserver`. Other event types are emitted by the following helm charts:

* `clearml-pods-monitor-exporter` - Monitors running pods and sends container lifecycle events. One instance should run per cluster, with a unique identifier (a UUID is required for the installation):

  ```
  # -- Universally unique string identifying Pods Monitor instances across worker clusters. It cannot be empty.
  # Uniqueness is required across different cluster installations to preserve the reported data status.
  podsMonitorUUID: "<Unique ID>"
  # Interval
  checkIntervalSeconds: 60
  ```
* `clearml-pods-inventory` - Periodically sends inventory events about running pods.

  ```
  # Cron schedule - https://crontab.guru/
  cronJob:
    schedule: "@daily"
  ```
* `clearml-company-inventory` - Monitors ClearML companies and sends environment inventory events.

  ```
  # Cron schedule - https://crontab.guru/
  cronJob:
    schedule: "@daily"
  ```

For every script chart, add the configuration below to enable Redis access and connection to the `apiserver`:

```
clearml:
  apiServerUrlReference: "<APISERVER_URL>"
  apiServerKey: "<APISERVER_KEY>"
  apiServerSecret: "<APISERVER_SECRET>"
  redisConnection:
    host: "<REDIS_HOST>"
    port: <REDIS_PORT>
    password: "<REDIS_PWD>"
```

To see all other available options for customizing the `custom-events` charts, run:
```
helm show readme allegroai-enterprise/<CHART_NAME>
```
@@ -1,5 +1,5 @@
---
-title: Exporting and Importing ClearML Projects
+title: Project Migration
---

When migrating from a ClearML Open Server to a ClearML Enterprise Server, you may need to transfer projects. This is done
@@ -235,6 +235,6 @@ Note that this is not required if the new file server is replacing the old file
exact address.

Once the projects' data has been copied to the target server, and the projects themselves were imported, see
-[Changing CleaML Artifacts Links](change_artifact_links.md) for information on how to fix the URLs.
+[Changing ClearML Artifacts Links](change_artifact_links.md) for information on how to fix the URLs.
@@ -2,14 +2,21 @@
title: AWS EC2 AMIs
---

-:::note
-For upgrade purposes, the terms **Trains Server** and **ClearML Server** are interchangeable.
-:::
+<Collapsible title="Important: Upgrading to v2.x from v1.16.0 or older" type="info">
+
+MongoDB major version was upgraded from `v5.x` to `6.x`. Please note that if your current ClearML Server version is older than
+`v1.17` (where MongoDB `v5.x` was first used), you'll need to first upgrade to ClearML Server v1.17.
+
+First upgrade to ClearML Server v1.17 following the procedure below and using [this `docker-compose` file](https://github.com/clearml/clearml-server/blob/2976ce69cc91550a3614996e8a8d8cd799af2efd/upgrade/1_17_to_2_0/docker-compose.yml). Once successfully upgraded,
+you can proceed to upgrade to v2.x.
+
+</Collapsible>

The sections below contain the steps to upgrade ClearML Server on the [same AWS instance](#upgrading-on-the-same-aws-instance), and
to upgrade and migrate to a [new AWS instance](#upgrading-and-migrating-to-a-new-aws-instance).

-### Upgrading on the Same AWS Instance
+## Upgrading on the Same AWS Instance

This section contains the steps to upgrade ClearML Server on the same AWS instance.
@@ -42,7 +49,7 @@ If upgrading from Trains Server version 0.15 or older, a data migration is requi
1. Download the latest `docker-compose.yml` file. Execute the following command:

   ```
-   sudo curl https://raw.githubusercontent.com/allegroai/clearml-server/master/docker/docker-compose.yml -o /opt/clearml/docker-compose.yml
+   sudo curl https://raw.githubusercontent.com/clearml/clearml-server/master/docker/docker-compose.yml -o /opt/clearml/docker-compose.yml
   ```

1. Startup ClearML Server. This automatically pulls the latest ClearML Server build.
@@ -52,7 +59,7 @@ If upgrading from Trains Server version 0.15 or older, a data migration is requi
   docker-compose -f docker-compose.yml up -d
   ```

-### Upgrading and Migrating to a New AWS Instance
+## Upgrading and Migrating to a New AWS Instance

This section contains the steps to upgrade ClearML Server on the new AWS instance.
@@ -67,8 +74,9 @@ This section contains the steps to upgrade ClearML Server on the new AWS instanc
1. On the old AWS instance, [backup your data](clearml_server_aws_ec2_ami.md#backing-up-and-restoring-data-and-configuration)
   and, if your configuration folder is not empty, backup your configuration.

-1. If upgrading from ClearML Server version older than 1.2, you need to migrate your data before upgrading your server. See instructions [here](clearml_server_mongo44_migration.md).
-   If upgrading from Trains Server version 0.15 or older, a data migration is required before continuing this upgrade. See instructions [here](clearml_server_es7_migration.md).
+1. If upgrading from Trains Server version 0.15 or older, you need to migrate your data before upgrading your server. See instructions [here](clearml_server_es7_migration.md).
+
+1. If upgrading from ClearML Server version 1.1 or older, you need to migrate your data before upgrading your server. See instructions [here](clearml_server_mongo44_migration.md).

1. On the new AWS instance, [restore your data](clearml_server_aws_ec2_ami.md#backing-up-and-restoring-data-and-configuration) and, if the configuration folder is not empty, restore the
   configuration.
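The migration prerequisites above reduce to version-threshold checks. The helper below is purely illustrative — the thresholds come from the surrounding text, and the function itself is not part of ClearML:

```python
# Illustrative summary of the migration prerequisites above; the version
# thresholds are from the text, the function itself is hypothetical.
def required_migrations(server_version):
    steps = []
    if server_version <= (0, 15):   # Trains Server 0.15 or older
        steps.append("Elasticsearch 7 data migration")
    if server_version <= (1, 1):    # ClearML Server 1.1 or older
        steps.append("MongoDB 4.4 data migration")
    return steps

print(required_migrations((0, 15)))  # → both migrations apply
print(required_migrations((2, 0)))   # → []
```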
@@ -19,11 +19,13 @@ you can proceed to upgrade to v2.x.
   ```
   docker-compose -f docker-compose.yml down
   ```

+1. [Backing up data](clearml_server_gcp.md#backing-up-and-restoring-data-and-configuration) is recommended, and if the configuration folder is
+   not empty, backing up the configuration.
+
1. If upgrading from **Trains Server** version 0.15 or older to **ClearML Server**, do the following:

-   1. Follow these [data migration instructions](clearml_server_es7_migration.md),
-      and then continue this upgrade.
+   1. Follow these [data migration instructions](clearml_server_es7_migration.md).

   1. Rename `/opt/trains` and its subdirectories to `/opt/clearml`:
@@ -31,14 +33,12 @@ you can proceed to upgrade to v2.x.
   sudo mv /opt/trains /opt/clearml
   ```

-1. If upgrading from ClearML Server version older than 1.2, you need to migrate your data before upgrading your server. See instructions [here](clearml_server_mongo44_migration.md).
-1. [Backing up data](clearml_server_gcp.md#backing-up-and-restoring-data-and-configuration) is recommended, and if the configuration folder is
-   not empty, backing up the configuration.
+1. If upgrading from ClearML Server version 1.1 or older, you need to migrate your data before upgrading your server. See instructions [here](clearml_server_mongo44_migration.md).

1. Download the latest `docker-compose.yml` file:

   ```
-   curl https://raw.githubusercontent.com/allegroai/clearml-server/master/docker/docker-compose.yml -o /opt/clearml/docker-compose.yml
+   curl https://raw.githubusercontent.com/clearml/clearml-server/master/docker/docker-compose.yml -o /opt/clearml/docker-compose.yml
   ```

1. Startup ClearML Server. This automatically pulls the latest ClearML Server build.
@@ -7,13 +7,13 @@ title: Kubernetes

```bash
helm repo update
-helm upgrade clearml allegroai/clearml
+helm upgrade clearml clearml/clearml
```

**To change the values in an existing installation,** execute the following:

```bash
-helm upgrade clearml allegroai/clearml --version <CURRENT CHART VERSION> -f custom_values.yaml
+helm upgrade clearml clearml/clearml --version <CURRENT CHART VERSION> -f custom_values.yaml
```

See the [clearml-helm-charts repository](https://github.com/clearml/clearml-helm-charts/tree/main/charts/clearml#local-environment)
@@ -40,24 +40,26 @@ For backwards compatibility, the environment variables ``TRAINS_HOST_IP``, ``TRA
   ```
   docker-compose -f docker-compose.yml down
   ```

-1. If upgrading from **Trains Server** version 0.15 or older, a data migration is required before continuing this upgrade. See instructions [here](clearml_server_es7_migration.md).
-
-1. If upgrading from ClearML Server version older than 1.2, you need to migrate your data before upgrading your server. See instructions [here](clearml_server_mongo44_migration.md).

1. [Backing up data](clearml_server_linux_mac.md#backing-up-and-restoring-data-and-configuration) is recommended and, if the configuration folder is
   not empty, backing up the configuration.

+1. If upgrading from **Trains Server** version 0.15 or older to **ClearML Server**, do the following:

-1. If upgrading from **Trains Server** to **ClearML Server**, rename `/opt/trains` and its subdirectories to `/opt/clearml`:
+   1. Follow these [data migration instructions](clearml_server_es7_migration.md).

+   1. Rename `/opt/trains` and its subdirectories to `/opt/clearml`:

+      ```
+      sudo mv /opt/trains /opt/clearml
+      ```
-   ```
-   sudo mv /opt/trains /opt/clearml
-   ```
+1. If upgrading from ClearML Server version 1.1 or older, you need to migrate your data before upgrading your server. See instructions [here](clearml_server_mongo44_migration.md).

1. Download the latest `docker-compose.yml` file:

   ```
-   curl https://raw.githubusercontent.com/allegroai/clearml-server/master/docker/docker-compose.yml -o /opt/clearml/docker-compose.yml
+   curl https://raw.githubusercontent.com/clearml/clearml-server/master/docker/docker-compose.yml -o /opt/clearml/docker-compose.yml
   ```

1. Startup ClearML Server. This automatically pulls the latest ClearML Server build:
@@ -29,10 +29,7 @@ you can proceed to upgrade to v2.x.
   ```
   docker-compose -f c:\opt\trains\docker-compose-win10.yml down
   ```

-1. If upgrading from **Trains Server** version 0.15 or older, a data migration is required before continuing this upgrade. See instructions [here](clearml_server_es7_migration.md).
-
-1. If upgrading from ClearML Server version older than 1.2, you need to migrate your data before upgrading your server. See instructions [here](clearml_server_mongo44_migration.md).

1. Backing up data is recommended, and if the configuration folder is not empty, backing up the configuration.
@@ -40,13 +37,19 @@ you can proceed to upgrade to v2.x.
   For example, if the configuration is in ``c:\opt\clearml``, then backup ``c:\opt\clearml\config`` and ``c:\opt\clearml\data``.
   Before restoring, remove the old artifacts in ``c:\opt\clearml\config`` and ``c:\opt\clearml\data``, and then restore.
   :::

-1. If upgrading from **Trains Server** to **ClearML Server**, rename `/opt/trains` and its subdirectories to `/opt/clearml`.

+1. If upgrading from **Trains Server** version 0.15 or older to **ClearML Server**, do the following:
+
+   1. Follow these [data migration instructions](clearml_server_es7_migration.md).
+
+   1. Rename `/opt/trains` and its subdirectories to `/opt/clearml`.
+
+1. If upgrading from ClearML Server version 1.1 or older, you need to migrate your data before upgrading your server. See instructions [here](clearml_server_mongo44_migration.md).

1. Download the latest `docker-compose.yml` file:

   ```
-   curl https://raw.githubusercontent.com/allegroai/clearml-server/master/docker/docker-compose-win10.yml -o c:\opt\clearml\docker-compose-win10.yml
+   curl https://raw.githubusercontent.com/clearml/clearml-server/master/docker/docker-compose-win10.yml -o c:\opt\clearml\docker-compose-win10.yml
   ```

1. Startup ClearML Server. This automatically pulls the latest ClearML Server build.
@@ -1,5 +1,5 @@
---
title: Building Executable Task Containers
---

## Exporting a Task into a Standalone Docker Container
@@ -3,7 +3,7 @@ title: Managing Agent Work Schedules
---

:::important Enterprise Feature
-This feature is available under the ClearML Enterprise plan.
+Agent work schedule management is available under the ClearML Enterprise plan.
:::

The Agent scheduler enables scheduling working hours for each Agent. During working hours, a worker will actively poll
@@ -32,19 +32,19 @@ training, and deploying models at every scale on any AI infrastructure.
<tbody>
<tr>
<td><a href="https://github.com/clearml/clearml/blob/master/docs/tutorials/Getting_Started_1_Experiment_Management.ipynb"><b>Step 1</b></a> - Experiment Management</td>
-<td className="align-center"><a className="no-ext-icon" target="_blank" href="https://colab.research.google.com/github/allegroai/clearml/blob/master/docs/tutorials/Getting_Started_1_Experiment_Management.ipynb">
+<td className="align-center"><a className="no-ext-icon" target="_blank" href="https://colab.research.google.com/github/clearml/clearml/blob/master/docs/tutorials/Getting_Started_1_Experiment_Management.ipynb">
<img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/>
</a></td>
</tr>
<tr>
<td><a href="https://github.com/clearml/clearml/blob/master/docs/tutorials/Getting_Started_2_Setting_Up_Agent.ipynb"><b>Step 2</b></a> - Remote Execution Agent Setup</td>
-<td className="align-center"><a className="no-ext-icon" target="_blank" href="https://colab.research.google.com/github/allegroai/clearml/blob/master/docs/tutorials/Getting_Started_2_Setting_Up_Agent.ipynb">
+<td className="align-center"><a className="no-ext-icon" target="_blank" href="https://colab.research.google.com/github/clearml/clearml/blob/master/docs/tutorials/Getting_Started_2_Setting_Up_Agent.ipynb">
<img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/>
</a></td>
</tr>
<tr>
<td><a href="https://github.com/clearml/clearml/blob/master/docs/tutorials/Getting_Started_3_Remote_Execution.ipynb"><b>Step 3</b></a> - Remotely Execute Tasks</td>
-<td className="align-center"><a className="no-ext-icon" target="_blank" href="https://colab.research.google.com/github/allegroai/clearml/blob/master/docs/tutorials/Getting_Started_3_Remote_Execution.ipynb">
+<td className="align-center"><a className="no-ext-icon" target="_blank" href="https://colab.research.google.com/github/clearml/clearml/blob/master/docs/tutorials/Getting_Started_3_Remote_Execution.ipynb">
<img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/>
</a></td>
</tr>
@@ -49,7 +49,7 @@ Execution log at: https://app.clear.ml/projects/552d5399112d47029c146d5248570295
### Executing a Local Script

For this example, use a local version of [this script](https://github.com/clearml/events/blob/master/webinar-0620/keras_mnist.py).
-1. Clone the [allegroai/events](https://github.com/clearml/events) repository
+1. Clone the [clearml/events](https://github.com/clearml/events) repository
1. Go to the root folder of the cloned repository
1. Run the following command:
@@ -16,7 +16,7 @@ and running, users can send Tasks to be executed on Google Colab's hardware.

## Steps
-1. Open up [this Google Colab notebook](https://colab.research.google.com/github/allegroai/clearml/blob/master/examples/clearml_agent/clearml_colab_agent.ipynb).
+1. Open up [this Google Colab notebook](https://colab.research.google.com/github/clearml/clearml/blob/master/examples/clearml_agent/clearml_colab_agent.ipynb).

1. Run the first cell, which installs all the necessary packages:
   ```
@@ -3,7 +3,7 @@ title: Pipeline from Decorators
---

The [pipeline_from_decorator.py](https://github.com/clearml/clearml/blob/master/examples/pipeline/pipeline_from_decorator.py)
-example demonstrates the creation of a pipeline in ClearML using the [`PipelineDecorator`](../../references/sdk/automation_controller_pipelinecontroller.md#class-automationcontrollerpipelinedecorator)
+example demonstrates the creation of a pipeline in ClearML using the [`PipelineDecorator`](../../references/sdk/automation_controller_pipelinedecorator.md#class-automationcontrollerpipelinedecorator)
class.

This example creates a pipeline incorporating four tasks, each of which is created from a Python function using a custom decorator:
@@ -14,11 +14,11 @@ This example creates a pipeline incorporating four tasks, each of which is creat
* `step_four` - Uses data from `step_two` and the model from `step_three` to make a prediction.

The pipeline steps, defined in the `step_one`, `step_two`, `step_three`, and `step_four` functions, are each wrapped with the
-[`@PipelineDecorator.component`](../../references/sdk/automation_controller_pipelinecontroller.md#pipelinedecoratorcomponent)
+[`@PipelineDecorator.component`](../../references/sdk/automation_controller_pipelinedecorator.md#pipelinedecoratorcomponent)
decorator, which creates a ClearML pipeline step for each one when the pipeline is executed.

The logic that executes these steps and controls the interaction between them is implemented in the `executing_pipeline`
-function. This function is wrapped with the [`@PipelineDecorator.pipeline`](../../references/sdk/automation_controller_pipelinecontroller.md#pipelinedecoratorpipeline)
+function. This function is wrapped with the [`@PipelineDecorator.pipeline`](../../references/sdk/automation_controller_pipelinedecorator.md#pipelinedecoratorpipeline)
decorator which creates the ClearML pipeline task when it is executed.

The sections below describe in more detail what happens in the pipeline controller and steps.
@@ -28,7 +28,7 @@ The sections below describe in more detail what happens in the pipeline controll
In this example, the pipeline controller is implemented by the `executing_pipeline` function.

Using the `@PipelineDecorator.pipeline` decorator creates a ClearML Controller Task from the function when it is executed.
-For detailed information, see [`@PipelineDecorator.pipeline`](../../references/sdk/automation_controller_pipelinecontroller.md#pipelinedecoratorpipeline).
+For detailed information, see [`@PipelineDecorator.pipeline`](../../references/sdk/automation_controller_pipelinedecorator.md#pipelinedecoratorpipeline).

In the example script, the controller defines the interactions between the pipeline steps in the following way:
1. The controller function passes its argument, `pickle_url`, to the pipeline's first step (`step_one`)
@@ -39,13 +39,13 @@ In the example script, the controller defines the interactions between the pipel
|
||||
|
||||
:::info Local Execution
|
||||
In this example, the pipeline is set to run in local mode by using
|
||||
[`PipelineDecorator.run_locally()`](../../references/sdk/automation_controller_pipelinecontroller.md#pipelinedecoratorrun_locally)
|
||||
[`PipelineDecorator.run_locally()`](../../references/sdk/automation_controller_pipelinedecorator.md#pipelinedecoratorrun_locally)
|
||||
before calling the pipeline function. See pipeline execution options [here](../../pipelines/pipelines_sdk_function_decorators.md#running-the-pipeline).
|
||||
:::
|
||||
|
||||
## Pipeline Steps
|
||||
Using the `@PipelineDecorator.component` decorator will make the function a pipeline component that can be called from the
|
||||
pipeline controller, which implements the pipeline's execution logic. For detailed information, see [`@PipelineDecorator.component`](../../references/sdk/automation_controller_pipelinecontroller.md#pipelinedecoratorcomponent).
|
||||
pipeline controller, which implements the pipeline's execution logic. For detailed information, see [`@PipelineDecorator.component`](../../references/sdk/automation_controller_pipelinedecorator.md#pipelinedecoratorcomponent).
|
||||
|
||||
When the pipeline controller calls a pipeline step, a corresponding ClearML task will be created. Notice that all package
|
||||
imports inside the function will be automatically logged as required packages for the pipeline execution step.
|
||||
@@ -63,7 +63,7 @@ executing_pipeline(
|
||||
```
|
||||
|
||||
By default, the pipeline controller and the pipeline steps are launched through ClearML [queues](../../fundamentals/agents_and_queues.md#what-is-a-queue).
|
||||
Use the [`PipelineDecorator.set_default_execution_queue`](../../references/sdk/automation_controller_pipelinecontroller.md#pipelinedecoratorset_default_execution_queue)
|
||||
Use the [`PipelineDecorator.set_default_execution_queue`](../../references/sdk/automation_controller_pipelinedecorator.md#pipelinedecoratorset_default_execution_queue)
|
||||
method to specify the execution queue of all pipeline steps. The `execution_queue` parameter of the `@PipelineDecorator.component`
|
||||
decorator overrides the default queue value for the specific step for which it was specified.
|
||||
|
||||
|
||||
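The decorator pattern that these doc hunks describe can be sketched with a pure-Python toy. This is illustrative only, not ClearML's implementation: `component`, `pipeline`, and `REGISTRY` are made-up stand-ins showing how a component-style decorator can register step functions while a pipeline-style decorator wraps the controller logic that chains them.

```python
# Toy sketch of the pattern described in the docs above. Illustrative
# stand-ins only; this is NOT the ClearML implementation.
import functools

REGISTRY = {}  # step name -> wrapped function


def component(func):
    """Stand-in for @PipelineDecorator.component: register a step."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        # A real backend would create a Task per call; here we just run it.
        return func(*args, **kwargs)
    REGISTRY[func.__name__] = wrapper
    return wrapper


def pipeline(func):
    """Stand-in for @PipelineDecorator.pipeline: mark the controller."""
    return func


@component
def step_one(pickle_url):
    return f"data from {pickle_url}"


@component
def step_two(data):
    return data.upper()


@pipeline
def executing_pipeline(pickle_url):
    # Controller logic: pass the argument to step_one, feed its output to step_two.
    return step_two(step_one(pickle_url))


print(executing_pipeline("s3://bucket/file.pkl"))  # prints: DATA FROM S3://BUCKET/FILE.PKL
```

In the real SDK the wrapper launches each step as an independent task through a queue rather than calling it inline, but the call-chaining seen in `executing_pipeline` is the same shape.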
@@ -22,7 +22,7 @@ The Slack API token and channel you create are required to configure the Slack a
 1. In **Development Slack Workspace**, select a workspace.
 1. Click **Create App**.
 1. In **Basic Information**, under **Display Information**, complete the following:
-   - In **Short description**, enter "Allegro Train Bot".
+   - In **Short description**, enter "ClearML Train Bot".
    - In **Background color**, enter "#202432".
 1. Click **Save Changes**.
 1. In **OAuth & Permissions**, under **Scopes**, click **Add an OAuth Scope**, and then select the following permissions

Before Width: | Height: | Size: 16 MiB | After Width: | Height: | Size: 374 KiB
Before Width: | Height: | Size: 12 MiB | After Width: | Height: | Size: 13 MiB
Before Width: | Height: | Size: 74 KiB | After Width: | Height: | Size: 74 KiB
Before Width: | Height: | Size: 74 KiB | After Width: | Height: | Size: 74 KiB
@@ -4,14 +4,14 @@ title: PipelineDecorator

 ## Creating Pipelines Using Function Decorators

-Use the [`PipelineDecorator`](../references/sdk/automation_controller_pipelinecontroller.md#class-automationcontrollerpipelinedecorator)
-class to create pipelines from your existing functions. Use [`@PipelineDecorator.component`](../references/sdk/automation_controller_pipelinecontroller.md#pipelinedecoratorcomponent)
-to denote functions that comprise the steps of your pipeline, and [`@PipelineDecorator.pipeline`](../references/sdk/automation_controller_pipelinecontroller.md#pipelinedecoratorpipeline)
+Use the [`PipelineDecorator`](../references/sdk/automation_controller_pipelinedecorator.md#class-automationcontrollerpipelinedecorator)
+class to create pipelines from your existing functions. Use [`@PipelineDecorator.component`](../references/sdk/automation_controller_pipelinedecorator.md#pipelinedecoratorcomponent)
+to denote functions that comprise the steps of your pipeline, and [`@PipelineDecorator.pipeline`](../references/sdk/automation_controller_pipelinedecorator.md#pipelinedecoratorpipeline)
 for your main pipeline execution logic function.

 ## @PipelineDecorator.pipeline

-Using the [`@PipelineDecorator.pipeline`](../references/sdk/automation_controller_pipelinecontroller.md#pipelinedecoratorpipeline)
+Using the [`@PipelineDecorator.pipeline`](../references/sdk/automation_controller_pipelinedecorator.md#pipelinedecoratorpipeline)
 decorator transforms the function which implements your pipeline's execution logic into a ClearML pipeline controller,
 an independently executed task.

@@ -70,13 +70,13 @@ parameters. When launching a new pipeline run from the [UI](../webapp/pipelines/
 

 ## @PipelineDecorator.component
-Using the [`@PipelineDecorator.component`](../references/sdk/automation_controller_pipelinecontroller.md#pipelinedecoratorcomponent)
+Using the [`@PipelineDecorator.component`](../references/sdk/automation_controller_pipelinedecorator.md#pipelinedecoratorcomponent)
 decorator transforms a function into a ClearML pipeline step when called from a pipeline controller.

 When the pipeline controller calls a pipeline step, a corresponding ClearML task is created.

 :::tip Package Imports
-In the case that the `skip_global_imports` parameter of [`@PipelineDecorator.pipeline`](../references/sdk/automation_controller_pipelinecontroller.md#pipelinedecoratorpipeline)
+In the case that the `skip_global_imports` parameter of [`@PipelineDecorator.pipeline`](../references/sdk/automation_controller_pipelinedecorator.md#pipelinedecoratorpipeline)
 is set to `False`, all global imports will be automatically imported at the beginning of each step's execution.
 Otherwise, if set to `True`, make sure that each function which makes up a pipeline step contains package imports, which
 are automatically logged as required packages for the pipeline execution step.

@@ -110,7 +110,7 @@ def step_one(pickle_data_url: str, extra: int = 43):
 * `packages` - A list of required packages or a local requirements.txt file. Example: `["tqdm>=2.1", "scikit-learn"]` or
   `"./requirements.txt"`. If not provided, packages are automatically added based on the imports used inside the function.
 * `execution_queue` (optional) - Queue in which to enqueue the specific step. This overrides the queue set with the
-  [`PipelineDecorator.set_default_execution_queue`](../references/sdk/automation_controller_pipelinecontroller.md#pipelinedecoratorset_default_execution_queue)
+  [`PipelineDecorator.set_default_execution_queue`](../references/sdk/automation_controller_pipelinedecorator.md#pipelinedecoratorset_default_execution_queue)
   method.
 * `continue_on_fail` - If `True`, a failed step does not cause the pipeline to stop (or be marked as failed). Notice that
   steps that are connected (or indirectly connected) to the failed step are skipped (default: `False`)
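The parameter list documented in the hunk above can be made concrete with a toy decorator that records the same keyword names. The decorator body here is invented for illustration; only the parameter names (`packages`, `execution_queue`, `continue_on_fail`) and their documented meaning come from the docs.

```python
# Toy decorator recording the step options described in the docs above.
# The implementation is invented; only the parameter semantics are real.
def component(packages=None, execution_queue=None, continue_on_fail=False):
    def decorate(func):
        func.step_config = {
            # explicit package list (or a requirements.txt path); when None,
            # packages would be inferred from the imports inside the function
            "packages": packages,
            # None means "fall back to the pipeline's default queue"
            "execution_queue": execution_queue,
            # True lets the pipeline continue past a failure of this step
            "continue_on_fail": continue_on_fail,
        }
        return func
    return decorate


@component(packages=["tqdm>=2.1", "scikit-learn"], execution_queue="gpu")
def step_three(data):
    return data


print(step_three.step_config["execution_queue"])  # prints: gpu
```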
@@ -186,14 +186,14 @@ specify which frameworks to log. See `Task.init`'s [`auto_connect_framework` par
 * `auto_connect_arg_parser` - Control automatic logging of argparse objects. See `Task.init`'s [`auto_connect_arg_parser` parameter](../references/sdk/task.md#taskinit)

 You can also directly upload a model or an artifact from the step to the pipeline controller, using the
-[`PipelineDecorator.upload_model`](../references/sdk/automation_controller_pipelinecontroller.md#pipelinedecoratorupload_model)
-and [`PipelineDecorator.upload_artifact`](../references/sdk/automation_controller_pipelinecontroller.md#pipelinedecoratorupload_artifact)
+[`PipelineDecorator.upload_model`](../references/sdk/automation_controller_pipelinedecorator.md#pipelinedecoratorupload_model)
+and [`PipelineDecorator.upload_artifact`](../references/sdk/automation_controller_pipelinedecorator.md#pipelinedecoratorupload_artifact)
 methods respectively.


 ## Controlling Pipeline Execution
 ### Default Execution Queue
-The [`PipelineDecorator.set_default_execution_queue`](../references/sdk/automation_controller_pipelinecontroller.md#pipelinedecoratorset_default_execution_queue)
+The [`PipelineDecorator.set_default_execution_queue`](../references/sdk/automation_controller_pipelinedecorator.md#pipelinedecoratorset_default_execution_queue)
 method lets you set a default queue through which all pipeline steps
 will be executed. Once set, step-specific overrides can be specified through the `@PipelineDecorator.component` decorator.
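The precedence rule just described, a step-level `execution_queue` beating the pipeline-wide default, reduces to a few lines. `resolve_queue` is a hypothetical helper written for this sketch; only the `set_default_execution_queue` name mirrors the SDK.

```python
# Minimal sketch of the queue-precedence rule described above: a per-step
# execution_queue overrides the pipeline-wide default. `resolve_queue` is
# hypothetical, not part of the ClearML API surface.
_default_queue = None


def set_default_execution_queue(name):
    """Record the queue used by steps that do not set one themselves."""
    global _default_queue
    _default_queue = name


def resolve_queue(step_queue=None):
    """Return the queue a step would actually be enqueued in."""
    return step_queue if step_queue is not None else _default_queue


set_default_execution_queue("default")
print(resolve_queue())       # prints: default  (no per-step override)
print(resolve_queue("gpu"))  # prints: gpu      (step-level override wins)
```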
@@ -226,7 +226,7 @@ You can run the pipeline logic locally, while keeping the pipeline components ex
 #### Debugging Mode
 In debugging mode, the pipeline controller and all components are treated as regular Python functions, with components
 called synchronously. This mode is great for debugging the components and designing the pipeline, as the entire pipeline is
-executed on the developer machine with full ability to debug each function call. Call [`PipelineDecorator.debug_pipeline`](../references/sdk/automation_controller_pipelinecontroller.md#pipelinedecoratordebug_pipeline)
+executed on the developer machine with full ability to debug each function call. Call [`PipelineDecorator.debug_pipeline`](../references/sdk/automation_controller_pipelinedecorator.md#pipelinedecoratordebug_pipeline)
 before the main pipeline logic function call.

 Example:

@@ -242,7 +242,7 @@ In local mode, the pipeline controller creates Tasks for each component, and com
 into sub-processes running on the same machine. Notice that the data is passed between the components and the logic with
 the exact same mechanism as in the remote mode (i.e. hyperparameters / artifacts), with the exception that the execution
 itself is local. Notice that each subprocess uses the exact same Python environment as the main pipeline logic. Call
-[`PipelineDecorator.run_locally`](../references/sdk/automation_controller_pipelinecontroller.md#pipelinedecoratorrun_locally)
+[`PipelineDecorator.run_locally`](../references/sdk/automation_controller_pipelinedecorator.md#pipelinedecoratorrun_locally)
 before the main pipeline logic function.

 Example:
@@ -0,0 +1,5 @@
+---
+title: PipelineDecorator
+---
+
+**AutoGenerated PlaceHolder**
@@ -3,7 +3,7 @@ title: Identity Providers
 ---

 :::important Enterprise Feature
-This feature is available under the ClearML Enterprise plan.
+Identity provider integration is available under the ClearML Enterprise plan.
 :::

 Administrators can seamlessly connect ClearML with their identity service providers to easily implement single sign-on
@@ -319,17 +319,10 @@ to an IAM user, and create credentials keys for that user to configure in the au
         "ssm:GetParameters",
         "ssm:GetParameter"
       ],
-      "Resource": "arn:aws:ssm:*::parameter/aws/service/marketplace/*"
-    },
-    {
-      "Sid": "AllowUsingDeeplearningAMIAliases",
-      "Effect": "Allow",
-      "Action": [
-        "ssm:GetParametersByPath",
-        "ssm:GetParameters",
-        "ssm:GetParameter"
-      ],
-      "Resource": "arn:aws:ssm:*::parameter/aws/service/deeplearning/*"
+      "Resource": [
+        "arn:aws:ssm:*::parameter/aws/service/marketplace/*",
+        "arn:aws:ssm:*::parameter/aws/service/deeplearning/*"
+      ]
     }
   ]
 }
@@ -36,7 +36,7 @@ The pipeline run table contains the following columns:
 | Column | Description | Type |
 |---|---|---|
 | **RUN** | Pipeline run identifier | String |
-| **VERSION** | The pipeline version number. Corresponds to the [PipelineController](../../references/sdk/automation_controller_pipelinecontroller.md#class-pipelinecontroller)'s and [PipelineDecorator](../../references/sdk/automation_controller_pipelinecontroller.md#class-automationcontrollerpipelinedecorator)'s `version` parameter | Version string |
+| **VERSION** | The pipeline version number. Corresponds to the [PipelineController](../../references/sdk/automation_controller_pipelinecontroller.md#class-pipelinecontroller)'s and [PipelineDecorator](../../references/sdk/automation_controller_pipelinedecorator.md#class-automationcontrollerpipelinedecorator)'s `version` parameter | Version string |
 | **TAGS** | Descriptive, user-defined, color-coded tags assigned to run. | Tag |
 | **STATUS** | Pipeline run's status. See a list of the [task states and state transitions](../../fundamentals/task.md#task-states). For Running, Failed, and Aborted runs, you will also see a progress indicator next to the status. See [here](../../pipelines/pipelines.md#tracking-pipeline-progress). | String |
 | **USER** | User who created the run. | String |

@@ -108,7 +108,7 @@ The details panel includes three tabs:
 

 * **Code** - For pipeline steps generated from functions using either [`PipelineController.add_function_step`](../../references/sdk/automation_controller_pipelinecontroller.md#add_function_step)
-  or [`PipelineDecorator.component`](../../references/sdk/automation_controller_pipelinecontroller.md#pipelinedecoratorcomponent),
+  or [`PipelineDecorator.component`](../../references/sdk/automation_controller_pipelinedecorator.md#pipelinedecoratorcomponent),
   you can view the selected step's code.

 
@@ -3,7 +3,7 @@ title: Resource Policies
 ---

 :::important ENTERPRISE FEATURE
-This feature is available under the ClearML Enterprise plan.
+Resource Policies are available under the ClearML Enterprise plan.
 :::


@@ -3,7 +3,7 @@ title: Access Rules
 ---

 :::important Enterprise Feature
-This feature is available under the ClearML Enterprise plan.
+Access rules are available under the ClearML Enterprise plan.
 :::

 Workspace administrators can use the **Access Rules** page to manage workspace permissions, by specifying which users,

@@ -3,7 +3,7 @@ title: Administrator Vaults
 ---

 :::info Enterprise Feature
-This feature is available under the ClearML Enterprise plan.
+Administrator vaults are available under the ClearML Enterprise plan.
 :::

 Administrators can define multiple [configuration vaults](webapp_settings_profile.md#configuration-vault) which will each be applied to designated

@@ -3,7 +3,7 @@ title: Identity Providers
 ---

 :::important Enterprise Feature
-This feature is available under the ClearML Enterprise plan.
+Identity provider integration is available under the ClearML Enterprise plan.
 :::

 Administrators can connect identity service providers to the server: configure an identity connection, which allows
@@ -100,7 +100,7 @@ these credentials cannot be recovered.
 ### AI Application Gateway Tokens

 :::important Enterprise Feature
-This feature is available under the ClearML Enterprise plan.
+The AI Application Gateway is available under the ClearML Enterprise plan.
 :::

 The AI Application Gateway enables external access to ClearML tasks and applications. The gateway is configured with an

@@ -146,7 +146,7 @@ in that workspace. You can rejoin the workspace only if you are re-invited.
 ### Configuration Vault

 :::info Enterprise Feature
-This feature is available under the ClearML Enterprise plan.
+Configuration vaults are available under the ClearML Enterprise plan.
 :::

 Use the configuration vault to store global ClearML configuration entries that can extend the ClearML [configuration file](../../configs/clearml_conf.md)

@@ -42,7 +42,7 @@ user can only rejoin your workspace when you re-invite them.
 ## Service Accounts

 :::important Enterprise Feature
-This feature is available under the ClearML Enterprise plan.
+Service accounts are available under the ClearML Enterprise plan.
 :::

 Service accounts are ClearML users that provide services with access to the ClearML API, but not the

@@ -155,7 +155,7 @@ To delete a service account:
 ## User Groups

 :::important Enterprise Feature
-This feature is available under the ClearML Enterprise plan, as part of the [Access Rules](webapp_settings_access_rules.md)
+User groups are available under the ClearML Enterprise plan, as part of the [Access Rules](webapp_settings_access_rules.md)
 feature.
 :::
@@ -93,7 +93,7 @@ using to set up an environment (`pip` or `conda`) are available. Select which re

 ### Container
 The Container section lists the following information:
-* Image - a pre-configured container that ClearML Agent will use to remotely execute this task (see [Building Docker containers](../getting_started/clearml_agent_docker_exec.md))
+* Image - a pre-configured container that ClearML Agent will use to remotely execute this task (see [Building Task Execution Environments in a Container](../getting_started/clearml_agent_base_docker.md))
 * Arguments - add container arguments
 * Setup shell script - a bash script to be executed inside the container before setting up the task's environment

@@ -230,13 +230,13 @@ The **INFO** tab shows extended task information:
 * [Task description](#description)
 * [Task details](#task-details)

 ### Latest Events Log

-:::important Enterprise Feature
-This feature is available under the ClearML Enterprise plan.
+:::info Hosted Service and Enterprise Feature
+The latest events log is available only on the ClearML Hosted Service and under the ClearML Enterprise plan.
 :::

-The Enterprise Server also displays a detailed history of task activity:
+The **INFO** tab includes a detailed history of task activity:
 * Task action (e.g. status changes, project move, etc.)
 * Action time
 * Acting user
@@ -252,7 +252,7 @@ To download the task history as a CSV file, hover over the log and click <img sr
 ClearML maintains a system-wide, large but strict limit for task history items. Once the limit is reached, the oldest entries are purged to make room for fresh entries.
 :::

 ### Description
 Add descriptive text to the task in the **Description** section. To modify the description, hover over the
 description box and click **Edit**.

@@ -304,7 +304,7 @@ All scalars that ClearML automatically logs, as well as those explicitly reporte

 Scalar series can be displayed in [graph view](#graph-view) (default) or in [metric values view](#metric-values-view):

 #### Graph View
 Scalar graph view (<img src="/docs/latest/icons/ico-charts-view.svg" alt="Graph view" className="icon size-md space-sm" />)
 shows scalar series plotted as a time series line chart. By default, a single plot is shown for each scalar metric,
 with all variants overlaid within.

@@ -72,7 +72,7 @@ and/or Reset functions.


 #### Default Container
-Select a pre-configured container that the [ClearML Agent](../clearml_agent.md) will use to remotely execute this task (see [Building Docker containers](../getting_started/clearml_agent_docker_exec.md)).
+Select a pre-configured container that the [ClearML Agent](../clearml_agent.md) will use to remotely execute this task (see [Building Task Execution Environments in a Container](../getting_started/clearml_agent_base_docker.md)).

 **To add, change, or delete a default container:**
@@ -3,7 +3,7 @@ title: Orchestration Dashboard
 ---

 :::important Enterprise Feature
-This feature is available under the ClearML Enterprise plan.
+The Orchestration Dashboard is available under the ClearML Enterprise plan.
 :::

 Use the orchestration dashboard to monitor all of your available and in-use compute resources:
@@ -424,22 +424,22 @@ To add an image, add an exclamation point, followed by the alt text enclosed by
 image enclosed in parentheses:

 ```
-
+
 ```

 The rendered output should look like this:

-
+

 To add a title to the image, which you can see in a tooltip when hovering over the image, add the title after the image's
 link:

 ```
-
+
 ```
 The rendered output should look like this:

-<img src="https://raw.githubusercontent.com/allegroai/clearml/master/docs/clearml-logo.svg" alt="Logo with Title" title="ClearML logo"/>
+<img src="https://raw.githubusercontent.com/clearml/clearml/master/docs/clearml-logo.svg" alt="Logo with Title" title="ClearML logo"/>

 Hover over the image to see its title.
@@ -114,7 +114,7 @@ module.exports = {
         {
           label: 'References',
           to: '/docs/references/sdk/task',
-          activeBaseRegex: '^/docs/latest/docs/(references/|webapp/.*|hyperdatasets/webapp/.*|clearml_agent/(clearml_agent_ref|clearml_agent_env_var)|configs/(clearml_conf|env_vars)|apps/(clearml_task|clearml_param_search))(/.*)?$',
+          activeBaseRegex: '^/docs/latest/docs/(references/.*|webapp/.*|hyperdatasets/webapp/.*|clearml_agent/(clearml_agent_ref|clearml_agent_env_var)|configs/(clearml_conf|env_vars)|apps/(clearml_task|clearml_param_search))(/.*)?$',
         },
         {
           label: 'Best Practices',
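A quick sanity check of why this hunk widens `references/` to `references/.*`: with the old pattern, the optional `(/.*)?` tail requires a leading slash, so a deeper reference path never matched. The patterns below are trimmed to the two relevant alternatives for clarity, and Python's `re` is used for illustration (Docusaurus evaluates `activeBaseRegex` in JavaScript, which shares this syntax).

```python
# Demonstrate the effect of changing `references/` to `references/.*`
# in the navbar activeBaseRegex. Patterns trimmed for brevity.
import re

old = r'^/docs/latest/docs/(references/|webapp/.*)(/.*)?$'
new = r'^/docs/latest/docs/(references/.*|webapp/.*)(/.*)?$'

path = '/docs/latest/docs/references/sdk/task'
print(bool(re.match(old, path)))  # prints: False -- "sdk/task" is left unmatched
print(bool(re.match(new, path)))  # prints: True
```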
@@ -127,7 +127,7 @@ module.exports = {
           activeBaseRegex: '^/docs/latest/docs/guides',
         },
         {
-          label: 'Integrations',
+          label: 'Code Integrations',
          to: '/docs/integrations',
          activeBaseRegex: '^/docs/latest/docs/integrations(?!/storage)',
         },

sidebars.js (28 changed lines)
@@ -399,8 +399,10 @@ module.exports = {
         'references/sdk/dataset',
         {'Pipeline': [
           'references/sdk/automation_controller_pipelinecontroller',
+          'references/sdk/automation_controller_pipelinedecorator',
           'references/sdk/automation_job_clearmljob'
         ]},
       ]
     },
     'references/sdk/scheduler',
     'references/sdk/trigger',
     {'HyperParameter Optimization': [

@@ -635,11 +637,19 @@ module.exports = {
       'getting_started/architecture',
     ]},*/
     {
-      'Enterprise Server Deployment': [
-        'deploying_clearml/enterprise_deploy/multi_tenant_k8s',
-        'deploying_clearml/enterprise_deploy/vpc_aws',
-        'deploying_clearml/enterprise_deploy/on_prem_ubuntu',
-      ]
+      'Enterprise Server': {
+        'Deployment Options': [
+          'deploying_clearml/enterprise_deploy/multi_tenant_k8s',
+          'deploying_clearml/enterprise_deploy/vpc_aws',
+          'deploying_clearml/enterprise_deploy/on_prem_ubuntu',
+        ],
+        'Maintenance': [
+          'deploying_clearml/enterprise_deploy/import_projects',
+          'deploying_clearml/enterprise_deploy/change_artifact_links',
+          'deploying_clearml/enterprise_deploy/delete_tenant',
+        ]
+      }
     },
     {
       type: 'category',
@@ -651,11 +661,9 @@ module.exports = {
         'deploying_clearml/enterprise_deploy/appgw_install_k8s',
       ]
     },
-    'deploying_clearml/enterprise_deploy/delete_tenant',
-    'deploying_clearml/enterprise_deploy/import_projects',
-    'deploying_clearml/enterprise_deploy/change_artifact_links',
     'deploying_clearml/enterprise_deploy/custom_billing',
     {
-      'Enterprise Applications': [
+      'UI Applications': [
         'deploying_clearml/enterprise_deploy/app_install_ubuntu_on_prem',
         'deploying_clearml/enterprise_deploy/app_install_ex_server',
         'deploying_clearml/enterprise_deploy/app_custom',
||||