# ClearML Ecosystem for Kubernetes

{{ template "chart.deprecationWarning" . }}

{{ template "chart.badgesSection" . }}

{{ template "chart.description" . }}

{{ template "chart.homepageLine" . }}

{{ template "chart.maintainersSection" . }}

## Introduction

The **clearml-server** is the backend service infrastructure for [ClearML](https://github.com/allegroai/clearml). It allows multiple users to collaborate and manage their experiments.

**clearml-server** contains the following components:

* The ClearML Web-App, a single-page UI for experiment management and browsing
* RESTful API for:
  * Documenting and logging experiment information, statistics and results
  * Querying experiments history, logs and results
* Locally-hosted file server for storing images and models, making them easily accessible using the Web-App

## Local environment

For development/evaluation it's possible to use [kind](https://kind.sigs.k8s.io). After installing kind, the following commands will create a complete ClearML installation:

```
mkdir -pm 777 /tmp/clearml-kind

cat << 'EOF' > /tmp/clearml-kind.yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  extraPortMappings:
  # API server's default nodePort is 30008. If you customize it in Helm values with
  # `apiserver.service.nodePort`, `containerPort` should match it
  - containerPort: 30008
    hostPort: 30008
    listenAddress: "127.0.0.1"
    protocol: TCP
  # Web server's default nodePort is 30080. If you customize it in Helm values with
  # `webserver.service.nodePort`, `containerPort` should match it
  - containerPort: 30080
    hostPort: 30080
    listenAddress: "127.0.0.1"
    protocol: TCP
  # File server's default nodePort is 30081. If you customize it in Helm values with
  # `fileserver.service.nodePort`, `containerPort` should match it
  - containerPort: 30081
    hostPort: 30081
    listenAddress: "127.0.0.1"
    protocol: TCP
  extraMounts:
  - hostPath: /tmp/clearml-kind/
    containerPath: /var/local-path-provisioner
EOF

kind create cluster --config /tmp/clearml-kind.yaml
helm install clearml allegroai/clearml
```

After deployment, the services will be exposed on localhost on the following ports:

* API server on `30008`
* Web server on `30080`
* File server on `30081`

Data persisted by ClearML in any Kubernetes volume will be accessible in the `/tmp/clearml-kind` folder on the host.

## Production cluster environment

In a production environment it's recommended to install an ingress controller and to verify that it is working correctly. When deploying ClearML, enable the `ingress` section of the chart values. This will create 3 ingress rules:

* `app.<your domain name>`
* `files.<your domain name>`
* `api.<your domain name>`

(*for example, `app.clearml.mydomainname.com`, `files.clearml.mydomainname.com` and `api.clearml.mydomainname.com`*)

Pointing the domain records to the IP address where the ingress controller is responding will complete the deployment process.

## Additional Configuration for ClearML Server

You can also configure the **clearml-server** for:

* fixed users (users with credentials)
* non-responsive experiment watchdog settings

For detailed instructions, see the [Optional Configuration](https://github.com/allegroai/clearml-server#optional-configuration) section in the **clearml-server** repository README file.

{{ template "chart.sourcesSection" . }}

{{ template "chart.requirementsSection" . }}

{{ template "chart.valuesSection" . }}
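As noted in the Local environment section, the default nodePorts map to the chart values referenced in the kind configuration comments. A minimal sketch of overriding them at install time follows; the port numbers shown are only illustrative, and any overridden nodePort must match the corresponding `containerPort`/`hostPort` in the kind configuration:

```
helm install clearml allegroai/clearml \
  --set apiserver.service.nodePort=30008 \
  --set webserver.service.nodePort=30080 \
  --set fileserver.service.nodePort=30081
```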