Compare commits


11 Commits

Author SHA1 Message Date
Valeriano Manassero
874f1cf0ce Upgrade agent 124 21 (#119)
* Changed: bump up agent version

* Changed: bump up chart version
2022-11-30 08:25:12 +01:00
Valeriano Manassero
c54e6ef44a Upgrade 180 (#118)
* Changed: bump up app to version 1.8.0

* Changed: bump up chart version
2022-11-30 08:12:52 +01:00
Valeriano Manassero
ca54cb570f Serving refactoring (#116)
* Changed: remove unused status

* Changed: image tags refactoring

* Changed: Images references refactoring

* Added: serving ingresses

* Added: grafana ingress

* Added: grafana ingress

* Changed: chart version bump

* Changed: maintainers

* Added: value comments

* Fixed: typo

* Fixed: typos
2022-11-04 15:10:11 +01:00
Valeriano Manassero
5035814ed9 Changed: serving app version to 1.2.0 (#114) 2022-10-11 10:34:52 +02:00
Valeriano Manassero
462a8da239 Serving hpa (#113)
* Added: basic hpa

* Changed: version bump
2022-10-10 09:17:05 +02:00
Valeriano Manassero
8747bceb4e 1.7.0 upgrade (#112)
* Changed: mage update

* Changed: version update
2022-10-05 14:34:47 +02:00
Valeriano Manassero
6aea682b0d Fix: agent release (#109)
* Fix: agent release

* Changed: version bump
2022-09-16 08:42:34 +02:00
Valeriano Manassero
4704415662 Make PDR compatible with k8s 1.25 (#108)
* Changed: pdr version

* Changed: dependency update

* Changed: removed eol k8s

* Changed: kind versions update

* Removed: incompatible version with GH actions

* Changed: updated action
2022-09-16 08:28:41 +02:00
Brett Cullen
8374ece563 Added missing brackets around .Values.imageCredentials.existingSecret (#107) 2022-09-16 00:12:03 +03:00
Brett Cullen
0871e73831 Fixed missing brackets for k8 secret (docker config) (#106) 2022-09-15 23:35:36 +03:00
Niels ten Boom
a90b91f024 feat: expand volumemount capabilities for agent (#104)
* upgrade

* add upgrade instruction

* fix readme for agent

* Added newline at the end

* Try to fix CI

* Edited type added

* Update README.md

Co-authored-by: Valeriano Manassero <14011549+valeriano-manassero@users.noreply.github.com>
2022-09-13 14:53:44 +02:00
44 changed files with 503 additions and 170 deletions

View File

@@ -2,6 +2,7 @@ name: Lint and Test Charts
on:
pull_request:
types: [opened, synchronize, edited, reopened]
paths:
- 'charts/**'
@@ -21,16 +22,16 @@ jobs:
strategy:
matrix:
k8s:
- v1.22.7
- v1.23.6
- v1.24.0
- v1.22.13
- v1.23.10
- v1.24.4
- v1.25.0
steps:
- name: Checkout
uses: actions/checkout@v1
- name: Create kind ${{ matrix.k8s }} cluster
uses: helm/kind-action@v1.2.0
uses: helm/kind-action@v1.3.0
with:
version: v0.13.0
node_image: kindest/node:${{ matrix.k8s }}
- name: Set up chart-testing
uses: helm/chart-testing-action@v2.2.1
@@ -42,11 +43,6 @@ jobs:
echo "::set-output name=changed::true"
echo "::set-output name=changed_charts::\"${changed//$'\n'/,}\""
fi
- name: Inject secrets
run: |
find ./charts/*/ci/*.yaml -type f -exec sed -i "s/AGENTK8SGLUEKEY/${{ secrets.agentk8sglueKey }}/g" {} \;
find ./charts/*/ci/*.yaml -type f -exec sed -i "s/AGENTK8SGLUESECRET/${{ secrets.agentk8sglueSecret }}/g" {} \;
if: steps.list-changed.outputs.changed == 'true'
- name: Run chart-testing (lint and install)
run: ct lint-and-install --chart-dirs=charts --target-branch=main --helm-extra-args="--timeout=15m" --charts=${{steps.list-changed.outputs.changed_charts}} --debug=true
if: steps.list-changed.outputs.changed == 'true'

View File

@@ -2,9 +2,9 @@ apiVersion: v2
name: clearml-agent
description: MLOps platform
type: application
version: "1.3.0"
version: "2.0.2"
appVersion: "1.24"
kubeVersion: ">= 1.19.0-0 < 1.25.0-0"
kubeVersion: ">= 1.19.0-0 < 1.26.0-0"
home: https://clear.ml
icon: https://raw.githubusercontent.com/allegroai/clearml/master/docs/clearml-logo.svg
sources:

View File

@@ -1,6 +1,6 @@
# clearml-agent
# ClearML Kubernetes Agent
![Version: 1.3.0](https://img.shields.io/badge/Version-1.3.0-informational?style=flat-square) ![Type: application](https://img.shields.io/badge/Type-application-informational?style=flat-square) ![AppVersion: 1.24](https://img.shields.io/badge/AppVersion-1.24-informational?style=flat-square)
![Version: 2.0.2](https://img.shields.io/badge/Version-2.0.2-informational?style=flat-square) ![Type: application](https://img.shields.io/badge/Type-application-informational?style=flat-square) ![AppVersion: 1.24](https://img.shields.io/badge/AppVersion-1.24-informational?style=flat-square)
MLOps platform
@@ -12,6 +12,11 @@ MLOps platform
| ---- | ------ | --- |
| valeriano-manassero | | <https://github.com/valeriano-manassero> |
## Introduction
The **clearml-agent** is the Kubernetes agent for [ClearML](https://github.com/allegroai/clearml).
It allows you to schedule distributed experiments on a Kubernetes cluster.
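For reference, a minimal install might look like the following (a sketch; it assumes the charts are served from the project's Helm repository at `https://allegroai.github.io/clearml-helm-charts` and that the default `default` queue is used):
```
helm repo add allegroai https://allegroai.github.io/clearml-helm-charts
helm install clearml-agent allegroai/clearml-agent \
  --set agentk8sglue.queue=default
```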
## Source Code
* <https://github.com/allegroai/clearml-helm-charts>
@@ -19,26 +24,27 @@ MLOps platform
## Requirements
Kubernetes: `>= 1.19.0-0 < 1.25.0-0`
Kubernetes: `>= 1.19.0-0 < 1.26.0-0`
## Values
| Key | Type | Default | Description |
|-----|------|---------|-------------|
| agentk8sglue | object | `{"apiServerUrlReference":"https://api.clear.ml","clearmlcheckCertificate":true,"defaultContainerImage":"ubuntu:18.04","extraEnvs":[],"fileServerUrlReference":"https://files.clear.ml","id":"k8s-agent","image":{"repository":"allegroai/clearml-agent-k8s-base","tag":"1.24-18"},"maxPods":10,"podTemplate":{"env":[],"nodeSelector":{},"resources":{},"tolerations":[],"volumes":[]},"queue":"default","replicaCount":1,"serviceAccountName":"default","webServerUrlReference":"https://app.clear.ml"}` | This agent will spawn queued experiments in new pods, a good use case is to combine this with GPU autoscaling nodes. https://github.com/allegroai/clearml-agent/tree/master/docker/k8s-glue |
| agentk8sglue | object | `{"apiServerUrlReference":"https://api.clear.ml","clearmlcheckCertificate":true,"defaultContainerImage":"ubuntu:18.04","extraEnvs":[],"fileServerUrlReference":"https://files.clear.ml","id":"k8s-agent","image":{"repository":"allegroai/clearml-agent-k8s-base","tag":"1.24-21"},"maxPods":10,"podTemplate":{"env":[],"nodeSelector":{},"resources":{},"tolerations":[],"volumeMounts":[],"volumes":[]},"queue":"default","replicaCount":1,"serviceAccountName":"default","webServerUrlReference":"https://app.clear.ml"}` | This agent will spawn queued experiments in new pods, a good use case is to combine this with GPU autoscaling nodes. https://github.com/allegroai/clearml-agent/tree/master/docker/k8s-glue |
| agentk8sglue.apiServerUrlReference | string | `"https://api.clear.ml"` | Reference to Api server url |
| agentk8sglue.clearmlcheckCertificate | bool | `true` | Check certificate validity for every UrlReference below. |
| agentk8sglue.defaultContainerImage | string | `"ubuntu:18.04"` | default container image for ClearML Task pod |
| agentk8sglue.extraEnvs | list | `[]` | Environment variables to be exposed in the agentk8sglue pods |
| agentk8sglue.fileServerUrlReference | string | `"https://files.clear.ml"` | Reference to File server url |
| agentk8sglue.id | string | `"k8s-agent"` | ClearML worker ID (must be unique across the entire ClearML environment) |
| agentk8sglue.image | object | `{"repository":"allegroai/clearml-agent-k8s-base","tag":"1.24-18"}` | Glue Agent image configuration |
| agentk8sglue.image | object | `{"repository":"allegroai/clearml-agent-k8s-base","tag":"1.24-21"}` | Glue Agent image configuration |
| agentk8sglue.maxPods | int | `10` | maximum number of concurrent ClearML Task pods |
| agentk8sglue.podTemplate | object | `{"env":[],"nodeSelector":{},"resources":{},"tolerations":[],"volumes":[]}` | template for pods spawned to consume ClearML Task |
| agentk8sglue.podTemplate | object | `{"env":[],"nodeSelector":{},"resources":{},"tolerations":[],"volumeMounts":[],"volumes":[]}` | template for pods spawned to consume ClearML Task |
| agentk8sglue.podTemplate.env | list | `[]` | environment variables for pods spawned to consume ClearML Task (example in values.yaml comments) |
| agentk8sglue.podTemplate.nodeSelector | object | `{}` | nodeSelector setup for pods spawned to consume ClearML Task (example in values.yaml comments) |
| agentk8sglue.podTemplate.resources | object | `{}` | resources declaration for pods spawned to consume ClearML Task (example in values.yaml comments) |
| agentk8sglue.podTemplate.tolerations | list | `[]` | tolerations setup for pods spawned to consume ClearML Task (example in values.yaml comments) |
| agentk8sglue.podTemplate.volumeMounts | list | `[]` | volumeMounts definition for pods spawned to consume ClearML Task (example in values.yaml comments) |
| agentk8sglue.podTemplate.volumes | list | `[]` | volumes definition for pods spawned to consume ClearML Task (example in values.yaml comments) |
| agentk8sglue.queue | string | `"default"` | ClearML queue this agent will consume |
| agentk8sglue.replicaCount | int | `1` | Glue Agent number of pods |
@@ -58,5 +64,26 @@ Kubernetes: `>= 1.19.0-0 < 1.25.0-0`
| imageCredentials.registry | string | `"docker.io"` | Registry name |
| imageCredentials.username | string | `"someone"` | Registry username |
----------------------------------------------
Autogenerated from chart metadata using [helm-docs v1.11.0](https://github.com/norwoodj/helm-docs/releases/v1.11.0)
# Upgrading Chart
### From v1.x to v2.x
Chart 1.x assumed that all mounted volumes were PVCs. Version 2.x and later allows more flexibility and injects the YAML from podTemplate.volumes and podTemplate.volumeMounts directly.
v1.x
```
volumes:
- name: "yourvolume"
path: "/yourpath"
```
v2.x
```
volumes:
- name: "yourvolume"
persistentVolumeClaim:
claimName: "yourvolume"
volumeMounts:
- name: "yourvolume"
mountPath: "/yourpath"
```
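Under the hood, the chart templates pipe these values through `toYaml`, so whatever you put under `podTemplate.volumes` and `podTemplate.volumeMounts` is rendered verbatim into the Task pod spec. As a sketch, the v2.x values above would render roughly as:
```
volumes:
  - name: "yourvolume"
    persistentVolumeClaim:
      claimName: "yourvolume"
containers:
  - # container spec (image, env) as templated by the chart
    volumeMounts:
      - name: "yourvolume"
        mountPath: "/yourpath"
```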

View File

@@ -0,0 +1,45 @@
# ClearML Kubernetes Agent
{{ template "chart.deprecationWarning" . }}
{{ template "chart.badgesSection" . }}
{{ template "chart.description" . }}
{{ template "chart.homepageLine" . }}
{{ template "chart.maintainersSection" . }}
## Introduction
The **clearml-agent** is the Kubernetes agent for [ClearML](https://github.com/allegroai/clearml).
It allows you to schedule distributed experiments on a Kubernetes cluster.
{{ template "chart.sourcesSection" . }}
{{ template "chart.requirementsSection" . }}
{{ template "chart.valuesSection" . }}
# Upgrading Chart
### From v1.x to v2.x
Chart 1.x assumed that all mounted volumes were PVCs. Version 2.x and later allows more flexibility and injects the YAML from podTemplate.volumes and podTemplate.volumeMounts directly.
v1.x
```
volumes:
- name: "yourvolume"
path: "/yourpath"
```
v2.x
```
volumes:
- name: "yourvolume"
persistentVolumeClaim:
claimName: "yourvolume"
volumeMounts:
- name: "yourvolume"
mountPath: "/yourpath"
```

View File

@@ -11,27 +11,24 @@ data:
{{- if .Values.imageCredentials.enabled }}
imagePullSecrets:
{{- if .Values.imageCredentials.existingSecret }}
- name: .Values.imageCredentials.existingSecret
- name: {{.Values.imageCredentials.existingSecret}}
{{- else }}
- name: {{ include "agentk8sglue.referenceName" . }}-clearml-agent-registry-key
{{- end }}
{{- end }}
serviceAccountName: {{ .Values.agentk8sglue.serviceAccountName }}
{{- with .Values.agentk8sglue.podTemplate.volumes }}
volumes:
{{- range .Values.agentk8sglue.podTemplate.volumes }}
- name: {{ .name }}
persistentVolumeClaim:
claimName: {{ .name }}
{{- toYaml . | nindent 8 }}
{{- end }}
containers:
- resources:
{{- toYaml .Values.agentk8sglue.podTemplate.resources | nindent 10 }}
ports:
- containerPort: 10022
{{- with .Values.agentk8sglue.podTemplate.volumeMounts }}
volumeMounts:
{{- range .Values.agentk8sglue.podTemplate.volumes }}
- mountPath: {{ .path }}
name: {{ .name }}
{{- toYaml . | nindent 10 }}
{{- end }}
env:
- name: CLEARML_API_HOST

View File

@@ -19,31 +19,11 @@ spec:
{{- if .Values.imageCredentials.enabled }}
imagePullSecrets:
{{- if .Values.imageCredentials.existingSecret }}
- name: .Values.imageCredentials.existingSecret
- name: "{{.Values.imageCredentials.existingSecret}}"
{{- else }}
- name: {{ include "agentk8sglue.referenceName" . }}-clearml-agent-registry-key
{{- end }}
{{- end }}
initContainers:
- name: init-k8s-glue
image: "{{ .Values.agentk8sglue.image.repository }}:{{ .Values.agentk8sglue.image.tag }}"
command:
- /bin/sh
- -c
- >
set -x;
while [ $(curl {{ if not .Values.agentk8sglue.clearmlcheckCertificate }}--insecure{{ end }} -sw '%{http_code}' "{{.Values.agentk8sglue.apiServerUrlReference}}/debug.ping" -o /dev/null) -ne 200 ] ; do
echo "waiting for apiserver" ;
sleep 5 ;
done;
while [[ $(curl {{ if not .Values.agentk8sglue.clearmlcheckCertificate }}--insecure{{ end }} -sw '%{http_code}' "{{.Values.agentk8sglue.fileServerUrlReference}}/" -o /dev/null) =~ 403|405 ]] ; do
echo "waiting for fileserver" ;
sleep 5 ;
done;
while [ $(curl {{ if not .Values.agentk8sglue.clearmlcheckCertificate }}--insecure{{ end }} -sw '%{http_code}' "{{.Values.agentk8sglue.webServerUrlReference}}/" -o /dev/null) -ne 200 ] ; do
echo "waiting for webserver" ;
sleep 5 ;
done
containers:
- name: k8s-glue
image: "{{ .Values.agentk8sglue.image.repository }}:{{ .Values.agentk8sglue.image.tag }}"

View File

@@ -36,7 +36,7 @@ agentk8sglue:
# -- Glue Agent image configuration
image:
repository: "allegroai/clearml-agent-k8s-base"
tag: "1.24-18"
tag: "1.24-21"
# -- Glue Agent number of pods
replicaCount: 1
@@ -71,7 +71,12 @@ agentk8sglue:
# -- volumes definition for pods spawned to consume ClearML Task (example in values.yaml comments)
volumes: []
# - name: "yourvolume"
# path: "/yourpath"
# persistentVolumeClaim:
# claimName: "yourvolume"
# -- volumeMounts definition for pods spawned to consume ClearML Task (example in values.yaml comments)
volumeMounts: []
# - name: "yourvolume"
# mountPath: "/yourpath"
# -- environment variables for pods spawned to consume ClearML Task (example in values.yaml comments)
env: []
# # to setup access to private repo, setup secret with git credentials:

View File

@@ -2,13 +2,12 @@ apiVersion: v2
name: clearml-serving
description: ClearML Serving Helm Chart
type: application
version: 0.4.1
appVersion: "0.9.0"
version: 0.7.0
appVersion: "1.2.0"
kubeVersion: ">= 1.19.0-0 < 1.26.0-0"
maintainers:
- name: valeriano-manassero
url: https://github.com/valeriano-manassero
- name: stefano-cherchi
url: https://github.com/stefano-cherchi
keywords:
- clearml
- "machine learning"

View File

@@ -1,6 +1,6 @@
# clearml-serving
![Version: 0.4.1](https://img.shields.io/badge/Version-0.4.1-informational?style=flat-square) ![Type: application](https://img.shields.io/badge/Type-application-informational?style=flat-square) ![AppVersion: 0.9.0](https://img.shields.io/badge/AppVersion-0.9.0-informational?style=flat-square)
![Version: 0.7.0](https://img.shields.io/badge/Version-0.7.0-informational?style=flat-square) ![Type: application](https://img.shields.io/badge/Type-application-informational?style=flat-square) ![AppVersion: 1.2.0](https://img.shields.io/badge/AppVersion-1.2.0-informational?style=flat-square)
ClearML Serving Helm Chart
@@ -9,63 +9,47 @@ ClearML Serving Helm Chart
| Name | Email | Url |
| ---- | ------ | --- |
| valeriano-manassero | | <https://github.com/valeriano-manassero> |
| stefano-cherchi | | <https://github.com/stefano-cherchi> |
## Requirements
Kubernetes: `>= 1.19.0-0 < 1.26.0-0`
## Values
| Key | Type | Default | Description |
|-----|------|---------|-------------|
| alertmanager.affinity | object | `{}` | |
| alertmanager.image | string | `"prom/alertmanager:v0.23.0"` | |
| alertmanager.nodeSelector | object | `{}` | |
| alertmanager.resources | object | `{}` | |
| alertmanager.tolerations | list | `[]` | |
| clearml.apiAccessKey | string | `"ClearML API Access Key"` | |
| clearml.apiHost | string | `"http://clearml-server-apiserver:8008"` | |
| clearml.apiSecretKey | string | `"ClearML API Secret Key"` | |
| clearml.defaultBaseServeUrl | string | `"http://127.0.0.1:8080/serve"` | |
| clearml.filesHost | string | `"http://clearml-server-fileserver:8081"` | |
| clearml.servingTaskId | string | `"ClearML Serving Task ID"` | |
| clearml.webHost | string | `"http://clearml-server-webserver:80"` | |
| clearml_serving_inference.affinity | object | `{}` | |
| alertmanager | object | `{"affinity":{},"image":{"repository":"prom/alertmanager","tag":"v0.23.0"},"nodeSelector":{},"resources":{},"tolerations":[]}` | Alertmanager generic configurations |
| clearml | object | `{"apiAccessKey":"ClearML API Access Key","apiHost":"http://clearml-server-apiserver:8008","apiSecretKey":"ClearML API Secret Key","defaultBaseServeUrl":"http://127.0.0.1:8080/serve","filesHost":"http://clearml-server-fileserver:8081","servingTaskId":"ClearML Serving Task ID","webHost":"http://clearml-server-webserver:80"}` | ClearML generic configurations |
| clearml_serving_inference | object | `{"affinity":{},"autoscaling":{"enabled":false,"maxReplicas":11,"minReplicas":1,"targetCPU":50,"targetMemory":50},"extraPythonPackages":[],"image":{"repository":"allegroai/clearml-serving-inference","tag":"1.2.0"},"ingress":{"annotations":{},"enabled":false,"hostName":"serving.clearml.127-0-0-1.nip.io","path":"/","tlsSecretName":""},"nodeSelector":{},"resources":{},"tolerations":[]}` | ClearML serving inference configurations |
| clearml_serving_inference.affinity | object | `{}` | Affinity configuration |
| clearml_serving_inference.autoscaling | object | `{"enabled":false,"maxReplicas":11,"minReplicas":1,"targetCPU":50,"targetMemory":50}` | Autoscaling configuration |
| clearml_serving_inference.extraPythonPackages | list | `[]` | Extra Python Packages to be installed in running pods |
| clearml_serving_inference.image | string | `"allegroai/clearml-serving-inference"` | |
| clearml_serving_inference.nodeSelector | object | `{}` | |
| clearml_serving_inference.resources | object | `{}` | |
| clearml_serving_inference.tolerations | list | `[]` | |
| clearml_serving_statistics.affinity | object | `{}` | |
| clearml_serving_inference.image | object | `{"repository":"allegroai/clearml-serving-inference","tag":"1.2.0"}` | Container Image |
| clearml_serving_inference.ingress | object | `{"annotations":{},"enabled":false,"hostName":"serving.clearml.127-0-0-1.nip.io","path":"/","tlsSecretName":""}` | Ingress exposing configurations |
| clearml_serving_inference.nodeSelector | object | `{}` | Node Selector configuration |
| clearml_serving_inference.resources | object | `{}` | Pod resources definition |
| clearml_serving_inference.tolerations | list | `[]` | Tolerations configuration |
| clearml_serving_statistics | object | `{"affinity":{},"extraPythonPackages":[],"image":{"repository":"allegroai/clearml-serving-statistics","tag":"1.2.0"},"nodeSelector":{},"resources":{},"tolerations":[]}` | ClearML serving statistics configurations |
| clearml_serving_statistics.affinity | object | `{}` | Affinity configuration |
| clearml_serving_statistics.extraPythonPackages | list | `[]` | Extra Python Packages to be installed in running pods |
| clearml_serving_statistics.image | string | `"allegroai/clearml-serving-statistics"` | |
| clearml_serving_statistics.nodeSelector | object | `{}` | |
| clearml_serving_statistics.resources | object | `{}` | |
| clearml_serving_statistics.tolerations | list | `[]` | |
| clearml_serving_triton.affinity | object | `{}` | |
| clearml_serving_triton.enabled | bool | `true` | |
| clearml_serving_statistics.image | object | `{"repository":"allegroai/clearml-serving-statistics","tag":"1.2.0"}` | Container Image |
| clearml_serving_statistics.nodeSelector | object | `{}` | Node Selector configuration |
| clearml_serving_statistics.resources | object | `{}` | Pod resources definition |
| clearml_serving_statistics.tolerations | list | `[]` | Tolerations configuration |
| clearml_serving_triton | object | `{"affinity":{},"autoscaling":{"enabled":false,"maxReplicas":11,"minReplicas":1,"targetCPU":50,"targetMemory":50},"enabled":true,"extraPythonPackages":[],"image":{"repository":"allegroai/clearml-serving-triton","tag":"1.2.0-22.07"},"ingress":{"annotations":{},"enabled":false,"hostName":"serving-grpc.clearml.127-0-0-1.nip.io","path":"/","tlsSecretName":""},"nodeSelector":{},"resources":{},"tolerations":[]}` | ClearML serving Triton configurations |
| clearml_serving_triton.affinity | object | `{}` | Affinity configuration |
| clearml_serving_triton.autoscaling | object | `{"enabled":false,"maxReplicas":11,"minReplicas":1,"targetCPU":50,"targetMemory":50}` | Autoscaling configuration |
| clearml_serving_triton.enabled | bool | `true` | Triton pod creation enable/disable |
| clearml_serving_triton.extraPythonPackages | list | `[]` | Extra Python Packages to be installed in running pods |
| clearml_serving_triton.image | string | `"allegroai/clearml-serving-triton"` | |
| clearml_serving_triton.nodeSelector | object | `{}` | |
| clearml_serving_triton.resources | object | `{}` | |
| clearml_serving_triton.tolerations | list | `[]` | |
| grafana.affinity | object | `{}` | |
| grafana.image | string | `"grafana/grafana:8.4.4-ubuntu"` | |
| grafana.nodeSelector | object | `{}` | |
| grafana.resources | object | `{}` | |
| grafana.tolerations | list | `[]` | |
| kafka.affinity | object | `{}` | |
| kafka.image | string | `"bitnami/kafka:3.1.0"` | |
| kafka.nodeSelector | object | `{}` | |
| kafka.resources | object | `{}` | |
| kafka.tolerations | list | `[]` | |
| prometheus.affinity | object | `{}` | |
| prometheus.image | string | `"prom/prometheus:v2.34.0"` | |
| prometheus.nodeSelector | object | `{}` | |
| prometheus.resources | object | `{}` | |
| prometheus.tolerations | list | `[]` | |
| zookeeper.affinity | object | `{}` | |
| zookeeper.image | string | `"bitnami/zookeeper:3.7.0"` | |
| zookeeper.nodeSelector | object | `{}` | |
| zookeeper.resources | object | `{}` | |
| zookeeper.tolerations | list | `[]` | |
| clearml_serving_triton.image | object | `{"repository":"allegroai/clearml-serving-triton","tag":"1.2.0-22.07"}` | Container Image |
| clearml_serving_triton.ingress | object | `{"annotations":{},"enabled":false,"hostName":"serving-grpc.clearml.127-0-0-1.nip.io","path":"/","tlsSecretName":""}` | Ingress exposing configurations |
| clearml_serving_triton.nodeSelector | object | `{}` | Node Selector configuration |
| clearml_serving_triton.resources | object | `{}` | Pod resources definition |
| clearml_serving_triton.tolerations | list | `[]` | Tolerations configuration |
| grafana | object | `{"affinity":{},"image":{"repository":"grafana/grafana","tag":"8.4.4-ubuntu"},"ingress":{"annotations":{},"enabled":false,"hostName":"serving-grafana.clearml.127-0-0-1.nip.io","path":"/","tlsSecretName":""},"nodeSelector":{},"resources":{},"tolerations":[]}` | Grafana generic configurations |
| kafka | object | `{"affinity":{},"image":{"repository":"bitnami/kafka","tag":"3.1.0"},"nodeSelector":{},"resources":{},"tolerations":[]}` | Kafka generic configurations |
| prometheus | object | `{"affinity":{},"image":{"repository":"prom/prometheus","tag":"v2.34.0"},"nodeSelector":{},"resources":{},"tolerations":[]}` | Prometheus generic configurations |
| zookeeper | object | `{"affinity":{},"image":{"repository":"bitnami/zookeeper","tag":"3.7.0"},"nodeSelector":{},"resources":{},"tolerations":[]}` | Zookeeper generic configurations |
----------------------------------------------
Autogenerated from chart metadata using [helm-docs v1.11.0](https://github.com/norwoodj/helm-docs/releases/v1.11.0)

View File

@@ -60,3 +60,33 @@ Create the name of the service account to use
{{- default "default" .Values.serviceAccount.name }}
{{- end }}
{{- end }}
{{/*
Return the target Kubernetes version
*/}}
{{- define "common.capabilities.kubeVersion" -}}
{{- if .Values.global }}
{{- if .Values.global.kubeVersion }}
{{- .Values.global.kubeVersion -}}
{{- else }}
{{- default .Capabilities.KubeVersion.Version .Values.kubeVersion -}}
{{- end -}}
{{- else }}
{{- default .Capabilities.KubeVersion.Version .Values.kubeVersion -}}
{{- end -}}
{{- end -}}
{{/*
Return the appropriate apiVersion for Horizontal Pod Autoscaler.
*/}}
{{- define "common.capabilities.hpa.apiVersion" -}}
{{- if semverCompare "<1.23-0" (include "common.capabilities.kubeVersion" .context) -}}
{{- if .beta2 -}}
{{- print "autoscaling/v2beta2" -}}
{{- else -}}
{{- print "autoscaling/v2beta1" -}}
{{- end -}}
{{- else -}}
{{- print "autoscaling/v2" -}}
{{- end -}}
{{- end -}}
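In a template, the helper is invoked with the root context, and the apiVersion it emits depends on the cluster version it resolves (a sketch; actual output depends on `.Capabilities.KubeVersion` or the `kubeVersion` override):
```
apiVersion: {{ include "common.capabilities.hpa.apiVersion" ( dict "context" $ ) }}
# on Kubernetes < 1.23 this renders "autoscaling/v2beta1"
# ("autoscaling/v2beta2" when "beta2" is passed in the dict)
# on Kubernetes 1.23+ it renders "autoscaling/v2"
```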

View File

@@ -19,10 +19,9 @@ spec:
clearml.serving.service: alertmanager
spec:
containers:
- image: {{ .Values.alertmanager.image }}
- image: "{{ .Values.alertmanager.image.repository }}:{{ .Values.alertmanager.image.tag }}"
name: clearml-serving-alertmanager
ports:
- containerPort: 9093
resources: {}
restartPolicy: Always
status: {}

View File

@@ -12,5 +12,3 @@ spec:
targetPort: 9093
selector:
clearml.serving.service: alertmanager
status:
loadBalancer: {}

View File

@@ -54,10 +54,9 @@ spec:
- name: EXTRA_PYTHON_PACKAGES
value: '{{ join " " .Values.clearml_serving_inference.extraPythonPackages }}'
{{- end }}
image: "{{ .Values.clearml_serving_inference.image }}:{{ .Chart.AppVersion }}"
image: "{{ .Values.clearml_serving_inference.image.repository }}:{{ .Values.clearml_serving_inference.image.tag }}"
name: clearml-serving-inference
ports:
- containerPort: 8080
resources: {}
restartPolicy: Always
status: {}

View File

@@ -0,0 +1,42 @@
{{- if .Values.clearml_serving_inference.autoscaling.enabled }}
apiVersion: {{ include "common.capabilities.hpa.apiVersion" ( dict "context" $ ) }}
kind: HorizontalPodAutoscaler
metadata:
name: clearml-serving-inference-hpa
namespace: {{ .Release.Namespace | quote }}
annotations: {}
labels:
clearml.serving.service: clearml-serving-inference
spec:
scaleTargetRef:
apiVersion: "apps/v1"
kind: Deployment
name: clearml-serving-inference
minReplicas: {{ .Values.clearml_serving_inference.autoscaling.minReplicas }}
maxReplicas: {{ .Values.clearml_serving_inference.autoscaling.maxReplicas }}
metrics:
{{- if .Values.clearml_serving_inference.autoscaling.targetCPU }}
- type: Resource
resource:
name: cpu
{{- if semverCompare "<1.23-0" (include "common.capabilities.kubeVersion" .) }}
targetAverageUtilization: {{ .Values.clearml_serving_inference.autoscaling.targetCPU }}
{{- else }}
target:
type: Utilization
averageUtilization: {{ .Values.clearml_serving_inference.autoscaling.targetCPU }}
{{- end }}
{{- end }}
{{- if .Values.clearml_serving_inference.autoscaling.targetMemory }}
- type: Resource
resource:
name: memory
{{- if semverCompare "<1.23-0" (include "common.capabilities.kubeVersion" .) }}
targetAverageUtilization: {{ .Values.clearml_serving_inference.autoscaling.targetMemory }}
{{- else }}
target:
type: Utilization
averageUtilization: {{ .Values.clearml_serving_inference.autoscaling.targetMemory }}
{{- end }}
{{- end }}
{{- end }}

View File

@@ -0,0 +1,40 @@
{{- if .Values.clearml_serving_inference.ingress.enabled -}}
{{- if semverCompare ">=1.19-0" .Capabilities.KubeVersion.GitVersion -}}
apiVersion: networking.k8s.io/v1
{{- else if semverCompare ">=1.14-0" .Capabilities.KubeVersion.GitVersion -}}
apiVersion: networking.k8s.io/v1beta1
{{- else -}}
apiVersion: extensions/v1beta1
{{- end }}
kind: Ingress
metadata:
name: clearml-serving-inference
labels:
clearml.serving.service: clearml-serving-inference
annotations:
{{- toYaml .Values.clearml_serving_inference.ingress.annotations | nindent 4 }}
spec:
{{- if .Values.clearml_serving_inference.ingress.tlsSecretName }}
tls:
- hosts:
- {{ .Values.clearml_serving_inference.ingress.hostName }}
secretName: {{ .Values.clearml_serving_inference.ingress.tlsSecretName }}
{{- end }}
rules:
- host: {{ .Values.clearml_serving_inference.ingress.hostName }}
http:
paths:
- path: {{ .Values.clearml_serving_inference.ingress.path }}
{{ if semverCompare ">=1.19-0" .Capabilities.KubeVersion.GitVersion }}
pathType: Prefix
backend:
service:
name: clearml-serving-inference
port:
number: 8080
{{ else }}
backend:
serviceName: clearml-serving-inference
servicePort: 8080
{{ end }}
{{- end }}

View File

@@ -12,5 +12,3 @@ spec:
targetPort: 8080
selector:
clearml.serving.service: clearml-serving-inference
status:
loadBalancer: {}

View File

@@ -40,10 +40,9 @@ spec:
- name: EXTRA_PYTHON_PACKAGES
value: '{{ join " " .Values.clearml_serving_statistics.extraPythonPackages }}'
{{- end }}
image: "{{ .Values.clearml_serving_statistics.image }}:{{ .Chart.AppVersion }}"
image: "{{ .Values.clearml_serving_statistics.image.repository }}:{{ .Values.clearml_serving_statistics.image.tag }}"
name: clearml-serving-statistics
ports:
- containerPort: 9999
resources: {}
restartPolicy: Always
status: {}

View File

@@ -12,5 +12,3 @@ spec:
targetPort: 9999
selector:
clearml.serving.service: clearml-serving-statistics
status:
loadBalancer: {}

View File

@@ -41,12 +41,11 @@ spec:
- name: EXTRA_PYTHON_PACKAGES
value: '{{ join " " .Values.clearml_serving_triton.extraPythonPackages }}'
{{- end }}
image: "{{ .Values.clearml_serving_triton.image }}:{{ .Chart.AppVersion }}"
image: "{{ .Values.clearml_serving_triton.image.repository }}:{{ .Values.clearml_serving_triton.image.tag }}"
name: clearml-serving-triton
ports:
- containerPort: 8001
resources: {}
restartPolicy: Always
status: {}
{{ end }}

View File

@@ -0,0 +1,42 @@
{{- if .Values.clearml_serving_triton.autoscaling.enabled }}
apiVersion: {{ include "common.capabilities.hpa.apiVersion" ( dict "context" $ ) }}
kind: HorizontalPodAutoscaler
metadata:
name: clearml-serving-triton-hpa
namespace: {{ .Release.Namespace | quote }}
annotations: {}
labels:
clearml.serving.service: clearml-serving-triton
spec:
scaleTargetRef:
apiVersion: "apps/v1"
kind: Deployment
name: clearml-serving-triton
minReplicas: {{ .Values.clearml_serving_triton.autoscaling.minReplicas }}
maxReplicas: {{ .Values.clearml_serving_triton.autoscaling.maxReplicas }}
metrics:
{{- if .Values.clearml_serving_triton.autoscaling.targetCPU }}
- type: Resource
resource:
name: cpu
{{- if semverCompare "<1.23-0" (include "common.capabilities.kubeVersion" .) }}
targetAverageUtilization: {{ .Values.clearml_serving_triton.autoscaling.targetCPU }}
{{- else }}
target:
type: Utilization
averageUtilization: {{ .Values.clearml_serving_triton.autoscaling.targetCPU }}
{{- end }}
{{- end }}
{{- if .Values.clearml_serving_triton.autoscaling.targetMemory }}
- type: Resource
resource:
name: memory
{{- if semverCompare "<1.23-0" (include "common.capabilities.kubeVersion" .) }}
targetAverageUtilization: {{ .Values.clearml_serving_triton.autoscaling.targetMemory }}
{{- else }}
target:
type: Utilization
averageUtilization: {{ .Values.clearml_serving_triton.autoscaling.targetMemory }}
{{- end }}
{{- end }}
{{- end }}

View File

@@ -0,0 +1,42 @@
{{- if .Values.clearml_serving_triton.enabled -}}
{{- if .Values.clearml_serving_triton.ingress.enabled -}}
{{- if semverCompare ">=1.19-0" .Capabilities.KubeVersion.GitVersion -}}
apiVersion: networking.k8s.io/v1
{{- else if semverCompare ">=1.14-0" .Capabilities.KubeVersion.GitVersion -}}
apiVersion: networking.k8s.io/v1beta1
{{- else -}}
apiVersion: extensions/v1beta1
{{- end }}
kind: Ingress
metadata:
name: clearml-serving-triton
labels:
clearml.serving.service: clearml-serving-triton
annotations:
{{- toYaml .Values.clearml_serving_triton.ingress.annotations | nindent 4 }}
spec:
{{- if .Values.clearml_serving_triton.ingress.tlsSecretName }}
tls:
- hosts:
- {{ .Values.clearml_serving_triton.ingress.hostName }}
secretName: {{ .Values.clearml_serving_triton.ingress.tlsSecretName }}
{{- end }}
rules:
- host: {{ .Values.clearml_serving_triton.ingress.hostName }}
http:
paths:
- path: {{ .Values.clearml_serving_triton.ingress.path }}
{{ if semverCompare ">=1.19-0" .Capabilities.KubeVersion.GitVersion }}
pathType: Prefix
backend:
service:
name: clearml-serving-triton
port:
number: 8001
{{ else }}
backend:
serviceName: clearml-serving-triton
servicePort: 8001
{{ end }}
{{- end }}
{{- end }}

View File

@@ -13,6 +13,4 @@ spec:
targetPort: 8001
selector:
clearml.serving.service: clearml-serving-triton
status:
loadBalancer: {}
{{ end }}

View File

@@ -20,7 +20,7 @@ spec:
clearml.serving.service: grafana
spec:
containers:
- image: {{ .Values.grafana.image }}
- image: "{{ .Values.grafana.image.repository }}:{{ .Values.grafana.image.tag }}"
name: clearml-serving-grafana
ports:
- containerPort: 3000
@@ -33,4 +33,3 @@ spec:
- name: grafana-conf
secret:
secretName: grafana-config
status: {}

@@ -0,0 +1,40 @@
{{- if .Values.grafana.ingress.enabled -}}
{{- if semverCompare ">=1.19-0" .Capabilities.KubeVersion.GitVersion -}}
apiVersion: networking.k8s.io/v1
{{- else if semverCompare ">=1.14-0" .Capabilities.KubeVersion.GitVersion -}}
apiVersion: networking.k8s.io/v1beta1
{{- else -}}
apiVersion: extensions/v1beta1
{{- end }}
kind: Ingress
metadata:
name: clearml-serving-grafana
labels:
clearml.serving.service: clearml-serving-grafana
annotations:
{{- toYaml .Values.grafana.ingress.annotations | nindent 4 }}
spec:
{{- if .Values.grafana.ingress.tlsSecretName }}
tls:
- hosts:
- {{ .Values.grafana.ingress.hostName }}
secretName: {{ .Values.grafana.ingress.tlsSecretName }}
{{- end }}
rules:
- host: {{ .Values.grafana.ingress.hostName }}
http:
paths:
- path: {{ .Values.grafana.ingress.path }}
{{ if semverCompare ">=1.19-0" .Capabilities.KubeVersion.GitVersion }}
pathType: Prefix
backend:
service:
name: clearml-serving-grafana
port:
number: 3000
{{ else }}
backend:
serviceName: clearml-serving-grafana
servicePort: 3000
{{ end }}
{{- end }}

@@ -12,5 +12,3 @@ spec:
targetPort: 3000
selector:
clearml.serving.service: grafana
status:
loadBalancer: {}

View File

@@ -32,10 +32,9 @@ spec:
value: clearml-serving-zookeeper:2181
- name: KAFKA_CREATE_TOPICS
value: '"topic_test:1:1"'
image: {{ .Values.kafka.image }}
image: "{{ .Values.kafka.image.repository }}:{{ .Values.kafka.image.tag }}"
name: clearml-serving-kafka
ports:
- containerPort: 9092
resources: {}
restartPolicy: Always
status: {}

@@ -12,5 +12,3 @@ spec:
targetPort: 9092
selector:
clearml.serving.service: kafka
status:
loadBalancer: {}

@@ -27,7 +27,7 @@ spec:
- --web.console.templates=/etc/prometheus/consoles
- --storage.tsdb.retention.time=200h
- --web.enable-lifecycle
image: {{ .Values.prometheus.image }}
image: "{{ .Values.prometheus.image.repository }}:{{ .Values.prometheus.image.tag }}"
name: clearml-serving-prometheus
ports:
- containerPort: 9090
@@ -40,4 +40,3 @@ spec:
- name: prometheus-conf
secret:
secretName: prometheus-config
status: {}

@@ -12,5 +12,3 @@ spec:
targetPort: 9090
selector:
clearml.serving.service: prometheus
status:
loadBalancer: {}

@@ -22,10 +22,9 @@ spec:
- env:
- name: ALLOW_ANONYMOUS_LOGIN
value: "yes"
image: {{ .Values.zookeeper.image }}
image: "{{ .Values.zookeeper.image.repository }}:{{ .Values.zookeeper.image.tag }}"
name: clearml-serving-zookeeper
ports:
- containerPort: 2181
resources: {}
restartPolicy: Always
status: {}

@@ -12,5 +12,3 @@ spec:
targetPort: 2181
selector:
clearml.serving.service: zookeeper
status:
loadBalancer: {}

@@ -1,5 +1,4 @@
# Default values for clearml-serving.
# -- ClearML generic configurations
clearml:
apiAccessKey: "ClearML API Access Key"
apiSecretKey: "ClearML API Secret Key"
@@ -9,71 +8,157 @@ clearml:
defaultBaseServeUrl: http://127.0.0.1:8080/serve
servingTaskId: "ClearML Serving Task ID"
# -- Zookeeper generic configurations
zookeeper:
image: bitnami/zookeeper:3.7.0
image:
repository: "bitnami/zookeeper"
tag: "3.7.0"
nodeSelector: {}
tolerations: []
affinity: {}
resources: {}
# -- Kafka generic configurations
kafka:
image: bitnami/kafka:3.1.0
image:
repository: "bitnami/kafka"
tag: "3.1.0"
nodeSelector: {}
tolerations: []
affinity: {}
resources: {}
# -- Prometheus generic configurations
prometheus:
image: prom/prometheus:v2.34.0
image:
repository: "prom/prometheus"
tag: "v2.34.0"
nodeSelector: {}
tolerations: []
affinity: {}
resources: {}
# -- Grafana generic configurations
grafana:
image: grafana/grafana:8.4.4-ubuntu
image:
repository: "grafana/grafana"
tag: "8.4.4-ubuntu"
nodeSelector: {}
tolerations: []
affinity: {}
resources: {}
ingress:
enabled: false
hostName: "serving-grafana.clearml.127-0-0-1.nip.io"
tlsSecretName: ""
annotations: {}
path: "/"
# -- Alertmanager generic configurations
alertmanager:
image: prom/alertmanager:v0.23.0
image:
repository: "prom/alertmanager"
tag: "v0.23.0"
nodeSelector: {}
tolerations: []
affinity: {}
resources: {}
# -- ClearML serving statistics configurations
clearml_serving_statistics:
image: allegroai/clearml-serving-statistics
# -- Container Image
image:
repository: "allegroai/clearml-serving-statistics"
tag: "1.2.0"
# -- Node Selector configuration
nodeSelector: {}
# -- Tolerations configuration
tolerations: []
# -- Affinity configuration
affinity: {}
# -- Pod resources definition
resources: {}
# -- Extra Python Packages to be installed in running pods
extraPythonPackages: []
# - numpy==1.22.4
# - pandas==1.4.2
# -- ClearML serving inference configurations
clearml_serving_inference:
image: allegroai/clearml-serving-inference
# -- Container Image
image:
repository: "allegroai/clearml-serving-inference"
tag: "1.2.0"
# -- Node Selector configuration
nodeSelector: {}
# -- Tolerations configuration
tolerations: []
# -- Affinity configuration
affinity: {}
# -- Pod resources definition
resources: {}
# -- Extra Python Packages to be installed in running pods
extraPythonPackages: []
# - numpy==1.22.4
# - pandas==1.4.2
# -- Autoscaling configuration
autoscaling:
enabled: false
minReplicas: 1
maxReplicas: 11
targetCPU: 50
targetMemory: 50
# -- Ingress exposing configurations
ingress:
enabled: false
hostName: "serving.clearml.127-0-0-1.nip.io"
tlsSecretName: ""
annotations: {}
path: "/"
# -- ClearML serving Triton configurations
clearml_serving_triton:
# -- Triton pod creation enable/disable
enabled: true
image: allegroai/clearml-serving-triton
# -- Container Image
image:
repository: "allegroai/clearml-serving-triton"
tag: "1.2.0-22.07"
# -- Node Selector configuration
nodeSelector: {}
# -- Tolerations configuration
tolerations: []
# -- Affinity configuration
affinity: {}
# -- Pod resources definition
resources: {}
# -- Extra Python Packages to be installed in running pods
extraPythonPackages: []
# - numpy==1.22.4
# - pandas==1.4.2
# -- Autoscaling configuration
autoscaling:
enabled: false
minReplicas: 1
maxReplicas: 11
targetCPU: 50
targetMemory: 50
# -- Ingress exposing configurations
ingress:
enabled: false
hostName: "serving-grpc.clearml.127-0-0-1.nip.io"
tlsSecretName: ""
annotations: {}
# # Example for AWS ALB
# kubernetes.io/ingress.class: alb
# alb.ingress.kubernetes.io/backend-protocol: HTTP
# alb.ingress.kubernetes.io/backend-protocol-version: GRPC
# alb.ingress.kubernetes.io/certificate-arn: <certificate arn>
# alb.ingress.kubernetes.io/ssl-redirect: '443'
# alb.ingress.kubernetes.io/listen-ports: '[{"HTTPS":443}]'
# alb.ingress.kubernetes.io/target-type: ip
#
# # Example for NGINX ingress controller
# nginx.ingress.kubernetes.io/ssl-redirect: "true"
# nginx.ingress.kubernetes.io/backend-protocol: "GRPC"
path: "/"
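With every image split into a `repository`/`tag` pair, pinning or mirroring a single component no longer requires template changes; a plain values override is enough. A sketch (the registry host is an illustrative placeholder):

```yaml
# override-images.yaml -- pull Grafana from a private mirror
grafana:
  image:
    repository: "registry.example.com/grafana/grafana"   # illustrative mirror
    tag: "8.4.4-ubuntu"
```

Applied with something like `helm upgrade --install clearml-serving <chart> -f override-images.yaml`.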

@@ -2,8 +2,9 @@ apiVersion: v2
name: clearml
description: MLOps platform
type: application
version: "4.2.0"
appVersion: "1.6.0"
version: "4.4.0"
appVersion: "1.8.0"
kubeVersion: ">= 1.21.0-0 < 1.26.0-0"
home: https://clear.ml
icon: https://raw.githubusercontent.com/allegroai/clearml/master/docs/clearml-logo.svg
sources:

@@ -1,6 +1,6 @@
# ClearML Ecosystem for Kubernetes
![Version: 4.2.0](https://img.shields.io/badge/Version-4.2.0-informational?style=flat-square) ![Type: application](https://img.shields.io/badge/Type-application-informational?style=flat-square) ![AppVersion: 1.6.0](https://img.shields.io/badge/AppVersion-1.6.0-informational?style=flat-square)
![Version: 4.4.0](https://img.shields.io/badge/Version-4.4.0-informational?style=flat-square) ![Type: application](https://img.shields.io/badge/Type-application-informational?style=flat-square) ![AppVersion: 1.8.0](https://img.shields.io/badge/AppVersion-1.8.0-informational?style=flat-square)
MLOps platform
@@ -119,6 +119,8 @@ For detailed instructions, see the [Optional Configuration](https://github.com/a
## Requirements
Kubernetes: `>= 1.21.0-0 < 1.26.0-0`
| Repository | Name | Version |
|------------|------|---------|
| file://../../dependency_charts/elasticsearch | elasticsearch | 7.16.2 |
@@ -136,7 +138,7 @@ For detailed instructions, see the [Optional Configuration](https://github.com/a
| apiserver.extraEnvs | list | `[]` | |
| apiserver.image.pullPolicy | string | `"IfNotPresent"` | |
| apiserver.image.repository | string | `"allegroai/clearml"` | |
| apiserver.image.tag | string | `"1.6.0"` | |
| apiserver.image.tag | string | `"1.8.0"` | |
| apiserver.livenessDelay | int | `60` | |
| apiserver.nodeSelector | object | `{}` | |
| apiserver.podAnnotations | object | `{}` | |
@@ -188,15 +190,15 @@ For detailed instructions, see the [Optional Configuration](https://github.com/a
| elasticsearch.volumeClaimTemplate.resources.requests.storage | string | `"50Gi"` | |
| externalServices.elasticsearchHost | string | `""` | Existing ElasticSearch Hostname to use if elasticsearch.enabled is false |
| externalServices.elasticsearchPort | int | `9200` | Existing ElasticSearch Port to use if elasticsearch.enabled is false |
| externalServices.mongodbHost | string | `""` | Existing MongoDB Hostname to use if elasticsearch.enabled is false |
| externalServices.mongodbPort | int | `27017` | Existing MongoDB Port to use if elasticsearch.enabled is false |
| externalServices.redisHost | string | `""` | Existing Redis Hostname to use if elasticsearch.enabled is false |
| externalServices.redisPort | int | `6379` | Existing Redis Port to use if elasticsearch.enabled is false |
| externalServices.mongodbHost | string | `""` | Existing MongoDB Hostname to use if mongodb.enabled is false |
| externalServices.mongodbPort | int | `27017` | Existing MongoDB Port to use if mongodb.enabled is false |
| externalServices.redisHost | string | `""` | Existing Redis Hostname to use if redis.enabled is false |
| externalServices.redisPort | int | `6379` | Existing Redis Port to use if redis.enabled is false |
| fileserver.affinity | object | `{}` | |
| fileserver.extraEnvs | list | `[]` | |
| fileserver.image.pullPolicy | string | `"IfNotPresent"` | |
| fileserver.image.repository | string | `"allegroai/clearml"` | |
| fileserver.image.tag | string | `"1.6.0"` | |
| fileserver.image.tag | string | `"1.8.0"` | |
| fileserver.nodeSelector | object | `{}` | |
| fileserver.podAnnotations | object | `{}` | |
| fileserver.replicaCount | int | `1` | |
@@ -263,7 +265,7 @@ For detailed instructions, see the [Optional Configuration](https://github.com/a
| webserver.extraEnvs | list | `[]` | |
| webserver.image.pullPolicy | string | `"IfNotPresent"` | |
| webserver.image.repository | string | `"allegroai/clearml"` | |
| webserver.image.tag | string | `"1.6.0"` | |
| webserver.image.tag | string | `"1.8.0"` | |
| webserver.nodeSelector | object | `{}` | |
| webserver.podAnnotations | object | `{}` | |
| webserver.replicaCount | int | `1` | |

@@ -83,7 +83,7 @@ apiserver:
image:
repository: "allegroai/clearml"
pullPolicy: IfNotPresent
tag: "1.6.0"
tag: "1.8.0"
extraEnvs: []
@@ -155,7 +155,7 @@ fileserver:
image:
repository: "allegroai/clearml"
pullPolicy: IfNotPresent
tag: "1.6.0"
tag: "1.8.0"
extraEnvs: []
@@ -200,7 +200,7 @@ webserver:
image:
repository: "allegroai/clearml"
pullPolicy: IfNotPresent
tag: "1.6.0"
tag: "1.8.0"
podAnnotations: {}
@@ -229,13 +229,13 @@ externalServices:
elasticsearchHost: ""
# -- Existing ElasticSearch Port to use if elasticsearch.enabled is false
elasticsearchPort: 9200
# -- Existing MongoDB Hostname to use if elasticsearch.enabled is false
# -- Existing MongoDB Hostname to use if mongodb.enabled is false
mongodbHost: ""
# -- Existing MongoDB Port to use if elasticsearch.enabled is false
# -- Existing MongoDB Port to use if mongodb.enabled is false
mongodbPort: 27017
# -- Existing Redis Hostname to use if elasticsearch.enabled is false
# -- Existing Redis Hostname to use if redis.enabled is false
redisHost: ""
# -- Existing Redis Port to use if elasticsearch.enabled is false
# -- Existing Redis Port to use if redis.enabled is false
redisPort: 6379
redis: # configuration from https://github.com/bitnami/charts/blob/master/bitnami/redis/values.yaml
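The corrected comments make the pairing explicit: each `externalServices` entry is consulted only when the matching bundled dependency is disabled. A sketch of pointing the chart at an existing MongoDB instead of the bundled one (the host is an illustrative placeholder):

```yaml
mongodb:
  enabled: false
externalServices:
  mongodbHost: "mongodb.infra.svc.cluster.local"   # illustrative in-cluster host
  mongodbPort: 27017
```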

@@ -1,6 +1,6 @@
---
{{- if .Values.maxUnavailable }}
apiVersion: policy/v1beta1
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
name: "{{ template "elasticsearch.uname" . }}-pdb"

@@ -1,6 +1,6 @@
{{- if .Values.podSecurityPolicy.create -}}
{{- $fullName := include "elasticsearch.uname" . -}}
apiVersion: policy/v1beta1
apiVersion: policy/v1
kind: PodSecurityPolicy
metadata:
name: {{ default $fullName .Values.podSecurityPolicy.name | quote }}

@@ -1,5 +1,5 @@
{{- if and (include "mongodb.arbiter.enabled" .) .Values.arbiter.pdb.create }}
apiVersion: policy/v1beta1
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
name: {{ include "mongodb.fullname" . }}-arbiter

@@ -1,5 +1,5 @@
{{- if and (eq .Values.architecture "replicaset") .Values.pdb.create }}
apiVersion: policy/v1beta1
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
name: {{ include "mongodb.fullname" . }}

@@ -58,7 +58,7 @@ Return the appropriate apiVersion for PodSecurityPolicy.
*/}}
{{- define "podSecurityPolicy.apiVersion" -}}
{{- if semverCompare ">=1.14-0" .Capabilities.KubeVersion.GitVersion -}}
{{- print "policy/v1beta1" -}}
{{- print "policy/v1" -}}
{{- else -}}
{{- print "extensions/v1beta1" -}}
{{- end -}}
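This helper is the single place other templates resolve the PSP apiVersion from, typically via `include`; a sketch of a consuming template (illustrative, not part of this diff):

```yaml
apiVersion: {{ include "podSecurityPolicy.apiVersion" . }}
kind: PodSecurityPolicy
metadata:
  name: {{ .Release.Name }}-psp   # illustrative name
```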

@@ -1,5 +1,5 @@
{{- if .Values.podDisruptionBudget.enabled }}
apiVersion: policy/v1beta1
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
name: {{ template "redis.fullname" . }}