Compare commits

..

144 Commits

Author SHA1 Message Date
riccardocaselli-clearml
747c018adb Issue 343: [clearml] apiserver-asyncdelete cannot start (-> CrashLoopBackOff in kubernetes) (#353)
Fixed: casted port to string before concatenation

Co-authored-by: Casell <supercasell@gmail.com>
2025-03-12 10:27:22 +01:00
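The one-line fix above is a common Helm pitfall: ports in `values.yaml` are integers, and Go-template string formatting either errors on non-strings or, with `printf "%s"`, renders `%!s(int=8008)`-style garbage, so the value must be cast with `toString` before concatenation. A sketch of the pattern — the value paths and names below are illustrative assumptions, not the chart's actual keys:

```yaml
# Hypothetical values fragment (assumed names, not from the chart):
#   apiserver:
#     service:
#       port: 8008
#
# Concatenating the integer directly into a string-formatting call misrenders
# at template time; casting first works:
env:
  - name: CLEARML_API_HOST
    value: '{{ printf "http://%s:%s" "clearml-apiserver" (.Values.apiserver.service.port | toString) }}'
```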
Filippo Brintazzoli
299cc2adb4 Changed: Support kubernetes 1.32 (#351)
Co-authored-by: fbrintazzoli <filippo.brintazzoli@clear.ml>
2025-02-25 11:47:55 +01:00
Filippo Brintazzoli
1479cf9ed2 [Serving] Changed: Support kubernetes 1.32 (#350)
* Changed: Support kubernetes 1.32

* Changed: increase version

---------

Co-authored-by: fbrintazzoli <filippo.brintazzoli@clear.ml>
2025-02-25 11:31:22 +01:00
Filippo Brintazzoli
2a4d9569f3 [Agent] Changed: Support kubernetes 1.32 (#349)
* Changed: Support kubernetes 1.32

* Changed: increase version

---------

Co-authored-by: fbrintazzoli <filippo.brintazzoli@clear.ml>
2025-02-25 11:28:07 +01:00
Filippo Brintazzoli
6c212f5b82 Changed: ClearML GH Organization Rename (#347)
Co-authored-by: fbrintazzoli <filippo.brintazzoli@clear.ml>
2025-01-21 17:29:57 +01:00
Filippo Brintazzoli
4885e01750 Fixed: ClearML org rename (#346)
Co-authored-by: fbrintazzoli <filippo.brintazzoli@clear.ml>
2025-01-21 17:04:32 +01:00
Filippo Brintazzoli
311f6ea9e0 Fixed: ClearML org rename (#345)
Co-authored-by: fbrintazzoli <filippo.brintazzoli@clear.ml>
2025-01-21 17:03:57 +01:00
Filippo Brintazzoli
966a0e69ab Fixed: ClearML org rename (#344)
Co-authored-by: fbrintazzoli <filippo.brintazzoli@clear.ml>
2025-01-21 17:03:00 +01:00
Filippo Brintazzoli
4cd31fa843 Fixed: release because of previous artifacthub fix (#341)
Co-authored-by: fbrintazzoli <filippo.brintazzoli@clear.ml>
2025-01-09 11:05:31 +01:00
Filippo Brintazzoli
6b2954ab9f Fixed: artifacthub annotation (#340)
Co-authored-by: fbrintazzoli <filippo.brintazzoli@clear.ml>
2025-01-09 10:50:00 +01:00
Filippo Brintazzoli
fc518f9389 Changed: App v2.0 and MongoDB v6 (#339)
* Changed: App v2.0 and MongoDB v6

* Fixed: helm-docs

---------

Co-authored-by: fbrintazzoli <filippo.brintazzoli@clear.ml>
2025-01-09 10:05:12 +01:00
Filippo Brintazzoli
389159aa0c [clearml] Updated app version to 1.17.1 and MongoDB v5 (#338)
* Changed: Updated app version 1.17.1

* Changed: updated to Mongo 5

---------

Co-authored-by: fbrintazzoli <filippo.brintazzoli@clear.ml>
2025-01-03 09:58:48 +01:00
Filippo Brintazzoli
7c53365cd8 Added: Support for additional ServiceAccount annotations (#337)
Co-authored-by: fbrintazzoli <filippo.brintazzoli@clear.ml>
2025-01-02 16:36:58 +01:00
Daglar Berk Erdem
a51c7ee856 [ClearML] Added service account annotations (#336)
* fix: removed harcoded apiVersion

* feat: add support for custom annotations to the created service account

* feat: added service account annotations

* chore: updated README.md

* chore: chart version bump

* Revert "fix: removed harcoded apiVersion"

This reverts commit 18da292366.

* Revert "feat: add support for custom annotations to the created service account"

This reverts commit 8dc926bf1b.

---------

Co-authored-by: Dağlar Berk Erdem <daglar@codeway.co>
2025-01-02 15:56:11 +01:00
Filippo Brintazzoli
67c3720cf9 Changed: Kind versions for CI (#329)
Co-authored-by: fbrintazzoli <filippo.brintazzoli@clear.ml>
2024-10-07 14:24:19 +02:00
Filippo Brintazzoli
c501ede9be Added: Support for Kubernetes 1.31 (#328)
Co-authored-by: fbrintazzoli <filippo.brintazzoli@clear.ml>
2024-10-07 14:21:23 +02:00
Filippo Brintazzoli
10a33b65f7 Added: Support for Kubernetes 1.31 (#327)
Co-authored-by: fbrintazzoli <filippo.brintazzoli@clear.ml>
2024-10-07 14:20:39 +02:00
Filippo Brintazzoli
0866033bac Fixed: Triton deployment tolerations placement fix (#325)
Co-authored-by: fbrintazzoli <filippo.brintazzoli@clear.ml>
2024-09-27 17:10:16 +02:00
Ben Lewis
02a87e18f5 Fix duplicate CLEARML__secure__auth__token_secret in apiserver-asyncdelete-deployment.yaml (#322)
* Fix duplicate `CLEARML__secure__auth__token_secret` in `apiserver-asyncdelete-deployment.yaml`

* Bump `version` in `Chart.yaml`

* Bump version in `README.md`

* Update Chart.yaml

Apply request to change `Chart.yaml`
2024-09-23 09:56:29 +02:00
Filippo Brintazzoli
7532609c35 Changed: Update to 1.16.2 (#315)
Co-authored-by: fbrintazzoli <filippo.brintazzoli@clear.ml>
2024-08-27 17:21:54 +02:00
Filippo Brintazzoli
bf17429258 Updated clearml-serving README and CI (#313)
* Changed: updated Readme

* Changed: CI actions versions

* Changed: CI default test values

* Fixed: newline

* Changed: tests for CI

* Fixed: newline

* Changed: removed CI values customization

---------

Co-authored-by: fbrintazzoli <filippo.brintazzoli@clear.ml>
2024-08-21 16:02:20 +02:00
Filippo Brintazzoli
adcb4b0fc9 Changed: updated Readme (#314)
Co-authored-by: fbrintazzoli <filippo.brintazzoli@clear.ml>
2024-08-19 15:52:58 +02:00
Filippo Brintazzoli
873f222d68 Changed: updated Readme (#312)
Co-authored-by: fbrintazzoli <filippo.brintazzoli@clear.ml>
2024-08-19 15:49:47 +02:00
Valeriano Manassero
fbbbee5ef2 1.16.1 release (#307)
* Changed: image version 1.16.1

* Changed: bump up
2024-07-22 08:18:05 +02:00
Valeriano Manassero
e569ed3d9e Changed: assign issue to filippo (#304) 2024-06-27 09:32:48 +02:00
Valeriano Manassero
2b0d67000a 1.16 (#302)
* Added: env vars

* Changed: upgrade app to 1.16

* Changed: bump up

* Changed: bump up
2024-06-27 08:54:39 +02:00
Valeriano Manassero
700d6b244b Fixed: rename yml (#301) 2024-06-19 13:48:30 +02:00
Valeriano Manassero
61c96ddf6d Fixed: remove blank issues (#300) 2024-06-19 13:46:31 +02:00
Valeriano Manassero
ad3bdf2372 CreateQueue disabled by default (#294)
* Fixed: create queue disable by default

* Changed: bump up

* Fixed: command composition
2024-06-06 09:51:21 +02:00
Daglar Berk Erdem
6cd821742b fix volumeMount indentation (#298)
* fix volumeMount indentation

* version bump
2024-06-06 09:31:52 +02:00
Daglar Berk Erdem
ac96346607 feature: ability to add volume and volumeMounts to deployments (#296) 2024-06-04 11:34:41 +02:00
Daglar Berk Erdem
312813cc34 Added: deploymentAnnotations (#292)
* added deploymentAnnotations

* version minor bump

* fixed version in README

* removed precommit file

---------

Co-authored-by: Dağlar Berk Erdem <daglar@codeway.co>
2024-05-28 08:40:54 +02:00
Uzmar Gomez
255cabfd7f Feature/multiplequeues (#286)
* copy changes

* add workflow when push

* update readme

* update readme

* delete test workflow

* bump chart version to 5.2.0

* test ci

* delete test ci

* bump kubeversion

* update readme

* Update Chart.yaml

---------

Co-authored-by: Valeriano Manassero <14011549+valeriano-manassero@users.noreply.github.com>
2024-05-15 16:19:40 +02:00
Valeriano Manassero
204e2ac350 Fixed: 1.5.6 release (#289) 2024-05-15 09:32:02 +02:00
Valeriano Manassero
56d246fcea Added: k8s 1.30 support (#288) 2024-05-15 09:26:44 +02:00
Valeriano Manassero
0e4a809d46 Added: 1.30 support (#287) 2024-05-15 08:59:34 +02:00
Uzmar Gomez
cfe62484af Use existingClearmlConfigSecret (#283)
* use config as secret

* update chart and readme

* avoid creating secret if not needed

* move string on values.yaml
2024-05-14 11:05:03 +02:00
Valeriano Manassero
5ded87ec92 Update 1.15.1 (#284)
* Changed: 1.15.1 update

* Changed: bump up version
2024-05-14 08:24:24 +02:00
stephanbertl
65393cd4bf affinity for serving deployments was incorrectly put under containers… (#280)
* affinity for serving deployments was incorrectly put under containers. It needs to be put at pod spec level.

* Fixed: helm docs generation

* Fixed: helm-docs generation

---------

Co-authored-by: = <s.bertl@iaea.org>
Co-authored-by: Valeriano Manassero <14011549+valeriano-manassero@users.noreply.github.com>
2024-05-07 14:03:29 +02:00
savitha-qs
4845bd3f4f Issue-275: Honor existingAgentk8sglueSecret in deployment template (#276)
* Issue-275: Honor existingAgentk8sglueSecret in deployment template

* Issue-275: rev the patch version of the chart

* Issue-275: update README for clearml-agent chart

* Issue-275: update annotation in Chart.yaml

* Issue-275: remove item not relevant to Issue 275 from Chart annotations

* Issue-275: add new line to end of file to get lint to pass

* Update Chart.yaml

---------

Co-authored-by: Savitha Ganapathi <sganapathi@quantumscape.com>
Co-authored-by: Valeriano Manassero <14011549+valeriano-manassero@users.noreply.github.com>
2024-04-02 08:47:14 +02:00
Valeriano Manassero
4cb8db5baf 277 apiserver and asyncdelete apiserver having the same selector (#278)
* Fixed: asyncdelete wrong selector

* Changed: bump up
2024-03-28 14:12:55 +01:00
Valeriano Manassero
4242e518ae 273 clearml chart service async delete not deleting files (#274)
* Fixed: missing asyncdelete config mount

* Changed: bump up version
2024-03-26 14:33:26 +01:00
Valeriano Manassero
4ca4bc82c4 Release 1.15.0 (#272)
* Changed: upgrade app to 1.15

* Fixed: missing scheme

* Added: async delete
2024-03-21 14:58:07 +01:00
Valeriano Manassero
75d9086bb9 Changed: bump up version (#271) 2024-03-21 14:47:57 +01:00
Valeriano Manassero
546c2b6d2e Changed: bump up to support k8s 1.29 (#270)
* Changed: bump up to support k8s 1.29

* Changed: k8s versions

* Changed: exclude serving

* Fixed: changes
2024-02-20 10:57:17 +01:00
Valeriano Manassero
0a2bc7c2a8 Update 1.14.1 (#269)
* Changed: bump version to 1.14.1

* Changed: bump up version
2024-02-13 16:17:00 +01:00
Valeriano Manassero
25de2dfaeb 262 clearml agent wrong nindent for agentk8sglue deployment template (#267)
* Fixed: wrong indentation

* Changed: bump up
2024-01-10 09:23:52 +01:00
Valeriano Manassero
28d9fe82f6 Fix label typo (#266)
* Fixed: clearml labels function reference name

* Changed: bump up version
2024-01-09 17:18:58 +01:00
Valeriano Manassero
e680990d10 App 1.14 (#264)
* Changed: app version to 1.14

* Changed: bump chart version
2024-01-05 08:06:23 +01:00
stephanbertl
fdbbe5b90d fixed #256. added runtimeClassName and fixed incorrect placement of nodeSelector (#259)
* fixed #256.
nodeSelector was incorrectly placed under the container.
moved it to pod spec
added runtimeClassName to the pod spec to select specific GPU nodes.

* increment version number

* added artifacthub.io/changes

* update readme.md

* try to fix helm docs generation issue

* update readme.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md

---------

Co-authored-by: IAEA_SG\BERTLS <s.bertl@iaea.org>
Co-authored-by: Valeriano Manassero <14011549+valeriano-manassero@users.noreply.github.com>
2023-11-20 14:37:01 +01:00
Valeriano Manassero
eff09794bd Upgrade app 1.13 (#260)
* Changed: app version upgrade

* Changed: bump up chart

* Changed: helm-docs update

* Changed:  ci update

* Fixed: depth
2023-11-16 07:40:10 +01:00
Valeriano Manassero
d13ad20e34 Fix serving extraenvs (#258)
* Fixed: missing extraEnvs

* Changed: bump version
2023-10-27 07:41:19 +02:00
Valeriano Manassero
5ed9673ea1 254 clearml server helper function wrong readiness probes fail because wrong redis service is referenced (#257)
* Fixed: redis name generation

* Changed: bump up version
2023-10-25 16:15:15 +02:00
Valeriano Manassero
0892882cf7 Mount files additionalconfigs (#255)
* Added: mount file for additional configs in pod

* Changed: bump up version

* Fixed: missing parameters for deployment

* Fixed naming typo

* Changed: changelog fixes added
2023-10-25 15:20:46 +02:00
Erik Jacobs
a1673ae8e4 adds a default clearml-core service account for the core components (#253)
* adds a default clearml-core service account for the core components

* Changed: use many sa

* Fixed: sa name

* Fixed: sa name

* Fixed: sa name

* Fixed: naming

* Update README.md

---------

Co-authored-by: Valeriano Manassero <14011549+valeriano-manassero@users.noreply.github.com>
2023-09-27 09:19:55 +02:00
Valeriano Manassero
503ab437ad 246 allegroaiclearml elastic statefulset does not use a dedicated serviceaccount (#252)
* Fixed: missing serviceAccount for elasticsearch

* Changed: bump up version
2023-08-22 08:12:12 +02:00
Valeriano Manassero
a36519536b 250 clearml serving issue title (#251)
* Fixed: name reference

* Changed: bump up version
2023-08-18 10:30:21 +02:00
Valeriano Manassero
b42a93e361 248 any chart test on kubernetes 128 (#249)
* Changed: updated k8s versions

* Added: support for 1.28

* Fixed: typo in annotation
2023-08-16 08:47:10 +02:00
Valeriano Manassero
d170d0f606 Changed: released version (#245) 2023-08-02 09:18:20 +02:00
Valeriano Manassero
43495f4a59 Upgrade app 1.12 (#244)
* Changed: upgrade app to 1.12

* Changed: helm-docs update
2023-08-02 09:16:20 +02:00
Robin
5ef3727154 add imagecredentials to triton docker image (#242)
* add imagecredentials to triton docker image

* bump version

* add secrets to all serving charts

* add changelog entry

* Fixed: removed one chart annotations

---------

Co-authored-by: Robin <robinvandijk@klippa.com>
Co-authored-by: Valeriano Manassero <14011549+valeriano-manassero@users.noreply.github.com>
2023-07-19 11:53:25 +02:00
Valeriano Manassero
3f7d1a1c1e 240 fix init container waits forever pinging a redis in production config (#241)
* Fixed: unused leftover

* Fixed: init container fail

* Changed: bump up version
2023-07-19 11:28:26 +02:00
Jan Wytze Zuidema
c8aaf91f52 Add the ability to disable Serving Statistics and to configure the Kafka url (#239)
Co-authored-by: Jan Wytze Zuidema <janwytze@klippa.com>
2023-06-30 09:58:07 +02:00
Valeriano Manassero
65671a35b2 225 clear ml serving ingress doesnt work (#235)
* Changed: removed deprecated network policy

* Changed: bump up

* Changed: bump up

* Fixed: missing ingress names

* Changed: changelog update
2023-06-19 17:00:46 +02:00
Valeriano Manassero
dd2289f3e1 Changed: template update (#237) 2023-06-19 16:59:00 +02:00
Valeriano Manassero
a53b0e8eac 233 clearml default externalservices values (#234)
* Added: default externalServices values

* Changed: bump up version
2023-06-16 09:23:30 +02:00
Valeriano Manassero
bad5618226 Repo templating (#232)
* Added: issue templates

* Added: pr template
2023-06-16 09:00:40 +02:00
Valeriano Manassero
8b5cc58675 229 resouces in the values file for agentk8sglue deployment (#230)
* Added: resources definitions

* Changed: bump up
2023-06-15 09:05:38 +02:00
Valeriano Manassero
63bc2c944c 227 agentk8sglue configmapyaml contains a double name tag for the imagepullsecrets (#228)
* Fixed: typo

* Changed: bump up
2023-06-14 15:53:19 +02:00
Tino Tap
2080cae5e8 Issue #218 (#219)
* Update values.yaml

added existingAdditionalSecret

* add existingSecret

* Update values.yaml

* better implemented

* Update Chart.yaml

to version 7.2.1

* Update values.yaml

added documentation to values.yaml

* artifact hub annotations

* Changed: moved to helper function

* Changed: helm-docs update

* changed the key to exstingSecret om _helpers

---------

Co-authored-by: Valeriano Manassero <14011549+valeriano-manassero@users.noreply.github.com>
2023-06-14 15:43:23 +02:00
Valeriano Manassero
2861b5b074 Serving-1.3 (#226)
* Fixed: if triton is disabled, ignore autoscaling

* Changed: app version bump to 1.3.0

* Changed: bump up

* Changed: bump up
2023-06-14 10:14:22 +02:00
Valeriano Manassero
9ba1d0ac1a Added artifact hub annotations (#224) 2023-06-13 13:35:30 +02:00
Jan Wytze Zuidema
22a7dea1fb Add the ability to define custom environment variables for inference (#221) (#222)
Co-authored-by: Jan Wytze Zuidema <janwytze@klippa.com>
2023-06-08 15:32:21 +02:00
Valeriano Manassero
550b7ca527 Fix ci (#223)
* Fixed: no chart change detected

* Fixed: get entire depth
2023-06-08 15:01:49 +02:00
Valeriano Manassero
3f67293663 Update CONTRIBUTING.md (#220) 2023-06-08 09:12:57 +02:00
Valeriano Manassero
9131a64b38 214 missing resources in initcontainers (#217)
* Added: initContainers resources definition

* Changed: version bump
2023-06-07 08:34:19 +02:00
pollfly
61d9d931ae Edit README (#215) 2023-06-07 08:10:29 +02:00
Valeriano Manassero
80372304cd Changed: application version upgrade to 1.11 (#213) 2023-05-29 17:21:07 +02:00
Valeriano Manassero
78ba93a0df 210 wrong usage of extra python packages environment variable (#212)
* Fixed: env var name reference

* Changed: version bump
2023-05-29 12:55:00 +02:00
Valeriano Manassero
1b6b3dce94 Changed: slack channel reference (#209) 2023-05-18 11:12:03 +03:00
Valeriano Manassero
1ba6440c58 [Serving] Fix resources setting (#208)
* Fixed: resources

* Changed: bump up version

* Fixed: indentation

* Fixed: indentation
2023-05-11 16:35:40 +02:00
Valeriano Manassero
5b31ea8599 Remove unsupported dynamic svc (#206)
* Removed: unsupported values

* Changed: version bump

* Changed: removed not needed value

* Changed: helm-docs

* Removed: unsupported values
2023-05-08 17:25:01 +02:00
Valeriano Manassero
876df432d4 Changed: version bump 2023-04-14 12:11:19 +02:00
Valeriano Manassero
bf755ed6b8 Changed: set replication for this scenario 2023-04-14 12:11:10 +02:00
Valeriano Manassero
9b6372d730 Fixed: redis svc name creation 2023-04-14 12:10:55 +02:00
Valeriano Manassero
25af4a4d8f Changed: remove enterprise features (#204) 2023-04-13 17:44:58 +02:00
Valeriano Manassero
da2fb44479 Check compatibility with k8s 1.27 (#203)
* Fixed: typo

* Added: k8s 1.27

* Changed: bump up version

* Changed: actions versions bump up

* Fixed: gh action usage

* Fixed: deep chackout
2023-04-12 09:07:11 +02:00
Valeriano Manassero
d1f46dac7a Fix missing events permissions (#202)
* Added: events

* Changed: bump up version
2023-04-04 07:22:06 +02:00
Valeriano Manassero
02163e3779 Refactor affinity agent (#201)
* Changed: refactor affinity section

* Changed: bump up version
2023-03-30 22:45:21 +02:00
Valeriano Manassero
01f1b8703d Fix indentation nodeaffinity (#200)
* Fixed: indentation

* Changed: bump up version
2023-03-30 15:16:49 +02:00
Valeriano Manassero
dad921e562 Affinity patch (#199)
* Fixed: affiniti indentation

* Changed: bump up version
2023-03-30 14:26:33 +02:00
Valeriano Manassero
9be7ad40c0 Enterprise backofflimit (#198)
* Fixed: typo

* Fixed: backoff limit

* Changed: bump up version

* Changed: helm docs update
2023-03-30 11:56:57 +02:00
Valeriano Manassero
4f1cebab11 Upgrade app to 1.10 (#197)
* Changed: image update to 1.10

* Changed: bump up version
2023-03-29 11:54:57 +02:00
Valeriano Manassero
870338ebff 195 missing initcontainers section in agentk8sglue configmapyaml (#196)
* Added: init-container

* Changed: bump up version
2023-03-24 08:14:52 +01:00
Valeriano Manassero
27b52fa5b3 193 after upgrade to the new chart version error (#194)
* Fixed: agent selector label

* Changed: version bump
2023-03-20 15:16:23 +01:00
Valeriano Manassero
4e3169c033 flow for charts changes only (#192) 2023-03-20 11:59:15 +01:00
Valeriano Manassero
70f6544ad7 temporarily force release (#191) 2023-03-20 11:57:57 +01:00
Valeriano Manassero
5f8cc597ad Update release.yaml (#190) 2023-03-20 11:51:44 +01:00
Valeriano Manassero
cbc1239d10 Serving 1.0.0 refactoring (#189)
* Changed: use dep charts

* Changed: improved ingresses

* Changed: naming management

* Fixed: naming

* Fixed: disable kubestats for prom

* Added: dependencies

* Fixed: typos
2023-03-20 11:49:42 +01:00
Valeriano Manassero
957b7b2423 Fix full name and nonroot pod template (#188)
* Fixed: typo

* Added: /tmp env var

* Changed: use fullname

* Fixed: fullname usage
2023-03-20 09:19:43 +01:00
Valeriano Manassero
6d9771be41 Improve informations on README (#186)
* Changed: docs sections

* Added: comment on top

* Changed: version bump
2023-03-16 13:40:02 +01:00
Valeriano Manassero
a69530d07a Update dependency charts (#184)
* Changed: update dependency charts

* Changed: update values for dependencies

* Added: major release update instructions

* Changed: version update

* Added: dep repos

* Changed: improved securityContexts

* Added: security context for enterprise apps

* Changed: agent split securityContexts

* Added: custom start scripts for apps

* Fixed: missing description

* Changed: updated images

* Added: non-privileged/non-root configs

* Fixed: title level

* CHanged: changelog update

* Added: global registry setting

* Added: services annotations

* Fixed: non-root enterprise reference
2023-03-16 08:42:27 +01:00
Valeriano Manassero
e1fb190b1f Update inactive-issues.yaml (#185) 2023-03-15 11:54:59 +01:00
Valeriano Manassero
e4f9cbfe8e Apps additional rolebindings (#182)
* Added: additional rolebindings

* Changed: bump up version
2023-03-09 12:43:05 +01:00
Valeriano Manassero
a9d57db3a8 Force agent upgrade apps (#181)
* Fixed: force agent update

* Changed: bump up version
2023-03-09 11:28:12 +01:00
Valeriano Manassero
08b92ba622 Fix apps baseimage (#180)
* Fixed: apps base image

* Changed: bump up version
2023-03-09 08:19:30 +01:00
Valeriano Manassero
5b77cf41c2 Add external clusterrolebinding and rolebinding support (#179)
* Added: external rb and crb support

* Changed: bump up version
2023-03-07 13:09:30 +01:00
Valeriano Manassero
a6db8b4262 Fix init container waits forever pinging a mongodb in production config (#178)
* Fixed: hostname healthcheck for mongodb

* Changed: bump up version
2023-03-07 08:19:22 +01:00
Valeriano Manassero
dd4d8bf086 Filemount apps agent (#176)
* Added: filemounts support for apps agent

* Changed: bump up version
2023-03-06 14:23:50 +01:00
Valeriano Manassero
bf959d2f70 Update apps agent (#175)
* Changed: apps agent version bump

* Changed: chart version bump
2023-03-03 14:09:03 +01:00
Valeriano Manassero
340d261f11 Fixed: openshift examples (#172) 2023-02-20 14:50:48 +01:00
Valeriano Manassero
e1fcc5b466 Enterprise create queue (#171)
* Fixed: typo in env example

* Added: create queues switch

* Added: force configuration file mount

* Changed: bump version

* Fixed: helm docs
2023-02-20 13:52:36 +01:00
pollfly
013734c184 edits (#168) 2023-02-16 13:25:00 +01:00
Valeriano Manassero
fded7aa5b4 165 clearml agent priorityclassname in pod template (#166)
* Added: priorityclass name

* Changed: bump up version
2023-02-16 09:39:23 +01:00
Valeriano Manassero
5540188db1 Add job support for task pod (#162)
* Added: task as job support

* Added: template generator

* Fixed: typo

* Changed: bump version

* Added: changelog reference

* Fixed: include function name

* Fixed: checksum generator

* Added: nindent

* Added: changelog item

* Fixed: job env var switch

* Fixed: double Restart policy removed

* Fixed: job template apiVersion
2023-02-15 15:27:59 +01:00
Valeriano Manassero
1f23bcf7ca 160 fileserver doesnt have an option to be with ephemeral storage (#164)
* Added: fileserver emptyDir support

* Changed: bump up version
2023-02-14 16:31:27 +01:00
Valeriano Manassero
3075f5e280 157 improve documentation (#159)
* Changed: updated installation guide

* Fixed: typo in copy and paste

* Changed: updated install guide

* Fixed: use relative path
2023-02-14 08:44:04 +01:00
Valeriano Manassero
97550c720f Fix cookiename availability (#158)
* Fixed: cookieName availability

* Changed: bump up version
2023-02-14 08:42:26 +01:00
Valeriano Manassero
a29a144119 Changed: redis cluster configuration for production (#156) 2023-02-13 12:22:01 +01:00
Valeriano Manassero
a4f77c624d Create inactive-issues.yaml 2023-02-13 08:58:08 +01:00
Valeriano Manassero
dd1c201eeb Avoid collisions in internal helper variable naming (#154)
* Fixed: helper variable rename to avoid collisions

* Changed: bump version
2023-02-13 08:17:53 +01:00
Valeriano Manassero
7995fc8441 Add external multihost elasticsearch support (#150)
* Changed: elasticsearch connstring creation

* Changed: elasticsearch connstring creation

* Changed: bump up version
2023-02-09 10:29:00 +01:00
Valeriano Manassero
99903085cd Fix existing secret reference (#149)
* Fixed: existingSecret reference

* Changed: bump version

* Changed: bump up version
2023-02-09 10:11:03 +01:00
Valeriano Manassero
9fc2b7ddda Fix existing secret apiserver (#148)
* Fixed: missing brackets

* Changed: bump vesion

* Fixed: trailing space in changelog
2023-02-08 14:20:25 +01:00
Valeriano Manassero
c7b3a28989 146 agentadd affinity config (#147)
* Added: affinity parameter

* Changed: bump version
2023-02-02 12:20:06 +01:00
Valeriano Manassero
12baef0d75 fixed: typos (#145) 2023-02-02 11:50:11 +01:00
Valeriano Manassero
72916e171a Added: specific platform configurations (#144) 2023-01-31 09:25:53 +01:00
Valeriano Manassero
126f313cdf Add agent pod securitycontext (#143)
* Added: securityContext for agent

* Changed: bump up version

* Added: support for k8s 1.26
2023-01-31 09:16:25 +01:00
Valeriano Manassero
9aa1997ebd 141 apiserver init check improvements (#142)
* Added: check also redis and mongo before starting apiserver

* Changed: bump version
2023-01-30 12:44:41 +01:00
Valeriano Manassero
db325a95a0 Fileserver existing pvc support (#140)
* Added: support for existing fileserver PVC

* Changed: bump up version

* Changed: changelog update
2023-01-25 17:12:54 +01:00
Valeriano Manassero
9e97c03b5f Fix override url (#139)
* Fixed: url override generation

* Changed: bump up version

* Changed: supported k8s versions

* Changed: changelog update
2023-01-25 16:34:28 +01:00
Valeriano Manassero
16506130ba Changed: updated version references (#138) 2023-01-25 16:16:23 +01:00
Valeriano Manassero
e2d60312d3 Fix enterprise apps deployment (#137)
* Fixed: apps deployment

* Changed: version bump
2023-01-24 13:24:15 +01:00
Valeriano Manassero
7c3ed7eb72 Fix external mongodb connstring (#135)
* Changed: maongodb.enabled check not needed

* Changed: external MongoDB connection string

* Changed: bump up version

* Added: artifacthub changelog annotation
2023-01-24 09:27:42 +01:00
Valeriano Manassero
67d4b5b95d Enterprise apps sa (#134)
* Changed: don't use cluster wide access

* Changed: bump version
2023-01-20 10:24:34 +01:00
Valeriano Manassero
832090a791 Configurable securitycontext (#133)
* Added: configurable securityContext

* Changed: bump up version

* Changed: bump up version
2023-01-19 15:00:22 +01:00
Valeriano Manassero
e1049fa0ab Ingressclassname (#132)
* Added: ingressclassname

* Changed: bump up version
2023-01-19 07:48:30 +01:00
Valeriano Manassero
5f62daac0f Existing resource for additionalconfigs (#130)
* Added: additionalConfigs reference for existing resurce

* Changed: version bump
2023-01-18 13:34:29 +01:00
Valeriano Manassero
cdcd35c224 Enterprise 3.15.3 (#129)
* Changed: enterprise version bump

* Changed: version bump
2023-01-16 16:46:26 +01:00
Valeriano Manassero
3fd3f30030 Enterprise override tag (#127)
* Added: override for enterprise  image tag

* Changed: version bump

* Added: enterprise image tage overrides

* Changed: bump up version
2023-01-12 09:12:19 +01:00
Valeriano Manassero
bdea0e778b Fix nodeport (#126)
* fixed: agent nodeSelector

* Changed: version bump
2023-01-12 08:21:38 +01:00
Valeriano Manassero
1ea09e63e5 Fix fileserver pvc class (#125)
* Fixed: fileserver custom storageclass

* Changed: version bump
2023-01-10 16:48:06 +01:00
Valeriano Manassero
1cc3018ef3 Fix enterprise secret generation (#124)
* Fixed: secret reference

* Changed: bump up version
2023-01-09 16:05:08 +01:00
Valeriano Manassero
3b689bf051 Various fixes after major releases (#123)
* Fixed: env vars

* Changed: version bump

* Fixed: config path

* Fixed: queues generation

* Fixed: typo

* Fixed: no default queue set

* Fixed: enterprise only sec creds

* Fixed: typo
2023-01-05 11:52:53 +01:00
239 changed files with 1973 additions and 16203 deletions

.github/ISSUE_TEMPLATE/bug_report.yml (new file, 47 lines)

@@ -0,0 +1,47 @@
name: Bug Report
description: Create a report to help us improve
title: "[name of the chart e.g. clearml-agent] Issue Title"
labels: [bug]
assignees:
  - filippo-clearml
body:
  - type: markdown
    attributes:
      value: |
        Thanks for taking the time to fill out this bug report! Please be cautious with the sensitive information/logs while filing the issue.
  - type: textarea
    id: desc
    attributes:
      label: Describe the bug a clear and concise description of what the bug is.
    validations:
      required: true
  - type: input
    id: helm-version
    attributes:
      label: What's your helm version?
      description: Enter the output of `$ helm version`
      placeholder: Copy paste the entire output of the above
    validations:
      required: true
  - type: input
    id: kubectl-version
    attributes:
      label: What's your kubectl version?
      description: Enter the output of `$ kubectl version`
    validations:
      required: true
  - type: input
    id: chart-version
    attributes:
      label: What's the chart version?
      description: Enter the version of the chart that you encountered this bug.
    validations:
      required: true
  - type: textarea
    id: changed-values
    attributes:
      label: Enter the changed values of values.yaml?
      description: Please enter only values which differ from the defaults. Enter `NONE` if nothing's changed.
      placeholder: 'key: value'
    validations:
      required: false

.github/ISSUE_TEMPLATE/config.yml (new file, 1 line)

@@ -0,0 +1 @@
blank_issues_enabled: false


@@ -0,0 +1,40 @@
name: Feature request
description: Suggest an idea for this project
title: "[name of the chart e.g. clearml-agent] Issue Title"
labels: [enhancement]
assignees:
  - filippo-clearml
body:
  - type: markdown
    attributes:
      value: |
        Thanks for taking the time to fill out this bug report!
  - type: textarea
    id: desc
    attributes:
      label: Is your feature request related to a problem ?
      description: Give a clear and concise description of what the problem is.
      placeholder: ex. I'd like to have [...]
    validations:
      required: true
  - type: textarea
    id: prop-solution
    attributes:
      label: Describe the solution you'd like.
      description: A clear and concise description of what you want to happen.
    validations:
      required: true
  - type: textarea
    id: alternatives
    attributes:
      label: Describe alternatives you've considered.
      description: A clear and concise description of any alternative solutions or features you've considered. If nothing, please enter `NONE`
    validations:
      required: true
  - type: textarea
    id: additional-ctxt
    attributes:
      label: Additional context.
      description: Add any other context or screenshots about the feature request here.
    validations:
      required: false

.github/PULL_REQUEST_TEMPLATE.md (new file, 18 lines)

@@ -0,0 +1,18 @@
**What this PR does / why we need it**:

**Checklist**

- [ ] Reviewed the [`CONTRIBUTING.md`](https://github.com/clearml/clearml-helm-charts/blob/main/CONTRIBUTING.md#pull-requests) guide (**required**)
- [ ] Verify the work you plan to merge addresses an existing [issue](https://github.com/clearml/clearml-helm-charts/issues) (If not, open a new one) (**required**)
- [ ] Check your branch with `helm lint` (**required**)
- [ ] Update `version` in `Chart.yaml` according [semver](https://semver.org/) rules (**required**)
- [ ] Substitute `annotations` section in `Chart.yaml` annotating implementations (useful for Artifecthub changelog) (**required**)
- [ ] Update chart README using [helm-docs](https://github.com/norwoodj/helm-docs) (**required**)

**Which issue(s) this PR fixes**:

Fixes #<issue number>

**Special notes for your reviewer**:
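The checklist's `version` bump can be sanity-checked before pushing. A minimal sketch in plain bash, assuming unadorned MAJOR.MINOR.PATCH versions (the version string here is a made-up example; in practice it would be read from `Chart.yaml`):

```shell
#!/bin/bash
# Validate that a chart version follows plain MAJOR.MINOR.PATCH semver.
# Pre-release/build suffixes are out of scope for this sketch.
version="7.5.1"   # hypothetical value; real checks would parse Chart.yaml
if [[ "$version" =~ ^[0-9]+\.[0-9]+\.[0-9]+$ ]]; then
  echo "valid semver: $version"
else
  echo "invalid semver: $version"
fi
```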

View File

@@ -1,7 +1,12 @@
-#!/bin/bash
+#!/bin/bash -xe
 CHART_DIRS="$(git diff --find-renames --name-only "$(git rev-parse --abbrev-ref HEAD)" remotes/origin/main -- 'charts' | grep '[cC]hart.yaml' | sed -e 's#/[Cc]hart.yaml##g')"
-HELM_DOCS_VERSION="1.11.0"
+if [[ -z "$CHART_DIRS" ]]; then
+  echo "No Chart.yaml changes detected, aborting helm-docs"
+  exit 1
+fi
+HELM_DOCS_VERSION="1.11.3"
 curl --silent --show-error --fail --location --output /tmp/helm-docs.tar.gz https://github.com/norwoodj/helm-docs/releases/download/v"${HELM_DOCS_VERSION}"/helm-docs_"${HELM_DOCS_VERSION}"_Linux_x86_64.tar.gz
 tar -xf /tmp/helm-docs.tar.gz helm-docs
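The `CHART_DIRS` pipeline in the script above maps changed files down to chart directories. A self-contained sketch, with an invented file list standing in for the real `git diff --name-only` output:

```shell
#!/bin/bash
# Stand-in for `git diff --name-only` output: changed files across two charts.
sample="charts/clearml/Chart.yaml
charts/clearml/values.yaml
charts/clearml-agent/Chart.yaml"

# Keep only Chart.yaml paths, then strip the trailing /Chart.yaml to get the
# chart directories (same grep/sed pipeline as the script).
CHART_DIRS="$(printf '%s\n' "$sample" | grep '[cC]hart.yaml' | sed -e 's#/[Cc]hart.yaml##g')"
echo "$CHART_DIRS"

if [[ -z "$CHART_DIRS" ]]; then
  echo "No Chart.yaml changes detected, aborting helm-docs"
  exit 1
fi
```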

View File

@@ -11,7 +11,9 @@ jobs:
     runs-on: ubuntu-latest
     steps:
       - name: Checkout
-        uses: actions/checkout@v1
+        uses: actions/checkout@v4.1.7
+        with:
+          fetch-depth: 0
       - name: Run helm-docs
         run: .github/helm-docs.sh
   install-chart:
@@ -22,27 +24,43 @@ jobs:
     strategy:
       matrix:
         k8s:
-          - v1.22.13
-          - v1.23.10
-          - v1.24.4
-          - v1.25.0
+          - v1.29.8
+          - v1.30.4
+          - v1.31.0
     steps:
       - name: Checkout
-        uses: actions/checkout@v1
+        uses: actions/checkout@v4.1.7
+        with:
+          fetch-depth: 0
       - name: Create kind ${{ matrix.k8s }} cluster
-        uses: helm/kind-action@v1.3.0
+        uses: helm/kind-action@v1.10.0
         with:
          node_image: kindest/node:${{ matrix.k8s }}
       - name: Set up chart-testing
-        uses: helm/chart-testing-action@v2.2.1
+        uses: helm/chart-testing-action@v2.6.1
         with:
          version: v3.8.0
       - name: Add bitnami repo
         run: helm repo add bitnami https://charts.bitnami.com/bitnami
+      - name: Add elastic repo
+        run: helm repo add elastic https://helm.elastic.co
+      - name: Add prometheus repo
+        run: helm repo add prometheus https://prometheus-community.github.io/helm-charts
+      - name: Add grafana repo
+        run: helm repo add grafana https://grafana.github.io/helm-charts
       - name: Run chart-testing (list-changed)
         id: list-changed
         run: |
-          changed=$(ct list-changed --chart-dirs=charts --target-branch=main)
+          changed=$(ct list-changed --chart-dirs charts --target-branch main)
           if [[ -n "$changed" ]]; then
-            echo "::set-output name=changed::true"
-            echo "::set-output name=changed_charts::\"${changed//$'\n'/,}\""
+            echo "changed=true" >> "$GITHUB_OUTPUT"
+            echo "changed_charts=\"${changed//$'\n'/,}\"" >> "$GITHUB_OUTPUT"
           fi
-      - name: Run chart-testing (lint and install)
-        run: ct lint-and-install --chart-dirs=charts --target-branch=main --helm-extra-args="--timeout=15m" --charts=${{steps.list-changed.outputs.changed_charts}} --debug=true
+      - name: Inject secrets
+        run: |
+          find ./charts/*/ci/*.yaml -type f -exec sed -i "s/AGENTK8SGLUEKEY/${{ secrets.AGENTK8SGLUEKEY }}/g" {} \;
+          find ./charts/*/ci/*.yaml -type f -exec sed -i "s/AGENTK8SGLUESECRET/${{ secrets.AGENTK8SGLUESECRET }}/g" {} \;
+        if: steps.list-changed.outputs.changed == 'true'
+      - name: Run chart-testing (lint and install)
+        run: ct lint-and-install --chart-dirs charts --target-branch main --helm-extra-args "--timeout=15m" --charts=${{steps.list-changed.outputs.changed_charts}} --debug true
if: steps.list-changed.outputs.changed == 'true'
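The removed `::set-output` lines above use a deprecated workflow command; the replacement appends `key=value` pairs to the file named by `$GITHUB_OUTPUT`. The mechanism can be sketched locally in bash (a sketch only — `mktemp` stands in for the runner-provided path, and the chart names are sample assumptions):

```shell
#!/usr/bin/env bash
# Simulate the GITHUB_OUTPUT mechanism outside a runner.
GITHUB_OUTPUT="$(mktemp)"

# Sample output of `ct list-changed` (assumption for illustration).
changed="charts/clearml
charts/clearml-agent"

if [[ -n "$changed" ]]; then
  # Write step outputs the same way the updated workflow does:
  # one key=value per line, newlines in the value collapsed to commas.
  echo "changed=true" >> "$GITHUB_OUTPUT"
  echo "changed_charts=\"${changed//$'\n'/,}\"" >> "$GITHUB_OUTPUT"
fi

cat "$GITHUB_OUTPUT"
```

Later steps then read these values via `steps.<id>.outputs.changed` and `steps.<id>.outputs.changed_charts`, exactly as the `if:` conditions above do.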

.github/workflows/inactive-issues.yaml vendored Normal file

@@ -0,0 +1,22 @@
name: Close inactive issues
on:
schedule:
- cron: "30 1 * * *"
jobs:
close-issues:
runs-on: ubuntu-latest
permissions:
issues: write
pull-requests: write
steps:
- uses: actions/stale@v8.0.0
with:
days-before-issue-stale: 28
days-before-issue-close: 14
stale-issue-label: "stale"
stale-issue-message: "This issue is stale because it has been open for 4 weeks with no activity."
close-issue-message: "This issue was closed because it has been inactive for 2 weeks since being marked as stale."
days-before-pr-stale: -1
days-before-pr-close: -1
repo-token: ${{ secrets.GITHUB_TOKEN }}

View File

@@ -17,12 +17,16 @@ jobs:
run: helm repo add bitnami https://charts.bitnami.com/bitnami
- name: Add elastic repo
run: helm repo add elastic https://helm.elastic.co
- name: Add prometheus repo
run: helm repo add prometheus https://prometheus-community.github.io/helm-charts
- name: Add grafana repo
run: helm repo add grafana https://grafana.github.io/helm-charts
- name: Configure Git
run: |
git config user.name "$GITHUB_ACTOR"
git config user.email "$GITHUB_ACTOR@users.noreply.github.com"
- name: Run chart-releaser
uses: helm/chart-releaser-action@v1.2.1
uses: helm/chart-releaser-action@v1.5.0
env:
CR_TOKEN: '${{ secrets.CR_TOKEN }}'
with:

View File

@@ -3,8 +3,8 @@
:+1::tada: Firstly, we thank you for taking the time to contribute! :tada::+1:
Contribution comes in many forms:
* Reporting [issues](https://github.com/allegroai/clearml-helm-charts/issues) you've come upon
* Participating in issue discussions in the [issue tracker](https://github.com/allegroai/clearml-helm-charts/issues) and the [ClearML community slack space](https://join.slack.com/t/allegroai-trains/shared_invite/enQtOTQyMTI1MzQxMzE4LTY5NTUxOTY1NmQ1MzQ5MjRhMGRhZmM4ODE5NTNjMTg2NTBlZGQzZGVkMWU3ZDg1MGE1MjQxNDEzMWU2NmVjZmY)
* Reporting [issues](https://github.com/clearml/clearml-helm-charts/issues) you've come upon
* Participating in issue discussions in the [issue tracker](https://github.com/clearml/clearml-helm-charts/issues) and the [ClearML community slack space](https://joinslack.clear.ml)
* Suggesting new features or enhancements
* Implementing new features or fixing outstanding issues
@@ -16,7 +16,7 @@ Use your best judgment and feel free to propose changes to this document in a pu
By following these guidelines, you help maintainers and the community understand your report, reproduce the behavior, and find related reports.
Before reporting an issue, please check whether it already appears [here](https://github.com/allegroai/clearml-helm-charts/issues).
Before reporting an issue, please check whether it already appears [here](https://github.com/clearml/clearml-helm-charts/issues).
If it does, join the on-going discussion instead.
**Note**: If you find a **Closed** issue that may be the same issue which you are currently experiencing,
@@ -50,11 +50,14 @@ Enhancement suggestions are tracked as GitHub issues. After you determine which
Before you submit a new PR:
* Verify the work you plan to merge addresses an existing [issue](https://github.com/allegroai/clearml-helm-charts/issues) (If not, open a new one)
* Check related discussions in the [ClearML slack community](https://join.slack.com/t/allegroai-trains/shared_invite/enQtOTQyMTI1MzQxMzE4LTY5NTUxOTY1NmQ1MzQ5MjRhMGRhZmM4ODE5NTNjMTg2NTBlZGQzZGVkMWU3ZDg1MGE1MjQxNDEzMWU2NmVjZmY) (Or start your own discussion on the `#clearml-dev` channel)
* Make sure your code conforms to the ClearML coding standards by running:
`flake8 --max-line-length=120 --statistics --show-source --extend-ignore=E501 ./clearml*`
* Verify the work you plan to merge addresses an existing [issue](https://github.com/clearml/clearml-helm-charts/issues) (If not, open a new one)
* Check related discussions in the [ClearML slack community](https://joinslack.clear.ml) (or start your own discussion on the `#clearml-dev` channel)
* Check your branch with `helm lint`
* Update `version` in `Chart.yaml` according to [semver](https://semver.org/) rules
* Update the `annotations` section in `Chart.yaml` to describe the changes (useful for the Artifact Hub changelog)
* Update chart README using [helm-docs](https://github.com/norwoodj/helm-docs)
In your PR include:
* A reference to the issue it addresses
* A brief description of the approach you've taken for implementing

INSTALL.md Normal file

@@ -0,0 +1,47 @@
# ClearML Helm Charts Installation guide
## Requirements
* Set up a Kubernetes Cluster - for setting up Kubernetes on various platforms refer to the Kubernetes [getting started guide](http://kubernetes.io/docs/getting-started-guides/).
* Set up a single-node LOCAL Kubernetes on laptop/desktop - for setting up Kubernetes on your laptop/desktop, we suggest [kind](https://kind.sigs.k8s.io).
* For **Kubernetes Tanzu users** - see [prerequisites](https://github.com/clearml/clearml-helm-charts/tree/main/platform-specific-configs/tanzu)
for setting up ClearML on a Tanzu cluster
* For **Kubernetes Openshift users** - see [prerequisites](https://github.com/clearml/clearml-helm-charts/tree/main/platform-specific-configs/openshift)
for setting up ClearML on an Openshift cluster
* Install Helm - Helm is a tool for managing Kubernetes charts. Charts are packages of pre-configured Kubernetes
resources. To install Helm, refer to the [Helm install guide](https://github.com/helm/helm#install) and ensure that the `helm` binary is in the `PATH` of your shell.
## Helm Charts Installation
### Helm Repo
```bash
$ helm repo add clearml https://clearml.github.io/clearml-helm-charts
$ helm repo update
```
### ClearML Server Ecosystem
```bash
$ helm install clearml clearml/clearml
```
### ClearML Agent
A ClearML Agent always connects to a ClearML server ecosystem (by default the hosted `app.clear.ml` server; it can also be a server running on the same or a different Kubernetes cluster, or a standalone server installation).
In the ClearML UI, go to **Settings > Workspace** and click **Create New Credentials**. The dialog that pops up displays
the new credentials.
In the Helm chart `install` command below:
* Set `ACCESSKEY` to the new credentials' `access_key` value
* Set `SECRETKEY` to the new credentials' `secret_key` value
* Set `APISERVERURL` to the new credentials' `api_server` value
* Set `FILESERVERURL` to the new credentials' `files_server` value
* Set `WEBSERVERURL` to the new credentials' `web_server` value
```bash
$ helm install clearml-agent clearml/clearml-agent --set clearml.agentk8sglueKey=ACCESSKEY --set clearml.agentk8sglueSecret=SECRETKEY --set agentk8sglue.apiServerUrlReference=APISERVERURL --set agentk8sglue.fileServerUrlReference=FILESERVERURL --set agentk8sglue.webServerUrlReference=WEBSERVERURL
```
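Equivalently, the same settings can be kept in a values file instead of repeated `--set` flags (a sketch — the placeholder values are assumptions, substitute your own credentials):

```yaml
# custom_values.yaml (placeholder values, not real credentials)
clearml:
  agentk8sglueKey: "ACCESSKEY"
  agentk8sglueSecret: "SECRETKEY"
agentk8sglue:
  apiServerUrlReference: "APISERVERURL"
  fileServerUrlReference: "FILESERVERURL"
  webServerUrlReference: "WEBSERVERURL"
```

Then install with `helm install clearml-agent clearml/clearml-agent -f custom_values.yaml`.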

View File

@@ -1,12 +1,12 @@
# ClearML Helm Charts Library for Kubernetes
# ClearML Helm Charts for Kubernetes
## Auto-Magical Experiment Manager & Version Control for AI
Helm charts provided by [Allegro AI](https://clear.ml), ready to launch on Kubernetes using [Kubernetes Helm](https://github.com/helm/helm).
Helm charts provided by [ClearML](https://clear.ml), ready to launch on Kubernetes using [Kubernetes Helm](https://github.com/helm/helm).
## Introduction
The **clearml-server** is the backend service infrastructure for [ClearML](https://github.com/allegroai/clearml).
The **clearml-server** is the backend service infrastructure for [ClearML](https://github.com/clearml/clearml).
It allows multiple users to collaborate and manage their experiments.
By default, **ClearML** is set up to work with the **ClearML** demo server, which is open to anyone and resets periodically.
In order to host your own server, you will need to install **clearml-server** and point **ClearML** to it.
@@ -23,61 +23,44 @@ Use this repository to deploy **clearml-server** on Kubernetes clusters.
## Provided in this repository
### [All around Helm Chart](https://github.com/allegroai/clearml-helm-charts/tree/main/charts/clearml)
### [ClearML server chart](https://github.com/clearml/clearml-helm-charts/tree/main/charts/clearml)
### [ClearML agent chart](https://github.com/clearml/clearml-helm-charts/tree/main/charts/clearml-agent)
### [ClearML serving chart](https://github.com/clearml/clearml-helm-charts/tree/main/charts/clearml-serving)
## Who We Are
ClearML is supported by the team behind *allegro.ai*,
where we build deep learning pipelines and infrastructure for enterprise companies.
ClearML is supported by you :heart: and the [clear.ml](https://clear.ml) team, which helps enterprise companies build
scalable MLOps.
We built ClearML to track and control the glorious but messy process of training production-grade deep learning models.
We are committed to vigorously supporting and expanding the capabilities of ClearML.
We promise to always be backwardly compatible, making sure all your logs, data and pipelines
We promise to always be backwards compatible, making sure all your logs, data, and pipelines
will always upgrade with you.
## License
Apache License, Version 2.0, (see the [LICENSE](https://www.apache.org/licenses/LICENSE-2.0) for more information)
## Requirements
## Installation Guide
### Setup a Kubernetes Cluster
For setting up Kubernetes on various platforms refer to the Kubernetes [getting started guide](http://kubernetes.io/docs/getting-started-guides/).
### Setup a single node LOCAL Kubernetes on laptop/desktop
For setting up Kubernetes on your laptop/desktop we suggest [kind](https://kind.sigs.k8s.io).
### Install Helm
Helm is a tool for managing Kubernetes charts. Charts are packages of pre-configured Kubernetes resources.
To install Helm, refer to the [Helm install guide](https://github.com/helm/helm#install) and ensure that the `helm` binary is in the `PATH` of your shell.
## Usage
```bash
$ helm repo add allegroai https://allegroai.github.io/clearml-helm-charts
$ helm repo update
$ helm search repo allegroai
$ helm install <release-name> allegroai/<chart>
```
For installation instructions, follow related [Installation Guide](INSTALL.md).
## Documentation, Community & Support
More information in the [official documentation](https://allegro.ai/clearml/docs) and [on YouTube](https://www.youtube.com/c/ClearML).
See more information in the [official documentation](https://clear.ml/docs/latest/docs) and [on YouTube](https://www.youtube.com/c/ClearML).
If you have any questions: post on our [Slack Channel](https://join.slack.com/t/clearml/shared_invite/zt-c0t13pty-aVUZZW1TSSSg2vyIGVPBhg), or tag your questions on [stackoverflow](https://stackoverflow.com/questions/tagged/clearml) with '**[clearml](https://stackoverflow.com/questions/tagged/clearml)**' tag (*previously [trains](https://stackoverflow.com/questions/tagged/trains) tag*).
If you have any questions, post on our [Slack Channel](https://joinslack.clear.ml), or tag your questions on [stackoverflow](https://stackoverflow.com/questions/tagged/clearml) with '**[clearml](https://stackoverflow.com/questions/tagged/clearml)**' tag (*previously [trains](https://stackoverflow.com/questions/tagged/trains) tag*).
For feature requests or bug reports, please use [GitHub issues](https://github.com/allegroai/clearml-helm-charts/issues).
For feature requests or bug reports, please use [GitHub Issues](https://github.com/clearml/clearml-helm-charts/issues).
Additionally, you can always find us at *clearml@allegro.ai*
Additionally, you can always find us at *support@clear.ml*
## Contributing
**PRs are always welcomed** :heart: See more details in the ClearML [Guidelines for Contributing](https://github.com/allegroai/clearml-helm-charts/blob/main/CONTRIBUTING.md).
**PRs are always welcomed** :heart: See more details in the ClearML [Guidelines for Contributing](https://github.com/clearml/clearml-helm-charts/blob/main/CONTRIBUTING.md).
_May the force (and the goddess of learning rates) be with you!_

View File

@@ -1,19 +1,24 @@
apiVersion: v2
name: clearml-agent
description: MLOps platform
description: MLOps platform Task running agent
type: application
version: "3.1.0"
version: "5.3.2"
appVersion: "1.24"
kubeVersion: ">= 1.19.0-0 < 1.26.0-0"
kubeVersion: ">= 1.21.0-0 < 1.33.0-0"
home: https://clear.ml
icon: https://raw.githubusercontent.com/allegroai/clearml/master/docs/clearml-logo.svg
icon: https://raw.githubusercontent.com/clearml/clearml/master/docs/clearml-logo.svg
sources:
- https://github.com/allegroai/clearml-helm-charts
- https://github.com/allegroai/clearml
- https://github.com/clearml/clearml-helm-charts
- https://github.com/clearml/clearml
maintainers:
- name: valeriano-manassero
url: https://github.com/valeriano-manassero
- name: filippo-clearml
url: https://github.com/filippo-clearml
keywords:
- clearml
- "machine learning"
- mlops
- "task agent"
annotations:
artifacthub.io/changes: |
- kind: changed
description: "Support kubernetes 1.32"

View File

@@ -1,8 +1,8 @@
# ClearML Kubernetes Agent
![Version: 3.1.0](https://img.shields.io/badge/Version-3.1.0-informational?style=flat-square) ![Type: application](https://img.shields.io/badge/Type-application-informational?style=flat-square) ![AppVersion: 1.24](https://img.shields.io/badge/AppVersion-1.24-informational?style=flat-square)
![Version: 5.3.2](https://img.shields.io/badge/Version-5.3.2-informational?style=flat-square) ![Type: application](https://img.shields.io/badge/Type-application-informational?style=flat-square) ![AppVersion: 1.24](https://img.shields.io/badge/AppVersion-1.24-informational?style=flat-square)
MLOps platform
MLOps platform Task running agent
**Homepage:** <https://clear.ml>
@@ -10,56 +10,99 @@ MLOps platform
| Name | Email | Url |
| ---- | ------ | --- |
| valeriano-manassero | | <https://github.com/valeriano-manassero> |
| filippo-clearml | | <https://github.com/filippo-clearml> |
## Introduction
The **clearml-agent** is the Kubernetes agent for for [ClearML](https://github.com/allegroai/clearml).
The **clearml-agent** is the Kubernetes agent for [ClearML](https://github.com/clearml/clearml).
It allows you to schedule distributed experiments on a Kubernetes cluster.
## Add to local Helm repository
To add this chart to your local Helm repository:
```
helm repo add clearml https://clearml.github.io/clearml-helm-charts
```
# Upgrading Chart
## Upgrades / Values upgrades
Updating to the latest version of this chart can be done in two steps:
```
helm repo update
helm upgrade clearml-agent clearml/clearml-agent
```
Changing values on an existing installation can be done with:
```
helm upgrade clearml-agent clearml/clearml-agent --version <CURRENT CHART VERSION> -f custom_values.yaml
```
### Major upgrade from 3.* to 4.*
Before issuing helm upgrade:
* if using securityContexts, check the new value format in values.yaml (podSecurityContext and containerSecurityContext)
## Source Code
* <https://github.com/allegroai/clearml-helm-charts>
* <https://github.com/allegroai/clearml>
* <https://github.com/clearml/clearml-helm-charts>
* <https://github.com/clearml/clearml>
## Requirements
Kubernetes: `>= 1.19.0-0 < 1.26.0-0`
Kubernetes: `>= 1.21.0-0 < 1.33.0-0`
## Values
| Key | Type | Default | Description |
|-----|------|---------|-------------|
| agentk8sglue | object | `{"annotations":{},"apiServerUrlReference":"https://api.clear.ml","basePodTemplate":{"annotations":{},"env":[],"fileMounts":[],"hostAliases":{},"initContainers":[],"labels":{},"nodeSelector":{},"resources":{},"schedulerName":"","securityContext":{},"tolerations":[],"volumeMounts":[],"volumes":[]},"clearmlcheckCertificate":true,"containerCustomBashScript":"","customBashScript":"","debugMode":false,"defaultContainerImage":"ubuntu:18.04","extraEnvs":[],"fileMounts":[],"fileServerUrlReference":"https://files.clear.ml","image":{"repository":"allegroai/clearml-agent-k8s-base","tag":"1.24-21"},"labels":{},"nodeSelector":{},"queue":"default","replicaCount":1,"serviceExistingAccountName":"","volumeMounts":[],"volumes":[],"webServerUrlReference":"https://app.clear.ml"}` | This agent will spawn queued experiments in new pods, a good use case is to combine this with GPU autoscaling nodes. https://github.com/allegroai/clearml-agent/tree/master/docker/k8s-glue |
| agentk8sglue | object | `{"additionalClusterRoleBindings":[],"additionalRoleBindings":[],"affinity":{},"annotations":{},"apiServerUrlReference":"https://api.clear.ml","basePodTemplate":{"affinity":{},"annotations":{},"containerSecurityContext":{},"env":[],"fileMounts":[],"hostAliases":[],"initContainers":[],"labels":{},"nodeSelector":{},"podSecurityContext":{},"priorityClassName":"","resources":{},"schedulerName":"","tolerations":[],"volumeMounts":[],"volumes":[]},"clearmlcheckCertificate":true,"containerSecurityContext":{},"createQueueIfNotExists":false,"defaultContainerImage":"ubuntu:18.04","extraEnvs":[],"fileMounts":[],"fileServerUrlReference":"https://files.clear.ml","image":{"registry":"","repository":"allegroai/clearml-agent-k8s-base","tag":"1.24-21"},"initContainers":{"resources":{}},"labels":{},"nodeSelector":{},"podSecurityContext":{},"queue":"default","replicaCount":1,"resources":{},"serviceAccountAnnotations":{},"serviceExistingAccountName":"","tolerations":[],"volumeMounts":[],"volumes":[],"webServerUrlReference":"https://app.clear.ml"}` | This agent will spawn queued experiments in new pods, a good use case is to combine this with GPU autoscaling nodes. https://github.com/clearml/clearml-agent/tree/master/docker/k8s-glue |
| agentk8sglue.additionalClusterRoleBindings | list | `[]` | additional existing ClusterRoleBindings |
| agentk8sglue.additionalRoleBindings | list | `[]` | additional existing RoleBindings |
| agentk8sglue.affinity | object | `{}` | affinity setup for Agent pod (example in values.yaml comments) |
| agentk8sglue.annotations | object | `{}` | annotations setup for Agent pod (example in values.yaml comments) |
| agentk8sglue.apiServerUrlReference | string | `"https://api.clear.ml"` | Reference to Api server url |
| agentk8sglue.basePodTemplate | object | `{"annotations":{},"env":[],"fileMounts":[],"hostAliases":{},"initContainers":[],"labels":{},"nodeSelector":{},"resources":{},"schedulerName":"","securityContext":{},"tolerations":[],"volumeMounts":[],"volumes":[]}` | base template for pods spawned to consume ClearML Task |
| agentk8sglue.basePodTemplate | object | `{"affinity":{},"annotations":{},"containerSecurityContext":{},"env":[],"fileMounts":[],"hostAliases":[],"initContainers":[],"labels":{},"nodeSelector":{},"podSecurityContext":{},"priorityClassName":"","resources":{},"schedulerName":"","tolerations":[],"volumeMounts":[],"volumes":[]}` | base template for pods spawned to consume ClearML Task |
| agentk8sglue.basePodTemplate.affinity | object | `{}` | affinity setup for pods spawned to consume ClearML Task |
| agentk8sglue.basePodTemplate.annotations | object | `{}` | annotations setup for pods spawned to consume ClearML Task (example in values.yaml comments) |
| agentk8sglue.basePodTemplate.containerSecurityContext | object | `{}` | securityContext setup for containers spawned to consume ClearML Task (example in values.yaml comments) |
| agentk8sglue.basePodTemplate.env | list | `[]` | environment variables for pods spawned to consume ClearML Task (example in values.yaml comments) |
| agentk8sglue.basePodTemplate.fileMounts | list | `[]` | file definition for pods spawned to consume ClearML Task (example in values.yaml comments) |
| agentk8sglue.basePodTemplate.hostAliases | object | `{}` | hostAliases setup for pods spawned to consume ClearML Task (example in values.yaml comments) |
| agentk8sglue.basePodTemplate.hostAliases | list | `[]` | hostAliases setup for pods spawned to consume ClearML Task (example in values.yaml comments) |
| agentk8sglue.basePodTemplate.initContainers | list | `[]` | initContainers definition for pods spawned to consume ClearML Task (example in values.yaml comments) |
| agentk8sglue.basePodTemplate.labels | object | `{}` | labels setup for pods spawned to consume ClearML Task (example in values.yaml comments) |
| agentk8sglue.basePodTemplate.nodeSelector | object | `{}` | nodeSelector setup for pods spawned to consume ClearML Task (example in values.yaml comments) |
| agentk8sglue.basePodTemplate.podSecurityContext | object | `{}` | securityContext setup for pods spawned to consume ClearML Task (example in values.yaml comments) |
| agentk8sglue.basePodTemplate.priorityClassName | string | `""` | priorityClassName setup for pods spawned to consume ClearML Task |
| agentk8sglue.basePodTemplate.resources | object | `{}` | resources declaration for pods spawned to consume ClearML Task (example in values.yaml comments) |
| agentk8sglue.basePodTemplate.schedulerName | string | `""` | schedulerName setup for pods spawned to consume ClearML Task |
| agentk8sglue.basePodTemplate.securityContext | object | `{}` | securityContext setup for pods spawned to consume ClearML Task (example in values.yaml comments) |
| agentk8sglue.basePodTemplate.tolerations | list | `[]` | tolerations setup for pods spawned to consume ClearML Task (example in values.yaml comments) |
| agentk8sglue.basePodTemplate.volumeMounts | list | `[]` | volume mounts definition for pods spawned to consume ClearML Task (example in values.yaml comments) |
| agentk8sglue.basePodTemplate.volumes | list | `[]` | volumes definition for pods spawned to consume ClearML Task (example in values.yaml comments) |
| agentk8sglue.clearmlcheckCertificate | bool | `true` | Check certificate validity for every UrlReference below. |
| agentk8sglue.containerCustomBashScript | string | `""` | Custom Bash script for the Task Pods ran by Glue Agent |
| agentk8sglue.debugMode | bool | `false` | Enable Debugging logs for Agent pod |
| agentk8sglue.containerSecurityContext | object | `{}` | container securityContext setup for Agent pod (example in values.yaml comments) |
| agentk8sglue.createQueueIfNotExists | bool | `false` | if the ClearML queue does not exist, it will be created when this value is set to true |
| agentk8sglue.defaultContainerImage | string | `"ubuntu:18.04"` | default container image for ClearML Task pod |
| agentk8sglue.extraEnvs | list | `[]` | Extra Environment variables for Glue Agent |
| agentk8sglue.fileMounts | list | `[]` | file definition for Glue Agent (example in values.yaml comments) |
| agentk8sglue.fileServerUrlReference | string | `"https://files.clear.ml"` | Reference to File server url |
| agentk8sglue.image | object | `{"repository":"allegroai/clearml-agent-k8s-base","tag":"1.24-21"}` | Glue Agent image configuration |
| agentk8sglue.image | object | `{"registry":"","repository":"allegroai/clearml-agent-k8s-base","tag":"1.24-21"}` | Glue Agent image configuration |
| agentk8sglue.initContainers | object | `{"resources":{}}` | Glue Agent pod initContainers configs |
| agentk8sglue.initContainers.resources | object | `{}` | Glue Agent initContainers resources |
| agentk8sglue.labels | object | `{}` | labels setup for Agent pod (example in values.yaml comments) |
| agentk8sglue.nodeSelector | object | `{}` | nodeSelector setup for Agent pod (example in values.yaml comments) |
| agentk8sglue.queue | string | `"default"` | ClearML queue this agent will consume |
| agentk8sglue.podSecurityContext | object | `{}` | pod securityContext setup for Agent pod (example in values.yaml comments) |
| agentk8sglue.queue | string | `"default"` | ClearML queue this agent will consume. Multiple queues can be specified with the following format: queue1,queue2,queue3 |
| agentk8sglue.replicaCount | int | `1` | Glue Agent number of pods |
| agentk8sglue.serviceExistingAccountName | string | `""` | if set, don't create a serviceAccountName but use defined existing one |
| agentk8sglue.resources | object | `{}` | Glue Agent pod resources |
| agentk8sglue.serviceAccountAnnotations | object | `{}` | Add the provided map to the annotations for the ServiceAccount resource created by this chart |
| agentk8sglue.serviceExistingAccountName | string | `""` | If set, do not create a serviceAccountName and use the existing one with the provided name |
| agentk8sglue.tolerations | list | `[]` | tolerations setup for Agent pod (example in values.yaml comments) |
| agentk8sglue.volumeMounts | list | `[]` | volume mounts definition for Glue Agent (example in values.yaml comments) |
| agentk8sglue.volumes | list | `[]` | volumes definition for Glue Agent (example in values.yaml comments) |
| agentk8sglue.webServerUrlReference | string | `"https://app.clear.ml"` | Reference to Web server url |
@@ -69,19 +112,8 @@ Kubernetes: `>= 1.19.0-0 < 1.26.0-0`
| clearml.clearmlConfig | string | `"sdk {\n}"` | ClearML configuration file |
| clearml.existingAgentk8sglueSecret | string | `""` | If this is set, chart will not generate a secret but will use what is defined here |
| clearml.existingClearmlConfigSecret | string | `""` | If this is set, chart will not generate a secret but will use what is defined here |
| enterpriseFeatures | object | `{"applyVaultEnvVars":true,"enabled":false,"maxPods":10,"monitoredResources":{"maxResources":0,"maxResourcesFieldName":"resources|limits|nvidia.com/gpu","minResourcesFieldName":"resources|limits|nvidia.com/gpu"},"queues":{"default":{"templateOverrides":{}}},"serviceAccountClusterAccess":false,"useOwnerToken":true}` | Enterprise features (work only with an Enterprise license) |
| enterpriseFeatures.applyVaultEnvVars | bool | `true` | push env vars from Clear.ML Vault to task pods |
| enterpriseFeatures.enabled | bool | `false` | Enable/Disable Enterprise features |
| enterpriseFeatures.maxPods | int | `10` | maximum concurrent consume ClearML Task pod |
| enterpriseFeatures.monitoredResources | object | `{"maxResources":0,"maxResourcesFieldName":"resources|limits|nvidia.com/gpu","minResourcesFieldName":"resources|limits|nvidia.com/gpu"}` | GPU resource general counters |
| enterpriseFeatures.monitoredResources.maxResources | int | `0` | Maximum resources counter |
| enterpriseFeatures.monitoredResources.maxResourcesFieldName | string | `"resources|limits|nvidia.com/gpu"` | Field name used by Agent to count maximum resources |
| enterpriseFeatures.monitoredResources.minResourcesFieldName | string | `"resources|limits|nvidia.com/gpu"` | Field name used by Agent to count minimum resources |
| enterpriseFeatures.queues | object | `{"default":{"templateOverrides":{}}}` | ClearML queues and related template OVERRIDES that this agent will consume |
| enterpriseFeatures.queues.default | object | `{"templateOverrides":{}}` | name of the queue that will be used for this template |
| enterpriseFeatures.queues.default.templateOverrides | object | `{}` | overrides of the base template for this queue (must be declared even if empty!) |
| enterpriseFeatures.serviceAccountClusterAccess | bool | `false` | flag allowing the service account to access every namespace |
| enterpriseFeatures.useOwnerToken | bool | `true` | Agent must use owner Token |
| global | object | `{"imageRegistry":"docker.io"}` | Global parameters section |
| global.imageRegistry | string | `"docker.io"` | Images registry |
| imageCredentials | object | `{"email":"someone@host.com","enabled":false,"existingSecret":"","password":"pwd","registry":"docker.io","username":"someone"}` | Private image registry configuration |
| imageCredentials.email | string | `"someone@host.com"` | Email |
| imageCredentials.enabled | bool | `false` | Use private authentication mode |
@@ -89,36 +121,10 @@ Kubernetes: `>= 1.19.0-0 < 1.26.0-0`
| imageCredentials.password | string | `"pwd"` | Registry password |
| imageCredentials.registry | string | `"docker.io"` | Registry name |
| imageCredentials.username | string | `"someone"` | Registry username |
| sessions | object | `{"dynamicSvcs":false,"externalIP":"0.0.0.0","maxServices":20,"portModeEnabled":false,"setInteractiveQueuesTag":true,"startingPort":30000,"svcAnnotations":{},"svcType":"NodePort"}` | Sessions internal service configuration |
| sessions.dynamicSvcs | bool | `false` | Enable/Disable dynamic svc for sessions pods |
| sessions | object | `{"externalIP":"0.0.0.0","maxServices":20,"portModeEnabled":false,"startingPort":30000,"svcAnnotations":{},"svcType":"NodePort"}` | Sessions internal service configuration |
| sessions.externalIP | string | `"0.0.0.0"` | External IP sessions clients can connect to |
| sessions.maxServices | int | `20` | maximum number of NodePorts exposed |
| sessions.portModeEnabled | bool | `false` | Enable/Disable sessions port mode. WARNING: only one Agent deployment can have this set to true |
| sessions.setInteractiveQueuesTag | bool | `true` | set interactive queue tags |
| sessions.startingPort | int | `30000` | starting range of exposed NodePorts |
| sessions.svcAnnotations | object | `{}` | specific annotations for session services |
| sessions.svcType | string | `"NodePort"` | service type ("NodePort" or "ClusterIP" or "LoadBalancer") |
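As an illustration of the sessions values in the table above, enabling port mode on a single Agent deployment might look like the following sketch (the IP and port range are assumptions chosen for illustration):

```yaml
sessions:
  portModeEnabled: true      # WARNING: only one Agent deployment may set this
  svcType: "NodePort"
  externalIP: "203.0.113.10" # assumption: an IP sessions clients can reach
  startingPort: 30000        # first NodePort in the exposed range
  maxServices: 20            # at most 20 NodePorts exposed
  svcAnnotations: {}
```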
# Upgrading Chart
### From v1.x to v2.x
Chart 1.x assumed that all mounted volumes were PVCs. Versions 2.x and later allow more flexibility and inject the YAML from podTemplate.volumes and podTemplate.volumeMounts directly.
v1.x
```
volumes:
- name: "yourvolume"
path: "/yourpath"
```
v2.x
```
volumes:
- name: "yourvolume"
persistentVolumeClaim:
claimName: "yourvolume"
volumeMounts:
- name: "yourvolume"
mountPath: "/yourpath"
```

View File

@@ -11,35 +11,42 @@
## Introduction
The **clearml-agent** is the Kubernetes agent for for [ClearML](https://github.com/allegroai/clearml).
The **clearml-agent** is the Kubernetes agent for [ClearML](https://github.com/clearml/clearml).
It allows you to schedule distributed experiments on a Kubernetes cluster.
## Add to local Helm repository
To add this chart to your local Helm repository:
```
helm repo add clearml https://clearml.github.io/clearml-helm-charts
```
# Upgrading Chart
## Upgrades/ Values upgrades
Updating to the latest version of this chart can be done in two steps:
```
helm repo update
helm upgrade clearml-agent clearml/clearml-agent
```
Changing values on an existing installation can be done with:
```
helm upgrade clearml-agent clearml/clearml-agent --version <CURRENT CHART VERSION> -f custom_values.yaml
```
### Major upgrade from 3.* to 4.*
Before issuing helm upgrade:
* if using security contexts, check the new values format in values.yaml (podSecurityContext and containerSecurityContext)
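As a sketch of that change (assuming the 3.x chart took a single securityContext block; the exact old key name may differ in your values file):

```
# 3.x (old form, illustrative):
# agentk8sglue:
#   securityContext: {}

# 4.x (new form): pod- and container-level contexts are split
agentk8sglue:
  podSecurityContext: {}        # rendered into the Pod's spec.securityContext
  containerSecurityContext: {}  # rendered into each container's securityContext
```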
{{ template "chart.sourcesSection" . }}
{{ template "chart.requirementsSection" . }}
{{ template "chart.valuesSection" . }}
# Upgrading Chart
### From v1.x to v2.x
Chart 1.x assumed that all mounted volumes were PVCs. Versions >= 2.x allow for more flexibility and inject the YAML from podTemplate.volumes and podTemplate.volumeMounts directly.
v1.x
```
volumes:
- name: "yourvolume"
path: "/yourpath"
```
v2.x
```
volumes:
- name: "yourvolume"
persistentVolumeClaim:
claimName: "yourvolume"
volumeMounts:
- name: "yourvolume"
mountPath: "/yourpath"
```


@@ -1,23 +1,56 @@
{{/*
Expand the name of the chart.
*/}}
{{- define "clearml.name" -}}
{{- .Release.Name | trunc 59 | trimSuffix "-" }}
{{- define "clearmlAgent.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" }}
{{- end }}
{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
If release name contains chart name it will be used as a full name.
*/}}
{{- define "clearmlAgent.fullname" -}}
{{- if .Values.fullnameOverride }}
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" }}
{{- else }}
{{- $name := default .Chart.Name .Values.nameOverride }}
{{- if contains $name .Release.Name }}
{{- .Release.Name | trunc 63 | trimSuffix "-" }}
{{- else }}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" }}
{{- end }}
{{- end }}
{{- end }}
{{/*
Create chart name and version as used by the chart label.
*/}}
{{- define "clearml.chart" -}}
{{- define "clearmlAgent.chart" -}}
{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 59 | trimSuffix "-" }}
{{- end }}
{{/*
Common labels
*/}}
{{- define "clearml.labels" -}}
helm.sh/chart: {{ include "clearml.chart" . }}
{{ include "clearml.selectorLabels" . }}
{{- define "clearmlAgent.labels" -}}
helm.sh/chart: {{ include "clearmlAgent.chart" . }}
{{ include "clearmlAgent.selectorLabels" . }}
{{- if .Chart.AppVersion }}
app.kubernetes.io/version: {{ .Chart.AppVersion | quote }}
{{- end }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
{{- if $.Values.agentk8sglue.labels }}
{{ toYaml $.Values.agentk8sglue.labels }}
{{- end }}
{{- end }}
{{/*
Common labels (agentk8sglue)
*/}}
{{- define "agentk8sglue.labels" -}}
helm.sh/chart: {{ include "clearmlAgent.chart" . }}
{{ include "agentk8sglue.selectorLabels" . }}
{{- if .Chart.AppVersion }}
app.kubernetes.io/version: {{ .Chart.AppVersion | quote }}
{{- end }}
@@ -30,7 +63,7 @@ app.kubernetes.io/managed-by: {{ .Release.Service }}
{{/*
Common annotations
*/}}
{{- define "clearml.annotations" -}}
{{- define "clearmlAgent.annotations" -}}
{{- if $.Values.agentk8sglue.annotations }}
{{ toYaml $.Values.agentk8sglue.annotations }}
{{- end }}
@@ -39,8 +72,8 @@ Common annotations
{{/*
Selector labels
*/}}
{{- define "clearml.selectorLabels" -}}
app.kubernetes.io/name: {{ include "clearml.name" . }}
{{- define "clearmlAgent.selectorLabels" -}}
app.kubernetes.io/name: {{ include "clearmlAgent.fullname" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
{{- end }}
@@ -48,18 +81,34 @@ app.kubernetes.io/instance: {{ .Release.Name }}
Selector labels (agentk8sglue)
*/}}
{{- define "agentk8sglue.selectorLabels" -}}
app.kubernetes.io/name: {{ include "clearml.name" . }}
app.kubernetes.io/instance: {{ include "clearml.name" . }}
app.kubernetes.io/name: {{ include "clearmlAgent.fullname" . }}
app.kubernetes.io/instance: {{ include "clearmlAgent.fullname" . }}
{{- end }}
{{/*
Registry name
*/}}
{{- define "registryNamePrefix" -}}
{{- $registryName := "" -}}
{{- if .globalValues }}
{{- if .globalValues.imageRegistry }}
{{- $registryName = printf "%s/" .globalValues.imageRegistry -}}
{{- end -}}
{{- end -}}
{{- if .imageRegistryValue }}
{{- $registryName = printf "%s/" .imageRegistryValue -}}
{{- end -}}
{{- printf "%s" $registryName }}
{{- end }}
{{/*
Create the name of the service account to use
*/}}
{{- define "clearml.serviceAccountName" -}}
{{- define "clearmlAgent.serviceAccountName" -}}
{{- if .Values.agentk8sglue.serviceExistingAccountName }}
{{- .Values.agentk8sglue.serviceExistingAccountName }}
{{- else }}
{{- include "clearml.name" . }}-sa
{{- include "clearmlAgent.fullname" . }}-sa
{{- end }}
{{- end }}
@@ -71,15 +120,3 @@ Create secret to access docker registry
{{- printf "{\"auths\":{\"%s\":{\"username\":\"%s\",\"password\":\"%s\",\"email\":\"%s\",\"auth\":\"%s\"}}}" .registry .username .password .email (printf "%s:%s" .username .password | b64enc) | b64enc }}
{{- end }}
{{- end }}
{{/*
Create a string composed by queue names
*/}}
{{- define "agentk8sglue.queues" -}}
{{- $list := list }}
{{- range $key, $value := .Values.agentk8sglue.queues }}
{{- $list = append $list (printf "%s" $key) }}
{{- end }}
{{- join " " $list }}
{{- end }}


@@ -1,188 +1,8 @@
apiVersion: v1
kind: ConfigMap
metadata:
name: {{ include "clearml.name" . }}-pt
name: {{ include "clearmlAgent.fullname" . }}-pt
data:
{{- if .Values.enterpriseFeatures.enabled }}
template.yaml: |
{{- range $key, $value := $.Values.agentk8sglue.queues }}
{{ $key }}:
apiVersion: v1
metadata:
namespace: {{ $.Release.Namespace }}
{{- if $value.templateOverrides.labels }}
labels:
{{- toYaml $value.templateOverrides.labels | nindent 10 }}
{{- else if $.Values.agentk8sglue.basePodTemplate.labels }}
labels:
{{- toYaml $.Values.agentk8sglue.basePodTemplate.labels | nindent 10 }}
{{- end}}
{{- if $value.templateOverrides.annotations }}
annotations:
{{- toYaml $value.templateOverrides.annotations | nindent 10 }}
{{- else if $.Values.agentk8sglue.basePodTemplate.annotations }}
annotations:
{{- toYaml $.Values.agentk8sglue.basePodTemplate.annotations | nindent 10 }}
{{- end}}
spec:
{{- if $.Values.imageCredentials.enabled }}
imagePullSecrets:
{{- if $.Values.imageCredentials.existingSecret }}
- name: $.Values.imageCredentials.existingSecret
{{- else }}
- name: {{ include "clearml.name" $ }}-ark
{{- end }}
{{- end }}
{{- if $value.templateOverrides.schedulerName }}
schedulerName: {{ $value.templateOverrides.schedulerName }}
{{- else if $.Values.agentk8sglue.basePodTemplate.schedulerName }}
schedulerName: {{ $.Values.agentk8sglue.basePodTemplate.schedulerName }}
{{- end}}
restartPolicy: Never
{{- if $value.templateOverrides.securityContext }}
securityContext:
{{- toYaml $value.templateOverrides.securityContext | nindent 10 }}
{{- else if $.Values.agentk8sglue.basePodTemplate.securityContext }}
securityContext:
{{- toYaml $.Values.agentk8sglue.basePodTemplate.securityContext | nindent 10 }}
{{- end}}
{{- if $value.templateOverrides.hostAliases }}
{{- with $value.templateOverrides.hostAliases }}
hostAliases:
{{- toYaml . | nindent 10 }}
{{- end }}
{{- else if $.Values.agentk8sglue.basePodTemplate.hostAliases }}
{{- with $.Values.agentk8sglue.basePodTemplate.hostAliases }}
hostAliases:
{{- toYaml . | nindent 10 }}
{{- end }}
{{- end }}
volumes:
{{- if $value.templateOverrides.volumes }}
{{- toYaml $value.templateOverrides.volumes | nindent 10 }}
{{- else if $.Values.agentk8sglue.basePodTemplate.volumes }}
{{- toYaml $.Values.agentk8sglue.basePodTemplate.volumes | nindent 10 }}
{{- end }}
{{- if $value.templateOverrides.fileMounts }}
- name: filemounts
secret:
secretName: {{ include "clearml.name" $ }}-{{ $key }}-fm
{{- else if $.Values.agentk8sglue.basePodTemplate.fileMounts }}
- name: filemounts
secret:
secretName: {{ include "clearml.name" $ }}-fm
{{- end }}
{{- if not $.Values.enterpriseFeatures.serviceAccountClusterAccess }}
serviceAccountName: {{ include "clearml.serviceAccountName" $ }}
{{- end }}
{{- if $value.templateOverrides.initContainers }}
initContainers:
{{- toYaml $value.templateOverrides.initContainers | nindent 10 }}
{{- else if $.Values.agentk8sglue.basePodTemplate.initContainers }}
initContainers:
{{- toYaml $.Values.agentk8sglue.basePodTemplate.initContainers | nindent 10 }}
{{- end }}
containers:
- resources:
{{- if $value.templateOverrides.resources }}
{{- toYaml $value.templateOverrides.resources | nindent 12 }}
{{- else if $.Values.agentk8sglue.basePodTemplate.resources }}
{{- toYaml $.Values.agentk8sglue.basePodTemplate.resources | nindent 12 }}
{{- end}}
ports:
- containerPort: 10022
volumeMounts:
{{- if $value.templateOverrides.volumeMounts }}
{{- toYaml $value.templateOverrides.volumeMounts | nindent 12 }}
{{- else if $.Values.agentk8sglue.basePodTemplate.volumeMounts }}
{{- toYaml $.Values.agentk8sglue.basePodTemplate.volumeMounts | nindent 12 }}
{{- end }}
{{- if $value.templateOverrides.fileMounts }}
{{- range $value.templateOverrides.fileMounts }}
- name: filemounts
mountPath: "{{ .folderPath }}/{{ .name }}"
subPath: "{{ .name }}"
readOnly: true
{{- end }}
{{- else if $.Values.agentk8sglue.basePodTemplate.fileMounts }}
{{- range $.Values.agentk8sglue.basePodTemplate.fileMounts }}
- name: filemounts
mountPath: "{{ .folderPath }}/{{ .name }}"
subPath: "{{ .name }}"
readOnly: true
{{- end }}
{{- end }}
env:
- name: CLEARML_API_HOST
value: {{ $.Values.agentk8sglue.apiServerUrlReference }}
- name: CLEARML_WEB_HOST
value: {{ $.Values.agentk8sglue.webServerUrlReference }}
- name: CLEARML_FILES_HOST
value: {{ $.Values.agentk8sglue.fileServerUrlReference }}
{{- if not $.Values.agentk8sglue.useOwnerToken }}
- name: CLEARML_API_ACCESS_KEY
valueFrom:
secretKeyRef:
{{- if .Values.clearml.existingAgentk8sglueSecret }}
name: {{ .Values.clearml.existingAgentk8sglueSecret }}
{{- else }}
name: {{ include "clearml.name" . }}-ac
{{- end }}
key: agentk8sglue_key
- name: CLEARML_API_SECRET_KEY
valueFrom:
secretKeyRef:
{{- if .Values.clearml.existingAgentk8sglueSecret }}
name: {{ .Values.clearml.existingAgentk8sglueSecret }}
{{- else }}
name: {{ include "clearml.name" . }}-ac
{{- end }}
key: agentk8sglue_secret
{{- end }}
- name: PYTHONUNBUFFERED
value: "x"
{{- if not $.Values.agentk8sglue.clearmlcheckCertificate }}
- name: CLEARML_API_HOST_VERIFY_CERT
value: "false"
{{- end }}
{{- if $value.templateOverrides.env }}
{{- toYaml $value.templateOverrides.env | nindent 12 }}
{{- else if $.Values.agentk8sglue.basePodTemplate.env }}
{{- toYaml $.Values.agentk8sglue.basePodTemplate.env | nindent 12 }}
{{- end }}
{{- if $value.templateOverrides.nodeSelector }}
{{- with $value.templateOverrides.nodeSelector }}
nodeSelector:
{{- toYaml . | nindent 12 }}
{{- end }}
{{- else if $.Values.agentk8sglue.basePodTemplate.nodeSelector }}
{{- with $.Values.agentk8sglue.basePodTemplate.nodeSelector }}
nodeSelector:
{{- toYaml . | nindent 10 }}
{{- end }}
{{- end }}
{{- if $value.templateOverrides.tolerations }}
{{- with $value.templateOverrides.tolerations }}
tolerations:
{{- toYaml . | nindent 10 }}
{{- end }}
{{- else if $.Values.agentk8sglue.basePodTemplate.tolerations }}
{{- with $.Values.agentk8sglue.basePodTemplate.tolerations }}
tolerations:
{{- toYaml . | nindent 10 }}
{{- end }}
{{- end }}
{{- end }}
secrets.yaml: |
{{- range $key, $value := $.Values.agentk8sglue.queues }}
{{ $key }}:
{{- if $value.templateOverrides.fileMounts }}
- {{ include "clearml.name" $ }}-{{ $key }}-fm
{{- else if $.Values.agentk8sglue.basePodTemplate.fileMounts }}
- {{ include "clearml.name" $ }}-fm
{{- end }}
{{- end }}
{{- else }}
template.yaml: |
apiVersion: v1
metadata:
@@ -195,19 +15,26 @@ data:
{{- if .Values.imageCredentials.enabled }}
imagePullSecrets:
{{- if .Values.imageCredentials.existingSecret }}
- name: {{.Values.imageCredentials.existingSecret}}
- name: {{ .Values.imageCredentials.existingSecret }}
{{- else }}
- name: name: {{ include "clearml.name" $ }}-ark
- name: {{ include "clearmlAgent.fullname" $ }}-ark
{{- end }}
{{- end }}
{{- with .Values.agentk8sglue.basePodTemplate.volumes }}
volumes:
{{- toYaml . | nindent 8 }}
{{- end }}
serviceAccountName: {{ include "clearml.serviceAccountName" $ }}
serviceAccountName: {{ include "clearmlAgent.serviceAccountName" $ }}
securityContext:
{{ toYaml .Values.agentk8sglue.basePodTemplate.podSecurityContext | nindent 8 }}
priorityClassName: {{ .Values.agentk8sglue.basePodTemplate.priorityClassName }}
initContainers:
{{- toYaml .Values.agentk8sglue.basePodTemplate.initContainers | nindent 8 }}
containers:
- resources:
{{- toYaml .Values.agentk8sglue.basePodTemplate.resources | nindent 10 }}
securityContext:
{{ toYaml .Values.agentk8sglue.basePodTemplate.containerSecurityContext | nindent 10 }}
ports:
- containerPort: 10022
{{- with .Values.agentk8sglue.basePodTemplate.volumeMounts }}
@@ -227,7 +54,7 @@ data:
{{- if .Values.clearml.existingAgentk8sglueSecret }}
name: {{ .Values.clearml.existingAgentk8sglueSecret }}
{{- else }}
name: {{ include "clearml.name" . }}-ac
name: {{ include "clearmlAgent.fullname" . }}-ac
{{- end }}
key: agentk8sglue_key
- name: CLEARML_API_SECRET_KEY
@@ -236,7 +63,7 @@ data:
{{- if .Values.clearml.existingAgentk8sglueSecret }}
name: {{ .Values.clearml.existingAgentk8sglueSecret }}
{{- else }}
name: {{ include "clearml.name" . }}-ac
name: {{ include "clearmlAgent.fullname" . }}-ac
{{- end }}
key: agentk8sglue_secret
{{- if .Values.agentk8sglue.basePodTemplate.env }}
@@ -250,7 +77,10 @@ data:
tolerations:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- end }}
{{- with .Values.agentk8sglue.basePodTemplate.affinity }}
affinity:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- if .Values.sessions.portModeEnabled }}
{{- range untilStep 1 ( ( add .Values.sessions.maxServices 1 ) | int ) 1 }}
services-{{ . }}.yaml: |
@@ -259,7 +89,7 @@ data:
metadata:
name: clearml-session-{{ . }}
labels:
{{- include "clearml.labels" $ | nindent 8 }}
{{- include "clearmlAgent.labels" $ | nindent 8 }}
{{- with $.Values.sessions.svcAnnotations }}
annotations:
{{- toYaml . | nindent 8 }}
@@ -278,6 +108,6 @@ data:
nodePort: {{ add $.Values.sessions.startingPort . }}
{{- end }}
selector:
ai-allegro-agent-serial: pod-{{ . }}
ai.allegro.agent.serial: pod-{{ . }}
{{- end }}
{{- end }}


@@ -1,11 +1,11 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ include "clearml.name" . }}
name: {{ include "clearmlAgent.fullname" . }}
labels:
{{- include "clearml.labels" . | nindent 4 }}
{{- include "agentk8sglue.labels" . | nindent 4 }}
annotations:
{{- include "clearml.annotations" . | nindent 4 }}
{{- include "clearmlAgent.annotations" . | nindent 4 }}
spec:
replicas: {{ .Values.agentk8sglue.replicaCount }}
selector:
@@ -14,177 +14,176 @@ spec:
template:
metadata:
annotations:
checksum/config: {{ printf "%s%s" .Values.clearml .Values.agentk8sglue | sha256sum }}
{{- include "clearml.annotations" . | nindent 8 }}
checksum/config: {{ printf "%s" .Values | sha256sum }}
{{- include "clearmlAgent.annotations" . | nindent 8 }}
labels:
{{- include "clearml.labels" . | nindent 8 }}
{{- include "agentk8sglue.labels" . | nindent 8 }}
spec:
{{- if .Values.imageCredentials.enabled }}
imagePullSecrets:
{{- if .Values.imageCredentials.existingSecret }}
- name: .Values.imageCredentials.existingSecret
- name: {{ .Values.imageCredentials.existingSecret }}
{{- else }}
- name: {{ include "clearml.name" . }}-ark
- name: {{ include "clearmlAgent.fullname" . }}-ark
{{- end }}
{{- end }}
serviceAccountName: {{ include "clearml.serviceAccountName" . }}
serviceAccountName: {{ include "clearmlAgent.serviceAccountName" . }}
securityContext:
{{ toYaml .Values.agentk8sglue.podSecurityContext | nindent 8 }}
initContainers:
- name: init-k8s-glue
image: "{{ .Values.agentk8sglue.image.repository }}:{{ .Values.agentk8sglue.image.tag }}"
command:
- /bin/sh
- -c
- >
set -x;
while [ $(curl {{ if not .Values.agentk8sglue.clearmlcheckCertificate }}--insecure{{ end }} -sw '%{http_code}' "{{.Values.agentk8sglue.apiServerUrlReference}}/debug.ping" -o /dev/null) -ne 200 ] ; do
echo "waiting for apiserver" ;
sleep 5 ;
done;
while [[ $(curl {{ if not .Values.agentk8sglue.clearmlcheckCertificate }}--insecure{{ end }} -sw '%{http_code}' "{{.Values.agentk8sglue.fileServerUrlReference}}/" -o /dev/null) =~ 403|405 ]] ; do
echo "waiting for fileserver" ;
sleep 5 ;
done;
while [ $(curl {{ if not .Values.agentk8sglue.clearmlcheckCertificate }}--insecure{{ end }} -sw '%{http_code}' "{{.Values.agentk8sglue.webServerUrlReference}}/" -o /dev/null) -ne 200 ] ; do
echo "waiting for webserver" ;
sleep 5 ;
done
- name: init-k8s-glue
image: "{{ include "registryNamePrefix" (dict "globalValues" .Values.global "imageRegistryValue" .Values.agentk8sglue.image.registry) }}{{ .Values.agentk8sglue.image.repository }}:{{ .Values.agentk8sglue.image.tag }}"
command:
- /bin/sh
- -c
- >
set -x;
while [ $(curl {{ if not .Values.agentk8sglue.clearmlcheckCertificate }}--insecure{{ end }} -sw '%{http_code}' "{{.Values.agentk8sglue.apiServerUrlReference}}/debug.ping" -o /dev/null) -ne 200 ] ; do
echo "waiting for apiserver" ;
sleep 5 ;
done;
while [[ $(curl {{ if not .Values.agentk8sglue.clearmlcheckCertificate }}--insecure{{ end }} -sw '%{http_code}' "{{.Values.agentk8sglue.fileServerUrlReference}}/" -o /dev/null) =~ 403|405 ]] ; do
echo "waiting for fileserver" ;
sleep 5 ;
done;
while [ $(curl {{ if not .Values.agentk8sglue.clearmlcheckCertificate }}--insecure{{ end }} -sw '%{http_code}' "{{.Values.agentk8sglue.webServerUrlReference}}/" -o /dev/null) -ne 200 ] ; do
echo "waiting for webserver" ;
sleep 5 ;
done
securityContext:
{{ toYaml .Values.agentk8sglue.containerSecurityContext | nindent 12 }}
resources:
{{- toYaml .Values.agentk8sglue.initContainers.resources | nindent 12 }}
containers:
- name: k8s-glue
image: "{{ .Values.agentk8sglue.image.repository }}:{{ .Values.agentk8sglue.image.tag }}"
imagePullPolicy: IfNotPresent
command:
- /bin/bash
- -c
- >
export PATH=$PATH:$HOME/bin;
source /root/.bashrc && /root/entrypoint.sh
volumeMounts:
- name: {{ include "clearml.name" . }}-pt
mountPath: /root/template
{{ if .Values.clearml.clearmlConfig }}
- name: k8sagent-clearml-conf-volume
mountPath: /root/clearml.conf
subPath: clearml.conf
readOnly: true
{{- end }}
{{- if .Values.agentk8sglue.volumeMounts }}
{{- toYaml .Values.agentk8sglue.volumeMounts | nindent 10 }}
{{- end }}
{{- range .Values.agentk8sglue.fileMounts }}
- name: filemounts
mountPath: "{{ .folderPath }}/{{ .name }}"
subPath: "{{ .name }}"
readOnly: true
{{- end }}
env:
- name: CLEARML_API_HOST
value: "{{.Values.agentk8sglue.apiServerUrlReference}}"
- name: CLEARML_WEB_HOST
value: "{{.Values.agentk8sglue.webServerUrlReference}}"
- name: CLEARML_FILES_HOST
value: "{{.Values.agentk8sglue.fileServerUrlReference}}"
{{- if not .Values.agentk8sglue.clearmlcheckCertificate }}
- name: CLEARML_API_HOST_VERIFY_CERT
value: "false"
{{- end }}
{{- if .Values.sessions.portModeEnabled }}
- name: K8S_GLUE_EXTRA_ARGS
value: "--namespace {{ .Release.Namespace }} --template-yaml /root/template/template.yaml \
--ports-mode --num-of-services {{ .Values.sessions.maxServices }} \
--base-port {{ .Values.sessions.startingPort }} \
--gateway-address {{ .Values.sessions.externalIP }}{{ if .Values.enterpriseFeatures.enabled }}{{ if .Values.enterpriseFeatures.useOwnerToken }} --use-owner-token{{ end }}{{ end }}"
{{- if .Values.sessions.dynamicSvcs }}
- name: CLEARML_K8S_GLUE_POD_POST_APPLY_CMD
value: "kubectl -n {namespace} apply -f ~/template/services-{pod_number}.yaml ; kubectl -n {namespace} label svc clearml-session-{pod_number} service-for={pod_name}"
- name: CLEARML_K8S_GLUE_POD_POST_DELETE_CMD
value: "kubectl -n {namespace} delete svc -l service-for={pod_name}"
{{- end }}
{{- else}}
- name: K8S_GLUE_EXTRA_ARGS
value: "--namespace {{ .Release.Namespace }} --template-yaml /root/template/template.yaml \
--max-pods {{.Values.enterpriseFeatures.maxPods}}{{ if .Values.enterpriseFeatures.enabled }}{{ if .Values.enterpriseFeatures.useOwnerToken }} --use-owner-token{{ end }}{{ end }}"
{{- end }}
- name: CLEARML_K8S_GLUE_LIMIT_POD_LABEL
value: "ai-allegro-agent-serial=pod-{pod_number}"
- name: CLEARML_K8S_SECRETS_LIST_FILE
value: /root/template/secrets.yaml
- name: K8S_DEFAULT_NAMESPACE
value: "{{ .Release.Namespace }}"
- name: CLEARML_API_ACCESS_KEY
valueFrom:
secretKeyRef:
name: {{ include "clearml.name" . }}-ac
key: agentk8sglue_key
- name: CLEARML_API_SECRET_KEY
valueFrom:
secretKeyRef:
name: {{ include "clearml.name" . }}-ac
key: agentk8sglue_secret
- name: CLEARML_WORKER_ID
value: {{ include "clearml.name" . }}
- name: CLEARML_AGENT_UPDATE_REPO
value: ""
- name: FORCE_CLEARML_AGENT_REPO
value: ""
- name: CLEARML_DOCKER_IMAGE
value: "{{.Values.agentk8sglue.defaultContainerImage}}"
{{ if .Values.agentk8sglue.customBashScript }}
- name: CLEARML_K8S_GLUE_EXTRA_BASH_SCRIPT
value: "{{.Values.agentk8sglue.customBashScript}}"
{{- end }}
{{ if .Values.agentk8sglue.containerCustomBashScript }}
- name: CLEARML_K8S_GLUE_POD_BASH_SCRIPT
value: "{{.Values.agentk8sglue.containerCustomBashScript}}"
{{- end }}
{{- if .Values.agentk8sglue.debugMode }}
- name: "CLEARML_K8S_GLUE_DEBUG"
value: "1"
{{- end }}
{{- if .Values.agentk8sglue.extraEnvs }}
{{ toYaml .Values.agentk8sglue.extraEnvs | nindent 10 }}
{{- end }}
{{- if .Values.sessions.portModeEnabled }}
{{- if .Values.sessions.setInteractiveQueuesTag }}
- name: "CLEARML_K8S_GLUE_SET_QUEUE_SYSTEM_TAGS"
value: "interactive"
{{- end }}
{{- end }}
{{- if .Values.enterpriseFeatures.enabled }}
- name: K8S_GLUE_QUEUE
value: {{ include "agentk8sglue.queues" . | quote }}
- name: CLEARML_K8S_GLUE_APPLY_VAULT_ENV_VARS
value: {{ .Values.enterpriseFeatures.applyVaultEnvVars | quote }}
- name: "CLEARML_K8S_GLUE_POD_MIN_RES_FIELD"
value: {{.Values.enterpriseFeatures.monitoredResources.minResourcesFieldName}}
- name: "CLEARML_K8S_GLUE_MAX_RESOURCES"
value: "{{.Values.agentk8sglue.monitoredResources.maxResources}}"
- name: "CLEARML_K8S_GLUE_POD_MAX_RES_FIELD"
value: {{.Values.enterpriseFeatures.monitoredResources.maxResourcesFieldName}}
{{- else }}
- name: K8S_GLUE_QUEUE
value: {{ .Values.agentk8sglue.queue }}
{{- end }}
{{- with .Values.agentk8sglue.basePodTemplate.nodeSelector}}
- name: k8s-glue
image: "{{ include "registryNamePrefix" (dict "globalValues" .Values.global "imageRegistryValue" .Values.agentk8sglue.image.registry) }}{{ .Values.agentk8sglue.image.repository }}:{{ .Values.agentk8sglue.image.tag }}"
imagePullPolicy: IfNotPresent
command:
- /bin/bash
- -c
- >
export PATH=$PATH:$HOME/bin;
source /root/.bashrc && /root/entrypoint.sh
volumeMounts:
- name: {{ include "clearmlAgent.fullname" . }}-pt
mountPath: /root/template
{{ if or (.Values.clearml.clearmlConfig) (.Values.clearml.existingClearmlConfigSecret) }}
- name: k8sagent-clearml-conf-volume
mountPath: /root/clearml.conf
subPath: clearml.conf
readOnly: true
{{- end }}
{{- if .Values.agentk8sglue.volumeMounts }}
{{- toYaml .Values.agentk8sglue.volumeMounts | nindent 12 }}
{{- end }}
{{- range .Values.agentk8sglue.fileMounts }}
- name: filemounts
mountPath: "{{ .folderPath }}/{{ .name }}"
subPath: "{{ .name }}"
readOnly: true
{{- end }}
env:
- name: CLEARML_API_HOST
value: "{{.Values.agentk8sglue.apiServerUrlReference}}"
- name: CLEARML_WEB_HOST
value: "{{.Values.agentk8sglue.webServerUrlReference}}"
- name: CLEARML_FILES_HOST
value: "{{.Values.agentk8sglue.fileServerUrlReference}}"
{{- if not .Values.agentk8sglue.clearmlcheckCertificate }}
- name: CLEARML_API_HOST_VERIFY_CERT
value: "false"
{{- end }}
{{- if .Values.sessions.portModeEnabled }}
- name: K8S_GLUE_EXTRA_ARGS
value: "--namespace {{ .Release.Namespace }} --template-yaml /root/template/template.yaml \
--ports-mode --num-of-services {{ .Values.sessions.maxServices }} \
--base-port {{ .Values.sessions.startingPort }} \
--gateway-address {{ .Values.sessions.externalIP }} \
{{- if .Values.agentk8sglue.createQueueIfNotExists }} --create-queue{{- end }}
"
{{- else}}
- name: K8S_GLUE_EXTRA_ARGS
value: "--namespace {{ .Release.Namespace }} --template-yaml /root/template/template.yaml \
{{- if .Values.agentk8sglue.createQueueIfNotExists }} --create-queue{{- end }}
"
{{- end }}
{{ if or (.Values.clearml.clearmlConfig) (.Values.clearml.existingClearmlConfigSecret) }}
- name: CLEARML_CONFIG_FILE
value: /root/clearml.conf
{{- end }}
- name: K8S_DEFAULT_NAMESPACE
value: "{{ .Release.Namespace }}"
- name: CLEARML_API_ACCESS_KEY
valueFrom:
secretKeyRef:
{{- if .Values.clearml.existingAgentk8sglueSecret }}
name: {{ .Values.clearml.existingAgentk8sglueSecret }}
{{- else }}
name: {{ include "clearmlAgent.fullname" . }}-ac
{{- end }}
key: agentk8sglue_key
- name: CLEARML_API_SECRET_KEY
valueFrom:
secretKeyRef:
{{- if .Values.clearml.existingAgentk8sglueSecret }}
name: {{ .Values.clearml.existingAgentk8sglueSecret }}
{{- else }}
name: {{ include "clearmlAgent.fullname" . }}-ac
{{- end }}
key: agentk8sglue_secret
- name: CLEARML_WORKER_ID
value: {{ include "clearmlAgent.fullname" . }}
- name: CLEARML_AGENT_UPDATE_REPO
value: ""
- name: FORCE_CLEARML_AGENT_REPO
value: ""
- name: CLEARML_DOCKER_IMAGE
value: "{{.Values.agentk8sglue.defaultContainerImage}}"
- name: K8S_GLUE_QUEUE
value: {{ .Values.agentk8sglue.queue }}
{{- if .Values.agentk8sglue.extraEnvs }}
{{ toYaml .Values.agentk8sglue.extraEnvs | nindent 12 }}
{{- end }}
securityContext:
{{ toYaml .Values.agentk8sglue.containerSecurityContext | nindent 12 }}
resources:
{{- toYaml .Values.agentk8sglue.resources | nindent 12 }}
{{- with .Values.agentk8sglue.nodeSelector}}
nodeSelector:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.agentk8sglue.affinity }}
affinity:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.agentk8sglue.tolerations }}
tolerations:
{{- toYaml . | nindent 8 }}
{{- end }}
volumes:
- name: {{ include "clearml.name" . }}-pt
- name: {{ include "clearmlAgent.fullname" . }}-pt
configMap:
name: {{ include "clearml.name" . }}-pt
{{ if .Values.clearml.clearmlConfig }}
name: {{ include "clearmlAgent.fullname" . }}-pt
{{ if .Values.clearml.existingClearmlConfigSecret }}
- name: k8sagent-clearml-conf-volume
secret:
secretName: {{ include "clearml.name" . }}-ac
secretName: {{ .Values.clearml.existingClearmlConfigSecret }}
items:
- key: clearml.conf
path: clearml.conf
{{ else if .Values.clearml.clearmlConfig }}
- name: k8sagent-clearml-conf-volume
secret:
secretName: {{ include "clearmlAgent.fullname" . }}-ac
items:
- key: clearml.conf
path: clearml.conf
{{ end }}
{{- if .Values.agentk8sglue.volumes }}
{{- toYaml .Values.agentk8sglue.volumes | nindent 8 }}
{{- end }}
{{ if .Values.agentk8sglue.fileMounts }}
- name: filemounts
secret:
secretName: {{ include "clearml.name" . }}-afm
secretName: {{ include "clearmlAgent.fullname" . }}-afm
{{ end }}
{{- if .Values.agentk8sglue.volumes }}
{{- toYaml .Values.agentk8sglue.volumes | nindent 8 }}
{{- end }}


@@ -2,47 +2,18 @@
apiVersion: v1
kind: ServiceAccount
metadata:
name: {{ include "clearml.serviceAccountName" . }}
name: {{ include "clearmlAgent.serviceAccountName" . }}
namespace: {{ .Release.Namespace }}
{{- if .Values.agentk8sglue.serviceAccountAnnotations }}
annotations:
{{- toYaml .Values.agentk8sglue.serviceAccountAnnotations | nindent 4 }}
{{- end }}
{{- end }}
{{- if .Values.enterpriseFeatures.serviceAccountClusterAccess }}
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: {{ include "clearml.name" . }}-kpa
rules:
- apiGroups:
- ""
resources:
- pods
- secrets
- services
verbs: ["get", "list", "watch", "create", "patch", "delete"]
- apiGroups:
- ""
resources:
- namespaces
verbs: ["list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: {{ include "clearml.name" . }}-kpa
subjects:
- kind: ServiceAccount
name: {{ include "clearml.serviceAccountName" . }}
namespace: {{ .Release.Namespace }}
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: {{ include "clearml.name" . }}-kpa
{{- else }}
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: {{ include "clearml.name" . }}-kpa
name: {{ include "clearmlAgent.fullname" . }}-kpa
rules:
- apiGroups:
- ""
@@ -50,23 +21,61 @@ rules:
- pods
- secrets
- services
- events
verbs: ["get", "list", "watch", "create", "patch", "delete"]
- apiGroups:
- ""
resources:
- namespaces
verbs: ["list"]
{{- if .Values.agentk8sglue.taskAsJob }}
- apiGroups:
- batch
- extensions
resources:
- jobs
verbs: ["get", "list", "watch", "create", "patch", "delete"]
{{- end }}
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: {{ include "clearml.name" . }}-kpa
name: {{ include "clearmlAgent.fullname" . }}-kpa
subjects:
- kind: ServiceAccount
name: {{ include "clearml.serviceAccountName" . }}
name: {{ include "clearmlAgent.serviceAccountName" . }}
namespace: {{ .Release.Namespace }}
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: {{ include "clearml.name" . }}-kpa
name: {{ include "clearmlAgent.fullname" . }}-kpa
{{- range .Values.agentk8sglue.additionalClusterRoleBindings }}
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: {{ include "clearmlAgent.fullname" $ }}-kpa-{{ . }}
subjects:
- kind: ServiceAccount
name: {{ include "clearmlAgent.serviceAccountName" $ }}
namespace: {{ $.Release.Namespace }}
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: {{ . }}
{{- end }}
{{- range .Values.agentk8sglue.additionalRoleBindings }}
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: {{ include "clearmlAgent.fullname" $ }}-kpa-{{ . }}
subjects:
- kind: ServiceAccount
name: {{ include "clearmlAgent.serviceAccountName" $ }}
namespace: {{ $.Release.Namespace }}
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: {{ . }}
{{- end }}


@@ -1,18 +1,20 @@
{{- if or (not .Values.clearml.existingAgentk8sglueSecret) (not .Values.clearml.existingClearmlConfigSecret) }}
apiVersion: v1
kind: Secret
metadata:
name: {{ include "clearml.name" . }}-ac
name: {{ include "clearmlAgent.fullname" . }}-ac
data:
agentk8sglue_key: {{ .Values.clearml.agentk8sglueKey | b64enc }}
agentk8sglue_secret: {{ .Values.clearml.agentk8sglueSecret | b64enc }}
clearml.conf: {{ .Values.clearml.clearmlConfig | b64enc }}
{{- end }}
---
{{- if .Values.imageCredentials.enabled }}
{{- if not .Values.imageCredentials.existingSecret }}
apiVersion: v1
kind: Secret
metadata:
name: {{ include "clearml.name" . }}-ark
name: {{ include "clearmlAgent.fullname" . }}-ark
type: kubernetes.io/dockerconfigjson
data:
.dockerconfigjson: {{ template "imagePullSecret" . }}


@@ -2,36 +2,9 @@
apiVersion: v1
kind: Secret
metadata:
name: {{ include "clearml.name" . }}-afm
name: {{ include "clearmlAgent.fullname" . }}-afm
data:
{{- range .Values.agentk8sglue.fileMounts }}
{{ .name }}: {{ .fileContent | b64enc }}
{{- end }}
{{ end }}
---
{{- if .Values.enterpriseFeatures.enabled }}
{{ if .Values.agentk8sglue.basePodTemplate.fileMounts }}
apiVersion: v1
kind: Secret
metadata:
name: {{ include "clearml.name" . }}-fm
data:
{{- range .Values.agentk8sglue.basePodTemplate.fileMounts }}
{{ .name }}: {{ .fileContent | b64enc }}
{{- end }}
{{ end }}
---
{{- range $key, $value := $.Values.agentk8sglue.queues }}
{{ if .templateOverrides.fileMounts }}
apiVersion: v1
kind: Secret
metadata:
name: {{ include "clearml.name" $ }}-{{ $key }}-fm
data:
{{- range .templateOverrides.fileMounts }}
{{ .name }}: {{ .fileContent | b64enc }}
{{- end }}
{{ end }}
---
{{- end }}
{{- end }}


@@ -1,5 +1,4 @@
{{- if .Values.sessions.portModeEnabled }}
{{- if not .Values.sessions.dynamicSvcs }}
{{- range untilStep 1 ( ( add .Values.sessions.maxServices 1 ) | int ) 1 }}
---
apiVersion: v1
@@ -7,7 +6,7 @@ kind: Service
metadata:
name: clearml-session-{{ . }}
labels:
{{- include "clearml.labels" $ | nindent 4 }}
{{- include "clearmlAgent.labels" $ | nindent 4 }}
{{- with $.Values.sessions.svcAnnotations }}
annotations:
{{- toYaml . | nindent 4 }}
@@ -29,4 +28,3 @@ spec:
ai.allegro.agent.serial: pod-{{ . }}
{{- end }}
{{- end }}
{{- end }}
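For readers unfamiliar with Sprig's `untilStep`, the range above renders one `clearml-session-<n>` Service per index from 1 through `sessions.maxServices`. A rough Python sketch of the index generation (using the default `maxServices: 20`; the variable names are illustrative only):

```python
# Sprig's (untilStep 1 (maxServices + 1) 1) yields 1, 2, ..., maxServices,
# mirroring Python's range() with an exclusive upper bound.
max_services = 20  # default sessions.maxServices in values.yaml

indices = list(range(1, max_services + 1))
service_names = [f"clearml-session-{i}" for i in indices]

print(service_names[0], service_names[-1])  # first and last generated Service names
```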


@@ -1,3 +1,8 @@
# -- Global parameters section
global:
# -- Images registry
imageRegistry: "docker.io"
# -- Private image registry configuration
imageCredentials:
# -- Use private authentication mode
@@ -24,6 +29,16 @@ clearml:
# -- If this is set, chart will not generate a secret but will use what is defined here
existingClearmlConfigSecret: ""
# The secret should be defined as in the following example
#
# apiVersion: v1
# kind: Secret
# metadata:
# name: secret-name
# stringData:
# clearml.conf: |-
# sdk {
# }
# -- ClearML configuration file
clearmlConfig: |-
sdk {
@@ -31,25 +46,33 @@ clearml:
# -- This agent will spawn queued experiments in new pods; a good use case is to combine this with
# GPU-autoscaled nodes.
# https://github.com/allegroai/clearml-agent/tree/master/docker/k8s-glue
# https://github.com/clearml/clearml-agent/tree/master/docker/k8s-glue
agentk8sglue:
# -- Glue Agent image configuration
image:
registry: ""
repository: "allegroai/clearml-agent-k8s-base"
tag: "1.24-21"
# -- Glue Agent number of pods
replicaCount: 1
# -- if set, don't create a serviceAccountName but use defined existing one
# -- Glue Agent pod resources
resources: {}
# -- Glue Agent pod initContainers configs
initContainers:
# -- Glue Agent initcontainers pod resources
resources: {}
# -- Add the provided map to the annotations for the ServiceAccount resource created by this chart
serviceAccountAnnotations: {}
# -- If set, do not create a serviceAccountName and use the existing one with the provided name
serviceExistingAccountName: ""
# -- Check certificate validity for every UrlReference below.
clearmlcheckCertificate: true
# -- Enable Debugging logs for Agent pod
debugMode: false
# -- Reference to Api server url
apiServerUrlReference: "https://api.clear.ml"
# -- Reference to File server url
@@ -59,26 +82,41 @@ agentk8sglue:
# -- default container image for ClearML Task pod
defaultContainerImage: ubuntu:18.04
# -- ClearML queue this agent will consume
# -- ClearML queue this agent will consume. Multiple queues can be specified with the following format: queue1,queue2,queue3
queue: default
# -- Custom Bash script for the Glue Agent
# -- if the ClearML queue does not exist, it will be created when this value is set to true
createQueueIfNotExists: false
# -- labels setup for Agent pod (example in values.yaml comments)
labels: {}
# schedulerName: scheduler
# -- annotations setup for Agent pod (example in values.yaml comments)
annotations: {}
# key1: value1
customBashScript: ""
# -- Custom Bash script for the Task Pods run by the Glue Agent
containerCustomBashScript: ""
# -- Extra Environment variables for Glue Agent
extraEnvs: []
# - name: PYTHONPATH
# value: "somepath"
# - name: PYTHONPATH
# value: "somepath"
# -- container securityContext setup for Agent pod (example in values.yaml comments)
podSecurityContext: {}
# runAsUser: 1001
# fsGroup: 1001
# -- container securityContext setup for Agent pod (example in values.yaml comments)
containerSecurityContext: {}
# runAsUser: 1001
# fsGroup: 1001
# -- additional existing ClusterRoleBindings
additionalClusterRoleBindings: []
# - privileged
# -- additional existing RoleBindings
additionalRoleBindings: []
# - privileged
# -- nodeSelector setup for Agent pod (example in values.yaml comments)
nodeSelector: {}
# fleet: agent-nodes
# -- tolerations setup for Agent pod (example in values.yaml comments)
tolerations: []
# -- affinity setup for Agent pod (example in values.yaml comments)
affinity: {}
# -- volumes definition for Glue Agent (example in values.yaml comments)
volumes: []
# - name: "yourvolume"
@@ -154,24 +192,33 @@ agentk8sglue:
# - name: CURL_CA_BUNDLE
# value: ""
# - name: PYTHONWARNINGS
# value: "=\"ignore:Unverified HTTPS request\""
# value: "ignore:Unverified HTTPS request"
# -- resources declaration for pods spawned to consume ClearML Task (example in values.yaml comments)
resources: {}
# limits:
# nvidia.com/gpu: 1
# -- priorityClassName setup for pods spawned to consume ClearML Task
priorityClassName: ""
# -- nodeSelector setup for pods spawned to consume ClearML Task (example in values.yaml comments)
nodeSelector: {}
# fleet: gpu-nodes
# -- tolerations setup for pods spawned to consume ClearML Task (example in values.yaml comments)
tolerations: []
# - key: "nvidia.com/gpu"
# operator: Exists
# effect: "NoSchedule"
# -- nodeSelector setup for pods spawned to consume ClearML Task (example in values.yaml comments)
nodeSelector: {}
# fleet: gpu-nodes
# -- affinity setup for pods spawned to consume ClearML Task
affinity: {}
# -- securityContext setup for pods spawned to consume ClearML Task (example in values.yaml comments)
securityContext: {}
# runAsUser: 1000
podSecurityContext: {}
# runAsUser: 1001
# fsGroup: 1001
# -- securityContext setup for containers spawned to consume ClearML Task (example in values.yaml comments)
containerSecurityContext: {}
# runAsUser: 1001
# fsGroup: 1001
# -- hostAliases setup for pods spawned to consume ClearML Task (example in values.yaml comments)
hostAliases: {}
hostAliases: []
# - ip: "127.0.0.1"
# hostnames:
# - "foo.local"
@@ -181,8 +228,6 @@ agentk8sglue:
sessions:
# -- Enable/Disable sessions portmode WARNING: only one Agent deployment can have this set to true
portModeEnabled: false
# -- Enable/Disable dynamic svc for sessions pods
dynamicSvcs: false
# -- specific annotations for session services
svcAnnotations: {}
# -- service type ("NodePort" or "ClusterIP" or "LoadBalancer")
@@ -193,40 +238,3 @@ sessions:
startingPort: 30000
# -- maximum number of NodePorts exposed
maxServices: 20
# -- set interactive queue tags
setInteractiveQueuesTag: true
# -- Enterprise features (work only with an Enterprise license)
enterpriseFeatures:
# -- Enable/Disable Enterprise features
enabled: false
# -- flag granting the service account access to every namespace
serviceAccountClusterAccess: false
# -- push env vars from Clear.ML Vault to task pods
applyVaultEnvVars: true
# -- GPU resource general counters
monitoredResources:
# -- Field name used by Agent to count minimum resources
minResourcesFieldName: "resources|limits|nvidia.com/gpu"
# -- Maximum resources counter
maxResources: 0
# -- Field name used by Agent to count maximum resources
maxResourcesFieldName: "resources|limits|nvidia.com/gpu"
# -- maximum number of concurrent ClearML Task pods
maxPods: 10
# -- Agent must use owner Token
useOwnerToken: true
# -- ClearML queues this agent will consume, with related template OVERRIDES
queues:
# -- name of the queue that will be used for this template
default:
# -- overrides of the base template for this queue (must be declared even if empty!)
templateOverrides: {}
## -- name of the queue that will be used for this template
# default-gpu:
# # -- overrides of the base template for this queue
# templateOverrides:
# # -- resources declaration for pods spawned to consume ClearML Task
# resources:
# limits:
# nvidia.com/gpu: 1


@@ -0,0 +1,12 @@
dependencies:
- name: kafka
repository: https://charts.bitnami.com/bitnami
version: 21.4.0
- name: prometheus
repository: https://prometheus-community.github.io/helm-charts
version: 19.7.2
- name: grafana
repository: https://grafana.github.io/helm-charts
version: 6.52.3
digest: sha256:b28d01875a50b24230ba164d14671225b71d79172192a97e345661e4832f484b
generated: "2023-03-16T09:10:35.77395+01:00"


@@ -2,14 +2,36 @@ apiVersion: v2
name: clearml-serving
description: ClearML Serving Helm Chart
type: application
version: 0.7.0
appVersion: "1.2.0"
kubeVersion: ">= 1.19.0-0 < 1.26.0-0"
version: "1.5.10"
appVersion: "1.3.0"
kubeVersion: ">= 1.21.0-0 < 1.33.0-0"
home: https://clear.ml
icon: https://raw.githubusercontent.com/clearml/clearml/master/docs/clearml-logo.svg
sources:
- https://github.com/clearml/clearml-helm-charts
- https://github.com/clearml/clearml
maintainers:
- name: valeriano-manassero
url: https://github.com/valeriano-manassero
- name: filippo-clearml
url: https://github.com/filippo-clearml
keywords:
- clearml
- "machine learning"
- mlops
- "model serving"
dependencies:
- name: kafka
version: "21.4.0"
repository: "https://charts.bitnami.com/bitnami"
condition: kafka.enabled
- name: prometheus
version: "19.7.2"
repository: "https://prometheus-community.github.io/helm-charts"
condition: prometheus.enabled
- name: grafana
version: "6.52.3"
repository: "https://grafana.github.io/helm-charts"
condition: grafana.enabled
annotations:
artifacthub.io/changes: |
- kind: changed
description: Support kubernetes 1.32


@@ -1,55 +1,124 @@
# clearml-serving
# ClearML Kubernetes Serving
![Version: 0.7.0](https://img.shields.io/badge/Version-0.7.0-informational?style=flat-square) ![Type: application](https://img.shields.io/badge/Type-application-informational?style=flat-square) ![AppVersion: 1.2.0](https://img.shields.io/badge/AppVersion-1.2.0-informational?style=flat-square)
![Version: 1.5.10](https://img.shields.io/badge/Version-1.5.10-informational?style=flat-square) ![Type: application](https://img.shields.io/badge/Type-application-informational?style=flat-square) ![AppVersion: 1.3.0](https://img.shields.io/badge/AppVersion-1.3.0-informational?style=flat-square)
ClearML Serving Helm Chart
**Homepage:** <https://clear.ml>
## Maintainers
| Name | Email | Url |
| ---- | ------ | --- |
| valeriano-manassero | | <https://github.com/valeriano-manassero> |
| filippo-clearml | | <https://github.com/filippo-clearml> |
## Introduction
**clearml-serving** is the Kubernetes serving chart for [ClearML](https://github.com/clearml/clearml-serving).
It allows you to serve models on a Kubernetes cluster.
## Add to local Helm repository
To add this chart to your local Helm repository:
```
helm repo add clearml https://clearml.github.io/clearml-helm-charts
```
# Upgrading Chart
## Upgrades / Values upgrades
Updating to the latest version of this chart can be done in two steps:
```
helm repo update
helm upgrade clearml-serving clearml/clearml-serving
```
Changing values on an existing installation can be done with:
```
helm upgrade clearml-serving clearml/clearml-serving --version <CURRENT CHART VERSION> -f custom_values.yaml
```
## Source Code
* <https://github.com/clearml/clearml-helm-charts>
* <https://github.com/clearml/clearml>
## Requirements
Kubernetes: `>= 1.19.0-0 < 1.26.0-0`
Kubernetes: `>= 1.21.0-0 < 1.33.0-0`
| Repository | Name | Version |
|------------|------|---------|
| https://charts.bitnami.com/bitnami | kafka | 21.4.0 |
| https://grafana.github.io/helm-charts | grafana | 6.52.3 |
| https://prometheus-community.github.io/helm-charts | prometheus | 19.7.2 |
## Values
| Key | Type | Default | Description |
|-----|------|---------|-------------|
| alertmanager | object | `{"affinity":{},"image":{"repository":"prom/alertmanager","tag":"v0.23.0"},"nodeSelector":{},"resources":{},"tolerations":[]}` | Alertmanager generic configurations |
| clearml | object | `{"apiAccessKey":"ClearML API Access Key","apiHost":"http://clearml-server-apiserver:8008","apiSecretKey":"ClearML API Secret Key","defaultBaseServeUrl":"http://127.0.0.1:8080/serve","filesHost":"http://clearml-server-fileserver:8081","servingTaskId":"ClearML Serving Task ID","webHost":"http://clearml-server-webserver:80"}` | ClearML generic configurations |
| clearml_serving_inference | object | `{"affinity":{},"autoscaling":{"enabled":false,"maxReplicas":11,"minReplicas":1,"targetCPU":50,"targetMemory":50},"extraPythonPackages":[],"image":{"repository":"allegroai/clearml-serving-inference","tag":"1.2.0"},"ingress":{"annotations":{},"enabled":false,"hostName":"serving.clearml.127-0-0-1.nip.io","path":"/","tlsSecretName":""},"nodeSelector":{},"resources":{},"tolerations":[]}` | ClearML serving inference configurations |
| clearml | object | `{"apiAccessKey":"ClearML API Access Key","apiHost":"http://clearml-server-apiserver:8008","apiSecretKey":"ClearML API Secret Key","defaultBaseServeUrl":"http://127.0.0.1:8080/serve","filesHost":"http://clearml-server-fileserver:8081","kafkaServeUrl":"","servingTaskId":"ClearML Serving Task ID","webHost":"http://clearml-server-webserver:80"}` | ClearML generic configurations |
| clearml_serving_inference | object | `{"additionalConfigs":{},"affinity":{},"autoscaling":{"enabled":false,"maxReplicas":11,"minReplicas":1,"targetCPU":50,"targetMemory":50},"existingAdditionalConfigsConfigMap":"","existingAdditionalConfigsSecret":"","extraEnvironment":[],"extraPythonPackages":[],"image":{"repository":"allegroai/clearml-serving-inference","tag":"1.3.0"},"ingress":{"annotations":{},"enabled":false,"hostName":"serving.clearml.127-0-0-1.nip.io","ingressClassName":"","path":"/","tlsSecretName":""},"nodeSelector":{},"resources":{},"tolerations":[]}` | ClearML serving inference configurations |
| clearml_serving_inference.additionalConfigs | object | `{}` | files declared in this parameter will be mounted on internal folder /opt/clearml/config and read by pod (examples in values.yaml) if not overridden by existingAdditionalConfigsSecret |
| clearml_serving_inference.affinity | object | `{}` | Affinity configuration |
| clearml_serving_inference.autoscaling | object | `{"enabled":false,"maxReplicas":11,"minReplicas":1,"targetCPU":50,"targetMemory":50}` | Autoscaling configuration |
| clearml_serving_inference.existingAdditionalConfigsConfigMap | string | `""` | reference for files declared in existing ConfigMap will be mounted and read by pod (examples in values.yaml) |
| clearml_serving_inference.existingAdditionalConfigsSecret | string | `""` | reference for files declared in existing Secret will be mounted and read by pod (examples in values.yaml) if not overridden by existingAdditionalConfigsConfigMap |
| clearml_serving_inference.extraEnvironment | list | `[]` | Extra environment variables |
| clearml_serving_inference.extraPythonPackages | list | `[]` | Extra Python Packages to be installed in running pods |
| clearml_serving_inference.image | object | `{"repository":"allegroai/clearml-serving-inference","tag":"1.2.0"}` | Container Image |
| clearml_serving_inference.ingress | object | `{"annotations":{},"enabled":false,"hostName":"serving.clearml.127-0-0-1.nip.io","path":"/","tlsSecretName":""}` | Ingress exposing configurations |
| clearml_serving_inference.image | object | `{"repository":"allegroai/clearml-serving-inference","tag":"1.3.0"}` | Container Image |
| clearml_serving_inference.ingress | object | `{"annotations":{},"enabled":false,"hostName":"serving.clearml.127-0-0-1.nip.io","ingressClassName":"","path":"/","tlsSecretName":""}` | Ingress exposing configurations |
| clearml_serving_inference.ingress.annotations | object | `{}` | Ingress annotations |
| clearml_serving_inference.ingress.enabled | bool | `false` | Enable/Disable ingress |
| clearml_serving_inference.ingress.hostName | string | `"serving.clearml.127-0-0-1.nip.io"` | Ingress hostname domain |
| clearml_serving_inference.ingress.ingressClassName | string | `""` | ClassName (must be defined if no default ingressClassName is available) |
| clearml_serving_inference.ingress.path | string | `"/"` | Ingress root path url |
| clearml_serving_inference.ingress.tlsSecretName | string | `""` | Reference to secret containing TLS certificate. If set, it enables HTTPS on ingress rule. |
| clearml_serving_inference.nodeSelector | object | `{}` | Node Selector configuration |
| clearml_serving_inference.resources | object | `{}` | Pod resources definition |
| clearml_serving_inference.tolerations | list | `[]` | Tolerations configuration |
| clearml_serving_statistics | object | `{"affinity":{},"extraPythonPackages":[],"image":{"repository":"allegroai/clearml-serving-statistics","tag":"1.2.0"},"nodeSelector":{},"resources":{},"tolerations":[]}` | ClearML serving statistics configurations |
| clearml_serving_statistics | object | `{"additionalConfigs":{},"affinity":{},"enabled":true,"existingAdditionalConfigsConfigMap":"","existingAdditionalConfigsSecret":"","extraEnvironment":[],"extraPythonPackages":[],"image":{"repository":"allegroai/clearml-serving-statistics","tag":"1.3.0"},"nodeSelector":{},"resources":{},"tolerations":[]}` | ClearML serving statistics configurations |
| clearml_serving_statistics.additionalConfigs | object | `{}` | files declared in this parameter will be mounted on internal folder /opt/clearml/config and read by pod (examples in values.yaml) if not overridden by existingAdditionalConfigsSecret |
| clearml_serving_statistics.affinity | object | `{}` | Affinity configuration |
| clearml_serving_statistics.enabled | bool | `true` | Enable ClearML Serving Statistics |
| clearml_serving_statistics.existingAdditionalConfigsConfigMap | string | `""` | reference for files declared in existing ConfigMap will be mounted and read by pod (examples in values.yaml) |
| clearml_serving_statistics.existingAdditionalConfigsSecret | string | `""` | reference for files declared in existing Secret will be mounted and read by pod (examples in values.yaml) if not overridden by existingAdditionalConfigsConfigMap |
| clearml_serving_statistics.extraPythonPackages | list | `[]` | Extra Python Packages to be installed in running pods |
| clearml_serving_statistics.image | object | `{"repository":"allegroai/clearml-serving-statistics","tag":"1.2.0"}` | Container Image |
| clearml_serving_statistics.image | object | `{"repository":"allegroai/clearml-serving-statistics","tag":"1.3.0"}` | Container Image |
| clearml_serving_statistics.nodeSelector | object | `{}` | Node Selector configuration |
| clearml_serving_statistics.resources | object | `{}` | Pod resources definition |
| clearml_serving_statistics.tolerations | list | `[]` | Tolerations configuration |
| clearml_serving_triton | object | `{"affinity":{},"autoscaling":{"enabled":false,"maxReplicas":11,"minReplicas":1,"targetCPU":50,"targetMemory":50},"enabled":true,"extraPythonPackages":[],"image":{"repository":"allegroai/clearml-serving-triton","tag":"1.2.0-22.07"},"ingress":{"annotations":{},"enabled":false,"hostName":"serving-grpc.clearml.127-0-0-1.nip.io","path":"/","tlsSecretName":""},"nodeSelector":{},"resources":{},"tolerations":[]}` | ClearML serving Triton configurations |
| clearml_serving_triton | object | `{"additionalConfigs":{},"affinity":{},"autoscaling":{"enabled":false,"maxReplicas":11,"minReplicas":1,"targetCPU":50,"targetMemory":50},"enabled":true,"existingAdditionalConfigsConfigMap":"","existingAdditionalConfigsSecret":"","extraEnvironment":[],"extraPythonPackages":[],"image":{"repository":"allegroai/clearml-serving-triton","tag":"1.3.0"},"ingress":{"annotations":{},"enabled":false,"hostName":"serving-grpc.clearml.127-0-0-1.nip.io","ingressClassName":"","path":"/","tlsSecretName":""},"nodeSelector":{},"resources":{},"tolerations":[]}` | ClearML serving Triton configurations |
| clearml_serving_triton.additionalConfigs | object | `{}` | files declared in this parameter will be mounted on internal folder /opt/clearml/config and read by pod (examples in values.yaml) if not overridden by existingAdditionalConfigsSecret |
| clearml_serving_triton.affinity | object | `{}` | Affinity configuration |
| clearml_serving_triton.autoscaling | object | `{"enabled":false,"maxReplicas":11,"minReplicas":1,"targetCPU":50,"targetMemory":50}` | Autoscaling configuration |
| clearml_serving_triton.enabled | bool | `true` | Triton pod creation enable/disable |
| clearml_serving_triton.existingAdditionalConfigsConfigMap | string | `""` | reference for files declared in existing ConfigMap will be mounted and read by pod (examples in values.yaml) |
| clearml_serving_triton.existingAdditionalConfigsSecret | string | `""` | reference for files declared in existing Secret will be mounted and read by pod (examples in values.yaml) if not overridden by existingAdditionalConfigsConfigMap |
| clearml_serving_triton.extraEnvironment | list | `[]` | Extra environment variables |
| clearml_serving_triton.extraPythonPackages | list | `[]` | Extra Python Packages to be installed in running pods |
| clearml_serving_triton.image | object | `{"repository":"allegroai/clearml-serving-triton","tag":"1.2.0-22.07"}` | Container Image |
| clearml_serving_triton.ingress | object | `{"annotations":{},"enabled":false,"hostName":"serving-grpc.clearml.127-0-0-1.nip.io","path":"/","tlsSecretName":""}` | Ingress exposing configurations |
| clearml_serving_triton.image | object | `{"repository":"allegroai/clearml-serving-triton","tag":"1.3.0"}` | Container Image |
| clearml_serving_triton.ingress | object | `{"annotations":{},"enabled":false,"hostName":"serving-grpc.clearml.127-0-0-1.nip.io","ingressClassName":"","path":"/","tlsSecretName":""}` | Ingress exposing configurations |
| clearml_serving_triton.ingress.annotations | object | `{}` | Ingress annotations |
| clearml_serving_triton.ingress.enabled | bool | `false` | Enable/Disable ingress |
| clearml_serving_triton.ingress.hostName | string | `"serving-grpc.clearml.127-0-0-1.nip.io"` | Ingress hostname domain |
| clearml_serving_triton.ingress.ingressClassName | string | `""` | ClassName (must be defined if no default ingressClassName is available) |
| clearml_serving_triton.ingress.path | string | `"/"` | Ingress root path url |
| clearml_serving_triton.ingress.tlsSecretName | string | `""` | Reference to secret containing TLS certificate. If set, it enables HTTPS on ingress rule. |
| clearml_serving_triton.nodeSelector | object | `{}` | Node Selector configuration |
| clearml_serving_triton.resources | object | `{}` | Pod resources definition |
| clearml_serving_triton.tolerations | list | `[]` | Tolerations configuration |
| grafana | object | `{"affinity":{},"image":{"repository":"grafana/grafana","tag":"8.4.4-ubuntu"},"ingress":{"annotations":{},"enabled":false,"hostName":"serving-grafana.clearml.127-0-0-1.nip.io","path":"/","tlsSecretName":""},"nodeSelector":{},"resources":{},"tolerations":[]}` | Grafana generic configurations |
| kafka | object | `{"affinity":{},"image":{"repository":"bitnami/kafka","tag":"3.1.0"},"nodeSelector":{},"resources":{},"tolerations":[]}` | Kafka generic configurations |
| prometheus | object | `{"affinity":{},"image":{"repository":"prom/prometheus","tag":"v2.34.0"},"nodeSelector":{},"resources":{},"tolerations":[]}` | Prometheus generic configurations |
| zookeeper | object | `{"affinity":{},"image":{"repository":"bitnami/zookeeper","tag":"3.7.0"},"nodeSelector":{},"resources":{},"tolerations":[]}` | Zookeeper generic configurations |
----------------------------------------------
Autogenerated from chart metadata using [helm-docs v1.11.0](https://github.com/norwoodj/helm-docs/releases/v1.11.0)
| grafana | object | `{"adminPassword":"clearml","adminUser":"admin","datasources":{"datasources.yaml":{"apiVersion":1,"datasources":[{"access":"proxy","isDefault":true,"name":"Prometheus","type":"prometheus","url":"http://{{ .Release.Name }}-prometheus-server"}]}},"enabled":true}` | Configuration from https://github.com/grafana/helm-charts/blob/main/charts/grafana/values.yaml |
| imageCredentials | object | `{"email":"someone@host.com","enabled":false,"existingSecret":"","password":"pwd","registry":"docker.io","username":"someone"}` | Private image registry configuration |
| imageCredentials.email | string | `"someone@host.com"` | Email |
| imageCredentials.enabled | bool | `false` | Use private authentication mode |
| imageCredentials.existingSecret | string | `""` | If this is set, chart will not generate a secret but will use what is defined here |
| imageCredentials.password | string | `"pwd"` | Registry password |
| imageCredentials.registry | string | `"docker.io"` | Registry name |
| imageCredentials.username | string | `"someone"` | Registry username |
| kafka | object | `{"enabled":true}` | Configuration from https://github.com/bitnami/charts/blob/main/bitnami/kafka/values.yaml |
| prometheus | object | `{"enabled":true,"extraScrapeConfigs":"- job_name: \"{{ .Release.Name }}-stats\"\n static_configs:\n - targets:\n - \"{{ .Release.Name }}-statistics:9999\"\n","kube-state-metrics":{"enabled":false},"prometheus-node-exporter":{"enabled":false},"prometheus-pushgateway":{"enabled":false},"serverFiles":{"prometheus.yml":{"scrape_configs":[{"job_name":"prometheus","static_configs":[{"targets":["localhost:9090"]}]}]}}}` | Configuration from https://github.com/prometheus-community/helm-charts/blob/main/charts/prometheus/values.yaml |


@@ -0,0 +1,46 @@
# ClearML Kubernetes Serving
{{ template "chart.deprecationWarning" . }}
{{ template "chart.badgesSection" . }}
{{ template "chart.description" . }}
{{ template "chart.homepageLine" . }}
{{ template "chart.maintainersSection" . }}
## Introduction
**clearml-serving** is the Kubernetes serving chart for [ClearML](https://github.com/clearml/clearml-serving).
It allows you to serve models on a Kubernetes cluster.
## Add to local Helm repository
To add this chart to your local Helm repository:
```
helm repo add clearml https://clearml.github.io/clearml-helm-charts
```
# Upgrading Chart
## Upgrades / Values upgrades
Updating to the latest version of this chart can be done in two steps:
```
helm repo update
helm upgrade clearml-serving clearml/clearml-serving
```
Changing values on an existing installation can be done with:
```
helm upgrade clearml-serving clearml/clearml-serving --version <CURRENT CHART VERSION> -f custom_values.yaml
```
{{ template "chart.sourcesSection" . }}
{{ template "chart.requirementsSection" . }}
{{ template "chart.valuesSection" . }}

Binary file not shown.

Binary file not shown.

Binary file not shown.


@@ -1,7 +1,7 @@
{{/*
Expand the name of the chart.
*/}}
{{- define "clearml-serving.name" -}}
{{- define "clearmlServing.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" }}
{{- end }}
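The `trunc 63 | trimSuffix "-"` pipeline in the helper above enforces the 63-character DNS label limit on generated names. A minimal Python sketch of the same clamping (the function name is illustrative, not part of the chart):

```python
def clamp_name(name: str, limit: int = 63) -> str:
    """Mimic Helm's `trunc 63 | trimSuffix "-"`: cut to the limit,
    then drop a single trailing dash left behind by the truncation."""
    truncated = name[:limit]
    return truncated[:-1] if truncated.endswith("-") else truncated

print(clamp_name("a" * 80))  # 63 chars, fits a Kubernetes name field
print(clamp_name("my-release-" + "x" * 52))
```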
@@ -10,7 +10,7 @@ Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
If release name contains chart name it will be used as a full name.
*/}}
{{- define "clearml-serving.fullname" -}}
{{- define "clearmlServing.fullname" -}}
{{- if .Values.fullnameOverride }}
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" }}
{{- else }}
@@ -26,16 +26,16 @@ If release name contains chart name it will be used as a full name.
{{/*
Create chart name and version as used by the chart label.
*/}}
{{- define "clearml-serving.chart" -}}
{{- define "clearmlServing.chart" -}}
{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" }}
{{- end }}
{{/*
Common labels
*/}}
{{- define "clearml-serving.labels" -}}
helm.sh/chart: {{ include "clearml-serving.chart" . }}
{{ include "clearml-serving.selectorLabels" . }}
{{- define "clearmlServing.labels" -}}
helm.sh/chart: {{ include "clearmlServing.chart" . }}
{{ include "clearmlServing.selectorLabels" . }}
{{- if .Chart.AppVersion }}
app.kubernetes.io/version: {{ .Chart.AppVersion | quote }}
{{- end }}
@@ -45,22 +45,31 @@ app.kubernetes.io/managed-by: {{ .Release.Service }}
{{/*
Selector labels
*/}}
{{- define "clearml-serving.selectorLabels" -}}
app.kubernetes.io/name: {{ include "clearml-serving.name" . }}
{{- define "clearmlServing.selectorLabels" -}}
app.kubernetes.io/name: {{ include "clearmlServing.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
{{- end }}
{{/*
Create the name of the service account to use
*/}}
{{- define "clearml-serving.serviceAccountName" -}}
{{- define "clearmlServing.serviceAccountName" -}}
{{- if .Values.serviceAccount.create }}
{{- default (include "clearml-serving.fullname" .) .Values.serviceAccount.name }}
{{- default (include "clearmlServing.fullname" .) .Values.serviceAccount.name }}
{{- else }}
{{- default "default" .Values.serviceAccount.name }}
{{- end }}
{{- end }}
{{/*
Create secret to access docker registry
*/}}
{{- define "imagePullSecret" }}
{{- with .Values.imageCredentials }}
{{- printf "{\"auths\":{\"%s\":{\"username\":\"%s\",\"password\":\"%s\",\"email\":\"%s\",\"auth\":\"%s\"}}}" .registry .username .password .email (printf "%s:%s" .username .password | b64enc) | b64enc }}
{{- end }}
{{- end }}
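The `imagePullSecret` helper above emits a base64-encoded Docker config JSON via a `printf`/`b64enc` chain. A Python sketch of the same construction (credentials here are the placeholder defaults from values.yaml, not real ones):

```python
import base64
import json

def image_pull_secret(registry: str, username: str, password: str, email: str) -> str:
    """Reproduce the template's output: an inner "username:password" auth
    string, wrapped in a dockerconfigjson document, then the whole
    compact JSON payload base64-encoded."""
    auth = base64.b64encode(f"{username}:{password}".encode()).decode()
    payload = {
        "auths": {
            registry: {
                "username": username,
                "password": password,
                "email": email,
                "auth": auth,
            }
        }
    }
    return base64.b64encode(
        json.dumps(payload, separators=(",", ":")).encode()
    ).decode()

secret = image_pull_secret("docker.io", "someone", "pwd", "someone@host.com")
```

The result is what lands in the `.dockerconfigjson` key of the generated `kubernetes.io/dockerconfigjson` Secret.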
{{/*
Return the target Kubernetes version
*/}}


@@ -1,27 +0,0 @@
apiVersion: apps/v1
kind: Deployment
metadata:
annotations: {}
labels:
clearml.serving.service: alertmanager
name: alertmanager
spec:
replicas: 1
selector:
matchLabels:
clearml.serving.service: alertmanager
strategy: {}
template:
metadata:
annotations: {}
labels:
clearml.serving.network/clearml-serving-backend: "true"
clearml.serving.service: alertmanager
spec:
containers:
- image: "{{ .Values.alertmanager.image.repository }}:{{ .Values.alertmanager.image.tag }}"
name: clearml-serving-alertmanager
ports:
- containerPort: 9093
resources: {}
restartPolicy: Always


@@ -1,14 +0,0 @@
apiVersion: v1
kind: Service
metadata:
annotations: {}
labels:
clearml.serving.service: alertmanager
name: clearml-serving-alertmanager
spec:
ports:
- name: "9093"
port: 9093
targetPort: 9093
selector:
clearml.serving.service: alertmanager


@@ -0,0 +1,11 @@
{{- if .Values.imageCredentials.enabled }}
{{- if not .Values.imageCredentials.existingSecret }}
apiVersion: v1
kind: Secret
metadata:
name: {{ include "clearmlServing.fullname" . }}-ark
type: kubernetes.io/dockerconfigjson
data:
.dockerconfigjson: {{ template "imagePullSecret" . }}
{{- end }}
{{- end }}


@@ -1,13 +0,0 @@
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: clearml-serving-backend
spec:
ingress:
- from:
- podSelector:
matchLabels:
clearml.serving.network/clearml-serving-backend: "true"
podSelector:
matchLabels:
clearml.serving.network/clearml-serving-backend: "true"


@@ -0,0 +1,13 @@
{{- if .Values.clearml_serving_inference.additionalConfigs }}
apiVersion: v1
kind: ConfigMap
metadata:
name: "{{ include "clearmlServing.fullname" . }}-inference-configmap"
labels:
{{- include "clearmlServing.labels" . | nindent 4 }}
data:
{{- range $key, $val := .Values.clearml_serving_inference.additionalConfigs }}
{{ $key }}: |
{{- $val | nindent 4 }}
{{- end }}
{{- end }}


@@ -3,21 +3,47 @@ kind: Deployment
metadata:
annotations: {}
labels:
clearml.serving.service: clearml-serving-inference
name: clearml-serving-inference
clearml.serving.service: {{ include "clearmlServing.fullname" . }}-inference
name: {{ include "clearmlServing.fullname" . }}-inference
spec:
replicas: 1
selector:
matchLabels:
clearml.serving.service: clearml-serving-inference
clearml.serving.service: {{ include "clearmlServing.fullname" . }}-inference
strategy: {}
template:
metadata:
annotations: {}
labels:
clearml.serving.network/clearml-serving-backend: "true"
clearml.serving.service: clearml-serving-inference
clearml.serving.service: {{ include "clearmlServing.fullname" . }}-inference
spec:
{{- if .Values.imageCredentials.enabled }}
imagePullSecrets:
{{- if .Values.imageCredentials.existingSecret }}
- name: {{ .Values.imageCredentials.existingSecret }}
{{- else }}
- name: clearml-registry-key
{{- end }}
{{- end }}
{{- if or .Values.clearml_serving_inference.additionalConfigs .Values.clearml_serving_inference.existingAdditionalConfigsConfigMap .Values.clearml_serving_inference.existingAdditionalConfigsSecret }}
volumes:
- name: additional-config
{{- if or .Values.clearml_serving_inference.existingAdditionalConfigsConfigMap }}
configMap:
name: {{ .Values.clearml_serving_inference.existingAdditionalConfigsConfigMap }}
{{- else if or .Values.clearml_serving_inference.existingAdditionalConfigsSecret }}
secret:
secretName: {{ .Values.clearml_serving_inference.existingAdditionalConfigsSecret }}
{{- else if or .Values.clearml_serving_inference.additionalConfigs }}
configMap:
name: "{{ include "clearmlServing.fullname" . }}-inference-configmap"
{{- end }}
{{- end }}
{{- with .Values.clearml_serving_inference.affinity }}
affinity:
{{- toYaml . | nindent 8 }}
{{- end }}
containers:
- env:
- name: CLEARML_API_ACCESS_KEY
@@ -30,15 +56,21 @@ spec:
value: "{{ .Values.clearml.filesHost }}"
- name: CLEARML_WEB_HOST
value: "{{ .Values.clearml.webHost }}"
{{- if .Values.clearml_serving_statistics.enabled }}
- name: CLEARML_DEFAULT_KAFKA_SERVE_URL
value: clearml-serving-kafka:9092
{{- if .Values.clearml.kafkaServeUrl }}
value: {{ .Values.clearml.kafkaServeUrl }}
{{- else }}
value: {{ include "clearmlServing.fullname" . }}-kafka:9092
{{- end }}
{{- end }}
- name: CLEARML_SERVING_POLL_FREQ
value: "1.0"
- name: CLEARML_DEFAULT_BASE_SERVE_URL
value: "{{ .Values.clearml.defaultBaseServeUrl }}"
- name: CLEARML_DEFAULT_TRITON_GRPC_ADDR
{{- if .Values.clearml_serving_triton.enabled }}
value: "clearml-serving-triton:8001"
value: "{{ include "clearmlServing.fullname" . }}-triton:8001"
{{- else }}
value: ""
{{- end }}
@@ -51,12 +83,29 @@ spec:
- name: CLEARML_USE_GUNICORN
value: "true"
{{- if .Values.clearml_serving_inference.extraPythonPackages }}
- name: EXTRA_PYTHON_PACKAGES
- name: CLEARML_EXTRA_PYTHON_PACKAGES
value: '{{ join " " .Values.clearml_serving_inference.extraPythonPackages }}'
{{- end }}
{{- with .Values.clearml_serving_inference.extraEnvironment }}
{{- toYaml . | nindent 12 }}
{{- end }}
image: "{{ .Values.clearml_serving_inference.image.repository }}:{{ .Values.clearml_serving_inference.image.tag }}"
name: clearml-serving-inference
name: {{ include "clearmlServing.fullname" . }}-inference
ports:
- containerPort: 8080
resources: {}
{{- if or .Values.clearml_serving_inference.additionalConfigs .Values.clearml_serving_inference.existingAdditionalConfigsConfigMap .Values.clearml_serving_inference.existingAdditionalConfigsSecret }}
volumeMounts:
- name: additional-config
mountPath: /opt/clearml/config
{{- end }}
{{- with .Values.clearml_serving_inference.nodeSelector }}
nodeSelector:
{{- toYaml . | nindent 12 }}
{{- end }}
{{- with .Values.clearml_serving_inference.tolerations }}
tolerations:
{{- toYaml . | nindent 12 }}
{{- end }}
resources:
{{- toYaml .Values.clearml_serving_inference.resources | nindent 12 }}
restartPolicy: Always
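A hedged sketch of how the volume logic above is typically driven from values (file name and contents here are hypothetical; per the template, an existing ConfigMap wins over an existing Secret, which wins over inline `additionalConfigs`):

```yaml
clearml_serving_inference:
  # Inline files are rendered into the chart-managed ConfigMap
  # and mounted at /opt/clearml/config
  additionalConfigs:
    example.conf: |
      # hypothetical file content
  # If set, these take precedence over additionalConfigs
  existingAdditionalConfigsConfigMap: ""
  existingAdditionalConfigsSecret: ""
```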


@@ -2,16 +2,16 @@
apiVersion: {{ include "common.capabilities.hpa.apiVersion" ( dict "context" $ ) }}
kind: HorizontalPodAutoscaler
metadata:
name: clearml-serving-inference-hpa
name: {{ include "clearmlServing.fullname" . }}-inference-hpa
namespace: {{ .Release.Namespace | quote }}
annotations: {}
labels:
clearml.serving.service: clearml-serving-inference
clearml.serving.service: {{ include "clearmlServing.fullname" . }}-inference
spec:
scaleTargetRef:
apiVersion: "apps/v1"
kind: Deployment
name: clearml-serving-inference
name: {{ include "clearmlServing.fullname" . }}-inference
minReplicas: {{ .Values.clearml_serving_inference.autoscaling.minReplicas }}
maxReplicas: {{ .Values.clearml_serving_inference.autoscaling.maxReplicas }}
metrics:


@@ -8,12 +8,15 @@ apiVersion: extensions/v1beta1
{{- end }}
kind: Ingress
metadata:
name: clearml-serving-inference
name: {{ include "clearmlServing.fullname" . }}-inference
labels:
clearml.serving.service: clearml-serving-inference
clearml.serving.service: {{ include "clearmlServing.fullname" . }}-inference
annotations:
{{- toYaml .Values.clearml_serving_inference.ingress.annotations | nindent 4 }}
spec:
{{- if .Values.clearml_serving_inference.ingress.ingressClassName }}
ingressClassName: {{ .Values.clearml_serving_inference.ingress.ingressClassName }}
{{- end }}
{{- if .Values.clearml_serving_inference.ingress.tlsSecretName }}
tls:
- hosts:
@@ -29,12 +32,12 @@ spec:
pathType: Prefix
backend:
service:
name: clearml-serving-inference
name: {{ include "clearmlServing.fullname" . }}-inference
port:
number: 8080
{{ else }}
backend:
serviceName: clearml-serving-inference
serviceName: {{ include "clearmlServing.fullname" . }}-inference
servicePort: 8080
{{ end }}
{{- end }}


@@ -3,12 +3,12 @@ kind: Service
metadata:
annotations: {}
labels:
clearml.serving.service: clearml-serving-inference
name: clearml-serving-inference
clearml.serving.service: {{ include "clearmlServing.fullname" . }}-inference
name: {{ include "clearmlServing.fullname" . }}-inference
spec:
ports:
- name: "8080"
port: 8080
targetPort: 8080
selector:
clearml.serving.service: clearml-serving-inference
clearml.serving.service: {{ include "clearmlServing.fullname" . }}-inference


@@ -0,0 +1,15 @@
{{- if .Values.clearml_serving_statistics.enabled }}
{{- if .Values.clearml_serving_statistics.additionalConfigs }}
apiVersion: v1
kind: ConfigMap
metadata:
name: "{{ include "clearmlServing.fullname" . }}-statistics-configmap"
labels:
{{- include "clearmlServing.labels" . | nindent 4 }}
data:
{{- range $key, $val := .Values.clearml_serving_statistics.additionalConfigs }}
{{ $key }}: |
{{- $val | nindent 4 }}
{{- end }}
{{- end }}
{{- end }}


@@ -1,23 +1,50 @@
{{- if .Values.clearml_serving_statistics.enabled }}
apiVersion: apps/v1
kind: Deployment
metadata:
annotations: {}
labels:
clearml.serving.service: clearml-serving-statistics
name: clearml-serving-statistics
clearml.serving.service: {{ include "clearmlServing.fullname" . }}-statistics
name: {{ include "clearmlServing.fullname" . }}-statistics
spec:
replicas: 1
selector:
matchLabels:
clearml.serving.service: clearml-serving-statistics
clearml.serving.service: {{ include "clearmlServing.fullname" . }}-statistics
strategy: {}
template:
metadata:
annotations: {}
labels:
clearml.serving.network/clearml-serving-backend: "true"
clearml.serving.service: clearml-serving-statistics
clearml.serving.service: {{ include "clearmlServing.fullname" . }}-statistics
spec:
{{- if .Values.imageCredentials.enabled }}
imagePullSecrets:
{{- if .Values.imageCredentials.existingSecret }}
- name: {{ .Values.imageCredentials.existingSecret }}
{{- else }}
- name: clearml-registry-key
{{- end }}
{{- end }}
{{- if or .Values.clearml_serving_statistics.additionalConfigs .Values.clearml_serving_statistics.existingAdditionalConfigsConfigMap .Values.clearml_serving_statistics.existingAdditionalConfigsSecret }}
volumes:
- name: additional-config
{{- if .Values.clearml_serving_statistics.existingAdditionalConfigsConfigMap }}
configMap:
name: {{ .Values.clearml_serving_statistics.existingAdditionalConfigsConfigMap }}
{{- else if .Values.clearml_serving_statistics.existingAdditionalConfigsSecret }}
secret:
secretName: {{ .Values.clearml_serving_statistics.existingAdditionalConfigsSecret }}
{{- else if .Values.clearml_serving_statistics.additionalConfigs }}
configMap:
name: "{{ include "clearmlServing.fullname" . }}-statistics-configmap"
{{- end }}
{{- end }}
{{- with .Values.clearml_serving_statistics.affinity }}
affinity:
{{- toYaml . | nindent 8 }}
{{- end }}
containers:
- env:
- name: CLEARML_API_ACCESS_KEY
@@ -31,18 +58,40 @@ spec:
- name: CLEARML_WEB_HOST
value: "{{ .Values.clearml.webHost }}"
- name: CLEARML_DEFAULT_KAFKA_SERVE_URL
value: clearml-serving-kafka:9092
{{- if .Values.clearml.kafkaServeUrl }}
value: {{ .Values.clearml.kafkaServeUrl }}
{{- else }}
value: {{ include "clearmlServing.fullname" . }}-kafka:9092
{{- end }}
- name: CLEARML_SERVING_POLL_FREQ
value: "1.0"
- name: CLEARML_SERVING_TASK_ID
value: "{{ .Values.clearml.servingTaskId }}"
{{- if .Values.clearml_serving_statistics.extraPythonPackages }}
- name: EXTRA_PYTHON_PACKAGES
- name: CLEARML_EXTRA_PYTHON_PACKAGES
value: '{{ join " " .Values.clearml_serving_statistics.extraPythonPackages }}'
{{- end }}
{{- with .Values.clearml_serving_statistics.extraEnvironment }}
{{- toYaml . | nindent 12 }}
{{- end }}
image: "{{ .Values.clearml_serving_statistics.image.repository }}:{{ .Values.clearml_serving_statistics.image.tag }}"
name: clearml-serving-statistics
name: {{ include "clearmlServing.fullname" . }}-statistics
ports:
- containerPort: 9999
resources: {}
{{- if or .Values.clearml_serving_statistics.additionalConfigs .Values.clearml_serving_statistics.existingAdditionalConfigsConfigMap .Values.clearml_serving_statistics.existingAdditionalConfigsSecret }}
volumeMounts:
- name: additional-config
mountPath: /opt/clearml/config
{{- end }}
{{- with .Values.clearml_serving_statistics.nodeSelector }}
nodeSelector:
{{- toYaml . | nindent 12 }}
{{- end }}
{{- with .Values.clearml_serving_statistics.tolerations }}
tolerations:
{{- toYaml . | nindent 12 }}
{{- end }}
resources:
{{- toYaml .Values.clearml_serving_statistics.resources | nindent 12 }}
restartPolicy: Always
{{- end }}


@@ -1,14 +1,16 @@
{{- if .Values.clearml_serving_statistics.enabled }}
apiVersion: v1
kind: Service
metadata:
annotations: {}
labels:
clearml.serving.service: clearml-serving-statistics
name: clearml-serving-statistics
clearml.serving.service: {{ include "clearmlServing.fullname" . }}-statistics
name: {{ include "clearmlServing.fullname" . }}-statistics
spec:
ports:
- name: "9999"
port: 9999
targetPort: 9999
selector:
clearml.serving.service: clearml-serving-statistics
clearml.serving.service: {{ include "clearmlServing.fullname" . }}-statistics
{{- end }}


@@ -0,0 +1,15 @@
{{- if .Values.clearml_serving_triton.enabled }}
{{- if .Values.clearml_serving_triton.additionalConfigs }}
apiVersion: v1
kind: ConfigMap
metadata:
name: "{{ include "clearmlServing.fullname" . }}-triton-configmap"
labels:
{{- include "clearmlServing.labels" . | nindent 4 }}
data:
{{- range $key, $val := .Values.clearml_serving_triton.additionalConfigs }}
{{ $key }}: |
{{- $val | nindent 4 }}
{{- end }}
{{- end }}
{{- end }}


@@ -4,21 +4,54 @@ kind: Deployment
metadata:
annotations: {}
labels:
clearml.serving.service: clearml-serving-triton
name: clearml-serving-triton
clearml.serving.service: {{ include "clearmlServing.fullname" . }}-triton
name: {{ include "clearmlServing.fullname" . }}-triton
spec:
replicas: 1
selector:
matchLabels:
clearml.serving.service: clearml-serving-triton
clearml.serving.service: {{ include "clearmlServing.fullname" . }}-triton
strategy: {}
template:
metadata:
annotations: {}
labels:
clearml.serving.network/clearml-serving-backend: "true"
clearml.serving.service: clearml-serving-triton
clearml.serving.service: {{ include "clearmlServing.fullname" . }}-triton
spec:
{{- if .Values.imageCredentials.enabled }}
imagePullSecrets:
{{- if .Values.imageCredentials.existingSecret }}
- name: {{ .Values.imageCredentials.existingSecret }}
{{- else }}
- name: clearml-registry-key
{{- end }}
{{- end }}
{{ if .Values.clearml_serving_triton.runtimeClassName }}
runtimeClassName: {{ .Values.clearml_serving_triton.runtimeClassName }}
{{- end }}
{{- if or .Values.clearml_serving_triton.additionalConfigs .Values.clearml_serving_triton.existingAdditionalConfigsConfigMap .Values.clearml_serving_triton.existingAdditionalConfigsSecret }}
volumes:
- name: additional-config
{{- if .Values.clearml_serving_triton.existingAdditionalConfigsConfigMap }}
configMap:
name: {{ .Values.clearml_serving_triton.existingAdditionalConfigsConfigMap }}
{{- else if .Values.clearml_serving_triton.existingAdditionalConfigsSecret }}
secret:
secretName: {{ .Values.clearml_serving_triton.existingAdditionalConfigsSecret }}
{{- else if .Values.clearml_serving_triton.additionalConfigs }}
configMap:
name: "{{ include "clearmlServing.fullname" . }}-triton-configmap"
{{- end }}
{{- end }}
{{- with .Values.clearml_serving_triton.affinity }}
affinity:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.clearml_serving_triton.tolerations }}
tolerations:
{{- toYaml . | nindent 8 }}
{{- end }}
containers:
- env:
- name: CLEARML_API_ACCESS_KEY
@@ -38,14 +71,27 @@ spec:
- name: CLEARML_TRITON_METRIC_FREQ
value: "1.0"
{{- if .Values.clearml_serving_triton.extraPythonPackages }}
- name: EXTRA_PYTHON_PACKAGES
- name: CLEARML_EXTRA_PYTHON_PACKAGES
value: '{{ join " " .Values.clearml_serving_triton.extraPythonPackages }}'
{{- end }}
{{- with .Values.clearml_serving_triton.extraEnvironment }}
{{- toYaml . | nindent 12 }}
{{- end }}
image: "{{ .Values.clearml_serving_triton.image.repository }}:{{ .Values.clearml_serving_triton.image.tag }}"
name: clearml-serving-triton
name: {{ include "clearmlServing.fullname" . }}-triton
ports:
- containerPort: 8001
resources: {}
{{- if or .Values.clearml_serving_triton.additionalConfigs .Values.clearml_serving_triton.existingAdditionalConfigsConfigMap .Values.clearml_serving_triton.existingAdditionalConfigsSecret }}
volumeMounts:
- name: additional-config
mountPath: /opt/clearml/config
{{- end }}
resources:
{{- toYaml .Values.clearml_serving_triton.resources | nindent 12 }}
restartPolicy: Always
{{- with .Values.clearml_serving_triton.nodeSelector }}
nodeSelector:
{{- toYaml . | nindent 8 }}
{{- end }}
{{ end }}


@@ -1,17 +1,18 @@
{{- if .Values.clearml_serving_triton.enabled }}
{{- if .Values.clearml_serving_triton.autoscaling.enabled }}
apiVersion: {{ include "common.capabilities.hpa.apiVersion" ( dict "context" $ ) }}
kind: HorizontalPodAutoscaler
metadata:
name: clearml-serving-triton-hpa
name: {{ include "clearmlServing.fullname" . }}-triton-hpa
namespace: {{ .Release.Namespace | quote }}
annotations: {}
labels:
clearml.serving.service: clearml-serving-triton
clearml.serving.service: {{ include "clearmlServing.fullname" . }}-triton
spec:
scaleTargetRef:
apiVersion: "apps/v1"
kind: Deployment
name: clearml-serving-triton
name: {{ include "clearmlServing.fullname" . }}-triton
minReplicas: {{ .Values.clearml_serving_triton.autoscaling.minReplicas }}
maxReplicas: {{ .Values.clearml_serving_triton.autoscaling.maxReplicas }}
metrics:
@@ -40,3 +41,4 @@ spec:
{{- end }}
{{- end }}
{{- end }}
{{- end }}


@@ -9,12 +9,15 @@ apiVersion: extensions/v1beta1
{{- end }}
kind: Ingress
metadata:
name: clearml-serving-triton
name: {{ include "clearmlServing.fullname" . }}-triton
labels:
clearml.serving.service: clearml-serving-triton
clearml.serving.service: {{ include "clearmlServing.fullname" . }}-triton
annotations:
{{- toYaml .Values.clearml_serving_triton.ingress.annotations | nindent 4 }}
spec:
{{- if .Values.clearml_serving_triton.ingress.ingressClassName }}
ingressClassName: {{ .Values.clearml_serving_triton.ingress.ingressClassName }}
{{- end }}
{{- if .Values.clearml_serving_triton.ingress.tlsSecretName }}
tls:
- hosts:
@@ -30,12 +33,12 @@ spec:
pathType: Prefix
backend:
service:
name: clearml-serving-triton
name: {{ include "clearmlServing.fullname" . }}-triton
port:
number: 8001
{{ else }}
backend:
serviceName: clearml-serving-triton
serviceName: {{ include "clearmlServing.fullname" . }}-triton
servicePort: 8001
{{ end }}
{{- end }}


@@ -4,13 +4,13 @@ kind: Service
metadata:
annotations: {}
labels:
clearml.serving.service: clearml-serving-triton
name: clearml-serving-triton
clearml.serving.service: {{ include "clearmlServing.fullname" . }}-triton
name: {{ include "clearmlServing.fullname" . }}-triton
spec:
ports:
- name: "8001"
port: 8001
targetPort: 8001
selector:
clearml.serving.service: clearml-serving-triton
clearml.serving.service: {{ include "clearmlServing.fullname" . }}-triton
{{ end }}


@@ -1,14 +0,0 @@
apiVersion: v1
kind: Secret
metadata:
name: grafana-config
stringData:
datasource.yaml: |-
apiVersion: 1
datasources:
- name: Prometheus
type: prometheus
# Access mode - proxy (server in the UI) or direct (browser in the UI).
access: proxy
url: http://clearml-serving-prometheus:9090


@@ -1,35 +0,0 @@
apiVersion: apps/v1
kind: Deployment
metadata:
annotations: {}
labels:
clearml.serving.service: grafana
name: grafana
spec:
replicas: 1
selector:
matchLabels:
clearml.serving.service: grafana
strategy:
type: Recreate
template:
metadata:
annotations: {}
labels:
clearml.serving.network/clearml-serving-backend: "true"
clearml.serving.service: grafana
spec:
containers:
- image: "{{ .Values.grafana.image.repository }}:{{ .Values.grafana.image.tag }}"
name: clearml-serving-grafana
ports:
- containerPort: 3000
resources: {}
volumeMounts:
- mountPath: /etc/grafana/provisioning/datasources/
name: grafana-conf
restartPolicy: Always
volumes:
- name: grafana-conf
secret:
secretName: grafana-config


@@ -1,40 +0,0 @@
{{- if .Values.grafana.ingress.enabled -}}
{{- if semverCompare ">=1.19-0" .Capabilities.KubeVersion.GitVersion -}}
apiVersion: networking.k8s.io/v1
{{- else if semverCompare ">=1.14-0" .Capabilities.KubeVersion.GitVersion -}}
apiVersion: networking.k8s.io/v1beta1
{{- else -}}
apiVersion: extensions/v1beta1
{{- end }}
kind: Ingress
metadata:
name: clearml-serving-grafana
labels:
clearml.serving.service: clearml-serving-grafana
annotations:
{{- toYaml .Values.grafana.ingress.annotations | nindent 4 }}
spec:
{{- if .Values.grafana.ingress.tlsSecretName }}
tls:
- hosts:
- {{ .Values.grafana.ingress.hostName }}
secretName: {{ .Values.grafana.ingress.tlsSecretName }}
{{- end }}
rules:
- host: {{ .Values.grafana.ingress.hostName }}
http:
paths:
- path: {{ .Values.grafana.ingress.path }}
{{ if semverCompare ">=1.19-0" .Capabilities.KubeVersion.GitVersion }}
pathType: Prefix
backend:
service:
name: clearml-serving-grafana
port:
number: 3000
{{ else }}
backend:
serviceName: clearml-serving-grafana
servicePort: 3000
{{ end }}
{{- end }}


@@ -1,14 +0,0 @@
apiVersion: v1
kind: Service
metadata:
annotations: {}
labels:
clearml.serving.service: grafana
name: clearml-serving-grafana
spec:
ports:
- name: "3000"
port: 3000
targetPort: 3000
selector:
clearml.serving.service: grafana


@@ -1,40 +0,0 @@
apiVersion: apps/v1
kind: Deployment
metadata:
annotations: {}
labels:
clearml.serving.service: kafka
name: kafka
spec:
replicas: 1
selector:
matchLabels:
clearml.serving.service: kafka
strategy: {}
template:
metadata:
annotations: {}
labels:
clearml.serving.network/clearml-serving-backend: "true"
clearml.serving.service: kafka
spec:
containers:
- env:
- name: ALLOW_PLAINTEXT_LISTENER
value: "yes"
- name: KAFKA_BROKER_ID
value: "1"
- name: KAFKA_CFG_ADVERTISED_LISTENERS
value: PLAINTEXT://clearml-serving-kafka:9092
- name: KAFKA_CFG_LISTENERS
value: PLAINTEXT://0.0.0.0:9092
- name: KAFKA_CFG_ZOOKEEPER_CONNECT
value: clearml-serving-zookeeper:2181
- name: KAFKA_CREATE_TOPICS
value: '"topic_test:1:1"'
image: "{{ .Values.kafka.image.repository }}:{{ .Values.kafka.image.tag }}"
name: clearml-serving-kafka
ports:
- containerPort: 9092
resources: {}
restartPolicy: Always


@@ -1,14 +0,0 @@
apiVersion: v1
kind: Service
metadata:
annotations: {}
labels:
clearml.serving.service: kafka
name: clearml-serving-kafka
spec:
ports:
- name: "9092"
port: 9092
targetPort: 9092
selector:
clearml.serving.service: kafka


@@ -1,28 +0,0 @@
apiVersion: v1
kind: Secret
metadata:
name: prometheus-config
stringData:
prometheus.yml: |-
global:
scrape_interval: "15s" # By default, scrape targets every 15 seconds.
evaluation_interval: 15s # By default, scrape targets every 15 seconds.
external_labels:
monitor: 'clearml-serving'
scrape_configs:
# The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.
- job_name: 'prometheus'
scrape_interval: 5s
static_configs:
- targets: ['localhost:9090']
# The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.
- job_name: 'clearml-inference-stats'
scrape_interval: 5s
static_configs:
- targets: ['clearml-serving-statistics:9999']


@@ -1,42 +0,0 @@
apiVersion: apps/v1
kind: Deployment
metadata:
annotations: {}
labels:
clearml.serving.service: prometheus
name: prometheus
spec:
replicas: 1
selector:
matchLabels:
clearml.serving.service: prometheus
strategy:
type: Recreate
template:
metadata:
annotations: {}
labels:
clearml.serving.network/clearml-serving-backend: "true"
clearml.serving.service: prometheus
spec:
containers:
- args:
- --config.file=/mnt/prometheus.yml
- --storage.tsdb.path=/prometheus
- --web.console.libraries=/etc/prometheus/console_libraries
- --web.console.templates=/etc/prometheus/consoles
- --storage.tsdb.retention.time=200h
- --web.enable-lifecycle
image: "{{ .Values.prometheus.image.repository }}:{{ .Values.prometheus.image.tag }}"
name: clearml-serving-prometheus
ports:
- containerPort: 9090
resources: {}
volumeMounts:
- mountPath: /mnt
name: prometheus-conf
restartPolicy: Always
volumes:
- name: prometheus-conf
secret:
secretName: prometheus-config


@@ -1,14 +0,0 @@
apiVersion: v1
kind: Service
metadata:
annotations: {}
labels:
clearml.serving.service: prometheus
name: clearml-serving-prometheus
spec:
ports:
- name: "9090"
port: 9090
targetPort: 9090
selector:
clearml.serving.service: prometheus


@@ -1,30 +0,0 @@
apiVersion: apps/v1
kind: Deployment
metadata:
annotations: {}
labels:
clearml.serving.service: zookeeper
name: zookeeper
spec:
replicas: 1
selector:
matchLabels:
clearml.serving.service: zookeeper
strategy: {}
template:
metadata:
annotations: {}
labels:
clearml.serving.network/clearml-serving-backend: "true"
clearml.serving.service: zookeeper
spec:
containers:
- env:
- name: ALLOW_ANONYMOUS_LOGIN
value: "yes"
image: "{{ .Values.zookeeper.image.repository }}:{{ .Values.zookeeper.image.tag }}"
name: clearml-serving-zookeeper
ports:
- containerPort: 2181
resources: {}
restartPolicy: Always


@@ -1,14 +0,0 @@
apiVersion: v1
kind: Service
metadata:
annotations: {}
labels:
clearml.serving.service: zookeeper
name: clearml-serving-zookeeper
spec:
ports:
- name: "2181"
port: 2181
targetPort: 2181
selector:
clearml.serving.service: zookeeper


@@ -1,3 +1,18 @@
# -- Private image registry configuration
imageCredentials:
# -- Use private authentication mode
enabled: false
# -- If set, the chart will use this existing secret instead of generating one
existingSecret: ""
# -- Registry name
registry: docker.io
# -- Registry username
username: someone
# -- Registry password
password: pwd
# -- Email
email: someone@host.com
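A minimal sketch for pulling images from a private registry with a pre-created secret (the secret name is hypothetical); when `existingSecret` is set, the templates reference it instead of the generated `clearml-registry-key`:

```yaml
imageCredentials:
  enabled: true
  # docker-registry secret already present in the release namespace
  existingSecret: "my-registry-secret"
```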
# -- ClearML generic configurations
clearml:
apiAccessKey: "ClearML API Access Key"
@@ -7,69 +22,16 @@ clearml:
webHost: http://clearml-server-webserver:80
defaultBaseServeUrl: http://127.0.0.1:8080/serve
servingTaskId: "ClearML Serving Task ID"
# -- Zookeeper generic configurations
zookeeper:
image:
repository: "bitnami/zookeeper"
tag: "3.7.0"
nodeSelector: {}
tolerations: []
affinity: {}
resources: {}
# -- Kafka generic configurations
kafka:
image:
repository: "bitnami/kafka"
tag: "3.1.0"
nodeSelector: {}
tolerations: []
affinity: {}
resources: {}
# -- Prometheus generic configurations
prometheus:
image:
repository: "prom/prometheus"
tag: "v2.34.0"
nodeSelector: {}
tolerations: []
affinity: {}
resources: {}
# -- Grafana generic configurations
grafana:
image:
repository: "grafana/grafana"
tag: "8.4.4-ubuntu"
nodeSelector: {}
tolerations: []
affinity: {}
resources: {}
ingress:
enabled: false
hostName: "serving-grafana.clearml.127-0-0-1.nip.io"
tlsSecretName: ""
annotations: {}
path: "/"
# -- Alertmanager generic configurations
alertmanager:
image:
repository: "prom/alertmanager"
tag: "v0.23.0"
nodeSelector: {}
tolerations: []
affinity: {}
resources: {}
kafkaServeUrl: ""
# -- ClearML serving statistics configurations
clearml_serving_statistics:
# -- Enable ClearML Serving Statistics
enabled: true
# -- Container Image
image:
repository: "allegroai/clearml-serving-statistics"
tag: "1.2.0"
tag: "1.3.0"
# -- Node Selector configuration
nodeSelector: {}
# -- Tolerations configuration
@@ -78,17 +40,26 @@ clearml_serving_statistics:
affinity: {}
# -- Pod resources definition
resources: {}
# -- Extra environment variables
extraEnvironment: []
# -- Extra Python Packages to be installed in running pods
extraPythonPackages: []
# - numpy==1.22.4
# - pandas==1.4.2
# -- Existing ConfigMap whose files will be mounted and read by the pod (examples in values.yaml)
existingAdditionalConfigsConfigMap: ""
# -- Existing Secret whose files will be mounted and read by the pod (examples in values.yaml); used only when existingAdditionalConfigsConfigMap is not set
existingAdditionalConfigsSecret: ""
# -- Files declared here are mounted under /opt/clearml/config and read by the pod (examples in values.yaml); used only when existingAdditionalConfigsSecret is not set
additionalConfigs: {}
# additionalFile.conf: |
# <filecontent>
# -- ClearML serving inference configurations
clearml_serving_inference:
# -- Container Image
image:
repository: "allegroai/clearml-serving-inference"
tag: "1.2.0"
tag: "1.3.0"
# -- Node Selector configuration
nodeSelector: {}
# -- Tolerations configuration
@@ -97,10 +68,20 @@ clearml_serving_inference:
affinity: {}
# -- Pod resources definition
resources: {}
# -- Extra environment variables
extraEnvironment: []
# -- Extra Python Packages to be installed in running pods
extraPythonPackages: []
# - numpy==1.22.4
# - pandas==1.4.2
# -- Existing ConfigMap whose files will be mounted and read by the pod (examples in values.yaml)
existingAdditionalConfigsConfigMap: ""
# -- Existing Secret whose files will be mounted and read by the pod (examples in values.yaml); used only when existingAdditionalConfigsConfigMap is not set
existingAdditionalConfigsSecret: ""
# -- Files declared here are mounted under /opt/clearml/config and read by the pod (examples in values.yaml); used only when existingAdditionalConfigsSecret is not set
additionalConfigs: {}
# additionalFile.conf: |
# <filecontent>
# -- Autoscaling configuration
autoscaling:
enabled: false
@@ -110,10 +91,17 @@ clearml_serving_inference:
targetMemory: 50
# -- Ingress exposing configurations
ingress:
# -- Enable/Disable ingress
enabled: false
# -- ClassName (must be defined if no default ingressClassName is available)
ingressClassName: ""
# -- Ingress hostname domain
hostName: "serving.clearml.127-0-0-1.nip.io"
# -- Reference to secret containing TLS certificate. If set, it enables HTTPS on ingress rule.
tlsSecretName: ""
# -- Ingress annotations
annotations: {}
# -- Ingress root path url
path: "/"
# -- ClearML serving Triton configurations
@@ -123,7 +111,10 @@ clearml_serving_triton:
# -- Container Image
image:
repository: "allegroai/clearml-serving-triton"
tag: "1.2.0-22.07"
tag: "1.3.0"
# -- Runtime Class configuration
# uncomment to use custom runtime class, eg. nvidia when using GPU operator
# runtimeClassName: "nvidia"
# -- Node Selector configuration
nodeSelector: {}
# -- Tolerations configuration
@@ -132,10 +123,20 @@ clearml_serving_triton:
affinity: {}
# -- Pod resources definition
resources: {}
# -- Extra environment variables
extraEnvironment: []
# -- Extra Python Packages to be installed in running pods
extraPythonPackages: []
# - numpy==1.22.4
# - pandas==1.4.2
# -- Existing ConfigMap whose files will be mounted and read by the pod (examples in values.yaml)
existingAdditionalConfigsConfigMap: ""
# -- Existing Secret whose files will be mounted and read by the pod (examples in values.yaml); used only when existingAdditionalConfigsConfigMap is not set
existingAdditionalConfigsSecret: ""
# -- Files declared here are mounted under /opt/clearml/config and read by the pod (examples in values.yaml); used only when existingAdditionalConfigsSecret is not set
additionalConfigs: {}
# additionalFile.conf: |
# <filecontent>
# -- Autoscaling configuration
autoscaling:
enabled: false
@@ -145,20 +146,56 @@ clearml_serving_triton:
targetMemory: 50
# -- Ingress exposing configurations
ingress:
# -- Enable/Disable ingress
enabled: false
# -- ClassName (must be defined if no default ingressClassName is available)
ingressClassName: ""
# -- Ingress hostname domain
hostName: "serving-grpc.clearml.127-0-0-1.nip.io"
# -- Reference to secret containing TLS certificate. If set, it enables HTTPS on ingress rule.
tlsSecretName: ""
# -- Ingress annotations
annotations: {}
# # Example for AWS ALB
# kubernetes.io/ingress.class: alb
# alb.ingress.kubernetes.io/backend-protocol: HTTP
# alb.ingress.kubernetes.io/backend-protocol-version: GRPC
# alb.ingress.kubernetes.io/certificate-arn: <cerntificate arn>
# alb.ingress.kubernetes.io/ssl-redirect: '443'
# alb.ingress.kubernetes.io/listen-ports: '[{"HTTPS":443}]'
# alb.ingress.kubernetes.io/target-type: ip
#
# # Example for NGINX ingress controller
# nginx.ingress.kubernetes.io/ssl-redirect: "true"
# nginx.ingress.kubernetes.io/backend-protocol: "GRPC"
# -- Ingress root path url
path: "/"
# -- Configuration from https://github.com/bitnami/charts/blob/main/bitnami/kafka/values.yaml
kafka:
enabled: true
# -- Configuration from https://github.com/prometheus-community/helm-charts/blob/main/charts/prometheus/values.yaml
prometheus:
enabled: true
kube-state-metrics:
enabled: false
prometheus-node-exporter:
enabled: false
prometheus-pushgateway:
enabled: false
serverFiles:
prometheus.yml:
scrape_configs:
- job_name: prometheus
static_configs:
- targets:
- localhost:9090
extraScrapeConfigs: |
- job_name: "{{ .Release.Name }}-stats"
static_configs:
- targets:
- "{{ .Release.Name }}-statistics:9999"
# -- Configuration from https://github.com/grafana/helm-charts/blob/main/charts/grafana/values.yaml
grafana:
enabled: true
adminUser: admin
adminPassword: clearml
datasources:
datasources.yaml:
apiVersion: 1
datasources:
- name: Prometheus
type: prometheus
url: "http://{{ .Release.Name }}-prometheus-server"
access: proxy
isDefault: true
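A hedged sketch for pointing serving at an external Kafka instead of the bundled subchart (the broker address is hypothetical); when `clearml.kafkaServeUrl` is empty, the deployment templates fall back to the in-cluster `<fullname>-kafka:9092` service:

```yaml
clearml:
  # external broker; overrides the bundled in-cluster default
  kafkaServeUrl: "my-kafka.example.com:9092"
kafka:
  # disable the bundled Kafka subchart
  enabled: false
```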


@@ -1,12 +1,12 @@
dependencies:
- name: redis
repository: file://../../dependency_charts/redis
version: 10.9.0
repository: https://charts.bitnami.com/bitnami
version: 17.8.3
- name: mongodb
repository: file://../../dependency_charts/mongodb
version: 10.3.4
repository: https://charts.bitnami.com/bitnami
version: 13.18.5
- name: elasticsearch
repository: file://../../dependency_charts/elasticsearch
version: 7.16.2
digest: sha256:149b5a49382d280b1e083f3c193d014d3d2eb7fcdf3ec1402008996960cc173a
generated: "2022-06-02T21:09:00.961174+02:00"
repository: https://helm.elastic.co
version: 7.17.3
digest: sha256:d5864444fa8c5cb66a83ec02cdbe4b0776f9f6538dfe67132be7eeb2781b50e4
generated: "2025-01-07T15:18:46.845769+01:00"


@@ -2,31 +2,35 @@ apiVersion: v2
name: clearml
description: MLOps platform
type: application
version: "5.0.0"
appVersion: "1.9.0"
kubeVersion: ">= 1.21.0-0 < 1.26.0-0"
version: "7.14.4"
appVersion: "2.0"
kubeVersion: ">= 1.21.0-0 < 1.33.0-0"
home: https://clear.ml
icon: https://raw.githubusercontent.com/allegroai/clearml/master/docs/clearml-logo.svg
icon: https://raw.githubusercontent.com/clearml/clearml/master/docs/clearml-logo.svg
sources:
- https://github.com/allegroai/clearml-helm-charts
- https://github.com/allegroai/clearml
- https://github.com/clearml/clearml-helm-charts
- https://github.com/clearml/clearml
maintainers:
- name: valeriano-manassero
url: https://github.com/valeriano-manassero
- name: filippo-clearml
url: https://github.com/filippo-clearml
keywords:
- clearml
- "machine learning"
- mlops
dependencies:
- name: redis
version: "10.9.0"
repository: "file://../../dependency_charts/redis"
version: "17.8.3"
repository: "https://charts.bitnami.com/bitnami"
condition: redis.enabled
- name: mongodb
version: "10.3.4"
repository: "file://../../dependency_charts/mongodb"
version: "13.18.5"
repository: "https://charts.bitnami.com/bitnami"
condition: mongodb.enabled
- name: elasticsearch
version: "7.16.2"
repository: "file://../../dependency_charts/elasticsearch"
version: "7.17.3"
repository: "https://helm.elastic.co"
condition: elasticsearch.enabled
annotations:
artifacthub.io/changes: |
- kind: fixed
description: "casted port to string before concatenation"


@@ -1,6 +1,6 @@
# ClearML Ecosystem for Kubernetes
![Version: 5.0.0](https://img.shields.io/badge/Version-5.0.0-informational?style=flat-square) ![Type: application](https://img.shields.io/badge/Type-application-informational?style=flat-square) ![AppVersion: 1.9.0](https://img.shields.io/badge/AppVersion-1.9.0-informational?style=flat-square)
![Version: 7.14.4](https://img.shields.io/badge/Version-7.14.4-informational?style=flat-square) ![Type: application](https://img.shields.io/badge/Type-application-informational?style=flat-square) ![AppVersion: 2.0](https://img.shields.io/badge/AppVersion-2.0-informational?style=flat-square)
MLOps platform
@@ -10,11 +10,11 @@ MLOps platform
| Name | Email | Url |
| ---- | ------ | --- |
| valeriano-manassero | | <https://github.com/valeriano-manassero> |
| filippo-clearml | | <https://github.com/filippo-clearml> |
## Introduction
The **clearml-server** is the backend service infrastructure for [ClearML](https://github.com/allegroai/clearml).
The **clearml-server** is the backend service infrastructure for [ClearML](https://github.com/clearml/clearml).
It allows multiple users to collaborate and manage their experiments.
**clearml-server** contains the following components:
@@ -25,6 +25,14 @@ It allows multiple users to collaborate and manage their experiments.
* Querying experiments history, logs and results
* Locally-hosted file server for storing images and models, making them easily accessible using the Web-App
## Add to local Helm repository
To add this chart to your local Helm repository:
```
helm repo add clearml https://clearml.github.io/clearml-helm-charts
```
## Local environment
For development/evaluation it's possible to use [kind](https://kind.sigs.k8s.io).
@@ -60,7 +68,7 @@ nodes:
containerPath: /var/local-path-provisioner
EOF
helm install clearml allegroai/clearml
helm install clearml clearml/clearml
```
After deployment, the services will be exposed on localhost on the following ports:
@@ -88,7 +96,7 @@ Just pointing the domain records to the IP where ingress controller is respondin
A production-ready cluster should also have a different configuration, like the one proposed in `values-production.yaml`, which can be applied with:
```
helm install clearml allegroai/clearml -f values-production.yaml
helm install clearml clearml/clearml -f values-production.yaml
```
## Upgrades/ Values upgrades
@@ -97,22 +105,25 @@ Updating to latest version of this chart can be done in two steps:
```
helm repo update
helm upgrade clearml allegroai/clearml
helm upgrade clearml clearml/clearml
```
Changing values on existing installation can be done with:
```
helm upgrade clearml allegroai/clearml --version <CURRENT CHART VERSION> -f custom_values.yaml
helm upgrade clearml clearml/clearml --version <CURRENT CHART VERSION> -f custom_values.yaml
```
Please note: when updating values only, always set an explicit chart version to avoid an unintended chart upgrade.
Keeping separate update procedures for versions and values can be a good practice to separate potential concerns.
## ENTERPRISE Version
### Major upgrade from 5.* to 6.*
There are some specific Enterprise version features that can be enabled only with specific Enterprise licensed images.
Enabling these features on the OSS version can cause the entire installation to break.
Before issuing helm upgrade:
* delete Redis statefulset(s)
* scale MongoDB deployment(s) replicas to 0
* if using securityContexts, check for the new value format in values.yaml (podSecurityContext and containerSecurityContext)
## Additional Configuration for ClearML Server
@@ -121,43 +132,50 @@ You can also configure the **clearml-server** for:
* fixed users (users with credentials)
* non-responsive experiment watchdog settings
For detailed instructions, see the [Optional Configuration](https://github.com/allegroai/clearml-server#optional-configuration) section in the **clearml-server** repository README file.
For detailed instructions, see the [Optional Configuration](https://github.com/clearml/clearml-server#optional-configuration) section in the **clearml-server** repository README file.
## Source Code
* <https://github.com/allegroai/clearml-helm-charts>
* <https://github.com/allegroai/clearml>
* <https://github.com/clearml/clearml-helm-charts>
* <https://github.com/clearml/clearml>
## Requirements
Kubernetes: `>= 1.21.0-0 < 1.26.0-0`
Kubernetes: `>= 1.21.0-0 < 1.33.0-0`
| Repository | Name | Version |
|------------|------|---------|
| file://../../dependency_charts/elasticsearch | elasticsearch | 7.16.2 |
| file://../../dependency_charts/mongodb | mongodb | 10.3.4 |
| file://../../dependency_charts/redis | redis | 10.9.0 |
| https://charts.bitnami.com/bitnami | mongodb | 13.18.5 |
| https://charts.bitnami.com/bitnami | redis | 17.8.3 |
| https://helm.elastic.co | elasticsearch | 7.17.3 |
## Values
| Key | Type | Default | Description |
|-----|------|---------|-------------|
| apiserver | object | `{"additionalConfigs":{},"affinity":{},"enabled":true,"extraEnvs":[],"image":{"pullPolicy":"IfNotPresent","repository":"allegroai/clearml","tag":"1.9.1-312"},"indexReplicas":0,"indexShards":1,"ingress":{"annotations":{},"enabled":false,"hostName":"api.clearml.127-0-0-1.nip.io","path":"/","tlsSecretName":""},"nodeSelector":{},"podAnnotations":{},"prepopulateEnabled":true,"processes":{"count":8,"maxRequests":1000,"maxRequestsJitter":300,"timeout":24000},"replicaCount":1,"resources":{"limits":{"cpu":"2000m","memory":"1Gi"},"requests":{"cpu":"100m","memory":"256Mi"}},"service":{"nodePort":30008,"port":8008,"type":"NodePort"},"tolerations":[]}` | Api Server configurations |
| apiserver.additionalConfigs | object | `{}` | files declared in this parameter will be mounted and read by apiserver (examples in values.yaml) |
| apiserver | object | `{"additionalConfigs":{},"additionalVolumeMounts":{},"additionalVolumes":{},"affinity":{},"containerSecurityContext":{},"deploymentAnnotations":null,"enabled":true,"existingAdditionalConfigsConfigMap":"","existingAdditionalConfigsSecret":"","extraEnvs":[],"image":{"pullPolicy":"IfNotPresent","registry":"","repository":"allegroai/clearml","tag":"2.0.0-613"},"ingress":{"annotations":{},"enabled":false,"hostName":"api.clearml.127-0-0-1.nip.io","ingressClassName":"","path":"/","tlsSecretName":""},"initContainers":{"resources":{"limits":{"cpu":"10m","memory":"64Mi"},"requests":{"cpu":"10m","memory":"64Mi"}}},"nodeSelector":{},"podAnnotations":{},"podSecurityContext":{},"prepopulateEnabled":true,"processes":{"count":8,"maxRequests":1000,"maxRequestsJitter":300,"timeout":24000},"replicaCount":1,"resources":{"limits":{"cpu":"2000m","memory":"1Gi"},"requests":{"cpu":"100m","memory":"256Mi"}},"service":{"annotations":{},"nodePort":30008,"port":8008,"type":"NodePort"},"serviceAccountAnnotations":{},"serviceAccountName":"clearml","tolerations":[]}` | Api Server configurations |
| apiserver.additionalConfigs | object | `{}` | files declared in this parameter will be mounted and read by apiserver (examples in values.yaml) if not overridden by existingAdditionalConfigsSecret |
| apiserver.additionalVolumeMounts | object | `{}` | Specifies where and how to mount the volumes defined in additionalVolumes. |
| apiserver.additionalVolumes | object | `{}` | Defines extra Kubernetes volumes to be attached to the pod. |
| apiserver.affinity | object | `{}` | Api Server affinity setup |
| apiserver.containerSecurityContext | object | `{}` | Api Server containers security context |
| apiserver.deploymentAnnotations | string | `nil` | Add the provided map to the annotations for the Deployment resource created by this chart. |
| apiserver.enabled | bool | `true` | Enable/Disable component deployment |
| apiserver.existingAdditionalConfigsConfigMap | string | `""` | Reference to an existing ConfigMap whose files will be mounted and read by apiserver (examples in values.yaml) |
| apiserver.existingAdditionalConfigsSecret | string | `""` | Reference to an existing Secret whose files will be mounted and read by apiserver (examples in values.yaml) if not overridden by existingAdditionalConfigsConfigMap |
| apiserver.extraEnvs | list | `[]` | Api Server extra environment variables |
| apiserver.image | object | `{"pullPolicy":"IfNotPresent","repository":"allegroai/clearml","tag":"1.9.1-312"}` | Api Server image configuration |
| apiserver.indexReplicas | int | `0` | Number of additional replicas in Elasticsearch indexes |
| apiserver.indexShards | int | `1` | Number of shards in Elasticsearch indexes |
| apiserver.ingress | object | `{"annotations":{},"enabled":false,"hostName":"api.clearml.127-0-0-1.nip.io","path":"/","tlsSecretName":""}` | Ingress configuration for Api Server component |
| apiserver.image | object | `{"pullPolicy":"IfNotPresent","registry":"","repository":"allegroai/clearml","tag":"2.0.0-613"}` | Api Server image configuration |
| apiserver.ingress | object | `{"annotations":{},"enabled":false,"hostName":"api.clearml.127-0-0-1.nip.io","ingressClassName":"","path":"/","tlsSecretName":""}` | Ingress configuration for Api Server component |
| apiserver.ingress.annotations | object | `{}` | Ingress annotations |
| apiserver.ingress.enabled | bool | `false` | Enable/Disable ingress |
| apiserver.ingress.hostName | string | `"api.clearml.127-0-0-1.nip.io"` | Ingress hostname domain |
| apiserver.ingress.ingressClassName | string | `""` | ClassName (must be defined if no default ingressClassName is available) |
| apiserver.ingress.path | string | `"/"` | Ingress root path url |
| apiserver.ingress.tlsSecretName | string | `""` | Reference to secret containing TLS certificate. If set, it enables HTTPS on ingress rule. |
| apiserver.initContainers | object | `{"resources":{"limits":{"cpu":"10m","memory":"64Mi"},"requests":{"cpu":"10m","memory":"64Mi"}}}` | Api Server resources per initContainers pod |
| apiserver.nodeSelector | object | `{}` | Api Server nodeselector |
| apiserver.podAnnotations | object | `{}` | specific annotation for Api Server pods |
| apiserver.podSecurityContext | object | `{}` | Api Server pod security context |
| apiserver.prepopulateEnabled | bool | `true` | Enable/Disable example data load |
| apiserver.processes | object | `{"count":8,"maxRequests":1000,"maxRequestsJitter":300,"timeout":24000}` | Api Server internal processes configuration |
| apiserver.processes.count | int | `8` | Api Server internal listing processes |
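The `apiserver.additionalConfigs` entries above mount extra configuration files that the apiserver reads at startup. A minimal sketch of such an override in a custom values file (the file name `apiserver.conf` and its contents are illustrative only; see the examples in the chart's values.yaml for the actual supported files):

```yaml
apiserver:
  additionalConfigs:
    # Illustrative file name and contents, not taken from the chart
    apiserver.conf: |
      auth {
        # example setting only
        cookies_max_age: 86400
      }
```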
@@ -166,10 +184,13 @@ Kubernetes: `>= 1.21.0-0 < 1.26.0-0`
| apiserver.processes.timeout | int | `24000` | Api timeout (ms) |
| apiserver.replicaCount | int | `1` | Api Server number of pods |
| apiserver.resources | object | `{"limits":{"cpu":"2000m","memory":"1Gi"},"requests":{"cpu":"100m","memory":"256Mi"}}` | Api Server resources per pod; these are minimal requirements, it's suggested to increase these values in production environments |
| apiserver.service | object | `{"nodePort":30008,"port":8008,"type":"NodePort"}` | Api Server internal service configuration |
| apiserver.service | object | `{"annotations":{},"nodePort":30008,"port":8008,"type":"NodePort"}` | Api Server internal service configuration |
| apiserver.service.annotations | object | `{}` | specific annotation for Api Server service |
| apiserver.service.nodePort | int | `30008` | If service.type is set to NodePort, this value is used as the service's nodePort; otherwise it is ignored |
| apiserver.serviceAccountAnnotations | object | `{}` | Add the provided map to the annotations for the ServiceAccount resource created by this chart. |
| apiserver.serviceAccountName | string | `"clearml"` | The default serviceAccountName to be used |
| apiserver.tolerations | list | `[]` | Api Server tolerations setup |
| clearml | object | `{"apiserverKey":"GGS9F4M6XB2DXJ5AFT9F","apiserverSecret":"2oGujVFhPfaozhpuz2GzQfA5OyxmMsR3WVJpsCR5hrgHFs20PO","clientConfigurationApiUrl":"","clientConfigurationFilesUrl":"","cookieDomain":"","cookieName":"clearml-token-k8s","defaultCompany":"d1bd92a3b039400cbafc60a7a5b1e52b","fileserverKey":"XXCRJ123CEE2KSQ068WO","fileserverSecret":"YIy8EVAC7QCT4FtgitxAQGyW7xRHDZ4jpYlTE7HKiscpORl1hG","readinessprobeKey":"GK4PRTVT3706T25K6BA1","readinessprobeSecret":"ymLh1ok5k5xNUQfS944Xdx9xjf0wueokqKM2dMZfHuH9ayItG2","secureAuthTokenSecret":"ymLh1ok5k5xNUQfS944Xdx9xjf0wueokqKM2dMZfHuH9ayItG2","testUserKey":"ENP39EQM4SLACGD5FXB7","testUserSecret":"lPcm0imbcBZ8mwgO7tpadutiS3gnJD05x9j7afwXPS35IKbpiQ"}` | ClearMl generic configurations |
| clearml | object | `{"apiserverKey":"GGS9F4M6XB2DXJ5AFT9F","apiserverSecret":"2oGujVFhPfaozhpuz2GzQfA5OyxmMsR3WVJpsCR5hrgHFs20PO","clientConfigurationApiUrl":"","clientConfigurationFilesUrl":"","cookieDomain":"","cookieName":"clearml-token-k8s","defaultCompany":"d1bd92a3b039400cbafc60a7a5b1e52b","existingSecret":"","fileserverKey":"XXCRJ123CEE2KSQ068WO","fileserverSecret":"YIy8EVAC7QCT4FtgitxAQGyW7xRHDZ4jpYlTE7HKiscpORl1hG","readinessprobeKey":"GK4PRTVT3706T25K6BA1","readinessprobeSecret":"ymLh1ok5k5xNUQfS944Xdx9xjf0wueokqKM2dMZfHuH9ayItG2","secureAuthTokenSecret":"ymLh1ok5k5xNUQfS944Xdx9xjf0wueokqKM2dMZfHuH9ayItG2","testUserKey":"ENP39EQM4SLACGD5FXB7","testUserSecret":"lPcm0imbcBZ8mwgO7tpadutiS3gnJD05x9j7afwXPS35IKbpiQ"}` | ClearMl generic configurations |
| clearml.apiserverKey | string | `"GGS9F4M6XB2DXJ5AFT9F"` | Api Server basic auth key |
| clearml.apiserverSecret | string | `"2oGujVFhPfaozhpuz2GzQfA5OyxmMsR3WVJpsCR5hrgHFs20PO"` | Api Server basic auth secret |
| clearml.clientConfigurationApiUrl | string | `""` | Override the API Urls displayed when showing an example of the SDK's clearml.conf configuration |
@@ -177,6 +198,7 @@ Kubernetes: `>= 1.21.0-0 < 1.26.0-0`
| clearml.cookieDomain | string | `""` | Cookie domain; leave empty if not exposed with an ingress |
| clearml.cookieName | string | `"clearml-token-k8s"` | Name of the UI cookie |
| clearml.defaultCompany | string | `"d1bd92a3b039400cbafc60a7a5b1e52b"` | Company name |
| clearml.existingSecret | string | `""` | Pass ClearML secrets using an existing Secret; it must contain the keys: apiserver_key, apiserver_secret, secure_auth_token_secret, test_user_key, test_user_secret |
| clearml.fileserverKey | string | `"XXCRJ123CEE2KSQ068WO"` | File Server basic auth key |
| clearml.fileserverSecret | string | `"YIy8EVAC7QCT4FtgitxAQGyW7xRHDZ4jpYlTE7HKiscpORl1hG"` | File Server basic auth secret |
| clearml.readinessprobeKey | string | `"GK4PRTVT3706T25K6BA1"` | Readiness probe basic auth key |
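The `clearml.existingSecret` option above expects a pre-created Kubernetes Secret carrying the listed keys. A minimal sketch (the Secret name and all values are placeholders):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: clearml-credentials   # placeholder; reference this name via clearml.existingSecret
type: Opaque
stringData:
  apiserver_key: "<api server access key>"
  apiserver_secret: "<api server secret>"
  secure_auth_token_secret: "<auth token secret>"
  test_user_key: "<test user access key>"
  test_user_secret: "<test user secret>"
```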
@@ -184,58 +206,48 @@ Kubernetes: `>= 1.21.0-0 < 1.26.0-0`
| clearml.secureAuthTokenSecret | string | `"ymLh1ok5k5xNUQfS944Xdx9xjf0wueokqKM2dMZfHuH9ayItG2"` | Secure Auth secret |
| clearml.testUserKey | string | `"ENP39EQM4SLACGD5FXB7"` | Test Server basic auth key |
| clearml.testUserSecret | string | `"lPcm0imbcBZ8mwgO7tpadutiS3gnJD05x9j7afwXPS35IKbpiQ"` | Test File Server basic auth secret |
| elasticsearch | object | `{"clusterHealthCheckParams":"wait_for_status=yellow&timeout=1s","clusterName":"clearml-elastic","enabled":true,"esConfig":{"elasticsearch.yml":"xpack.security.enabled: false\n"},"esJavaOpts":"-Xmx2g -Xms2g","extraEnvs":[{"name":"bootstrap.memory_lock","value":"false"},{"name":"cluster.routing.allocation.node_initial_primaries_recoveries","value":"500"},{"name":"cluster.routing.allocation.disk.watermark.low","value":"500mb"},{"name":"cluster.routing.allocation.disk.watermark.high","value":"500mb"},{"name":"cluster.routing.allocation.disk.watermark.flood_stage","value":"500mb"},{"name":"http.compression_level","value":"7"},{"name":"reindex.remote.whitelist","value":"*.*"},{"name":"xpack.monitoring.enabled","value":"false"},{"name":"xpack.security.enabled","value":"false"}],"httpPort":9200,"minimumMasterNodes":1,"persistence":{"enabled":true},"replicas":1,"resources":{"limits":{"cpu":"2000m","memory":"4Gi"},"requests":{"cpu":"100m","memory":"2Gi"}},"roles":{"data":"true","ingest":"true","master":"true","remote_cluster_client":"true"},"volumeClaimTemplate":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"50Gi"}},"storageClassName":null}}` | Configuration from https://github.com/elastic/helm-charts/blob/7.16/elasticsearch/values.yaml |
| enterpriseFeatures | object | `{"airGappedDocumentation":{"enabled":false,"image":{"repository":"","tag":""}},"clearmlApplications":{"affinity":{},"agentKey":"GK4PRTVT3706T25K6BA1","agentSecret":"ymLh1ok5k5xNUQfS944Xdx9xjf0wueokqKM2dMZfHuH9ayItG2","basePodImage":{"repository":"","tag":""},"enabled":true,"extraEnvs":[],"gitAgentPass":"git_password","gitAgentUser":"git_user","image":{"pullPolicy":"IfNotPresent","repository":"","tag":""},"nodeSelector":{},"podAnnotations":{},"replicaCount":1,"resources":{"limits":{"cpu":"2000m","memory":"1Gi"},"requests":{"cpu":"100m","memory":"256Mi"}},"tolerations":[]},"defaultCompanyGuid":"d1bd92a3b039400cbafc60a7a5b1e52b","enabled":false,"extraIndexUrl":"","overrideReferenceApiUrl":"","overrideReferenceFileUrl":""}` | Enterprise features (work only with an Enterprise license) |
| enterpriseFeatures.airGappedDocumentation | object | `{"enabled":false,"image":{"repository":"","tag":""}}` | Air gapped documentation configurations |
| enterpriseFeatures.airGappedDocumentation.enabled | bool | `false` | Enable/Disable air gapped documentation deployment |
| enterpriseFeatures.airGappedDocumentation.image | object | `{"repository":"","tag":""}` | Air gapped documentation image configuration |
| enterpriseFeatures.clearmlApplications | object | `{"affinity":{},"agentKey":"GK4PRTVT3706T25K6BA1","agentSecret":"ymLh1ok5k5xNUQfS944Xdx9xjf0wueokqKM2dMZfHuH9ayItG2","basePodImage":{"repository":"","tag":""},"enabled":true,"extraEnvs":[],"gitAgentPass":"git_password","gitAgentUser":"git_user","image":{"pullPolicy":"IfNotPresent","repository":"","tag":""},"nodeSelector":{},"podAnnotations":{},"replicaCount":1,"resources":{"limits":{"cpu":"2000m","memory":"1Gi"},"requests":{"cpu":"100m","memory":"256Mi"}},"tolerations":[]}` | APPS configurations |
| enterpriseFeatures.clearmlApplications.affinity | object | `{}` | APPS affinity setup |
| enterpriseFeatures.clearmlApplications.agentKey | string | `"GK4PRTVT3706T25K6BA1"` | Apps Server basic auth key |
| enterpriseFeatures.clearmlApplications.agentSecret | string | `"ymLh1ok5k5xNUQfS944Xdx9xjf0wueokqKM2dMZfHuH9ayItG2"` | Apps Server basic auth secret |
| enterpriseFeatures.clearmlApplications.basePodImage | object | `{"repository":"","tag":""}` | APPS base spawning pods image |
| enterpriseFeatures.clearmlApplications.enabled | bool | `true` | Enable/Disable component deployment |
| enterpriseFeatures.clearmlApplications.extraEnvs | list | `[]` | APPS extra environment variables |
| enterpriseFeatures.clearmlApplications.gitAgentPass | string | `"git_password"` | Apps Server Git password |
| enterpriseFeatures.clearmlApplications.gitAgentUser | string | `"git_user"` | Apps Server Git user |
| enterpriseFeatures.clearmlApplications.image | object | `{"pullPolicy":"IfNotPresent","repository":"","tag":""}` | APPS image configuration |
| enterpriseFeatures.clearmlApplications.nodeSelector | object | `{}` | APPS nodeselector |
| enterpriseFeatures.clearmlApplications.podAnnotations | object | `{}` | specific annotation for APPS pods |
| enterpriseFeatures.clearmlApplications.replicaCount | int | `1` | APPS number of pods |
| enterpriseFeatures.clearmlApplications.resources | object | `{"limits":{"cpu":"2000m","memory":"1Gi"},"requests":{"cpu":"100m","memory":"256Mi"}}` | APPS resources per pod; these are minimal requirements, it's suggested to increase these values in production environments |
| enterpriseFeatures.clearmlApplications.tolerations | list | `[]` | APPS tolerations setup |
| enterpriseFeatures.defaultCompanyGuid | string | `"d1bd92a3b039400cbafc60a7a5b1e52b"` | Company ID |
| enterpriseFeatures.enabled | bool | `false` | Enable/Disable Enterprise features |
| enterpriseFeatures.extraIndexUrl | string | `""` | extra index URL for Enterprise packages |
| enterpriseFeatures.overrideReferenceApiUrl | string | `""` | set this value AND overrideReferenceFileUrl if external endpoint exposure is in place (like a LoadBalancer) example: "https://api.clearml.local" |
| enterpriseFeatures.overrideReferenceFileUrl | string | `""` | set this value AND overrideReferenceApiUrl if external endpoint exposure is in place (like a LoadBalancer) example: "https://files.clearml.local" |
| externalServices | object | `{"elasticsearchHost":"","elasticsearchPort":9200,"mongodbConnectionString":"","redisHost":"","redisPort":6379}` | Definition of external services to use if not enabled as dependency charts here |
| externalServices.elasticsearchHost | string | `""` | Existing ElasticSearch Hostname to use if elasticsearch.enabled is false |
| externalServices.elasticsearchPort | int | `9200` | Existing ElasticSearch Port to use if elasticsearch.enabled is false |
| externalServices.mongodbConnectionString | string | `""` | Existing MongoDB connection string to use if mongodb.enabled is false |
| externalServices.redisHost | string | `""` | Existing Redis Hostname to use if redis.enabled is false |
| elasticsearch | object | `{"clusterHealthCheckParams":"wait_for_status=yellow&timeout=1s","clusterName":"clearml-elastic","enabled":true,"esConfig":{"elasticsearch.yml":"xpack.security.enabled: false\n"},"esJavaOpts":"-Xmx2g -Xms2g","extraEnvs":[{"name":"bootstrap.memory_lock","value":"false"},{"name":"cluster.routing.allocation.node_initial_primaries_recoveries","value":"500"},{"name":"cluster.routing.allocation.disk.watermark.low","value":"500mb"},{"name":"cluster.routing.allocation.disk.watermark.high","value":"500mb"},{"name":"cluster.routing.allocation.disk.watermark.flood_stage","value":"500mb"},{"name":"http.compression_level","value":"7"},{"name":"reindex.remote.whitelist","value":"*.*"},{"name":"xpack.monitoring.enabled","value":"false"},{"name":"xpack.security.enabled","value":"false"}],"httpPort":9200,"minimumMasterNodes":1,"persistence":{"enabled":true},"rbac":{"create":true},"replicas":1,"resources":{"limits":{"cpu":"2000m","memory":"4Gi"},"requests":{"cpu":"100m","memory":"2Gi"}},"roles":{"data":"true","ingest":"true","master":"true","remote_cluster_client":"true"},"volumeClaimTemplate":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"50Gi"}},"storageClassName":null}}` | Configuration from https://github.com/elastic/helm-charts/blob/7.16/elasticsearch/values.yaml |
| externalServices | object | `{"elasticsearchConnectionString":"[{\"host\":\"es_hostname1\",\"port\":9200},{\"host\":\"es_hostname2\",\"port\":9200},{\"host\":\"es_hostname3\",\"port\":9200}]","mongodbConnectionStringAuth":"mongodb://mongodb_hostname:27017/auth","mongodbConnectionStringBackend":"mongodb://mongodb_hostnamehostname:27017/backend","redisHost":"redis_hostname","redisPort":6379}` | Definition of external services to use if not enabled as dependency charts here |
| externalServices.elasticsearchConnectionString | string | `"[{\"host\":\"es_hostname1\",\"port\":9200},{\"host\":\"es_hostname2\",\"port\":9200},{\"host\":\"es_hostname3\",\"port\":9200}]"` | Existing ElasticSearch connection string to use if elasticsearch.enabled is false (example in values.yaml) |
| externalServices.mongodbConnectionStringAuth | string | `"mongodb://mongodb_hostname:27017/auth"` | Existing MongoDB connection string for AUTH to use if mongodb.enabled is false (example in values.yaml) |
| externalServices.mongodbConnectionStringBackend | string | `"mongodb://mongodb_hostnamehostname:27017/backend"` | Existing MongoDB connection string for BACKEND to use if mongodb.enabled is false (example in values.yaml) |
| externalServices.redisHost | string | `"redis_hostname"` | Existing Redis Hostname to use if redis.enabled is false (example in values.yaml) |
| externalServices.redisPort | int | `6379` | Existing Redis Port to use if redis.enabled is false |
| fileserver | object | `{"affinity":{},"enabled":true,"extraEnvs":[],"image":{"pullPolicy":"IfNotPresent","repository":"allegroai/clearml","tag":"1.9.1-312"},"ingress":{"annotations":{},"enabled":false,"hostName":"files.clearml.127-0-0-1.nip.io","path":"/","tlsSecretName":""},"nodeSelector":{},"podAnnotations":{},"replicaCount":1,"resources":{"limits":{"cpu":"2000m","memory":"1Gi"},"requests":{"cpu":"100m","memory":"256Mi"}},"service":{"nodePort":30081,"port":8081,"type":"NodePort"},"storage":{"data":{"accessMode":"ReadWriteOnce","class":"","size":"50Gi"}},"tolerations":[]}` | File Server configurations |
| fileserver | object | `{"additionalVolumeMounts":{},"additionalVolumes":{},"affinity":{},"containerSecurityContext":{},"deploymentAnnotations":{},"enabled":true,"extraEnvs":[],"image":{"pullPolicy":"IfNotPresent","registry":"","repository":"allegroai/clearml","tag":"2.0.0-613"},"ingress":{"annotations":{},"enabled":false,"hostName":"files.clearml.127-0-0-1.nip.io","ingressClassName":"","path":"/","tlsSecretName":""},"initContainers":{"resources":{"limits":{"cpu":"10m","memory":"64Mi"},"requests":{"cpu":"10m","memory":"64Mi"}}},"nodeSelector":{},"podAnnotations":{},"podSecurityContext":{},"replicaCount":1,"resources":{"limits":{"cpu":"2000m","memory":"1Gi"},"requests":{"cpu":"100m","memory":"256Mi"}},"service":{"annotations":{},"nodePort":30081,"port":8081,"type":"NodePort"},"serviceAccountAnnotations":{},"serviceAccountName":"clearml","storage":{"data":{"accessMode":"ReadWriteOnce","class":"","existingPVC":"","size":"50Gi"},"enabled":true},"tolerations":[]}` | File Server configurations |
| fileserver.additionalVolumeMounts | object | `{}` | Specifies where and how to mount the volumes defined in additionalVolumes. |
| fileserver.additionalVolumes | object | `{}` | Defines extra Kubernetes volumes to be attached to the pod. |
| fileserver.affinity | object | `{}` | File Server affinity setup |
| fileserver.containerSecurityContext | object | `{}` | File Server containers security context |
| fileserver.deploymentAnnotations | object | `{}` | Add the provided map to the annotations for the Deployment resource created by this chart. |
| fileserver.enabled | bool | `true` | Enable/Disable component deployment |
| fileserver.extraEnvs | list | `[]` | File Server extra environment variables |
| fileserver.image | object | `{"pullPolicy":"IfNotPresent","repository":"allegroai/clearml","tag":"1.9.1-312"}` | File Server image configuration |
| fileserver.ingress | object | `{"annotations":{},"enabled":false,"hostName":"files.clearml.127-0-0-1.nip.io","path":"/","tlsSecretName":""}` | Ingress configuration for File Server component |
| fileserver.image | object | `{"pullPolicy":"IfNotPresent","registry":"","repository":"allegroai/clearml","tag":"2.0.0-613"}` | File Server image configuration |
| fileserver.ingress | object | `{"annotations":{},"enabled":false,"hostName":"files.clearml.127-0-0-1.nip.io","ingressClassName":"","path":"/","tlsSecretName":""}` | Ingress configuration for File Server component |
| fileserver.ingress.annotations | object | `{}` | Ingress annotations |
| fileserver.ingress.enabled | bool | `false` | Enable/Disable ingress |
| fileserver.ingress.hostName | string | `"files.clearml.127-0-0-1.nip.io"` | Ingress hostname domain |
| fileserver.ingress.ingressClassName | string | `""` | ClassName (must be defined if no default ingressClassName is available) |
| fileserver.ingress.path | string | `"/"` | Ingress root path url |
| fileserver.ingress.tlsSecretName | string | `""` | Reference to secret containing TLS certificate. If set, it enables HTTPS on ingress rule. |
| fileserver.initContainers | object | `{"resources":{"limits":{"cpu":"10m","memory":"64Mi"},"requests":{"cpu":"10m","memory":"64Mi"}}}` | File Server resources per initContainers pod |
| fileserver.nodeSelector | object | `{}` | File Server nodeselector |
| fileserver.podAnnotations | object | `{}` | specific annotation for File Server pods |
| fileserver.podSecurityContext | object | `{}` | File Server pod security context |
| fileserver.replicaCount | int | `1` | File Server number of pods |
| fileserver.resources | object | `{"limits":{"cpu":"2000m","memory":"1Gi"},"requests":{"cpu":"100m","memory":"256Mi"}}` | File Server resources per pod; these are minimal requirements, it's suggested to increase these values in production environments |
| fileserver.service | object | `{"nodePort":30081,"port":8081,"type":"NodePort"}` | File Server internal service configuration |
| fileserver.service | object | `{"annotations":{},"nodePort":30081,"port":8081,"type":"NodePort"}` | File Server internal service configuration |
| fileserver.service.annotations | object | `{}` | specific annotation for File Server service |
| fileserver.service.nodePort | int | `30081` | If service.type is set to NodePort, this value is used as the service's nodePort; otherwise it is ignored |
| fileserver.storage | object | `{"data":{"accessMode":"ReadWriteOnce","class":"","size":"50Gi"}}` | File server persistence settings |
| fileserver.serviceAccountAnnotations | object | `{}` | Add the provided map to the annotations for the ServiceAccount resource created by this chart. |
| fileserver.serviceAccountName | string | `"clearml"` | The default serviceAccountName to be used |
| fileserver.storage | object | `{"data":{"accessMode":"ReadWriteOnce","class":"","existingPVC":"","size":"50Gi"},"enabled":true}` | File server persistence settings |
| fileserver.storage.data.accessMode | string | `"ReadWriteOnce"` | Access mode (must be ReadWriteMany if fileserver replica > 1) |
| fileserver.storage.data.class | string | `""` | Storage class (use default if empty) |
| fileserver.storage.data.existingPVC | string | `""` | If set, it uses an already existing PVC instead of dynamic provisioning |
| fileserver.storage.enabled | bool | `true` | If set to false, no PVC is created and an emptyDir is used |
| fileserver.tolerations | list | `[]` | File Server tolerations setup |
| global | object | `{"imageRegistry":"docker.io"}` | Global parameters section |
| global.imageRegistry | string | `"docker.io"` | Images registry |
| imageCredentials | object | `{"email":"someone@host.com","enabled":false,"existingSecret":"","password":"pwd","registry":"docker.io","username":"someone"}` | Container registry configuration |
| imageCredentials.email | string | `"someone@host.com"` | Email |
| imageCredentials.enabled | bool | `false` | Use private authentication mode |
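When the bundled dependency charts are disabled, the `externalServices` values documented above point the server at existing backends. A minimal sketch of such an override in a custom values file (all hostnames are placeholders):

```yaml
elasticsearch:
  enabled: false
mongodb:
  enabled: false
redis:
  enabled: false
externalServices:
  # Placeholders; replace with the endpoints of your existing services
  elasticsearchConnectionString: '[{"host":"es_hostname1","port":9200}]'
  mongodbConnectionStringAuth: "mongodb://mongodb_hostname:27017/auth"
  mongodbConnectionStringBackend: "mongodb://mongodb_hostname:27017/backend"
  redisHost: "redis_hostname"
  redisPort: 6379
```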
@@ -243,24 +255,34 @@ Kubernetes: `>= 1.21.0-0 < 1.26.0-0`
| imageCredentials.password | string | `"pwd"` | Registry password |
| imageCredentials.registry | string | `"docker.io"` | Registry name |
| imageCredentials.username | string | `"someone"` | Registry username |
| mongodb | object | `{"architecture":"standalone","auth":{"enabled":false},"enabled":true,"persistence":{"accessModes":["ReadWriteOnce"],"enabled":true,"size":"50Gi","storageClass":null},"replicaCount":1}` | Configuration from https://github.com/bitnami/charts/blob/master/bitnami/mongodb/values.yaml |
| redis | object | `{"cluster":{"enabled":false},"databaseNumber":0,"enabled":true,"master":{"name":"{{ .Release.Name }}-redis-master","persistence":{"accessModes":["ReadWriteOnce"],"enabled":true,"size":"5Gi","storageClass":null},"port":6379},"usePassword":false}` | Configuration from https://github.com/bitnami/charts/blob/master/bitnami/redis/values.yaml |
| webserver | object | `{"additionalConfigs":{},"affinity":{},"enabled":true,"extraEnvs":[],"image":{"pullPolicy":"IfNotPresent","repository":"allegroai/clearml","tag":"1.9.1-312"},"ingress":{"annotations":{},"enabled":false,"hostName":"app.clearml.127-0-0-1.nip.io","path":"/","tlsSecretName":""},"nodeSelector":{},"podAnnotations":{},"replicaCount":1,"resources":{"limits":{"cpu":"2000m","memory":"1Gi"},"requests":{"cpu":"100m","memory":"256Mi"}},"service":{"nodePort":30080,"port":8080,"type":"NodePort"},"tolerations":[]}` | Web Server configurations |
| mongodb | object | `{"architecture":"standalone","auth":{"enabled":false},"enabled":true,"persistence":{"accessModes":["ReadWriteOnce"],"enabled":true,"size":"50Gi","storageClass":null},"replicaCount":1,"updateStrategy":{"rollingUpdate":{"maxSurge":0,"maxUnavailable":1},"type":"RollingUpdate"}}` | Configuration from https://github.com/bitnami/charts/blob/master/bitnami/mongodb/values.yaml |
| redis | object | `{"architecture":"standalone","auth":{"enabled":false},"databaseNumber":0,"enabled":true,"master":{"name":"{{ .Release.Name }}-redis-master","persistence":{"accessModes":["ReadWriteOnce"],"enabled":true,"size":"5Gi","storageClass":null},"port":6379}}` | Configuration from https://github.com/bitnami/charts/blob/master/bitnami/redis/values.yaml |
| webserver | object | `{"additionalConfigs":{},"additionalVolumeMounts":{},"additionalVolumes":{},"affinity":{},"containerSecurityContext":{},"deploymentAnnotations":{},"enabled":true,"extraEnvs":[],"image":{"pullPolicy":"IfNotPresent","registry":"","repository":"allegroai/clearml","tag":"2.0.0-613"},"ingress":{"annotations":{},"enabled":false,"hostName":"app.clearml.127-0-0-1.nip.io","ingressClassName":"","path":"/","tlsSecretName":""},"initContainers":{"resources":{"limits":{"cpu":"10m","memory":"64Mi"},"requests":{"cpu":"10m","memory":"64Mi"}}},"nodeSelector":{},"podAnnotations":{},"podSecurityContext":{},"replicaCount":1,"resources":{"limits":{"cpu":"2000m","memory":"1Gi"},"requests":{"cpu":"100m","memory":"256Mi"}},"service":{"annotations":{},"nodePort":30080,"port":8080,"type":"NodePort"},"serviceAccountAnnotations":{},"serviceAccountName":"clearml","tolerations":[]}` | Web Server configurations |
| webserver.additionalConfigs | object | `{}` | Additional specific webserver configurations |
| webserver.additionalVolumeMounts | object | `{}` | Specifies where and how to mount the volumes defined in additionalVolumes. |
| webserver.additionalVolumes | object | `{}` | Defines extra Kubernetes volumes to be attached to the pod. |
| webserver.affinity | object | `{}` | Web Server affinity setup |
| webserver.containerSecurityContext | object | `{}` | Web Server containers security context |
| webserver.deploymentAnnotations | object | `{}` | Add the provided map to the annotations for the Deployment resource created by this chart. |
| webserver.enabled | bool | `true` | Enable/Disable component deployment |
| webserver.extraEnvs | list | `[]` | Web Server extra environment variables |
| webserver.image | object | `{"pullPolicy":"IfNotPresent","registry":"","repository":"allegroai/clearml","tag":"2.0.0-613"}` | Web Server image configuration |
| webserver.ingress | object | `{"annotations":{},"enabled":false,"hostName":"app.clearml.127-0-0-1.nip.io","ingressClassName":"","path":"/","tlsSecretName":""}` | Ingress configuration for Web Server component |
| webserver.ingress.annotations | object | `{}` | Ingress annotations |
| webserver.ingress.enabled | bool | `false` | Enable/Disable ingress |
| webserver.ingress.hostName | string | `"app.clearml.127-0-0-1.nip.io"` | Ingress hostname domain |
| webserver.ingress.ingressClassName | string | `""` | Ingress class name (must be set if no default IngressClass is available in the cluster) |
| webserver.ingress.path | string | `"/"` | Ingress root path url |
| webserver.ingress.tlsSecretName | string | `""` | Reference to secret containing TLS certificate. If set, it enables HTTPS on ingress rule. |
| webserver.initContainers | object | `{"resources":{"limits":{"cpu":"10m","memory":"64Mi"},"requests":{"cpu":"10m","memory":"64Mi"}}}` | Resources for the Web Server initContainers |
| webserver.nodeSelector | object | `{}` | Web Server node selector |
| webserver.podAnnotations | object | `{}` | Specific annotations for Web Server pods |
| webserver.podSecurityContext | object | `{}` | Web Server pod security context |
| webserver.replicaCount | int | `1` | Web Server number of pods |
| webserver.resources | object | `{"limits":{"cpu":"2000m","memory":"1Gi"},"requests":{"cpu":"100m","memory":"256Mi"}}` | Web Server resources per pod; these are minimal requirements and it's suggested to increase them in production environments |
| webserver.service | object | `{"annotations":{},"nodePort":30080,"port":8080,"type":"NodePort"}` | Web Server internal service configuration |
| webserver.service.annotations | object | `{}` | Specific annotations for the Web Server service |
| webserver.service.nodePort | int | `30080` | If service.type is set to NodePort, this value is used as the service's nodePort field; for other service types, this field is ignored |
| webserver.serviceAccountAnnotations | object | `{}` | Add the provided map to the annotations for the ServiceAccount resource created by this chart. |
| webserver.serviceAccountName | string | `"clearml"` | The default serviceAccountName to be used |
| webserver.tolerations | list | `[]` | Web Server tolerations setup |
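
The new `additionalVolumes`/`additionalVolumeMounts` values accept standard Kubernetes volume and volumeMount lists; a minimal sketch (the ConfigMap name and mount path are illustrative assumptions):

```
webserver:
  additionalVolumes:
    - name: extra-config            # illustrative volume name
      configMap:
        name: webserver-extra       # assumed pre-existing ConfigMap
  additionalVolumeMounts:
    - name: extra-config
      mountPath: /mnt/extra         # assumed mount path
      readOnly: true
```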

View File

@@ -11,7 +11,7 @@
## Introduction
The **clearml-server** is the backend service infrastructure for [ClearML](https://github.com/allegroai/clearml).
The **clearml-server** is the backend service infrastructure for [ClearML](https://github.com/clearml/clearml).
It allows multiple users to collaborate and manage their experiments.
**clearml-server** contains the following components:
@@ -22,6 +22,14 @@ It allows multiple users to collaborate and manage their experiments.
* Querying experiments history, logs and results
* Locally-hosted file server for storing images and models, making them easily accessible using the Web-App
## Add to local Helm repository
To add this chart to your local Helm repository:
```
helm repo add clearml https://clearml.github.io/clearml-helm-charts
```
## Local environment
For development/evaluation, it's possible to use [kind](https://kind.sigs.k8s.io).
@@ -57,7 +65,7 @@ nodes:
containerPath: /var/local-path-provisioner
EOF
helm install clearml allegroai/clearml
helm install clearml clearml/clearml
```
After deployment, the services will be exposed on localhost on the following ports:
@@ -85,7 +93,7 @@ Just pointing the domain records to the IP where ingress controller is respondin
A production-ready cluster should also use a different configuration, like the one proposed in `values-production.yaml`, which can be applied with:
```
helm install clearml allegroai/clearml -f values-production.yaml
helm install clearml clearml/clearml -f values-production.yaml
```
## Upgrades/ Values upgrades
@@ -94,22 +102,25 @@ Updating to latest version of this chart can be done in two steps:
```
helm repo update
helm upgrade clearml allegroai/clearml
helm upgrade clearml clearml/clearml
```
Changing values on existing installation can be done with:
```
helm upgrade clearml allegroai/clearml --version <CURRENT CHART VERSION> -f custom_values.yaml
helm upgrade clearml clearml/clearml --version <CURRENT CHART VERSION> -f custom_values.yaml
```
Please note: when updating only values, always set an explicit chart version to avoid an unintended chart upgrade.
Keeping version upgrades and value changes as separate procedures is good practice to separate potential concerns.
## ENTERPRISE Version
### Major upgrade from 5.* to 6.*
Some specific Enterprise features can be enabled only with Enterprise-licensed images.
Enabling these features on the OSS version can break the entire installation.
Before issuing helm upgrade:
* delete Redis statefulset(s)
* scale MongoDB deployment(s) replicas to 0
* if using securityContexts, check the new value format in values.yaml (podSecurityContext and containerSecurityContext)
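
As a rough command sketch of these pre-upgrade steps (the release name `clearml`, the namespace, and the resource names below are assumptions; check the actual names with `kubectl get`):

```
# 1. Delete the Redis statefulset(s) (name assumed; PVCs are kept)
kubectl -n clearml delete statefulset clearml-redis-master
# 2. Scale MongoDB replicas to 0 (resource kind/name may differ per architecture)
kubectl -n clearml scale deployment clearml-mongodb --replicas=0
# 3. Then run the upgrade
helm upgrade clearml clearml/clearml -f custom_values.yaml
```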
## Additional Configuration for ClearML Server
@@ -118,7 +129,7 @@ You can also configure the **clearml-server** for:
* fixed users (users with credentials)
* non-responsive experiment watchdog settings
For detailed instructions, see the [Optional Configuration](https://github.com/allegroai/clearml-server#optional-configuration) section in the **clearml-server** repository README file.
For detailed instructions, see the [Optional Configuration](https://github.com/clearml/clearml-server#optional-configuration) section in the **clearml-server** repository README file.
{{ template "chart.sourcesSection" . }}

Binary file not shown.

View File

@@ -11,7 +11,7 @@
export SERVICE_IP=$(kubectl get svc --namespace {{ .Release.Namespace }} {{ include "clearml.fullname" . }} --template "{{"{{ range (index .status.loadBalancer.ingress 0) }}{{.}}{{ end }}"}}")
echo http://$SERVICE_IP:{{ .Values.webserver.service.port }}
{{- else if contains "ClusterIP" .Values.webserver.service.type }}
export POD_NAME=$(kubectl get pods --namespace {{ .Release.Namespace }} -l "app.kubernetes.io/name={{ include "clearml.name" . }},app.kubernetes.io/instance={{ .Release.Name }}" -o jsonpath="{.items[0].metadata.name}")
export POD_NAME=$(kubectl get pods --namespace {{ .Release.Namespace }} -l "app.kubernetes.io/name={{ include "clearml.fullname" . }},app.kubernetes.io/instance={{ .Release.Name }}" -o jsonpath="{.items[0].metadata.name}")
export CONTAINER_PORT=$(kubectl get pod --namespace {{ .Release.Namespace }} $POD_NAME -o jsonpath="{.spec.containers[0].ports[0].containerPort}")
echo "Visit http://127.0.0.1:8080 to use your application"
kubectl --namespace {{ .Release.Namespace }} port-forward $POD_NAME 8080:$CONTAINER_PORT

View File

@@ -46,22 +46,38 @@ app.kubernetes.io/managed-by: {{ .Release.Service }}
Selector labels
*/}}
{{- define "clearml.selectorLabels" -}}
app.kubernetes.io/name: {{ include "clearml.name" . }}
app.kubernetes.io/name: {{ include "clearml.fullname" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
{{- end }}
{{/*
Registry name
*/}}
{{- define "registryNamePrefix" -}}
{{- $registryName := "" -}}
{{- if .globalValues }}
{{- if .globalValues.imageRegistry }}
{{- $registryName = printf "%s/" .globalValues.imageRegistry -}}
{{- end -}}
{{- end -}}
{{- if .imageRegistryValue }}
{{- $registryName = printf "%s/" .imageRegistryValue -}}
{{- end -}}
{{- printf "%s" $registryName }}
{{- end }}
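
The new `registryNamePrefix` helper gives a component-level `image.registry` value precedence over `global.imageRegistry`, falling back to an empty prefix (the default registry). A minimal plain-shell sketch of that precedence (function name is illustrative):

```shell
# Mirrors registryNamePrefix: component registry wins over the global one;
# an empty result means the image repository is used without a registry prefix.
registry_name_prefix() {
  global_registry="$1"; component_registry="$2"; prefix=""
  [ -n "$global_registry" ] && prefix="${global_registry}/"
  [ -n "$component_registry" ] && prefix="${component_registry}/"
  printf '%s\n' "$prefix"
}

registry_name_prefix "docker.io" ""        # prints "docker.io/"
registry_name_prefix "docker.io" "ghcr.io" # prints "ghcr.io/"
```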
{{/*
Reference Name (apiserver)
*/}}
{{- define "apiserver.referenceName" -}}
{{- include "clearml.name" . }}-apiserver
{{- include "clearml.fullname" . }}-apiserver
{{- end }}
{{/*
Selector labels (apiserver)
*/}}
{{- define "apiserver.selectorLabels" -}}
app.kubernetes.io/name: {{ include "clearml.name" . }}
app.kubernetes.io/name: {{ include "clearml.fullname" . }}
app.kubernetes.io/instance: {{ include "apiserver.referenceName" . }}
{{- end }}
@@ -69,14 +85,14 @@ app.kubernetes.io/instance: {{ include "apiserver.referenceName" . }}
Reference Name (fileserver)
*/}}
{{- define "fileserver.referenceName" -}}
{{- include "clearml.name" . }}-fileserver
{{- include "clearml.fullname" . }}-fileserver
{{- end }}
{{/*
Selector labels (fileserver)
*/}}
{{- define "fileserver.selectorLabels" -}}
app.kubernetes.io/name: {{ include "clearml.name" . }}
app.kubernetes.io/name: {{ include "clearml.fullname" . }}
app.kubernetes.io/instance: {{ include "fileserver.referenceName" . }}
{{- end }}
@@ -84,14 +100,14 @@ app.kubernetes.io/instance: {{ include "fileserver.referenceName" . }}
Reference Name (webserver)
*/}}
{{- define "webserver.referenceName" -}}
{{- include "clearml.name" . }}-webserver
{{- include "clearml.fullname" . }}-webserver
{{- end }}
{{/*
Selector labels (webserver)
*/}}
{{- define "webserver.selectorLabels" -}}
app.kubernetes.io/name: {{ include "clearml.name" . }}
app.kubernetes.io/name: {{ include "clearml.fullname" . }}
app.kubernetes.io/instance: {{ include "webserver.referenceName" . }}
{{- end }}
@@ -99,14 +115,14 @@ app.kubernetes.io/instance: {{ include "webserver.referenceName" . }}
Reference Name (apps)
*/}}
{{- define "clearmlApplications.referenceName" -}}
{{- include "clearml.name" . }}-apps
{{- include "clearml.fullname" . }}-apps
{{- end }}
{{/*
Selector labels (apps)
*/}}
{{- define "clearmlApplications.selectorLabels" -}}
app.kubernetes.io/name: {{ include "clearml.name" . }}
app.kubernetes.io/name: {{ include "clearml.fullname" . }}
app.kubernetes.io/instance: {{ include "clearmlApplications.referenceName" . }}
{{- end }}
@@ -137,25 +153,59 @@ Create readiness probe auth token
{{- printf "%s:%s" .Values.clearml.readinessprobeKey .Values.clearml.readinessprobeSecret | b64enc }}
{{- end }}
{{/*
Create configuration secret name
*/}}
{{- define "clearml.confSecretName" }}
{{- if .Values.clearml.existingSecret -}} {{ default "clearml-conf" .Values.clearml.existingSecret | quote }} {{- else -}} "clearml-conf" {{- end }}
{{- end }}
{{/*
compose file url
*/}}
{{- define "clearml.fileUrl" -}}
{{- if .Values.clearml.clientConfigurationFilesUrl }}
{{- .Values.clearml.clientConfigurationFilesUrl }}
{{- else if .Values.fileserver.ingress.enabled }}
{{- $protocol := "http" }}
{{- if .Values.fileserver.ingress.tlsSecretName }}
{{- $protocol = "https" }}
{{- end }}
{{- printf "%s%s%s" $protocol "://" .Values.fileserver.ingress.hostName }}
{{- else }}
{{- printf "%s%s%s%s" "http://" (include "fileserver.referenceName" .) ":" ( .Values.fileserver.service.port | toString ) }}
{{- end }}
{{- end }}
{{/*
Elasticsearch Service name
*/}}
{{- define "elasticsearch.servicename" -}}
{{- if .Values.elasticsearch.enabled }}
{{- .Values.elasticsearch.clusterName }}-master
{{- else }}
{{- .Values.externalServices.elasticsearchHost }}
{{- end }}
{{- end }}
{{/*
Elasticsearch Service port
*/}}
{{- define "elasticsearch.serviceport" -}}
{{- if .Values.elasticsearch.enabled }}
{{- .Values.elasticsearch.httpPort }}
{{- end }}
{{/*
Elasticsearch Service scheme
*/}}
{{- define "elasticsearch.servicescheme" -}}
{{- .Values.elasticsearch.httpScheme }}
{{- end }}
{{/*
Elasticsearch Connection string
*/}}
{{- define "elasticsearch.connectionstring" -}}
{{- if .Values.elasticsearch.enabled }}
{{- printf "[{\"host\":\"%s\",\"port\":%s,\"scheme\":\"%s\"}]" (include "elasticsearch.servicename" .) (include "elasticsearch.serviceport" .) (include "elasticsearch.servicescheme" .) | quote }}
{{- else }}
{{- .Values.externalServices.elasticsearchPort }}
{{- .Values.externalServices.elasticsearchConnectionString | quote }}
{{- end }}
{{- end }}
@@ -163,7 +213,6 @@ Elasticsearch Service port
MongoDB Connection string
*/}}
{{- define "mongodb.connectionstring" -}}
{{- if .Values.mongodb.enabled }}
{{- if eq .Values.mongodb.architecture "standalone" }}
{{- printf "%s%s%s" "mongodb://" .Release.Name "-mongodb:27017" }}
{{- else }}
@@ -173,8 +222,16 @@ MongoDB Comnnection string
{{- end }}
{{- printf "%s" ( trimSuffix "," $connectionString ) }}
{{- end }}
{{- end }}
{{/*
MongoDB hostname
*/}}
{{- define "mongodb.hostname" -}}
{{- if eq .Values.mongodb.architecture "standalone" }}
{{- printf "%s" "mongodb" }}
{{- else }}
{{- .Values.externalServices.mongodbConnectionString }}
{{- printf "%s" "mongodb-headless" }}
{{- end }}
{{- end }}
@@ -206,11 +263,11 @@ clientConfiguration string compose
{{- define "clearml.clientConfiguration" -}}
{{- $clientConfiguration := "" }}
{{- if and (.Values.clearml.clientConfigurationApiUrl) .Values.clearml.clientConfigurationFilesUrl }}
{{- $clientConfiguration = "{\"apiServer\":\"{{ .Values.clearml.clientConfigurationApiUrl }}\",\"filesServer\":\"{{ .Values.clearml.clientConfigurationFilesUrl }}\"}" }}
{{- $clientConfiguration = printf "%s%s%s%s%s" "{\"apiServer\":\"" .Values.clearml.clientConfigurationApiUrl "\",\"filesServer\":\"" .Values.clearml.clientConfigurationFilesUrl "\"}" }}
{{- else if .Values.clearml.clientConfigurationApiUrl }}
{{- $clientConfiguration = "{\"apiServer\":\"{{ .Values.clearml.clientConfigurationApiUrl }}\"}" }}
{{- $clientConfiguration = printf "%s%s%s" "{\"apiServer\":\"" .Values.clearml.clientConfigurationApiUrl "\"}" }}
{{- else if .Values.clearml.clientConfigurationFilesUrl }}
{{- $clientConfiguration = "{\"filesServer\":\"{{ .Values.clearml.clientConfigurationFilesUrl }}\"}" }}
{{- $clientConfiguration = printf "%s%s%s" "{\"filesServer\":\"" .Values.clearml.clientConfigurationFilesUrl "\"}" }}
{{- end }}
{{- $clientConfiguration }}
{{- end }}
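
The change above is needed because Helm does not re-render `{{ }}` actions embedded inside string literals, so the old assignments would have emitted the template text verbatim; the printf-style concatenation produces the intended JSON, which can be sketched in shell (example URLs are assumptions):

```shell
# Assumed example URLs; mirrors the printf-based composition in the helper.
api_url="https://api.clearml.example.com"
files_url="https://files.clearml.example.com"
printf '{"apiServer":"%s","filesServer":"%s"}\n' "$api_url" "$files_url"
```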

View File

@@ -0,0 +1,146 @@
{{- if .Values.apiserver.enabled }}
{{- if (include "clearml.fileUrl" .) }}
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ include "apiserver.referenceName" . }}-asyncdelete
labels:
{{- include "clearml.labels" . | nindent 4 }}
spec:
replicas: 1
selector:
matchLabels:
{{- include "clearml.selectorLabels" . | nindent 6 }}
template:
metadata:
{{- with .Values.apiserver.podAnnotations }}
annotations:
{{- toYaml . | nindent 8 }}
{{- end }}
labels:
{{- include "clearml.selectorLabels" . | nindent 8 }}
spec:
serviceAccountName: {{ .Values.apiserver.serviceAccountName }}-apiserver
{{- if .Values.imageCredentials.enabled }}
imagePullSecrets:
{{- if .Values.imageCredentials.existingSecret }}
- name: {{ .Values.imageCredentials.existingSecret }}
{{- else }}
- name: clearml-registry-key
{{- end }}
{{- end }}
{{- if or .Values.apiserver.additionalConfigs .Values.apiserver.existingAdditionalConfigsConfigMap .Values.apiserver.existingAdditionalConfigsSecret }}
volumes:
- name: apiserver-config
{{- if or .Values.apiserver.existingAdditionalConfigsConfigMap }}
configMap:
name: {{ .Values.apiserver.existingAdditionalConfigsConfigMap }}
{{- else if or .Values.apiserver.existingAdditionalConfigsSecret }}
secret:
secretName: {{ .Values.apiserver.existingAdditionalConfigsSecret }}
{{- else if or .Values.apiserver.additionalConfigs }}
configMap:
name: "{{ include "apiserver.referenceName" . }}-configmap"
{{- end }}
{{- end }}
securityContext:
{{ toYaml .Values.apiserver.podSecurityContext | nindent 8 }}
initContainers:
- name: init-apiserver
image: "{{ include "registryNamePrefix" (dict "globalValues" .Values.global "imageRegistryValue" .Values.apiserver.image.registry) }}{{ .Values.apiserver.image.repository }}:{{ .Values.apiserver.image.tag }}"
command:
- /bin/sh
- -c
- >
set -x;
{{- if .Values.elasticsearch.enabled }}
while [ $(curl -sw '%{http_code}' "http://{{ include "elasticsearch.servicename" . }}:{{ include "elasticsearch.serviceport" . }}/_cluster/health" -o /dev/null) -ne 200 ] ; do
echo "waiting for elasticsearch" ;
sleep 5 ;
done ;
{{- end }}
{{- if .Values.mongodb.enabled }}
while [ $(curl --telnet-option BOGUS --connect-timeout 2 -s "telnet://{{ .Release.Name }}-{{ include "mongodb.hostname" . }}:27017" -o /dev/null; echo $?) -ne 49 ] ; do
echo "waiting for mongodb" ;
sleep 5 ;
done ;
{{- end }}
{{- if .Values.redis.enabled }}
while [ $(curl --telnet-option BOGUS --connect-timeout 2 -s "telnet://{{ include "redis.servicename" . }}:{{ include "redis.serviceport" . }}" -o /dev/null; echo $?) -ne 49 ] ; do
echo "waiting for redis" ;
sleep 5 ;
done ;
{{- end }}
securityContext:
{{ toYaml .Values.apiserver.containerSecurityContext | nindent 12 }}
resources:
{{- toYaml .Values.apiserver.initContainers.resources | nindent 12 }}
containers:
- name: clearml-apiserver
image: "{{ include "registryNamePrefix" (dict "globalValues" .Values.global "imageRegistryValue" .Values.apiserver.image.registry) }}{{ .Values.apiserver.image.repository }}:{{ .Values.apiserver.image.tag }}"
imagePullPolicy: {{ .Values.apiserver.image.pullPolicy }}
command:
- /bin/sh
- -c
- >
python3 -m jobs.async_urls_delete --fileserver-host http://{{ include "fileserver.referenceName" . }}:{{ .Values.fileserver.service.port }}
env:
- name: CLEARML_REDIS_SERVICE_HOST
value: {{ include "redis.servicename" . }}
- name: CLEARML_REDIS_SERVICE_PORT
value: "{{ include "redis.serviceport" . }}"
{{- if .Values.mongodb.enabled }}
- name: CLEARML_MONGODB_SERVICE_CONNECTION_STRING
value: {{ include "mongodb.connectionstring" . | quote }}
{{- else }}
- name: CLEARML__HOSTS__MONGO__BACKEND__HOST
value: {{ .Values.externalServices.mongodbConnectionStringBackend | quote }}
- name: CLEARML__HOSTS__MONGO__AUTH__HOST
value: {{ .Values.externalServices.mongodbConnectionStringAuth | quote }}
{{- end }}
- name: CLEARML__HOSTS__ELASTIC__WORKERS__HOSTS
value: {{ include "elasticsearch.connectionstring" . }}
- name: CLEARML__HOSTS__ELASTIC__EVENTS__HOSTS
value: {{ include "elasticsearch.connectionstring" . }}
- name: CLEARML__HOSTS__ELASTIC__DATASETS__HOSTS
value: {{ include "elasticsearch.connectionstring" . }}
- name: CLEARML__HOSTS__ELASTIC__LOGS__HOSTS
value: {{ include "elasticsearch.connectionstring" . }}
- name: CLEARML__secure__auth__token_secret
valueFrom:
secretKeyRef:
name: {{ include "clearml.confSecretName" .}}
key: secure_auth_token_secret
- name: CLEARML__apiserver__default_company_name
value: "{{ .Values.clearml.defaultCompany }}"
- name: CLEARML__logging__handlers__text_file__filename
value: "/dev/null"
- name: PYTHONPATH
value: /opt/clearml/apiserver
- name: CLEARML__apiserver__default_company
value: "{{ .Values.clearml.defaultCompanyGuid }}"
- name: CLEARML__services__async_urls_delete__fileserver__url_prefixes
value: "[\"{{ include "clearml.fileUrl" . }}\"]"
{{- if or .Values.apiserver.additionalConfigs .Values.apiserver.existingAdditionalConfigsConfigMap .Values.apiserver.existingAdditionalConfigsSecret }}
volumeMounts:
- name: apiserver-config
mountPath: /opt/clearml/config
{{- end }}
resources:
{{- toYaml .Values.apiserver.resources | nindent 12 }}
securityContext:
{{ toYaml .Values.apiserver.containerSecurityContext | nindent 12 }}
{{- with .Values.apiserver.nodeSelector }}
nodeSelector:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.apiserver.affinity }}
affinity:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.apiserver.tolerations }}
tolerations:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- end }}
{{- end }}

View File

@@ -5,6 +5,10 @@ metadata:
name: {{ include "apiserver.referenceName" . }}
labels:
{{- include "clearml.labels" . | nindent 4 }}
{{- with .Values.apiserver.deploymentAnnotations }}
annotations:
{{ toYaml . | indent 4 }}
{{- end }}
spec:
replicas: {{ .Values.apiserver.replicaCount }}
selector:
@@ -19,95 +23,120 @@ spec:
labels:
{{- include "apiserver.selectorLabels" . | nindent 8 }}
spec:
serviceAccountName: {{ .Values.apiserver.serviceAccountName }}-apiserver
{{- if .Values.imageCredentials.enabled }}
imagePullSecrets:
{{- if .Values.imageCredentials.existingSecret }}
- name: .Values.imageCredentials.existingSecret
- name: {{ .Values.imageCredentials.existingSecret }}
{{- else }}
- name: clearml-registry-key
{{- end }}
{{- end }}
{{- if .Values.apiserver.additionalConfigs }}
volumes:
{{- if or .Values.apiserver.additionalConfigs .Values.apiserver.existingAdditionalConfigsConfigMap .Values.apiserver.existingAdditionalConfigsSecret .Values.apiserver.additionalVolumes }}
volumes:
{{- if .Values.apiserver.existingAdditionalConfigsConfigMap }}
- name: apiserver-config
configMap:
name: {{ .Values.apiserver.existingAdditionalConfigsConfigMap }}
{{- else if .Values.apiserver.existingAdditionalConfigsSecret }}
- name: apiserver-config
secret:
secretName: {{ .Values.apiserver.existingAdditionalConfigsSecret }}
{{- else if .Values.apiserver.additionalConfigs }}
- name: apiserver-config
configMap:
name: "{{ include "apiserver.referenceName" . }}-configmap"
{{- end }}
{{- if .Values.apiserver.additionalVolumes }}
{{- toYaml .Values.apiserver.additionalVolumes | nindent 8 }}
{{- end }}
{{- end }}
securityContext:
{{ toYaml .Values.apiserver.podSecurityContext | nindent 8 }}
initContainers:
- name: init-apiserver
image: "{{ .Values.apiserver.image.repository }}:{{ .Values.apiserver.image.tag | default .Chart.AppVersion }}"
image: "{{ include "registryNamePrefix" (dict "globalValues" .Values.global "imageRegistryValue" .Values.apiserver.image.registry) }}{{ .Values.apiserver.image.repository }}:{{ .Values.apiserver.image.tag }}"
command:
- /bin/sh
- -c
- >
set -x;
{{- if .Values.elasticsearch.enabled }}
while [ $(curl -sw '%{http_code}' "http://{{ include "elasticsearch.servicename" . }}:{{ include "elasticsearch.serviceport" . }}/_cluster/health" -o /dev/null) -ne 200 ] ; do
echo "waiting for elasticsearch" ;
sleep 5 ;
done
done ;
{{- end }}
{{- if .Values.mongodb.enabled }}
while [ $(curl --telnet-option BOGUS --connect-timeout 2 -s "telnet://{{ .Release.Name }}-{{ include "mongodb.hostname" . }}:27017" -o /dev/null; echo $?) -ne 49 ] ; do
echo "waiting for mongodb" ;
sleep 5 ;
done ;
{{- end }}
{{- if .Values.redis.enabled }}
while [ $(curl --telnet-option BOGUS --connect-timeout 2 -s "telnet://{{ include "redis.servicename" . }}:{{ include "redis.serviceport" . }}" -o /dev/null; echo $?) -ne 49 ] ; do
echo "waiting for redis" ;
sleep 5 ;
done ;
{{- end }}
securityContext:
{{ toYaml .Values.apiserver.containerSecurityContext | nindent 12 }}
resources:
{{- toYaml .Values.apiserver.initContainers.resources | nindent 12 }}
containers:
- name: clearml-apiserver
image: "{{ .Values.apiserver.image.repository }}:{{ .Values.apiserver.image.tag | default .Chart.AppVersion }}"
image: "{{ include "registryNamePrefix" (dict "globalValues" .Values.global "imageRegistryValue" .Values.apiserver.image.registry) }}{{ .Values.apiserver.image.repository }}:{{ .Values.apiserver.image.tag }}"
imagePullPolicy: {{ .Values.apiserver.image.pullPolicy }}
ports:
- name: http
containerPort: 8008
protocol: TCP
env:
- name: CLEARML_ELASTIC_SERVICE_HOST
value: {{ include "elasticsearch.servicename" . }}
- name: CLEARML_ELASTIC_SERVICE_PORT
value: "{{ include "elasticsearch.serviceport" . }}"
- name: CLEARML__HOSTS__ELASTIC__WORKERS__HOSTS
value: {{ include "elasticsearch.connectionstring" . }}
- name: CLEARML__HOSTS__ELASTIC__EVENTS__HOSTS
value: {{ include "elasticsearch.connectionstring" . }}
- name: CLEARML__HOSTS__ELASTIC__DATASETS__HOSTS
value: {{ include "elasticsearch.connectionstring" . }}
- name: CLEARML__HOSTS__ELASTIC__LOGS__HOSTS
value: {{ include "elasticsearch.connectionstring" . }}
{{- if .Values.mongodb.enabled }}
- name: CLEARML_MONGODB_SERVICE_CONNECTION_STRING
value: {{ include "mongodb.connectionstring" . | quote }}
{{- else }}
- name: CLEARML__HOSTS__MONGO__BACKEND__HOST
value: {{ .Values.externalServices.mongodbConnectionStringBackend | quote }}
- name: CLEARML__HOSTS__MONGO__AUTH__HOST
value: {{ .Values.externalServices.mongodbConnectionStringAuth | quote }}
{{- end }}
- name: CLEARML_REDIS_SERVICE_HOST
value: {{ include "redis.servicename" . }}
- name: CLEARML_REDIS_SERVICE_PORT
value: "{{ include "redis.serviceport" . }}"
- name: CLEARML_CONFIG_PATH
value: /opt/clearml/config/default
- name: CLEARML__apiserver__default_company
value: /opt/clearml/config
- name: CLEARML__apiserver__default_company_name
value: "{{ .Values.clearml.defaultCompany }}"
{{- if not (eq .Values.clearml.cookieDomain "") }}
- name: CLEARML__APISERVER__AUTH__SESSION_AUTH_COOKIE_NAME
value: {{ .Values.clearml.cookieName }}
{{- if .Values.clearml.cookieDomain }}
- name: CLEARML__APISERVER__AUTH__COOKIES__DOMAIN
value: ".{{ .Values.clearml.cookieDomain }}"
{{- end }}
- name: CLEARML__secure__credentials__apiserver__user_key
valueFrom:
secretKeyRef:
name: clearml-conf
name: {{ include "clearml.confSecretName" .}}
key: apiserver_key
- name: CLEARML__secure__credentials__apiserver__user_secret
valueFrom:
secretKeyRef:
name: clearml-conf
name: {{ include "clearml.confSecretName" .}}
key: apiserver_secret
- name: CLEARML__secure__credentials__fileserver__user_key
valueFrom:
secretKeyRef:
name: clearml-conf
key: fileserver_key
- name: CLEARML__secure__credentials__fileserver__user_secret
valueFrom:
secretKeyRef:
name: clearml-conf
key: fileserver_secret
- name: CLEARML__secure__applications__agents_credentials__apps_agent__user_key
valueFrom:
secretKeyRef:
name: clearml-conf
key: apps_agent_key
- name: CLEARML__secure__applications__agents_credentials__apps_agent__user_secret
valueFrom:
secretKeyRef:
name: clearml-conf
key: apps_agent_secret
- name: CLEARML__secure__auth__token_secret
valueFrom:
secretKeyRef:
name: clearml-conf
name: {{ include "clearml.confSecretName" .}}
key: secure_auth_token_secret
{{- if .Values.apiserver.prepopulateEnabled }}
- name: CLEARML__APISERVER__PRE_POPULATE__ENABLED
@@ -115,72 +144,23 @@ spec:
- name: CLEARML__APISERVER__PRE_POPULATE__ZIP_FILES
value: "/opt/clearml/db-pre-populate"
{{- end }}
{{- if .Values.enterpriseFeatures.enabled }}
- name: CLEARML__apiserver__default_company
value: "{{ .Values.enterpriseFeatures.defaultCompanyGuid }}"
- name: APPLY_ES_MAPPINGS
value: "false"
- name: CLEARML__HOSTS__ELASTIC__LOGS__HOSTS
value: "[\"http://{{ include "elasticsearch.servicename" . }}:{{ include "elasticsearch.serviceport" . }}\"]"
- name: NUMBER_OF_GUNICORN_WORKERS
value: "{{ .Values.apiserver.processes.count }}"
- name: GUNICORN_TIMEOUT
value: "{{ .Values.apiserver.processes.timeout }}"
- name: GUNICORN_MAX_REQUESTS
value: "{{ .Values.apiserver.processes.maxRequests }}"
- name: GUNICORN_MAX_REQUESTS_JITTER
value: "{{ .Values.apiserver.processes.maxRequestsJitter }}"
- name: CLEARML_CONFIG_VERBOSE
value: "0"
- name: CLEARML__SERVICES__APPLICATIONS__TEMPLATES__FOLDER
value: "/opt/allegro/config/applications"
- name: CLEARML__apiserver__apilog__prefix
value: "fluentd."
- name: CLEARML__apiserver__apilog__index_name_prefix__default
value: "allegro.apiserver.api-logs."
- name: CLEARML__apiserver__apilog__adapter
value: "logging"
- name: CLEARML__apiserver__apilog__rotation__index_size
value: "225000"
- name: CLEARML__services__tasks__non_responsive_tasks_watchdog__enabled
value: "false"
- name: CLEARML__APISERVER__AUTH__COOKIES__MAX_AGE
value: "2678400"
- name: CLEARML__services__frames__scroll_state_expiration_hours
value: "6"
- name: CLEARML__services__organization__features__applications
value: "true"
- name: CLEARML__services__organization__features__app_management
value: "true"
- name: CLEARML__SERVICES___ELASTIC__MAPPINGS__EVENTS__NUMBER_OF_REPLICAS
value: {{ .Values.apiserver.indexReplicas | quote }}
- name: CLEARML__SERVICES___ELASTIC__MAPPINGS__EVENTS__NUMBER_OF_SHARDS
value: {{ .Values.apiserver.indexShards | quote }}
- name: CLEARML__APISERVER__LOG_CALLS
value: "false"
- name: CLEARML_ENV
value: "onprem_k8s"
{{- else }}
- name: CLEARML__SECURE__CREDENTIALS__TESTS__USER_KEY
valueFrom:
secretKeyRef:
name: "clearml-conf"
name: {{ include "clearml.confSecretName" .}}
key: test_user_key
- name: CLEARML__SECURE__CREDENTIALS__TESTS__USER_SECRET
valueFrom:
secretKeyRef:
name: "clearml-conf"
name: {{ include "clearml.confSecretName" .}}
key: test_user_secret
- name: CLEARML_ENV
value: "helm-cloud"
{{- end }}
{{- if .Values.apiserver.extraEnvs }}
{{ toYaml .Values.apiserver.extraEnvs | nindent 10 }}
{{- end }}
{{- if not .Values.enterpriseFeatures.enabled }}
args:
- apiserver
{{- end }}
livenessProbe:
initialDelaySeconds: 60
httpGet:
@@ -190,22 +170,25 @@ spec:
initialDelaySeconds: 60
failureThreshold: 8
httpGet:
{{- if .Values.enterpriseFeatures.enabled }}
path: /server.health_check
{{- else }}
path: /debug.ping
{{- end }}
port: 8008
httpHeaders:
- name: Authorization
value: Basic {{ include "readinessProbeAuth" . }}
{{- if .Values.apiserver.additionalConfigs }}
{{- if or .Values.apiserver.additionalConfigs .Values.apiserver.existingAdditionalConfigsConfigMap .Values.apiserver.existingAdditionalConfigsSecret .Values.apiserver.additionalVolumeMounts }}
volumeMounts:
{{- if or .Values.apiserver.additionalConfigs .Values.apiserver.existingAdditionalConfigsConfigMap .Values.apiserver.existingAdditionalConfigsSecret }}
- name: apiserver-config
mountPath: /opt/clearml/config/default
mountPath: /opt/clearml/config
{{- end }}
{{- if .Values.apiserver.additionalVolumeMounts }}
{{- toYaml .Values.apiserver.additionalVolumeMounts | nindent 12 }}
{{- end }}
{{- end }}
resources:
{{- toYaml .Values.apiserver.resources | nindent 12 }}
securityContext:
{{ toYaml .Values.apiserver.containerSecurityContext | nindent 12 }}
{{- with .Values.apiserver.nodeSelector }}
nodeSelector:
{{- toYaml . | nindent 8 }}

View File

@@ -20,6 +20,9 @@ metadata:
{{- toYaml $annotations | nindent 4 }}
spec:
{{- if .Values.apiserver.ingress.ingressClassName }}
ingressClassName: {{ .Values.apiserver.ingress.ingressClassName }}
{{- end }}
{{- if .Values.apiserver.ingress.tlsSecretName }}
tls:
- hosts:

View File

@@ -5,6 +5,10 @@ metadata:
name: {{ include "apiserver.referenceName" . }}
labels:
{{- include "clearml.labels" . | nindent 4 }}
{{- with .Values.apiserver.service.annotations }}
annotations:
{{- toYaml . | nindent 4 }}
{{- end }}
spec:
type: {{ .Values.apiserver.service.type }}
ports:

View File

@@ -1,120 +0,0 @@
{{- if .Values.enterpriseFeatures.enabled }}
{{- if .Values.enterpriseFeatures.clearmlApplications.enabled }}
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ include "clearmlApplications.referenceName" . }}
labels:
{{- include "clearml.labels" . | nindent 4 }}
spec:
replicas: {{ .Values.enterpriseFeatures.clearmlApplications.replicaCount }}
selector:
matchLabels:
{{- include "clearmlApplications.selectorLabels" . | nindent 6 }}
template:
metadata:
{{- with .Values.enterpriseFeatures.clearmlApplications.podAnnotations }}
annotations:
{{- toYaml . | nindent 8 }}
{{- end }}
labels:
{{- include "clearmlApplications.selectorLabels" . | nindent 8 }}
spec:
{{- if .Values.imageCredentials.enabled }}
imagePullSecrets:
{{- if .Values.imageCredentials.existingSecret }}
- name: .Values.imageCredentials.existingSecret
{{- else }}
- name: clearml-registry-key
{{- end }}
{{- end }}
{{- if .Values.enterpriseFeatures.clearmlApplications.additionalConfigs }}
volumes:
- name: apps-config
configMap:
name: "{{ include "clearmlApplications.referenceName" . }}-configmap"
{{- end }}
serviceAccountName: "clearml-apps-sa"
initContainers:
- name: init-apps
image: "{{ .Values.enterpriseFeatures.clearmlApplications.image.repository }}:{{ .Values.enterpriseFeatures.clearmlApplications.image.tag | default .Chart.AppVersion }}"
command:
- /bin/sh
- -c
- >
set -x;
while [ $(curl -sw '%{http_code}' "http://{{ include "apiserver.referenceName" . }}:{{ .Values.apiserver.service.port }}/debug.ping" -o /dev/null) -ne 200 ] ; do
echo "waiting for apiserver" ;
sleep 5 ;
done
containers:
- name: clearml-apps
image: "{{ .Values.enterpriseFeatures.clearmlApplications.image.repository }}:{{ .Values.enterpriseFeatures.clearmlApplications.image.tag | default .Chart.AppVersion }}"
imagePullPolicy: {{ .Values.enterpriseFeatures.clearmlApplications.image.pullPolicy }}
ports:
- name: http
containerPort: 8008
protocol: TCP
env:
- name: CLEARML_API_HOST
value: "http://{{ include "apiserver.referenceName" . }}:{{ .Values.apiserver.service.port }}"
- name: CLEARML_FILES_HOST
value: "http://{{ include "fileserver.referenceName" . }}:{{ .Values.fileserver.service.port }}"
- name: CLEARML_WEB_HOST
value: "http://{{ include "webserver.referenceName" . }}:{{ .Values.webserver.service.port }}"
- name: CLEARML_AGENT_DEFAULT_BASE_DOCKER
value: "{{ .Values.enterpriseFeatures.clearmlApplications.basePodImage.repository }}:{{ .Values.enterpriseFeatures.clearmlApplications.basePodImage.tag }}"
- name: CLEARML_WORKER_ID
value: "apps-agent-1"
- name: CLEARML_NO_DEFAULT_SERVER
value: "true"
- name: CLEARML_AGENT_DAEMON_OPTIONS
value: "--foreground --create-queue --use-owner-token --child-report-tags application --services-mode=5"
- name: K8S_GLUE_QUEUE
value: "apps_queue"
- name: CLEARML_AGENT_DISABLE_SSH_MOUNT
value: "1"
- name: CLEARML_API_ACCESS_KEY
valueFrom:
secretKeyRef:
name: clearml-conf
key: apps_agent_key
- name: CLEARML_API_SECRET_KEY
valueFrom:
secretKeyRef:
name: clearml-conf
key: apps_agent_secret
- name: CLEARML_AGENT_GIT_USER
valueFrom:
secretKeyRef:
name: clearml-conf
key: apps_git_agent_user
- name: CLEARML_AGENT_GIT_PASS
valueFrom:
secretKeyRef:
name: clearml-conf
key: apps_git_agent_pass
{{- if .Values.enterpriseFeatures.clearmlApplications.extraEnvs }}
{{ toYaml .Values.enterpriseFeatures.clearmlApplications.extraEnvs | nindent 10 }}
{{- end }}
{{- if .Values.enterpriseFeatures.clearmlApplications.additionalConfigs }}
volumeMounts:
- name: apps-config
mountPath: /opt/clearml/config/default
{{- end }}
resources:
{{- toYaml .Values.enterpriseFeatures.clearmlApplications.resources | nindent 12 }}
{{- with .Values.enterpriseFeatures.clearmlApplications.nodeSelector }}
nodeSelector:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.enterpriseFeatures.clearmlApplications.affinity }}
affinity:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.enterpriseFeatures.clearmlApplications.tolerations }}
tolerations:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- end }}
{{- end }}

View File

@@ -1,58 +0,0 @@
{{- if .Values.enterpriseFeatures.enabled }}
{{- if .Values.enterpriseFeatures.clearmlApplications.enabled }}
apiVersion: v1
kind: ServiceAccount
metadata:
name: "clearml-apps-sa"
namespace: {{ .Release.Namespace }}
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: {{ include "clearmlApplications.referenceName" . }}-kpa
rules:
- apiGroups:
- ""
resources:
- pods
verbs: ["get", "list", "watch", "create", "patch", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: {{ include "clearmlApplications.referenceName" . }}-kpa
subjects:
- kind: ServiceAccount
name: "clearml-apps-sa"
namespace: {{ .Release.Namespace }}
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: {{ include "clearmlApplications.referenceName" . }}-kpa
{{- else }}
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: {{ include "clearmlApplications.referenceName" . }}-kpa
rules:
- apiGroups:
- ""
resources:
- pods
verbs: ["get", "list", "watch", "create", "patch", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: {{ include "clearmlApplications.referenceName" . }}-kpa
subjects:
- kind: ServiceAccount
name: "clearml-apps-sa"
namespace: {{ .Release.Namespace }}
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: {{ include "clearmlApplications.referenceName" . }}-kpa
{{- end }}
{{- end }}

View File

@@ -7,11 +7,7 @@ data:
apiserver_secret: {{ .Values.clearml.apiserverSecret | b64enc }}
fileserver_key: {{ .Values.clearml.fileserverKey | b64enc }}
fileserver_secret: {{ .Values.clearml.fileserverSecret | b64enc }}
apps_agent_key: {{ .Values.enterpriseFeatures.clearmlApplications.agentKey | b64enc }}
apps_agent_secret: {{ .Values.enterpriseFeatures.clearmlApplications.agentSecret | b64enc }}
secure_auth_token_secret: {{ .Values.clearml.secureAuthTokenSecret | b64enc }}
apps_git_agent_user: {{ .Values.enterpriseFeatures.clearmlApplications.gitAgentUser | b64enc }}
apps_git_agent_pass: {{ .Values.enterpriseFeatures.clearmlApplications.gitAgentPass | b64enc }}
test_user_key: {{ .Values.clearml.testUserKey | b64enc }}
test_user_secret: {{ .Values.clearml.testUserSecret | b64enc }}
---

View File

@@ -5,6 +5,10 @@ metadata:
name: {{ include "fileserver.referenceName" . }}
labels:
{{- include "clearml.labels" . | nindent 4 }}
{{- with .Values.fileserver.deploymentAnnotations }}
annotations:
{{ toYaml . | indent 4 }}
{{- end }}
spec:
replicas: {{ .Values.fileserver.replicaCount }}
selector:
@@ -19,33 +23,54 @@ spec:
labels:
{{- include "fileserver.selectorLabels" . | nindent 8 }}
spec:
serviceAccountName: {{ .Values.fileserver.serviceAccountName }}-fileserver
{{- if .Values.imageCredentials.enabled }}
imagePullSecrets:
{{- if .Values.imageCredentials.existingSecret }}
- name: .Values.imageCredentials.existingSecret
- name: {{ .Values.imageCredentials.existingSecret }}
{{- else }}
- name: clearml-registry-key
{{- end }}
{{- end }}
volumes:
{{- if .Values.fileserver.storage.enabled }}
{{- if .Values.fileserver.storage.data.existingPVC }}
- name: fileserver-data
persistentVolumeClaim:
claimName: {{ .Values.fileserver.storage.data.existingPVC | quote }}
{{- else }}
- name: fileserver-data
persistentVolumeClaim:
claimName: {{ include "fileserver.referenceName" . }}-data
{{- end }}
{{- else }}
- name: fileserver-data
emptyDir: {}
{{- end }}
{{- if .Values.fileserver.additionalVolumes }}
{{- toYaml .Values.fileserver.additionalVolumes | nindent 8 }}
{{- end }}
securityContext:
{{ toYaml .Values.fileserver.podSecurityContext | nindent 8 }}
initContainers:
- name: init-fileserver
image: "{{ .Values.fileserver.image.repository }}:{{ .Values.fileserver.image.tag | default .Chart.AppVersion }}"
command:
- /bin/sh
- -c
- >
set -x;
while [ $(curl -sw '%{http_code}' "http://{{ include "apiserver.referenceName" . }}:{{ .Values.apiserver.service.port }}/debug.ping" -o /dev/null) -ne 200 ] ; do
echo "waiting for apiserver" ;
sleep 5 ;
done
- name: init-fileserver
image: "{{ include "registryNamePrefix" (dict "globalValues" .Values.global "imageRegistryValue" .Values.fileserver.image.registry) }}{{ .Values.fileserver.image.repository }}:{{ .Values.fileserver.image.tag }}"
command:
- /bin/sh
- -c
- >
set -x;
while [ $(curl -sw '%{http_code}' "http://{{ include "apiserver.referenceName" . }}:{{ .Values.apiserver.service.port }}/debug.ping" -o /dev/null) -ne 200 ] ; do
echo "waiting for apiserver" ;
sleep 5 ;
done
securityContext:
{{ toYaml .Values.fileserver.containerSecurityContext | nindent 12 }}
resources:
{{- toYaml .Values.fileserver.initContainers.resources | nindent 12 }}
containers:
- name: clearml-fileserver
image: "{{ .Values.fileserver.image.repository }}:{{ .Values.fileserver.image.tag | default .Chart.AppVersion }}"
image: "{{ include "registryNamePrefix" (dict "globalValues" .Values.global "imageRegistryValue" .Values.fileserver.image.registry) }}{{ .Values.fileserver.image.repository }}:{{ .Values.fileserver.image.tag }}"
imagePullPolicy: {{ .Values.fileserver.image.pullPolicy }}
ports:
- name: http
@@ -65,20 +90,18 @@ spec:
- name: USER_KEY
valueFrom:
secretKeyRef:
name: clearml-conf
name: {{ include "clearml.confSecretName" .}}
key: fileserver_key
- name: USER_SECRET
valueFrom:
secretKeyRef:
name: clearml-conf
secretKeyRef:
name: {{ include "clearml.confSecretName" .}}
key: fileserver_secret
{{- if .Values.fileserver.extraEnvs }}
{{ toYaml .Values.fileserver.extraEnvs | nindent 10 }}
{{- end }}
{{- if not .Values.enterpriseFeatures.enabled }}
args:
- fileserver
{{- end }}
livenessProbe:
exec:
command:
@@ -94,8 +117,13 @@ spec:
volumeMounts:
- name: fileserver-data
mountPath: /mnt/fileserver
{{- if .Values.fileserver.additionalVolumeMounts }}
{{- toYaml .Values.fileserver.additionalVolumeMounts | nindent 12 }}
{{- end }}
resources:
{{- toYaml .Values.fileserver.resources | nindent 12 }}
securityContext:
{{ toYaml .Values.fileserver.containerSecurityContext | nindent 12 }}
{{- with .Values.fileserver.nodeSelector }}
nodeSelector:
{{- toYaml . | nindent 8 }}

View File

@@ -19,6 +19,9 @@ metadata:
annotations:
{{- toYaml $annotations | nindent 4 }}
spec:
{{- if .Values.fileserver.ingress.ingressClassName }}
ingressClassName: {{ .Values.fileserver.ingress.ingressClassName }}
{{- end }}
{{- if .Values.fileserver.ingress.tlsSecretName }}
tls:
- hosts:

View File

@@ -1,4 +1,6 @@
{{- if .Values.fileserver.enabled }}
{{- if .Values.fileserver.storage.enabled }}
{{- if not .Values.fileserver.storage.data.existingPVC }}
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
@@ -11,7 +13,9 @@ spec:
resources:
requests:
storage: {{ .Values.fileserver.storage.data.size | quote }}
{{- if .Values.fileserver.storage.data.class -}}
{{- if .Values.fileserver.storage.data.class }}
storageClassName: {{ .Values.fileserver.storage.data.class | quote }}
{{- end -}}
{{- end }}
{{- end }}
{{- end }}

View File

@@ -5,6 +5,10 @@ metadata:
name: {{ include "fileserver.referenceName" . }}
labels:
{{- include "clearml.labels" . | nindent 4 }}
{{- with .Values.fileserver.service.annotations }}
annotations:
{{- toYaml . | nindent 4 }}
{{- end }}
spec:
type: {{ .Values.fileserver.service.type }}
ports:

View File

@@ -0,0 +1,26 @@
apiVersion: v1
kind: ServiceAccount
metadata:
name: {{ .Values.apiserver.serviceAccountName }}-apiserver
{{- if .Values.apiserver.serviceAccountAnnotations }}
annotations:
{{- toYaml .Values.apiserver.serviceAccountAnnotations | nindent 4 }}
{{- end }}
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: {{ .Values.fileserver.serviceAccountName }}-fileserver
{{- if .Values.fileserver.serviceAccountAnnotations }}
annotations:
{{- toYaml .Values.fileserver.serviceAccountAnnotations | nindent 4 }}
{{- end }}
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: {{ .Values.webserver.serviceAccountName }}-webserver
{{- if .Values.webserver.serviceAccountAnnotations }}
annotations:
{{- toYaml .Values.webserver.serviceAccountAnnotations | nindent 4 }}
{{- end }}
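These per-component ServiceAccounts read an optional `serviceAccountAnnotations` map from values. A hypothetical override — the IRSA role ARN below is purely illustrative, not from the chart:

```yaml
# values.yaml override sketch -- annotation key and value are examples only.
apiserver:
  serviceAccountName: clearml
  serviceAccountAnnotations:
    eks.amazonaws.com/role-arn: "arn:aws:iam::111122223333:role/clearml-apiserver"
```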

View File

@@ -6,24 +6,6 @@ metadata:
labels:
{{- include "clearml.labels" . | nindent 4 }}
data:
{{- if .Values.enterpriseFeatures.enabled }}
configuration.json: |
{
"gettingStartedContext": {
"install":"pip install -U --extra-index-url {{ .Values.enterpriseFeatures.extraIndexUrl }} allegroai",
"configure": "allegroai-init",
"packageName": "allegroai",
"agentName": "allegroai"
},
"docsLink": "https://clear.ml/docs/",
"applicationsBackground": "ui-assets/apps-message.svg"
{{- if and .Values.webserver.overrideReferenceApiUrl .Values.enterpriseFeatures.overrideReferenceFileUrl }}
,
"fileBaseUrl": "{{ .Values.enterpriseFeatures.overrideReferenceFileUrl }}",
"apiBaseUrl": "{{ .Values.enterpriseFeatures.overrideReferenceApiUrl }}"
{{- end }}
}
{{- end }}
{{- range $key, $val := .Values.webserver.additionalConfigs }}
{{ $key }}: |
{{- $val | nindent 4 }}

View File

@@ -5,6 +5,10 @@ metadata:
name: {{ include "webserver.referenceName" . }}
labels:
{{- include "clearml.labels" . | nindent 4 }}
{{- with .Values.webserver.deploymentAnnotations }}
annotations:
{{ toYaml . | indent 4 }}
{{- end }}
spec:
replicas: {{ .Values.webserver.replicaCount }}
selector:
@@ -19,10 +23,11 @@ spec:
labels:
{{- include "webserver.selectorLabels" . | nindent 8 }}
spec:
serviceAccountName: {{ .Values.webserver.serviceAccountName }}-webserver
{{- if .Values.imageCredentials.enabled }}
imagePullSecrets:
{{- if .Values.imageCredentials.existingSecret }}
- name: .Values.imageCredentials.existingSecret
- name: {{ .Values.imageCredentials.existingSecret }}
{{- else }}
- name: clearml-registry-key
{{- end }}
@@ -31,28 +36,14 @@ spec:
- name: webserver-config
configMap:
name: "{{ include "webserver.referenceName" . }}-configmap"
{{- if .Values.enterpriseFeatures.airGappedDocumentation.enabled }}
- name: documentation
emptyDir: {}
{{- if .Values.webserver.additionalVolumes }}
{{- toYaml .Values.webserver.additionalVolumes | nindent 8 }}
{{- end }}
securityContext:
{{ toYaml .Values.webserver.podSecurityContext | nindent 8 }}
initContainers:
{{- if .Values.enterpriseFeatures.airGappedDocumentation.enabled }}
- name: init-airgap-docs
image: "{{ .Values.enterpriseFeatures.airGappedDocumentation.image.repository }}:{{ .Values.enterpriseFeatures.airGappedDocumentation.image.tag | default .Chart.AppVersion }}"
command:
- /bin/sh
- -c
- cp -a /docs_site/* /usr/share/nginx/html/clearml
volumeMounts:
- name: webserver-config
mountPath: /mnt/external_files/configs
{{- if .Values.enterpriseFeatures.airGappedDocumentation.enabled }}
- mountPath: /usr/share/nginx/html/clearml
name: documentation
{{- end }}
{{- end }}
- name: init-webserver
image: "{{ .Values.webserver.image.repository }}:{{ .Values.webserver.image.tag | default .Chart.AppVersion }}"
image: "{{ include "registryNamePrefix" (dict "globalValues" .Values.global "imageRegistryValue" .Values.webserver.image.registry) }}{{ .Values.webserver.image.repository }}:{{ .Values.webserver.image.tag }}"
command:
- /bin/sh
- -c
@@ -62,65 +53,35 @@ spec:
echo "waiting for apiserver" ;
sleep 5 ;
done
securityContext:
{{ toYaml .Values.webserver.containerSecurityContext | nindent 12 }}
resources:
{{- toYaml .Values.webserver.initContainers.resources | nindent 12 }}
containers:
- name: clearml-webserver
image: "{{ .Values.webserver.image.repository }}:{{ .Values.webserver.image.tag | default .Chart.AppVersion }}"
image: "{{ include "registryNamePrefix" (dict "globalValues" .Values.global "imageRegistryValue" .Values.webserver.image.registry) }}{{ .Values.webserver.image.repository }}:{{ .Values.webserver.image.tag }}"
imagePullPolicy: {{ .Values.webserver.image.pullPolicy }}
ports:
- name: http
{{- if .Values.enterpriseFeatures.enabled }}
containerPort: 8080
{{- else }}
containerPort: 80
{{- end }}
protocol: TCP
livenessProbe:
exec:
command:
- curl
- -X OPTIONS
{{- if .Values.enterpriseFeatures.enabled }}
- http://localhost:8080/
{{- else }}
- http://localhost:80/
{{- end }}
readinessProbe:
exec:
command:
- curl
- -X OPTIONS
{{- if .Values.enterpriseFeatures.enabled }}
- http://localhost:8080/
{{- else }}
- http://localhost:80/
{{- end }}
env:
- name: NGINX_APISERVER_ADDRESS
value: "http://{{ include "apiserver.referenceName" . }}:{{ .Values.apiserver.service.port }}"
- name: NGINX_FILESERVER_ADDRESS
value: "http://{{ include "fileserver.referenceName" . }}:{{ .Values.fileserver.service.port }}"
{{- if .Values.enterpriseFeatures.enabled }}
{{- if .Values.enterpriseFeatures.airGappedDocumentation.enabled }}
- name: WEBSERVER__docsLink
value: "\"clearml/docs/\""
{{- end }}
- name: COMPANY_ID
value: "{{ .Values.clearml.defaultCompany }}"
- name: WEBSERVER__appsYouTubeIntroLink
value: "\"https://www.youtube.com/embed/HACL60h1Z54\""
- name: WEBSERVER__appsYouTubeIntroVideoId
value: "\"HACL60h1Z54\""
- name: USER_KEY
valueFrom:
secretKeyRef:
name: clearml-conf
key: apiserver_key
- name: USER_SECRET
valueFrom:
secretKeyRef:
name: clearml-conf
key: apiserver_secret
{{- end }}
{{- if include "clearml.clientConfiguration" . }}
- name: WEBSERVER__displayedServerUrls
value: {{ include "clearml.clientConfiguration" . | quote }}
@@ -128,19 +89,18 @@ spec:
{{- if .Values.webserver.extraEnvs }}
{{ toYaml .Values.webserver.extraEnvs | nindent 10 }}
{{- end }}
{{- if not .Values.enterpriseFeatures.enabled }}
args:
- webserver
{{- end }}
volumeMounts:
- name: webserver-config
mountPath: /mnt/external_files/configs
{{- if .Values.enterpriseFeatures.airGappedDocumentation.enabled }}
- mountPath: /usr/share/nginx/html/clearml
name: documentation
{{- if .Values.webserver.additionalVolumeMounts }}
{{- toYaml .Values.webserver.additionalVolumeMounts | nindent 12 }}
{{- end }}
resources:
{{- toYaml .Values.webserver.resources | nindent 12 }}
securityContext:
{{ toYaml .Values.webserver.containerSecurityContext | nindent 12 }}
{{- with .Values.webserver.nodeSelector }}
nodeSelector:
{{- toYaml . | nindent 8 }}

View File

@@ -19,6 +19,9 @@ metadata:
annotations:
{{- toYaml $annotations | nindent 4 }}
spec:
{{- if .Values.webserver.ingress.ingressClassName }}
ingressClassName: {{ .Values.webserver.ingress.ingressClassName }}
{{- end }}
{{- if .Values.webserver.ingress.tlsSecretName }}
tls:
- hosts:

View File

@@ -5,15 +5,15 @@ metadata:
name: {{ include "webserver.referenceName" . }}
labels:
{{- include "clearml.labels" . | nindent 4 }}
{{- with .Values.webserver.service.annotations }}
annotations:
{{- toYaml . | nindent 4 }}
{{- end }}
spec:
type: {{ .Values.webserver.service.type }}
ports:
- port: {{ .Values.webserver.service.port }}
{{- if .Values.enterpriseFeatures.enabled }}
targetPort: 8080
{{- else }}
targetPort: 80
{{- end }}
{{- if eq .Values.webserver.service.type "NodePort" }}
nodePort: {{ .Values.webserver.service.nodePort }}
{{- end }}

View File

@@ -16,10 +16,22 @@ webserver:
ingress:
enabled: true
hostName: "app.clearml.127-0-0-1.nip.io"
redis:
architecture: replication
master:
persistence:
enabled: true
accessModes:
- ReadWriteOnce
size: 5Gi
## If undefined (the default) or set to null, no storageClassName spec is set, choosing the default provisioner
storageClass: null
replica:
replicaCount: 2
mongodb:
enabled: true
architecture: replicaset
replicaCount: 3
replicaCount: 2
arbiter:
enabled: false
pdb:

View File

@@ -1,3 +1,8 @@
# -- Global parameters section
global:
# -- Images registry
imageRegistry: "docker.io"
# -- Container registry configuration
imageCredentials:
# -- Use private authentication mode
@@ -43,20 +48,32 @@ clearml:
clientConfigurationApiUrl: ""
# -- Override the Files Urls displayed when showing an example of the SDK's clearml.conf configuration
clientConfigurationFilesUrl: ""
# -- Pass Clearml secrets using an existing secret
# must contain the keys: apiserver_key, apiserver_secret, secure_auth_token_secret, test_user_key, test_user_secret
existingSecret: ""
# -- Api Server configurations
apiserver:
# -- Enable/Disable component deployment
enabled: true
# -- Add the provided map to the annotations for the Deployment resource created by this chart.
deploymentAnnotations: {}
# -- Enable/Disable example data load
prepopulateEnabled: true
# -- The default serviceAccountName to be used
serviceAccountName: clearml
# -- Add the provided map to the annotations for the ServiceAccount resource created by this chart.
serviceAccountAnnotations: {}
# -- Api Server image configuration
image:
registry: ""
repository: "allegroai/clearml"
pullPolicy: IfNotPresent
tag: "1.9.1-312"
tag: "2.0.0-613"
# -- Api Server internal service configuration
service:
# -- specific annotation for Api Server service
annotations: {}
type: NodePort
port: 8008
# -- If service.type set to NodePort, this will be set to service's nodePort field.
@@ -64,10 +81,21 @@ apiserver:
nodePort: 30008
# -- Api Server number of pods
replicaCount: 1
# -- Api Server resources per initContainers pod
initContainers:
resources:
requests:
cpu: 10m
memory: 64Mi
limits:
cpu: 10m
memory: 64Mi
# -- Ingress configuration for Api Server component
ingress:
# -- Enable/Disable ingress
enabled: false
# -- ClassName (must be defined if no default ingressClassName is available)
ingressClassName: ""
# -- Ingress hostname domain
hostName: "api.clearml.127-0-0-1.nip.io"
# -- Reference to secret containing TLS certificate. If set, it enables HTTPS on ingress rule.
@@ -88,10 +116,6 @@ apiserver:
maxRequestsJitter: 300
# -- Api Server extra environment variables
extraEnvs: []
# -- Number of additional replicas in Elasticsearch indexes
indexReplicas: 0
# -- Number of shards in Elasticsearch indexes
indexShards: 1
# -- specific annotation for Api Server pods
podAnnotations: {}
# -- Api Server resources per pod; these are minimal requirements, it's suggested to increase
@@ -109,7 +133,17 @@ apiserver:
tolerations: []
# -- Api Server affinity setup
affinity: {}
# -- files declared in this parameter will be mounted and read by apiserver (examples in values.yaml)
# -- Api Server pod security context
podSecurityContext: {}
# -- Api Server containers security context
containerSecurityContext: {}
# runAsUser: 1001
# fsGroup: 1001
# -- reference for files declared in existing ConfigMap will be mounted and read by apiserver (examples in values.yaml)
existingAdditionalConfigsConfigMap: ""
# -- reference for files declared in existing Secret will be mounted and read by apiserver (examples in values.yaml) if not overridden by existingAdditionalConfigsConfigMap
existingAdditionalConfigsSecret: ""
# -- files declared in this parameter will be mounted and read by apiserver (examples in values.yaml) if not overridden by existingAdditionalConfigsSecret
additionalConfigs: {}
# services.conf: |
# tasks {
@@ -139,18 +173,37 @@ apiserver:
# ]
# }
# }
# -- Defines extra Kubernetes volumes to be attached to the pod.
additionalVolumes: {}
# - name: ramdisk
# emptyDir:
# medium: Memory
# sizeLimit: 32Gi
# -- Specifies where and how to mount the volumes defined in additionalVolumes.
additionalVolumeMounts: {}
# - mountPath: /dev/shm
# name: ramdisk
# -- File Server configurations
fileserver:
# -- Enable/Disable component deployment
enabled: true
# -- Add the provided map to the annotations for the Deployment resource created by this chart.
deploymentAnnotations: {}
# -- The default serviceAccountName to be used
serviceAccountName: clearml
# -- Add the provided map to the annotations for the ServiceAccount resource created by this chart.
serviceAccountAnnotations: {}
# -- File Server image configuration
image:
registry: ""
repository: "allegroai/clearml"
pullPolicy: IfNotPresent
tag: "1.9.1-312"
tag: "2.0.0-613"
# -- File Server internal service configuration
service:
# -- specific annotation for File Server service
annotations: {}
type: NodePort
port: 8081
# -- If service.type set to NodePort, this will be set to service's nodePort field.
@@ -158,10 +211,21 @@ fileserver:
nodePort: 30081
# -- File Server number of pods
replicaCount: 1
# -- File Server resources per initContainers pod
initContainers:
resources:
requests:
cpu: 10m
memory: 64Mi
limits:
cpu: 10m
memory: 64Mi
# -- Ingress configuration for File Server component
ingress:
# -- Enable/Disable ingress
enabled: false
# -- ClassName (must be defined if no default ingressClassName is available)
ingressClassName: ""
# -- Ingress hostname domain
hostName: "files.clearml.127-0-0-1.nip.io"
# -- Reference to secret containing TLS certificate. If set, it enables HTTPS on ingress rule.
@@ -189,26 +253,55 @@ fileserver:
tolerations: []
# -- File Server affinity setup
affinity: {}
# -- File Server pod security context
podSecurityContext: {}
# -- File Server containers security context
containerSecurityContext: {}
# runAsUser: 1001
# fsGroup: 1001
# -- File server persistence settings
storage:
# -- If set to false no PVC is created and emptyDir is used
enabled: true
data:
# -- If set, it uses an already existing PVC instead of dynamic provisioning
existingPVC: ""
# -- Storage class (use default if empty)
class: ""
# -- Access mode (must be ReadWriteMany if fileserver replica > 1)
accessMode: ReadWriteOnce
size: 50Gi
# -- Defines extra Kubernetes volumes to be attached to the pod.
additionalVolumes: {}
# - name: ramdisk
# emptyDir:
# medium: Memory
# sizeLimit: 32Gi
# -- Specifies where and how to mount the volumes defined in additionalVolumes.
additionalVolumeMounts: {}
# - mountPath: /dev/shm
# name: ramdisk
# -- Web Server configurations
webserver:
# -- Enable/Disable component deployment
enabled: true
# -- Add the provided map to the annotations for the Deployment resource created by this chart.
deploymentAnnotations: {}
# -- The default serviceAccountName to be used
serviceAccountName: clearml
# -- Add the provided map to the annotations for the ServiceAccount resource created by this chart.
serviceAccountAnnotations: {}
# -- Web Server image configuration
image:
registry: ""
repository: "allegroai/clearml"
pullPolicy: IfNotPresent
tag: "1.9.1-312"
tag: "2.0.0-613"
# -- Web Server internal service configuration
service:
# -- specific annotation for Web Server service
annotations: {}
type: NodePort
port: 8080
# -- If service.type set to NodePort, this will be set to service's nodePort field.
@@ -216,10 +309,21 @@ webserver:
nodePort: 30080
# -- Web Server number of pods
replicaCount: 1
# -- Web Server resources per initContainers pod
initContainers:
resources:
requests:
cpu: 10m
memory: 64Mi
limits:
cpu: 10m
memory: 64Mi
# -- Ingress configuration for Web Server component
ingress:
# -- Enable/Disable ingress
enabled: false
# -- ClassName (must be defined if no default ingressClassName is available)
ingressClassName: ""
# -- Ingress hostname domain
hostName: "app.clearml.127-0-0-1.nip.io"
# -- Reference to secret containing TLS certificate. If set, it enables HTTPS on ingress rule.
@@ -247,26 +351,43 @@ webserver:
tolerations: []
# -- Web Server affinity setup
affinity: {}
# -- Web Server pod security context
podSecurityContext: {}
# -- Web Server containers security context
containerSecurityContext: {}
# runAsUser: 1001
# fsGroup: 1001
# -- Additional specific webserver configurations
additionalConfigs: {}
# -- Defines extra Kubernetes volumes to be attached to the pod.
additionalVolumes: {}
# - name: ramdisk
# emptyDir:
# medium: Memory
# sizeLimit: 32Gi
# -- Specifies where and how to mount the volumes defined in additionalVolumes.
additionalVolumeMounts: {}
# - mountPath: /dev/shm
# name: ramdisk
# -- Definition of external services to use if not enabled as dependency charts here
externalServices:
# -- Existing ElasticSearch Hostname to use if elasticsearch.enabled is false
elasticsearchHost: ""
# -- Existing ElasticSearch Port to use if elasticsearch.enabled is false
elasticsearchPort: 9200
# -- Existing MongoDB connection string to use if mongodb.enabled is false
mongodbConnectionString: ""
# -- Existing Redis Hostname to use if redis.enabled is false
redisHost: ""
# -- Existing ElasticSearch connectionstring if elasticsearch.enabled is false (example in values.yaml)
elasticsearchConnectionString: "[{\"host\":\"es_hostname1\",\"port\":9200},{\"host\":\"es_hostname2\",\"port\":9200},{\"host\":\"es_hostname3\",\"port\":9200}]"
# -- Existing MongoDB connection string for AUTH to use if mongodb.enabled is false (example in values.yaml)
mongodbConnectionStringAuth: "mongodb://mongodb_hostname:27017/auth"
# -- Existing MongoDB connection string for BACKEND to use if mongodb.enabled is false (example in values.yaml)
mongodbConnectionStringBackend: "mongodb://mongodb_hostname:27017/backend"
# -- Existing Redis Hostname to use if redis.enabled is false (example in values.yaml)
redisHost: "redis_hostname"
# -- Existing Redis Port to use if redis.enabled is false
redisPort: 6379
# -- Configuration from https://github.com/bitnami/charts/blob/master/bitnami/redis/values.yaml
redis:
enabled: true
usePassword: false
auth:
enabled: false
databaseNumber: 0
master:
name: "{{ .Release.Name }}-redis-master"
@@ -278,12 +399,16 @@ redis:
size: 5Gi
## If undefined (the default) or set to null, no storageClassName spec is set, choosing the default provisioner
storageClass: null
cluster:
enabled: false
architecture: standalone
# -- Configuration from https://github.com/bitnami/charts/blob/master/bitnami/mongodb/values.yaml
mongodb:
enabled: true
updateStrategy:
rollingUpdate:
maxSurge: 0
maxUnavailable: 1
type: RollingUpdate
architecture: standalone
auth:
enabled: false
@@ -308,7 +433,8 @@ elasticsearch:
replicas: 1
# Readiness probe hack for a single-node cluster (where status will never be green). Should be removed if using replicas > 1
clusterHealthCheckParams: "wait_for_status=yellow&timeout=1s"
rbac:
create: true
minimumMasterNodes: 1
clusterName: clearml-elastic
esJavaOpts: "-Xmx2g -Xms2g"
@@ -350,68 +476,3 @@ elasticsearch:
esConfig:
elasticsearch.yml: |
xpack.security.enabled: false
# -- Enterprise features (work only with an Enterprise license)
enterpriseFeatures:
# -- Enable/Disable Enterprise features
enabled: false
# -- Company ID
defaultCompanyGuid: "d1bd92a3b039400cbafc60a7a5b1e52b"
# -- Air gapped documentation configurations
airGappedDocumentation:
# -- Enable/Disable air gapped documentation deployment
enabled: false
# -- Air gapped documentation image configuration
image:
repository: ""
tag: ""
# -- set this value AND overrideReferenceFileUrl if external endpoint exposure is in place (like a LoadBalancer)
# example: "https://api.clearml.local"
overrideReferenceApiUrl: ""
# -- set this value AND overrideReferenceApiUrl if external endpoint exposure is in place (like a LoadBalancer)
# example: "https://files.clearml.local"
overrideReferenceFileUrl: ""
# -- extra index URL for Enterprise packages
extraIndexUrl: ""
# -- APPS configurations
clearmlApplications:
# -- Apps Server basic auth key
agentKey: GK4PRTVT3706T25K6BA1
# -- Apps Server basic auth secret
agentSecret: ymLh1ok5k5xNUQfS944Xdx9xjf0wueokqKM2dMZfHuH9ayItG2
# -- Apps Server Git user
gitAgentUser: "git_user"
# -- Apps Server Git password
gitAgentPass: "git_password"
# -- Enable/Disable component deployment
enabled: true
# -- APPS image configuration
image:
repository: ""
pullPolicy: IfNotPresent
tag: ""
# -- APPS base spawning pods image
basePodImage:
repository: ""
tag: ""
# -- APPS number of pods
replicaCount: 1
# -- APPS extra environment variables
extraEnvs: []
# -- specific annotation for APPS pods
podAnnotations: {}
# -- APPS resources per pod; these are minimal requirements, it's suggested to increase
# these values in production environments
resources:
requests:
cpu: 100m
memory: 256Mi
limits:
cpu: 2000m
memory: 1Gi
# -- APPS nodeselector
nodeSelector: {}
# -- APPS tolerations setup
tolerations: []
# -- APPS affinity setup
affinity: {}
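The `clearml.existingSecret` option introduced earlier in this values.yaml must carry a fixed set of keys (apiserver_key, apiserver_secret, secure_auth_token_secret, test_user_key, test_user_secret, per its comment). A hypothetical manifest that satisfies it — the secret name and all values are placeholders:

```yaml
# Sketch of a pre-existing secret usable via clearml.existingSecret;
# key names follow the values.yaml comment, values are placeholders.
apiVersion: v1
kind: Secret
metadata:
  name: my-clearml-conf
type: Opaque
stringData:
  apiserver_key: "REPLACE_ME"
  apiserver_secret: "REPLACE_ME"
  secure_auth_token_secret: "REPLACE_ME"
  test_user_key: "REPLACE_ME"
  test_user_secret: "REPLACE_ME"
```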

View File

@@ -1,2 +0,0 @@
tests/
.pytest_cache/

View File

@@ -1,12 +0,0 @@
apiVersion: v1
appVersion: 7.16.2
description: Official Elastic helm chart for Elasticsearch
home: https://github.com/elastic/helm-charts
icon: https://helm.elastic.co/icons/elasticsearch.png
maintainers:
- email: helm-charts@elastic.co
name: Elastic
name: elasticsearch
sources:
- https://github.com/elastic/elasticsearch
version: 7.16.2


@@ -1 +0,0 @@
include ../helpers/common.mk


@@ -1,457 +0,0 @@
# Elasticsearch Helm Chart
[![Build Status](https://img.shields.io/jenkins/s/https/devops-ci.elastic.co/job/elastic+helm-charts+master.svg)](https://devops-ci.elastic.co/job/elastic+helm-charts+master/) [![Artifact HUB](https://img.shields.io/endpoint?url=https://artifacthub.io/badge/repository/elastic)](https://artifacthub.io/packages/search?repo=elastic)
This Helm chart is a lightweight way to configure and run our official
[Elasticsearch Docker image][].
<!-- development warning placeholder -->
<!-- START doctoc generated TOC please keep comment here to allow auto update -->
<!-- DON'T EDIT THIS SECTION, INSTEAD RE-RUN doctoc TO UPDATE -->
- [Requirements](#requirements)
- [Installing](#installing)
- [Install released version using Helm repository](#install-released-version-using-helm-repository)
- [Install development version from a branch](#install-development-version-from-a-branch)
- [Upgrading](#upgrading)
- [Usage notes](#usage-notes)
- [Configuration](#configuration)
- [Deprecated](#deprecated)
- [FAQ](#faq)
- [How to deploy this chart on a specific K8S distribution?](#how-to-deploy-this-chart-on-a-specific-k8s-distribution)
- [How to deploy dedicated nodes types?](#how-to-deploy-dedicated-nodes-types)
- [Clustering and Node Discovery](#clustering-and-node-discovery)
- [How to deploy clusters with security (authentication and TLS) enabled?](#how-to-deploy-clusters-with-security-authentication-and-tls-enabled)
- [How to migrate from helm/charts stable chart?](#how-to-migrate-from-helmcharts-stable-chart)
- [How to install plugins?](#how-to-install-plugins)
- [How to use the keystore?](#how-to-use-the-keystore)
- [Basic example](#basic-example)
- [Multiple keys](#multiple-keys)
- [Custom paths and keys](#custom-paths-and-keys)
- [How to enable snapshotting?](#how-to-enable-snapshotting)
- [How to configure templates post-deployment?](#how-to-configure-templates-post-deployment)
- [Contributing](#contributing)
<!-- END doctoc generated TOC please keep comment here to allow auto update -->
<!-- Use this to update TOC: -->
<!-- docker run --rm -it -v $(pwd):/usr/src jorgeandrada/doctoc --github -->
## Requirements
* Kubernetes >= 1.14
* [Helm][] >= 2.17.0
* Minimum cluster requirements include the following to run this chart with
default settings. All of these settings are configurable.
* Three Kubernetes nodes to respect the default "hard" affinity settings
* 1GB of RAM for the JVM heap
See [supported configurations][] for more details.
## Installing
This chart is tested with the latest 7.16.2 version.
### Install released version using Helm repository
* Add the Elastic Helm charts repo:
`helm repo add elastic https://helm.elastic.co`
* Install it:
- with Helm 3: `helm install elasticsearch --version <version> elastic/elasticsearch`
- with Helm 2 (deprecated): `helm install --name elasticsearch --version <version> elastic/elasticsearch`
### Install development version from a branch
* Clone the git repo: `git clone git@github.com:elastic/helm-charts.git`
* Check out the branch: `git checkout 7.16`
* Install it:
- with Helm 3: `helm install elasticsearch ./helm-charts/elasticsearch --set imageTag=7.16.2`
- with Helm 2 (deprecated): `helm install --name elasticsearch ./helm-charts/elasticsearch --set imageTag=7.16.2`
## Upgrading
Please always check [CHANGELOG.md][] and [BREAKING_CHANGES.md][] before
upgrading to a new chart version.
## Usage notes
* This repo includes a number of [examples][] configurations which can be used
as a reference. They are also used in the automated testing of this chart.
* Automated testing of this chart is currently only run against GKE (Google
Kubernetes Engine).
* The chart deploys a StatefulSet and by default will do an automated rolling
update of your cluster. It does this by waiting for the cluster health to become
green after each instance is updated. If you prefer to update manually you can
set `OnDelete` [updateStrategy][].
* It is important to verify the JVM heap size in `esJavaOpts` and to set the
CPU/Memory `resources` to something suitable for your cluster.
* To simplify chart maintenance, each set of node groups is deployed as a
separate Helm release. Take a look at the [multi][] example to get an idea of
how this works. Without doing this it isn't possible to resize persistent
volumes in a StatefulSet. Setting it up this way makes it possible to add
more nodes with a new storage size, then drain the old ones. It also solves the
problem of allowing the user to determine which node groups to update first when
doing upgrades or changes.
* We have designed this chart to be very un-opinionated about how to configure
Elasticsearch. It exposes ways to set environment variables and mount secrets
inside of the container. Doing this makes it much easier for this chart to
support multiple versions with minimal changes.
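As a sketch of that approach, environment variables and secret mounts can be combined in values like this (the secret names here are hypothetical and would need to be created beforehand):
```
extraEnvs:
  - name: ELASTIC_PASSWORD
    valueFrom:
      secretKeyRef:
        name: elastic-credentials  # hypothetical pre-created secret
        key: password
secretMounts:
  - name: elastic-certificates
    secretName: elastic-certificates  # hypothetical pre-created secret
    path: /usr/share/elasticsearch/config/certs
```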
## Configuration
| Parameter | Description | Default |
|------------------------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|--------------------------------------------------|
| `antiAffinityTopologyKey` | The [anti-affinity][] topology key. By default this will prevent multiple Elasticsearch nodes from running on the same Kubernetes node | `kubernetes.io/hostname` |
| `antiAffinity` | Setting this to hard enforces the [anti-affinity][] rules. If it is set to soft it will be done "best effort". Other values will be ignored | `hard` |
| `clusterHealthCheckParams` | The [Elasticsearch cluster health status params][] that will be used by readiness [probe][] command | `wait_for_status=green&timeout=1s` |
| `clusterName` | This will be used as the Elasticsearch [cluster.name][] and should be unique per cluster in the namespace | `elasticsearch` |
| `clusterDeprecationIndexing` | Enable or disable deprecation logs to be indexed (should be disabled when deploying master only node groups) | `false` |
| `enableServiceLinks`               | Set to false to disable service links, which can cause slow pod startup times when there are many services in the current namespace | `true` |
| `envFrom` | Templatable string to be passed to the [environment from variables][] which will be appended to the `envFrom:` definition for the container | `[]` |
| `esConfig` | Allows you to add any config files in `/usr/share/elasticsearch/config/` such as `elasticsearch.yml` and `log4j2.properties`. See [values.yaml][] for an example of the formatting | `{}` |
| `esJavaOpts` | [Java options][] for Elasticsearch. This is where you could configure the [jvm heap size][] | `""` |
| `esMajorVersion` | Deprecated. Instead, use the version of the chart corresponding to your ES minor version. Used to set major version specific configuration. If you are using a custom image and not running the default Elasticsearch version you will need to set this to the version you are running (e.g. `esMajorVersion: 6`) | `""` |
| `extraContainers` | Templatable string of additional `containers` to be passed to the `tpl` function | `""` |
| `extraEnvs` | Extra [environment variables][] which will be appended to the `env:` definition for the container | `[]` |
| `extraInitContainers` | Templatable string of additional `initContainers` to be passed to the `tpl` function | `""` |
| `extraVolumeMounts` | Templatable string of additional `volumeMounts` to be passed to the `tpl` function | `""` |
| `extraVolumes` | Templatable string of additional `volumes` to be passed to the `tpl` function | `""` |
| `fullnameOverride` | Overrides the `clusterName` and `nodeGroup` when used in the naming of resources. This should only be used when using a single `nodeGroup`, otherwise you will have name conflicts | `""` |
| `healthNameOverride` | Overrides `test-elasticsearch-health` pod name | `""` |
| `hostAliases` | Configurable [hostAliases][] | `[]` |
| `httpPort` | The http port that Kubernetes will use for the healthchecks and the service. If you change this you will also need to set [http.port][] in `extraEnvs` | `9200` |
| `imagePullPolicy` | The Kubernetes [imagePullPolicy][] value | `IfNotPresent` |
| `imagePullSecrets` | Configuration for [imagePullSecrets][] so that you can use a private registry for your image | `[]` |
| `imageTag` | The Elasticsearch Docker image tag | `7.16.2` |
| `image` | The Elasticsearch Docker image | `docker.elastic.co/elasticsearch/elasticsearch` |
| `ingress` | Configurable [ingress][] to expose the Elasticsearch service. See [values.yaml][] for an example | see [values.yaml][] |
| `initResources` | Allows you to set the [resources][] for the `initContainer` in the StatefulSet | `{}` |
| `keystore`                         | Allows you to map Kubernetes secrets into the keystore. See the [config example][] and [how to use the keystore][] | `[]` |
| `labels` | Configurable [labels][] applied to all Elasticsearch pods | `{}` |
| `lifecycle` | Allows you to add [lifecycle hooks][]. See [values.yaml][] for an example of the formatting | `{}` |
| `masterService` | The service name used to connect to the masters. You only need to set this if your master `nodeGroup` is set to something other than `master`. See [Clustering and Node Discovery][] for more information | `""` |
| `maxUnavailable` | The [maxUnavailable][] value for the pod disruption budget. By default this will prevent Kubernetes from having more than 1 unhealthy pod in the node group | `1` |
| `minimumMasterNodes` | The value for [discovery.zen.minimum_master_nodes][]. Should be set to `(master_eligible_nodes / 2) + 1`. Ignored in Elasticsearch versions >= 7 | `2` |
| `nameOverride` | Overrides the `clusterName` when used in the naming of resources | `""` |
| `networkHost` | Value for the [network.host Elasticsearch setting][] | `0.0.0.0` |
| `networkPolicy` | The [NetworkPolicy](https://kubernetes.io/docs/concepts/services-networking/network-policies/) to set. See [`values.yaml`](./values.yaml) for an example | `{http.enabled: false,transport.enabled: false}` |
| `nodeAffinity` | Value for the [node affinity settings][] | `{}` |
| `nodeGroup` | This is the name that will be used for each group of nodes in the cluster. The name will be `clusterName-nodeGroup-X` , `nameOverride-nodeGroup-X` if a `nameOverride` is specified, and `fullnameOverride-X` if a `fullnameOverride` is specified | `master` |
| `nodeSelector` | Configurable [nodeSelector][] so that you can target specific nodes for your Elasticsearch cluster | `{}` |
| `persistence` | Enables a persistent volume for Elasticsearch data. Can be disabled for nodes that only have [roles][] which don't require persistent data | see [values.yaml][] |
| `podAnnotations` | Configurable [annotations][] applied to all Elasticsearch pods | `{}` |
| `podManagementPolicy` | By default Kubernetes [deploys StatefulSets serially][]. This deploys them in parallel so that they can discover each other | `Parallel` |
| `podSecurityContext` | Allows you to set the [securityContext][] for the pod | see [values.yaml][] |
| `podSecurityPolicy`                | Configuration for creating a pod security policy with minimal permissions to run this Helm chart with `create: true`. Can also be used to reference an external pod security policy with `name: "externalPodSecurityPolicy"` | see [values.yaml][] |
| `priorityClassName` | The name of the [PriorityClass][]. No default is supplied as the PriorityClass must be created first | `""` |
| `protocol` | The protocol that will be used for the readiness [probe][]. Change this to `https` if you have `xpack.security.http.ssl.enabled` set | `http` |
| `rbac` | Configuration for creating a role, role binding and ServiceAccount as part of this Helm chart with `create: true`. Also can be used to reference an external ServiceAccount with `serviceAccountName: "externalServiceAccountName"`, or automount the service account token | see [values.yaml][] |
| `readinessProbe` | Configuration fields for the readiness [probe][] | see [values.yaml][] |
| `replicas` | Kubernetes replica count for the StatefulSet (i.e. how many pods) | `3` |
| `resources` | Allows you to set the [resources][] for the StatefulSet | see [values.yaml][] |
| `roles` | A hash map with the specific [roles][] for the `nodeGroup` | see [values.yaml][] |
| `schedulerName` | Name of the [alternate scheduler][] | `""` |
| `secretMounts`                     | Allows you to easily mount a secret as a file inside the StatefulSet. Useful for mounting certificates and other secrets. See [values.yaml][] for an example | `[]` |
| `securityContext` | Allows you to set the [securityContext][] for the container | see [values.yaml][] |
| `service.annotations` | [LoadBalancer annotations][] that Kubernetes will use for the service. This will configure load balancer if `service.type` is `LoadBalancer` | `{}` |
| `service.enabled` | Enable non-headless service | `true` |
| `service.externalTrafficPolicy` | Some cloud providers allow you to specify the [LoadBalancer externalTrafficPolicy][]. Kubernetes will use this to preserve the client source IP. This will configure load balancer if `service.type` is `LoadBalancer` | `""` |
| `service.httpPortName` | The name of the http port within the service | `http` |
| `service.labelsHeadless` | Labels to be added to headless service | `{}` |
| `service.labels` | Labels to be added to non-headless service | `{}` |
| `service.loadBalancerIP` | Some cloud providers allow you to specify the [loadBalancer][] IP. If the `loadBalancerIP` field is not specified, the IP is dynamically assigned. If you specify a `loadBalancerIP` but your cloud provider does not support the feature, it is ignored. | `""` |
| `service.loadBalancerSourceRanges` | The IP ranges that are allowed to access | `[]` |
| `service.nodePort` | Custom [nodePort][] port that can be set if you are using `service.type: nodePort` | `""` |
| `service.transportPortName` | The name of the transport port within the service | `transport` |
| `service.type` | Elasticsearch [Service Types][] | `ClusterIP` |
| `sysctlInitContainer` | Allows you to disable the `sysctlInitContainer` if you are setting [sysctl vm.max_map_count][] with another method | `enabled: true` |
| `sysctlVmMaxMapCount` | Sets the [sysctl vm.max_map_count][] needed for Elasticsearch | `262144` |
| `terminationGracePeriod` | The [terminationGracePeriod][] in seconds used when trying to stop the pod | `120` |
| `tests.enabled` | Enable creating test related resources when running `helm template` or `helm test` | `true` |
| `tolerations` | Configurable [tolerations][] | `[]` |
| `transportPort` | The transport port that Kubernetes will use for the service. If you change this you will also need to set [transport port configuration][] in `extraEnvs` | `9300` |
| `updateStrategy` | The [updateStrategy][] for the StatefulSet. By default Kubernetes will wait for the cluster to be green after upgrading each pod. Setting this to `OnDelete` will allow you to manually delete each pod during upgrades | `RollingUpdate` |
| `volumeClaimTemplate` | Configuration for the [volumeClaimTemplate for StatefulSets][]. You will want to adjust the storage (default `30Gi` ) and the `storageClassName` if you are using a different storage class | see [values.yaml][] |
### Deprecated
| Parameter | Description | Default |
|-----------|---------------------------------------------------------------------------------------------------------------|---------|
| `fsGroup` | The Group ID (GID) for [securityContext][] so that the Elasticsearch user can read from the persistent volume | `""` |
## FAQ
### How to deploy this chart on a specific K8S distribution?
This chart is designed to run on production scale Kubernetes clusters with
multiple nodes, lots of memory and persistent storage. For that reason it can be
a bit tricky to run them against local Kubernetes environments such as
[Minikube][].
This chart is highly tested with [GKE][], but some K8S distributions also
require specific configuration.
We provide examples of configuration for the following K8S providers:
- [Docker for Mac][]
- [KIND][]
- [Minikube][]
- [MicroK8S][]
- [OpenShift][]
### How to deploy dedicated nodes types?
All the Elasticsearch pods deployed share the same configuration. If you need to
deploy dedicated [node types][nodes types] (for example dedicated master and data nodes),
you can deploy multiple releases of this chart with different configurations
while they share the same `clusterName` value.
For each Helm release, the node types can then be defined using the `roles` value.
An example of Elasticsearch cluster using 2 different Helm releases for master
and data nodes can be found in [examples/multi][].
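An illustrative sketch of two such releases (values invented for this example; see [examples/multi][] for the tested configuration):
```
# master.yaml -- e.g. helm install es-master elastic/elasticsearch -f master.yaml
clusterName: "mycluster"
nodeGroup: "master"
roles:
  master: "true"
  ingest: "false"
  data: "false"

# data.yaml -- e.g. helm install es-data elastic/elasticsearch -f data.yaml
clusterName: "mycluster"
nodeGroup: "data"
roles:
  master: "false"
  ingest: "true"
  data: "true"
```
Both releases share `clusterName`, so the data nodes discover the masters automatically.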
#### Clustering and Node Discovery
This chart facilitates Elasticsearch node discovery and services by creating two
`Service` definitions in Kubernetes, one with the name `$clusterName-$nodeGroup`
and another named `$clusterName-$nodeGroup-headless`.
Only `Ready` pods are a part of the `$clusterName-$nodeGroup` service, while all
pods (`Ready` or not) are a part of `$clusterName-$nodeGroup-headless`.
If your group of master nodes has the default `nodeGroup: master` then you can
just add new groups of nodes with a different `nodeGroup` and they will
automatically discover the correct master. If your master nodes have a different
`nodeGroup` name then you will need to set `masterService` to
`$clusterName-$masterNodeGroup`.
The chart value for `masterService` is used to populate
`discovery.zen.ping.unicast.hosts`, which Elasticsearch nodes will use to
contact master nodes and form a cluster.
Therefore, to add a group of nodes to an existing cluster, setting
`masterService` to the desired `Service` name of the related cluster is
sufficient.
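For instance, if the existing cluster's masters were deployed with `nodeGroup: primary` (an invented name for this sketch), a new data node group could join them with values like:
```
clusterName: "mycluster"
nodeGroup: "data"
masterService: "mycluster-primary"
```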
### How to deploy clusters with security (authentication and TLS) enabled?
This Helm chart can use existing [Kubernetes secrets][] to set up
credentials or certificates. These secrets should be created
outside of this chart and accessed using [environment variables][] and volumes.
An example of Elasticsearch cluster using security can be found in
[examples/security][].
### How to migrate from helm/charts stable chart?
If you currently have a cluster deployed with the [helm/charts stable][] chart
you can follow the [migration guide][].
### How to install plugins?
The recommended way to install plugins into our Docker images is to create a
[custom Docker image][].
The Dockerfile would look something like:
```
ARG elasticsearch_version
FROM docker.elastic.co/elasticsearch/elasticsearch:${elasticsearch_version}
RUN bin/elasticsearch-plugin install --batch repository-gcs
```
Then update the `image` in values to point to your custom image.
There are a couple of reasons we recommend this.
1. Tying the availability of Elasticsearch to the download service in order to
install plugins is not something we recommend, especially in Kubernetes, where
it is normal and expected for a container to be moved to another host at random
times.
2. Mutating the state of a running Docker image (by installing plugins) goes
against best practices of containers and immutable infrastructure.
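Updating the `image` in values might then look like this (the registry and image name below are hypothetical):
```
image: "my-registry/elasticsearch-plugins"
imageTag: "7.16.2"
```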
### How to use the keystore?
#### Basic example
Create the secret; the key name needs to be the keystore key path. In this
example we will create one secret from a file and one from a literal string.
```
kubectl create secret generic encryption-key --from-file=xpack.watcher.encryption_key=./watcher_encryption_key
kubectl create secret generic slack-hook --from-literal=xpack.notification.slack.account.monitoring.secure_url='https://hooks.slack.com/services/asdasdasd/asdasdas/asdasd'
```
To add these secrets to the keystore:
```
keystore:
- secretName: encryption-key
- secretName: slack-hook
```
#### Multiple keys
All keys in the secret will be added to the keystore. To create the previous
example in one secret you could also do:
```
kubectl create secret generic keystore-secrets --from-file=xpack.watcher.encryption_key=./watcher_encryption_key --from-literal=xpack.notification.slack.account.monitoring.secure_url='https://hooks.slack.com/services/asdasdasd/asdasdas/asdasd'
```
```
keystore:
- secretName: keystore-secrets
```
#### Custom paths and keys
If you are using these secrets for other applications (besides the Elasticsearch
keystore) then it is also possible to specify the keystore path and which keys
you want to add. Everything specified under each `keystore` item will be passed
through to the `volumeMounts` section for mounting the [secret][]. In this
example we will only add the `slack_hook` key from a secret that also has other
keys. Our secret looks like this:
```
kubectl create secret generic slack-secrets --from-literal=slack_channel='#general' --from-literal=slack_hook='https://hooks.slack.com/services/asdasdasd/asdasdas/asdasd'
```
We only want to add the `slack_hook` key to the keystore at path
`xpack.notification.slack.account.monitoring.secure_url`:
```
keystore:
- secretName: slack-secrets
items:
- key: slack_hook
path: xpack.notification.slack.account.monitoring.secure_url
```
You can also take a look at the [config example][] which is used as part of the
automated testing pipeline.
### How to enable snapshotting?
1. Install your [snapshot plugin][] into a custom Docker image following the
[how to install plugins guide][].
2. Add any required secrets or credentials into an Elasticsearch keystore
following the [how to use the keystore][] guide.
3. Configure the [snapshot repository][] as you normally would.
4. To automate snapshots you can use [Snapshot Lifecycle Management][] or a tool
like [curator][].
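As a sketch for step 2 with the `repository-gcs` plugin installed above, the service account JSON could be stored in a secret whose key matches the keystore path the plugin reads, then mapped in via the keystore mechanism (the secret name `gcs-credentials` is hypothetical):
```
keystore:
  - secretName: gcs-credentials  # hypothetical secret with key gcs.client.default.credentials_file
```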
### How to configure templates post-deployment?
You can use `postStart` [lifecycle hooks][] to run code triggered after a
container is created.
Here is an example of `postStart` hook to configure templates:
```yaml
lifecycle:
postStart:
exec:
command:
- bash
- -c
- |
#!/bin/bash
# Add a template to adjust number of shards/replicas
TEMPLATE_NAME=my_template
INDEX_PATTERN="logstash-*"
SHARD_COUNT=8
REPLICA_COUNT=1
ES_URL=http://localhost:9200
while [[ "$(curl -s -o /dev/null -w '%{http_code}\n' $ES_URL)" != "200" ]]; do sleep 1; done
curl -XPUT "$ES_URL/_template/$TEMPLATE_NAME" -H 'Content-Type: application/json' -d'{"index_patterns":['\""$INDEX_PATTERN"\"'],"settings":{"number_of_shards":'$SHARD_COUNT',"number_of_replicas":'$REPLICA_COUNT'}}'
```
## Contributing
Please check [CONTRIBUTING.md][] before any contribution or for any questions
about our development and testing process.
[7.16]: https://github.com/elastic/helm-charts/releases
[#63]: https://github.com/elastic/helm-charts/issues/63
[BREAKING_CHANGES.md]: https://github.com/elastic/helm-charts/blob/master/BREAKING_CHANGES.md
[CHANGELOG.md]: https://github.com/elastic/helm-charts/blob/master/CHANGELOG.md
[CONTRIBUTING.md]: https://github.com/elastic/helm-charts/blob/master/CONTRIBUTING.md
[alternate scheduler]: https://kubernetes.io/docs/tasks/administer-cluster/configure-multiple-schedulers/#specify-schedulers-for-pods
[annotations]: https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/
[anti-affinity]: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
[cluster.name]: https://www.elastic.co/guide/en/elasticsearch/reference/7.16/important-settings.html#cluster-name
[clustering and node discovery]: https://github.com/elastic/helm-charts/tree/7.16/elasticsearch/README.md#clustering-and-node-discovery
[config example]: https://github.com/elastic/helm-charts/tree/7.16/elasticsearch/examples/config/values.yaml
[curator]: https://www.elastic.co/guide/en/elasticsearch/client/curator/7.9/snapshot.html
[custom docker image]: https://www.elastic.co/guide/en/elasticsearch/reference/7.16/docker.html#_c_customized_image
[deploys statefulsets serially]: https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#pod-management-policies
[discovery.zen.minimum_master_nodes]: https://www.elastic.co/guide/en/elasticsearch/reference/7.16/discovery-settings.html#minimum_master_nodes
[docker for mac]: https://github.com/elastic/helm-charts/tree/7.16/elasticsearch/examples/docker-for-mac
[elasticsearch cluster health status params]: https://www.elastic.co/guide/en/elasticsearch/reference/7.16/cluster-health.html#request-params
[elasticsearch docker image]: https://www.elastic.co/guide/en/elasticsearch/reference/7.16/docker.html
[environment variables]: https://kubernetes.io/docs/tasks/inject-data-application/define-environment-variable-container/#using-environment-variables-inside-of-your-config
[environment from variables]: https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/#configure-all-key-value-pairs-in-a-configmap-as-container-environment-variables
[examples]: https://github.com/elastic/helm-charts/tree/7.16/elasticsearch/examples/
[examples/multi]: https://github.com/elastic/helm-charts/tree/7.16/elasticsearch/examples/multi
[examples/security]: https://github.com/elastic/helm-charts/tree/7.16/elasticsearch/examples/security
[gke]: https://cloud.google.com/kubernetes-engine
[helm]: https://helm.sh
[helm/charts stable]: https://github.com/helm/charts/tree/master/stable/elasticsearch/
[how to install plugins guide]: https://github.com/elastic/helm-charts/tree/7.16/elasticsearch/README.md#how-to-install-plugins
[how to use the keystore]: https://github.com/elastic/helm-charts/tree/7.16/elasticsearch/README.md#how-to-use-the-keystore
[http.port]: https://www.elastic.co/guide/en/elasticsearch/reference/7.16/modules-http.html#_settings
[imagePullPolicy]: https://kubernetes.io/docs/concepts/containers/images/#updating-images
[imagePullSecrets]: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/#create-a-pod-that-uses-your-secret
[ingress]: https://kubernetes.io/docs/concepts/services-networking/ingress/
[java options]: https://www.elastic.co/guide/en/elasticsearch/reference/7.16/jvm-options.html
[jvm heap size]: https://www.elastic.co/guide/en/elasticsearch/reference/7.16/heap-size.html
[hostAliases]: https://kubernetes.io/docs/concepts/services-networking/add-entries-to-pod-etc-hosts-with-host-aliases/
[kind]: https://github.com/elastic/helm-charts/tree/7.16/elasticsearch/examples/kubernetes-kind
[kubernetes secrets]: https://kubernetes.io/docs/concepts/configuration/secret/
[labels]: https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/
[lifecycle hooks]: https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/
[loadBalancer annotations]: https://kubernetes.io/docs/concepts/services-networking/service/#ssl-support-on-aws
[loadBalancer externalTrafficPolicy]: https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/#preserving-the-client-source-ip
[loadBalancer]: https://kubernetes.io/docs/concepts/services-networking/service/#loadbalancer
[maxUnavailable]: https://kubernetes.io/docs/tasks/run-application/configure-pdb/#specifying-a-poddisruptionbudget
[migration guide]: https://github.com/elastic/helm-charts/tree/7.16/elasticsearch/examples/migration/README.md
[minikube]: https://github.com/elastic/helm-charts/tree/7.16/elasticsearch/examples/minikube
[microk8s]: https://github.com/elastic/helm-charts/tree/7.16/elasticsearch/examples/microk8s
[multi]: https://github.com/elastic/helm-charts/tree/7.16/elasticsearch/examples/multi/
[network.host elasticsearch setting]: https://www.elastic.co/guide/en/elasticsearch/reference/7.16/network.host.html
[node affinity settings]: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#node-affinity-beta-feature
[node-certificates]: https://www.elastic.co/guide/en/elasticsearch/reference/7.16/configuring-tls.html#node-certificates
[nodePort]: https://kubernetes.io/docs/concepts/services-networking/service/#nodeport
[nodes types]: https://www.elastic.co/guide/en/elasticsearch/reference/7.16/modules-node.html
[nodeSelector]: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#nodeselector
[openshift]: https://github.com/elastic/helm-charts/tree/7.16/elasticsearch/examples/openshift
[priorityClass]: https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/#priorityclass
[probe]: https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/
[resources]: https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/
[roles]: https://www.elastic.co/guide/en/elasticsearch/reference/7.16/modules-node.html
[secret]: https://kubernetes.io/docs/concepts/configuration/secret/#using-secrets
[securityContext]: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/
[service types]: https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types
[snapshot lifecycle management]: https://www.elastic.co/guide/en/elasticsearch/reference/7.16/snapshot-lifecycle-management.html
[snapshot plugin]: https://www.elastic.co/guide/en/elasticsearch/plugins/7.16/repository.html
[snapshot repository]: https://www.elastic.co/guide/en/elasticsearch/reference/7.16/modules-snapshots.html
[supported configurations]: https://github.com/elastic/helm-charts/tree/7.16/README.md#supported-configurations
[sysctl vm.max_map_count]: https://www.elastic.co/guide/en/elasticsearch/reference/7.16/vm-max-map-count.html#vm-max-map-count
[terminationGracePeriod]: https://kubernetes.io/docs/concepts/workloads/pods/pod/#termination-of-pods
[tolerations]: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
[transport port configuration]: https://www.elastic.co/guide/en/elasticsearch/reference/7.16/modules-transport.html#_transport_settings
[updateStrategy]: https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/
[values.yaml]: https://github.com/elastic/helm-charts/tree/7.16/elasticsearch/values.yaml
[volumeClaimTemplate for statefulsets]: https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#stable-storage


@@ -1,21 +0,0 @@
default: test
include ../../../helpers/examples.mk
RELEASE := helm-es-config
TIMEOUT := 1200s
install:
helm upgrade --wait --timeout=$(TIMEOUT) --install --values values.yaml $(RELEASE) ../../
secrets:
kubectl delete secret elastic-config-credentials elastic-config-secret elastic-config-slack elastic-config-custom-path || true
kubectl create secret generic elastic-config-credentials --from-literal=password=changeme --from-literal=username=elastic
kubectl create secret generic elastic-config-slack --from-literal=xpack.notification.slack.account.monitoring.secure_url='https://hooks.slack.com/services/asdasdasd/asdasdas/asdasd'
kubectl create secret generic elastic-config-secret --from-file=xpack.watcher.encryption_key=./watcher_encryption_key
kubectl create secret generic elastic-config-custom-path --from-literal=slack_url='https://hooks.slack.com/services/asdasdasd/asdasdas/asdasd' --from-literal=thing_i_don_tcare_about=test
test: secrets install goss
purge:
helm del $(RELEASE)


@@ -1,27 +0,0 @@
# Config
This example deploys a single-node Elasticsearch 7.16.2 cluster with authentication and
custom [values][].
## Usage
* Create the required secrets: `make secrets`
* Deploy Elasticsearch chart with the default values: `make install`
* You can now set up a port forward to query the Elasticsearch API:
```
kubectl port-forward svc/config-master 9200
curl -u elastic:changeme http://localhost:9200/_cat/indices
```
## Testing
You can also run [goss integration tests][] using `make test`.
[goss integration tests]: https://github.com/elastic/helm-charts/tree/7.16/elasticsearch/examples/config/test/goss.yaml
[values]: https://github.com/elastic/helm-charts/tree/7.16/elasticsearch/examples/config/values.yaml


@@ -1,29 +0,0 @@
http:
  http://localhost:9200/_cluster/health:
    status: 200
    timeout: 2000
    username: elastic
    password: "{{ .Env.ELASTIC_PASSWORD }}"
    body:
      - "green"
      - '"number_of_nodes":1'
      - '"number_of_data_nodes":1'

  http://localhost:9200:
    status: 200
    timeout: 2000
    username: elastic
    password: "{{ .Env.ELASTIC_PASSWORD }}"
    body:
      - '"cluster_name" : "config"'
      - "You Know, for Search"

command:
  "elasticsearch-keystore list":
    exit-status: 0
    stdout:
      - keystore.seed
      - bootstrap.password
      - xpack.notification.slack.account.monitoring.secure_url
      - xpack.notification.slack.account.otheraccount.secure_url
      - xpack.watcher.encryption_key


@@ -1,27 +0,0 @@
---
clusterName: "config"
replicas: 1

extraEnvs:
  - name: ELASTIC_PASSWORD
    valueFrom:
      secretKeyRef:
        name: elastic-config-credentials
        key: password

# This is just a dummy file to make sure that
# the keystore can be mounted at the same time
# as a custom elasticsearch.yml
esConfig:
  elasticsearch.yml: |
    xpack.security.enabled: true
    path.data: /usr/share/elasticsearch/data

keystore:
  - secretName: elastic-config-secret
  - secretName: elastic-config-slack
  - secretName: elastic-config-custom-path
    items:
      - key: slack_url
        path: xpack.notification.slack.account.otheraccount.secure_url
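The `keystore` section above mounts each referenced Kubernetes secret into the Elasticsearch keystore: every key of a secret becomes a keystore entry, unless an `items` list restricts and remaps the keys (here, `slack_url` is renamed to a full `xpack.notification...` path). As a rough sketch (this is plain Python modeling the behavior, not the chart's actual template code), the mapping works like this:

```python
# Sketch of how the chart's `keystore` values map secret keys to
# Elasticsearch keystore entries (assumption: modeled after the
# documented behavior, not the chart's real Helm template logic).

def keystore_entries(keystore_values, secrets):
    """Return the keystore entry names produced by a `keystore` list.

    keystore_values: the `keystore` section of values.yaml, as Python data.
    secrets: mapping of secret name -> dict of that secret's keys/values.
    """
    entries = []
    for ref in keystore_values:
        secret_keys = secrets[ref["secretName"]]
        if "items" in ref:
            # Only the listed keys are used, under their `path` names.
            entries.extend(item["path"] for item in ref["items"])
        else:
            # Without `items`, every key in the secret becomes an entry.
            entries.extend(secret_keys)
    return entries


# The secrets created by `make secrets` in this example (values elided).
secrets = {
    "elastic-config-secret": {"xpack.watcher.encryption_key": "..."},
    "elastic-config-slack": {
        "xpack.notification.slack.account.monitoring.secure_url": "...",
    },
    "elastic-config-custom-path": {
        "slack_url": "...",
        "thing_i_don_tcare_about": "test",
    },
}

keystore_values = [
    {"secretName": "elastic-config-secret"},
    {"secretName": "elastic-config-slack"},
    {
        "secretName": "elastic-config-custom-path",
        "items": [
            {
                "key": "slack_url",
                "path": "xpack.notification.slack.account.otheraccount.secure_url",
            }
        ],
    },
]

print(keystore_entries(keystore_values, secrets))
```

Note that `thing_i_don_tcare_about` never reaches the keystore, because the `items` list for that secret only selects `slack_url` — which is exactly what the `elasticsearch-keystore list` check in the goss tests expects.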


@@ -1,14 +0,0 @@
default: test

include ../../../helpers/examples.mk

RELEASE := helm-es-default
TIMEOUT := 1200s

install:
	helm upgrade --wait --timeout=$(TIMEOUT) --install $(RELEASE) ../../

test: install goss

purge:
	helm del $(RELEASE)


@@ -1,25 +0,0 @@
# Default

This example deploys a 3-node Elasticsearch 7.16.2 cluster using the
[default values][].

## Usage

* Deploy the Elasticsearch chart with the default values: `make install`
* You can now set up a port-forward to query the Elasticsearch API:

```
kubectl port-forward svc/elasticsearch-master 9200
curl localhost:9200/_cat/indices
```

## Testing

You can also run the [goss integration tests][] using `make test`.

[goss integration tests]: https://github.com/elastic/helm-charts/tree/7.16/elasticsearch/examples/default/test/goss.yaml
[default values]: https://github.com/elastic/helm-charts/tree/7.16/elasticsearch/values.yaml

Some files were not shown because too many files have changed in this diff.