Compare commits


7 Commits

Author SHA1 Message Date
Valeriano Manassero
e22bd30764 Upgrade to app version 1.5.0 (#81)
* Changed: upgrade to 1.5.0

* Fixed: inject after ct check

* Fixed: list changed

* Fixed: typo
2022-06-23 07:49:45 +02:00
Valeriano Manassero
84a003b7bc Fix fileserver check on Agent (#82)
* Fixed: fileserver check

* Changed: version bump
2022-06-22 16:52:51 +02:00
Valeriano Manassero
1d95f0c27f Pullsecrets pod template (#80)
* Added: pullsecrets management for pod template

* Changed: version bump
2022-06-22 15:52:14 +02:00
Valeriano Manassero
562815e97a ClearML standalone agent chart (#79)
* Added: agent chart

* Changed: base image tag

* Changed: updated helm-docs

* Fixed: maintainers

* Changed: updated readme

* Fixed: http code to check

* Fixed: default values to check

* Changed: updated helm-docs

* Added: default values to be substituted by GH action

* Added: sed on the fly for testing

* Changed: updated CI images for Kind
2022-06-08 10:01:33 +02:00
Leon Rotim
e317610397 Fix linebreak formatting (#77)
* fix wrong line break delete

* version, bugfix inc

* regenerate README.md

Co-authored-by: Valeriano Manassero <14011549+valeriano-manassero@users.noreply.github.com>
2022-06-08 08:16:52 +02:00
Luca Cerone
16f172fc1c Allowing extraEnv to be added to agentservices and agent deployments. (#76)
* Allowing extraEnv to be added to agentservices and agent deployments.

* Bumped version and generated documentation chart

* Lint fix

* Chart version update

* helm-docs update

Co-authored-by: Valeriano Manassero <14011549+valeriano-manassero@users.noreply.github.com>
2022-06-08 08:06:20 +02:00
Valeriano Manassero
69048b5c96 Fix glue agent image (#78)
* Changed: avoid latest image

* Changed: version bump

* Fixed: pull policy

* Removed: specific ci for glue since now it's on by default

* Fixed: don't refresh dependencies

* Changed: testing chart action version update

* Fixed: action

* Changed: dependency updates required

* Fixed: lint and install

* Revert "Changed: dependency updates required"

This reverts commit 34ee22d7d0.

* Changed: use copy of dep charts because they may become unavailable

* Changed: updated readme
2022-06-02 21:20:00 +02:00
178 changed files with 14952 additions and 611 deletions

View File

@@ -21,29 +21,32 @@ jobs:
strategy:
matrix:
k8s:
- v1.21.2
- v1.22.1
- v1.23.0
- v1.22.7
- v1.23.6
- v1.24.0
steps:
- name: Checkout
uses: actions/checkout@v1
- name: Create kind ${{ matrix.k8s }} cluster
uses: helm/kind-action@v1.1.0
uses: helm/kind-action@v1.2.0
with:
version: v0.11.1
version: v0.13.0
node_image: kindest/node:${{ matrix.k8s }}
- name: Set up chart-testing
uses: helm/chart-testing-action@v2.0.1
- name: Add bitnami repo
run: helm repo add bitnami https://charts.bitnami.com/bitnami
- name: Add elastic repo
run: helm repo add elastic https://helm.elastic.co
uses: helm/chart-testing-action@v2.2.1
- name: Run chart-testing (list-changed)
id: list-changed
run: |
changed=$(ct list-changed --chart-dirs=charts --target-branch=main)
if [[ -n "$changed" ]]; then
echo "::set-output name=changed::true"
echo "::set-output name=changed_charts::\"${changed//$'\n'/,}\""
fi
- name: Inject secrets
run: |
find ./charts/*/ci/*.yaml -type f -exec sed -i "s/AGENTK8SGLUEKEY/${{ secrets.agentk8sglueKey }}/g" {} \;
find ./charts/*/ci/*.yaml -type f -exec sed -i "s/AGENTK8SGLUESECRET/${{ secrets.agentk8sglueSecret }}/g" {} \;
if: steps.list-changed.outputs.changed == 'true'
- name: Run chart-testing (lint and install)
run: ct lint-and-install --chart-dirs=charts --target-branch=main --helm-extra-args="--timeout=15m" --debug=true
run: ct lint-and-install --chart-dirs=charts --target-branch=main --helm-extra-args="--timeout=15m" --charts=${{steps.list-changed.outputs.changed_charts}} --debug=true
if: steps.list-changed.outputs.changed == 'true'
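Note: the list-changed step above turns ct's newline-separated chart list into a comma-separated value so the final step only lints and installs charts that actually changed. A minimal local sketch of the same transformation, assuming the chart-testing CLI (ct) is installed and run from the repo root:

    # Join ct's newline-separated output with commas, as the workflow does
    changed=$(ct list-changed --chart-dirs=charts --target-branch=main)
    if [ -n "$changed" ]; then
      echo "changed_charts=\"${changed//$'\n'/,}\""  # e.g. charts/clearml,charts/clearml-agent
    fi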

View File

@@ -0,0 +1,23 @@
# Patterns to ignore when building packages.
# This supports shell glob matching, relative path matching, and
# negation (prefixed with !). Only one pattern per line.
.DS_Store
# Common VCS dirs
.git/
.gitignore
.bzr/
.bzrignore
.hg/
.hgignore
.svn/
# Common backup files
*.swp
*.bak
*.tmp
*.orig
*~
# Various IDEs
.project
.idea/
*.tmproj
.vscode/

View File

@@ -0,0 +1,19 @@
apiVersion: v2
name: clearml-agent
description: MLOps platform
type: application
version: "1.0.2"
appVersion: "1.21"
kubeVersion: ">= 1.19.0-0 < 1.25.0-0"
home: https://clear.ml
icon: https://raw.githubusercontent.com/allegroai/clearml/master/docs/clearml-logo.svg
sources:
- https://github.com/allegroai/clearml-helm-charts
- https://github.com/allegroai/clearml
maintainers:
- name: valeriano-manassero
url: https://github.com/valeriano-manassero
keywords:
- clearml
- "machine learning"
- mlops

View File

@@ -0,0 +1,57 @@
# clearml-agent
![Version: 1.0.2](https://img.shields.io/badge/Version-1.0.2-informational?style=flat-square) ![Type: application](https://img.shields.io/badge/Type-application-informational?style=flat-square) ![AppVersion: 1.21](https://img.shields.io/badge/AppVersion-1.21-informational?style=flat-square)
MLOps platform
**Homepage:** <https://clear.ml>
## Maintainers
| Name | Email | Url |
| ---- | ------ | --- |
| valeriano-manassero | | <https://github.com/valeriano-manassero> |
## Source Code
* <https://github.com/allegroai/clearml-helm-charts>
* <https://github.com/allegroai/clearml>
## Requirements
Kubernetes: `>= 1.19.0-0 < 1.25.0-0`
## Values
| Key | Type | Default | Description |
|-----|------|---------|-------------|
| agentk8sglue | object | `{"apiServerUrlReference":"https://api.clear.ml","defaultContainerImage":"ubuntu:18.04","fileServerUrlReference":"https://files.clear.ml","id":"k8s-agent","image":{"repository":"allegroai/clearml-agent-k8s","tag":"base-1.21"},"maxPods":10,"podTemplate":{"env":[],"nodeSelector":{},"resources":{},"tolerations":[],"volumes":[]},"queue":"default","replicaCount":1,"serviceAccountName":"default","webServerUrlReference":"https://app.clear.ml"}` | This agent will spawn queued experiments in new pods; a good use case is to combine this with GPU autoscaling nodes. https://github.com/allegroai/clearml-agent/tree/master/docker/k8s-glue |
| agentk8sglue.apiServerUrlReference | string | `"https://api.clear.ml"` | Reference to Api server url |
| agentk8sglue.defaultContainerImage | string | `"ubuntu:18.04"` | default container image for ClearML Task pod |
| agentk8sglue.fileServerUrlReference | string | `"https://files.clear.ml"` | Reference to File server url |
| agentk8sglue.id | string | `"k8s-agent"` | ClearML worker ID (must be unique across the entire ClearML environment) |
| agentk8sglue.image | object | `{"repository":"allegroai/clearml-agent-k8s","tag":"base-1.21"}` | Glue Agent image configuration |
| agentk8sglue.maxPods | int | `10` | maximum number of ClearML Task pods to consume concurrently |
| agentk8sglue.podTemplate | object | `{"env":[],"nodeSelector":{},"resources":{},"tolerations":[],"volumes":[]}` | template for pods spawned to consume ClearML Task |
| agentk8sglue.podTemplate.env | list | `[]` | environment variables for pods spawned to consume ClearML Task (example in values.yaml comments) |
| agentk8sglue.podTemplate.nodeSelector | object | `{}` | nodeSelector setup for pods spawned to consume ClearML Task (example in values.yaml comments) |
| agentk8sglue.podTemplate.resources | object | `{}` | resources declaration for pods spawned to consume ClearML Task (example in values.yaml comments) |
| agentk8sglue.podTemplate.tolerations | list | `[]` | tolerations setup for pods spawned to consume ClearML Task (example in values.yaml comments) |
| agentk8sglue.podTemplate.volumes | list | `[]` | volumes definition for pods spawned to consume ClearML Task (example in values.yaml comments) |
| agentk8sglue.queue | string | `"default"` | ClearML queue this agent will consume |
| agentk8sglue.replicaCount | int | `1` | Glue Agent number of pods |
| agentk8sglue.serviceAccountName | string | `"default"` | serviceAccountName for pods spawned to consume ClearML Task |
| agentk8sglue.webServerUrlReference | string | `"https://app.clear.ml"` | Reference to Web server url |
| clearml | object | `{"agentk8sglueKey":"ACCESSKEY","agentk8sglueSecret":"SECRETKEY"}` | ClearML generic configurations |
| clearml.agentk8sglueKey | string | `"ACCESSKEY"` | Agent k8s Glue basic auth key |
| clearml.agentk8sglueSecret | string | `"SECRETKEY"` | Agent k8s Glue basic auth secret |
| imageCredentials | object | `{"email":"someone@host.com","enabled":false,"existingSecret":"","password":"pwd","registry":"docker.io","username":"someone"}` | Private image registry configuration |
| imageCredentials.email | string | `"someone@host.com"` | Email |
| imageCredentials.enabled | bool | `false` | Use private authentication mode |
| imageCredentials.existingSecret | string | `""` | If this is set, the chart will not generate a secret and will use the one defined here |
| imageCredentials.password | string | `"pwd"` | Registry password |
| imageCredentials.registry | string | `"docker.io"` | Registry name |
| imageCredentials.username | string | `"someone"` | Registry username |
----------------------------------------------
Autogenerated from chart metadata using [helm-docs v1.10.0](https://github.com/norwoodj/helm-docs/releases/v1.10.0)
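As a quick usage sketch (the Helm repo URL and release name below are assumptions, not part of the generated docs above), the standalone agent chart can be installed against an existing ClearML server roughly like this:

    # Hypothetical install; substitute credentials generated by your ClearML server
    helm repo add allegroai https://allegroai.github.io/clearml-helm-charts
    helm install clearml-agent allegroai/clearml-agent \
      --set clearml.agentk8sglueKey=ACCESSKEY \
      --set clearml.agentk8sglueSecret=SECRETKEY \
      --set agentk8sglue.queue=default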

View File

@@ -0,0 +1,3 @@
clearml:
agentk8sglueKey: "AGENTK8SGLUEKEY"
agentk8sglueSecret: "AGENTK8SGLUESECRET"

View File

@@ -0,0 +1 @@
Glue Agent deployed.

View File

@@ -0,0 +1,86 @@
{{/*
Expand the name of the chart.
*/}}
{{- define "clearml.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" }}
{{- end }}
{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
If release name contains chart name it will be used as a full name.
*/}}
{{- define "clearml.fullname" -}}
{{- if .Values.fullnameOverride }}
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" }}
{{- else }}
{{- $name := default .Chart.Name .Values.nameOverride }}
{{- if contains $name .Release.Name }}
{{- .Release.Name | trunc 63 | trimSuffix "-" }}
{{- else }}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" }}
{{- end }}
{{- end }}
{{- end }}
{{/*
Create chart name and version as used by the chart label.
*/}}
{{- define "clearml.chart" -}}
{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" }}
{{- end }}
{{/*
Common labels
*/}}
{{- define "clearml.labels" -}}
helm.sh/chart: {{ include "clearml.chart" . }}
{{ include "clearml.selectorLabels" . }}
{{- if .Chart.AppVersion }}
app.kubernetes.io/version: {{ .Chart.AppVersion | quote }}
{{- end }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
{{- end }}
{{/*
Selector labels
*/}}
{{- define "clearml.selectorLabels" -}}
app.kubernetes.io/name: {{ include "clearml.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
{{- end }}
{{/*
Reference Name (agentk8sglue)
*/}}
{{- define "agentk8sglue.referenceName" -}}
{{- include "clearml.name" . }}-agentk8sglue
{{- end }}
{{/*
Selector labels (agentk8sglue)
*/}}
{{- define "agentk8sglue.selectorLabels" -}}
app.kubernetes.io/name: {{ include "clearml.name" . }}
app.kubernetes.io/instance: {{ include "agentk8sglue.referenceName" . }}
{{- end }}
{{/*
Create the name of the service account to use
*/}}
{{- define "clearml.serviceAccountName" -}}
{{- if .Values.serviceAccount.create }}
{{- default (include "clearml.fullname" .) .Values.serviceAccount.name }}
{{- else }}
{{- default "default" .Values.serviceAccount.name }}
{{- end }}
{{- end }}
{{/*
Create secret to access docker registry
*/}}
{{- define "imagePullSecret" }}
{{- with .Values.imageCredentials }}
{{- printf "{\"auths\":{\"%s\":{\"username\":\"%s\",\"password\":\"%s\",\"email\":\"%s\",\"auth\":\"%s\"}}}" .registry .username .password .email (printf "%s:%s" .username .password | b64enc) | b64enc }}
{{- end }}
{{- end }}
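For reference, the imagePullSecret helper above produces a base64-encoded Docker config JSON whose auth field is itself the base64 of username:password. A rough shell equivalent using the chart's placeholder credentials (base64 -w0 is the GNU flag for unwrapped output):

    # Mirror the template: inner b64enc of user:pass, outer b64enc of the JSON
    auth=$(printf '%s:%s' someone pwd | base64)
    printf '{"auths":{"%s":{"username":"%s","password":"%s","email":"%s","auth":"%s"}}}' \
      docker.io someone pwd someone@host.com "$auth" | base64 -w0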

View File

@@ -1,4 +1,3 @@
{{- if .Values.agentk8sglue.enabled }}
apiVersion: v1
kind: ConfigMap
metadata:
@@ -9,6 +8,14 @@ data:
metadata:
namespace: {{ .Release.Namespace }}
spec:
{{- if .Values.imageCredentials.enabled }}
imagePullSecrets:
{{- if .Values.imageCredentials.existingSecret }}
- name: {{ .Values.imageCredentials.existingSecret }}
{{- else }}
- name: clearml-agent-registry-key
{{- end }}
{{- end }}
serviceAccountName: {{ .Values.agentk8sglue.serviceAccountName }}
volumes:
{{- range .Values.agentk8sglue.podTemplate.volumes }}
@@ -28,21 +35,21 @@ data:
{{- end }}
env:
- name: CLEARML_API_HOST
value: "http://{{ include "clearml.fullname" . }}-apiserver:{{ .Values.apiserver.service.port }}"
value: {{.Values.agentk8sglue.apiServerUrlReference}}
- name: CLEARML_WEB_HOST
value: "http://{{ include "clearml.fullname" . }}-webserver"
value: {{.Values.agentk8sglue.webServerUrlReference}}
- name: CLEARML_FILES_HOST
value: "http://{{ include "clearml.fullname" . }}-fileserver:{{ .Values.fileserver.service.port }}"
value: {{.Values.agentk8sglue.fileServerUrlReference}}
- name: CLEARML_API_ACCESS_KEY
valueFrom:
secretKeyRef:
name: clearml-conf
key: apiserver_key
name: clearml-agent-conf
key: agentk8sglue_key
- name: CLEARML_API_SECRET_KEY
valueFrom:
secretKeyRef:
name: clearml-conf
key: apiserver_secret
name: clearml-agent-conf
key: agentk8sglue_secret
{{- if .Values.agentk8sglue.podTemplate.env }}
{{ toYaml .Values.agentk8sglue.podTemplate.env | nindent 8 }}
{{- end }}
@@ -54,4 +61,3 @@ data:
tolerations:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- end }}

View File

@@ -0,0 +1,91 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ include "agentk8sglue.referenceName" . }}
labels:
{{- include "clearml.labels" . | nindent 4 }}
spec:
replicas: {{ .Values.agentk8sglue.replicaCount }}
selector:
matchLabels:
{{- include "agentk8sglue.selectorLabels" . | nindent 6 }}
template:
metadata:
annotations:
checksum/config: {{ printf "%s" .Values.clearml | sha256sum }}
labels:
{{- include "agentk8sglue.selectorLabels" . | nindent 8 }}
spec:
{{- if .Values.imageCredentials.enabled }}
imagePullSecrets:
{{- if .Values.imageCredentials.existingSecret }}
- name: {{ .Values.imageCredentials.existingSecret }}
{{- else }}
- name: clearml-agent-registry-key
{{- end }}
{{- end }}
initContainers:
- name: init-k8s-glue
image: "{{ .Values.agentk8sglue.image.repository }}:{{ .Values.agentk8sglue.image.tag }}"
command:
- /bin/sh
- -c
- >
set -x;
while [ $(curl -sw '%{http_code}' "{{.Values.agentk8sglue.apiServerUrlReference}}/debug.ping" -o /dev/null) -ne 200 ] ; do
echo "waiting for apiserver" ;
sleep 5 ;
done;
while [[ ! "$(curl -sw '%{http_code}' "{{.Values.agentk8sglue.fileServerUrlReference}}/" -o /dev/null)" =~ ^(403|405)$ ]] ; do
echo "waiting for fileserver" ;
sleep 5 ;
done;
while [ $(curl -sw '%{http_code}' "{{.Values.agentk8sglue.webServerUrlReference}}/" -o /dev/null) -ne 200 ] ; do
echo "waiting for webserver" ;
sleep 5 ;
done
containers:
- name: k8s-glue
image: "{{ .Values.agentk8sglue.image.repository }}:{{ .Values.agentk8sglue.image.tag }}"
imagePullPolicy: Always
command: ["/bin/bash", "-c", "export PATH=$PATH:$HOME/bin; source /root/.bashrc && /root/entrypoint.sh"]
volumeMounts:
- name: k8sagent-pod-template
mountPath: /root/template
env:
- name: CLEARML_API_HOST
value: "{{.Values.agentk8sglue.apiServerUrlReference}}"
- name: CLEARML_WEB_HOST
value: "{{.Values.agentk8sglue.webServerUrlReference}}"
- name: CLEARML_FILES_HOST
value: "{{.Values.agentk8sglue.fileServerUrlReference}}"
- name: K8S_GLUE_MAX_PODS
value: "{{.Values.agentk8sglue.maxPods}}"
- name: K8S_GLUE_QUEUE
value: "{{.Values.agentk8sglue.queue}}"
- name: K8S_GLUE_EXTRA_ARGS
value: "--namespace {{ .Release.Namespace }} --template-yaml /root/template/template.yaml"
- name: K8S_DEFAULT_NAMESPACE
value: "{{ .Release.Namespace }}"
- name: CLEARML_API_ACCESS_KEY
valueFrom:
secretKeyRef:
name: clearml-agent-conf
key: agentk8sglue_key
- name: CLEARML_API_SECRET_KEY
valueFrom:
secretKeyRef:
name: clearml-agent-conf
key: agentk8sglue_secret
- name: CLEARML_WORKER_ID
value: "{{.Values.agentk8sglue.id}}"
- name: CLEARML_AGENT_UPDATE_REPO
value: ""
- name: FORCE_CLEARML_AGENT_REPO
value: ""
- name: CLEARML_DOCKER_IMAGE
value: "{{.Values.agentk8sglue.defaultContainerImage}}"
volumes:
- name: k8sagent-pod-template
configMap:
name: k8sagent-pod-template
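The init container in this Deployment blocks startup until the API, file, and web servers answer. The apiserver wait loop as a standalone shell sketch (the URL is just the chart's default value):

    # Poll the ping endpoint until it returns HTTP 200
    until [ "$(curl -sw '%{http_code}' https://api.clear.ml/debug.ping -o /dev/null)" -eq 200 ]; do
      echo "waiting for apiserver"; sleep 5
    done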

View File

@@ -1,4 +1,3 @@
{{- if .Values.agentk8sglue.enabled }}
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
@@ -22,4 +21,3 @@ roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: k8sagent-pods-access
{{- end }}

View File

@@ -0,0 +1,19 @@
apiVersion: v1
kind: Secret
metadata:
name: clearml-agent-conf
data:
agentk8sglue_key: {{ .Values.clearml.agentk8sglueKey | b64enc }}
agentk8sglue_secret: {{ .Values.clearml.agentk8sglueSecret | b64enc }}
---
{{- if .Values.imageCredentials.enabled }}
{{- if not .Values.imageCredentials.existingSecret }}
apiVersion: v1
kind: Secret
metadata:
name: clearml-agent-registry-key
type: kubernetes.io/dockerconfigjson
data:
.dockerconfigjson: {{ template "imagePullSecret" . }}
{{- end }}
{{- end }}
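Once rendered and applied, the stored credentials can be read back for verification; a small sketch assuming the release namespace is the current kubectl context:

    # Decode the agent key from the generated Secret
    kubectl get secret clearml-agent-conf \
      -o jsonpath='{.data.agentk8sglue_key}' | base64 -d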

View File

@@ -0,0 +1,81 @@
# -- Private image registry configuration
imageCredentials:
# -- Use private authentication mode
enabled: false
# -- If this is set, the chart will not generate a secret and will use the one defined here
existingSecret: ""
# -- Registry name
registry: docker.io
# -- Registry username
username: someone
# -- Registry password
password: pwd
# -- Email
email: someone@host.com
# -- ClearML generic configurations
clearml:
# -- Agent k8s Glue basic auth key
agentk8sglueKey: "ACCESSKEY"
# -- Agent k8s Glue basic auth secret
agentk8sglueSecret: "SECRETKEY"
# -- This agent will spawn queued experiments in new pods; a good use case is to combine this with
# GPU autoscaling nodes.
# https://github.com/allegroai/clearml-agent/tree/master/docker/k8s-glue
agentk8sglue:
# -- Glue Agent image configuration
image:
repository: "allegroai/clearml-agent-k8s"
tag: "base-1.21"
# -- Glue Agent number of pods
replicaCount: 1
# -- Reference to Api server url
apiServerUrlReference: "https://api.clear.ml"
# -- Reference to File server url
fileServerUrlReference: "https://files.clear.ml"
# -- Reference to Web server url
webServerUrlReference: "https://app.clear.ml"
# -- serviceAccountName for pods spawned to consume ClearML Task
serviceAccountName: default
# -- maximum number of ClearML Task pods to consume concurrently
maxPods: 10
# -- default container image for ClearML Task pod
defaultContainerImage: ubuntu:18.04
# -- ClearML queue this agent will consume
queue: default
# -- ClearML worker ID (must be unique across the entire ClearML environment)
id: k8s-agent
# -- template for pods spawned to consume ClearML Task
podTemplate:
# -- volumes definition for pods spawned to consume ClearML Task (example in values.yaml comments)
volumes: []
# - name: "yourvolume"
# path: "/yourpath"
# -- environment variables for pods spawned to consume ClearML Task (example in values.yaml comments)
env: []
# # to setup access to private repo, setup secret with git credentials:
# - name: CLEARML_AGENT_GIT_USER
# value: mygitusername
# - name: CLEARML_AGENT_GIT_PASS
# valueFrom:
# secretKeyRef:
# name: git-password
# key: git-password
# -- resources declaration for pods spawned to consume ClearML Task (example in values.yaml comments)
resources: {}
# limits:
# nvidia.com/gpu: 1
# -- tolerations setup for pods spawned to consume ClearML Task (example in values.yaml comments)
tolerations: []
# - key: "nvidia.com/gpu"
# operator: Exists
# effect: "NoSchedule"
# -- nodeSelector setup for pods spawned to consume ClearML Task (example in values.yaml comments)
nodeSelector: {}
# fleet: gpu-nodes
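Pulling the commented examples above together, a hypothetical GPU-oriented override file could look like this (queue name, image, and the one-GPU limit are illustrative, not defaults; the repo alias matches the earlier install sketch):

    # Write an override file and apply it to the release
    cat > gpu-values.yaml <<'EOF'
    agentk8sglue:
      queue: gpu
      defaultContainerImage: nvidia/cuda:11.3.1-cudnn8-runtime-ubuntu20.04
      podTemplate:
        resources:
          limits:
            nvidia.com/gpu: 1
    EOF
    helm upgrade clearml-agent allegroai/clearml-agent -f gpu-values.yaml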

View File

@@ -1,12 +1,12 @@
dependencies:
- name: redis
repository: https://charts.bitnami.com/bitnami
repository: file://../../dependency_charts/redis
version: 10.9.0
- name: mongodb
repository: https://charts.bitnami.com/bitnami
repository: file://../../dependency_charts/mongodb
version: 10.3.4
- name: elasticsearch
repository: https://helm.elastic.co
repository: file://../../dependency_charts/elasticsearch
version: 7.16.2
digest: sha256:ac733cb02d50e8398c1d2832988333896f1c7b765c19a0f1eea5e9b24bdb8207
generated: "2022-01-05T07:52:34.913745+01:00"
digest: sha256:149b5a49382d280b1e083f3c193d014d3d2eb7fcdf3ec1402008996960cc173a
generated: "2022-06-02T21:09:00.961174+02:00"

View File

@@ -2,8 +2,8 @@ apiVersion: v2
name: clearml
description: MLOps platform
type: application
version: "3.10.2"
appVersion: "1.4.0"
version: "4.0.0"
appVersion: "1.5.0"
home: https://clear.ml
icon: https://raw.githubusercontent.com/allegroai/clearml/master/docs/clearml-logo.svg
sources:
@@ -19,13 +19,13 @@ keywords:
dependencies:
- name: redis
version: "10.9.0"
repository: "https://charts.bitnami.com/bitnami"
repository: "file://../../dependency_charts/redis"
condition: redis.enabled
- name: mongodb
version: "10.3.4"
repository: "https://charts.bitnami.com/bitnami"
repository: "file://../../dependency_charts/mongodb"
condition: mongodb.enabled
- name: elasticsearch
version: "7.16.2"
repository: "https://helm.elastic.co"
repository: "file://../../dependency_charts/elasticsearch"
condition: elasticsearch.enabled

View File

@@ -1,6 +1,6 @@
# ClearML Ecosystem for Kubernetes
![Version: 3.10.2](https://img.shields.io/badge/Version-3.10.2-informational?style=flat-square) ![Type: application](https://img.shields.io/badge/Type-application-informational?style=flat-square) ![AppVersion: 1.4.0](https://img.shields.io/badge/AppVersion-1.4.0-informational?style=flat-square)
![Version: 4.0.0](https://img.shields.io/badge/Version-4.0.0-informational?style=flat-square) ![Type: application](https://img.shields.io/badge/Type-application-informational?style=flat-square) ![AppVersion: 1.5.0](https://img.shields.io/badge/AppVersion-1.5.0-informational?style=flat-square)
MLOps platform
@@ -121,101 +121,14 @@ For detailed instructions, see the [Optional Configuration](https://github.com/a
| Repository | Name | Version |
|------------|------|---------|
| https://charts.bitnami.com/bitnami | mongodb | 10.3.4 |
| https://charts.bitnami.com/bitnami | redis | 10.9.0 |
| https://helm.elastic.co | elasticsearch | 7.16.2 |
| file://../../dependency_charts/elasticsearch | elasticsearch | 7.16.2 |
| file://../../dependency_charts/mongodb | mongodb | 10.3.4 |
| file://../../dependency_charts/redis | redis | 10.9.0 |
## Values
| Key | Type | Default | Description |
|-----|------|---------|-------------|
| agentGroups.agent-group-cpu.affinity | object | `{}` | |
| agentGroups.agent-group-cpu.agentVersion | string | `""` | |
| agentGroups.agent-group-cpu.awsAccessKeyId | string | `nil` | |
| agentGroups.agent-group-cpu.awsDefaultRegion | string | `nil` | |
| agentGroups.agent-group-cpu.awsSecretAccessKey | string | `nil` | |
| agentGroups.agent-group-cpu.azureStorageAccount | string | `nil` | |
| agentGroups.agent-group-cpu.azureStorageKey | string | `nil` | |
| agentGroups.agent-group-cpu.clearmlAccessKey | string | `nil` | |
| agentGroups.agent-group-cpu.clearmlConfig | string | `"sdk {\n}"` | |
| agentGroups.agent-group-cpu.clearmlGitPassword | string | `nil` | |
| agentGroups.agent-group-cpu.clearmlGitUser | string | `nil` | |
| agentGroups.agent-group-cpu.clearmlSecretKey | string | `nil` | |
| agentGroups.agent-group-cpu.enabled | bool | `false` | |
| agentGroups.agent-group-cpu.image.pullPolicy | string | `"IfNotPresent"` | |
| agentGroups.agent-group-cpu.image.repository | string | `"ubuntu"` | |
| agentGroups.agent-group-cpu.image.tag | string | `"18.04"` | |
| agentGroups.agent-group-cpu.name | string | `"agent-group-cpu"` | |
| agentGroups.agent-group-cpu.nodeSelector | object | `{}` | |
| agentGroups.agent-group-cpu.nvidiaGpusPerAgent | int | `0` | |
| agentGroups.agent-group-cpu.podAnnotations | object | `{}` | |
| agentGroups.agent-group-cpu.queues | string | `"default"` | |
| agentGroups.agent-group-cpu.replicaCount | int | `1` | |
| agentGroups.agent-group-cpu.tolerations | list | `[]` | |
| agentGroups.agent-group-cpu.updateStrategy | string | `"Recreate"` | |
| agentGroups.agent-group-gpu.affinity | object | `{}` | |
| agentGroups.agent-group-gpu.agentVersion | string | `""` | |
| agentGroups.agent-group-gpu.awsAccessKeyId | string | `nil` | |
| agentGroups.agent-group-gpu.awsDefaultRegion | string | `nil` | |
| agentGroups.agent-group-gpu.awsSecretAccessKey | string | `nil` | |
| agentGroups.agent-group-gpu.azureStorageAccount | string | `nil` | |
| agentGroups.agent-group-gpu.azureStorageKey | string | `nil` | |
| agentGroups.agent-group-gpu.clearmlAccessKey | string | `nil` | |
| agentGroups.agent-group-gpu.clearmlConfig | string | `"sdk {\n}"` | |
| agentGroups.agent-group-gpu.clearmlGitPassword | string | `nil` | |
| agentGroups.agent-group-gpu.clearmlGitUser | string | `nil` | |
| agentGroups.agent-group-gpu.clearmlSecretKey | string | `nil` | |
| agentGroups.agent-group-gpu.enabled | bool | `false` | |
| agentGroups.agent-group-gpu.image.pullPolicy | string | `"IfNotPresent"` | |
| agentGroups.agent-group-gpu.image.repository | string | `"nvidia/cuda"` | |
| agentGroups.agent-group-gpu.image.tag | string | `"11.0-base-ubuntu18.04"` | |
| agentGroups.agent-group-gpu.name | string | `"agent-group-gpu"` | |
| agentGroups.agent-group-gpu.nodeSelector | object | `{}` | |
| agentGroups.agent-group-gpu.nvidiaGpusPerAgent | int | `1` | |
| agentGroups.agent-group-gpu.podAnnotations | object | `{}` | |
| agentGroups.agent-group-gpu.queues | string | `"default"` | |
| agentGroups.agent-group-gpu.replicaCount | int | `0` | |
| agentGroups.agent-group-gpu.tolerations | list | `[]` | |
| agentGroups.agent-group-gpu.updateStrategy | string | `"Recreate"` | |
| agentk8sglue.defaultDockerImage | string | `"nvidia/cuda:11.3.1-cudnn8-runtime-ubuntu20.04"` | |
| agentk8sglue.enabled | bool | `true` | |
| agentk8sglue.id | string | `"k8s-agent"` | |
| agentk8sglue.image.repository | string | `"allegroai/clearml-agent-k8s"` | |
| agentk8sglue.image.tag | string | `"latest"` | |
| agentk8sglue.maxPods | int | `10` | |
| agentk8sglue.podTemplate.env | list | `[]` | |
| agentk8sglue.podTemplate.nodeSelector | object | `{}` | |
| agentk8sglue.podTemplate.resources | object | `{}` | |
| agentk8sglue.podTemplate.tolerations | list | `[]` | |
| agentk8sglue.podTemplate.volumes | list | `[]` | |
| agentk8sglue.queue | string | `"default"` | |
| agentk8sglue.serviceAccountName | string | `"default"` | |
| agentservices.affinity | object | `{}` | |
| agentservices.agentVersion | string | `""` | |
| agentservices.awsAccessKeyId | string | `nil` | |
| agentservices.awsDefaultRegion | string | `nil` | |
| agentservices.awsSecretAccessKey | string | `nil` | |
| agentservices.azureStorageAccount | string | `nil` | |
| agentservices.azureStorageKey | string | `nil` | |
| agentservices.clearmlFilesHost | string | `nil` | |
| agentservices.clearmlGitPassword | string | `nil` | |
| agentservices.clearmlGitUser | string | `nil` | |
| agentservices.clearmlHostIp | string | `nil` | |
| agentservices.clearmlWebHost | string | `nil` | |
| agentservices.clearmlWorkerId | string | `"clearml-services"` | |
| agentservices.enabled | bool | `false` | |
| agentservices.extraEnvs | list | `[]` | |
| agentservices.googleCredentials | string | `nil` | |
| agentservices.image.pullPolicy | string | `"IfNotPresent"` | |
| agentservices.image.repository | string | `"allegroai/clearml-agent-services"` | |
| agentservices.image.tag | string | `"latest"` | |
| agentservices.nodeSelector | object | `{}` | |
| agentservices.podAnnotations | object | `{}` | |
| agentservices.replicaCount | int | `1` | |
| agentservices.resources | object | `{}` | |
| agentservices.storage.data.class | string | `""` | |
| agentservices.storage.data.size | string | `"50Gi"` | |
| agentservices.tolerations | list | `[]` | |
| apiserver.additionalConfigs | object | `{}` | additional configurations that can be used by api server; check examples in values.yaml file |
| apiserver.affinity | object | `{}` | |
| apiserver.authCookiesMaxAge | int | `864000` | Amount of seconds the authorization cookie will last in user browser |
@@ -223,7 +136,7 @@ For detailed instructions, see the [Optional Configuration](https://github.com/a
| apiserver.extraEnvs | list | `[]` | |
| apiserver.image.pullPolicy | string | `"IfNotPresent"` | |
| apiserver.image.repository | string | `"allegroai/clearml"` | |
| apiserver.image.tag | string | `"1.4.0"` | |
| apiserver.image.tag | string | `"1.5.0"` | |
| apiserver.livenessDelay | int | `60` | |
| apiserver.nodeSelector | object | `{}` | |
| apiserver.podAnnotations | object | `{}` | |
@@ -237,7 +150,7 @@ For detailed instructions, see the [Optional Configuration](https://github.com/a
| apiserver.service.port | int | `8008` | |
| apiserver.service.type | string | `"NodePort"` | This will set to service's spec.type field |
| apiserver.tolerations | list | `[]` | |
| clearml.defaultCompany | string | `"d1bd92a3b039400cbafc60a7a5b1e52b"` | |
| clearml | object | `{"defaultCompany":"d1bd92a3b039400cbafc60a7a5b1e52b"}` | ClearML generic configurations |
| elasticsearch.clusterHealthCheckParams | string | `"wait_for_status=yellow&timeout=1s"` | |
| elasticsearch.clusterName | string | `"clearml-elastic"` | |
| elasticsearch.enabled | bool | `true` | |
@@ -283,7 +196,7 @@ For detailed instructions, see the [Optional Configuration](https://github.com/a
| fileserver.extraEnvs | list | `[]` | |
| fileserver.image.pullPolicy | string | `"IfNotPresent"` | |
| fileserver.image.repository | string | `"allegroai/clearml"` | |
| fileserver.image.tag | string | `"1.4.0"` | |
| fileserver.image.tag | string | `"1.5.0"` | |
| fileserver.nodeSelector | object | `{}` | |
| fileserver.podAnnotations | object | `{}` | |
| fileserver.replicaCount | int | `1` | |
@@ -294,6 +207,13 @@ For detailed instructions, see the [Optional Configuration](https://github.com/a
| fileserver.storage.data.class | string | `""` | |
| fileserver.storage.data.size | string | `"50Gi"` | |
| fileserver.tolerations | list | `[]` | |
| imageCredentials | object | `{"email":"someone@host.com","enabled":false,"existingSecret":"","password":"pwd","registry":"docker.io","username":"someone"}` | Private image registry configuration |
| imageCredentials.email | string | `"someone@host.com"` | Email |
| imageCredentials.enabled | bool | `false` | Use private authentication mode |
| imageCredentials.existingSecret | string | `""` | If this is set, the chart will not generate a secret and will use the one defined here |
| imageCredentials.password | string | `"pwd"` | Registry password |
| imageCredentials.registry | string | `"docker.io"` | Registry name |
| imageCredentials.username | string | `"someone"` | Registry username |
| ingress.annotations | object | `{}` | |
| ingress.api.annotations | object | `{}` | |
| ingress.api.enabled | bool | `false` | |
@@ -342,7 +262,7 @@ For detailed instructions, see the [Optional Configuration](https://github.com/a
| webserver.extraEnvs | list | `[]` | |
| webserver.image.pullPolicy | string | `"IfNotPresent"` | |
| webserver.image.repository | string | `"allegroai/clearml"` | |
| webserver.image.tag | string | `"1.4.0"` | |
| webserver.image.tag | string | `"1.5.0"` | |
| webserver.nodeSelector | object | `{}` | |
| webserver.podAnnotations | object | `{}` | |
| webserver.replicaCount | int | `1` | |

View File

@@ -1,2 +0,0 @@
agentk8sglue:
enabled: true

View File

@@ -1,119 +0,0 @@
{{- range $key, $value := .Values.agentGroups }}
{{- with $value }}
{{- if .enabled }}
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ include "clearml.fullname" $ }}-{{ .name }}-agent
labels:
{{- include "clearml.labels" $ | nindent 4 }}
spec:
replicas: {{ .replicaCount }}
strategy:
type: {{ .updateStrategy }}
selector:
matchLabels:
{{- include "clearml.selectorLabelsAgent" $ | nindent 6 }}
template:
metadata:
annotations:
checksum/secret: {{ include (print $.Template.BasePath "/secrets.yaml") $ | sha256sum }}
{{- with .podAnnotations }}
{{- toYaml . | nindent 8 }}
{{- end }}
labels:
{{- include "clearml.selectorLabelsAgent" $ | nindent 8 }}
spec:
volumes:
{{ if .clearmlConfig }}
- name: agent-clearml-conf-volume
secret:
secretName: {{ .name }}-conf
items:
- key: clearml.conf
path: clearml.conf
{{ end }}
initContainers:
- name: init-agent-{{ .name }}
image: "{{ .image.repository }}:{{ .image.tag | default $.Chart.AppVersion }}"
command:
- /bin/sh
- -c
- >
set -x;
while [ $(curl -sw '%{http_code}' "{{ include "clearml.serviceApi" $ }}/debug.ping" -o /dev/null) -ne 200 ] ; do
echo "waiting for apiserver" ;
sleep 5 ;
done
containers:
- name: {{ $.Chart.Name }}-{{ .name }}
image: "{{ .image.repository }}:{{ .image.tag }}"
imagePullPolicy: {{ .image.pullPolicy }}
securityContext:
privileged: true
resources:
limits:
nvidia.com/gpu:
{{ .nvidiaGpusPerAgent }}
env:
- name: CLEARML_API_HOST
value: {{ include "clearml.serviceApi" $ }}
- name: CLEARML_WEB_HOST
value: {{ include "clearml.serviceApp" $ }}
- name: CLEARML_FILES_HOST
value: {{ include "clearml.serviceFiles" $ }}
- name: CLEARML_AGENT_GIT_USER
value: {{ .clearmlGitUser}}
- name: CLEARML_AGENT_GIT_PASS
value: {{ .clearmlGitPassword}}
- name: AWS_ACCESS_KEY_ID
value: {{ .awsAccessKeyId}}
- name: AWS_SECRET_ACCESS_KEY
value: {{ .awsSecretAccessKey}}
- name: AWS_DEFAULT_REGION
value: {{ .awsDefaultRegion}}
- name: AZURE_STORAGE_ACCOUNT
value: {{ .azureStorageAccount}}
- name: AZURE_STORAGE_KEY
value: {{ .azureStorageKey}}
- name: CLEARML_API_ACCESS_KEY
valueFrom:
secretKeyRef:
name: clearml-conf
key: tests_user_key
- name: CLEARML_API_SECRET_KEY
valueFrom:
secretKeyRef:
name: clearml-conf
key: tests_user_secret
command:
- /bin/sh
- -c
- "apt-get update ;
apt-get install -y curl python3-pip git;
python3 -m pip install -U pip ;
python3 -m pip install clearml-agent{{ .agentVersion}} ;
CLEARML_AGENT_K8S_HOST_MOUNT=/root/.clearml:/root/.clearml clearml-agent daemon --foreground --queue {{ .queues}}"
{{ if .clearmlConfig }}
volumeMounts:
- name: agent-clearml-conf-volume
mountPath: /root/clearml.conf
subPath: clearml.conf
readOnly: true
{{- end }}
{{- with .nodeSelector }}
nodeSelector:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .affinity }}
affinity:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .tolerations }}
tolerations:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- end }}
{{- end }}
{{- end }}

View File

@@ -1,64 +0,0 @@
{{- if .Values.agentk8sglue.enabled }}
apiVersion: apps/v1
kind: Deployment
metadata:
name: "{{ include "clearml.fullname" . }}-k8sagent"
labels:
app: k8sagent
spec:
replicas: 1
selector:
matchLabels:
app: k8sagent
template:
metadata:
labels:
app: k8sagent
spec:
containers:
- name: k8s-glue-container
image: "{{ .Values.agentk8sglue.image.repository }}:{{ .Values.agentk8sglue.image.tag }}"
imagePullPolicy: Always
command: ["/bin/bash", "-c", "export PATH=$PATH:$HOME/bin; source /root/.bashrc && /root/entrypoint.sh"]
volumeMounts:
- name: k8sagent-pod-template
mountPath: /root/template
env:
- name: CLEARML_API_HOST
value: "http://{{ include "clearml.fullname" . }}-apiserver:{{ .Values.apiserver.service.port }}"
- name: CLEARML_WEB_HOST
value: "http://{{ include "clearml.fullname" . }}-webserver"
- name: CLEARML_FILES_HOST
value: "http://{{ include "clearml.fullname" . }}-fileserver:{{ .Values.fileserver.service.port }}"
- name: K8S_GLUE_MAX_PODS
value: "{{.Values.agentk8sglue.maxPods}}"
- name: K8S_GLUE_QUEUE
value: "{{.Values.agentk8sglue.queue}}"
- name: K8S_GLUE_EXTRA_ARGS
value: "--namespace {{ .Release.Namespace }} --template-yaml /root/template/template.yaml"
- name: K8S_DEFAULT_NAMESPACE
value: "{{ .Release.Namespace }}"
- name: CLEARML_API_ACCESS_KEY
valueFrom:
secretKeyRef:
name: clearml-conf
key: apiserver_key
- name: CLEARML_API_SECRET_KEY
valueFrom:
secretKeyRef:
name: clearml-conf
key: apiserver_secret
- name: CLEARML_WORKER_ID
value: "{{.Values.agentk8sglue.id}}"
- name: CLEARML_AGENT_UPDATE_REPO
value: ""
- name: FORCE_CLEARML_AGENT_REPO
value: ""
- name: CLEARML_DOCKER_IMAGE
value: "{{.Values.agentk8sglue.defaultDockerImage}}"
volumes:
- name: k8sagent-pod-template
configMap:
name: k8sagent-pod-template
{{- end }}

View File

@@ -1,103 +0,0 @@
{{- if .Values.agentservices.enabled }}
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ include "clearml.fullname" . }}-agentservices
labels:
{{- include "clearml.labels" . | nindent 4 }}
spec:
replicas: {{ .Values.agentservices.replicaCount }}
selector:
matchLabels:
{{- include "clearml.selectorLabelsAgentServices" . | nindent 6 }}
template:
metadata:
annotations:
checksum/secret: {{ include (print $.Template.BasePath "/secrets.yaml") . | sha256sum }}
{{- with .Values.agentservices.podAnnotations }}
{{- toYaml . | nindent 8 }}
{{- end }}
labels:
{{- include "clearml.selectorLabelsAgentServices" . | nindent 8 }}
spec:
volumes:
- name: agentservices-data
persistentVolumeClaim:
claimName: {{ include "clearml.fullname" . }}-agentservices-data
initContainers:
- name: init-agentservices
image: "{{ .Values.agentservices.image.repository }}:{{ .Values.agentservices.image.tag | default .Chart.AppVersion }}"
command:
- /bin/sh
- -c
- >
set -x;
while [ $(curl -sw '%{http_code}' "{{ include "clearml.serviceApi" $ }}/debug.ping" -o /dev/null) -ne 200 ] ; do
echo "waiting for apiserver" ;
sleep 5 ;
done
containers:
- name: {{ .Chart.Name }}
image: "{{ .Values.agentservices.image.repository }}:{{ .Values.agentservices.image.tag | default .Chart.AppVersion }}"
imagePullPolicy: {{ .Values.agentservices.image.pullPolicy }}
env:
- name: CLEARML_HOST_IP
value: {{ .Values.agentservices.clearmlHostIp }}
- name: CLEARML_API_HOST
value: {{ include "clearml.serviceApi" $ }}
- name: CLEARML_WEB_HOST
value: {{ .Values.agentservices.clearmlWebHost }}
- name: CLEARML_FILES_HOST
value: {{ .Values.agentservices.clearmlFilesHost }}
- name: CLEARML_AGENT_GIT_USER
value: {{ .Values.agentservices.clearmlGitUser }}
- name: CLEARML_AGENT_GIT_PASS
value: {{ .Values.agentservices.clearmlGitPassword }}
- name: CLEARML_AGENT_UPDATE_VERSION
value: {{ .Values.agentservices.agentVersion }}
- name: CLEARML_AGENT_DEFAULT_BASE_DOCKER
value: {{ .Values.agentservices.defaultBaseDocker }}
- name: AWS_ACCESS_KEY_ID
value: {{ .Values.agentservices.awsAccessKeyId }}
- name: AWS_SECRET_ACCESS_KEY
value: {{ .Values.agentservices.awsSecretAccessKey }}
- name: AWS_DEFAULT_REGION
value: {{ .Values.agentservices.awsDefaultRegion }}
- name: AZURE_STORAGE_ACCOUNT
value: {{ .Values.agentservices.azureStorageAccount }}
- name: AZURE_STORAGE_KEY
value: {{ .Values.agentservices.azureStorageKey }}
- name: GOOGLE_APPLICATION_CREDENTIALS
value: {{ .Values.agentservices.googleCredentials }}
- name: CLEARML_WORKER_ID
value: {{ .Values.agentservices.clearmlWorkerId }}
- name: CLEARML_API_ACCESS_KEY
valueFrom:
secretKeyRef:
name: clearml-conf
key: tests_user_key
- name: CLEARML_API_SECRET_KEY
valueFrom:
secretKeyRef:
name: clearml-conf
key: tests_user_secret
args:
- agentservices
volumeMounts:
- name: agentservices-data
mountPath: /root/.clearml
resources:
{{- toYaml .Values.agentservices.resources | nindent 12 }}
{{- with .Values.agentservices.nodeSelector }}
nodeSelector:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.agentservices.affinity }}
affinity:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.agentservices.tolerations }}
tolerations:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- end }}

View File

@@ -19,7 +19,14 @@ spec:
labels:
{{- include "clearml.selectorLabelsApiServer" . | nindent 8 }}
spec:
{{- include "clearml.imagePullSecrets" . | indent 6 }}
{{- if .Values.imageCredentials.enabled }}
imagePullSecrets:
{{- if .Values.imageCredentials.existingSecret }}
- name: {{ .Values.imageCredentials.existingSecret }}
{{- else }}
- name: clearml-agent-registry-key
{{- end }}
{{- end }}
containers:
- name: {{ .Chart.Name }}
image: "{{ .Values.apiserver.image.repository }}:{{ .Values.apiserver.image.tag | default .Chart.AppVersion }}"

View File

@@ -22,7 +22,14 @@ spec:
- name: fileserver-data
persistentVolumeClaim:
claimName: {{ include "clearml.fullname" . }}-fileserver-data
{{- include "clearml.imagePullSecrets" . | indent 6 }}
{{- if .Values.imageCredentials.enabled }}
imagePullSecrets:
{{- if .Values.imageCredentials.existingSecret }}
- name: {{ .Values.imageCredentials.existingSecret }}
{{- else }}
- name: clearml-agent-registry-key
{{- end }}
{{- end }}
containers:
- name: {{ .Chart.Name }}
image: "{{ .Values.fileserver.image.repository }}:{{ .Values.fileserver.image.tag | default .Chart.AppVersion }}"

View File

@@ -18,7 +18,14 @@ spec:
labels:
{{- include "clearml.selectorLabelsWebServer" . | nindent 8 }}
spec:
{{- include "clearml.imagePullSecrets" . | indent 6 }}
{{- if .Values.imageCredentials.enabled }}
imagePullSecrets:
{{- if .Values.imageCredentials.existingSecret }}
- name: {{ .Values.imageCredentials.existingSecret }}
{{- else }}
- name: clearml-agent-registry-key
{{- end }}
{{- end }}
containers:
- name: {{ .Chart.Name }}
image: "{{ .Values.webserver.image.repository }}:{{ .Values.webserver.image.tag | default .Chart.AppVersion }}"

View File

@@ -1,17 +0,0 @@
{{- if .Values.agentservices.enabled }}
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: {{ include "clearml.fullname" . }}-agentservices-data
labels:
{{- include "clearml.labels" . | nindent 4 }}
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: {{ .Values.agentservices.storage.data.size | quote }}
{{- if .Values.agentservices.storage.data.class -}}
storageClassName: {{ .Values.agentservices.storage.data.class | quote }}
{{- end -}}
{{- end }}

View File

@@ -10,7 +10,7 @@ spec:
resources:
requests:
storage: {{ .Values.fileserver.storage.data.size | quote }}
{{- if .Values.fileserver.storage.data.class -}}
{{- if .Values.fileserver.storage.data.class }}
storageClassName: {{ .Values.fileserver.storage.data.class | quote }}
{{- end -}}

View File

@@ -1,13 +0,0 @@
{{- range $key, $value := .Values.agentGroups }}
{{- with $value }}
---
{{ if .clearmlConfig }}
apiVersion: v1
kind: Secret
metadata:
name: {{ .name }}-conf
data:
clearml.conf: {{ .clearmlConfig | b64enc }}
{{ end }}
{{- end }}
{{- end }}

View File

@@ -1,6 +1,19 @@
# global:
# imagePullSecrets:
# - docker-cfg
# -- Private image registry configuration
imageCredentials:
# -- Use private authentication mode
enabled: false
# -- If this is set, the chart will not generate a secret and will use the one defined here
existingSecret: ""
# -- Registry name
registry: docker.io
# -- Registry username
username: someone
# -- Registry password
password: pwd
# -- Email
email: someone@host.com
# -- ClearML generic configurations
clearml:
defaultCompany: "d1bd92a3b039400cbafc60a7a5b1e52b"
ingress:
@@ -67,7 +80,7 @@ apiserver:
image:
repository: "allegroai/clearml"
pullPolicy: IfNotPresent
tag: "1.4.0"
tag: "1.5.0"
extraEnvs: []
@@ -136,7 +149,7 @@ fileserver:
image:
repository: "allegroai/clearml"
pullPolicy: IfNotPresent
tag: "1.4.0"
tag: "1.5.0"
extraEnvs: []
@@ -181,7 +194,7 @@ webserver:
image:
repository: "allegroai/clearml"
pullPolicy: IfNotPresent
tag: "1.4.0"
tag: "1.5.0"
podAnnotations: {}
@@ -205,162 +218,6 @@ webserver:
additionalConfigs: {}
agentservices:
enabled: false
clearmlHostIp: null
agentVersion: ""
clearmlWebHost: null
clearmlFilesHost: null
clearmlGitUser: null
clearmlGitPassword: null
awsAccessKeyId: null
awsSecretAccessKey: null
awsDefaultRegion: null
azureStorageAccount: null
azureStorageKey: null
googleCredentials: null
clearmlWorkerId: "clearml-services"
replicaCount: 1
image:
repository: "allegroai/clearml-agent-services"
pullPolicy: IfNotPresent
tag: "latest"
extraEnvs: []
podAnnotations: {}
resources: {}
# We usually recommend not to specify default resources and to leave this as a conscious
# choice for the user. This also increases chances charts run on environments with little
# resources, such as Minikube. If you do want to specify resources, uncomment the following
# lines, adjust them as necessary, and remove the curly braces after 'resources:'.
# limits:
# cpu: 100m
# memory: 128Mi
# requests:
# cpu: 100m
# memory: 128Mi
nodeSelector: {}
tolerations: []
affinity: {}
storage:
data:
class: ""
size: 50Gi
agentGroups:
agent-group-cpu:
enabled: false
name: agent-group-cpu
replicaCount: 1
updateStrategy: Recreate
nvidiaGpusPerAgent: 0
agentVersion: "" # if set, it *MUST* include comparison operator (e.g. ">=0.16.1")
queues: "default" # multiple queues can be specified separated by a space (e.g. "important_jobs default")
clearmlGitUser: null
clearmlGitPassword: null
clearmlAccessKey: null
clearmlSecretKey: null
awsAccessKeyId: null
awsSecretAccessKey: null
awsDefaultRegion: null
azureStorageAccount: null
azureStorageKey: null
clearmlConfig: |-
sdk {
}
image:
repository: "ubuntu"
pullPolicy: IfNotPresent
tag: "18.04"
podAnnotations: {}
nodeSelector: {}
tolerations: []
affinity: {}
agent-group-gpu:
enabled: false
name: agent-group-gpu
replicaCount: 0
updateStrategy: Recreate
nvidiaGpusPerAgent: 1
agentVersion: "" # if set, it *MUST* include comparison operator (e.g. ">=0.16.1")
queues: "default" # multiple queues can be specified separated by a space (e.g. "important_jobs default")
clearmlGitUser: null
clearmlGitPassword: null
clearmlAccessKey: null
clearmlSecretKey: null
awsAccessKeyId: null
awsSecretAccessKey: null
awsDefaultRegion: null
azureStorageAccount: null
azureStorageKey: null
clearmlConfig: |-
sdk {
}
image:
repository: "nvidia/cuda"
pullPolicy: IfNotPresent
tag: "11.0-base-ubuntu18.04"
podAnnotations: {}
nodeSelector: {}
tolerations: []
affinity: {}
# This agent will spawn queued experiments in new pods, a good use case is to combine this with
# GPU autoscaling nodes.
# https://github.com/allegroai/clearml-agent/tree/master/docker/k8s-glue
agentk8sglue:
enabled: true
image:
repository: "allegroai/clearml-agent-k8s"
tag: "latest"
serviceAccountName: default
maxPods: 10
defaultDockerImage: nvidia/cuda:11.3.1-cudnn8-runtime-ubuntu20.04 # default docker image that is spawned as new pod
queue: default
id: k8s-agent
podTemplate:
volumes: []
# - name: "yourvolume"
# path: "/yourpath"
env: []
# # to setup access to private repo, setup secret with git credentials:
# - name: CLEARML_AGENT_GIT_USER
# value: mygitusername
# - name: CLEARML_AGENT_GIT_PASS
# valueFrom:
# secretKeyRef:
# name: git-password
# key: git-password
resources: {}
# limits:
# nvidia.com/gpu: 1
tolerations: []
# - key: "nvidia.com/gpu"
# operator: Exists
# effect: "NoSchedule"
nodeSelector: {}
# fleet: gpu-nodes
externalServices:
# -- Existing ElasticSearch Hostname to use if elasticsearch.enabled is false
elasticsearchHost: ""

View File

@@ -0,0 +1,2 @@
tests/
.pytest_cache/

View File

@@ -0,0 +1,12 @@
apiVersion: v1
appVersion: 7.16.2
description: Official Elastic helm chart for Elasticsearch
home: https://github.com/elastic/helm-charts
icon: https://helm.elastic.co/icons/elasticsearch.png
maintainers:
- email: helm-charts@elastic.co
name: Elastic
name: elasticsearch
sources:
- https://github.com/elastic/elasticsearch
version: 7.16.2

View File

@@ -0,0 +1 @@
include ../helpers/common.mk

View File

@@ -0,0 +1,457 @@
# Elasticsearch Helm Chart
[![Build Status](https://img.shields.io/jenkins/s/https/devops-ci.elastic.co/job/elastic+helm-charts+master.svg)](https://devops-ci.elastic.co/job/elastic+helm-charts+master/) [![Artifact HUB](https://img.shields.io/endpoint?url=https://artifacthub.io/badge/repository/elastic)](https://artifacthub.io/packages/search?repo=elastic)
This Helm chart is a lightweight way to configure and run our official
[Elasticsearch Docker image][].
<!-- development warning placeholder -->
<!-- START doctoc generated TOC please keep comment here to allow auto update -->
<!-- DON'T EDIT THIS SECTION, INSTEAD RE-RUN doctoc TO UPDATE -->
- [Requirements](#requirements)
- [Installing](#installing)
- [Install released version using Helm repository](#install-released-version-using-helm-repository)
- [Install development version from a branch](#install-development-version-from-a-branch)
- [Upgrading](#upgrading)
- [Usage notes](#usage-notes)
- [Configuration](#configuration)
- [Deprecated](#deprecated)
- [FAQ](#faq)
- [How to deploy this chart on a specific K8S distribution?](#how-to-deploy-this-chart-on-a-specific-k8s-distribution)
- [How to deploy dedicated nodes types?](#how-to-deploy-dedicated-nodes-types)
- [Clustering and Node Discovery](#clustering-and-node-discovery)
- [How to deploy clusters with security (authentication and TLS) enabled?](#how-to-deploy-clusters-with-security-authentication-and-tls-enabled)
- [How to migrate from helm/charts stable chart?](#how-to-migrate-from-helmcharts-stable-chart)
- [How to install plugins?](#how-to-install-plugins)
- [How to use the keystore?](#how-to-use-the-keystore)
- [Basic example](#basic-example)
- [Multiple keys](#multiple-keys)
- [Custom paths and keys](#custom-paths-and-keys)
- [How to enable snapshotting?](#how-to-enable-snapshotting)
- [How to configure templates post-deployment?](#how-to-configure-templates-post-deployment)
- [Contributing](#contributing)
<!-- END doctoc generated TOC please keep comment here to allow auto update -->
<!-- Use this to update TOC: -->
<!-- docker run --rm -it -v $(pwd):/usr/src jorgeandrada/doctoc --github -->
## Requirements
* Kubernetes >= 1.14
* [Helm][] >= 2.17.0
* Minimum cluster requirements include the following to run this chart with
default settings. All of these settings are configurable.
* Three Kubernetes nodes to respect the default "hard" affinity settings
* 1GB of RAM for the JVM heap
See [supported configurations][] for more details.
## Installing
This chart is tested with the latest 7.16.2 version.
### Install released version using Helm repository
* Add the Elastic Helm charts repo:
`helm repo add elastic https://helm.elastic.co`
* Install it:
- with Helm 3: `helm install elasticsearch --version <version> elastic/elasticsearch`
- with Helm 2 (deprecated): `helm install --name elasticsearch --version <version> elastic/elasticsearch`
### Install development version from a branch
* Clone the git repo: `git clone git@github.com:elastic/helm-charts.git`
* Checkout the branch: `git checkout 7.16`
* Install it:
- with Helm 3: `helm install elasticsearch ./helm-charts/elasticsearch --set imageTag=7.16.2`
- with Helm 2 (deprecated): `helm install --name elasticsearch ./helm-charts/elasticsearch --set imageTag=7.16.2`
## Upgrading
Please always check [CHANGELOG.md][] and [BREAKING_CHANGES.md][] before
upgrading to a new chart version.
## Usage notes
* This repo includes a number of [examples][] configurations which can be used
as a reference. They are also used in the automated testing of this chart.
* Automated testing of this chart is currently only run against GKE (Google
Kubernetes Engine).
* The chart deploys a StatefulSet and by default will do an automated rolling
update of your cluster. It does this by waiting for the cluster health to become
green after each instance is updated. If you prefer to update manually you can
set `OnDelete` [updateStrategy][].
* It is important to verify that the JVM heap size in `esJavaOpts` and to set
the CPU/Memory `resources` to something suitable for your cluster.
* To simplify chart and maintenance each set of node groups is deployed as a
separate Helm release. Take a look at the [multi][] example to get an idea for
how this works. Without doing this it isn't possible to resize persistent
volumes in a StatefulSet. By setting it up this way it makes it possible to add
more nodes with a new storage size then drain the old ones. It also solves the
problem of allowing the user to determine which node groups to update first when
doing upgrades or changes.
* We have designed this chart to be very un-opinionated about how to configure
Elasticsearch. It exposes ways to set environment variables and mount secrets
inside of the container. Doing this makes it much easier for this chart to
support multiple versions with minimal changes.
## Configuration
| Parameter | Description | Default |
|------------------------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|--------------------------------------------------|
| `antiAffinityTopologyKey` | The [anti-affinity][] topology key. By default this will prevent multiple Elasticsearch nodes from running on the same Kubernetes node | `kubernetes.io/hostname` |
| `antiAffinity` | Setting this to hard enforces the [anti-affinity][] rules. If it is set to soft it will be done "best effort". Other values will be ignored | `hard` |
| `clusterHealthCheckParams` | The [Elasticsearch cluster health status params][] that will be used by readiness [probe][] command | `wait_for_status=green&timeout=1s` |
| `clusterName` | This will be used as the Elasticsearch [cluster.name][] and should be unique per cluster in the namespace | `elasticsearch` |
| `clusterDeprecationIndexing` | Enable or disable deprecation logs to be indexed (should be disabled when deploying master only node groups) | `false` |
| `enableServiceLinks` | Set to false to disable service links, which can cause slow pod startup times when there are many services in the current namespace. | `true` |
| `envFrom` | Templatable string to be passed to the [environment from variables][] which will be appended to the `envFrom:` definition for the container | `[]` |
| `esConfig` | Allows you to add any config files in `/usr/share/elasticsearch/config/` such as `elasticsearch.yml` and `log4j2.properties`. See [values.yaml][] for an example of the formatting | `{}` |
| `esJavaOpts` | [Java options][] for Elasticsearch. This is where you could configure the [jvm heap size][] | `""` |
| `esMajorVersion` | Deprecated. Instead, use the version of the chart corresponding to your ES minor version. Used to set major version specific configuration. If you are using a custom image and not running the default Elasticsearch version you will need to set this to the version you are running (e.g. `esMajorVersion: 6`) | `""` |
| `extraContainers` | Templatable string of additional `containers` to be passed to the `tpl` function | `""` |
| `extraEnvs` | Extra [environment variables][] which will be appended to the `env:` definition for the container | `[]` |
| `extraInitContainers` | Templatable string of additional `initContainers` to be passed to the `tpl` function | `""` |
| `extraVolumeMounts` | Templatable string of additional `volumeMounts` to be passed to the `tpl` function | `""` |
| `extraVolumes` | Templatable string of additional `volumes` to be passed to the `tpl` function | `""` |
| `fullnameOverride` | Overrides the `clusterName` and `nodeGroup` when used in the naming of resources. This should only be used when using a single `nodeGroup`, otherwise you will have name conflicts | `""` |
| `healthNameOverride` | Overrides `test-elasticsearch-health` pod name | `""` |
| `hostAliases` | Configurable [hostAliases][] | `[]` |
| `httpPort` | The http port that Kubernetes will use for the healthchecks and the service. If you change this you will also need to set [http.port][] in `extraEnvs` | `9200` |
| `imagePullPolicy` | The Kubernetes [imagePullPolicy][] value | `IfNotPresent` |
| `imagePullSecrets` | Configuration for [imagePullSecrets][] so that you can use a private registry for your image | `[]` |
| `imageTag` | The Elasticsearch Docker image tag | `7.16.2` |
| `image` | The Elasticsearch Docker image | `docker.elastic.co/elasticsearch/elasticsearch` |
| `ingress` | Configurable [ingress][] to expose the Elasticsearch service. See [values.yaml][] for an example | see [values.yaml][] |
| `initResources` | Allows you to set the [resources][] for the `initContainer` in the StatefulSet | `{}` |
| `keystore` | Allows you to map Kubernetes secrets into the keystore. See the [config example][] and [how to use the keystore][] | `[]` |
| `labels` | Configurable [labels][] applied to all Elasticsearch pods | `{}` |
| `lifecycle` | Allows you to add [lifecycle hooks][]. See [values.yaml][] for an example of the formatting | `{}` |
| `masterService` | The service name used to connect to the masters. You only need to set this if your master `nodeGroup` is set to something other than `master`. See [Clustering and Node Discovery][] for more information | `""` |
| `maxUnavailable` | The [maxUnavailable][] value for the pod disruption budget. By default this will prevent Kubernetes from having more than 1 unhealthy pod in the node group | `1` |
| `minimumMasterNodes` | The value for [discovery.zen.minimum_master_nodes][]. Should be set to `(master_eligible_nodes / 2) + 1`. Ignored in Elasticsearch versions >= 7 | `2` |
| `nameOverride` | Overrides the `clusterName` when used in the naming of resources | `""` |
| `networkHost` | Value for the [network.host Elasticsearch setting][] | `0.0.0.0` |
| `networkPolicy` | The [NetworkPolicy](https://kubernetes.io/docs/concepts/services-networking/network-policies/) to set. See [`values.yaml`](./values.yaml) for an example | `{http.enabled: false, transport.enabled: false}` |
| `nodeAffinity` | Value for the [node affinity settings][] | `{}` |
| `nodeGroup` | This is the name that will be used for each group of nodes in the cluster. The name will be `clusterName-nodeGroup-X`, `nameOverride-nodeGroup-X` if a `nameOverride` is specified, and `fullnameOverride-X` if a `fullnameOverride` is specified | `master` |
| `nodeSelector` | Configurable [nodeSelector][] so that you can target specific nodes for your Elasticsearch cluster | `{}` |
| `persistence` | Enables a persistent volume for Elasticsearch data. Can be disabled for nodes that only have [roles][] which don't require persistent data | see [values.yaml][] |
| `podAnnotations` | Configurable [annotations][] applied to all Elasticsearch pods | `{}` |
| `podManagementPolicy` | By default Kubernetes [deploys StatefulSets serially][]. This deploys them in parallel so that they can discover each other | `Parallel` |
| `podSecurityContext` | Allows you to set the [securityContext][] for the pod | see [values.yaml][] |
| `podSecurityPolicy` | Configuration for creating a pod security policy with minimal permissions to run this Helm chart with `create: true`. Can also be used to reference an external pod security policy with `name: "externalPodSecurityPolicy"` | see [values.yaml][] |
| `priorityClassName` | The name of the [PriorityClass][]. No default is supplied as the PriorityClass must be created first | `""` |
| `protocol` | The protocol that will be used for the readiness [probe][]. Change this to `https` if you have `xpack.security.http.ssl.enabled` set | `http` |
| `rbac` | Configuration for creating a role, role binding and ServiceAccount as part of this Helm chart with `create: true`. Can also be used to reference an external ServiceAccount with `serviceAccountName: "externalServiceAccountName"`, or to automount the service account token | see [values.yaml][] |
| `readinessProbe` | Configuration fields for the readiness [probe][] | see [values.yaml][] |
| `replicas` | Kubernetes replica count for the StatefulSet (i.e. how many pods) | `3` |
| `resources` | Allows you to set the [resources][] for the StatefulSet | see [values.yaml][] |
| `roles` | A hash map with the specific [roles][] for the `nodeGroup` | see [values.yaml][] |
| `schedulerName` | Name of the [alternate scheduler][] | `""` |
| `secretMounts` | Allows you to easily mount a secret as a file inside the StatefulSet. Useful for mounting certificates and other secrets. See [values.yaml][] for an example | `[]` |
| `securityContext` | Allows you to set the [securityContext][] for the container | see [values.yaml][] |
| `service.annotations` | [LoadBalancer annotations][] that Kubernetes will use for the service. This will configure load balancer if `service.type` is `LoadBalancer` | `{}` |
| `service.enabled` | Enable non-headless service | `true` |
| `service.externalTrafficPolicy` | Some cloud providers allow you to specify the [LoadBalancer externalTrafficPolicy][]. Kubernetes will use this to preserve the client source IP. This will configure load balancer if `service.type` is `LoadBalancer` | `""` |
| `service.httpPortName` | The name of the http port within the service | `http` |
| `service.labelsHeadless` | Labels to be added to headless service | `{}` |
| `service.labels` | Labels to be added to non-headless service | `{}` |
| `service.loadBalancerIP` | Some cloud providers allow you to specify the [loadBalancer][] IP. If the `loadBalancerIP` field is not specified, the IP is dynamically assigned. If you specify a `loadBalancerIP` but your cloud provider does not support the feature, it is ignored. | `""` |
| `service.loadBalancerSourceRanges` | The IP ranges that are allowed to access the service | `[]` |
| `service.nodePort` | Custom [nodePort][] port that can be set if you are using `service.type: nodePort` | `""` |
| `service.transportPortName` | The name of the transport port within the service | `transport` |
| `service.type` | Elasticsearch [Service Types][] | `ClusterIP` |
| `sysctlInitContainer` | Allows you to disable the `sysctlInitContainer` if you are setting [sysctl vm.max_map_count][] with another method | `enabled: true` |
| `sysctlVmMaxMapCount` | Sets the [sysctl vm.max_map_count][] needed for Elasticsearch | `262144` |
| `terminationGracePeriod` | The [terminationGracePeriod][] in seconds used when trying to stop the pod | `120` |
| `tests.enabled` | Enable creating test related resources when running `helm template` or `helm test` | `true` |
| `tolerations` | Configurable [tolerations][] | `[]` |
| `transportPort` | The transport port that Kubernetes will use for the service. If you change this you will also need to set [transport port configuration][] in `extraEnvs` | `9300` |
| `updateStrategy` | The [updateStrategy][] for the StatefulSet. By default Kubernetes will wait for the cluster to be green after upgrading each pod. Setting this to `OnDelete` will allow you to manually delete each pod during upgrades | `RollingUpdate` |
| `volumeClaimTemplate` | Configuration for the [volumeClaimTemplate for StatefulSets][]. You will want to adjust the storage (default `30Gi` ) and the `storageClassName` if you are using a different storage class | see [values.yaml][] |
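For illustration, here is a minimal sketch of a custom values file combining several of the parameters above; the cluster name and storage class are assumptions to adapt to your environment:
```yaml
---
clusterName: "logging"          # assumption: pick your own cluster name
replicas: 3
esJavaOpts: "-Xmx1g -Xms1g"
resources:
  requests:
    cpu: "1000m"
    memory: "2Gi"
  limits:
    cpu: "1000m"
    memory: "2Gi"
volumeClaimTemplate:
  accessModes: [ "ReadWriteOnce" ]
  storageClassName: "standard"  # assumption: use a storage class that exists in your cluster
  resources:
    requests:
      storage: 30Gi
```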
### Deprecated
| Parameter | Description | Default |
|-----------|---------------------------------------------------------------------------------------------------------------|---------|
| `fsGroup` | The Group ID (GID) for [securityContext][] so that the Elasticsearch user can read from the persistent volume | `""` |
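If you are migrating away from the deprecated `fsGroup` value, the equivalent setting now lives under `podSecurityContext` (the values shown are the chart defaults):
```yaml
podSecurityContext:
  fsGroup: 1000
  runAsUser: 1000
```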
## FAQ
### How to deploy this chart on a specific K8S distribution?
This chart is designed to run on production scale Kubernetes clusters with
multiple nodes, lots of memory and persistent storage. For that reason it can be
a bit tricky to run it against local Kubernetes environments such as
[Minikube][].
This chart is highly tested with [GKE][], but some K8S distributions also
require specific configurations.
We provide examples of configuration for the following K8S providers:
- [Docker for Mac][]
- [KIND][]
- [Minikube][]
- [MicroK8S][]
- [OpenShift][]
### How to deploy dedicated node types?
All the Elasticsearch pods deployed share the same configuration. If you need to
deploy dedicated [node types][nodes types] (for example dedicated master and
data nodes), you can deploy multiple releases of this chart with different
configurations while they share the same `clusterName` value.
For each Helm release, the node types can then be defined using the `roles`
value, as shown in the sketch below.
An example of an Elasticsearch cluster using 2 different Helm releases for master
and data nodes can be found in [examples/multi][].
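As a minimal sketch of that layout (cluster and file names are illustrative),
two values files sharing the same `clusterName` could look like:
```yaml
# master.yaml
clusterName: "multi"
nodeGroup: "master"
roles:
  master: "true"
  ingest: "false"
  data: "false"

# data.yaml
clusterName: "multi"
nodeGroup: "data"
roles:
  master: "false"
  ingest: "true"
  data: "true"
```
Each file is then installed as its own Helm release, e.g.
`helm install multi-master elastic/elasticsearch --values master.yaml` and
`helm install multi-data elastic/elasticsearch --values data.yaml`.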
#### Clustering and Node Discovery
This chart facilitates Elasticsearch node discovery and services by creating two
`Service` definitions in Kubernetes, one with the name `$clusterName-$nodeGroup`
and another named `$clusterName-$nodeGroup-headless`.
Only `Ready` pods are a part of the `$clusterName-$nodeGroup` service, while all
pods (`Ready` or not) are a part of `$clusterName-$nodeGroup-headless`.
If your group of master nodes has the default `nodeGroup: master` then you can
just add new groups of nodes with a different `nodeGroup` and they will
automatically discover the correct master. If your master nodes have a different
`nodeGroup` name then you will need to set `masterService` to
`$clusterName-$masterNodeGroup`.
The chart value for `masterService` is used to populate
`discovery.zen.ping.unicast.hosts`, which Elasticsearch nodes will use to
contact master nodes and form a cluster.
Therefore, to add a group of nodes to an existing cluster, setting
`masterService` to the desired `Service` name of the related cluster is
sufficient.
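For example (names are illustrative), a group of data nodes joining a cluster
whose masters were deployed with `nodeGroup: main` would set:
```yaml
clusterName: "mycluster"
nodeGroup: "data"
masterService: "mycluster-main"  # $clusterName-$masterNodeGroup
roles:
  master: "false"
  ingest: "true"
  data: "true"
```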
### How to deploy clusters with security (authentication and TLS) enabled?
This Helm chart can use existing [Kubernetes secrets][] to set up
credentials or certificates, for example. These secrets should be created
outside of this chart and accessed using [environment variables][] and volumes;
a brief sketch follows below.
An example of an Elasticsearch cluster using security can be found in
[examples/security][].
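As a sketch (the secret names are assumptions and must be created beforehand,
e.g. with `kubectl create secret generic`), credentials and certificates can be
wired in like this:
```yaml
extraEnvs:
  - name: ELASTIC_PASSWORD
    valueFrom:
      secretKeyRef:
        name: elastic-credentials     # assumed pre-existing secret
        key: password
secretMounts:
  - name: elastic-certificates
    secretName: elastic-certificates  # assumed pre-existing secret
    path: /usr/share/elasticsearch/config/certs
```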
### How to migrate from helm/charts stable chart?
If you currently have a cluster deployed with the [helm/charts stable][] chart
you can follow the [migration guide][].
### How to install plugins?
The recommended way to install plugins into our Docker images is to create a
[custom Docker image][].
The Dockerfile would look something like:
```
ARG elasticsearch_version
FROM docker.elastic.co/elasticsearch/elasticsearch:${elasticsearch_version}
RUN bin/elasticsearch-plugin install --batch repository-gcs
```
Then update the `image` value in your values file to point to your custom image.
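For example, assuming the custom image was pushed to a hypothetical registry of
your own:
```yaml
image: "myregistry.example.com/elasticsearch-with-plugins"  # hypothetical image location
imageTag: "7.16.2"
```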
There are a couple of reasons we recommend this.
1. Tying the availability of Elasticsearch to the download service used to
install plugins is not a great idea or something that we recommend, especially
in Kubernetes, where it is normal and expected for a container to be moved to
another host at random times.
2. Mutating the state of a running Docker image (by installing plugins) goes
against best practices of containers and immutable infrastructure.
### How to use the keystore?
#### Basic example
Create the secret; the key name needs to be the keystore key path. In this
example we will create a secret from a file and from a literal string.
```
kubectl create secret generic encryption-key --from-file=xpack.watcher.encryption_key=./watcher_encryption_key
kubectl create secret generic slack-hook --from-literal=xpack.notification.slack.account.monitoring.secure_url='https://hooks.slack.com/services/asdasdasd/asdasdas/asdasd'
```
To add these secrets to the keystore:
```
keystore:
- secretName: encryption-key
- secretName: slack-hook
```
#### Multiple keys
All keys in the secret will be added to the keystore. To create the previous
example in one secret you could also do:
```
kubectl create secret generic keystore-secrets --from-file=xpack.watcher.encryption_key=./watcher_encryption_key --from-literal=xpack.notification.slack.account.monitoring.secure_url='https://hooks.slack.com/services/asdasdasd/asdasdas/asdasd'
```
```
keystore:
- secretName: keystore-secrets
```
#### Custom paths and keys
If you are using these secrets for other applications (besides the Elasticsearch
keystore) then it is also possible to specify the keystore path and which keys
you want to add. Everything specified under each `keystore` item will be passed
through to the `volumeMounts` section for mounting the [secret][]. In this
example we will only add the `slack_hook` key from a secret that also has other
keys. Our secret looks like this:
```
kubectl create secret generic slack-secrets --from-literal=slack_channel='#general' --from-literal=slack_hook='https://hooks.slack.com/services/asdasdasd/asdasdas/asdasd'
```
We only want to add the `slack_hook` key to the keystore at path
`xpack.notification.slack.account.monitoring.secure_url`:
```
keystore:
- secretName: slack-secrets
items:
- key: slack_hook
path: xpack.notification.slack.account.monitoring.secure_url
```
You can also take a look at the [config example][] which is used as part of the
automated testing pipeline.
### How to enable snapshotting?
1. Install your [snapshot plugin][] into a custom Docker image following the
[how to install plugins guide][].
2. Add any required secrets or credentials into an Elasticsearch keystore
following the [how to use the keystore][] guide.
3. Configure the [snapshot repository][] as you normally would.
4. To automate snapshots you can use [Snapshot Lifecycle Management][] or a tool
like [curator][]; a minimal SLM sketch follows below.
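As an illustrative sketch of step 4 (the policy name, schedule, and
`my_repository` repository are assumptions, and the repository must already be
registered), an SLM policy can be created with:
```
curl -XPUT "http://localhost:9200/_slm/policy/nightly-snapshots" -H 'Content-Type: application/json' -d'
{
  "schedule": "0 30 1 * * ?",
  "name": "<nightly-snap-{now/d}>",
  "repository": "my_repository",
  "config": { "indices": ["*"] },
  "retention": { "expire_after": "30d" }
}'
```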
### How to configure templates post-deployment?
You can use `postStart` [lifecycle hooks][] to run code triggered after a
container is created.
Here is an example of a `postStart` hook to configure templates:
```yaml
lifecycle:
postStart:
exec:
command:
- bash
- -c
- |
#!/bin/bash
# Add a template to adjust number of shards/replicas
TEMPLATE_NAME=my_template
INDEX_PATTERN="logstash-*"
SHARD_COUNT=8
REPLICA_COUNT=1
ES_URL=http://localhost:9200
while [[ "$(curl -s -o /dev/null -w '%{http_code}\n' $ES_URL)" != "200" ]]; do sleep 1; done
curl -XPUT "$ES_URL/_template/$TEMPLATE_NAME" -H 'Content-Type: application/json' -d'{"index_patterns":['\""$INDEX_PATTERN"\"'],"settings":{"number_of_shards":'$SHARD_COUNT',"number_of_replicas":'$REPLICA_COUNT'}}'
```
## Contributing
Please check [CONTRIBUTING.md][] before any contribution or for any questions
about our development and testing process.
[7.16]: https://github.com/elastic/helm-charts/releases
[#63]: https://github.com/elastic/helm-charts/issues/63
[BREAKING_CHANGES.md]: https://github.com/elastic/helm-charts/blob/master/BREAKING_CHANGES.md
[CHANGELOG.md]: https://github.com/elastic/helm-charts/blob/master/CHANGELOG.md
[CONTRIBUTING.md]: https://github.com/elastic/helm-charts/blob/master/CONTRIBUTING.md
[alternate scheduler]: https://kubernetes.io/docs/tasks/administer-cluster/configure-multiple-schedulers/#specify-schedulers-for-pods
[annotations]: https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/
[anti-affinity]: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
[cluster.name]: https://www.elastic.co/guide/en/elasticsearch/reference/7.16/important-settings.html#cluster-name
[clustering and node discovery]: https://github.com/elastic/helm-charts/tree/7.16/elasticsearch/README.md#clustering-and-node-discovery
[config example]: https://github.com/elastic/helm-charts/tree/7.16/elasticsearch/examples/config/values.yaml
[curator]: https://www.elastic.co/guide/en/elasticsearch/client/curator/7.9/snapshot.html
[custom docker image]: https://www.elastic.co/guide/en/elasticsearch/reference/7.16/docker.html#_c_customized_image
[deploys statefulsets serially]: https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#pod-management-policies
[discovery.zen.minimum_master_nodes]: https://www.elastic.co/guide/en/elasticsearch/reference/7.16/discovery-settings.html#minimum_master_nodes
[docker for mac]: https://github.com/elastic/helm-charts/tree/7.16/elasticsearch/examples/docker-for-mac
[elasticsearch cluster health status params]: https://www.elastic.co/guide/en/elasticsearch/reference/7.16/cluster-health.html#request-params
[elasticsearch docker image]: https://www.elastic.co/guide/en/elasticsearch/reference/7.16/docker.html
[environment variables]: https://kubernetes.io/docs/tasks/inject-data-application/define-environment-variable-container/#using-environment-variables-inside-of-your-config
[environment from variables]: https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/#configure-all-key-value-pairs-in-a-configmap-as-container-environment-variables
[examples]: https://github.com/elastic/helm-charts/tree/7.16/elasticsearch/examples/
[examples/multi]: https://github.com/elastic/helm-charts/tree/7.16/elasticsearch/examples/multi
[examples/security]: https://github.com/elastic/helm-charts/tree/7.16/elasticsearch/examples/security
[gke]: https://cloud.google.com/kubernetes-engine
[helm]: https://helm.sh
[helm/charts stable]: https://github.com/helm/charts/tree/master/stable/elasticsearch/
[how to install plugins guide]: https://github.com/elastic/helm-charts/tree/7.16/elasticsearch/README.md#how-to-install-plugins
[how to use the keystore]: https://github.com/elastic/helm-charts/tree/7.16/elasticsearch/README.md#how-to-use-the-keystore
[http.port]: https://www.elastic.co/guide/en/elasticsearch/reference/7.16/modules-http.html#_settings
[imagePullPolicy]: https://kubernetes.io/docs/concepts/containers/images/#updating-images
[imagePullSecrets]: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/#create-a-pod-that-uses-your-secret
[ingress]: https://kubernetes.io/docs/concepts/services-networking/ingress/
[java options]: https://www.elastic.co/guide/en/elasticsearch/reference/7.16/jvm-options.html
[jvm heap size]: https://www.elastic.co/guide/en/elasticsearch/reference/7.16/heap-size.html
[hostAliases]: https://kubernetes.io/docs/concepts/services-networking/add-entries-to-pod-etc-hosts-with-host-aliases/
[kind]: https://github.com/elastic/helm-charts/tree/7.16/elasticsearch/examples/kubernetes-kind
[kubernetes secrets]: https://kubernetes.io/docs/concepts/configuration/secret/
[labels]: https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/
[lifecycle hooks]: https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/
[loadBalancer annotations]: https://kubernetes.io/docs/concepts/services-networking/service/#ssl-support-on-aws
[loadBalancer externalTrafficPolicy]: https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/#preserving-the-client-source-ip
[loadBalancer]: https://kubernetes.io/docs/concepts/services-networking/service/#loadbalancer
[maxUnavailable]: https://kubernetes.io/docs/tasks/run-application/configure-pdb/#specifying-a-poddisruptionbudget
[migration guide]: https://github.com/elastic/helm-charts/tree/7.16/elasticsearch/examples/migration/README.md
[minikube]: https://github.com/elastic/helm-charts/tree/7.16/elasticsearch/examples/minikube
[microk8s]: https://github.com/elastic/helm-charts/tree/7.16/elasticsearch/examples/microk8s
[multi]: https://github.com/elastic/helm-charts/tree/7.16/elasticsearch/examples/multi/
[network.host elasticsearch setting]: https://www.elastic.co/guide/en/elasticsearch/reference/7.16/network.host.html
[node affinity settings]: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#node-affinity-beta-feature
[node-certificates]: https://www.elastic.co/guide/en/elasticsearch/reference/7.16/configuring-tls.html#node-certificates
[nodePort]: https://kubernetes.io/docs/concepts/services-networking/service/#nodeport
[nodes types]: https://www.elastic.co/guide/en/elasticsearch/reference/7.16/modules-node.html
[nodeSelector]: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#nodeselector
[openshift]: https://github.com/elastic/helm-charts/tree/7.16/elasticsearch/examples/openshift
[priorityClass]: https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/#priorityclass
[probe]: https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/
[resources]: https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/
[roles]: https://www.elastic.co/guide/en/elasticsearch/reference/7.16/modules-node.html
[secret]: https://kubernetes.io/docs/concepts/configuration/secret/#using-secrets
[securityContext]: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/
[service types]: https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types
[snapshot lifecycle management]: https://www.elastic.co/guide/en/elasticsearch/reference/7.16/snapshot-lifecycle-management.html
[snapshot plugin]: https://www.elastic.co/guide/en/elasticsearch/plugins/7.16/repository.html
[snapshot repository]: https://www.elastic.co/guide/en/elasticsearch/reference/7.16/modules-snapshots.html
[supported configurations]: https://github.com/elastic/helm-charts/tree/7.16/README.md#supported-configurations
[sysctl vm.max_map_count]: https://www.elastic.co/guide/en/elasticsearch/reference/7.16/vm-max-map-count.html#vm-max-map-count
[terminationGracePeriod]: https://kubernetes.io/docs/concepts/workloads/pods/pod/#termination-of-pods
[tolerations]: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
[transport port configuration]: https://www.elastic.co/guide/en/elasticsearch/reference/7.16/modules-transport.html#_transport_settings
[updateStrategy]: https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/
[values.yaml]: https://github.com/elastic/helm-charts/tree/7.16/elasticsearch/values.yaml
[volumeClaimTemplate for statefulsets]: https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#stable-storage

View File

@@ -0,0 +1,21 @@
default: test
include ../../../helpers/examples.mk
RELEASE := helm-es-config
TIMEOUT := 1200s
install:
helm upgrade --wait --timeout=$(TIMEOUT) --install --values values.yaml $(RELEASE) ../../
secrets:
kubectl delete secret elastic-config-credentials elastic-config-secret elastic-config-slack elastic-config-custom-path || true
kubectl create secret generic elastic-config-credentials --from-literal=password=changeme --from-literal=username=elastic
kubectl create secret generic elastic-config-slack --from-literal=xpack.notification.slack.account.monitoring.secure_url='https://hooks.slack.com/services/asdasdasd/asdasdas/asdasd'
kubectl create secret generic elastic-config-secret --from-file=xpack.watcher.encryption_key=./watcher_encryption_key
kubectl create secret generic elastic-config-custom-path --from-literal=slack_url='https://hooks.slack.com/services/asdasdasd/asdasdas/asdasd' --from-literal=thing_i_don_tcare_about=test
test: secrets install goss
purge:
helm del $(RELEASE)

View File

@@ -0,0 +1,27 @@
# Config
This example deploys a single-node Elasticsearch 7.16.2 cluster with
authentication and custom [values][].
## Usage
* Create the required secrets: `make secrets`
* Deploy Elasticsearch chart with the default values: `make install`
* You can now set up a port forward to query the Elasticsearch API:
```
kubectl port-forward svc/config-master 9200
curl -u elastic:changeme http://localhost:9200/_cat/indices
```
## Testing
You can also run [goss integration tests][] using `make test`
[goss integration tests]: https://github.com/elastic/helm-charts/tree/7.16/elasticsearch/examples/config/test/goss.yaml
[values]: https://github.com/elastic/helm-charts/tree/7.16/elasticsearch/examples/config/values.yaml

View File

@@ -0,0 +1,29 @@
http:
http://localhost:9200/_cluster/health:
status: 200
timeout: 2000
username: elastic
password: "{{ .Env.ELASTIC_PASSWORD }}"
body:
- "green"
- '"number_of_nodes":1'
- '"number_of_data_nodes":1'
http://localhost:9200:
status: 200
timeout: 2000
username: elastic
password: "{{ .Env.ELASTIC_PASSWORD }}"
body:
- '"cluster_name" : "config"'
- "You Know, for Search"
command:
"elasticsearch-keystore list":
exit-status: 0
stdout:
- keystore.seed
- bootstrap.password
- xpack.notification.slack.account.monitoring.secure_url
- xpack.notification.slack.account.otheraccount.secure_url
- xpack.watcher.encryption_key

View File

@@ -0,0 +1,27 @@
---
clusterName: "config"
replicas: 1
extraEnvs:
- name: ELASTIC_PASSWORD
valueFrom:
secretKeyRef:
name: elastic-config-credentials
key: password
# This is just a dummy file to make sure that
# the keystore can be mounted at the same time
# as a custom elasticsearch.yml
esConfig:
elasticsearch.yml: |
xpack.security.enabled: true
path.data: /usr/share/elasticsearch/data
keystore:
- secretName: elastic-config-secret
- secretName: elastic-config-slack
- secretName: elastic-config-custom-path
items:
- key: slack_url
path: xpack.notification.slack.account.otheraccount.secure_url

View File

@@ -0,0 +1 @@
supersecret

View File

@@ -0,0 +1,14 @@
default: test
include ../../../helpers/examples.mk
RELEASE := helm-es-default
TIMEOUT := 1200s
install:
helm upgrade --wait --timeout=$(TIMEOUT) --install $(RELEASE) ../../
test: install goss
purge:
helm del $(RELEASE)

View File

@@ -0,0 +1,25 @@
# Default
This example deploys a 3-node Elasticsearch 7.16.2 cluster using
[default values][].
## Usage
* Deploy Elasticsearch chart with the default values: `make install`
* You can now set up a port forward to query the Elasticsearch API:
```
kubectl port-forward svc/elasticsearch-master 9200
curl localhost:9200/_cat/indices
```
## Testing
You can also run [goss integration tests][] using `make test`
[goss integration tests]: https://github.com/elastic/helm-charts/tree/7.16/elasticsearch/examples/default/test/goss.yaml
[default values]: https://github.com/elastic/helm-charts/tree/7.16/elasticsearch/values.yaml

View File

@@ -0,0 +1,19 @@
#!/usr/bin/env bash
set -x
kubectl proxy || true &
make &
PROC_ID=$!
while kill -0 "$PROC_ID" >/dev/null 2>&1; do
echo "PROCESS IS RUNNING"
if curl --fail 'http://localhost:8001/api/v1/proxy/namespaces/default/services/elasticsearch-master:9200/_search' ; then
echo "cluster is healthy"
else
echo "cluster not healthy!"
exit 1
fi
sleep 1
done
echo "PROCESS TERMINATED"
exit 0

View File

@@ -0,0 +1,38 @@
kernel-param:
vm.max_map_count:
value: "262144"
http:
http://elasticsearch-master:9200/_cluster/health:
status: 200
timeout: 2000
body:
- "green"
- '"number_of_nodes":3'
- '"number_of_data_nodes":3'
http://localhost:9200:
status: 200
timeout: 2000
body:
- '"number" : "7.16.2"'
- '"cluster_name" : "elasticsearch"'
- "You Know, for Search"
file:
/usr/share/elasticsearch/data:
exists: true
mode: "2775"
owner: root
group: elasticsearch
filetype: directory
mount:
/usr/share/elasticsearch/data:
exists: true
user:
elasticsearch:
exists: true
uid: 1000
gid: 1000

View File

@@ -0,0 +1,13 @@
default: test
RELEASE := helm-es-docker-for-mac
TIMEOUT := 1200s
install:
helm upgrade --wait --timeout=$(TIMEOUT) --install --values values.yaml $(RELEASE) ../../
test: install
helm test $(RELEASE)
purge:
helm del $(RELEASE)

View File

@@ -0,0 +1,23 @@
# Docker for Mac
This example deploys a 3-node Elasticsearch 7.16.2 cluster on [Docker for Mac][]
using [custom values][].
Note that this configuration should be used for testing only and isn't
recommended for production.
## Usage
* Deploy Elasticsearch chart with the default values: `make install`
* You can now set up a port forward to query the Elasticsearch API:
```
kubectl port-forward svc/elasticsearch-master 9200
curl localhost:9200/_cat/indices
```
[custom values]: https://github.com/elastic/helm-charts/tree/7.16/elasticsearch/examples/docker-for-mac/values.yaml
[docker for mac]: https://docs.docker.com/docker-for-mac/kubernetes/

View File

@@ -0,0 +1,23 @@
---
# Permit co-located instances for solitary minikube virtual machines.
antiAffinity: "soft"
# Shrink default JVM heap.
esJavaOpts: "-Xmx128m -Xms128m"
# Allocate smaller chunks of memory per pod.
resources:
requests:
cpu: "100m"
memory: "512M"
limits:
cpu: "1000m"
memory: "512M"
# Request smaller persistent volumes.
volumeClaimTemplate:
accessModes: [ "ReadWriteOnce" ]
storageClassName: "hostpath"
resources:
requests:
storage: 100M

View File

@@ -0,0 +1,17 @@
default: test
RELEASE := helm-es-kind
TIMEOUT := 1200s
install:
helm upgrade --wait --timeout=$(TIMEOUT) --install --values values.yaml $(RELEASE) ../../
install-local-path:
kubectl apply -f https://raw.githubusercontent.com/rancher/local-path-provisioner/master/deploy/local-path-storage.yaml
helm upgrade --wait --timeout=$(TIMEOUT) --install --values values-local-path.yaml $(RELEASE) ../../
test: install
helm test $(RELEASE)
purge:
helm del $(RELEASE)

View File

@@ -0,0 +1,36 @@
# KIND
This example deploys a 3-node Elasticsearch 7.16.2 cluster on [Kind][]
using [custom values][].
Note that this configuration should be used for testing only and isn't
recommended for production.
Note that Kind versions < 0.7.0 are affected by a [kind issue][] where mount
points created from PVCs are not writable by non-root users.
[kubernetes-sigs/kind#1157][] fixes this in Kind 0.7.0.
The workaround for Kind < 0.7.0 is to manually install the
[Rancher Local Path Provisioner][] and use the `local-path` storage class for
Elasticsearch volumes (see [Makefile][] instructions).
## Usage
* For Kind >= 0.7.0: Deploy Elasticsearch chart with the default values: `make install`
* For Kind < 0.7.0: Deploy Elasticsearch chart with `local-path` storage class: `make install-local-path`
* You can now set up a port forward to query the Elasticsearch API:
```
kubectl port-forward svc/elasticsearch-master 9200
curl localhost:9200/_cat/indices
```
[custom values]: https://github.com/elastic/helm-charts/blob/7.16/elasticsearch/examples/kubernetes-kind/values.yaml
[kind]: https://kind.sigs.k8s.io/
[kind issue]: https://github.com/kubernetes-sigs/kind/issues/830
[kubernetes-sigs/kind#1157]: https://github.com/kubernetes-sigs/kind/pull/1157
[rancher local path provisioner]: https://github.com/rancher/local-path-provisioner
[Makefile]: https://github.com/elastic/helm-charts/blob/7.16/elasticsearch/examples/kubernetes-kind/Makefile#L5

View File

@@ -0,0 +1,23 @@
---
# Permit co-located instances for solitary minikube virtual machines.
antiAffinity: "soft"
# Shrink default JVM heap.
esJavaOpts: "-Xmx128m -Xms128m"
# Allocate smaller chunks of memory per pod.
resources:
requests:
cpu: "100m"
memory: "512M"
limits:
cpu: "1000m"
memory: "512M"
# Request smaller persistent volumes.
volumeClaimTemplate:
accessModes: [ "ReadWriteOnce" ]
storageClassName: "local-path"
resources:
requests:
storage: 100M

View File

@@ -0,0 +1,23 @@
---
# Permit co-located instances for solitary minikube virtual machines.
antiAffinity: "soft"
# Shrink default JVM heap.
esJavaOpts: "-Xmx128m -Xms128m"
# Allocate smaller chunks of memory per pod.
resources:
requests:
cpu: "100m"
memory: "512M"
limits:
cpu: "1000m"
memory: "512M"
# Request smaller persistent volumes.
volumeClaimTemplate:
accessModes: [ "ReadWriteOnce" ]
storageClassName: "local-path"
resources:
requests:
storage: 100M

View File

@@ -0,0 +1,13 @@
default: test
RELEASE := helm-es-microk8s
TIMEOUT := 1200s
install:
helm upgrade --wait --timeout=$(TIMEOUT) --install --values values.yaml $(RELEASE) ../../
test: install
helm test $(RELEASE)
purge:
helm del $(RELEASE)

View File

@@ -0,0 +1,32 @@
# MicroK8S
This example deploys a 3-node Elasticsearch 7.16.2 cluster on [MicroK8S][]
using [custom values][].
Note that this configuration should be used for testing only and isn't
recommended for production.
## Requirements
The following MicroK8S [addons][] need to be enabled:
- `dns`
- `helm`
- `storage`
## Usage
* Deploy Elasticsearch chart with the default values: `make install`
* You can now set up a port forward to query the Elasticsearch API:
```
kubectl port-forward svc/elasticsearch-master 9200
curl localhost:9200/_cat/indices
```
[addons]: https://microk8s.io/docs/addons
[custom values]: https://github.com/elastic/helm-charts/tree/7.16/elasticsearch/examples/microk8s/values.yaml
[MicroK8S]: https://microk8s.io

View File

@@ -0,0 +1,32 @@
---
# Disable privileged init Container creation.
sysctlInitContainer:
enabled: false
# Restrict the use of the memory-mapping when sysctlInitContainer is disabled.
esConfig:
elasticsearch.yml: |
node.store.allow_mmap: false
# Permit co-located instances for solitary minikube virtual machines.
antiAffinity: "soft"
# Shrink default JVM heap.
esJavaOpts: "-Xmx128m -Xms128m"
# Allocate smaller chunks of memory per pod.
resources:
requests:
cpu: "100m"
memory: "512M"
limits:
cpu: "1000m"
memory: "512M"
# Request smaller persistent volumes.
volumeClaimTemplate:
accessModes: [ "ReadWriteOnce" ]
storageClassName: "microk8s-hostpath"
resources:
requests:
storage: 100M

View File

@@ -0,0 +1,10 @@
PREFIX := helm-es-migration
data:
helm upgrade --wait --timeout=$(TIMEOUT) --install --values data.yaml $(PREFIX)-data ../../
master:
helm upgrade --wait --timeout=$(TIMEOUT) --install --values master.yaml $(PREFIX)-master ../../
client:
helm upgrade --wait --timeout=$(TIMEOUT) --install --values client.yaml $(PREFIX)-client ../../

View File

@@ -0,0 +1,167 @@
# Migration Guide from helm/charts
There are two viable options for migrating from the community Elasticsearch Helm
chart in the [helm/charts][] repo.
1. Restoring from Snapshot to a fresh cluster
2. Live migration by joining a new cluster to the existing cluster.
## Restoring from Snapshot
This is the recommended and preferred option. The downside is that it will
involve a period of write downtime during the migration. If you have a way to
temporarily stop writes to your cluster then this is the way to go. This is also
a lot simpler as it just involves launching a fresh cluster and restoring a
snapshot following the [restoring to a different cluster guide][].
## Live migration
If restoring from a snapshot is not possible due to the write downtime then a
live migration is also possible. It is very important to first test this in a
testing environment to make sure you are comfortable with the process and fully
understand what is happening.
This process will involve joining a new set of master, data and client nodes to
an existing cluster that has been deployed using the [helm/charts][] community
chart. Nodes will then be replaced one by one in a controlled fashion to
decommission the old cluster.
This example will be using the default values for the existing helm/charts
release and for the Elastic helm-charts release. If you have changed any of the
default values then you will need to first make sure that your values are
configured in a compatible way before starting the migration.
The process will involve a re-sync and a rolling restart of all of your data
nodes. Therefore it is important to disable shard allocation and perform a synced
flush like you normally would during any other rolling upgrade. See the
[rolling upgrades guide][] for more information.
* The default image for this chart is
`docker.elastic.co/elasticsearch/elasticsearch` which contains the default
distribution of Elasticsearch with a [basic license][]. Make sure to update the
`image` and `imageTag` values to the correct Docker image and Elasticsearch
version that you currently have deployed.
* Convert your current helm/charts configuration into something that is
compatible with this chart.
* Take a fresh snapshot of your cluster. If something goes wrong you want to be
able to restore your data no matter what.
* Check that your cluster's health is green. If it is not, abort and make sure
your cluster is healthy before continuing:
```
curl localhost:9200/_cluster/health
```
* Deploy new data nodes which will join the existing cluster. Take a look at the
configuration in [data.yaml][]:
```
make data
```
* Check that the new nodes have joined the cluster (run this and any other curl
commands from within one of your pods):
```
curl localhost:9200/_cat/nodes
```
* Check that your cluster is still green. If so, we can now start to scale down
the existing data nodes. Assuming you have the default number of data nodes (2),
we now want to scale them down to 1:
```
kubectl scale statefulsets my-release-elasticsearch-data --replicas=1
```
* Wait for your cluster to become green again:
```
watch 'curl -s localhost:9200/_cluster/health'
```
* Once the cluster is green we can scale down again:
```
kubectl scale statefulsets my-release-elasticsearch-data --replicas=0
```
* Wait for the cluster to be green again.
* OK. We now have all data nodes running in the new cluster. Time to replace the
masters, by first scaling down the masters from 3 to 2. Between each step make
sure to wait for the cluster to become green again, and check with
`curl localhost:9200/_cat/nodes` that you see the correct number of master
nodes. During this process we will always make sure to keep at least 2 master
nodes so as not to lose quorum:
```
kubectl scale statefulsets my-release-elasticsearch-master --replicas=2
```
* Now deploy a single new master so that we have 3 masters again. See
[master.yaml][] for the configuration:
```
make master
```
* Scale down old masters to 1:
```
kubectl scale statefulsets my-release-elasticsearch-master --replicas=1
```
* Edit the replicas in [master.yaml][] to 2 and redeploy:
```
make master
```
* Scale down the old masters to 0:
```
kubectl scale statefulsets my-release-elasticsearch-master --replicas=0
```
* Edit [master.yaml][] to have 3 replicas and remove the
`discovery.zen.ping.unicast.hosts` entry from `extraEnvs`, then redeploy the
masters. This will make sure all 3 masters are running in the new cluster and
are pointing at each other for discovery:
```
make master
```
* Remove the `discovery.zen.ping.unicast.hosts` entry from `extraEnvs` then
redeploy the data nodes to make sure they are pointing at the new masters:
```
make data
```
* Deploy the client nodes:
```
make client
```
* Update any processes that are talking to the existing client nodes and point
them to the new client nodes. Once this is done you can scale down the old
client nodes:
```
kubectl scale deployment my-release-elasticsearch-client --replicas=0
```
* The migration should now be complete. After verifying that everything is
working correctly you can cleanup leftover resources from your old cluster.
[basic license]: https://www.elastic.co/subscriptions
[data.yaml]: https://github.com/elastic/helm-charts/blob/7.16/elasticsearch/examples/migration/data.yaml
[helm/charts]: https://github.com/helm/charts/tree/master/stable/elasticsearch
[master.yaml]: https://github.com/elastic/helm-charts/blob/7.16/elasticsearch/examples/migration/master.yaml
[restoring to a different cluster guide]: https://www.elastic.co/guide/en/elasticsearch/reference/6.8/modules-snapshots.html#_restoring_to_a_different_cluster
[rolling upgrades guide]: https://www.elastic.co/guide/en/elasticsearch/reference/6.8/rolling-upgrades.html

View File

@@ -0,0 +1,23 @@
---
replicas: 2
clusterName: "elasticsearch"
nodeGroup: "client"
esMajorVersion: 6
roles:
master: "false"
ingest: "false"
data: "false"
volumeClaimTemplate:
accessModes: [ "ReadWriteOnce" ]
storageClassName: "standard"
resources:
requests:
storage: 1Gi # Currently needed till pvcs are made optional
persistence:
enabled: false

View File

@@ -0,0 +1,17 @@
---
replicas: 2
esMajorVersion: 6
extraEnvs:
- name: discovery.zen.ping.unicast.hosts
value: "my-release-elasticsearch-discovery"
clusterName: "elasticsearch"
nodeGroup: "data"
roles:
master: "false"
ingest: "false"
data: "true"

View File

@@ -0,0 +1,26 @@
---
# Temporarily set to 3 so we can scale up/down the old and new cluster
# one at a time whilst always keeping 3 masters running
replicas: 1
esMajorVersion: 6
extraEnvs:
- name: discovery.zen.ping.unicast.hosts
value: "my-release-elasticsearch-discovery"
clusterName: "elasticsearch"
nodeGroup: "master"
roles:
master: "true"
ingest: "false"
data: "false"
volumeClaimTemplate:
accessModes: [ "ReadWriteOnce" ]
storageClassName: "standard"
resources:
requests:
storage: 4Gi

View File

@@ -0,0 +1,13 @@
default: test
RELEASE := helm-es-minikube
TIMEOUT := 1200s
install:
helm upgrade --wait --timeout=$(TIMEOUT) --install --values values.yaml $(RELEASE) ../../
test: install
helm test $(RELEASE)
purge:
helm del $(RELEASE)

View File

@@ -0,0 +1,38 @@
# Minikube
This example deploys a 3-node Elasticsearch 7.16.2 cluster on [Minikube][]
using [custom values][].
If helm or kubectl timeouts occur, you may consider creating a minikube VM with
more CPU cores or memory allocated.
Note that this configuration should be used for testing only and isn't
recommended for production.
## Requirements
In order to properly support the required persistent volume claims for the
Elasticsearch StatefulSet, the `default-storageclass` and `storage-provisioner`
minikube addons must be enabled.
```
minikube addons enable default-storageclass
minikube addons enable storage-provisioner
```
## Usage
* Deploy Elasticsearch chart with the default values: `make install`
* You can now set up a port forward to query the Elasticsearch API:
```
kubectl port-forward svc/elasticsearch-master 9200
curl localhost:9200/_cat/indices
```
[custom values]: https://github.com/elastic/helm-charts/tree/7.16/elasticsearch/examples/minikube/values.yaml
[minikube]: https://minikube.sigs.k8s.io/docs/

View File

@@ -0,0 +1,23 @@
---
# Permit co-located instances for solitary minikube virtual machines.
antiAffinity: "soft"
# Shrink default JVM heap.
esJavaOpts: "-Xmx128m -Xms128m"
# Allocate smaller chunks of memory per pod.
resources:
requests:
cpu: "100m"
memory: "512M"
limits:
cpu: "1000m"
memory: "512M"
# Request smaller persistent volumes.
volumeClaimTemplate:
accessModes: [ "ReadWriteOnce" ]
storageClassName: "standard"
resources:
requests:
storage: 100M

View File

@@ -0,0 +1,19 @@
default: test
include ../../../helpers/examples.mk
PREFIX := helm-es-multi
RELEASE := helm-es-multi-master
TIMEOUT := 1200s
install:
helm upgrade --wait --timeout=$(TIMEOUT) --install --values master.yaml $(PREFIX)-master ../../
helm upgrade --wait --timeout=$(TIMEOUT) --install --values data.yaml $(PREFIX)-data ../../
helm upgrade --wait --timeout=$(TIMEOUT) --install --values client.yaml $(PREFIX)-client ../../
test: install goss
purge:
helm del $(PREFIX)-master
helm del $(PREFIX)-data
helm del $(PREFIX)-client

View File

@@ -0,0 +1,29 @@
# Multi
This example deploys an Elasticsearch 7.16.2 cluster composed of 3 different Helm
releases:
- `helm-es-multi-master` for the 3 master nodes using [master values][]
- `helm-es-multi-data` for the 3 data nodes using [data values][]
- `helm-es-multi-client` for the 3 client nodes using [client values][]
## Usage
* Deploy the 3 Elasticsearch releases: `make install`
* You can now set up a port forward to query the Elasticsearch API:
```
kubectl port-forward svc/multi-master 9200
curl -u elastic:changeme http://localhost:9200/_cat/indices
```
## Testing
You can also run [goss integration tests][] using `make test`
[client values]: https://github.com/elastic/helm-charts/tree/7.16/elasticsearch/examples/multi/client.yaml
[data values]: https://github.com/elastic/helm-charts/tree/7.16/elasticsearch/examples/multi/data.yaml
[goss integration tests]: https://github.com/elastic/helm-charts/tree/7.16/elasticsearch/examples/multi/test/goss.yaml
[master values]: https://github.com/elastic/helm-charts/tree/7.16/elasticsearch/examples/multi/master.yaml

View File

@@ -0,0 +1,14 @@
---
clusterName: "multi"
nodeGroup: "client"
roles:
master: "false"
ingest: "false"
data: "false"
ml: "false"
remote_cluster_client: "false"
persistence:
enabled: false

View File

@@ -0,0 +1,11 @@
---
clusterName: "multi"
nodeGroup: "data"
roles:
master: "false"
ingest: "true"
data: "true"
ml: "false"
remote_cluster_client: "false"

View File

@@ -0,0 +1,11 @@
---
clusterName: "multi"
nodeGroup: "master"
roles:
master: "true"
ingest: "false"
data: "false"
ml: "false"
remote_cluster_client: "false"

View File

@@ -0,0 +1,9 @@
http:
http://localhost:9200/_cluster/health:
status: 200
timeout: 2000
body:
- 'green'
- '"cluster_name":"multi"'
- '"number_of_nodes":9'
- '"number_of_data_nodes":3'

View File

@@ -0,0 +1,14 @@
default: test
include ../../../helpers/examples.mk
RELEASE := helm-es-networkpolicy
TIMEOUT := 1200s
install:
helm upgrade --wait --timeout=$(TIMEOUT) --install --values values.yaml $(RELEASE) ../../
test: install goss
purge:
helm del $(RELEASE)

View File

@@ -0,0 +1,37 @@
networkPolicy:
http:
enabled: true
explicitNamespacesSelector:
# Accept from namespaces with all those different rules (from whitelisted Pods)
matchLabels:
role: frontend-http
matchExpressions:
- {key: role, operator: In, values: [frontend-http]}
additionalRules:
- podSelector:
matchLabels:
role: frontend-http
- podSelector:
matchExpressions:
- key: role
operator: In
values:
- frontend-http
transport:
enabled: true
allowExternal: true
explicitNamespacesSelector:
matchLabels:
role: frontend-transport
matchExpressions:
- {key: role, operator: In, values: [frontend-transport]}
additionalRules:
- podSelector:
matchLabels:
role: frontend-transport
- podSelector:
matchExpressions:
- key: role
operator: In
values:
- frontend-transport

View File

@@ -0,0 +1,13 @@
default: test
include ../../../helpers/examples.mk
RELEASE := elasticsearch
install:
helm upgrade --wait --timeout=$(TIMEOUT) --install --values values.yaml $(RELEASE) ../../
test: install goss
purge:
helm del $(RELEASE)

View File

@@ -0,0 +1,24 @@
# OpenShift
This example deploys a 3-node Elasticsearch 7.16.2 cluster on [OpenShift][]
using [custom values][].
## Usage
* Deploy Elasticsearch chart with the default values: `make install`
* You can now set up a port forward to query the Elasticsearch API:
```
kubectl port-forward svc/elasticsearch-master 9200
curl localhost:9200/_cat/indices
```
## Testing
You can also run [goss integration tests][] using `make test`
[custom values]: https://github.com/elastic/helm-charts/tree/7.16/elasticsearch/examples/openshift/values.yaml
[goss integration tests]: https://github.com/elastic/helm-charts/tree/7.16/elasticsearch/examples/openshift/test/goss.yaml
[openshift]: https://www.openshift.com/

View File

@@ -0,0 +1,16 @@
http:
http://localhost:9200/_cluster/health:
status: 200
timeout: 2000
body:
- "green"
- '"number_of_nodes":3'
- '"number_of_data_nodes":3'
http://localhost:9200:
status: 200
timeout: 2000
body:
- '"number" : "7.16.2"'
- '"cluster_name" : "elasticsearch"'
- "You Know, for Search"

View File

@@ -0,0 +1,11 @@
---
securityContext:
runAsUser: null
podSecurityContext:
fsGroup: null
runAsUser: null
sysctlInitContainer:
enabled: false

View File

@@ -0,0 +1,38 @@
default: test
include ../../../helpers/examples.mk
RELEASE := helm-es-security
ELASTICSEARCH_IMAGE := docker.elastic.co/elasticsearch/elasticsearch:$(STACK_VERSION)
TIMEOUT := 1200s
install:
helm upgrade --wait --timeout=$(TIMEOUT) --install --values values.yaml $(RELEASE) ../../
test: secrets install goss
purge:
kubectl delete secrets elastic-credentials elastic-certificates elastic-certificate-pem elastic-certificate-crt || true
helm del $(RELEASE)
pull-elasticsearch-image:
docker pull $(ELASTICSEARCH_IMAGE)
secrets:
docker rm -f elastic-helm-charts-certs || true
rm -f elastic-certificates.p12 elastic-certificate.pem elastic-certificate.crt elastic-stack-ca.p12 || true
password=$$([ ! -z "$$ELASTIC_PASSWORD" ] && echo $$ELASTIC_PASSWORD || echo $$(docker run --rm busybox:1.31.1 /bin/sh -c "< /dev/urandom tr -cd '[:alnum:]' | head -c20")) && \
docker run --name elastic-helm-charts-certs -i -w /app \
$(ELASTICSEARCH_IMAGE) \
/bin/sh -c " \
elasticsearch-certutil ca --out /app/elastic-stack-ca.p12 --pass '' && \
elasticsearch-certutil cert --name security-master --dns security-master --ca /app/elastic-stack-ca.p12 --pass '' --ca-pass '' --out /app/elastic-certificates.p12" && \
docker cp elastic-helm-charts-certs:/app/elastic-certificates.p12 ./ && \
docker rm -f elastic-helm-charts-certs && \
openssl pkcs12 -nodes -passin pass:'' -in elastic-certificates.p12 -out elastic-certificate.pem && \
openssl x509 -outform der -in elastic-certificate.pem -out elastic-certificate.crt && \
kubectl create secret generic elastic-certificates --from-file=elastic-certificates.p12 && \
kubectl create secret generic elastic-certificate-pem --from-file=elastic-certificate.pem && \
kubectl create secret generic elastic-certificate-crt --from-file=elastic-certificate.crt && \
kubectl create secret generic elastic-credentials --from-literal=password=$$password --from-literal=username=elastic && \
rm -f elastic-certificates.p12 elastic-certificate.pem elastic-certificate.crt elastic-stack-ca.p12

View File

@@ -0,0 +1,29 @@
# Security
This example deploys a 3-node Elasticsearch 7.16.2 cluster with authentication
and autogenerated certificates for TLS (see [values][]).
Note that this configuration should be used for testing only. For a production
deployment you should generate SSL certificates following the [official docs][].
## Usage
* Create the required secrets: `make secrets`
* Deploy Elasticsearch chart with the default values: `make install`
* You can now set up a port forward to query the Elasticsearch API:
```
kubectl port-forward svc/security-master 9200
curl -u elastic:changeme https://localhost:9200/_cat/indices
```
## Testing
You can also run [goss integration tests][] using `make test`
[goss integration tests]: https://github.com/elastic/helm-charts/tree/7.16/elasticsearch/examples/security/test/goss.yaml
[official docs]: https://www.elastic.co/guide/en/elasticsearch/reference/7.16/configuring-tls.html#node-certificates
[values]: https://github.com/elastic/helm-charts/tree/7.16/elasticsearch/examples/security/values.yaml

View File

@@ -0,0 +1,44 @@
http:
https://security-master:9200/_cluster/health:
status: 200
timeout: 2000
allow-insecure: true
username: elastic
password: "{{ .Env.ELASTIC_PASSWORD }}"
body:
- "green"
- '"number_of_nodes":3'
- '"number_of_data_nodes":3'
https://localhost:9200/:
status: 200
timeout: 2000
allow-insecure: true
username: elastic
password: "{{ .Env.ELASTIC_PASSWORD }}"
body:
- '"cluster_name" : "security"'
- "You Know, for Search"
https://localhost:9200/_xpack/license:
status: 200
timeout: 2000
allow-insecure: true
username: elastic
password: "{{ .Env.ELASTIC_PASSWORD }}"
body:
- "active"
- "basic"
file:
/usr/share/elasticsearch/config/elasticsearch.yml:
exists: true
contains:
- "xpack.security.enabled: true"
- "xpack.security.transport.ssl.enabled: true"
- "xpack.security.transport.ssl.verification_mode: certificate"
- "xpack.security.transport.ssl.keystore.path: /usr/share/elasticsearch/config/certs/elastic-certificates.p12"
- "xpack.security.transport.ssl.truststore.path: /usr/share/elasticsearch/config/certs/elastic-certificates.p12"
- "xpack.security.http.ssl.enabled: true"
- "xpack.security.http.ssl.truststore.path: /usr/share/elasticsearch/config/certs/elastic-certificates.p12"
- "xpack.security.http.ssl.keystore.path: /usr/share/elasticsearch/config/certs/elastic-certificates.p12"

View File

@@ -0,0 +1,33 @@
---
clusterName: "security"
nodeGroup: "master"
roles:
master: "true"
ingest: "true"
data: "true"
protocol: https
esConfig:
elasticsearch.yml: |
xpack.security.enabled: true
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: certificate
xpack.security.transport.ssl.keystore.path: /usr/share/elasticsearch/config/certs/elastic-certificates.p12
xpack.security.transport.ssl.truststore.path: /usr/share/elasticsearch/config/certs/elastic-certificates.p12
xpack.security.http.ssl.enabled: true
xpack.security.http.ssl.truststore.path: /usr/share/elasticsearch/config/certs/elastic-certificates.p12
xpack.security.http.ssl.keystore.path: /usr/share/elasticsearch/config/certs/elastic-certificates.p12
extraEnvs:
- name: ELASTIC_PASSWORD
valueFrom:
secretKeyRef:
name: elastic-credentials
key: password
secretMounts:
- name: elastic-certificates
secretName: elastic-certificates
path: /usr/share/elasticsearch/config/certs

View File

@@ -0,0 +1,16 @@
default: test
include ../../../helpers/examples.mk
CHART := elasticsearch
RELEASE := helm-es-upgrade
FROM := 7.4.0 # versions before 7.4.0 aren't compatible with Kubernetes >= 1.16.0
install:
../../../helpers/upgrade.sh --chart $(CHART) --release $(RELEASE) --from $(FROM)
kubectl rollout status statefulset upgrade-master
test: install goss
purge:
helm del $(RELEASE)

View File

@@ -0,0 +1,17 @@
# Upgrade
This example will deploy a 3-node Elasticsearch cluster using an old chart
version, then upgrade it.
## Usage
* Deploy and upgrade Elasticsearch chart with the default values: `make install`
## Testing
You can also run [goss integration tests][] using `make test`.
[goss integration tests]: https://github.com/elastic/helm-charts/tree/master/elasticsearch/examples/upgrade/test/goss.yaml

View File

@@ -0,0 +1,76 @@
#!/usr/bin/env bash
set -euo pipefail
usage() {
cat <<-EOF
USAGE:
$0 [--release <release-name>] [--from <elasticsearch-version>]
$0 --help
OPTIONS:
--release <release-name>
Name of the Helm release to install
--from <elasticsearch-version>
Elasticsearch version to use for first install
EOF
exit 1
}
RELEASE="helm-es-upgrade"
FROM=""
while [[ $# -gt 0 ]]
do
key="$1"
case $key in
--help)
usage
;;
--release)
RELEASE="$2"
shift 2
;;
--from)
FROM="$2"
shift 2
;;
*)
log "Unrecognized argument: '$key'"
usage
;;
esac
done
if ! command -v jq > /dev/null
then
echo 'jq is required to use this script'
echo 'please check https://stedolan.github.io/jq/download/ to install it'
exit 1
fi
# Elasticsearch charts < 7.4.0 are not compatible with K8S >= 1.16
if [[ -z $FROM ]]
then
KUBE_MINOR_VERSION=$(kubectl version -o json | jq --raw-output --exit-status '.serverVersion.minor' | sed 's/[^0-9]*//g')
if [ "$KUBE_MINOR_VERSION" -lt 16 ]
then
FROM="7.0.0-alpha1"
else
FROM="7.4.0"
fi
fi
helm repo add elastic https://helm.elastic.co
# Initial install
printf "Installing Elasticsearch chart %s\n" "$FROM"
helm upgrade --wait --timeout=600s --install "$RELEASE" elastic/elasticsearch --version "$FROM" --set clusterName=upgrade
kubectl rollout status sts/upgrade-master --timeout=600s
# Upgrade
printf "Upgrading Elasticsearch chart\n"
helm upgrade --wait --timeout=600s --set terminationGracePeriod=121 --install "$RELEASE" ../../ --set clusterName=upgrade
kubectl rollout status sts/upgrade-master --timeout=600s

View File

@@ -0,0 +1,16 @@
http:
http://localhost:9200/_cluster/health:
status: 200
timeout: 2000
body:
- "green"
- '"number_of_nodes":3'
- '"number_of_data_nodes":3'
http://localhost:9200:
status: 200
timeout: 2000
body:
- '"number" : "7.16.2"'
- '"cluster_name" : "upgrade"'
- "You Know, for Search"

View File

@@ -0,0 +1,2 @@
---
clusterName: upgrade

View File

@@ -0,0 +1,6 @@
1. Watch all cluster members come up.
$ kubectl get pods --namespace={{ .Release.Namespace }} -l app={{ template "elasticsearch.uname" . }} -w
{{- if .Values.tests.enabled -}}
2. Test cluster health using Helm test.
$ helm --namespace={{ .Release.Namespace }} test {{ .Release.Name }}
{{- end -}}

View File

@@ -0,0 +1,65 @@
{{/* vim: set filetype=mustache: */}}
{{/*
Expand the name of the chart.
*/}}
{{- define "elasticsearch.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
*/}}
{{- define "elasticsearch.fullname" -}}
{{- $name := default .Chart.Name .Values.nameOverride -}}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{- define "elasticsearch.uname" -}}
{{- if empty .Values.fullnameOverride -}}
{{- if empty .Values.nameOverride -}}
{{ .Values.clusterName }}-{{ .Values.nodeGroup }}
{{- else -}}
{{ .Values.nameOverride }}-{{ .Values.nodeGroup }}
{{- end -}}
{{- else -}}
{{ .Values.fullnameOverride }}
{{- end -}}
{{- end -}}
{{- define "elasticsearch.masterService" -}}
{{- if empty .Values.masterService -}}
{{- if empty .Values.fullnameOverride -}}
{{- if empty .Values.nameOverride -}}
{{ .Values.clusterName }}-master
{{- else -}}
{{ .Values.nameOverride }}-master
{{- end -}}
{{- else -}}
{{ .Values.fullnameOverride }}
{{- end -}}
{{- else -}}
{{ .Values.masterService }}
{{- end -}}
{{- end -}}
{{- define "elasticsearch.endpoints" -}}
{{- $replicas := int (toString (.Values.replicas)) }}
{{- $uname := (include "elasticsearch.uname" .) }}
{{- range $i, $e := untilStep 0 $replicas 1 -}}
{{ $uname }}-{{ $i }},
{{- end -}}
{{- end -}}
{{- define "elasticsearch.esMajorVersion" -}}
{{- if .Values.esMajorVersion -}}
{{ .Values.esMajorVersion }}
{{- else -}}
{{- $version := int (index (.Values.imageTag | splitList ".") 0) -}}
{{- if and (contains "docker.elastic.co/elasticsearch/elasticsearch" .Values.image) (not (eq $version 0)) -}}
{{ $version }}
{{- else -}}
7
{{- end -}}
{{- end -}}
{{- end -}}

View File

@@ -0,0 +1,16 @@
{{- if .Values.esConfig }}
apiVersion: v1
kind: ConfigMap
metadata:
name: {{ template "elasticsearch.uname" . }}-config
labels:
heritage: {{ .Release.Service | quote }}
release: {{ .Release.Name | quote }}
chart: "{{ .Chart.Name }}"
app: "{{ template "elasticsearch.uname" . }}"
data:
{{- range $path, $config := .Values.esConfig }}
{{ $path }}: |
{{ $config | indent 4 -}}
{{- end -}}
{{- end -}}

View File

@@ -0,0 +1,64 @@
{{- if .Values.ingress.enabled -}}
{{- $fullName := include "elasticsearch.uname" . -}}
{{- $httpPort := .Values.httpPort -}}
{{- $pathtype := .Values.ingress.pathtype -}}
{{- $ingressPath := .Values.ingress.path -}}
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: {{ $fullName }}
labels:
app: {{ .Chart.Name }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
{{- with .Values.ingress.annotations }}
annotations:
{{ toYaml . | indent 4 }}
{{- end }}
spec:
{{- if .Values.ingress.className }}
ingressClassName: {{ .Values.ingress.className | quote }}
{{- end }}
{{- if .Values.ingress.tls }}
tls:
{{- if $ingressPath }}
{{- range .Values.ingress.tls }}
- hosts:
{{- range .hosts }}
- {{ . }}
{{- end }}
secretName: {{ .secretName }}
{{- end }}
{{- else }}
{{ toYaml .Values.ingress.tls | indent 4 }}
{{- end }}
{{- end}}
rules:
{{- range .Values.ingress.hosts }}
{{- if $ingressPath }}
- host: {{ . }}
http:
paths:
- path: {{ $ingressPath }}
pathType: {{ $pathtype }}
backend:
service:
name: {{ $fullName }}
port:
number: {{ $httpPort }}
{{- else }}
- host: {{ .host }}
http:
paths:
{{- range .paths }}
- path: {{ .path }}
pathType: {{ $pathtype }}
backend:
service:
name: {{ $fullName }}
port:
number: {{ .servicePort | default $httpPort }}
{{- end }}
{{- end }}
{{- end }}
{{- end }}

View File

@@ -0,0 +1,61 @@
{{- if (or .Values.networkPolicy.http.enabled .Values.networkPolicy.transport.enabled) }}
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
name: {{ template "elasticsearch.uname" . }}
labels:
heritage: {{ .Release.Service | quote }}
release: {{ .Release.Name | quote }}
chart: "{{ .Chart.Name }}"
app: "{{ template "elasticsearch.uname" . }}"
spec:
podSelector:
matchLabels:
app: "{{ template "elasticsearch.uname" . }}"
ingress: # Allow inbound connections
{{- if .Values.networkPolicy.http.enabled }}
# For HTTP access
- ports:
- port: {{ .Values.httpPort }}
from:
# From authorized Pods (having the correct label)
- podSelector:
matchLabels:
{{ template "elasticsearch.uname" . }}-http-client: "true"
{{- with .Values.networkPolicy.http.explicitNamespacesSelector }}
# From authorized namespaces
namespaceSelector:
{{ toYaml . | indent 12 }}
{{- end }}
{{- with .Values.networkPolicy.http.additionalRules }}
# Or from custom additional rules
{{ toYaml . | indent 8 }}
{{- end }}
{{- end }}
{{- if .Values.networkPolicy.transport.enabled }}
# For transport access
- ports:
- port: {{ .Values.transportPort }}
from:
# From authorized Pods (having the correct label)
- podSelector:
matchLabels:
{{ template "elasticsearch.uname" . }}-transport-client: "true"
{{- with .Values.networkPolicy.transport.explicitNamespacesSelector }}
# From authorized namespaces
namespaceSelector:
{{ toYaml . | indent 12 }}
{{- end }}
{{- with .Values.networkPolicy.transport.additionalRules }}
# Or from custom additional rules
{{ toYaml . | indent 8 }}
{{- end }}
# Or from other ElasticSearch Pods
- podSelector:
matchLabels:
app: "{{ template "elasticsearch.uname" . }}"
{{- end }}
{{- end }}

View File

@@ -0,0 +1,12 @@
---
{{- if .Values.maxUnavailable }}
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
name: "{{ template "elasticsearch.uname" . }}-pdb"
spec:
maxUnavailable: {{ .Values.maxUnavailable }}
selector:
matchLabels:
app: "{{ template "elasticsearch.uname" . }}"
{{- end }}

View File

@@ -0,0 +1,14 @@
{{- if .Values.podSecurityPolicy.create -}}
{{- $fullName := include "elasticsearch.uname" . -}}
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
name: {{ default $fullName .Values.podSecurityPolicy.name | quote }}
labels:
heritage: {{ .Release.Service | quote }}
release: {{ .Release.Name | quote }}
chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
app: {{ $fullName | quote }}
spec:
{{ toYaml .Values.podSecurityPolicy.spec | indent 2 }}
{{- end -}}

View File

@@ -0,0 +1,25 @@
{{- if .Values.rbac.create -}}
{{- $fullName := include "elasticsearch.uname" . -}}
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: {{ $fullName | quote }}
labels:
heritage: {{ .Release.Service | quote }}
release: {{ .Release.Name | quote }}
chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
app: {{ $fullName | quote }}
rules:
- apiGroups:
- policy
resources:
- podsecuritypolicies
resourceNames:
{{- if eq .Values.podSecurityPolicy.name "" }}
- {{ $fullName | quote }}
{{- else }}
- {{ .Values.podSecurityPolicy.name | quote }}
{{- end }}
verbs:
- use
{{- end -}}

View File

@@ -0,0 +1,24 @@
{{- if .Values.rbac.create -}}
{{- $fullName := include "elasticsearch.uname" . -}}
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: {{ $fullName | quote }}
labels:
heritage: {{ .Release.Service | quote }}
release: {{ .Release.Name | quote }}
chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
app: {{ $fullName | quote }}
subjects:
- kind: ServiceAccount
{{- if eq .Values.rbac.serviceAccountName "" }}
name: {{ $fullName | quote }}
{{- else }}
name: {{ .Values.rbac.serviceAccountName | quote }}
{{- end }}
namespace: {{ .Release.Namespace | quote }}
roleRef:
kind: Role
name: {{ $fullName | quote }}
apiGroup: rbac.authorization.k8s.io
{{- end -}}

View File

@@ -0,0 +1,77 @@
---
{{- if .Values.service.enabled -}}
kind: Service
apiVersion: v1
metadata:
{{- if eq .Values.nodeGroup "master" }}
name: {{ template "elasticsearch.masterService" . }}
{{- else }}
name: {{ template "elasticsearch.uname" . }}
{{- end }}
labels:
heritage: {{ .Release.Service | quote }}
release: {{ .Release.Name | quote }}
chart: "{{ .Chart.Name }}"
app: "{{ template "elasticsearch.uname" . }}"
{{- if .Values.service.labels }}
{{ toYaml .Values.service.labels | indent 4 }}
{{- end }}
annotations:
{{ toYaml .Values.service.annotations | indent 4 }}
spec:
type: {{ .Values.service.type }}
selector:
release: {{ .Release.Name | quote }}
chart: "{{ .Chart.Name }}"
app: "{{ template "elasticsearch.uname" . }}"
ports:
- name: {{ .Values.service.httpPortName | default "http" }}
protocol: TCP
port: {{ .Values.httpPort }}
{{- if .Values.service.nodePort }}
nodePort: {{ .Values.service.nodePort }}
{{- end }}
- name: {{ .Values.service.transportPortName | default "transport" }}
protocol: TCP
port: {{ .Values.transportPort }}
{{- if .Values.service.loadBalancerIP }}
loadBalancerIP: {{ .Values.service.loadBalancerIP }}
{{- end }}
{{- with .Values.service.loadBalancerSourceRanges }}
loadBalancerSourceRanges:
{{ toYaml . | indent 4 }}
{{- end }}
{{- if .Values.service.externalTrafficPolicy }}
externalTrafficPolicy: {{ .Values.service.externalTrafficPolicy }}
{{- end }}
{{- end }}
---
kind: Service
apiVersion: v1
metadata:
{{- if eq .Values.nodeGroup "master" }}
name: {{ template "elasticsearch.masterService" . }}-headless
{{- else }}
name: {{ template "elasticsearch.uname" . }}-headless
{{- end }}
labels:
heritage: {{ .Release.Service | quote }}
release: {{ .Release.Name | quote }}
chart: "{{ .Chart.Name }}"
app: "{{ template "elasticsearch.uname" . }}"
{{- if .Values.service.labelsHeadless }}
{{ toYaml .Values.service.labelsHeadless | indent 4 }}
{{- end }}
annotations:
service.alpha.kubernetes.io/tolerate-unready-endpoints: "true"
spec:
clusterIP: None # This is needed for statefulset hostnames like elasticsearch-0 to resolve
# Create endpoints also if the related pod isn't ready
publishNotReadyAddresses: true
selector:
app: "{{ template "elasticsearch.uname" . }}"
ports:
- name: {{ .Values.service.httpPortName | default "http" }}
port: {{ .Values.httpPort }}
- name: {{ .Values.service.transportPortName | default "transport" }}
port: {{ .Values.transportPort }}

View File

@@ -0,0 +1,20 @@
{{- if .Values.rbac.create -}}
{{- $fullName := include "elasticsearch.uname" . -}}
apiVersion: v1
kind: ServiceAccount
metadata:
{{- if eq .Values.rbac.serviceAccountName "" }}
name: {{ $fullName | quote }}
{{- else }}
name: {{ .Values.rbac.serviceAccountName | quote }}
{{- end }}
annotations:
{{- with .Values.rbac.serviceAccountAnnotations }}
{{- toYaml . | nindent 4 }}
{{- end }}
labels:
heritage: {{ .Release.Service | quote }}
release: {{ .Release.Name | quote }}
chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
app: {{ $fullName | quote }}
{{- end -}}

View File

@@ -0,0 +1,380 @@
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: {{ template "elasticsearch.uname" . }}
labels:
heritage: {{ .Release.Service | quote }}
release: {{ .Release.Name | quote }}
chart: "{{ .Chart.Name }}"
app: "{{ template "elasticsearch.uname" . }}"
{{- range $key, $value := .Values.labels }}
{{ $key }}: {{ $value | quote }}
{{- end }}
annotations:
esMajorVersion: "{{ include "elasticsearch.esMajorVersion" . }}"
spec:
serviceName: {{ template "elasticsearch.uname" . }}-headless
selector:
matchLabels:
app: "{{ template "elasticsearch.uname" . }}"
replicas: {{ .Values.replicas }}
podManagementPolicy: {{ .Values.podManagementPolicy }}
updateStrategy:
type: {{ .Values.updateStrategy }}
{{- if .Values.persistence.enabled }}
volumeClaimTemplates:
- metadata:
name: {{ template "elasticsearch.uname" . }}
{{- if .Values.persistence.labels.enabled }}
labels:
release: {{ .Release.Name | quote }}
chart: "{{ .Chart.Name }}"
app: "{{ template "elasticsearch.uname" . }}"
{{- range $key, $value := .Values.labels }}
{{ $key }}: {{ $value | quote }}
{{- end }}
{{- end }}
{{- with .Values.persistence.annotations }}
annotations:
{{ toYaml . | indent 8 }}
{{- end }}
spec:
{{ toYaml .Values.volumeClaimTemplate | indent 6 }}
{{- end }}
template:
metadata:
name: "{{ template "elasticsearch.uname" . }}"
labels:
release: {{ .Release.Name | quote }}
chart: "{{ .Chart.Name }}"
app: "{{ template "elasticsearch.uname" . }}"
{{- range $key, $value := .Values.labels }}
{{ $key }}: {{ $value | quote }}
{{- end }}
annotations:
{{- range $key, $value := .Values.podAnnotations }}
{{ $key }}: {{ $value | quote }}
{{- end }}
{{/* This forces a restart if the configmap has changed */}}
{{- if .Values.esConfig }}
configchecksum: {{ include (print .Template.BasePath "/configmap.yaml") . | sha256sum | trunc 63 }}
{{- end }}
spec:
{{- if .Values.schedulerName }}
schedulerName: "{{ .Values.schedulerName }}"
{{- end }}
securityContext:
{{ toYaml .Values.podSecurityContext | indent 8 }}
{{- if .Values.fsGroup }}
fsGroup: {{ .Values.fsGroup }} # Deprecated value, please use .Values.podSecurityContext.fsGroup
{{- end }}
{{- if or .Values.rbac.create (not (eq .Values.rbac.serviceAccountName "")) }}
serviceAccountName: {{ default (include "elasticsearch.uname" .) .Values.rbac.serviceAccountName | quote }}
{{- end }}
automountServiceAccountToken: {{ .Values.rbac.automountToken }}
{{- with .Values.tolerations }}
tolerations:
{{ toYaml . | indent 6 }}
{{- end }}
{{- with .Values.nodeSelector }}
nodeSelector:
{{ toYaml . | indent 8 }}
{{- end }}
{{- if or (eq .Values.antiAffinity "hard") (eq .Values.antiAffinity "soft") .Values.nodeAffinity }}
{{- if .Values.priorityClassName }}
priorityClassName: {{ .Values.priorityClassName }}
{{- end }}
affinity:
{{- end }}
{{- if eq .Values.antiAffinity "hard" }}
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchExpressions:
- key: app
operator: In
values:
- "{{ template "elasticsearch.uname" .}}"
topologyKey: {{ .Values.antiAffinityTopologyKey }}
{{- else if eq .Values.antiAffinity "soft" }}
podAntiAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 1
podAffinityTerm:
topologyKey: {{ .Values.antiAffinityTopologyKey }}
labelSelector:
matchExpressions:
- key: app
operator: In
values:
- "{{ template "elasticsearch.uname" . }}"
{{- end }}
{{- with .Values.nodeAffinity }}
nodeAffinity:
{{ toYaml . | indent 10 }}
{{- end }}
terminationGracePeriodSeconds: {{ .Values.terminationGracePeriod }}
volumes:
{{- range .Values.secretMounts }}
- name: {{ .name }}
secret:
secretName: {{ .secretName }}
{{- if .defaultMode }}
defaultMode: {{ .defaultMode }}
{{- end }}
{{- end }}
{{- if .Values.esConfig }}
- name: esconfig
configMap:
name: {{ template "elasticsearch.uname" . }}-config
{{- end }}
{{- if .Values.keystore }}
- name: keystore
emptyDir: {}
{{- range .Values.keystore }}
- name: keystore-{{ .secretName }}
secret: {{ toYaml . | nindent 12 }}
{{- end }}
{{ end }}
{{- if .Values.extraVolumes }}
# Some extra blocks accept strings; this is kept for backwards compatibility
# while also allowing YAML to be specified.
{{- if eq "string" (printf "%T" .Values.extraVolumes) }}
{{ tpl .Values.extraVolumes . | indent 8 }}
{{- else }}
{{ toYaml .Values.extraVolumes | indent 8 }}
{{- end }}
{{- end }}
{{- if .Values.imagePullSecrets }}
imagePullSecrets:
{{ toYaml .Values.imagePullSecrets | indent 8 }}
{{- end }}
enableServiceLinks: {{ .Values.enableServiceLinks }}
{{- if .Values.hostAliases }}
hostAliases: {{ toYaml .Values.hostAliases | nindent 8 }}
{{- end }}
{{- if or (.Values.extraInitContainers) (.Values.sysctlInitContainer.enabled) (.Values.keystore) }}
initContainers:
{{- if .Values.sysctlInitContainer.enabled }}
- name: configure-sysctl
securityContext:
runAsUser: 0
privileged: true
image: "{{ .Values.image }}:{{ .Values.imageTag }}"
imagePullPolicy: "{{ .Values.imagePullPolicy }}"
command: ["sysctl", "-w", "vm.max_map_count={{ .Values.sysctlVmMaxMapCount}}"]
resources:
{{ toYaml .Values.initResources | indent 10 }}
{{- end }}
{{ if .Values.keystore }}
- name: keystore
image: "{{ .Values.image }}:{{ .Values.imageTag }}"
imagePullPolicy: "{{ .Values.imagePullPolicy }}"
command:
- bash
- -c
- |
set -euo pipefail
elasticsearch-keystore create
for i in /tmp/keystoreSecrets/*/*; do
key=$(basename $i)
echo "Adding file $i to keystore key $key"
elasticsearch-keystore add-file "$key" "$i"
done
# Add the bootstrap password since otherwise the Elasticsearch entrypoint tries to do this on startup
if [ ! -z ${ELASTIC_PASSWORD+x} ]; then
echo 'Adding env $ELASTIC_PASSWORD to keystore as key bootstrap.password'
echo "$ELASTIC_PASSWORD" | elasticsearch-keystore add -x bootstrap.password
fi
cp -a /usr/share/elasticsearch/config/elasticsearch.keystore /tmp/keystore/
env: {{ toYaml .Values.extraEnvs | nindent 10 }}
envFrom: {{ toYaml .Values.envFrom | nindent 10 }}
resources: {{ toYaml .Values.initResources | nindent 10 }}
volumeMounts:
- name: keystore
mountPath: /tmp/keystore
{{- range .Values.keystore }}
- name: keystore-{{ .secretName }}
mountPath: /tmp/keystoreSecrets/{{ .secretName }}
{{- end }}
{{ end }}
{{- if .Values.extraInitContainers }}
# Some extra blocks accept strings; this is kept for backwards compatibility
# while also allowing YAML to be specified.
{{- if eq "string" (printf "%T" .Values.extraInitContainers) }}
{{ tpl .Values.extraInitContainers . | indent 6 }}
{{- else }}
{{ toYaml .Values.extraInitContainers | indent 6 }}
{{- end }}
{{- end }}
{{- end }}
containers:
- name: "{{ template "elasticsearch.name" . }}"
securityContext:
{{ toYaml .Values.securityContext | indent 10 }}
image: "{{ .Values.image }}:{{ .Values.imageTag }}"
imagePullPolicy: "{{ .Values.imagePullPolicy }}"
readinessProbe:
exec:
command:
- bash
- -c
- |
set -e
# If the node is starting up wait for the cluster to be ready (request params: "{{ .Values.clusterHealthCheckParams }}" )
# Once it has started only check that the node itself is responding
START_FILE=/tmp/.es_start_file
# Disable nss cache to avoid filling dentry cache when calling curl
# This is required with Elasticsearch Docker using nss < 3.52
export NSS_SDB_USE_CACHE=no
http () {
local path="${1}"
local args="${2}"
set -- -XGET -s
if [ "$args" != "" ]; then
set -- "$@" $args
fi
if [ -n "${ELASTIC_PASSWORD}" ]; then
set -- "$@" -u "elastic:${ELASTIC_PASSWORD}"
fi
curl --output /dev/null -k "$@" "{{ .Values.protocol }}://127.0.0.1:{{ .Values.httpPort }}${path}"
}
if [ -f "${START_FILE}" ]; then
echo 'Elasticsearch is already running, lets check the node is healthy'
HTTP_CODE=$(http "/" "-w %{http_code}")
RC=$?
if [[ ${RC} -ne 0 ]]; then
echo "curl --output /dev/null -k -XGET -s -w '%{http_code}' \${BASIC_AUTH} {{ .Values.protocol }}://127.0.0.1:{{ .Values.httpPort }}/ failed with RC ${RC}"
exit ${RC}
fi
# ready if HTTP code 200, 503 is tolerable if ES version is 6.x
if [[ ${HTTP_CODE} == "200" ]]; then
exit 0
elif [[ ${HTTP_CODE} == "503" && "{{ include "elasticsearch.esMajorVersion" . }}" == "6" ]]; then
exit 0
else
echo "curl --output /dev/null -k -XGET -s -w '%{http_code}' \${BASIC_AUTH} {{ .Values.protocol }}://127.0.0.1:{{ .Values.httpPort }}/ failed with HTTP code ${HTTP_CODE}"
exit 1
fi
else
echo 'Waiting for elasticsearch cluster to become ready (request params: "{{ .Values.clusterHealthCheckParams }}" )'
if http "/_cluster/health?{{ .Values.clusterHealthCheckParams }}" "--fail" ; then
touch ${START_FILE}
exit 0
else
echo 'Cluster is not yet ready (request params: "{{ .Values.clusterHealthCheckParams }}" )'
exit 1
fi
fi
{{ toYaml .Values.readinessProbe | indent 10 }}
ports:
- name: http
containerPort: {{ .Values.httpPort }}
- name: transport
containerPort: {{ .Values.transportPort }}
resources:
{{ toYaml .Values.resources | indent 10 }}
env:
- name: node.name
valueFrom:
fieldRef:
fieldPath: metadata.name
{{- if eq .Values.roles.master "true" }}
{{- if ge (int (include "elasticsearch.esMajorVersion" .)) 7 }}
- name: cluster.initial_master_nodes
value: "{{ template "elasticsearch.endpoints" . }}"
{{- else }}
- name: discovery.zen.minimum_master_nodes
value: "{{ .Values.minimumMasterNodes }}"
{{- end }}
{{- end }}
{{- if lt (int (include "elasticsearch.esMajorVersion" .)) 7 }}
- name: discovery.zen.ping.unicast.hosts
value: "{{ template "elasticsearch.masterService" . }}-headless"
{{- else }}
- name: discovery.seed_hosts
value: "{{ template "elasticsearch.masterService" . }}-headless"
{{- end }}
- name: cluster.name
value: "{{ .Values.clusterName }}"
- name: network.host
value: "{{ .Values.networkHost }}"
- name: cluster.deprecation_indexing.enabled
value: "{{ .Values.clusterDeprecationIndexing }}"
{{- if .Values.esJavaOpts }}
- name: ES_JAVA_OPTS
value: "{{ .Values.esJavaOpts }}"
{{- end }}
{{- range $role, $enabled := .Values.roles }}
- name: node.{{ $role }}
value: "{{ $enabled }}"
{{- end }}
{{- if .Values.extraEnvs }}
{{ toYaml .Values.extraEnvs | indent 10 }}
{{- end }}
{{- if .Values.envFrom }}
envFrom:
{{ toYaml .Values.envFrom | indent 10 }}
{{- end }}
volumeMounts:
{{- if .Values.persistence.enabled }}
- name: "{{ template "elasticsearch.uname" . }}"
mountPath: /usr/share/elasticsearch/data
{{- end }}
{{ if .Values.keystore }}
- name: keystore
mountPath: /usr/share/elasticsearch/config/elasticsearch.keystore
subPath: elasticsearch.keystore
{{ end }}
{{- range .Values.secretMounts }}
- name: {{ .name }}
mountPath: {{ .path }}
{{- if .subPath }}
subPath: {{ .subPath }}
{{- end }}
{{- end }}
{{- range $path, $config := .Values.esConfig }}
- name: esconfig
mountPath: /usr/share/elasticsearch/config/{{ $path }}
subPath: {{ $path }}
{{- end -}}
{{- if .Values.extraVolumeMounts }}
# Some extra blocks accept strings; this is kept for backwards compatibility
# while also allowing YAML to be specified.
{{- if eq "string" (printf "%T" .Values.extraVolumeMounts) }}
{{ tpl .Values.extraVolumeMounts . | indent 10 }}
{{- else }}
{{ toYaml .Values.extraVolumeMounts | indent 10 }}
{{- end }}
{{- end }}
{{- if .Values.lifecycle }}
lifecycle:
{{ toYaml .Values.lifecycle | indent 10 }}
{{- end }}
{{- if .Values.extraContainers }}
# Some extra blocks accept strings; this is kept for backwards compatibility
# while also allowing YAML to be specified.
{{- if eq "string" (printf "%T" .Values.extraContainers) }}
{{ tpl .Values.extraContainers . | indent 6 }}
{{- else }}
{{ toYaml .Values.extraContainers | indent 6 }}
{{- end }}
{{- end }}

View File

@@ -0,0 +1,36 @@
---
{{- if .Values.tests.enabled -}}
apiVersion: v1
kind: Pod
metadata:
{{- if .Values.healthNameOverride }}
name: {{ .Values.healthNameOverride | quote }}
{{- else }}
name: "{{ .Release.Name }}-{{ randAlpha 5 | lower }}-test"
{{- end }}
annotations:
"helm.sh/hook": test
"helm.sh/hook-delete-policy": hook-succeeded
spec:
securityContext:
{{ toYaml .Values.podSecurityContext | indent 4 }}
containers:
{{- if .Values.healthNameOverride }}
- name: {{ .Values.healthNameOverride | quote }}
{{- else }}
- name: "{{ .Release.Name }}-{{ randAlpha 5 | lower }}-test"
{{- end }}
image: "{{ .Values.image }}:{{ .Values.imageTag }}"
imagePullPolicy: "{{ .Values.imagePullPolicy }}"
command:
- "sh"
- "-c"
- |
#!/usr/bin/env bash -e
curl -XGET --fail '{{ template "elasticsearch.uname" . }}:{{ .Values.httpPort }}/_cluster/health?{{ .Values.clusterHealthCheckParams }}'
{{- if .Values.imagePullSecrets }}
imagePullSecrets:
{{ toYaml .Values.imagePullSecrets | indent 4 }}
{{- end }}
restartPolicy: Never
{{- end -}}

View File

@@ -0,0 +1,348 @@
---
clusterName: "elasticsearch"
nodeGroup: "master"
# The service that non-master groups will try to connect to when joining the cluster
# This should be set to clusterName + "-" + nodeGroup for your master group
masterService: ""
# Elasticsearch roles that will be applied to this nodeGroup
# These will be set as environment variables. E.g. node.master=true
roles:
master: "true"
ingest: "true"
data: "true"
remote_cluster_client: "true"
ml: "true"
replicas: 3
minimumMasterNodes: 2
esMajorVersion: ""
clusterDeprecationIndexing: "false"
# Allows you to add any config files in /usr/share/elasticsearch/config/
# such as elasticsearch.yml and log4j2.properties
esConfig: {}
# elasticsearch.yml: |
# key:
# nestedkey: value
# log4j2.properties: |
# key = value
# Extra environment variables to append to this nodeGroup
# This will be appended to the current 'env:' key. You can use any of the
# Kubernetes env var syntax here
extraEnvs: []
# - name: MY_ENVIRONMENT_VAR
# value: the_value_goes_here
# Allows you to load environment variables from kubernetes secret or config map
envFrom: []
# - secretRef:
# name: env-secret
# - configMapRef:
# name: config-map
# A list of secrets and their paths to mount inside the pod
# This is useful for mounting certificates for security and for mounting
# the X-Pack license
secretMounts: []
# - name: elastic-certificates
# secretName: elastic-certificates
# path: /usr/share/elasticsearch/config/certs
# defaultMode: 0755
hostAliases: []
#- ip: "127.0.0.1"
# hostnames:
# - "foo.local"
# - "bar.local"
image: "docker.elastic.co/elasticsearch/elasticsearch"
imageTag: "7.16.2"
imagePullPolicy: "IfNotPresent"
podAnnotations:
{}
# iam.amazonaws.com/role: es-cluster
# Additional labels
labels: {}
esJavaOpts: "" # example: "-Xmx1g -Xms1g"
resources:
requests:
cpu: "1000m"
memory: "2Gi"
limits:
cpu: "1000m"
memory: "2Gi"
initResources:
{}
# limits:
# cpu: "25m"
# # memory: "128Mi"
# requests:
# cpu: "25m"
# memory: "128Mi"
networkHost: "0.0.0.0"
volumeClaimTemplate:
accessModes: ["ReadWriteOnce"]
resources:
requests:
storage: 30Gi
rbac:
create: false
serviceAccountAnnotations: {}
serviceAccountName: ""
automountToken: true
podSecurityPolicy:
create: false
name: ""
spec:
privileged: true
fsGroup:
rule: RunAsAny
runAsUser:
rule: RunAsAny
seLinux:
rule: RunAsAny
supplementalGroups:
rule: RunAsAny
volumes:
- secret
- configMap
- persistentVolumeClaim
- emptyDir
persistence:
enabled: true
labels:
# Add default labels for the volumeClaimTemplate of the StatefulSet
enabled: false
annotations: {}
extraVolumes:
[]
# - name: extras
# emptyDir: {}
extraVolumeMounts:
[]
# - name: extras
# mountPath: /usr/share/extras
# readOnly: true
extraContainers:
[]
# - name: do-something
# image: busybox
# command: ['do', 'something']
extraInitContainers:
[]
# - name: do-something
# image: busybox
# command: ['do', 'something']
# This is the PriorityClass settings as defined in
# https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/#priorityclass
priorityClassName: ""
# By default this will make sure two pods don't end up on the same node
# Changing this to a region would allow you to spread pods across regions
antiAffinityTopologyKey: "kubernetes.io/hostname"
# Hard means that, by default, pods will only be scheduled if there are enough nodes for them
# and that they will never end up on the same node. Setting this to soft makes this "best effort" instead.
antiAffinity: "hard"
# This is the node affinity settings as defined in
# https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#node-affinity-beta-feature
nodeAffinity: {}
# The default is to deploy all pods serially. By setting this to parallel all pods are started at
# the same time when bootstrapping the cluster
podManagementPolicy: "Parallel"
# The environment variables injected by service links are not used, but can lead to slow Elasticsearch boot times when
# there are many services in the current namespace.
# If you experience slow pod startups you probably want to set this to `false`.
enableServiceLinks: true
protocol: http
httpPort: 9200
transportPort: 9300
service:
enabled: true
labels: {}
labelsHeadless: {}
type: ClusterIP
nodePort: ""
annotations: {}
httpPortName: http
transportPortName: transport
loadBalancerIP: ""
loadBalancerSourceRanges: []
externalTrafficPolicy: ""
updateStrategy: RollingUpdate
# This is the max unavailable setting for the pod disruption budget
# The default value of 1 will make sure that kubernetes won't allow more than 1
# of your pods to be unavailable during maintenance
maxUnavailable: 1
podSecurityContext:
fsGroup: 1000
runAsUser: 1000
securityContext:
capabilities:
drop:
- ALL
# readOnlyRootFilesystem: true
runAsNonRoot: true
runAsUser: 1000
# How long to wait for elasticsearch to stop gracefully
terminationGracePeriod: 120
sysctlVmMaxMapCount: 262144
readinessProbe:
failureThreshold: 3
initialDelaySeconds: 10
periodSeconds: 10
successThreshold: 3
timeoutSeconds: 5
# https://www.elastic.co/guide/en/elasticsearch/reference/7.16/cluster-health.html#request-params wait_for_status
clusterHealthCheckParams: "wait_for_status=green&timeout=1s"
## Use an alternate scheduler.
## ref: https://kubernetes.io/docs/tasks/administer-cluster/configure-multiple-schedulers/
##
schedulerName: ""
imagePullSecrets: []
nodeSelector: {}
tolerations: []
# Enabling this will publicly expose your Elasticsearch instance.
# Only enable this if you have security enabled on your cluster
ingress:
enabled: false
annotations: {}
# kubernetes.io/ingress.class: nginx
# kubernetes.io/tls-acme: "true"
className: "nginx"
pathtype: ImplementationSpecific
hosts:
- host: chart-example.local
paths:
- path: /
tls: []
# - secretName: chart-example-tls
# hosts:
# - chart-example.local
nameOverride: ""
fullnameOverride: ""
healthNameOverride: ""
lifecycle:
{}
# preStop:
# exec:
# command: ["/bin/sh", "-c", "echo Hello from the postStart handler > /usr/share/message"]
# postStart:
# exec:
# command:
# - bash
# - -c
# - |
# #!/bin/bash
# # Add a template to adjust number of shards/replicas
# TEMPLATE_NAME=my_template
# INDEX_PATTERN="logstash-*"
# SHARD_COUNT=8
# REPLICA_COUNT=1
# ES_URL=http://localhost:9200
# while [[ "$(curl -s -o /dev/null -w '%{http_code}\n' $ES_URL)" != "200" ]]; do sleep 1; done
# curl -XPUT "$ES_URL/_template/$TEMPLATE_NAME" -H 'Content-Type: application/json' -d'{"index_patterns":['\""$INDEX_PATTERN"\"'],"settings":{"number_of_shards":'$SHARD_COUNT',"number_of_replicas":'$REPLICA_COUNT'}}'
sysctlInitContainer:
enabled: true
keystore: []
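# Illustrative only (hypothetical Secret name): each entry references a Secret whose
# files are added to the Elasticsearch keystore by an init container, e.g.
# keystore:
#   - secretName: elasticsearch-keystore-secrets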
networkPolicy:
## Enable creation of NetworkPolicy resources. Only Ingress traffic is filtered for now.
## In order for a Pod to access Elasticsearch, it needs to have the following label:
## {{ template "uname" . }}-client: "true"
## Example for default configuration to access HTTP port:
## elasticsearch-master-http-client: "true"
## Example for default configuration to access transport port:
## elasticsearch-master-transport-client: "true"
http:
enabled: false
## If explicitNamespacesSelector is not set (or set to {}), only client Pods in the networkPolicy's
## namespace that match all criteria can reach Elasticsearch.
## If the Pods should also be accessible to clients from other namespaces, use this
## parameter to select those namespaces
##
# explicitNamespacesSelector:
# # Accept from namespaces with all those different rules (only from whitelisted Pods)
# matchLabels:
# role: frontend
# matchExpressions:
# - {key: role, operator: In, values: [frontend]}
## Additional NetworkPolicy Ingress "from" rules to set. Note that all rules are OR-ed.
##
# additionalRules:
# - podSelector:
# matchLabels:
# role: frontend
# - podSelector:
# matchExpressions:
# - key: role
# operator: In
# values:
# - frontend
transport:
## Note that Elasticsearch Pods can always talk to each other over the transport port, even when this policy is enabled.
enabled: false
# explicitNamespacesSelector:
# matchLabels:
# role: frontend
# matchExpressions:
# - {key: role, operator: In, values: [frontend]}
# additionalRules:
# - podSelector:
# matchLabels:
# role: frontend
# - podSelector:
# matchExpressions:
# - key: role
# operator: In
# values:
# - frontend
tests:
enabled: true
# Deprecated
# please use the above podSecurityContext.fsGroup instead
fsGroup: ""

View File

@@ -0,0 +1,21 @@
# Patterns to ignore when building packages.
# This supports shell glob matching, relative path matching, and
# negation (prefixed with !). Only one pattern per line.
.DS_Store
# Common VCS dirs
.git/
.gitignore
.bzr/
.bzrignore
.hg/
.hgignore
.svn/
# Common backup files
*.swp
*.bak
*.tmp
*~
# Various IDEs
.project
.idea/
*.tmproj

View File

@@ -0,0 +1,6 @@
dependencies:
- name: common
repository: https://charts.bitnami.com/bitnami
version: 1.1.3
digest: sha256:3422f8c1139c2f0f0d816e7ee7280705d2010e83c3e2367b28dcb4f7dd911135
generated: "2020-12-22T11:27:07.137984504Z"

View File

@@ -0,0 +1,29 @@
annotations:
category: Database
apiVersion: v2
appVersion: 4.4.3
dependencies:
- name: common
repository: https://charts.bitnami.com/bitnami
tags:
- bitnami-common
version: 1.x.x
description: NoSQL document-oriented database that stores JSON-like documents with
dynamic schemas, simplifying the integration of data in content-driven applications.
home: https://github.com/bitnami/charts/tree/master/bitnami/mongodb
icon: https://bitnami.com/assets/stacks/mongodb/img/mongodb-stack-220x234.png
keywords:
- mongodb
- database
- nosql
- cluster
- replicaset
- replication
maintainers:
- email: containers@bitnami.com
name: Bitnami
name: mongodb
sources:
- https://github.com/bitnami/bitnami-docker-mongodb
- https://mongodb.org
version: 10.3.4

View File

@@ -0,0 +1,663 @@
# MongoDB
[MongoDB](https://www.mongodb.com/) is a cross-platform document-oriented database. Classified as a NoSQL database, MongoDB eschews the traditional table-based relational database structure in favor of JSON-like documents with dynamic schemas, making the integration of data in certain types of applications easier and faster.
## TL;DR
```bash
$ helm repo add bitnami https://charts.bitnami.com/bitnami
$ helm install my-release bitnami/mongodb
```
## Introduction
This chart bootstraps a [MongoDB](https://github.com/bitnami/bitnami-docker-mongodb) deployment on a [Kubernetes](http://kubernetes.io) cluster using the [Helm](https://helm.sh) package manager.
Bitnami charts can be used with [Kubeapps](https://kubeapps.com/) for deployment and management of Helm Charts in clusters. This chart has been tested to work with NGINX Ingress, cert-manager, fluentd and Prometheus on top of the [BKPR](https://kubeprod.io/).
## Prerequisites
- Kubernetes 1.12+
- Helm 3.0-beta3+
- PV provisioner support in the underlying infrastructure
## Installing the Chart
To install the chart with the release name `my-release`:
```bash
$ helm install my-release bitnami/mongodb
```
The command deploys MongoDB on the Kubernetes cluster in the default configuration. The [Parameters](#parameters) section lists the parameters that can be configured during installation.
> **Tip**: List all releases using `helm list`
## Uninstalling the Chart
To uninstall/delete the `my-release` deployment:
```bash
$ helm delete my-release
```
The command removes all the Kubernetes components associated with the chart and deletes the release.
## Architecture
This chart allows you to install MongoDB using two different architecture setups: "standalone" or "replicaset". Use the `architecture` parameter to choose between them:
```console
architecture="standalone"
architecture="replicaset"
```
The standalone architecture installs a deployment (or statefulset) with one MongoDB server (it cannot be scaled):
```
┌────────────────┐
│ MongoDB │
| svc │
└───────┬────────┘
┌──────────┐
│ MongoDB │
│ Server │
│ Pod │
└──────────┘
```
The chart supports the replicaset architecture with and without a [MongoDB Arbiter](https://docs.mongodb.com/manual/core/replica-set-arbiter/):
- When the MongoDB Arbiter is enabled, the chart installs two statefulsets: A statefulset with N MongoDB servers (organised with one primary and N-1 secondary nodes), and a statefulset with one MongoDB arbiter node (it cannot be scaled).
```
┌────────────────┐ ┌────────────────┐ ┌────────────────┐ ┌─────────────┐
│ MongoDB 0 │ │ MongoDB 1 │ │ MongoDB N │ │ Arbiter │
| external svc │ | external svc │ | external svc │ | svc │
└───────┬────────┘ └───────┬────────┘ └───────┬────────┘ └──────┬──────┘
│ │ │ │
▼ ▼ ▼ ▼
┌───────────┐ ┌───────────┐ ┌───────────┐ ┌───────────┐
│ MongoDB 0 │ │ MongoDB 1 │ │ MongoDB N │ │ MongoDB │
│ Server │ │ Server │ .... │ Server │ │ Arbiter │
│ Pod │ │ Pod │ │ Pod │ │ Pod │
└───────────┘ └───────────┘ └───────────┘ └───────────┘
primary secondary secondary
```
The PSA (Primary-Secondary-Arbiter) model is useful when the third Availability Zone cannot hold a full MongoDB instance. The MongoDB Arbiter, as a decision maker, is lightweight and can run alongside other workloads.
_Note:_ An update takes your MongoDB replicaset offline if the Arbiter is enabled and the number of MongoDB replicas is two: Helm applies updates to the statefulsets for the MongoDB instance _and_ the Arbiter at the same time, so you lose two out of three quorum votes (see the values sketch at the end of this section).
- Without the Arbiter, the chart deploys a single statefulset with N MongoDB servers (organised with one primary and N-1 secondary nodes)
```
┌────────────────┐ ┌────────────────┐ ┌────────────────┐
│ MongoDB 0 │ │ MongoDB 1 │ │ MongoDB N │
| external svc │ | external svc │ | external svc │
└───────┬────────┘ └───────┬────────┘ └───────┬────────┘
│ │ │
▼ ▼ ▼
┌───────────┐ ┌───────────┐ ┌───────────┐
│ MongoDB 0 │ │ MongoDB 1 │ │ MongoDB N │
│ Server │ │ Server │ .... │ Server │
│ Pod │ │ Pod │ │ Pod │
└───────────┘ └───────────┘ └───────────┘
primary secondary secondary
```
No service load-balances requests across the MongoDB nodes; instead, each node has its own associated service so it can be accessed individually.
> Note: although the 1st replica is initially assigned the "primary" role, any of the "secondary" nodes can become the "primary" if that replica goes down, or during upgrades. Do not make any assumptions about which replica holds the "primary" role; instead, configure your Mongo client with the list of MongoDB hostnames so it can dynamically choose the node to send requests to.
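To avoid losing quorum during updates, one option is to run three data-bearing replicas with the Arbiter disabled. A minimal, illustrative values sketch (parameter names are documented in the tables below):
```yaml
architecture: replicaset
replicaCount: 3    # three data-bearing members keep a majority during a rolling update
arbiter:
  enabled: false   # no arbiter; all voting members hold data
```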
## Parameters
The following tables list the configurable parameters of the MongoDB chart and their default values per section/component:
### Global parameters
| Parameter | Description | Default |
|-------------------------------------------|------------------------------------------------------------------------------------------------------------|--------------------------------------------------------------|
| `global.imageRegistry` | Global Docker image registry | `nil` |
| `global.imagePullSecrets` | Global Docker registry secret names as an array | `[]` (does not add image pull secrets to deployed pods) |
| `global.storageClass` | Global storage class for dynamic provisioning | `nil` |
| `global.namespaceOverride` | Global string to override the release namespace | `nil` |
### Common parameters
| Parameter | Description | Default |
|-------------------------------------------|------------------------------------------------------------------------------------------------------------|--------------------------------------------------------------|
| `nameOverride` | String to partially override mongodb.fullname | `nil` |
| `fullnameOverride` | String to fully override mongodb.fullname | `nil` |
| `clusterDomain` | Default Kubernetes cluster domain | `cluster.local` |
| `schedulerName` | Name of the scheduler (other than default) to dispatch pods | `nil` |
| `image.registry` | MongoDB image registry | `docker.io` |
| `image.repository` | MongoDB image name | `bitnami/mongodb` |
| `image.tag` | MongoDB image tag | `{TAG_NAME}` |
| `image.pullPolicy` | MongoDB image pull policy | `IfNotPresent` |
| `image.pullSecrets` | Specify docker-registry secret names as an array | `[]` (does not add image pull secrets to deployed pods) |
| `image.debug` | Set to true if you would like to see extra information on logs | `false` |
### MongoDB parameters
| Parameter | Description | Default |
|-------------------------------------------|------------------------------------------------------------------------------------------------------------|---------------------------------------------------------|
| `architecture` | MongoDB architecture (`standalone` or `replicaset`) | `standalone` |
| `useStatefulSet` | Set to true to use a StatefulSet instead of a Deployment (only when `architecture=standalone`) | `false` |
| `auth.enabled` | Enable authentication | `true` |
| `auth.rootPassword` | MongoDB admin password | _random 10 character long alphanumeric string_ |
| `auth.username` | MongoDB custom user (mandatory if `auth.database` is set) | `nil` |
| `auth.password` | MongoDB custom user password | _random 10 character long alphanumeric string_ |
| `auth.database` | MongoDB custom database | `nil` |
| `auth.replicaSetKey` | Key used for authentication in the replicaset (only when `architecture=replicaset`) | _random 10 character long alphanumeric string_ |
| `auth.existingSecret`                     | Existing secret with MongoDB credentials (keys: `mongodb-password`, `mongodb-root-password`, `mongodb-replica-set-key`) | `nil`                                              |
| `replicaSetName` | Name of the replica set (only when `architecture=replicaset`) | `rs0` |
| `replicaSetHostnames` | Enable DNS hostnames in the replicaset config (only when `architecture=replicaset`) | `true` |
| `enableIPv6` | Switch to enable/disable IPv6 on MongoDB | `false` |
| `directoryPerDB` | Switch to enable/disable DirectoryPerDB on MongoDB | `false` |
| `systemLogVerbosity` | MongoDB system log verbosity level | `0` |
| `disableSystemLog` | Switch to enable/disable MongoDB system log | `false` |
| `configuration` | MongoDB configuration file to be used | `{}` |
| `existingConfigmap` | Name of existing ConfigMap with MongoDB configuration | `nil` |
| `initdbScripts` | Dictionary of initdb scripts | `nil` |
| `initdbScriptsConfigMap` | ConfigMap with the initdb scripts | `nil` |
| `command` | Override default container command (useful when using custom images) | `nil` |
| `args` | Override default container args (useful when using custom images) | `nil` |
| `extraFlags` | MongoDB additional command line flags | `[]` |
| `extraEnvVars` | Extra environment variables to add to MongoDB pods | `[]` |
| `extraEnvVarsCM` | Name of existing ConfigMap containing extra env vars | `nil` |
| `extraEnvVarsSecret` | Name of existing Secret containing extra env vars (in case of sensitive data) | `nil` |
| `tls.enabled` | Enable MongoDB TLS support between nodes in the cluster as well as between mongo clients and nodes | `false` |
| `tls.existingSecret` | Existing secret with TLS certificates (keys: `mongodb-ca-cert`, `mongodb-ca-key`, `client-pem`) | `nil` |
| `tls.caCert`                              | Custom CA certificate (base64 encoded)                                                                       | `nil`                                                     |
| `tls.caKey` | CA certificate private key (base64 encoded) | `nil` |
| `tls.image.registry` | Init container TLS certs setup image registry (nginx) | `docker.io` |
| `tls.image.repository` | Init container TLS certs setup image name (nginx) | `bitnami/nginx` |
| `tls.image.tag` | Init container TLS certs setup image tag (nginx) | `{TAG_NAME}` |
| `tls.image.pullPolicy` | Init container TLS certs setup image pull policy (nginx) | `Always` |
### MongoDB statefulset parameters
| Parameter | Description | Default |
|-------------------------------------------|------------------------------------------------------------------------------------------------------------|---------------------------------------------------------|
| `replicaCount` | Number of MongoDB nodes (only when `architecture=replicaset`) | `2` |
| `labels`                                   | Additional labels to be added to the MongoDB statefulset                                                     | `{}` (evaluated as a template)                            |
| `annotations`                              | Annotations to be added to the MongoDB statefulset                                                           | `{}` (evaluated as a template)                            |
| `podManagementPolicy` | Pod management policy for MongoDB | `OrderedReady` |
| `strategyType` | StrategyType for MongoDB statefulset | `RollingUpdate` |
| `podLabels` | MongoDB pod labels | `{}` (evaluated as a template) |
| `podAnnotations` | MongoDB Pod annotations | `{}` (evaluated as a template) |
| `priorityClassName` | Name of the existing priority class to be used by MongoDB pod(s) | `""` |
| `podAffinityPreset` | MongoDB Pod affinity preset. Ignored if `affinity` is set. Allowed values: `soft` or `hard` | `""` |
| `podAntiAffinityPreset` | MongoDB Pod anti-affinity preset. Ignored if `affinity` is set. Allowed values: `soft` or `hard` | `soft` |
| `nodeAffinityPreset.type` | MongoDB Node affinity preset type. Ignored if `affinity` is set. Allowed values: `soft` or `hard` | `""` |
| `nodeAffinityPreset.key`                  | MongoDB Node label key to match. Ignored if `affinity` is set.                                               | `""`                                                     |
| `nodeAffinityPreset.values` | MongoDB Node label values to match. Ignored if `affinity` is set. | `[]` |
| `affinity` | MongoDB Affinity for pod assignment | `{}` (evaluated as a template) |
| `nodeSelector` | MongoDB Node labels for pod assignment | `{}` (evaluated as a template) |
| `tolerations` | MongoDB Tolerations for pod assignment | `[]` (evaluated as a template) |
| `podSecurityContext` | MongoDB pod(s)' Security Context | Check `values.yaml` file |
| `containerSecurityContext` | MongoDB containers' Security Context | Check `values.yaml` file |
| `resources.limits` | The resources limits for MongoDB containers | `{}` |
| `resources.requests` | The requested resources for MongoDB containers | `{}` |
| `livenessProbe` | Liveness probe configuration for MongoDB | Check `values.yaml` file |
| `readinessProbe` | Readiness probe configuration for MongoDB | Check `values.yaml` file |
| `customLivenessProbe` | Override default liveness probe for MongoDB containers | `nil` |
| `customReadinessProbe` | Override default readiness probe for MongoDB containers | `nil` |
| `pdb.create` | Enable/disable a Pod Disruption Budget creation for MongoDB pod(s) | `false` |
| `pdb.minAvailable` | Minimum number/percentage of MongoDB pods that should remain scheduled | `1` |
| `pdb.maxUnavailable` | Maximum number/percentage of MongoDB pods that may be made unavailable | `nil` |
| `initContainers` | Add additional init containers for the MongoDB pod(s) | `{}` (evaluated as a template) |
| `sidecars` | Add additional sidecar containers for the MongoDB pod(s) | `{}` (evaluated as a template) |
| `extraVolumeMounts` | Optionally specify extra list of additional volumeMounts for the MongoDB container(s) | `{}` |
| `extraVolumes` | Optionally specify extra list of additional volumes to the MongoDB statefulset | `{}` |
### Exposure parameters
| Parameter | Description | Default |
|---------------------------------------------------|----------------------------------------------------------------------------------------------------|---------------------------------------------------------|
| `service.type` | Kubernetes Service type | `ClusterIP` |
| `service.port` | MongoDB service port | `27017` |
| `service.portName` | MongoDB service port name | `mongodb` |
| `service.nodePort` | Port to bind to for NodePort and LoadBalancer service types | `""` |
| `service.clusterIP` | MongoDB service cluster IP | `nil` |
| `service.loadBalancerIP` | loadBalancerIP for MongoDB Service | `nil` |
| `service.loadBalancerSourceRanges` | Address(es) that are allowed when service is LoadBalancer | `[]` |
| `service.annotations` | Service annotations | `{}` (evaluated as a template) |
| `externalAccess.enabled` | Enable Kubernetes external cluster access to MongoDB nodes | `false` |
| `externalAccess.autoDiscovery.enabled` | Enable using an init container to auto-detect external IPs by querying the K8s API | `false` |
| `externalAccess.autoDiscovery.image.registry` | Init container auto-discovery image registry (kubectl) | `docker.io` |
| `externalAccess.autoDiscovery.image.repository` | Init container auto-discovery image name (kubectl) | `bitnami/kubectl` |
| `externalAccess.autoDiscovery.image.tag` | Init container auto-discovery image tag (kubectl) | `{TAG_NAME}` |
| `externalAccess.autoDiscovery.image.pullPolicy` | Init container auto-discovery image pull policy (kubectl) | `Always` |
| `externalAccess.autoDiscovery.resources.limits` | Init container auto-discovery resource limits | `{}` |
| `externalAccess.autoDiscovery.resources.requests` | Init container auto-discovery resource requests | `{}` |
| `externalAccess.service.type` | Kubernetes Service type for external access. It can be NodePort or LoadBalancer | `LoadBalancer` |
| `externalAccess.service.port` | MongoDB port used for external access when service type is LoadBalancer | `27017` |
| `externalAccess.service.loadBalancerIPs` | Array of load balancer IPs for MongoDB nodes | `[]` |
| `externalAccess.service.loadBalancerSourceRanges` | Address(es) that are allowed when service is LoadBalancer | `[]` |
| `externalAccess.service.domain` | Domain or external IP used to configure MongoDB advertised hostname when service type is NodePort | `nil` |
| `externalAccess.service.nodePorts` | Array of node ports used to configure MongoDB advertised hostname when service type is NodePort | `[]` |
| `externalAccess.service.annotations`              | Service annotations for external access                                                             | `{}` (evaluated as a template)                           |
### Persistence parameters
| Parameter | Description | Default |
|-------------------------------------------|------------------------------------------------------------------------------------------------------------|---------------------------------------------------------|
| `persistence.enabled` | Enable MongoDB data persistence using PVC | `true` |
| `persistence.existingClaim` | Provide an existing `PersistentVolumeClaim` (only when `architecture=standalone`) | `nil` (evaluated as a template) |
| `persistence.storageClass` | PVC Storage Class for MongoDB data volume | `nil` |
| `persistence.accessMode` | PVC Access Mode for MongoDB data volume | `ReadWriteOnce` |
| `persistence.size` | PVC Storage Request for MongoDB data volume | `8Gi` |
| `persistence.mountPath` | Path to mount the volume at | `/bitnami/mongodb` |
| `persistence.subPath` | Subdirectory of the volume to mount at | `""` |
| `persistence.volumeClaimTemplates.selector` | A label query over volumes to consider for binding (e.g. when using local volumes) | `` |
### RBAC parameters
| Parameter | Description | Default |
|-------------------------------------------|------------------------------------------------------------------------------------------------------------|---------------------------------------------------------|
| `serviceAccount.create` | Enable creation of ServiceAccount for MongoDB pods | `true` |
| `serviceAccount.name` | Name of the created serviceAccount | Generated using the `mongodb.fullname` template |
| `rbac.create`                             | Whether to create and use RBAC resources                                                                     | `false`                                                  |
### Volume Permissions parameters
| Parameter | Description | Default |
|-------------------------------------------|----------------------------------------------------------------------------------------------------------------------|--------------------------------------------------------------|
| `volumePermissions.enabled` | Enable init container that changes the owner and group of the persistent volume(s) mountpoint to `runAsUser:fsGroup` | `false` |
| `volumePermissions.image.registry` | Init container volume-permissions image registry | `docker.io` |
| `volumePermissions.image.repository` | Init container volume-permissions image name | `bitnami/minideb` |
| `volumePermissions.image.tag` | Init container volume-permissions image tag | `buster` |
| `volumePermissions.image.pullPolicy` | Init container volume-permissions image pull policy | `Always` |
| `volumePermissions.image.pullSecrets` | Specify docker-registry secret names as an array | `[]` (does not add image pull secrets to deployed pods) |
| `volumePermissions.resources.limits` | Init container volume-permissions resource limits | `{}` |
| `volumePermissions.resources.requests` | Init container volume-permissions resource requests | `{}` |
| `volumePermissions.securityContext` | Security context of the init container | Check `values.yaml` file |
### Arbiter parameters
| Parameter | Description | Default |
|-------------------------------------------|------------------------------------------------------------------------------------------------------------|--------------------------------------------------------------|
| `arbiter.enabled` | Enable deploying the arbiter | `true` |
| `arbiter.configuration` | Arbiter configuration file to be used | `{}` |
| `arbiter.existingConfigmap` | Name of existing ConfigMap with Arbiter configuration | `nil` |
| `arbiter.command` | Override default container command (useful when using custom images) | `nil` |
| `arbiter.args` | Override default container args (useful when using custom images) | `nil` |
| `arbiter.extraFlags` | Arbiter additional command line flags | `[]` |
| `arbiter.extraEnvVars` | Extra environment variables to add to Arbiter pods | `[]` |
| `arbiter.extraEnvVarsCM` | Name of existing ConfigMap containing extra env vars | `nil` |
| `arbiter.extraEnvVarsSecret` | Name of existing Secret containing extra env vars (in case of sensitive data) | `nil` |
| `arbiter.labels`                          | Additional labels to be added to the Arbiter statefulset                                                     | `{}` (evaluated as a template)                                |
| `arbiter.annotations`                     | Annotations to be added to the Arbiter statefulset                                                           | `{}` (evaluated as a template)                                |
| `arbiter.podLabels` | Arbiter pod labels | `{}` (evaluated as a template) |
| `arbiter.podAnnotations` | Arbiter Pod annotations | `{}` (evaluated as a template) |
| `arbiter.priorityClassName` | Name of the existing priority class to be used by Arbiter pod(s) | `""` |
| `arbiter.podAffinityPreset` | Arbiter Pod affinity preset. Ignored if `affinity` is set. Allowed values: `soft` or `hard` | `""` |
| `arbiter.podAntiAffinityPreset` | Arbiter Pod anti-affinity preset. Ignored if `affinity` is set. Allowed values: `soft` or `hard` | `soft` |
| `arbiter.nodeAffinityPreset.type` | Arbiter Node affinity preset type. Ignored if `affinity` is set. Allowed values: `soft` or `hard` | `""` |
| `arbiter.nodeAffinityPreset.key`          | Arbiter Node label key to match. Ignored if `affinity` is set.                                               | `""`                                                          |
| `arbiter.nodeAffinityPreset.values` | Arbiter Node label values to match. Ignored if `affinity` is set. | `[]` |
| `arbiter.affinity` | Arbiter Affinity for pod assignment | `{}` (evaluated as a template) |
| `arbiter.nodeSelector` | Arbiter Node labels for pod assignment | `{}` (evaluated as a template) |
| `arbiter.tolerations` | Arbiter Tolerations for pod assignment | `[]` (evaluated as a template) |
| `arbiter.podSecurityContext` | Arbiter pod(s)' Security Context | Check `values.yaml` file |
| `arbiter.containerSecurityContext` | Arbiter containers' Security Context | Check `values.yaml` file |
| `arbiter.resources.limits` | The resources limits for Arbiter containers | `{}` |
| `arbiter.resources.requests` | The requested resources for Arbiter containers | `{}` |
| `arbiter.livenessProbe` | Liveness probe configuration for Arbiter | Check `values.yaml` file |
| `arbiter.readinessProbe` | Readiness probe configuration for Arbiter | Check `values.yaml` file |
| `arbiter.customLivenessProbe` | Override default liveness probe for Arbiter containers | `nil` |
| `arbiter.customReadinessProbe` | Override default readiness probe for Arbiter containers | `nil` |
| `arbiter.pdb.create` | Enable/disable a Pod Disruption Budget creation for Arbiter pod(s) | `false` |
| `arbiter.pdb.minAvailable` | Minimum number/percentage of Arbiter pods that should remain scheduled | `1` |
| `arbiter.pdb.maxUnavailable` | Maximum number/percentage of Arbiter pods that may be made unavailable | `nil` |
| `arbiter.initContainers` | Add additional init containers for the Arbiter pod(s) | `{}` (evaluated as a template) |
| `arbiter.sidecars` | Add additional sidecar containers for the Arbiter pod(s) | `{}` (evaluated as a template) |
| `arbiter.extraVolumeMounts` | Optionally specify extra list of additional volumeMounts for the Arbiter container(s) | `{}` |
| `arbiter.extraVolumes` | Optionally specify extra list of additional volumes to the Arbiter statefulset | `{}` |
### Metrics parameters
| Parameter | Description | Default |
|-------------------------------------------|------------------------------------------------------------------------------------------------------------|--------------------------------------------------------------|
| `metrics.enabled` | Enable using a sidecar Prometheus exporter | `false` |
| `metrics.image.registry` | MongoDB Prometheus exporter image registry | `docker.io` |
| `metrics.image.repository` | MongoDB Prometheus exporter image name | `bitnami/mongodb-exporter` |
| `metrics.image.tag` | MongoDB Prometheus exporter image tag | `{TAG_NAME}` |
| `metrics.image.pullPolicy` | MongoDB Prometheus exporter image pull policy | `Always` |
| `metrics.image.pullSecrets` | Specify docker-registry secret names as an array | `[]` (does not add image pull secrets to deployed pods) |
| `metrics.extraFlags` | Additional command line flags | `""` |
| `metrics.extraUri` | Additional URI options of the metrics service | `""` |
| `metrics.service.type`                    | Type of the Prometheus metrics service                                                                      | `ClusterIP`                                                   |
| `metrics.service.port` | Port of the Prometheus metrics service | `9216` |
| `metrics.service.annotations` | Annotations for Prometheus metrics service | Check `values.yaml` file |
| `metrics.resources.limits` | The resources limits for Prometheus exporter containers | `{}` |
| `metrics.resources.requests` | The requested resources for Prometheus exporter containers | `{}` |
| `metrics.livenessProbe` | Liveness probe configuration for Prometheus exporter | Check `values.yaml` file |
| `metrics.readinessProbe` | Readiness probe configuration for Prometheus exporter | Check `values.yaml` file |
| `metrics.serviceMonitor.enabled` | Create ServiceMonitor Resource for scraping metrics using Prometheus Operator | `false` |
| `metrics.serviceMonitor.namespace` | Namespace which Prometheus is running in | `monitoring` |
| `metrics.serviceMonitor.interval` | Interval at which metrics should be scraped | `30s` |
| `metrics.serviceMonitor.scrapeTimeout` | Specify the timeout after which the scrape is ended | `nil` |
| `metrics.serviceMonitor.additionalLabels` | Used to pass Labels that are required by the Installed Prometheus Operator | `{}` |
| `metrics.prometheusRule.enabled` | Set this to true to create prometheusRules for Prometheus operator | `false` |
| `metrics.prometheusRule.namespace` | namespace where prometheusRules resource should be created | `monitoring` |
| `metrics.prometheusRule.rules` | Rules to be created, check values for an example. | `[]` |
| `metrics.prometheusRule.additionalLabels` | Additional labels that can be used so prometheusRules will be discovered by Prometheus | `{}` |
Specify each parameter using the `--set key=value[,key=value]` argument to `helm install`. For example,
```bash
$ helm install my-release \
--set auth.rootPassword=secretpassword,auth.username=my-user,auth.password=my-password,auth.database=my-database \
bitnami/mongodb
```
The above command sets the MongoDB `root` account password to `secretpassword`. Additionally, it creates a standard database user named `my-user`, with the password `my-password`, who has access to a database named `my-database`.
Alternatively, a YAML file that specifies the values for the parameters can be provided while installing the chart. For example,
```bash
$ helm install my-release -f values.yaml bitnami/mongodb
```
> **Tip**: You can use the default [values.yaml](values.yaml)
## Configuration and installation details
### [Rolling VS Immutable tags](https://docs.bitnami.com/containers/how-to/understand-rolling-tags-containers/)
It is strongly recommended to use immutable tags in a production environment. This ensures your deployment does not change automatically if the same tag is updated with a different image.
Bitnami will release a new chart updating its containers if a new version of the main container is released, or if significant changes or critical vulnerabilities exist.
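For example, a minimal values sketch pinning the image to a specific tag (the tag shown is illustrative; use the exact version you have validated):
```yaml
image:
  registry: docker.io
  repository: bitnami/mongodb
  tag: 4.4.3  # an immutable, version-specific tag rather than a rolling one
```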
### Production configuration and horizontal scaling
This chart includes a `values-production.yaml` file where you can find some parameters oriented to production configuration in comparison to the regular `values.yaml`. You can use this file instead of the default one.
- Switch to enable/disable replica set configuration:
```diff
- architecture: standalone
+ architecture: replicaset
```
- Increase the number of MongoDB nodes:
```diff
- replicaCount: 2
+ replicaCount: 4
```
- Enable Pod Disruption Budget:
```diff
- pdb.create: false
+ pdb.create: true
```
- Enable using a sidecar Prometheus exporter:
```diff
- metrics.enabled: false
+ metrics.enabled: true
```
To horizontally scale this chart, set the `replicaCount` parameter (for example, via `--set replicaCount=4`) to modify the number of nodes in your MongoDB replica set, as in the sketch below.
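A minimal sketch, assuming an existing release named `my-release` (substitute your own credentials for the placeholders):
```bash
$ helm upgrade my-release bitnami/mongodb \
    --set architecture=replicaset,replicaCount=4 \
    --set auth.rootPassword=[PASSWORD],auth.replicaSetKey=[REPLICASETKEY]
```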
### Initialize a fresh instance
The [Bitnami MongoDB](https://github.com/bitnami/bitnami-docker-mongodb) image allows you to use your custom scripts to initialize a fresh instance. In order to execute the scripts, you can specify them using the `initdbScripts` parameter as a dict.
You can also set an external ConfigMap with all the initialization scripts. This is done by setting the `initdbScriptsConfigMap` parameter. Note that this will override the previous option.
The allowed extensions are `.sh` and `.js`.
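For example, a minimal sketch of the `initdbScripts` dict (the script names and contents below are illustrative):
```yaml
initdbScripts:
  my_init_script.sh: |
    #!/bin/bash
    echo "Running custom initialization logic"
  my_init_script.js: |
    db = db.getSiblingDB('my-database');
    db.mycollection.createIndex({ name: 1 });
```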
### Replicaset: Accessing MongoDB nodes from outside the cluster
In order to access MongoDB nodes from outside the cluster when using a replicaset architecture, a specific service per MongoDB pod will be created. There are two ways of configuring external access:
- Using LoadBalancer services
- Using NodePort services
#### Using LoadBalancer services
You have two alternatives to use LoadBalancer services:
- Option A) Use random load balancer IPs using an **initContainer** that waits for the IPs to be ready and discover them automatically.
```console
architecture=replicaset
replicaCount=2
externalAccess.enabled=true
externalAccess.service.type=LoadBalancer
externalAccess.service.port=27017
externalAccess.autoDiscovery.enabled=true
serviceAccount.create=true
rbac.create=true
```
> Note: This option requires creating RBAC rules on clusters where RBAC policies are enabled.
- Option B) Manually specify the load balancer IPs:
```console
architecture=replicaset
replicaCount=2
externalAccess.enabled=true
externalAccess.service.type=LoadBalancer
externalAccess.service.port=27017
externalAccess.service.loadBalancerIPs[0]='external-ip-1'
externalAccess.service.loadBalancerIPs[1]='external-ip-2'
```
> Note: You need to know the load balancer IPs in advance, so that each MongoDB node's advertised hostname is configured with them.
#### Using NodePort services
Manually specify the node ports to use:
```console
architecture=replicaset
replicaCount=2
externalAccess.enabled=true
externalAccess.service.type=NodePort
externalAccess.service.nodePorts[0]='node-port-1'
externalAccess.service.nodePorts[1]='node-port-2'
```
> Note: You need to know in advance the node ports that will be exposed, so that each MongoDB node's advertised hostname is configured with them.
The pod will try to get the external IP of the node using `curl -s https://ipinfo.io/ip` unless `externalAccess.service.domain` is provided.
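For example, to skip the IP auto-detection, you can set the domain explicitly (the value below is a placeholder):
```console
externalAccess.service.domain=mongodb.example.com
```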
### Adding extra environment variables
In case you want to add extra environment variables (useful for advanced operations like custom init scripts), you can use the `extraEnvVars` property.
```yaml
extraEnvVars:
- name: LOG_LEVEL
value: error
```
Alternatively, you can use a ConfigMap or a Secret with the environment variables. To do so, use the `extraEnvVarsCM` or the `extraEnvVarsSecret` properties.
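A minimal sketch, assuming a ConfigMap and a Secret with these names already exist in the release namespace:
```yaml
extraEnvVarsCM: mongodb-extra-env
extraEnvVarsSecret: mongodb-extra-env-secret
```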
### Sidecars and Init Containers
If you need additional containers to run within the same pod as MongoDB (e.g. an additional metrics or logging exporter), you can do so via the `sidecars` config parameter. Simply define your container according to the Kubernetes container spec.
```yaml
sidecars:
- name: your-image-name
image: your-image
imagePullPolicy: Always
ports:
- name: portname
containerPort: 1234
```
Similarly, you can add extra init containers using the `initContainers` parameter.
```yaml
initContainers:
- name: your-image-name
image: your-image
imagePullPolicy: Always
ports:
- name: portname
containerPort: 1234
```
## Persistence
The [Bitnami MongoDB](https://github.com/bitnami/bitnami-docker-mongodb) image stores the MongoDB data and configurations at the `/bitnami/mongodb` path of the container.
The chart mounts a [Persistent Volume](http://kubernetes.io/docs/user-guide/persistent-volumes/) at this location. The volume is created using dynamic volume provisioning.
### Adjust permissions of persistent volume mountpoint
As the image runs as non-root by default, it is necessary to adjust the ownership of the persistent volume so that the container can write data to it. By default, the chart is configured to use Kubernetes Security Context to automatically change the ownership of the volume. However, this feature does not work in all Kubernetes distributions.
As an alternative, this chart supports using an initContainer to change the ownership of the volume before mounting it in the final destination. You can enable this initContainer by setting `volumePermissions.enabled` to `true`.
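A minimal sketch enabling that initContainer at install time:
```bash
$ helm install my-release bitnami/mongodb --set volumePermissions.enabled=true
```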
## Using Prometheus rules
You can use custom Prometheus rules for the Prometheus Operator by setting the `prometheusRule` parameter. A basic configuration example is shown below:
```yaml
metrics:
enabled: true
prometheusRule:
enabled: true
rules:
- name: rule1
rules:
- alert: HighRequestLatency
expr: job:request_latency_seconds:mean5m{job="myjob"} > 0.5
for: 10m
labels:
severity: page
annotations:
summary: High request latency
```
## Enabling SSL/TLS
This container supports enabling SSL/TLS between nodes in the cluster, as well as between mongo clients and nodes, by setting the `MONGODB_EXTRA_FLAGS` and `MONGODB_CLIENT_EXTRA_FLAGS` environment variables, together with the correct `MONGODB_ADVERTISED_HOSTNAME`.
To enable full TLS encryption, set `tls.enabled` to `true`.
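A minimal sketch:
```bash
$ helm install my-release bitnami/mongodb --set tls.enabled=true
```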
### Generating the self-signed cert via pre install helm hooks
The secrets-ca.yaml file utilizes the Helm `pre-install` hook to ensure that the certs will only be generated on chart install. The genCA function creates a new self-signed x509 certificate authority, and the genSignedCert function creates an object with a pair of items in it, the cert and key, which we base64 encode and use in a YAML-like object. We pass the genSignedCert function the CN, an empty IP list (the nil part), the validity period, and the CA created previously.
To hold the signed certificate created above, we can use a Kubernetes Secret object; the initContainer sets up the rest.
Using Helm's hook annotations ensures that the certs will only be generated on chart install. This prevents the certs from being overridden whenever we upgrade the chart's released instance.
### Using your own CA
To use your own CA, set `tls.caCert` and `tls.caKey` with the appropriate base64-encoded data. The secrets-ca.yaml file will utilize this data to create the Secret. Please note: currently only RSA private keys are supported.
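A minimal sketch, assuming GNU `base64` and placeholder file names `ca.crt` and `ca.key`:
```bash
$ helm install my-release bitnami/mongodb \
    --set tls.enabled=true \
    --set tls.caCert="$(base64 -w0 ca.crt)" \
    --set tls.caKey="$(base64 -w0 ca.key)"
```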
### Accessing the cluster
To access the cluster, you will need to enable the initContainer, which generates the MongoDB server/client PEM needed to access the cluster. Please ensure that you include the $my_hostname section with your actual hostname, and that the alternative hostnames section contains the hostnames you want to allow to access the MongoDB replicaset. Additionally, if [external access](#replicaset-accessing-mongodb-nodes-from-outside-the-cluster) is enabled, the `loadBalancerIPs` are added to the alternative names list.
Note: You will be generating self-signed certs for the MongoDB deployment. The initContainer generates a new MongoDB private key which will be used to create your own Certificate Authority (CA); the public cert for the CA is created, then the Certificate Signing Request is created as well and signed using the private key of the CA previously created. Finally, the PEM bundle is created using the private key and public certificate. The process is repeated for each node in the cluster.
### Starting the cluster
After the certs have been generated and made available to the containers at the correct mount points, the mongod server will be started with TLS enabled. The available TLS modes are `disabled`, `allowTLS`, `preferTLS`, and `requireTLS`; this value can be changed via the `MONGODB_EXTRA_FLAGS` field using the `tlsMode` option. The client should now be able to connect to the TLS-enabled cluster with the provided certs.
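For instance, a sketch using the `extraEnvVars` property described above to require TLS (the flag syntax assumes the `mongod` `--tlsMode` option):
```yaml
extraEnvVars:
  - name: MONGODB_EXTRA_FLAGS
    value: "--tlsMode=requireTLS"
```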
### Setting Pod's affinity
This chart allows you to set your custom affinity using the `XXX.affinity` parameter(s). Find more information about Pod's affinity in the [kubernetes documentation](https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity).
As an alternative, you can use one of the preset configurations for pod affinity, pod anti-affinity, and node affinity available in the [bitnami/common](https://github.com/bitnami/charts/tree/master/bitnami/common#affinities) chart. To do so, set the `XXX.podAffinityPreset`, `XXX.podAntiAffinityPreset`, or `XXX.nodeAffinityPreset` parameters.
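For instance, a minimal values-file sketch using the soft anti-affinity preset (assuming the top-level parameter applies to data nodes and the `arbiter.*` prefix to the arbiter):
```yaml
podAntiAffinityPreset: soft
arbiter:
  podAntiAffinityPreset: soft
```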
## Troubleshooting
Find more information about how to deal with common errors related to Bitnami's Helm charts in [this troubleshooting guide](https://docs.bitnami.com/general/how-to/troubleshoot-helm-chart-issues).
## Upgrading
If authentication is enabled, it's necessary to set the `auth.rootPassword` (also `auth.replicaSetKey` when using a replicaset architecture) when upgrading for readiness/liveness probes to work properly. When you install this chart for the first time, some notes will be displayed providing the credentials you must use under the 'Credentials' section. Please note down the password, and run the command below to upgrade your chart:
```bash
$ helm upgrade my-release bitnami/mongodb --set auth.rootPassword=[PASSWORD] (--set auth.replicaSetKey=[REPLICASETKEY])
```
> Note: you need to substitute the placeholders [PASSWORD] and [REPLICASETKEY] with the values obtained in the installation notes.
### To 10.0.0
[On November 13, 2020, Helm v2 support was formally finished](https://github.com/helm/charts#status-of-the-project). This major version is the result of the required changes applied to the Helm chart to incorporate the different features added in Helm v3 and to be consistent with the Helm project itself regarding the Helm v2 EOL.
**What changes were introduced in this major version?**
- Previous versions of this Helm Chart use `apiVersion: v1` (installable by both Helm 2 and 3), this Helm Chart was updated to `apiVersion: v2` (installable by Helm 3 only). [Here](https://helm.sh/docs/topics/charts/#the-apiversion-field) you can find more information about the `apiVersion` field.
- Move dependency information from the *requirements.yaml* to the *Chart.yaml*
- After running `helm dependency update`, a *Chart.lock* file is generated containing the same structure used in the previous *requirements.lock*
- The different fields present in the *Chart.yaml* file have been ordered alphabetically in a homogeneous way for all the Bitnami Helm charts
**Considerations when upgrading to this version**
- If you want to upgrade to this version from a previous one installed with Helm v3, you shouldn't face any issues
- If you want to upgrade to this version using Helm v2, this scenario is not supported as this version doesn't support Helm v2 anymore
- If you installed the previous version with Helm v2 and want to upgrade to this version with Helm v3, please refer to the [official Helm documentation](https://helm.sh/docs/topics/v2_v3_migration/#migration-use-cases) about migrating from Helm v2 to v3
**Useful links**
- https://docs.bitnami.com/tutorials/resolve-helm2-helm3-post-migration-issues/
- https://helm.sh/docs/topics/v2_v3_migration/
- https://helm.sh/blog/migrate-from-helm-v2-to-helm-v3/
### To 9.0.0
MongoDB container images were updated to `4.4.x`, which can affect compatibility with older versions of MongoDB. Refer to the following guides to upgrade your applications:
- [Standalone](https://docs.mongodb.com/manual/release-notes/4.4-upgrade-standalone/)
- [Replica Set](https://docs.mongodb.com/manual/release-notes/4.4-upgrade-replica-set/)
### To 8.0.0
- The architecture used to configure MongoDB as a replicaset was completely refactored: both primary and secondary nodes are now part of the same statefulset.
- Chart labels were adapted to follow the Helm charts best practices.
- This version introduces `bitnami/common`, a [library chart](https://helm.sh/docs/topics/library_charts/#helm), as a dependency. More documentation about this new utility can be found [here](https://github.com/bitnami/charts/tree/master/bitnami/common#bitnami-common-library-chart). Please make sure that you have updated the chart dependencies before executing any upgrade.
- Several parameters were renamed or disappeared in favor of new ones on this major version. These are the most important ones:
- `replicas` is renamed to `replicaCount`.
- Authentication parameters are reorganized under the `auth.*` parameter:
- `usePassword` is renamed to `auth.enabled`.
- `mongodbRootPassword`, `mongodbUsername`, `mongodbPassword`, `mongodbDatabase`, and `replicaSet.key` are now `auth.rootPassword`, `auth.username`, `auth.password`, `auth.database`, and `auth.replicaSetKey` respectively.
- `securityContext.*` is deprecated in favor of `podSecurityContext` and `containerSecurityContext`.
- Parameters prefixed with `mongodb` are renamed removing the prefix. E.g. `mongodbEnableIPv6` is renamed to `enableIPv6`.
- Parameters affecting Arbiter nodes are reorganized under the `arbiter.*` parameter.
Consequences:
- Backwards compatibility is not guaranteed. To upgrade to `8.0.0`, install a new release of the MongoDB chart, and migrate your data by creating a backup of the database, and restoring it on the new release.
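A minimal backup/restore sketch using MongoDB's own tools (hostnames, credentials, and release names below are placeholders):
```bash
$ mongodump --uri="mongodb://root:[PASSWORD]@my-release-mongodb:27017" --archive=mongodb-backup.archive
$ mongorestore --uri="mongodb://root:[PASSWORD]@my-new-release-mongodb:27017" --archive=mongodb-backup.archive
```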
### To 7.0.0
From this version, the way of setting the ingress rules has changed. Instead of using `ingress.paths` and `ingress.hosts` as separate objects, you should now define the rules as objects inside the `ingress.hosts` value, for example:
```yaml
ingress:
hosts:
- name: mongodb.local
path: /
```
### To 6.0.0
From this version, `mongodbEnableIPv6` is set to `false` by default in order to work properly in most k8s clusters. If you want to use IPv6 support, you need to set this variable to `true` by adding `--set mongodbEnableIPv6=true` to your `helm` command.
You can find more information in the [`bitnami/mongodb` image README](https://github.com/bitnami/bitnami-docker-mongodb/blob/master/README.md).
### To 5.0.0
When enabling replicaset configuration, backwards compatibility is not guaranteed unless you modify the labels used on the chart's statefulsets.
Use the workaround below to upgrade from versions prior to 5.0.0. The following example assumes that the release name is `my-release`:
```console
$ kubectl delete statefulset my-release-mongodb-arbiter my-release-mongodb-primary my-release-mongodb-secondary --cascade=false
```
View File
@@ -0,0 +1,22 @@
# Patterns to ignore when building packages.
# This supports shell glob matching, relative path matching, and
# negation (prefixed with !). Only one pattern per line.
.DS_Store
# Common VCS dirs
.git/
.gitignore
.bzr/
.bzrignore
.hg/
.hgignore
.svn/
# Common backup files
*.swp
*.bak
*.tmp
*~
# Various IDEs
.project
.idea/
*.tmproj
.vscode/
Some files were not shown because too many files have changed in this diff