Helm Engineering Reference

Helm: The Kubernetes Package Manager, Demystified

A protocol-level reference for engineers who ship to Kubernetes daily. Covers chart anatomy, Go template mechanics, value resolution order, dependency management, hook semantics, release lifecycle, OCI registries, debugging strategies, common pitfalls, and production hardening.

Helm 3.x · Go Templates · Chart v2 API · OCI Artifacts · Helmfile

1. Core Concepts and Architecture

Helm 3 is a client-side tool. There is no Tiller, no server component. The Helm binary templates charts locally, sends rendered manifests to the Kubernetes API, and stores release metadata as Secrets in the target namespace.

Chart

A package — a directory of templates, default values, metadata, and optional dependencies. Charts are versioned independently from the application they deploy.

  • Defined by Chart.yaml
  • Packaged as .tgz archives
  • Stored in chart repos or OCI registries

Release

A running instance of a chart with a specific set of values. The same chart can be installed multiple times, each install creating a distinct release.

  • Named by the user at install time
  • Scoped to a namespace
  • Versioned (each upgrade = new revision)

Values

Configuration injected into templates at render time. Values cascade from multiple sources with a defined merge order.

  • values.yaml (chart defaults)
  • -f file.yaml (user overrides)
  • --set key=val (CLI overrides)

No server component. Helm 3 removed Tiller entirely. Release state is stored as Kubernetes Secrets (type helm.sh/release.v1) in the release's namespace. This means RBAC on the namespace controls who can manage releases — no special Helm RBAC needed.
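The release payload inside that Secret is gzip-compressed JSON, base64-encoded (and kubectl adds one more base64 layer when displaying Secret data). A minimal Python sketch of the decode path, using a fabricated payload in place of a real release Secret:

```python
import base64, gzip, json

def decode_release(payload_b64: str) -> dict:
    """Decode a Helm 3 release payload: base64 -> gunzip -> JSON."""
    raw = base64.b64decode(payload_b64)
    return json.loads(gzip.decompress(raw))

# Fabricated payload standing in for a real release Secret's "release" field
release = {"name": "myapp", "version": 3, "info": {"status": "deployed"}}
payload = base64.b64encode(gzip.compress(json.dumps(release).encode())).decode()

decoded = decode_release(payload)
print(decoded["name"], decoded["info"]["status"])  # myapp deployed
```

Against a live cluster the commonly used equivalent is kubectl get secret sh.helm.release.v1.NAME.vN -o jsonpath='{.data.release}' | base64 -d | base64 -d | gunzip (two base64 decodes: one for the Secret encoding, one for Helm's own).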

2. Chart Anatomy

A chart is a directory tree following a strict convention. Helm ignores files outside this structure. Understanding the layout is essential for authoring, debugging, and reviewing charts.

Chart.yaml

Chart.yaml: required fields + common optional
apiVersion: v2                 # v2 = Helm 3 (v1 = Helm 2, do not use)
name: myapp
description: Production deployment chart for myapp
type: application              # "application" (default) or "library"
version: 1.4.2                 # Chart version (semver, bump on any chart change)
appVersion: "3.8.1"            # App version (informational, shown in helm list)

# Kubernetes version constraint
kubeVersion: ">= 1.25.0-0"

# Dependencies (replaces requirements.yaml from Helm 2)
dependencies:
  - name: redis
    version: "17.x"           # Semver range
    repository: "https://charts.bitnami.com/bitnami"
    condition: redis.enabled   # Toggle via values
  - name: postgresql
    version: "12.5.9"
    repository: "oci://registry-1.docker.io/bitnamicharts"
    alias: db                  # Reference as .Values.db in templates

maintainers:
  - name: Platform Team
    email: platform@company.com

version vs appVersion: These are independent. version is the chart's own semver — bump it whenever you change templates, values, or dependencies. appVersion is the version of the application inside the chart (e.g., your Docker image tag). Helm uses version for dependency resolution and upgrade diffing. appVersion is purely informational.

values.schema.json

values.schema.json: validates values before rendering
{
  "$schema": "https://json-schema.org/draft-07/schema#",
  "type": "object",
  "required": ["image", "replicaCount"],
  "properties": {
    "replicaCount": {
      "type": "integer",
      "minimum": 1,
      "maximum": 100
    },
    "image": {
      "type": "object",
      "required": ["repository", "tag"],
      "properties": {
        "repository": { "type": "string", "minLength": 1 },
        "tag": { "type": "string", "pattern": "^[a-zA-Z0-9._-]+$" },
        "pullPolicy": { "type": "string", "enum": ["Always", "IfNotPresent", "Never"] }
      }
    }
  }
}

Use JSON Schema. It catches misconfigured values before they reach the Kubernetes API. Helm runs schema validation during install, upgrade, lint, and template. This is the single most underused Helm feature for preventing production incidents.
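To see what schema validation buys you, here is a deliberately minimal Python sketch of the required-key and type checks (Helm uses a full JSON Schema implementation; the values, schema, and helper below are toy illustrations, not Helm code):

```python
def check_values(values: dict, schema: dict, path: str = "") -> list:
    """Minimal JSON-Schema-style check: 'required' keys plus the
    'type' keyword for integers, strings, and objects only."""
    errors = []
    for key in schema.get("required", []):
        if key not in values:
            errors.append(f"{path}{key}: required value missing")
    for key, sub in schema.get("properties", {}).items():
        if key not in values:
            continue
        val, want = values[key], sub.get("type")
        if want == "integer" and not isinstance(val, int):
            errors.append(f"{path}{key}: expected integer")
        elif want == "string" and not isinstance(val, str):
            errors.append(f"{path}{key}: expected string")
        elif want == "object" and isinstance(val, dict):
            errors.extend(check_values(val, sub, f"{path}{key}."))
    return errors

values = {"replicaCount": "3", "image": {"repository": "myapp"}}
schema = {
    "required": ["image", "replicaCount"],
    "properties": {
        "replicaCount": {"type": "integer"},
        "image": {"type": "object", "required": ["repository", "tag"]},
    },
}
for err in check_values(values, schema):
    print(err)
# replicaCount: expected integer
# image.tag: required value missing
```

Both mistakes here (a string where an integer belongs, a missing image.tag) would otherwise surface only as a broken rendered manifest or a runtime failure.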

3. Go Template Language

Helm templates use Go's text/template package extended with Sprig functions and Helm-specific builtins. The syntax is powerful but unforgiving — whitespace, scoping, and type coercion cause most template bugs.

Syntax Fundamentals

Syntax, purpose, and a typical example for each construct:

  • {{ .Values.key }}: access a value, e.g. {{ .Values.image.tag }}
  • {{- ... -}}: trim whitespace (left/right), e.g. {{- if .Values.ingress.enabled }}
  • {{ include "name" . }}: call a named template, e.g. {{ include "mychart.labels" . | nindent 4 }}
  • {{ toYaml .Values.x }}: render a value as YAML, e.g. {{ toYaml .Values.resources | nindent 12 }}
  • {{ tpl .Values.x . }}: render a value as a template, e.g. {{ tpl .Values.config.template . }}
  • {{ required "msg" .Values.x }}: fail if the value is empty, e.g. {{ required "image.tag is required" .Values.image.tag }}
  • {{ default "val" .Values.x }}: default if nil/empty, e.g. {{ default "IfNotPresent" .Values.image.pullPolicy }}
  • {{ .Release.Name }}: built-in release object (name, namespace, revision, etc.)

Built-in Objects

  • .Release (.Name, .Namespace, .Revision, .IsUpgrade, .IsInstall): resource naming, conditional logic
  • .Chart (.Name, .Version, .AppVersion): labels, annotations
  • .Values (merged values from all sources): everything user-configurable
  • .Capabilities (.KubeVersion, .APIVersions): conditional API version selection
  • .Template (.Name, .BasePath): ConfigMap checksum annotations
  • .Files (.Get, .GetBytes, .Glob, .AsConfig, .AsSecrets): embed config files from the chart

Flow Control

Go template patterns: conditionals, loops, and with blocks
# if / else if / else
{{- if .Values.ingress.enabled }}
apiVersion: networking.k8s.io/v1
kind: Ingress
...
{{- end }}

# Negation
{{- if not .Values.autoscaling.enabled }}
replicas: {{ .Values.replicaCount }}
{{- end }}

# Boolean AND / OR
{{- if and .Values.metrics.enabled .Values.metrics.serviceMonitor.enabled }}
...
{{- end }}

# range (loop over list)
{{- range .Values.ingress.hosts }}
- host: {{ .host | quote }}
  http:
    paths:
      {{- range .paths }}
      - path: {{ .path }}
        pathType: {{ .pathType }}
      {{- end }}
{{- end }}

# range (loop over map with $key, $value)
{{- range $key, $value := .Values.env }}
- name: {{ $key }}
  value: {{ $value | quote }}
{{- end }}

# with (change scope — BEWARE: . is rebound)
{{- with .Values.nodeSelector }}
nodeSelector:
  {{- toYaml . | nindent 2 }}
{{- end }}

# Access parent scope inside with/range using $
{{- with .Values.tolerations }}
tolerations:
  {{- toYaml . | nindent 2 }}
# Still need release name? Use $.Release.Name ($ = root scope)
{{- end }}

The with scope trap. Inside a with block, . is rebound to the value passed to with. You cannot access .Values, .Release, etc. via . anymore. Use $ (the root scope) instead: $.Release.Name, $.Values.image.tag.

Named Templates (_helpers.tpl)

_helpers.tpl: reusable template definitions
{{/* Generate standard labels */}}
{{- define "mychart.labels" -}}
helm.sh/chart: {{ include "mychart.chart" . }}
app.kubernetes.io/name: {{ include "mychart.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/version: {{ .Chart.AppVersion | quote }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
{{- end }}

{{/* Generate selector labels (subset of above — must be immutable) */}}
{{- define "mychart.selectorLabels" -}}
app.kubernetes.io/name: {{ include "mychart.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
{{- end }}

{{/* Chart name + version for chart label */}}
{{- define "mychart.chart" -}}
{{ printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" }}
{{- end }}

{{/* Fullname: release-chart, truncated to 63 chars */}}
{{- define "mychart.fullname" -}}
{{- if .Values.fullnameOverride }}
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" }}
{{- else }}
{{- printf "%s-%s" .Release.Name .Chart.Name | trunc 63 | trimSuffix "-" }}
{{- end }}
{{- end }}

include vs template

  • {{ template "name" . }} injects output directly — cannot be piped.
  • {{ include "name" . }} returns a string — can be piped to nindent, quote, etc.
  • Always use include. There is no reason to use template in Helm charts.

Whitespace Control

  • {{- trims whitespace before the tag (left chomp)
  • -}} trims whitespace after the tag (right chomp)
  • nindent N prepends a newline and indents every line by N spaces (use with include)
  • indent N indents every line without the leading newline (rarely what you want)
  • Most YAML rendering bugs are whitespace bugs. Use helm template to verify output.
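The difference between indent and nindent is easiest to see as plain string operations. A Python sketch of the semantics (helper names are illustrative, not Helm's implementation):

```python
def indent(n: int, s: str) -> str:
    """Prefix every line with n spaces (like Sprig's indent)."""
    return "\n".join(" " * n + line for line in s.splitlines())

def nindent(n: int, s: str) -> str:
    """A newline, then indent -- safe to place directly after a key."""
    return "\n" + indent(n, s)

block = "cpu: 100m\nmemory: 128Mi"
print("resources:" + nindent(2, block))
# resources:
#   cpu: 100m
#   memory: 128Mi
```

Because nindent supplies its own leading newline, the template tag can sit flush against the key ({{- ... | nindent 2 }}) and still produce correctly indented YAML.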

Useful Sprig Functions

  • quote: {{ .Values.name | quote }} → "myapp"
  • upper / lower: {{ .Values.env | upper }} → PRODUCTION
  • replace: {{ .Values.x | replace "." "-" }} → my-app
  • trunc: {{ .Values.name | trunc 63 }} → first 63 characters
  • b64enc / b64dec: {{ .Values.secret | b64enc }} → Base64 encode/decode
  • sha256sum: {{ include "..." . | sha256sum }} → checksum for rollout trigger
  • toJson / fromJson: {{ .Values.config | toJson }} → JSON serialization
  • ternary: {{ ternary "a" "b" .Values.flag }} → "a" if true, "b" if false
  • hasKey: {{ if hasKey .Values "extra" }} → check if a map key exists
  • merge / mustMergeOverwrite: {{ merge .Values.defaults .Values.overrides }} → deep merge maps
  • lookup: {{ lookup "v1" "Secret" "ns" "name" }} → query the live cluster (empty during helm template)

4. Values: Resolution Order and Merge Semantics

Values come from multiple sources and are deep-merged. Later sources override earlier ones. Understanding the merge order prevents the most common class of "why isn't my value taking effect" bugs.
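The merge rule itself is simple: maps merge recursively, while scalars and lists are replaced wholesale by the later source. Helm is written in Go; this Python sketch only illustrates the rule:

```python
def deep_merge(base: dict, override: dict) -> dict:
    """Later source wins; nested maps merge key-by-key (Helm-style)."""
    out = dict(base)
    for key, val in override.items():
        if isinstance(val, dict) and isinstance(out.get(key), dict):
            out[key] = deep_merge(out[key], val)
        else:
            out[key] = val  # scalars and lists are replaced, not merged
    return out

chart_defaults = {"image": {"repository": "myapp", "tag": "latest"}, "replicaCount": 1}
prod_file      = {"replicaCount": 3, "image": {"tag": "2.0.0"}}
cli_set        = {"image": {"tag": "abc123"}}

merged = deep_merge(deep_merge(chart_defaults, prod_file), cli_set)
print(merged["image"])         # {'repository': 'myapp', 'tag': 'abc123'}
print(merged["replicaCount"])  # 3
```

Note the list behavior: a list in a later source replaces the whole list from an earlier source, which is why overriding one element of, say, tolerations requires restating the entire list.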

Value Precedence (lowest to highest)

1. Chart defaults
values.yaml in the chart directory. Lowest priority.
2. Parent chart values
If this is a subchart, parent's values.yaml can override subchart values under the subchart's key.
3. User values files
-f values-prod.yaml — multiple -f flags are merged left to right (rightmost wins).
4. --set / --set-string
--set image.tag=v2.1.0 — highest priority. Overrides everything above.
5. --set-json
--set-json 'resources={"limits":{"cpu":"2"}}' — same priority as --set, parsed as JSON.
Value override examples: CLI precedence
# Multiple value files (later files override earlier)
helm upgrade --install myapp ./chart \
  -f values.yaml \
  -f values-prod.yaml \
  -f values-secrets.yaml

# --set overrides everything (careful with complex values)
helm upgrade --install myapp ./chart \
  -f values-prod.yaml \
  --set image.tag=abc123 \
  --set replicaCount=5

# --set with special characters
--set ingress.hosts[0].host=api.example.com       # Array index
--set nodeSelector."kubernetes\.io/os"=linux       # Escaped dots in keys
--set config.data="line1\nline2"                   # Newlines

# --set-string forces string type (avoids YAML type coercion)
--set-string image.tag=1.0                         # "1.0" not 1.0 (float)
--set-string enabled=true                          # "true" not true (bool)

# --set-json for complex structures
--set-json 'tolerations=[{"key":"dedicated","operator":"Equal","value":"gpu"}]'

# View final merged values for a deployed release
helm get values myapp -n production
helm get values myapp -n production --all           # Include defaults

YAML type coercion gotcha. --set image.tag=1.0 produces the float 1, not the string "1.0". --set enabled=true produces a boolean, not a string. Use --set-string when the value must remain a string. In templates, always | quote values that must be strings in YAML output.
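The coercion happens because unquoted --set values are resolved like YAML scalars. A rough Python sketch of that typing logic (a simplification for illustration, not a full YAML parser):

```python
def yaml_scalar(token: str):
    """Rough sketch of how an unquoted YAML scalar gets typed."""
    if token in ("true", "True"):
        return True
    if token in ("false", "False"):
        return False
    if token in ("null", "~", ""):
        return None
    try:
        return int(token)
    except ValueError:
        pass
    try:
        return float(token)   # "1.0" becomes the float 1.0
    except ValueError:
        return token          # everything else stays a string

print(yaml_scalar("1.0"), type(yaml_scalar("1.0")).__name__)    # 1.0 float
print(yaml_scalar("true"), type(yaml_scalar("true")).__name__)  # True bool
print(yaml_scalar("v1.0"))                                      # v1.0
```

--set-string skips this resolution entirely and keeps the raw token as a string, which is why it is the safe choice for image tags and annotation values.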

5. Dependencies and Subcharts

Charts can depend on other charts. Dependencies are declared in Chart.yaml, resolved from repositories, and stored in the charts/ directory. Understanding how values flow between parent and child is critical.

Dependency management commands: Chart.lock workflow
# Download dependencies into charts/ based on Chart.yaml
helm dependency update ./mychart

# Rebuild charts/ from Chart.lock (for CI reproducibility)
helm dependency build ./mychart

# List current dependency state
helm dependency list ./mychart

# Typical CI workflow:
# 1. Developer runs: helm dependency update (updates Chart.lock)
# 2. Commit both Chart.yaml and Chart.lock
# 3. CI runs: helm dependency build (uses locked versions)

Passing Values to Subcharts

Parent values.yaml: subchart value injection
# Values for this chart
replicaCount: 3
image:
  repository: myapp
  tag: "2.0.0"

# Values for the "redis" dependency (key matches dependency name)
redis:
  enabled: true
  architecture: standalone
  auth:
    enabled: true
    password: "override-me-with-secret"
  master:
    resources:
      requests:
        cpu: 100m
        memory: 128Mi

# Values for the "postgresql" dependency (using alias "db")
db:
  enabled: true
  auth:
    postgresPassword: "override-me"
    database: myapp

# Global values (accessible to ALL charts and subcharts as .Values.global)
global:
  imagePullSecrets:
    - name: regcred
  storageClass: gp3-encrypted

Condition & Tags

  • condition: redis.enabled — boolean in parent values toggles the entire subchart on/off
  • tags: [backend] — group dependencies; --set tags.backend=false disables all tagged deps
  • Condition takes precedence over tags if both are set

Global Values

  • .Values.global.* is accessible from every chart and subchart
  • Use for cross-cutting concerns: image pull secrets, storage class, domain name
  • Subchart values under their key are not accessible from the parent — only globals are shared

Commit Chart.lock. Like package-lock.json or go.sum, Chart.lock pins exact dependency versions. Without it, helm dependency update resolves ranges fresh each time, potentially pulling breaking changes. CI should run helm dependency build (which uses the lock file), not update.

6. Release Lifecycle

Every Helm operation modifies release state. Understanding the exact sequence of events during install, upgrade, and rollback prevents surprises in production.

Command Reference

  • helm install: create a new release. Key flags: --wait, --timeout, --create-namespace, --dry-run
  • helm upgrade: update an existing release. Key flags: --install, --atomic, --cleanup-on-fail, --reuse-values, --reset-values
  • helm upgrade --install: install if absent, upgrade if present. Idempotent; prefer this for CI/CD
  • helm rollback: revert to a previous revision, e.g. helm rollback myapp 3 (to revision 3)
  • helm uninstall: delete a release and all its resources. --keep-history retains release metadata for audit
  • helm template: render manifests locally (no cluster needed). Key flags: --debug, --show-only templates/x.yaml
  • helm lint: check a chart for errors and warnings. --strict treats warnings as errors
  • helm test: run test pods defined in templates/tests/. --logs displays test pod output

Upgrade Sequence (what happens internally)

1. Merge values
Chart defaults, then user values files, then --set overrides (later wins). Validate against values.schema.json if present.
2. Render templates
Execute Go templates with merged values. Produce Kubernetes YAML manifests.
3. Run pre-upgrade hooks
Create hook resources, wait for completion, delete (per hook deletion policy).
4. Apply manifests
Three-way strategic merge patch: live state ↔ old manifest ↔ new manifest. Creates, updates, deletes resources.
5. Wait (if --wait)
Poll until Deployments, StatefulSets, and Jobs reach ready state, or timeout.
6. Run post-upgrade hooks
Execute post-upgrade hooks, wait for completion.
7. Store release
Persist new release revision as a Secret in the namespace. Previous revision retained for rollback.
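Step 4's three-way merge is what lets out-of-band changes (an HPA scaling replicas, for example) survive an upgrade that never touched that field. A heavily simplified Python sketch for one flat manifest (the real strategic merge patch is schema-aware and far more involved):

```python
def three_way_merge(old: dict, new: dict, live: dict) -> dict:
    """Sketch of Helm's three-way merge for a flat manifest: fields the
    chart changed are applied, fields the chart dropped are removed,
    and out-of-band edits to untouched fields are preserved."""
    result = dict(live)
    for key in set(old) | set(new):
        if key in new:
            if old.get(key) != new[key] or key not in live:
                result[key] = new[key]   # chart changed (or re-adds) the field
        elif key in old:
            result.pop(key, None)        # chart removed the field
    return result

old_manifest = {"replicas": 3, "image": "myapp:1.0"}
new_manifest = {"replicas": 3, "image": "myapp:2.0"}
live_state   = {"replicas": 5, "image": "myapp:1.0", "paused": True}  # HPA scaled it

print(three_way_merge(old_manifest, new_manifest, live_state))
# {'replicas': 5, 'image': 'myapp:2.0', 'paused': True}
```

The chart changed only the image, so the upgrade updates the image while leaving the HPA's replica count and the out-of-band paused field alone.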
Production deploy pattern: safe, idempotent, observable
# The canonical production upgrade command
helm upgrade --install myapp ./chart \
  --namespace production \
  --create-namespace \
  --values values-prod.yaml \
  --set image.tag=${GIT_SHA} \
  --atomic \
  --timeout 10m \
  --wait
# --atomic = auto-rollback on failure; --timeout = max wait for readiness;
# --wait = block until all resources are ready

# --atomic implies --wait and --cleanup-on-fail:
#   - If upgrade fails → rollback to previous revision
#   - If install fails → delete the failed release entirely
#   - This is the single most important flag for CI/CD safety

--reuse-values is a footgun. It merges the previous release's values with any new --set overrides, but ignores new defaults added to values.yaml. If you add a new key to your chart's values.yaml, --reuse-values will not pick it up. Prefer --reset-values (the default) and always pass the full values file explicitly.

Inspect Release State

Release inspection commands: debugging deployed state
# List all releases across namespaces
helm list -A
helm list -A --filter 'myapp'

# Release history (revisions, status, timestamps)
helm history myapp -n production

# Get the values used for the current release
helm get values myapp -n production
helm get values myapp -n production --all        # Including defaults
helm get values myapp -n production --revision 5  # Specific revision

# Get the rendered manifests from a deployed release
helm get manifest myapp -n production

# Get everything (values + manifest + notes + hooks)
helm get all myapp -n production

# Compare two revisions (requires helm-diff plugin)
helm diff revision myapp 6 7 -n production

7. Hooks

Hooks are templated resources with special annotations that run at specific points in the release lifecycle. Common uses: database migrations, cache warming, cleanup jobs, notifications.

  • pre-install: after templates render, before any resources are created. Typical use: validate prerequisites, create secrets
  • post-install: after all resources are loaded. Typical use: database seeding, notifications
  • pre-upgrade: after templates render, before the upgrade is applied. Typical use: database migrations, backups
  • post-upgrade: after the upgrade completes. Typical use: cache invalidation, smoke tests
  • pre-delete: before any resources are deleted. Typical use: final backup, drain connections
  • post-delete: after all resources are deleted. Typical use: clean up external resources
  • pre-rollback: before the rollback is applied. Typical use: notify, snapshot state
  • post-rollback: after the rollback completes. Typical use: restore migrations
  • test: when helm test is invoked. Typical use: integration / smoke tests
Pre-upgrade hook: database migration Job (a common production pattern)
apiVersion: batch/v1
kind: Job
metadata:
  name: {{ include "mychart.fullname" . }}-migrate
  labels:
    {{- include "mychart.labels" . | nindent 4 }}
  annotations:
    "helm.sh/hook": pre-upgrade,pre-install
    "helm.sh/hook-weight": "-5"            # Lower weight runs first
    "helm.sh/hook-delete-policy": before-hook-creation,hook-succeeded
spec:
  backoffLimit: 1
  activeDeadlineSeconds: 300
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: migrate
        image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
        command: ["./migrate", "--target", "latest"]
        env:
        - name: DATABASE_URL
          valueFrom:
            secretKeyRef:
              name: db-credentials
              key: url

Hook Delete Policies

  • before-hook-creation: delete the previous hook resource before creating the new one. Most common; use for Jobs (avoids name conflicts)
  • hook-succeeded: delete after the hook succeeds. Cleans up on success, keeps failures around for debugging
  • hook-failed: delete after the hook fails. Rarely useful alone

Hook ordering. Hooks for the same lifecycle event are sorted by hook-weight (ascending, default 0). Hooks with equal weight have no guaranteed order, so set explicit weights whenever order matters. Hooks are blocking: Helm creates each hook resource and waits for it to reach ready/complete before proceeding, independent of --wait.
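The weight sort is simple to state precisely: parse the annotation string as an integer, default 0, sort ascending. A Python sketch with hypothetical hook names:

```python
def hook_order(hooks: list) -> list:
    """Order hooks for one lifecycle event by helm.sh/hook-weight
    (a string annotation parsed as an integer, defaulting to 0)."""
    return [h["name"] for h in sorted(
        hooks,
        key=lambda h: int(h.get("annotations", {}).get("helm.sh/hook-weight", "0")))]

hooks = [
    {"name": "seed-data",  "annotations": {"helm.sh/hook-weight": "5"}},
    {"name": "migrate-db", "annotations": {"helm.sh/hook-weight": "-5"}},
    {"name": "notify"},  # no weight -> 0
]
print(hook_order(hooks))  # ['migrate-db', 'notify', 'seed-data']
```

Note the weight must be a quoted string in YAML (annotation values are strings); an unquoted number fails validation.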

8. Repositories & OCI Registries

Charts can be distributed via traditional Helm repositories (index.yaml + HTTP) or OCI-compliant registries. OCI is the future — it uses the same infrastructure as container images.

Traditional Repositories

Repo management: index.yaml based
# Add a repo
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo add jetstack https://charts.jetstack.io

# Update all repos (fetches latest index.yaml)
helm repo update

# Search for charts
helm search repo nginx --versions
helm search repo redis --version "17.x"

# Show default values for a chart
helm show values bitnami/redis --version 17.15.0

# Install from repo
helm install myredis bitnami/redis \
  --version 17.15.0 -f redis-values.yaml

OCI Registries

OCI workflow: container registry for charts
# Login to OCI registry
helm registry login registry.example.com

# Package chart
helm package ./mychart

# Push to OCI registry
helm push mychart-1.4.2.tgz \
  oci://registry.example.com/charts

# Pull from OCI
helm pull oci://registry.example.com/charts/mychart \
  --version 1.4.2

# Install directly from OCI
helm install myapp \
  oci://registry.example.com/charts/mychart \
  --version 1.4.2

# Use in Chart.yaml dependencies
# repository: "oci://registry.example.com/charts"

OCI advantages. No index.yaml to maintain. Native to ECR, GCR, ACR, GHCR, Harbor, Docker Hub. Same auth as container images. Same signing/scanning tools (Cosign, Notation). Immutable tags. Use OCI for new projects; traditional repos for backwards compatibility.

9. Debugging & Troubleshooting

Most Helm issues fall into three categories: template rendering errors, value merge confusion, and Kubernetes apply failures. This section covers the diagnostic toolkit for each.

Template Debugging

Render and inspect templates: local debugging without a cluster
# Render all templates locally (no cluster required)
helm template myapp ./chart -f values-prod.yaml

# Render a single template
helm template myapp ./chart --show-only templates/deployment.yaml

# Render with debug output (shows computed values + template errors)
helm template myapp ./chart -f values-prod.yaml --debug

# Dry run against the cluster (validates API server side)
helm upgrade --install myapp ./chart -f values-prod.yaml --dry-run

# Dry-run with server-side validation (catches more errors)
helm upgrade --install myapp ./chart -f values-prod.yaml --dry-run=server

# Lint the chart (catches structure + template issues)
helm lint ./chart -f values-prod.yaml --strict

# Compare current vs new (requires helm-diff plugin)
helm diff upgrade myapp ./chart -f values-prod.yaml -n production

Release Debugging

Inspect a broken release: what went wrong?
# Check release status
helm status myapp -n production

# View history — look for FAILED or PENDING_UPGRADE
helm history myapp -n production
# REVISION  STATUS          DESCRIPTION
# 5         deployed        Upgrade complete
# 6         failed          Upgrade "myapp" failed: timed out
# 7         deployed        Rollback to 5

# Get the manifest that was applied
helm get manifest myapp -n production --revision 6

# Get the values that were used for the failed revision
helm get values myapp -n production --revision 6

# Compare values between revisions
diff <(helm get values myapp -n prod --revision 5) \
     <(helm get values myapp -n prod --revision 6)

# Check Kubernetes events for the namespace
kubectl get events -n production --sort-by='.lastTimestamp' | tail -30

# Check pod status
kubectl get pods -n production -l app.kubernetes.io/instance=myapp

Common Error Messages

  • UPGRADE FAILED: another operation is in progress. Cause: a previous install/upgrade did not complete cleanly. Fix: helm rollback myapp 0, or delete the pending release secret manually
  • Error: rendered manifests contain a resource that already exists. Cause: the resource was created outside Helm, or a release name mismatch. Fix: adopt it by adding Helm's metadata with kubectl annotate + kubectl label, or delete and reinstall
  • Error: INSTALLATION FAILED: unable to build kubernetes objects. Cause: invalid YAML in the rendered templates. Fix: helm template --debug to find the malformed output
  • Error: timed out waiting for the condition. Cause: pods didn't reach Ready in time. Fix: check pod logs, events, resource limits, image pulls; increase --timeout
  • cannot patch "X" with kind Deployment: ... field is immutable. Cause: trying to change an immutable field (label selectors, etc.). Fix: delete the resource first or use a different release name
  • Error: YAML parse error on templates/x.yaml: error converting YAML to JSON. Cause: the template produced invalid YAML (usually bad indentation). Fix: helm template --show-only templates/x.yaml --debug
  • nil pointer evaluating interface {}.key. Cause: accessing a value path that doesn't exist. Fix: guard with {{ if .Values.x }} or use {{ default "" .Values.x }}
  • Release "x" in namespace "y" failed and has been rolled back. Cause: --atomic detected a failure and auto-rolled back. Fix: check the event log and pod status for the root cause; this is --atomic working correctly

Stuck Release Recovery

Fixing "another operation in progress": manual release-state surgery
# Option 1: Rollback to last good revision
helm rollback myapp 0 -n production        # 0 = previous revision

# Option 2: If rollback also fails, manually patch the release secret
# Find the stuck release secret
kubectl get secrets -n production -l owner=helm,name=myapp

# The latest secret will have status "pending-upgrade" or "pending-install".
# Either delete that pending-revision secret, or patch its status label
# to "failed" so Helm can proceed:
kubectl patch secret sh.helm.release.v1.myapp.v8 -n production \
  --type=merge -p '{"metadata":{"labels":{"status":"failed"}}}'

# Then retry the upgrade
helm upgrade --install myapp ./chart -f values.yaml -n production --atomic

# Option 3: Nuclear — uninstall and reinstall (loses history)
helm uninstall myapp -n production
helm install myapp ./chart -f values.yaml -n production

10. Common Pitfalls and Anti-Patterns

These are the mistakes that cause production incidents. Every item on this list has been learned the hard way.

Selector Label Mutation

Kubernetes Deployment matchLabels are immutable after creation. If your _helpers.tpl selector template includes app.kubernetes.io/version and you bump it, the next upgrade fails.

  • Selector labels must be stable across upgrades
  • Only use name + instance in selectors
  • Put version in metadata labels and annotations, never selectors

Forgetting quote

YAML values like true, 1.0, null, yes are interpreted as booleans, floats, nulls. Without quoting, annotations and env values get silently mangled.

  • Always {{ .Values.x | quote }} for annotations
  • Always quote env var value: fields
  • "true" != true in YAML

ConfigMap Rollout Blindness

Updating a ConfigMap doesn't trigger a Deployment rollout. Pods keep running with the old config until they're restarted.

  • Add a checksum annotation to force rollout:
  • checksum/config: {{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }}
  • Or use Reloader controller for automatic restarts
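The checksum trick works because any change to the rendered ConfigMap changes the annotation value, which changes the pod template, which triggers a rollout. In Python terms:

```python
import hashlib

def checksum_annotation(rendered_configmap: str) -> str:
    """sha256 of the rendered ConfigMap template, as used for the
    checksum/config pod-template annotation."""
    return hashlib.sha256(rendered_configmap.encode()).hexdigest()

v1 = checksum_annotation("data:\n  LOG_LEVEL: info\n")
v2 = checksum_annotation("data:\n  LOG_LEVEL: debug\n")
print(v1 != v2)  # True -> pod template changed -> Deployment rolls out
```

An unchanged ConfigMap hashes to the same value, so upgrades that don't touch the config don't cause a spurious restart.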

--reuse-values Drift

This flag carries forward the previous release's values, ignoring any new defaults added to the chart. Over time, releases drift from chart defaults.

  • New values.yaml keys are silently missing
  • Always use --reset-values (default) + explicit -f files
  • Store your values files in Git, not in Helm release state

CRD Lifecycle Trap

CRDs in the crds/ directory are installed on helm install but never upgraded or deleted by Helm. This is by design (CRDs are cluster-scoped and dangerous to modify).

  • For CRD upgrades, apply them manually or via a separate process
  • Don't put CRDs in templates/ either — ordering issues
  • Many operators use a separate CRD-only chart

Resource Deletion on Uninstall

helm uninstall deletes everything it manages, including PVCs if they were created by the chart (not by StatefulSet volumeClaimTemplates).

  • Add "helm.sh/resource-policy": keep annotation to PVCs and critical resources
  • StatefulSet PVCs created via volumeClaimTemplates are NOT managed by Helm and survive uninstall

The nil map trap. In Go templates, accessing a nested key on a nil map panics: {{ .Values.foo.bar.baz }} fails if foo is nil. Always guard with {{ if .Values.foo }}{{ .Values.foo.bar }}{{ end }} or provide defaults in values.yaml so the parent key always exists as an empty map.
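Sprig's dig function encodes exactly this guard: walk the path, return a default as soon as a parent is missing. A Python sketch of the same idea (the helper name mirrors Sprig's; the values are illustrative):

```python
def dig(values: dict, *path, default=None):
    """Safe nested lookup: return `default` instead of blowing up
    when any key along the path is missing or nil."""
    node = values
    for key in path:
        if not isinstance(node, dict) or key not in node:
            return default
        node = node[key]
    return node

print(repr(dig({"foo": None}, "foo", "bar", "baz", default="")))  # ''
print(dig({"foo": {"bar": {"baz": 7}}}, "foo", "bar", "baz"))     # 7
```

In a template the equivalent is {{ dig "foo" "bar" "baz" "" .Values }}, which never panics regardless of which level is missing.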

11. Tips, Tricks, and Production Patterns

Hard-won patterns from operating Helm at scale.

Force Rollout on Config Change

Checksum annotation pattern: triggers rollout when configmap/secret changes
spec:
  template:
    metadata:
      annotations:
        # Rollout when ConfigMap changes
        checksum/config: {{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }}
        # Rollout when Secret changes
        checksum/secret: {{ include (print $.Template.BasePath "/secret.yaml") . | sha256sum }}

Fail Fast with required

Required values pattern: catch missing values at render time
# Fail at render time, not at deploy time
image: "{{ required "image.repository is required" .Values.image.repository }}:{{ required "image.tag is required" .Values.image.tag }}"

# Useful error message in helm template output:
# Error: execution error at (mychart/templates/deployment.yaml:25):
#   image.tag is required

Embed Files from Chart

.Files object: embed config files, scripts, etc.
# Embed a config file as ConfigMap data
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ include "mychart.fullname" . }}-config
data:
  # Single file
  nginx.conf: |-
    {{- .Files.Get "files/nginx.conf" | nindent 4 }}
  # All files matching a glob
  {{- range $path, $_ := .Files.Glob "files/configs/*.yaml" }}
  {{ base $path }}: |-
    {{- $.Files.Get $path | nindent 4 }}
  {{- end }}

# Embed as a Secret (auto base64 encoded)
apiVersion: v1
kind: Secret
metadata:
  name: {{ include "mychart.fullname" . }}-certs
type: Opaque
data:
  {{- (.Files.Glob "certs/*").AsSecrets | nindent 2 }}

Conditional API Version

capabilities checksupport multiple K8s versions
# Use the right API version based on cluster capabilities
{{- if .Capabilities.APIVersions.Has "autoscaling/v2" }}
apiVersion: autoscaling/v2
{{- else }}
apiVersion: autoscaling/v2beta2
{{- end }}
kind: HorizontalPodAutoscaler

# Check minimum Kubernetes version
{{- if semverCompare ">= 1.25-0" .Capabilities.KubeVersion.Version }}
# Use PodDisruptionBudget policy/v1
{{- end }}

Protect Resources from Deletion

resource-policy: keep (survive helm uninstall)
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: {{ include "mychart.fullname" . }}-data
  annotations:
    "helm.sh/resource-policy": keep      # Helm will NOT delete this on uninstall
spec:
  accessModes: [ReadWriteOnce]
  resources:
    requests:
      storage: 100Gi

Library Charts

Shared templates via library charts: DRY across multiple charts
# Chart.yaml of the library chart
apiVersion: v2
name: common-templates
type: library                  # Cannot be installed directly
version: 1.0.0

# Chart.yaml of the consuming chart
dependencies:
  - name: common-templates
    version: "1.x"
    repository: "oci://registry.example.com/charts"

# Use in templates:
{{- include "common-templates.labels" . | nindent 4 }}
{{- include "common-templates.deployment" . }}

12. Helmfile — Declarative Multi-Release Management

Helmfile is to Helm what Terraform is to cloud APIs: a declarative layer that manages multiple releases, environments, and value layering in a single config file. Essential for clusters with 10+ Helm releases.

helmfile.yaml: complete multi-environment example
repositories:
  - name: bitnami
    url: https://charts.bitnami.com/bitnami
  - name: ingress-nginx
    url: https://kubernetes.github.io/ingress-nginx
  - name: prometheus
    url: https://prometheus-community.github.io/helm-charts

# Environment-specific values
environments:
  dev:
    values:
      - env/defaults.yaml
      - env/dev.yaml
  staging:
    values:
      - env/defaults.yaml
      - env/staging.yaml
  production:
    values:
      - env/defaults.yaml
      - env/production.yaml

# Helm defaults applied to all releases
helmDefaults:
  atomic: true
  timeout: 600
  wait: true
  createNamespace: true

# Release definitions
releases:
  - name: ingress-nginx
    namespace: ingress
    chart: ingress-nginx/ingress-nginx
    version: 4.8.3
    values:
      - values/ingress.yaml

  - name: prometheus
    namespace: monitoring
    chart: prometheus/kube-prometheus-stack
    version: 55.5.0
    values:
      - values/prometheus.yaml
      - values/prometheus-{{ .Environment.Name }}.yaml

  - name: myapp
    namespace: {{ .Environment.Name }}
    chart: ./charts/myapp
    values:
      - values/myapp.yaml
      - values/myapp-{{ .Environment.Name }}.yaml
    set:
      - name: image.tag
        value: {{ env "IMAGE_TAG" | default "latest" }}
    needs:                          # Dependency ordering
      - ingress/ingress-nginx
Helmfile commands: daily workflow
# Diff all releases against live cluster
helmfile -e production diff

# Apply all releases (install/upgrade)
helmfile -e production apply

# Apply only specific releases
helmfile -e production -l name=myapp apply

# Sync (apply without diff confirmation)
helmfile -e production sync

# Destroy all releases
helmfile -e staging destroy

# Template locally (no cluster)
helmfile -e production template

# Lint all charts
helmfile -e production lint

Helmfile + GitOps. Store helmfile.yaml and all values in Git. CI runs helmfile -e production diff on PRs for review, then helmfile -e production apply on merge. This gives you declarative, auditable, reviewable Kubernetes deployments without ArgoCD/Flux complexity.

13. Hardening Checklist

Chart Authoring

  1. Include values.schema.json with required fields and type constraints
  2. Use required for critical values that must be provided
  3. Use quote on all annotation values, env values, and string fields
  4. Include .helmignore to exclude secrets, IDE files, test data from packaged chart
  5. Keep selector labels immutable (name + instance only)
  6. Set "helm.sh/resource-policy": keep on PVCs and critical stateful resources
  7. Add checksum annotations for ConfigMap/Secret rollout triggers
  8. Set kubeVersion constraint in Chart.yaml
  9. Include NOTES.txt with post-deploy instructions and access info
  10. Write test pods in templates/tests/ for helm test

Deployment Operations

  1. Always use --atomic in CI/CD — auto-rollback on failure
  2. Always pass values via -f file.yaml, never rely on --reuse-values
  3. Store all values files in Git alongside the chart or in a deploy repo
  4. Pin chart versions in Helmfile or CI scripts (no latest)
  5. Commit Chart.lock for reproducible dependency resolution
  6. Run helm diff before every production upgrade
  7. Set appropriate --timeout based on pod startup time
  8. Use helm template --dry-run=server in CI to catch API validation errors
  9. Configure revisionHistoryLimit on Deployments (default 10 is fine)
  10. Monitor release secret count — Helm stores one Secret per revision per release
Operator reference: daily commands cheatsheet
# ---- Deploy ----
helm upgrade --install NAME CHART -f values.yaml -n NS --atomic --timeout 10m

# ---- Inspect ----
helm list -A                          # All releases
helm status NAME -n NS                # Release status
helm get values NAME -n NS --all      # Effective values
helm get manifest NAME -n NS          # Rendered manifests
helm history NAME -n NS               # Revision history

# ---- Debug ----
helm template NAME CHART -f values.yaml --debug     # Render locally
helm lint CHART -f values.yaml --strict              # Validate chart
helm diff upgrade NAME CHART -f values.yaml -n NS   # Preview changes

# ---- Recover ----
helm rollback NAME REVISION -n NS    # Rollback
helm rollback NAME 0 -n NS           # Rollback to previous

# ---- Clean ----
helm uninstall NAME -n NS            # Delete release
helm uninstall NAME -n NS --keep-history  # Delete but keep history