flux2-kustomize-helm-example

For this example we assume a scenario with two clusters: staging and production. The end goal is to leverage Flux and Kustomize to manage both clusters while minimizing duplicated declarations.

We will configure Flux to install, test and upgrade a demo app using HelmRepository and HelmRelease custom resources. Flux will monitor the Helm repository, and it will automatically upgrade the Helm releases to their latest chart version based on semver ranges.

Prerequisites

You will need a Kubernetes cluster version 1.28 or newer. For a quick local test, you can use Kubernetes kind. Any other Kubernetes setup will work as well though.
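
For example, you can create two throwaway clusters locally with kind. The cluster names below are only a suggestion; note that kind prefixes the resulting kubectl contexts with kind- (e.g. kind-staging), so adjust the --context flags used later in this guide accordingly:

kind create cluster --name staging
kind create cluster --name production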

In order to follow the guide you'll need a GitHub account and a personal access token that can create repositories (check all permissions under repo).

Install the Flux CLI on macOS or Linux using Homebrew:

brew install fluxcd/tap/flux

Or install the CLI by downloading precompiled binaries using a Bash script:

curl -s https://fluxcd.io/install.sh | sudo bash

Repository structure

The Git repository contains the following top directories:

├── apps
│   ├── base
│   ├── production 
│   └── staging
├── infrastructure
│   ├── configs
│   └── controllers
└── clusters
    ├── production
    └── staging

Applications

The apps configuration is structured into:

./apps/
├── base
│   └── podinfo
│       ├── kustomization.yaml
│       ├── namespace.yaml
│       ├── release.yaml
│       └── repository.yaml
├── production
│   ├── kustomization.yaml
│   └── podinfo-patch.yaml
└── staging
    ├── kustomization.yaml
    └── podinfo-patch.yaml

In apps/base/podinfo/ dir we have a Flux HelmRelease with common values for both clusters:

apiVersion: helm.toolkit.fluxcd.io/v2
kind: HelmRelease
metadata:
  name: podinfo
  namespace: podinfo
spec:
  releaseName: podinfo
  chart:
    spec:
      chart: podinfo
      sourceRef:
        kind: HelmRepository
        name: podinfo
        namespace: flux-system
  interval: 50m
  values:
    ingress:
      enabled: true
      className: nginx
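
The repository.yaml file next to it defines the HelmRepository that the sourceRef above points at. Its contents are not reproduced in this guide, so treat the following as a minimal sketch that assumes the public podinfo chart repository and an arbitrary refresh interval:

apiVersion: source.toolkit.fluxcd.io/v1
kind: HelmRepository
metadata:
  name: podinfo
  namespace: flux-system
spec:
  interval: 24h
  url: https://stefanprodan.github.io/podinfo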

In apps/staging/ dir we have a Kustomize patch with the staging specific values:

apiVersion: helm.toolkit.fluxcd.io/v2
kind: HelmRelease
metadata:
  name: podinfo
spec:
  chart:
    spec:
      version: ">=1.0.0-alpha"
  test:
    enable: true
  values:
    ingress:
      hosts:
        - host: podinfo.staging

Note that with version: ">=1.0.0-alpha" we configure Flux to automatically upgrade the HelmRelease to the latest chart version including alpha, beta and pre-releases.
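
For reference, the staging overlay's kustomization.yaml is what ties the base and the patch together. Its contents are not shown above, so the following is only an illustrative sketch:

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../base/podinfo
patches:
  - path: podinfo-patch.yaml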

In apps/production/ dir we have a Kustomize patch with the production specific values:

apiVersion: helm.toolkit.fluxcd.io/v2
kind: HelmRelease
metadata:
  name: podinfo
  namespace: podinfo
spec:
  chart:
    spec:
      version: ">=1.0.0"
  values:
    ingress:
      hosts:
        - host: podinfo.production

Note that with version: ">=1.0.0" we configure Flux to automatically upgrade the HelmRelease to the latest stable chart version (alpha, beta and pre-releases will be ignored).

Infrastructure

The infrastructure is structured into:

./infrastructure/
├── configs
│   ├── cluster-issuers.yaml
│   └── kustomization.yaml
└── controllers
    ├── cert-manager.yaml
    ├── ingress-nginx.yaml
    └── kustomization.yaml

In infrastructure/controllers/ dir we have the Flux HelmRepository and HelmRelease definitions such as:

apiVersion: helm.toolkit.fluxcd.io/v2
kind: HelmRelease
metadata:
  name: cert-manager
  namespace: cert-manager
spec:
  interval: 30m
  chart:
    spec:
      chart: cert-manager
      version: "1.x"
      sourceRef:
        kind: HelmRepository
        name: cert-manager
        namespace: cert-manager
      interval: 12h
  values:
    installCRDs: true

Note that with interval: 12h we configure Flux to pull the Helm repository index every twelve hours to check for updates. If a new chart version that matches the 1.x semver range is found, Flux will upgrade the release.
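
The sourceRef above refers to a HelmRepository defined alongside the release. Its manifest is not shown here, so the following is a minimal sketch that assumes the upstream Jetstack chart repository and an arbitrary interval:

apiVersion: source.toolkit.fluxcd.io/v1
kind: HelmRepository
metadata:
  name: cert-manager
  namespace: cert-manager
spec:
  interval: 24h
  url: https://charts.jetstack.io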

In infrastructure/configs/ dir we have Kubernetes custom resources, such as the Let's Encrypt issuer:

apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt
spec:
  acme:
    # Replace the email address with your own contact email
    email: fluxcdbot@users.noreply.github.com
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      name: letsencrypt-nginx
    solvers:
      - http01:
          ingress:
            class: nginx
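
The kustomization.yaml in infrastructure/configs/ simply lists the config manifests. Based on the file listing above, it would look roughly like this:

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - cluster-issuers.yaml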

In clusters/production/infrastructure.yaml we replace the Let's Encrypt server value to point to the production API:

apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: infra-configs
  namespace: flux-system
spec:
  # ...omitted for brevity
  dependsOn:
    - name: infra-controllers
  patches:
    - patch: |
        - op: replace
          path: /spec/acme/server
          value: https://acme-v02.api.letsencrypt.org/directory
      target:
        kind: ClusterIssuer
        name: letsencrypt

Note that with dependsOn we tell Flux to first install or upgrade the controllers and only then the configs. This ensures that the Kubernetes CRDs are registered on the cluster, before Flux applies any custom resources.
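
For context, the infra-controllers Kustomization referenced by dependsOn points Flux at the infrastructure/controllers directory. Its full definition is not reproduced in this guide; the sketch below assumes an arbitrary interval and the same sourceRef as the other Kustomizations:

apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: infra-controllers
  namespace: flux-system
spec:
  interval: 1h
  sourceRef:
    kind: GitRepository
    name: flux-system
  path: ./infrastructure/controllers
  prune: true
  wait: true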

Bootstrap staging and production

The clusters dir contains the Flux configuration:

./clusters/
├── production
│   ├── apps.yaml
│   └── infrastructure.yaml
└── staging
    ├── apps.yaml
    └── infrastructure.yaml

In clusters/staging/ dir we have the Flux Kustomization definitions, for example:

apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: apps
  namespace: flux-system
spec:
  interval: 10m0s
  dependsOn:
    - name: infra-configs
  sourceRef:
    kind: GitRepository
    name: flux-system
  path: ./apps/staging
  prune: true
  wait: true

Note that with path: ./apps/staging we configure Flux to sync the staging Kustomize overlay and with dependsOn we tell Flux to create the infrastructure items before deploying the apps.

Fork this repository on your personal GitHub account and export your GitHub access token, username and repo name:

export GITHUB_TOKEN=<your-token>
export GITHUB_USER=<your-username>
export GITHUB_REPO=<repository-name>

Verify that your staging cluster satisfies the prerequisites with:

flux check --pre

Set the kubectl context to your staging cluster and bootstrap Flux:

flux bootstrap github \
    --context=staging \
    --owner=${GITHUB_USER} \
    --repository=${GITHUB_REPO} \
    --branch=main \
    --personal \
    --path=clusters/staging

The bootstrap command commits the manifests for the Flux components in clusters/staging/flux-system dir and creates a deploy key with read-only access on GitHub, so it can pull changes inside the cluster.
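
Once bootstrap finishes, you can confirm that the Flux controllers are up and running before moving on:

flux check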

Watch for the Helm releases being installed on staging:

$ watch flux get helmreleases --all-namespaces

NAMESPACE    	NAME         	REVISION	SUSPENDED	READY	MESSAGE 
cert-manager 	cert-manager 	v1.11.0 	False    	True 	Release reconciliation succeeded
ingress-nginx	ingress-nginx	4.4.2   	False    	True 	Release reconciliation succeeded
podinfo      	podinfo      	6.3.0   	False    	True 	Release reconciliation succeeded

Verify that the demo app can be accessed via ingress:

$ kubectl -n ingress-nginx port-forward svc/ingress-nginx-controller 8080:80 &

$ curl -H "Host: podinfo.staging" http://localhost:8080
{
  "hostname": "podinfo-59489db7b5-lmwpn",
  "version": "6.2.3"
}

Bootstrap Flux on production by setting the context and path to your production cluster:

flux bootstrap github \
    --context=production \
    --owner=${GITHUB_USER} \
    --repository=${GITHUB_REPO} \
    --branch=main \
    --personal \
    --path=clusters/production

Watch the production reconciliation:

$ flux get kustomizations --watch

NAME             	REVISION     	SUSPENDED	READY	MESSAGE                         
apps             	main/696182e	False    	True 	Applied revision: main/696182e	
flux-system      	main/696182e	False    	True 	Applied revision: main/696182e	
infra-configs    	main/696182e	False    	True 	Applied revision: main/696182e	
infra-controllers	main/696182e	False    	True 	Applied revision: main/696182e	
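
As on staging, you can verify the demo app through the ingress controller, this time with the production host set in the patch:

kubectl -n ingress-nginx port-forward svc/ingress-nginx-controller 8080:80 &

curl -H "Host: podinfo.production" http://localhost:8080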

Add clusters

If you want to add a cluster to your fleet, first clone your repo locally:

git clone https://github.com/${GITHUB_USER}/${GITHUB_REPO}.git
cd ${GITHUB_REPO}

Create a dir inside clusters with your cluster name:

mkdir -p clusters/dev

Copy the sync manifests from staging:

cp clusters/staging/infrastructure.yaml clusters/dev
cp clusters/staging/apps.yaml clusters/dev

You could create a dev overlay inside apps; if you do, make sure to change the spec.path inside clusters/dev/apps.yaml to path: ./apps/dev.
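
For example, the relevant part of clusters/dev/apps.yaml would then look like this, with every other field left as copied from staging:

apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: apps
  namespace: flux-system
spec:
  # ...other fields unchanged from clusters/staging/apps.yaml
  path: ./apps/dev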

Push the changes to the main branch:

git add -A && git commit -m "add dev cluster" && git push

Set the kubectl context and path to your dev cluster and bootstrap Flux:

flux bootstrap github \
    --context=dev \
    --owner=${GITHUB_USER} \
    --repository=${GITHUB_REPO} \
    --branch=main \
    --personal \
    --path=clusters/dev

Identical environments

If you want to spin up an identical environment, you can bootstrap a cluster, e.g. production-clone, and reuse the production definitions.

Bootstrap the production-clone cluster:

flux bootstrap github \
    --context=production-clone \
    --owner=${GITHUB_USER} \
    --repository=${GITHUB_REPO} \
    --branch=main \
    --personal \
    --path=clusters/production-clone

Pull the changes locally:

git pull origin main

Create a kustomization.yaml inside the clusters/production-clone dir:

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - flux-system
  - ../production/infrastructure.yaml
  - ../production/apps.yaml

Note that besides the flux-system kustomize overlay, we also include the infrastructure and apps manifests from the production dir.

Push the changes to the main branch:

git add -A && git commit -m "add production clone" && git push

Tell Flux to deploy the production workloads on the production-clone cluster:

flux reconcile kustomization flux-system \
    --context=production-clone \
    --with-source 

Testing

Any change to the Kubernetes manifests or to the repository structure should be validated in CI before a pull request is merged into the main branch and synced on the cluster.
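
As an illustration, validation along these lines can be scripted with kustomize and kubeconform; the tools and paths below are assumptions for the sketch, not necessarily what this repository's workflows actually run:

# Build the app overlays and validate the rendered manifests offline.
for overlay in ./apps/staging ./apps/production; do
  kustomize build "$overlay" | kubeconform -strict -ignore-missing-schemas -
done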

This repository contains the following GitHub CI workflows: