Support other Helm storage backends besides Secrets#760

Open
fiscafusca wants to merge 1 commit into
fluxcd:mainfrom
fiscafusca:feat/helm-storage-backends-support

Conversation

@fiscafusca

Overview

This PR adds support for all Helm storage backends (see the official documentation).

Usage

The storage driver can be set through the --helm-storage-driver flag, and its value is propagated to the NewRunner function. Allowed values are secret, configmap, and sql. If the flag is unset, the value is read from the HELM_DRIVER environment variable. If the environment variable is also unset, the value defaults to secret for backwards compatibility.

Use case

The ability to switch between backends gives more flexibility in the cluster. Storing release information in Secrets is not always the best option, mainly because of:

  • The size of a Helm release's information, which may exceed the 1 MiB Secret size limit;
  • The combined total size of the cluster's Secrets, which may cause disruptions in the cluster;
  • The Helm release location - in some use cases it may be necessary to move release information elsewhere for compliance.

Moving the release information to, for example, a SQL backend would address these issues and allow keeping a longer history of deployments.

Testing (cluster)

The changes were tested on a local K8s cluster; I used a single-node kind cluster with K8s v1.27.3.

Steps to reproduce

Create the cluster:

kind create cluster

Build a local image (I named it test-helm-controller:latest) and load it into the kind cluster:

export IMG=test-helm-controller:latest
make docker-build
kind load docker-image test-helm-controller:latest

Deploy the default helm-controller and source-controller:

make deploy

Create the namespace for the Helm release:

kubectl create namespace hello

Prepare the manifests for the HelmRepo and HelmRelease to use for testing. I personally used this chart for simplicity.

helmrepo.yaml:

apiVersion: source.toolkit.fluxcd.io/v1beta1
kind: HelmRepository
metadata:
  name: cloudecho
  namespace: hello
spec:
  interval: 1m0s
  url: https://cloudecho.github.io/charts/

helmrelease.yaml:

apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
  name: hello
  namespace: hello
spec:
  chart:
    spec:
      chart: hello
      sourceRef:
        kind: HelmRepository
        name: cloudecho
      version: 0.1.2
  values:
    replicaCount: 3
  interval: 1m0s

Test case 1: Backwards compatibility

The Helm release information should still be stored in Secrets when both the flag and the env variable are unset.

Deployment patch in config/manager/kustomization.yaml:

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- deployment.yaml
images:
- name: fluxcd/helm-controller
  newName: test-helm-controller
  newTag: latest

Deploy the patched helm-controller:

kustomize build config/manager | kubectl apply -f -

Apply the sample HelmRepo and HelmRelease:

kubectl apply -f helmrepo.yaml
kubectl apply -f helmrelease.yaml

Check for the Helm release secret:

kubectl get secrets -n hello -l 'owner=helm'

NAME                          TYPE                 DATA   AGE
sh.helm.release.v1.hello.v1   helm.sh/release.v1   1      29s

Test case 2: Configmaps

The Helm release information should be stored in ConfigMaps when the storage driver is set to configmap.

Deployment patch in config/manager/kustomization.yaml (flag):

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- deployment.yaml
images:
- name: fluxcd/helm-controller
  newName: test-helm-controller
  newTag: latest
patches:
- patch: |-
    - op: add
      path: /spec/template/spec/containers/0/args/-
      value: --helm-storage-driver=configmap
  target:
    kind: Deployment

OR (env var):

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- deployment.yaml
images:
- name: fluxcd/helm-controller
  newName: test-helm-controller
  newTag: latest
patches:
- patch: |-
    - op: add
      path: /spec/template/spec/containers/0/env/-
      value: {"name": "HELM_DRIVER", "value": "configmap"}
  target:
    kind: Deployment

Deploy the patched helm-controller:

kustomize build config/manager | kubectl apply -f -

Apply the sample HelmRepo and HelmRelease:

kubectl apply -f helmrepo.yaml
kubectl apply -f helmrelease.yaml

Check for the Helm release configmap:

kubectl get configmaps -n hello -l 'owner=helm'

NAME                          DATA   AGE
sh.helm.release.v1.hello.v1   1      25s

Test case 3: SQL storage

The Helm release information should be stored in the SQL database when the storage driver is set to sql.
For this test, I used a PostgreSQL DB hosted on an Azure server.

Deployment patch in config/manager/kustomization.yaml (flag):

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- deployment.yaml
images:
- name: fluxcd/helm-controller
  newName: test-helm-controller
  newTag: latest
patches:
- patch: |-
    - op: add
      path: /spec/template/spec/containers/0/args/-
      value: --helm-storage-driver=sql
    - op: add
      path: /spec/template/spec/containers/0/env/-
      value: {"name": "HELM_DRIVER_SQL_CONNECTION_STRING", "value": "<your-connection-string>"}
  target:
    kind: Deployment

OR (env var):

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- deployment.yaml
images:
- name: fluxcd/helm-controller
  newName: test-helm-controller
  newTag: latest
patches:
- patch: |-
    - op: add
      path: /spec/template/spec/containers/0/env/-
      value: {"name": "HELM_DRIVER", "value": "sql"}
    - op: add
      path: /spec/template/spec/containers/0/env/-
      value: {"name": "HELM_DRIVER_SQL_CONNECTION_STRING", "value": "<your-connection-string>"}
  target:
    kind: Deployment

Deploy the patched helm-controller:

kustomize build config/manager | kubectl apply -f -

Apply the sample HelmRepo and HelmRelease:

kubectl apply -f helmrepo.yaml
kubectl apply -f helmrelease.yaml

Connect to the DB (I used psql) and check the releases_v1 table. You will see that there is a new row:

helm=> SELECT count(*) FROM releases_v1 ;
 count
-------
     1
(1 row)

The content can be checked by running the following SQL query:

SELECT * FROM releases_v1 ;

Unit and regression testing

The runner.go file has no test file, and the functions in helmrelease_controller.go that involve the new variable have no coverage, so it was not clear to me how to proceed. As far as I can tell, the changes are backwards compatible and should not cause any issues, but I'm open to adding anything else if needed. Feel free to leave any feedback or suggestions :)
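If the driver selection were extracted into a small helper, it could be covered by a table-driven test without needing a cluster. A sketch, assuming a hypothetical `resolveStorageDriver` helper (the PR itself passes the value straight to NewRunner):

```go
package main

import (
	"fmt"
	"os"
)

// resolveStorageDriver is a hypothetical extraction of the
// flag → HELM_DRIVER → "secret" precedence, pulled out purely
// so the logic can be unit-tested.
func resolveStorageDriver(flagValue string) (string, error) {
	driver := flagValue
	if driver == "" {
		driver = os.Getenv("HELM_DRIVER")
	}
	if driver == "" {
		driver = "secret"
	}
	switch driver {
	case "secret", "configmap", "sql":
		return driver, nil
	default:
		return "", fmt.Errorf("unsupported driver: %q", driver)
	}
}

func main() {
	// table-driven check of the precedence rules
	cases := []struct {
		flag, env, want string
		wantErr         bool
	}{
		{"configmap", "sql", "configmap", false}, // flag wins over env
		{"", "sql", "sql", false},                // env used when flag unset
		{"", "", "secret", false},                // default for compatibility
		{"bogus", "", "", true},                  // unknown drivers rejected
	}
	for _, c := range cases {
		os.Unsetenv("HELM_DRIVER")
		if c.env != "" {
			os.Setenv("HELM_DRIVER", c.env)
		}
		got, err := resolveStorageDriver(c.flag)
		if (err != nil) != c.wantErr || got != c.want {
			panic(fmt.Sprintf("flag=%q env=%q: got %q, err=%v", c.flag, c.env, got, err))
		}
	}
	fmt.Println("all cases pass")
}
```

In a real test file this table would live in a `TestResolveStorageDriver` function under `testing`, but the shape of the cases is the same.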

Signed-off-by: Giorgia Fiscaletti <giorgiafiscaletti@gmail.com>
@hiddeco
Member

hiddeco commented Aug 24, 2023

This is a welcome contribution, but with the changes on the horizon (see the dev branch and e.g. #738), it would be better to incorporate this into the new code, which contains substantial changes (and test coverage) around how we work with the Helm storage.

@fiscafusca
Author

@hiddeco I see, I just took a quick look at the dev and new-reconciler branches.
I'd be happy to help with adding this feature to the new code as well, if you're interested.

Are you planning to release the rework soon?

@fiscafusca
Author

@stefanprodan could you maybe take a look and tell me whether we can proceed with incorporating this in the new code? Sorry for the ping, but it would be really helpful to have this feature in the near future!

This also closes #272.

I see @hiddeco is on paternity leave - congratulations 🎈 enjoy your time as a new dad!

@carlossg
Contributor

From https://twitter.com/stefanprodan/status/1716833055615443138

By the end of this year we’ll release a new version of the Helm APIs together with a major refactoring of helm-controller. After this, we can look at alternative storage backends.

@scottsananderson

Is there any plan to implement this? It would be a shame for this work to go to waste. We would very much appreciate it, as we have charts approaching the size limit.

funkypenguin added a commit to funkypenguin/helm-controller that referenced this pull request May 5, 2026
Adds a `--helm-storage-driver` flag and a `HELM_DRIVER` environment
variable fallback that select the Helm release storage backend, mirroring
the Helm CLI's HELM_DRIVER behaviour. Supported values are
`secret`/`secrets`, `configmap`/`configmaps`, `memory`, and `sql`,
matched case-insensitively. Unset preserves the current behaviour
(Secret), so the change is backwards compatible.

The SQL driver reads its connection string from
`HELM_DRIVER_SQL_CONNECTION_STRING` (matching the Helm CLI). It is
useful when:

  - Helm release information exceeds the 1MiB Secret size limit;
  - the cumulative Secret count is causing cluster-wide pressure;
  - compliance requires storing release data outside the cluster.

A `Memory` driver is accepted for parity with Helm itself, but the
flag help marks it as test/dev only, since the storage is
re-initialised on every reconcile.

Implementation notes:

  - The driver name is normalised once at startup (with validation),
    so an unsupported value fails the controller process rather than
    every HelmRelease reconcile.
  - SQL drivers are managed by an SQLDriverPool keyed by storage
    namespace, sharing one helmdriver.SQL instance per namespace.
    Helm v4 does not expose Close on storage.Driver, so connection
    pools are released only when the controller exits; the
    per-namespace cache bounds the live pool count to the set of
    storage namespaces actually in use rather than once per reconcile.
  - At startup, when the SQL driver is selected, the controller probes
    the database with a transient database/sql.Open + PingContext +
    Close to surface invalid DSNs or unreachable backends before the
    manager starts. The probe connection is closed cleanly and does
    not contribute to the long-lived pool.
  - SQL connect/migration errors can echo the connection string; they
    are caught at the WithStorage call site and a generic message is
    returned, so DSN material cannot leak into HelmRelease status
    conditions.

Closes fluxcd#272.
Supersedes fluxcd#760.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Signed-off-by: David Young <davidy@funkypenguin.co.nz>