Support other Helm storage backends besides Secrets (#760)
Conversation
Signed-off-by: Giorgia Fiscaletti <giorgiafiscaletti@gmail.com>
This is a welcome contribution, but with the changes on the horizon (see …)
@hiddeco I see, I just took a quick look at the … Are you planning to release the rework soon?
@stefanprodan could you maybe take a look and tell me whether we can proceed with incorporating this into the new code? Sorry for the ping, but it would be really helpful to have this feature in the near future! This also closes #272. I see @hiddeco is on paternity leave - congratulations 🎈 enjoy your time as a new dad!
From https://twitter.com/stefanprodan/status/1716833055615443138
Is there any plan to implement this? It would be a shame to have this work go to waste. We would very much appreciate it, as we have charts approaching the limit.
Adds a `--helm-storage-driver` flag and a `HELM_DRIVER` environment
variable fallback that select the Helm release storage backend, mirroring
the Helm CLI's HELM_DRIVER behaviour. Supported values are
`secret`/`secrets`, `configmap`/`configmaps`, `memory`, and `sql`,
matched case-insensitively. Unset preserves the current behaviour
(Secret), so the change is backwards compatible.
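The fallback order described above (flag first, then the `HELM_DRIVER` environment variable, then the Secret default) can be sketched roughly as follows; the function name and exact return values are illustrative, not the controller's actual identifiers:

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// resolveHelmDriver sketches the described lookup: an explicit flag value
// wins, then the HELM_DRIVER environment variable, then the "secrets"
// default. Matching is case-insensitive and accepts singular/plural forms.
func resolveHelmDriver(flagValue string) (string, error) {
	v := flagValue
	if v == "" {
		v = os.Getenv("HELM_DRIVER")
	}
	switch strings.ToLower(v) {
	case "", "secret", "secrets":
		return "secrets", nil // unset preserves the current behaviour
	case "configmap", "configmaps":
		return "configmaps", nil
	case "memory":
		return "memory", nil
	case "sql":
		return "sql", nil
	default:
		return "", fmt.Errorf("unsupported Helm storage driver %q", v)
	}
}

func main() {
	os.Setenv("HELM_DRIVER", "configmap") // flag unset, env var set
	d, err := resolveHelmDriver("")
	fmt.Println(d, err) // configmaps <nil>
}
```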
The SQL driver reads its connection string from
`HELM_DRIVER_SQL_CONNECTION_STRING` (matching the Helm CLI). It is
useful when:
- Helm release information exceeds the 1MiB Secret size limit;
- the cumulative Secret count is causing cluster-wide pressure;
- compliance requires storing release data outside the cluster.
A `Memory` driver is accepted for parity with Helm itself, but the
flag help marks it as test/dev only, since the storage is
re-initialised on every reconcile.
Implementation notes:
- The driver name is normalised once at startup (with validation),
so an unsupported value fails the controller process rather than
every HelmRelease reconcile.
- SQL drivers are managed by an SQLDriverPool keyed by storage
namespace, sharing one helmdriver.SQL instance per namespace.
Helm v4 does not expose Close on storage.Driver, so connection
pools are released only when the controller exits; the
per-namespace cache bounds the live pool count to the set of
storage namespaces actually in use rather than once per reconcile.
- At startup, when the SQL driver is selected, the controller probes
the database with a transient database/sql.Open + PingContext +
Close to surface invalid DSNs or unreachable backends before the
manager starts. The probe connection is closed cleanly and does
not contribute to the long-lived pool.
- SQL connect/migration errors can echo the connection string; they
are caught at the WithStorage call site and a generic message is
returned, so DSN material cannot leak into HelmRelease status
conditions.
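The per-namespace pooling described above can be sketched as below. This is a minimal illustration of the caching behaviour only: the type and the `newDriver` factory are hypothetical stand-ins, whereas the real pool wraps Helm's SQL storage driver and carries its connection string:

```go
package main

import (
	"fmt"
	"sync"
)

// sqlDriver is a stand-in for the real per-namespace SQL storage driver.
type sqlDriver struct{ namespace string }

// sqlDriverPool caches one driver per storage namespace, created lazily on
// first use and kept for the controller's lifetime (there is no Close on
// the storage driver interface, so pools are released only at exit).
type sqlDriverPool struct {
	mu        sync.Mutex
	drivers   map[string]*sqlDriver
	newDriver func(namespace string) (*sqlDriver, error)
}

// Get returns the cached driver for a namespace, creating it on first use.
func (p *sqlDriverPool) Get(namespace string) (*sqlDriver, error) {
	p.mu.Lock()
	defer p.mu.Unlock()
	if d, ok := p.drivers[namespace]; ok {
		return d, nil // reuse the existing connection pool
	}
	d, err := p.newDriver(namespace)
	if err != nil {
		return nil, err
	}
	p.drivers[namespace] = d
	return d, nil
}

func main() {
	pool := &sqlDriverPool{
		drivers:   map[string]*sqlDriver{},
		newDriver: func(ns string) (*sqlDriver, error) { return &sqlDriver{namespace: ns}, nil },
	}
	a, _ := pool.Get("team-a")
	b, _ := pool.Get("team-a")
	fmt.Println(a == b) // true: the second reconcile reuses the cached driver
}
```

The mutex makes `Get` safe to call from concurrent reconciles, and the cache bounds the number of live pools to the number of distinct storage namespaces.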
Closes fluxcd#272.
Supersedes fluxcd#760.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Signed-off-by: David Young <davidy@funkypenguin.co.nz>
Overview
This PR adds support for all Helm storage backends (see the official documentation).
Usage
The storage driver can be set through the `--helm-storage-driver` flag, and its value is propagated to the `NewRunner` function. The set of allowed values is [`secret`, `configmap`, `sql`]. If the flag is unset, the value is retrieved from the environment variable `HELM_DRIVER`. If the environment variable is also unset, the value simply defaults to `secret` for backwards compatibility.
Use case
The ability to switch between backends gives more flexibility in the cluster. Storing release information in Secrets may not always be the best option, mostly due to:
- the 1MiB size limit on Secret objects;
- the pressure caused by a growing number of release Secrets in the cluster.
Moving the release information to, for example, a SQL backend would easily address these issues and allow keeping a longer history of deployments.
Testing (cluster)
The changes were tested on a local K8s cluster. I personally used a single-node `kind` cluster with K8s `v1.27.3`.
Steps to reproduce
Create the cluster:
Build a local image (I named it `test-helm-controller:latest`) and load it into the kind cluster:

```sh
export IMG=test-helm-controller:latest
make docker-build
kind load docker-image test-helm-controller:latest
```

Deploy the default `helm-controller` and `source-controller`, then create the namespace for the Helm release.
Prepare the `HelmRepo` and `HelmRelease` manifests to use for testing (`helmrepo.yaml` and `helmrelease.yaml`). I personally used this chart for simplicity.
Test case 1: Backwards compatibility
The Helm release information should still be stored in Secrets when both the flag and the env variable are unset.
Deployment patch in `config/manager/kustomization.yaml`:

Deploy the patched `helm-controller`:

```sh
kustomize build config/manager | kubectl apply -f -
```

Apply the sample `HelmRepo` and `HelmRelease`, then check for the Helm release Secret:

```sh
kubectl get secrets -n hello -l 'owner=helm'
NAME                          TYPE                 DATA   AGE
sh.helm.release.v1.hello.v1   helm.sh/release.v1   1      29s
```

Test case 2: Configmaps
The Helm release information should now be stored in ConfigMaps.
Deployment patch in `config/manager/kustomization.yaml`, via either the flag or the `HELM_DRIVER` env var.

Deploy the patched `helm-controller`:

```sh
kustomize build config/manager | kubectl apply -f -
```

Apply the sample `HelmRepo` and `HelmRelease`, then check for the Helm release ConfigMap:

```sh
kubectl get configmaps -n hello -l 'owner=helm'
NAME                          DATA   AGE
sh.helm.release.v1.hello.v1   1      25s
```

Test case 3: SQL storage
The Helm release information should now be stored in the SQL database.
For this test, I used a PostgreSQL DB hosted on an Azure server.
Deployment patch in `config/manager/kustomization.yaml`, via either the flag or the `HELM_DRIVER` env var.

Deploy the patched `helm-controller`:

```sh
kustomize build config/manager | kubectl apply -f -
```

Apply the sample `HelmRepo` and `HelmRelease`.

Connect to the DB (I used `psql`) and check the `release_v1` table: you will see that there's a new row. Its content can be inspected with a simple SQL query against that table.
Unit and regression testing
The `runner.go` file has no test file, and the functions in `helmrelease_controller.go` that involve the new variable do not have any coverage, so it wasn't clear to me how to proceed. The changes I made are backwards compatible and should not cause any issues AFAIK, but I'm open to adding anything else if needed. So feel free to leave any feedback/suggestions :)