Lightweight Kubernetes GitOps Secrets

Last updated: Sept. 8


Every day at Sokube we evangelize GitOps as a way to simplify, improve and accelerate digital operations, and we support our customers on this journey. With the abundance of technology bricks that now compose the "Cloud Native" landscape, something that years ago looked difficult and hazardous is now easily reduced to choosing the combination of products / platforms that best fits a use-case.

However, when it comes to keeping sensitive information confidential in a GitOps approach, the choices are neither numerous nor simple. With their reversible base64 encoding, Kubernetes Secrets have been the Achilles' heel of the platform regarding information security and confidentiality, whether or not RBAC has been correctly configured. Storing confidential or sensitive information in native Kubernetes Secrets should be strictly prohibited, at least on production systems.
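The weakness is easy to demonstrate: the value stored in a Secret's data field is plain base64, so anyone allowed to read the Secret (or the git repository holding its manifest) can recover the plaintext with one command. A local illustration, no cluster required:

```shell
# Kubernetes only base64-encodes Secret values, it does not encrypt them.
encoded=$(printf 'highlysecret' | base64)
echo "$encoded"                            # -> aGlnaGx5c2VjcmV0
printf '%s' "$encoded" | base64 --decode   # -> highlysecret
```

This is why a raw Secret manifest committed to git is effectively a plaintext credential.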

A couple of GitOps-friendly alternatives that preserve confidentiality have naturally emerged as workarounds; the first part of this article gives a short overview of what exists. While HashiCorp Vault remains a truly secure, feature-rich and widely adopted solution, we believe it is positioned rather as an enterprise-wide or cross-organizational solution for confidential information. In the context of GitOps, however, it would distort the "git as single source of truth" principle. Finally, a pertinent deployment of HashiCorp Vault often requires appropriate organizational changes as well as a solid budget for its operations.

In the second part of the article, we will focus on two lightweight solutions which provide exactly what is needed for GitOps without disrupting the organization, the project or the budget: Bitnami Sealed Secrets and Soluto's Kamus.

Kubernetes Secrets Landscape

Most of our use-cases involve on-premise-only installations, so while Amazon, Azure, and Google each provide secret management for their own platforms, we've focused primarily on on-premise solutions, and we have evaluated:

  • Complexity: our estimate of the overall effort to get a grasp of the solution, and to install and operate it.

  • Maturity: based on the number and frequency of releases, the age of the solution, and the practical examples found on the web that demonstrate its adoption.

  • Scope: the scope of validity of the secrets. Organization means there is no specific restriction on a secret's validity, contrary to cluster or service account, where the secret can't be decrypted on another cluster or with another service account.

  • Dependency: an important aspect that makes the solution more or less applicable in particular contexts.

Bitnami Sealed Secrets

The Bitnami Sealed Secrets solution is based on a controller that runs in the target cluster, where a cluster-specific pair of private/public keys is generated.


The public key is used by a client binary (kubeseal) to live-encrypt data (in the form of custom resources named SealedSecret). This operation can also be done offline (without the need for a cluster with a bitnami controller) by using the controller certificate (public key) that can be downloaded from the cluster. The resulting resource is a SealedSecret that is safe to store inside a git repository.
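A sketch of this offline workflow using kubeseal's certificate options (the file names here are our own):

```shell
# One-time, with cluster access: fetch the controller's public certificate
kubeseal --fetch-cert > sealed-secrets-cert.pem

# Later, fully offline: seal a Secret manifest using only the certificate
kubeseal --cert sealed-secrets-cert.pem -o yaml \
  < db-credentials-secret.yaml > db-credentials-sealed-secret.yaml
```

Only the sealing step runs offline; fetching the certificate still requires access to a cluster running the controller.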


The controller detects the presence of SealedSecret resources and automatically decrypts the content in an equivalent standard Secret resource.


Installation of the solution is relatively easy:

$ kubectl apply -f

The client tool (kubeseal) to interact with the controller inside the target cluster is a standalone binary that can be downloaded separately:

$ wget -O kubeseal


We will use a specific namespace:

$ kubectl create namespace my-namespace
namespace/my-namespace created

Bitnami's kubeseal can be used as a pipe command, and when it's combined with the dry-run capability of kubectl, we can generate sealed secrets without even storing intermediate resources locally as files:

$ kubectl create secret generic db-credentials \
  -n my-namespace \
  --from-literal=DBUSER=mydbuser \
  --from-literal=DBPWD=highlysecret \
  --dry-run=client \
  -o yaml \
  | kubeseal -o yaml > db-credentials-sealed-secret.yaml

The resulting SealedSecret can be stored safely inside a git repository:

apiVersion: bitnami.com/v1alpha1
kind: SealedSecret
metadata:
  creationTimestamp: null
  name: db-credentials
  namespace: my-namespace
spec:
  encryptedData:
    DBPWD: AgB...
    DBUSER: AgB...
  template:
    metadata:
      creationTimestamp: null
      name: db-credentials
      namespace: my-namespace

Applying the resource:

$ kubectl apply -f db-credentials-sealed-secret.yaml

Checking the resource:

$ kubectl get sealedsecrets -n my-namespace db-credentials 
NAME             AGE
db-credentials   2m3s
$ kubectl get events -n my-namespace
LAST SEEN   TYPE     REASON     OBJECT                        MESSAGE
2m9s        Normal   Unsealed   sealedsecret/db-credentials   SealedSecret unsealed successfully

Checking the equivalent Secret (and data) that has been generated by the controller:

$ kubectl get secrets -n my-namespace
NAME                  TYPE                                  DATA   AGE
db-credentials        Opaque                                2      3m18s
$ kubectl get secrets -n my-namespace db-credentials -o jsonpath='{.data.DBUSER}' | base64 --decode
mydbuser
$ kubectl get secrets -n my-namespace db-credentials -o jsonpath='{.data.DBPWD}' | base64 --decode
highlysecret


Important to know: each sealed secret is encrypted with its own random session key and is, by default, bound to the sealed secret's name and namespace. Copy-pasting the encrypted data into another secret or another namespace won't work.

By default, SealedSecrets generated for one cluster won't work on another cluster (each installation of the Bitnami controller creates a new pair of public/private keys), which could make overall GitOps staging operations cumbersome: each environment would need its own SealedSecrets. One solution is to export the key pair from one cluster and import it into another. However, a compromised key on one cluster would then grant access to the sealed secrets of every cluster sharing that key pair.
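A sketch of such an export/import, assuming a default installation in kube-system (the controller's sealing keys carry the `sealedsecrets.bitnami.com/sealed-secrets-key` label):

```shell
# On the source cluster: export the sealing key pair
kubectl get secret -n kube-system \
  -l sealedsecrets.bitnami.com/sealed-secrets-key \
  -o yaml > sealed-secrets-keys.yaml

# On the target cluster: import the keys, then restart the
# controller so it picks them up
kubectl apply -n kube-system -f sealed-secrets-keys.yaml
kubectl delete pod -n kube-system -l name=sealed-secrets-controller
```

Handle the exported file with the same care as any private key material: it decrypts every SealedSecret of the source cluster.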


➕ Simplicity of setup and usage

➕ Mature and well maintained solution

➖ Regular Secrets are still exposed

➖ No usage outside Secrets (e.g. ConfigMap)


Soluto Kamus

The Kamus solution offers two different mechanisms for secret generation: one similar to Bitnami Sealed Secrets, and another which is a true zero-trust secrets management solution.

First mode: KamusSecret

KamusSecret tends to mimic Bitnami Sealed Secrets: a controller inside the cluster detects KamusSecret resources and provides a decrypted version as a Secret.

The encryption part is a bit more tedious, as the kamus-cli doesn't support passing a whole Secret resource but rather works as a raw encryption service: individual items need to be encrypted separately with the client and aggregated manually inside an "envelope" KamusSecret definition.
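An illustrative envelope might look like the following (the field names reflect our reading of the KamusSecret CRD and may differ between Kamus versions; the values are placeholders for kamus-cli output, not real ciphertexts):

```yaml
apiVersion: soluto.com/v1alpha2
kind: KamusSecret
metadata:
  name: db-credentials
  namespace: my-namespace
type: Opaque
stringData:
  # each value is pasted from a separate "kamus-cli encrypt" call
  DBUSER: "<output of kamus-cli encrypt for mydbuser>"
  DBPWD: "<output of kamus-cli encrypt for highlysecret>"
```

The controller turns this envelope into a regular Secret of the same name and namespace.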

As with Bitnami, the controller is responsible for decrypting a KamusSecret into a regular Secret.

Second mode: Zero-Trust Secrets

Contrary to a Bitnami Sealed Secret or a KamusSecret, Kamus zero-trust mode has the huge advantage of never revealing the decrypted data except for the intended service at runtime.

The encryption part of the zero-trust secrets mode is very similar to the previous example: instead of wrapping encoded data in a KamusSecret, a ConfigMap will be used to store secrets as encrypted files in a mounted ConfigMap volume.

For consumption of secrets, an initContainer invokes the kamus-decryptor service, and the decrypted results are stored in an ephemeral emptyDir volume accessible only to the application containers of the pod.
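A sketch of such a consuming pod; the init container image, its mount paths, and the ConfigMap name are hypothetical placeholders (the real names come from the Kamus documentation), only the overall volume wiring is the point:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-app
  namespace: my-namespace
spec:
  serviceAccountName: dummy-sa
  volumes:
    - name: encrypted           # ConfigMap holding the kamus-cli encrypted files
      configMap:
        name: encrypted-secrets
    - name: decrypted           # ephemeral, never leaves the pod
      emptyDir:
        medium: Memory
  initContainers:
    - name: decrypt             # calls the kamus-decryptor service (placeholder image)
      image: kamus-init-container:example
      volumeMounts:
        - name: encrypted
          mountPath: /encrypted
        - name: decrypted
          mountPath: /decrypted
  containers:
    - name: app
      image: my-app:latest
      volumeMounts:
        - name: decrypted       # the app reads plaintext files from here
          mountPath: /secrets
```

The plaintext only ever exists inside the pod's in-memory volume, which is what makes this mode zero-trust with respect to the cluster's etcd and git.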


Installation of the Controller and its CustomResourcesDefinitions is relatively easy:

$ helm repo add soluto
$ key=$(openssl rand -base64 32 | tr -d '\n')
$ helm upgrade --install kamus soluto/kamus --set keyManagement.AES.key=$key

We've added here the random key generation as the solution comes (dangerously) with a predefined key. Also, contrary to Bitnami, the solution installs its workloads in the default namespace so we recommend tweaking the helm release accordingly. Finally, replicas in the encryptor/decryptor deployments are set to 2 by default, which might be unnecessary.

The Kamus client is unfortunately an npm package (a debatable choice that forces users to install the npm package manager). A Docker container image seems to be available, however it's not mentioned on the Kamus website, so we have chosen to stick to the documented approach:

$ npm install -g @soluto-asurion/kamus-cli

We will use a specific namespace and a specific service account:

$ kubectl create namespace my-namespace
namespace/my-namespace created
$ kubectl create serviceaccount dummy-sa -n my-namespace
serviceaccount/dummy-sa created

The kamus-cli also needs the encryptor pod exposed as a URL:

$ export POD_NAME=$(kubectl get pods --namespace default -l "app=kamus,release=kamus,component=encryptor" -o jsonpath="{.items[0].metadata.name}")
$ kubectl port-forward $POD_NAME 8880:9999 &

KamusSecret Usage

First scenario, setting up a KamusSecret containing our sensitive information:

$ kamus-cli encrypt --secret mydbuser --service-account dummy-sa --namespace my-namespace --allow-insecure-url --kamus-url http://localhost:8880
[info  kamus-cli]: Encryption started...
[info  kamus-cli]: service account: dummy-sa
[info  kamus-cli]: namespace: my-namespace
[warn  kamus-cli]: Auth options were not provided, will try to encrypt without authentication to kamus
Handling connection for 8880