Helm workshop

Requirements

In order to have an easily provisioned, temporary playground we’ll make use of k3d, a lightweight local Kubernetes distribution that runs inside Docker.
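If you don’t have k3d yet, it can be installed with the upstream install script from the k3d documentation (or with your package manager, e.g. brew install k3d); a minimal sketch:

# Install the latest k3d release using the documented install script
wget -q -O - https://raw.githubusercontent.com/k3d-io/k3d/main/install.sh | bash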

After installing the binary, enable shell completion (bash or zsh) as follows; do the same for both helm and k3d.

source <(k3d completion bash)
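And the same for helm:

source <(helm completion bash)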

Then create the sandbox cluster named “helm-workshop”:

k3d cluster create helm-workshop
INFO[0000] Created network 'k3d-helm-workshop'
INFO[0000] Created volume 'k3d-helm-workshop-images'
INFO[0001] Creating node 'k3d-helm-workshop-server-0'
INFO[0006] Creating LoadBalancer 'k3d-helm-workshop-serverlb'
INFO[0007] (Optional) Trying to get IP of the docker host and inject it into the cluster as 'host.k3d.internal' for easy access
INFO[0010] Successfully added host record to /etc/hosts in 2/2 nodes and to the CoreDNS ConfigMap
INFO[0010] Cluster 'helm-workshop' created successfully!
INFO[0010] You can now use it like this:
kubectl cluster-info

Note that your current kubectl context should automatically be switched to the newly created cluster.

$ kubectl config current-context
k3d-helm-workshop

Other considerations

Hosting and versioning

Most of the time we will want to share our charts so that they can be used on other systems or pulled as dependencies.

There are multiple options for that; here are the ones that are most commonly used.

  • ChartMuseum is the official solution. It is a fairly simple web server that exposes a REST API.
  • Harbor. Its main purpose is to store container images, but it offers many other features such as vulnerability scanning and image signing, and it integrates ChartMuseum.
  • Artifactory can be used to store Helm charts too.
  • An OCI store (container registry).

Pushing charts to a central location requires managing their versions: any change should trigger a version bump in the Chart.yaml file.
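As a minimal sketch, assuming a chart directory named mychart whose Chart.yaml was just bumped to version 0.2.0 and a hypothetical OCI registry at registry.example.com, packaging and pushing could look like this:

# Package the chart; the archive name is derived from the version in Chart.yaml
helm package mychart        # produces mychart-0.2.0.tgz

# Push the packaged chart to an OCI registry (Helm 3.8+)
helm push mychart-0.2.0.tgz oci://registry.example.com/charts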

Secrets management

One sensitive topic that we haven’t talked about yet is how to handle secrets.

This is not directly related to Helm; it is a general issue on Kubernetes.

There are many options: some of them work great with Helm, while others require managing secrets separately from Helm releases.
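As a minimal sketch of the “manage secrets apart from Helm” approach, assuming a hypothetical chart that accepts an existingSecret value, you could create the Secret with kubectl and only pass its name to the release:

# Create the Secret outside of Helm, so its contents never appear in values files or in the release manifest
kubectl create secret generic myapp-db --from-literal=password='s3cr3t'

# Reference it by name at install time (existingSecret is a chart-specific convention, hypothetical here)
helm install myapp ./mychart --set existingSecret=myapp-db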

The ArgoCD documentation attempts to list all the available options.

Cleanup

Pretty simple: we’ll drop the whole k3d cluster.

k3d cluster delete helm-workshop
