Kubernetes workshop: Local environment

Prepare your local Kubernetes environment

Goal: Have a running local Kubernetes environment

Ensure that you have fulfilled the requirements

Using k3d to create a cluster

In order to have an easily provisioned, temporary playground, we'll make use of k3d, which runs a lightweight Kubernetes distribution (k3s) inside Docker. (Note that there are alternatives for running a local Kubernetes cluster, such as kubeadm, microk8s, and minikube.)

After installing the binary you should enable shell completion (bash or zsh) as follows (do the same for both kubectl and k3d).

source <(k3d completion bash)
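To make the completion persist across shell sessions, you can append it to your shell's rc file. A minimal sketch, assuming bash and the default ~/.bashrc (adapt the file and shell name for zsh):

# Persist completion for both kubectl and k3d across sessions (bash example)
echo 'source <(kubectl completion bash)' >> ~/.bashrc
echo 'source <(k3d completion bash)' >> ~/.bashrc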

Then create the sandbox cluster named "workshop" with an additional worker

k3d cluster create workshop -p "8081:80@loadbalancer" --agents 1
INFO[0000] Prep: Network
INFO[0000] Created network 'k3d-workshop' (ce74508d3fe09d8622f1ae83effd412d754dfdb441aa9d550723805f9b528c6b)
INFO[0000] Created volume 'k3d-workshop-images'
INFO[0001] Creating node 'k3d-workshop-server-0'
INFO[0001] Creating node 'k3d-workshop-agent-0'
INFO[0001] Creating LoadBalancer 'k3d-workshop-serverlb'
INFO[0001] Starting cluster 'workshop'
INFO[0001] Starting servers...
INFO[0001] Starting Node 'k3d-workshop-server-0'
INFO[0006] Starting agents...
INFO[0006] Starting Node 'k3d-workshop-agent-0'
INFO[0018] Starting helpers...
INFO[0018] Starting Node 'k3d-workshop-serverlb'
INFO[0019] (Optional) Trying to get IP of the docker host and inject it into the cluster as 'host.k3d.internal' for easy access
INFO[0023] Successfully added host record to /etc/hosts in 3/3 nodes and to the CoreDNS ConfigMap
INFO[0023] Cluster 'workshop' created successfully!
INFO[0023] --kubeconfig-update-default=false --> sets --kubeconfig-switch-context=false
INFO[0023] You can now use it like this:
kubectl config use-context k3d-workshop
kubectl cluster-info
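The -p "8081:80@loadbalancer" flag publishes host port 8081 and forwards it to port 80 of the cluster's load balancer, which is where the bundled Traefik ingress listens. As a quick sanity check (nothing is deployed yet, so a 404 response is expected at this point):

# The cluster is still empty, so the ingress controller should answer with a 404
curl -i http://localhost:8081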

As k3d runs on top of Docker, you can check the status of the running containers. You should have 3 containers: one for the load balancer, one for the control plane (server), and one for the agent (worker).

docker ps
CONTAINER ID   IMAGE                      COMMAND                  CREATED              STATUS              PORTS                             NAMES
4b5847b265dd   rancher/k3d-proxy:v4.4.6   "/bin/sh -c nginx-pr…"   About a minute ago   Up About a minute   80/tcp, 0.0.0.0:43903->6443/tcp   k3d-workshop-serverlb
523a025087b3   rancher/k3s:v1.21.1-k3s1   "/bin/entrypoint.sh …"   About a minute ago   Up About a minute                                     k3d-workshop-agent-0
791b8a69bc1f   rancher/k3s:v1.21.1-k3s1   "/bin/entrypoint.sh …"   About a minute ago   Up About a minute                                     k3d-workshop-server-0
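The serverlb container also fronts the Kubernetes API: the random host port it publishes (43903 in the output above) is forwarded to port 6443 inside the cluster, and that is the endpoint kubectl will talk to. If you're curious, you can list the published ports directly with Docker:

# Show the host ports published by the k3d load balancer container
docker port k3d-workshop-serverlb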

With kubectl you'll see 2 nodes: the control plane and a worker

kubectl get nodes
NAME                    STATUS   ROLES                  AGE     VERSION
k3d-workshop-agent-0    Ready    <none>                 2m34s   v1.21.1+k3s1
k3d-workshop-server-0   Ready    control-plane,master   2m44s   v1.21.1+k3s1
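If you want a bit more detail about each node (internal IP, OS image, container runtime), the wide output format helps:

# -o wide adds internal/external IPs, OS image and container runtime columns
kubectl get nodes -o wide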

You can also have a look at the cluster's default components, which are all located in the kube-system namespace

kubectl get pods -n kube-system
NAME                                      READY   STATUS      RESTARTS   AGE
helm-install-traefik-crd-h5j7m            0/1     Completed   0          16h
helm-install-traefik-8mzhk                0/1     Completed   0          16h
svclb-traefik-gh4rk                       2/2     Running     2          16h
traefik-97b44b794-lcmh4                   1/1     Running     1          16h
coredns-7448499f4d-h7xvn                  1/1     Running     2          16h
local-path-provisioner-5ff76fc89d-qvpf7   1/1     Running     1          16h
svclb-traefik-cbvmp                       2/2     Running     2          16h
metrics-server-86cbb8457f-5v9ls           1/1     Running     1          16h
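These pods are created by a handful of controllers (Deployments, DaemonSets, Jobs); to see who owns what, you can list them as well:

# List the controllers behind the kube-system pods
kubectl get deployments,daemonsets,jobs -n kube-system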

The CLI configuration

The main interface to the Kubernetes API is kubectl. This CLI is configured with what we call a kubeconfig. You can have a look at its content either by opening the file at its default location, ~/.kube/config, or by running the command

kubectl config view
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://0.0....
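A kubeconfig can reference several clusters, users, and contexts (a context binds a cluster to a user and a default namespace). The built-in config subcommands let you inspect and switch them:

# List every context known to the kubeconfig and print the active one
kubectl config get-contexts
kubectl config current-context

# Explicitly select the workshop cluster's context if it is not already active
kubectl config use-context k3d-workshop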

You can then check whether the CLI is properly configured by running

kubectl cluster-info
Kubernetes control plane is running at https://0.0.0.0:43903
CoreDNS is running at https://0.0.0.0:43903/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
Metrics-server is running at https://0.0.0.0:43903/api/v1/namespaces/kube-system/services/https:metrics-server:/proxy
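Another quick sanity check is to compare the client and server versions:

# Prints both the kubectl (client) version and the API server version
kubectl version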

Kubectl plugins

It is really easy to extend the capabilities of the kubectl CLI. Here is a basic "hello-world" example:

Write a simple script; just ensure its name is prefixed with kubectl- and put it in your PATH

cat > kubectl-helloworld <<EOF
#!/bin/bash
echo "Hello world!"
EOF

chmod u+x kubectl-helloworld && sudo mv kubectl-helloworld /usr/local/bin
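kubectl discovers plugins simply by scanning the PATH for executables whose names start with kubectl-, so you can list what it currently sees:

# List every kubectl plugin found on the PATH
kubectl plugin list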

Then it can be invoked as a kubectl subcommand

kubectl helloworld
Hello world!

Delete our test

sudo rm /usr/local/bin/kubectl-helloworld

You can find more information on how to create a kubectl plugin here

In order to benefit from the plugins written by the community, there's a tool named krew
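krew is itself installed as a kubectl plugin (its installation guide covers that part). Assuming it is already in place, you can verify it with:

# Check that the krew plugin manager is available
kubectl krew version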

Update the local index

kubectl krew update
Adding "default" plugin index from https://github.com/kubernetes-sigs/krew-index.git.
Updated the local copy of plugin index.

Browse the available plugins

kubectl krew search
NAME                            DESCRIPTION                                         INSTALLED
access-matrix                   Show an RBAC access matrix for server resources     no
advise-psp                      Suggests PodSecurityPolicies for cluster.           no
allctx                          Run commands on contexts in your kubeconfig         no
apparmor-manager                Manage AppArmor profiles for cluster.               no
...

For the current workshop we'll make use of ctx and ns

  • ctx: Switch between contexts in your kubeconfig (really helpful when you have multiple clusters to manage)
  • ns: Switch between Kubernetes namespaces (avoids having to specify the namespace on each kubectl command when working in a given namespace)

kubectl krew install ctx ns
Updated the local copy of plugin index.
Installing plugin: ctx
...
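If kubectl does not find the freshly installed plugins, make sure krew's bin directory is on your PATH (this is the default location krew uses):

# krew installs its plugins under ~/.krew/bin by default
export PATH="${KREW_ROOT:-$HOME/.krew}/bin:$PATH"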

Then you'll be able to switch between contexts (clusters) and namespaces.

kubectl ns
Context "k3d-workshop" modified.
Active namespace is "kube-system".
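kubectl ctx works the same way for contexts, for example:

# List all contexts, then switch to the workshop cluster's context
kubectl ctx
kubectl ctx k3d-workshop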

➡️ Next: Run an application on Kubernetes
