This guide shows you how to use kind, or Kubernetes in Docker, which is just what it sounds like: a tool for running Kubernetes (k8s) clusters locally using containers, typically for developing and/or testing software.
Read on to discover how kind can fit into (and possibly improve) your workflow!
The case for kind
Kind is a lifesaver when you need to emulate a production Kubernetes environment on your laptop.
OK, why would you want or need to do this? Probably because you're responsible for building and maintaining Kubernetes apps or services, and you need a pattern for developing and testing them locally. Also, friends tell me many orgs use kind in their CI pipelines (in addition to local use) to validate Kubernetes manifests, troubleshoot problem deployments, or experiment with new k8s features. Kind lets you do this by running the Kubernetes control plane and worker nodes as OCI containers in Docker or Podman.
(FYI: We explored using kind in place of lightweight solutions like k3s or microk8s in an article published last year.)
That's kind in a nutshell. Now let's explore how to get and use it.
Getting It
First, we need to make sure we've got Flox installed.
Next, we'll create a Flox project directory, just like we'd create one for a local GitHub repo:
mkdir kind
Once we cd into this directory, we can initialize our Flox environment (flox init) and install the required dependencies from the Flox Catalog. In addition to the kind package, we'll probably want to grab a few supporting packages, like kubectl and maybe even k9s:
flox install kind kubectl k9s && flox activate
✅ 'kind' installed to environment 'kind'
✅ 'kubectl' installed to environment 'kind'
✅ 'k9s' installed to environment 'kind'
✅ You are now using the environment 'kind'.
To stop using this environment, type 'exit'
Having activated our environment, we're ready to use it. So let's do that.
Using It
To run kind, you first need to create the configuration file (a YAML artifact) you'll use to define your K8s cluster. Then you'll invoke the kind command to deploy your cluster. A sample kind.yaml might look like this:
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
name: gutes-zeug
nodes:
- role: control-plane
  image: kindest/node:v1.32.0
- role: worker
  image: kindest/node:v1.32.0
- role: worker
  image: kindest/node:v1.32.0
- role: worker
  image: kindest/node:v1.32.0
- role: worker
  image: kindest/node:v1.32.0
This names the cluster (name: gutes-zeug) and tells kind to use Kubernetes v1.32.0 container images. This matches the version of kubectl that's installed in the environment. It also creates a K8s control plane (role: control-plane) and four worker nodes (role: worker), replicating a hypothetical prod environment. Basically, now all you've got to do is run kind create cluster --config kind.yaml to start it. From there, you can use kubectl, helm, or other tools to deploy software on it.
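For example, assuming the config above is saved as kind.yaml, spinning the cluster up and pointing kubectl at it looks roughly like this (kind prefixes the kubectl context with "kind-", so the context here is kind-gutes-zeug):

kind create cluster --config kind.yaml
kubectl cluster-info --context kind-gutes-zeug
kubectl get nodes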
Perfecting it
I'm going to cheat a little bit from here on out, but in a good way. It happens that I've prebuilt a Flox environment to automate the process of configuring and spinning up a kind cluster. Basically, I wrote some bash that asks me "Do you want to create a new kind configuration file?" and, if I say yes, prompts me to:
- Specify a cluster name (e.g., 'dev-cluster')
- Specify a Kubernetes version (e.g., 'v1.29.2' or 'latest')
- Specify the number of worker nodes (e.g., 3)
Once I've answered, it asks me: "Do you want to create and deploy your kind cluster?"
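For the curious, here's a minimal sketch of what that flow might look like; it's an illustration (prompt, generate YAML, create cluster), not my actual script, the names are hypothetical, and the real environment uses gum for its prompts rather than plain read:

#!/usr/bin/env bash
# Hypothetical sketch of a kind-bootstrapping helper: prompt for cluster
# parameters, write a kind config, then create the cluster.
read -rp "Specify cluster name (e.g., 'dev-cluster'): " name
read -rp "Specify Kubernetes version (e.g., 'v1.29.2' or 'latest'): " version
read -rp "Specify number of worker nodes (e.g., 3): " workers

config="${name}-kind.yaml"
{
  echo "kind: Cluster"
  echo "apiVersion: kind.x-k8s.io/v1alpha4"
  echo "name: ${name}"
  echo "nodes:"
  echo "- role: control-plane"
  for _ in $(seq 1 "${workers}"); do
    echo "- role: worker"
  done
} > "${config}"
echo "Created configuration file: ${config}"

# 'latest' lets kind pick its default node image; otherwise pin a kindest/node tag.
if [ "${version}" = "latest" ]; then
  kind create cluster --config "${config}"
else
  kind create cluster --config "${config}" --image "kindest/node:${version}"
fi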
You can grab a persistent local version of this environment by running the following command:
flox pull --copy barstoolbluz/kind
Or you can activate a temporary remote Flox environment by running:
flox activate -r barstoolbluz/kind
In both cases, Flox grabs my environment from FloxHub. But if you flox pull âcopy
, youâll first want to make sure youâve cd
-ed into a new project directory, after which youâll need to flox activate
it. Alternatively, flox activate -r
works anywhere. It runs a temporary version of my environment on your system and puts you into a Flox subshell. When you type exit and hit Enter
, this environment disappears.
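Concretely, the pull-and-copy route looks something like this (the directory name below is arbitrary):

mkdir kind-project && cd kind-project
flox pull --copy barstoolbluz/kind
flox activate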
(Note: my environment also has a built-in function, delete-cluster, that you can invoke to stop and destroy your cluster when you're finished. If you don't pass a cluster name on the command line, as you would with delete-cluster foo, it will prompt you for one! It's possible to automate starting and stopping kind clusters using Flox's built-in service management capabilities, but it's Friday afternoon after the holidays. You understand, right?)
My environment uses the gum package from the incredible folks at charmbracelet to power its bootstrapping wizard. Here's an example of it in action:
daedalus@askesis:~/tests/kind$ flox activate
✅ You are now using the environment 'kind'.
To stop using this environment, type 'exit'
Specify cluster name (e.g., 'dev-cluster'): zeug
Specify Kubernetes version (e.g., 'v1.29.2' or 'latest'): latest
Specify number of worker nodes (e.g., 3): 4
Created configuration file: zeug-kind.yaml
Creating cluster "zeug" ...
 ✓ Ensuring node image (kindest/node:v1.32.0) 🖼
 ✓ Preparing nodes 📦 📦 📦 📦 📦
 ✓ Writing configuration 📜
 ✓ Starting control-plane 🕹️
 ✓ Installing CNI 🔌
 ✓ Installing StorageClass 💾
 ✓ Joining worker nodes 🚜
Set kubectl context to "kind-zeug"
You can now use your cluster with:
kubectl cluster-info --context kind-zeug
Thanks for using kind! 🙂
Sure, it says it's running, but … is it really running? Let's find out!
docker ps
CONTAINER ID   IMAGE                  COMMAND                  CREATED         STATUS         PORTS                       NAMES
08017b54c933   kindest/node:v1.32.0   "/usr/local/bin/entr…"   7 minutes ago   Up 7 minutes                               kind-worker4
468376ba6996   kindest/node:v1.32.0   "/usr/local/bin/entr…"   7 minutes ago   Up 7 minutes                               kind-worker3
4b871fa37e20   kindest/node:v1.32.0   "/usr/local/bin/entr…"   7 minutes ago   Up 7 minutes                               kind-worker2
edaea18691b0   kindest/node:v1.32.0   "/usr/local/bin/entr…"   7 minutes ago   Up 7 minutes                               kind-worker
7768b09db00d   kindest/node:v1.32.0   "/usr/local/bin/entr…"   7 minutes ago   Up 7 minutes   127.0.0.1:45327->6443/tcp   kind-control-plane
So far, so good. Let's see what output we get from kubectl:
kubectl get nodes
NAME                 STATUS   ROLES           AGE     VERSION
kind-control-plane   Ready    control-plane   9m2s    v1.32.0
kind-worker          Ready    <none>          8m52s   v1.32.0
kind-worker2         Ready    <none>          8m52s   v1.32.0
kind-worker3         Ready    <none>          8m52s   v1.32.0
kind-worker4         Ready    <none>          8m52s   v1.32.0
Excellent! Now let's dig a little deeper:
kubectl get pods -n kube-system
NAME                                          READY   STATUS    RESTARTS   AGE
coredns-668d6bf9bc-fk65b                      1/1     Running   0          9m16s
coredns-668d6bf9bc-gf86f                      1/1     Running   0          9m16s
etcd-kind-control-plane                       1/1     Running   0          9m22s
kindnet-glfvf                                 1/1     Running   0          9m17s
kindnet-pk9qd                                 1/1     Running   0          9m14s
kindnet-rxjwc                                 1/1     Running   0          9m14s
kindnet-tmhj7                                 1/1     Running   0          9m14s
kindnet-tt5vw                                 1/1     Running   0          9m14s
kube-apiserver-kind-control-plane             1/1     Running   0          9m22s
kube-controller-manager-kind-control-plane    1/1     Running   0          9m22s
kube-proxy-h9wwv                              1/1     Running   0          9m14s
kube-proxy-hqjlq                              1/1     Running   0          9m14s
kube-proxy-hztd4                              1/1     Running   0          9m17s
kube-proxy-nnzqj                              1/1     Running   0          9m14s
kube-proxy-tg6vr                              1/1     Running   0          9m14s
kube-scheduler-kind-control-plane             1/1     Running   0          9m22s
Everything looks as it should. I think? (I'm far from a Kubernetes expert, after all.)
If I had a Kubernetes deployment manifest, I could apply it using kubectl, or I could deploy a Helm chart using the helm package. So far as either tool knows, it's interacting with a full-blown k8s cluster.
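To make that concrete, here's one way to apply a small manifest declaratively; the Deployment below is a throwaway example of my own invention, not anything from the walkthrough:

# Apply a minimal (hypothetical) Deployment straight from stdin.
cat <<'EOF' | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: hello-nginx
  template:
    metadata:
      labels:
        app: hello-nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.27
        ports:
        - containerPort: 80
EOF

# Confirm the rollout finished.
kubectl rollout status deployment/hello-nginx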
Speaking of which, both kubectl and helm are built into my environment. If I need specific historical versions of either, they're almost certainly in the Flox Catalog.
flox list
gum: gum (0.14.5)
helm: helm (0.9.0)
jq: jq (1.7.1)
k9s: k9s (0.32.7)
kind: kind (0.26.0)
kubectl: kubectl (1.32.0)
I can use kubectl for quick-and-dirty imperative deployments. And since I don't actually have a production app or service to build and test locally, that's exactly what I'm going to do:
kubectl create deployment nginx --image=nginx
kubectl expose deployment nginx --type=NodePort --port=80
kubectl create deployment redis --image=redis
kubectl expose deployment redis --type=ClusterIP --port=6379
kubectl create deployment rabbitmq --image=rabbitmq
kubectl expose deployment rabbitmq --type=ClusterIP --port=5672
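If you want to poke at one of these deployments yourself (this isn't part of the original walkthrough, and it assumes curl is on your PATH), port-forwarding the nginx service is the quickest sanity check:

# Forward the nginx service to a local port and hit it once.
kubectl port-forward svc/nginx 8080:80 &
sleep 2
curl -s http://localhost:8080 | head -n 4
kill %1   # stop the background port-forward when done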
I've installed k9s in my environment, which I can use to view the status of my deployments. The screenshot below shows that nginx, redis, and rabbitmq have deployed successfully.
I've saved one detail for last. If you don't have a container runtime like Docker or Podman installed ... my environment won't activate. Deflating-trumpet sound, right? Not so fast! You can remotely activate Flox's Colima example environment to get an on-when-you-need-it, off-when-you-don't container runtime:
flox activate -s -r flox/colima
Colima gives you Docker in a VM, much as Docker Desktop does on Windows or macOS, just without the heavyweight desktop app.
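In practice, one way to run this (the two-terminal layout is just my suggestion, not a requirement) is to leave the Colima environment running in one shell and do your kind work in another; once the Docker socket is up, everything above works unchanged:

# Terminal 1: start the Colima-backed container runtime (same command as above).
flox activate -s -r flox/colima

# Terminal 2: in your kind environment, confirm Docker is answering, then build the cluster.
docker ps
kind create cluster --config kind.yaml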
Summary
So that's kind. IYKYK: if you know, you know … kind is nothing short of miraculous.
Try experimenting with kind yourself: It's an excellent way to learn K8s locally, right on your laptop, without setting up a multi-node (or multi-VM) lab environment. (Trust me, I've been there!) And it's becoming an increasingly popular solution for emulating a production K8s environment, both locally and in CI.
Happy hacking!