Kubernetes Home Lab Setup

I set up a Kubernetes home lab server. I wanted my Kubernetes setup to be one control plane node and two worker nodes. I also wanted the nodes to NOT be slow. I can handle a little slowness, it is a home lab after all, but too slow just hampers my ability to learn.

Hardware came first, and I wanted it cheap. I planned on running my Kubernetes setup in a virtualized environment. I chose Hyper-V on Windows 10 Pro, mainly because it was included with the machine. For less than $300 I got a renewed HP Compaq 8200 with an i7 CPU and 32 GB of RAM. This included a 1 TB hard drive and Windows 10 Pro.

I am not going into the details of how to set up Hyper-V on Windows 10; there are plenty of instructions out there and it is pretty easy. You will also need to set up an external virtual switch so the OS can reach the outside world and get security patches. Try this URL: https://docs.microsoft.com/en-us/virtualization/hyper-v-on-windows/quick-start/connect-to-network

The OS I use to host the Kubernetes nodes is Ubuntu 18.04. From my other posts you know that I am a Red Hat guy and not really interested in Debian-based Linux distributions, but in this instance I used Ubuntu for ease.

I set up three nodes: control1, worker1, worker2. Each of the nodes has 2 CPUs, 8 GB of RAM, and 100 GB of disk space. Why 100 GB? Why not, I have a 1 TB disk. Once the minimal OS was installed I also set up the OpenSSH server so I can get to the nodes from my laptop.

$ sudo apt install openssh-server
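
With the SSH server running, I can work on the nodes from my laptop. A quick sanity check (the user name here is just an example, substitute your own):

# From the laptop: copy an SSH key over and log in
$ ssh-copy-id kube@control1
$ ssh kube@control1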

Now let's get to the commands to set up the Kubernetes nodes. On every node we need to execute the following commands:

# Set up the needed kernel modules
$ cat <<EOF | sudo tee /etc/modules-load.d/containerd.conf
overlay
br_netfilter
EOF

# Initialize the modules
$ sudo modprobe overlay
$ sudo modprobe br_netfilter
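
If you want to confirm the modules actually loaded, a quick check:

# Both modules should show up in the loaded module list
$ lsmod | grep -e overlay -e br_netfilter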

# Kernel parameters needed
$ cat <<EOF | sudo tee /etc/sysctl.d/99-kubernetes-cri.conf
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF

# Set the kernel parameters
$ sudo sysctl --system
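
To be sure the settings took effect, you can read them back (just a sanity check, not required):

# All three values should come back as 1
$ sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward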

# Let's set up containerd
# Not Docker, because VMware Tanzu uses containerd and at work
# I am using Tanzu
$ sudo apt-get update && sudo apt-get install -y containerd
$ sudo mkdir -p /etc/containerd
$ sudo containerd config default | sudo tee /etc/containerd/config.toml
$ sudo systemctl restart containerd
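
Before moving on, it does not hurt to confirm containerd came up cleanly:

# containerd should report active (running)
$ sudo systemctl status containerd --no-pager
$ sudo ctr version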

# Turn swap off, no one needs swap (and the kubelet refuses to run with it)
$ sudo swapoff -a
$ sudo sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
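
A quick way to confirm swap is really off:

# The Swap line should show all zeros
$ free -h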

# Let's get some needed software
$ sudo apt-get update && sudo apt-get install -y apt-transport-https curl

# Get the GPG key to be sure we have good software
$ curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -

# Kubernetes!!!
$ cat <<EOF | sudo tee /etc/apt/sources.list.d/kubernetes.list
deb https://apt.kubernetes.io/ kubernetes-xenial main
EOF
$ sudo apt-get update
$ sudo apt-get install -y kubelet=1.20.1-00 kubeadm=1.20.1-00 kubectl=1.20.1-00
$ sudo apt-mark hold kubelet kubeadm kubectl
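
A sanity check that the pinned versions landed (each should report 1.20.1):

# Confirm the installed versions
$ kubeadm version -o short
$ kubectl version --client --short
$ kubelet --version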

Now that all the software is there, we need to initialize Kubernetes and get it working. One thing to remember: your pod network CIDR MUST be different from your VM network. If your VM network is on 192.168.0.0, do NOT put your pod network on the same range. This is why my pod network is on 10.0.0.0/8.
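
If you are not sure what network your VMs are on, check before picking a pod CIDR (interface names will vary):

# Show the VM's addresses and routes; pick a pod CIDR that does not overlap
$ ip -4 addr show
$ ip route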

The next commands run only on the control node. Since I am using Tanzu at work, I am installing Calico as the network add-on for this Kubernetes setup.

#Initialize the Kubernetes cluster using kubeadm 
$ sudo kubeadm init --pod-network-cidr 10.0.0.0/8

#Set kubectl access:
$ mkdir -p $HOME/.kube
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config

#Test access to cluster:
$ kubectl version

#Install the Calico Network Add-On
$ kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml

#Check status of Calico components:
$ kubectl get pods -n kube-system

Ensure all the pods are in the Running state before going on.
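If you would rather not keep re-running that command, kubectl can block until everything is ready (just a convenience, same result):

# Optional: wait up to five minutes for all kube-system pods to be Ready
$ kubectl wait --for=condition=Ready pods --all -n kube-system --timeout=300s
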

We are now going to join the worker nodes to the cluster.

#On the Control Plane Node, get the join command needed
$ sudo kubeadm token create --print-join-command
#On both Worker Nodes, paste the kubeadm join command
# you got from the kubeadm command above
$ sudo kubeadm join ...
#On the Control Plane Node, view cluster status
$ kubectl get nodes
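
With my node names, the output looks something like this (ages and exact patch versions will differ on your cluster):

NAME       STATUS   ROLES                  AGE   VERSION
control1   Ready    control-plane,master   12m   v1.20.1
worker1    Ready    <none>                 3m    v1.20.1
worker2    Ready    <none>                 2m    v1.20.1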

The next post will cover the setup of Contour with Envoy proxies. I use an Nginx setup on the control node as a load balancer for the Envoy proxies, which will let me access web pages from outside the cluster.