Overview
This article covers building a Kubernetes cluster with kubeadm, along with the post-installation tasks that follow.
Build Cluster
On the first control plane node, run the kubeadm init command.
kubeadm init --config kubeadm-config.yaml --upload-certs
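The contents of kubeadm-config.yaml depend on the environment. A minimal sketch for an HA control plane on Kubernetes 1.25 might look like the following; the control plane endpoint and pod subnet are placeholders, not the actual values used here.

# Minimal kubeadm-config.yaml sketch (placeholder values)
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: v1.25.7
# Load-balanced API endpoint shared by all three control plane nodes (placeholder)
controlPlaneEndpoint: "k8s-api.example.internal:6443"
networking:
  # Must match what the network plugin (Calico) will be configured with
  podSubnet: "10.244.0.0/16"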
After the first node has been initialized, kubeadm prints the join commands used to add the remaining two control plane nodes and the three worker nodes to the new cluster. Join the second control plane node with the control plane string, then the third. Add them one at a time; if both are started together, the third will time out while the second is still pulling images.
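The control plane join command printed by kubeadm init looks roughly like this; the endpoint, token, CA cert hash, and certificate key shown are placeholders for the values from your own output.

kubeadm join k8s-api.example.internal:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash> \
    --control-plane --certificate-key <certificate-key>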
When all three control plane nodes are up, use the worker join string from the first control plane node to add the three worker nodes. They can be joined in parallel or sequentially; either way they join quickly.
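The worker join command is the same minus the control plane flags; again, substitute the token and hash printed by kubeadm init.

kubeadm join k8s-api.example.internal:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash>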
You can then check the status of the cluster.
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
bldr0cuomknode1.dev.internal.pri Ready <none> 8d v1.25.7
bldr0cuomknode2.dev.internal.pri Ready <none> 8d v1.25.7
bldr0cuomknode3.dev.internal.pri Ready <none> 8d v1.25.7
bldr0cuomkube1.dev.internal.pri Ready control-plane 8d v1.25.7
bldr0cuomkube2.dev.internal.pri Ready control-plane 8d v1.25.7
bldr0cuomkube3.dev.internal.pri Ready control-plane 8d v1.25.7
Check the pods as well to make sure everything is running as expected.
$ kubectl get pods -A
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-565d847f94-bp2c7 1/1 Running 2 8d
kube-system coredns-565d847f94-twlvf 1/1 Running 0 3d17h
kube-system etcd-bldr0cuomkube1.dev.internal.pri 1/1 Running 0 4d
kube-system etcd-bldr0cuomkube2.dev.internal.pri 1/1 Running 1 (4d ago) 4d
kube-system etcd-bldr0cuomkube3.dev.internal.pri 1/1 Running 0 18h
kube-system kube-apiserver-bldr0cuomkube1.dev.internal.pri 1/1 Running 0 4d
kube-system kube-apiserver-bldr0cuomkube2.dev.internal.pri 1/1 Running 0 4d
kube-system kube-apiserver-bldr0cuomkube3.dev.internal.pri 1/1 Running 0 18h
kube-system kube-controller-manager-bldr0cuomkube1.dev.internal.pri 1/1 Running 0 4d
kube-system kube-controller-manager-bldr0cuomkube2.dev.internal.pri 1/1 Running 0 4d
kube-system kube-controller-manager-bldr0cuomkube3.dev.internal.pri 1/1 Running 0 18h
kube-system kube-proxy-bpcfh 1/1 Running 1 8d
kube-system kube-proxy-jl469 1/1 Running 1 8d
kube-system kube-proxy-lrbh6 1/1 Running 2 8d
kube-system kube-proxy-n9q4f 1/1 Running 2 8d
kube-system kube-proxy-tf9wt 1/1 Running 1 8d
kube-system kube-proxy-v66pt 1/1 Running 2 8d
kube-system kube-scheduler-bldr0cuomkube1.dev.internal.pri 1/1 Running 0 4d
kube-system kube-scheduler-bldr0cuomkube2.dev.internal.pri 1/1 Running 0 4d
kube-system kube-scheduler-bldr0cuomkube3.dev.internal.pri 1/1 Running 0 18h
Certificate Signing Requests
When the cluster is up, you'll need to approve some CSRs generated as a result of the kubelet configuration updates. It's an easy process with one caveat: the certificates are only good for a year, so you'll need to do this again next year. Make a note.
$ kubectl get csr
NAME AGE SIGNERNAME REQUESTOR REQUESTEDDURATION CONDITION
csr-4kr8m 20s kubernetes.io/kubelet-serving system:node:bldr0cuomkube3.dev.internal.pri <none> Pending
csr-fqpvs 28s kubernetes.io/kubelet-serving system:node:bldr0cuomknode3.dev.internal.pri <none> Pending
csr-m526d 27s kubernetes.io/kubelet-serving system:node:bldr0cuomkube2.dev.internal.pri <none> Pending
csr-nc6t7 27s kubernetes.io/kubelet-serving system:node:bldr0cuomkube1.dev.internal.pri <none> Pending
csr-wxhfd 28s kubernetes.io/kubelet-serving system:node:bldr0cuomknode1.dev.internal.pri <none> Pending
csr-z42x4 28s kubernetes.io/kubelet-serving system:node:bldr0cuomknode2.dev.internal.pri <none> Pending
$ kubectl certificate approve csr-4kr8m
certificatesigningrequest.certificates.k8s.io/csr-4kr8m approved
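Rather than approving each CSR by name, a one-liner like the following approves every pending request in one pass; it assumes all pending CSRs in the list are ones you expect and want approved.

$ kubectl get csr --no-headers | awk '$NF == "Pending" {print $1}' | xargs kubectl certificate approve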
Security Settings
Per the CIS Kubernetes Benchmark, several of the installed files and directories need their ownership and permissions updated. Review the CIS documentation to see exactly which files and directories need to be changed.
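As an illustration only, the checks typically cover ownership and permissions on the static pod manifests and kubeconfig files on each control plane node; confirm the exact files and modes against the benchmark for your Kubernetes version before running anything like the following.

# Placeholder example of CIS-style hardening; verify files and modes against the benchmark first
chown root:root /etc/kubernetes/manifests/*.yaml
chmod 600 /etc/kubernetes/manifests/*.yaml
chown root:root /etc/kubernetes/admin.conf
chmod 600 /etc/kubernetes/admin.conf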
Image Updates
As noted earlier, update the Kubernetes manifests to point to the local image registry. These files live on each control plane node in the /etc/kubernetes/manifests directory. In addition, update the imagePullPolicy to Always, which ensures you always pull the correct, uncorrupted image. The kube and etcd containers restart automatically when the manifest files are updated.
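A sketch of the relevant portion of one manifest (here kube-apiserver.yaml) is shown below; the registry hostname is a placeholder for the local registry, not the actual address used in this environment.

# Fragment of /etc/kubernetes/manifests/kube-apiserver.yaml (placeholder registry hostname)
spec:
  containers:
  - name: kube-apiserver
    image: registry.example.internal/kube-apiserver:v1.25.7
    imagePullPolicy: Always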
Conclusion
The cluster is now up. Next we'll add the network management layer (Calico), metrics-server, an ingress controller, and, for the development cluster, a continuous delivery tool (argocd).