Continuing the series on my home environment, I’m next working on the Kubernetes sandbox. It’s defined as a 3-master, 5-minion cluster, but the instructions I currently have only seem to work for a 1-master, n-minion cluster (as many minions as I want to install). I still need to figure out how to add the other 2 masters.
Anyway, on to the configurations.
For the Master, you need to add the CentOS repository for Docker:
# vi /etc/yum.repos.d/virt7-docker-common-release.repo

[virt7-docker-common-release]
name=virt7-docker-common-release
baseurl=http://cbs.centos.org/repos/virt7-docker-common-release/x86_64/os/
gpgcheck=0
Once the repo file is in place, enable the repo and install Kubernetes, etcd, and flannel:
# yum -y install --enablerepo=virt7-docker-common-release kubernetes etcd flannel
Update the Kubernetes config file. The server names must be resolvable via either DNS or /etc/hosts (mine are in DNS):
# vi /etc/kubernetes/config

# Comma separated list of nodes in the etcd cluster
KUBE_ETCD_SERVERS="--etcd-servers=http://kube1:2379"

# logging to stderr means we get it in the systemd journal
KUBE_LOGTOSTDERR="--logtostderr=true"

# journal message level, 0 is debug
KUBE_LOG_LEVEL="--v=0"

# Should this cluster be allowed to run privileged docker containers
KUBE_ALLOW_PRIV="--allow-privileged=false"

# How the replication controller and scheduler find the kube-apiserver
KUBE_MASTER="--master=http://kube1:8080"
Edit the etcd.conf file. There are a bunch of entries, but the majority are commented out. Either edit the existing lines in place, or copy each one, comment out the original, and add the new value below it:
# vi /etc/etcd/etcd.conf

# [member]
ETCD_NAME=default
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379"

# [cluster]
ETCD_ADVERTISE_CLIENT_URLS="http://0.0.0.0:2379"
Edit the Kubernetes apiserver file:
# vi /etc/kubernetes/apiserver

# The address on the local server to listen to.
KUBE_API_ADDRESS="--address=0.0.0.0"

# The port on the local server to listen on.
KUBE_API_PORT="--port=8080"

# Port kubelets listen on
KUBELET_PORT="--kubelet-port=10250"

# Address range to use for services
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"

# Add your own!
KUBE_API_ARGS=""
Configure etcd on the master to hold the flannel network overlay configuration. Use a network range that isn't already in use in your environment:
$ etcdctl mkdir /kube-centos/network
$ etcdctl mk /kube-centos/network/config "{ \"Network\": \"172.30.0.0/16\", \"SubnetLen\": 24, \"Backend\": { \"Type\": \"vxlan\" } }"
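You can read the key back to confirm it was written; it should echo the same JSON you just stored (a quick sanity check using the etcd v2 etcdctl that ships with this repo):

# Read back the overlay configuration
$ etcdctl get /kube-centos/network/config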
Update the flannel configuration:
# vi /etc/sysconfig/flanneld

# etcd url location. Point this to the server where etcd runs
FLANNEL_ETCD="http://kube1:2379"

# etcd config key. This is the configuration key that flannel queries
# For address range assignment
FLANNEL_ETCD_KEY="/kube-centos/network"

# Any additional options that you want to pass
FLANNEL_OPTIONS=""
Finally, start the services. You should see a green “active (running)” in the status output for each service.
for SERVICES in etcd kube-apiserver kube-controller-manager kube-scheduler flanneld
do
  systemctl restart $SERVICES
  systemctl enable $SERVICES
  systemctl status $SERVICES
done
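As one more check that the apiserver is actually answering, you can hit its version endpoint on the insecure port configured above (just a sanity check on my part, not part of the official steps):

# Should return a small JSON blob with the Kubernetes version
$ curl http://kube1:8080/version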
Everything worked perfectly when I followed the above instructions.
On the Minions (the worker nodes), you’ll need to follow these steps. Many are the same as for the master, but I split them out to make them easier to follow. Conceivably you can copy the necessary configuration files from the master to all the Minions, with the exception of the kubelet file; a sketch of that is shown below.
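Something like this could push the shared files out from the master (a rough sketch, assuming root SSH access to the nodes; knode1 through knode5 are my node names, substitute your own):

for NODE in knode1 knode2 knode3 knode4 knode5
do
  # Copy the repo file and the configs that are identical on every node
  scp /etc/yum.repos.d/virt7-docker-common-release.repo ${NODE}:/etc/yum.repos.d/
  scp /etc/kubernetes/config ${NODE}:/etc/kubernetes/
  scp /etc/sysconfig/flanneld ${NODE}:/etc/sysconfig/
done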
For the Minions, you need to add the CentOS repository for Docker:
# vi /etc/yum.repos.d/virt7-docker-common-release.repo

[virt7-docker-common-release]
name=virt7-docker-common-release
baseurl=http://cbs.centos.org/repos/virt7-docker-common-release/x86_64/os/
gpgcheck=0
Once the repo file is in place, enable the repo and install Kubernetes, etcd, and flannel:
# yum -y install --enablerepo=virt7-docker-common-release kubernetes etcd flannel
Update the Kubernetes config file. The server names must be resolvable via either DNS or /etc/hosts (mine are in DNS):
# vi /etc/kubernetes/config

# Comma separated list of nodes in the etcd cluster
KUBE_ETCD_SERVERS="--etcd-servers=http://kube1:2379"

# logging to stderr means we get it in the systemd journal
KUBE_LOGTOSTDERR="--logtostderr=true"

# journal message level, 0 is debug
KUBE_LOG_LEVEL="--v=0"

# Should this cluster be allowed to run privileged docker containers
KUBE_ALLOW_PRIV="--allow-privileged=false"

# How the replication controller and scheduler find the kube-apiserver
KUBE_MASTER="--master=http://kube1:8080"
Update the flannel configuration:
# vi /etc/sysconfig/flanneld

# etcd url location. Point this to the server where etcd runs
FLANNEL_ETCD="http://kube1:2379"

# etcd config key. This is the configuration key that flannel queries
# For address range assignment
FLANNEL_ETCD_KEY="/kube-centos/network"

# Any additional options that you want to pass
FLANNEL_OPTIONS=""
Edit the kubelet file on each of the Minions. The main thing to note here is KUBELET_HOSTNAME. You can either leave it blank, if the actual Minion hostnames are fine, or enter the names you want to use. Leaving it blank lets you copy the file to all the nodes without having to edit it on each one, again assuming each node's hostname is the value you want for the variable:
# vi /etc/kubernetes/kubelet

# The address for the info server to serve on
KUBELET_ADDRESS="--address=0.0.0.0"

# The port for the info server to serve on
KUBELET_PORT="--port=10250"

# You may leave this blank to use the actual hostname
KUBELET_HOSTNAME="--hostname-override=knode1"   # <-------- Check the node number!

# Location of the api-server
KUBELET_API_SERVER="--api-servers=http://kube1:8080"

# Add your own!
KUBELET_ARGS=""
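If you do set the override explicitly, a one-liner like this on each Minion saves hand-editing the node number (just a sketch, assuming the node's short hostname is the name you want and the file was copied over with knode1 in it):

# Replace the copied hostname-override value with this node's short hostname
sed -i "s/--hostname-override=knode1/--hostname-override=$(hostname -s)/" /etc/kubernetes/kubelet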
And start the services on the nodes:
for SERVICES in kube-proxy kubelet flanneld docker
do
  systemctl restart $SERVICES
  systemctl enable $SERVICES
  systemctl status $SERVICES
done
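Once flannel is running, you can spot-check that each Minion picked up a subnet from the overlay range (with the vxlan backend configured earlier, flannel creates a flannel.1 interface):

# The flannel.1 interface should have an address in the 172.30.0.0/16 range
ip addr show flannel.1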
The final step is to configure kubectl:
kubectl config set-cluster default-cluster --server=http://kube1:8080
kubectl config set-context default-context --cluster=default-cluster --user=default-admin
kubectl config use-context default-context
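You can verify the context took effect before querying the cluster (just a sanity check):

# Show the kubeconfig kubectl will use and confirm the master is reachable
kubectl config view
kubectl cluster-info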
Once that's done on the Master and all the Minions, you should be able to get a node listing:
# kubectl get nodes
NAME      STATUS    AGE
knode1    Ready     1h
knode2    Ready     1h
knode3    Ready     1h
knode4    Ready     1h
knode5    Ready     1h