Overview
After discussion between my manager and the SysEng management team, it’s been decided that I’m to take over the Kubernetes infrastructure. The one SysEng who’s still here has put in his time and the belief is that management of Kubernetes should be an OpsEng task. Since I have the most experience due to helping to manage the existing servers plus working with the SysEng, I’m owning the environment. This documentation has been created to provide insights into my installation process and as a starting point when building a new cluster.
Script Environment
Like the 1.2 environment, the current clusters are being built with shell scripts. Also like the 1.2 environment, the scripts are specific to the cluster being built: you have to modify the set_variables.sh script for each cluster.
SysEng Scripts:
- set_variables.sh – Set the variables used by the following scripts
- functions.lib – Functions used by the following scripts
- generate_certs.sh – Generate the ca and sub certs for all systems
- generate_configs.sh – Generate the configuration files
- copyfiles.sh – Copy the zipped files to the servers
- bootstrap_cluster.sh – Dialog system to start etcd and the control plane
- configure_cluster.sh – Dialog system to let you manage users and RBAC
- create_users.sh – Create users
- deploy_addons.sh – Deploy the additional tools
- kubectl_remote_access.sh – Setting up a config for the users to access the dashboard
- nuke_env.sh – Remove all parts of Kubernetes
When I received the 1.9 scripts from SysEng, I created new scripts based on them, but cluster agnostic and using a .config file to separate the different clusters. This also let me check them into our revision control infrastructure. As we’re using certificates in two different locations, I’ve split the process into three packages: the cert generation scripts, the cluster build scripts, and the cluster admin scripts.
- kubecerts – This package contains all the scripts used to build the CA along with all the server and user certificates. These files are initially located on the build server and are part of the kubernetes and kubeadm packages.
- kubernetes – This package contains all the scripts used to build the cluster. These scripts are located in /var/tmp/kubernetes.
- kubeadm – This package contains all the scripts used to manage users and namespaces. These files are located on the management server in the /usr/local/admin/kubernetes/[sitename] directory.
Kubernetes The Hard Way
I used Kelsey Hightower’s Kubernetes The Hard Way site to better understand the intricacies of Kubernetes that I hadn’t picked up from managing the existing clusters and working with SysEng. Kelsey Hightower works for Google and started his instruction with 1.8, so his site was spot on and gave me what I needed to move forward.
Environment
As my workload is expanding, here are the environments. The non-Production sites are all in the local offices, of course. For Production, the remote DR site really is a remote DR site 🙂
- Sandbox site
- Dev local and remote DR site
- QA local and remote DR site
- Integration Lab local and remote DR site
- Production local and remote DR site.
The Sandbox, Dev, and QA sites each have three masters and three worker nodes. The Integration Lab and Production sites have three masters and seven worker nodes.
I needed to create a new local Integration Lab site, but was able to reuse the old Integration Lab DR site, as it was spec’d for the old 1.2 installation but never used. I rebuilt those servers for the 1.9 installations.
Additional tools installed are as follows.
- kubedns
- heapster
- influxdb
- grafana
- dashboard
The Configuration Files
Each site has its own configuration file located in the config directory.
Each package has its own .config file, as the kubernetes.config file might have a few extra settings that the kubecerts.config file may or may not have.
You’ll use the install script located in the root of each package to copy the site-specific configuration file into the root as .config, which is then used by all the scripts.
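As a sketch of what the install script boils down to (the variable names and menu-less site selection here are my placeholders, not the actual script), it simply copies the chosen site’s configuration into place. The demo builds its own scratch tree so it can run anywhere:

```shell
#!/bin/sh
# Hypothetical sketch of the install script's job: copy the chosen
# site's configuration into the package root as .config. A temp
# directory stands in for the real package root.
set -eu

root=$(mktemp -d)
mkdir "$root/config"
printf 'SITENAME=sandbox\n' > "$root/config/sandbox.config"

site="sandbox"    # normally chosen when you run install
if [ ! -f "$root/config/$site.config" ]; then
    echo "no configuration for site: $site" >&2
    exit 1
fi
cp "$root/config/$site.config" "$root/.config"

# Every other script then sources the selected configuration.
. "$root/.config"
echo "selected site: $SITENAME"
```

Because the selected file always lands at the same .config path, none of the downstream scripts need to know which site they’re operating on.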
The kubecerts Scripts And Configurations
The kubecerts scripts use the CloudFlare binaries (cfssl and cfssljson).
Configurations
- .config – This file contains the site name, CA information, Load Balancer information, Service IP, and a list of master and worker server names and IP addresses. This file is located in the installation root directory.
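For a feel of the shape of that file, here is a hypothetical sketch of a kubecerts .config; every value below is a placeholder, not a real site setting:

```shell
# Hypothetical sketch of a kubecerts .config file. All values are
# placeholders, not real site settings.
SITENAME="sandbox"
CA_COUNTRY="US"
CA_ORG="Example Corp"
LOADBALANCER_IP="10.0.0.100"
SERVICE_IP="10.32.0.1"
MASTERS="master01:10.0.0.11 master02:10.0.0.12 master03:10.0.0.13"
WORKERS="worker01:10.0.0.21 worker02:10.0.0.22 worker03:10.0.0.23"
```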
Scripts
- version – contains the Kubernetes version.
- install – shell script that selects the correct site configuration file from the config directory for the cluster you’re on.
- bin/buildcerts.sh – shell script that uses the .config data file to create master and worker directories and create certificates.
- bin/cleanup – script that deletes it all so you can do it again.
- bin/config – directory that contains the csr files, formatted in json which are used by the buildcerts.sh script to create certificates.
- bin/config/encryption-config.yaml – which is needed by the kube-apiserver manifest.
- config – directory that contains all the site config files.
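The encryption-config.yaml generation plausibly looks like the following (a sketch in the 1.9-era EncryptionConfig format; the real buildcerts.sh may format it differently, and the scratch directory is just for the demo):

```shell
#!/bin/sh
# Sketch: generate the random key and write encryption-config.yaml in
# the 1.9-era EncryptionConfig format used by the kube-apiserver.
set -eu

tmp=$(mktemp -d)
ENCRYPTION_KEY=$(head -c 32 /dev/urandom | base64 | tr -d '\n')

cat > "$tmp/encryption-config.yaml" <<EOF
kind: EncryptionConfig
apiVersion: v1
resources:
  - resources:
      - secrets
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: ${ENCRYPTION_KEY}
      - identity: {}
EOF

# 32 random bytes base64-encode to exactly 44 characters.
echo "key length: ${#ENCRYPTION_KEY}"
```

This is also where the occasional bad-character error in the final string comes from: the key is random, so regenerating it (via the cleanup script and a fresh run) clears the problem.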
Installation
You’ll run these commands on the build server.
- Run the install script to select the site you’ll be building certificates for.
- In the bin directory, run the buildcerts.sh shell script.
- The encryption key used by the encryption-config.yaml file occasionally contains an invalid character in the final string, which causes an error. If that happens, run the cleanup script to remove all the certs and start over at step 1.
- Using the tar command, create a site-specific tar file (named after the apiserver hostname).
- Copy the site.tar file into the /opt/static/kubernetes/sitecerts directory.
When the kubernetes script is run, it copies the site.tar file into the kubernetes directory and untars the certificates. Then the entire package is scp’d over to all nodes in the cluster where you then begin the kubernetes installation process.
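The packaging steps above might look like this (the directory layout, cert filenames, and apiserver hostname are illustrative; the demo uses a scratch tree in place of /opt/static/kubernetes/sitecerts so it can run anywhere):

```shell
#!/bin/sh
# Sketch of the packaging step: tar the generated cert directories
# into a site-specific archive named after the apiserver hostname,
# then drop it where the kubernetes package expects it. All paths
# and hostnames are placeholders.
set -eu

work=$(mktemp -d)
mkdir -p "$work/master" "$work/worker" "$work/sitecerts"
touch "$work/master/kube-apiserver.pem" "$work/worker/kubelet.pem"

site="apiserver01"    # hypothetical apiserver hostname
( cd "$work" && tar -cf "$site.tar" master worker )
cp "$work/$site.tar" "$work/sitecerts/"   # stands in for /opt/static/kubernetes/sitecerts

tar -tf "$work/sitecerts/$site.tar"
```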
The kubernetes Scripts and Configurations
The kubernetes static directory contains all the necessary binary files used when building the clusters. Make sure you have the various kubectl binaries plus the kube-apiserver, kube-controller-manager, kube-scheduler, and kube-proxy binaries. In addition, you’ll need the cri and cni archives, along with the etcd and etcdctl binaries used to manage etcd. As we’re also using the CloudFlare binaries, make sure the cfssl and cfssljson binaries are in place for the certificates.
Configurations
This configuration is a bit more complicated, but very similar to the kubecerts one. It has the site name, CA information, Load Balancer information, Service IP, and a list of master and worker nodes, plus path information for the various kubernetes certs and configurations and some configuration options. See the base .config file for the details.
Scripts
- version – contains the Kubernetes version
- install – shell script that selects the correct site configuration file from the config directory for the cluster you’re on.
- bin/buildadmin.sa.sh – used to build the first cluster admin.
- bin/installcerts.sh – installs all the generated certificates into their appropriate directories.
- etcd/buildetcd.sh – installs and configures the etcd binary.
- etcd/config – directory with the etcd.service configuration file.
- master/buildmaster.sh – shell script that installs and configures a master server.
- master/config – directory with the configuration files for the core services.
- tools – various scripts used for managing the cluster.
- worker/buildrepo.sh – this script pulls the certificate from the Artifactory server so images can be pulled.
- worker/buildworker.sh – script that installs and configures a worker node.
- worker/config – various configuration files used when configuring a worker node.
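For reference, the etcd.service file that buildetcd.sh installs from etcd/config might look roughly like this minimal unit (every name, path, and IP below is a placeholder, not our actual configuration):

```shell
#!/bin/sh
# Sketch: write a minimal etcd systemd unit like the one buildetcd.sh
# installs from etcd/config. Every value below is a placeholder.
set -eu

unit=$(mktemp)
cat > "$unit" <<'EOF'
[Unit]
Description=etcd

[Service]
ExecStart=/usr/local/bin/etcd \
  --name etcd01 \
  --cert-file=/etc/etcd/kubernetes.pem \
  --key-file=/etc/etcd/kubernetes-key.pem \
  --trusted-ca-file=/etc/etcd/ca.pem \
  --initial-cluster etcd01=https://10.0.0.11:2380 \
  --data-dir=/var/lib/etcd
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF

grep -c 'ExecStart' "$unit"
```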
Installation
You’ll be running scripts on every node of the cluster.
- Run the install script to select the site you’ll be installing the cluster for.
- In the bin directory, run the installcerts.sh shell script. This copies all the certs into the appropriate directories.
Master Servers
- In the etcd directory, run the buildetcd.sh shell script. This installs etcd.
- In the master directory, run the buildmaster.sh shell script. This installs the kube-apiserver, kube-controller-manager, and kube-scheduler services.
Once done, in the bin directory run the buildadmin.sa.sh shell script. This creates your serviceaccount and a kubeconfig file you use to access the servers from the management server.
Worker Nodes
- In the worker directory, run the buildworker.sh shell script.
- Then run the buildrepo.sh script. This ensures the certificate from the Artifactory server is installed on the node; otherwise you can’t download images from Artifactory.
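What buildrepo.sh accomplishes boils down to something like this (the registry hostname is made up, and a scratch directory stands in for the runtime’s real certs.d path):

```shell
#!/bin/sh
# Sketch of what buildrepo.sh accomplishes: put the Artifactory CA
# certificate where the container runtime looks for registry certs
# so image pulls succeed. The registry hostname is a placeholder and
# a temp directory stands in for /etc/docker/certs.d.
set -eu

registry="artifactory.example.com"    # hypothetical registry hostname
certdir=$(mktemp -d)/certs.d/$registry
mkdir -p "$certdir"

# The real script fetches this cert from the Artifactory server; here
# a placeholder file stands in for the download.
printf -- '-----BEGIN CERTIFICATE-----\n' > "$certdir/ca.crt"

ls "$certdir"
```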
The kubeadm Scripts and Configurations
There are two parts to this process: completing the installation of the cluster and managing users. This section describes both, but the Installation steps below cover completing the installation of the cluster; user management is covered under User Creation.
Configurations
The configuration file is pretty minor, containing the cluster name, the certificate details (users need new certs), Load Balancer, and Artifactory credentials.
Scripts
- version – contains the Kubernetes version.
- install – shell script that selects the correct site configuration file from the config directory for the cluster you’re on.
- bin/buildadmin.sa.sh – script used to create the cluster admin serviceaccount.
- bin/builduser.sh – script used to create namespace specific users.
- bin/config – email text files used to notify users and admins of cluster management.
- bindings/clusterrolebinding.yaml – the main file used to tie service accounts to the cluster.
- config – directory with csr files for admins and users.
- roles/clusterrole.yaml – role used to manage the cluster.
- tools – directory with tools used to manage the cluster.
- yaml/buildsystem.sh – shell script used to apply the additional yaml files to install tools.
Installation
- Run the install script to select the site you’ll be managing the cluster with.
- In the bindings directory, run the kubectl apply -f clusterrolebinding.yaml command.
- In the roles directory, run the kubectl apply -f clusterrole.yaml command.
- In the roles directory, run the kubectl apply -f dashboard.yaml command.
- In the yaml directory, run the buildsystem.sh shell script to install grafana, heapster, influxdb, kube-dns, and the kubernetes-dashboard.
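buildsystem.sh presumably amounts to looping kubectl apply over the add-on manifests. A dry-run sketch (it only collects and prints the commands so it runs without a cluster, and the manifest filenames are my assumptions):

```shell
#!/bin/sh
# Sketch of a buildsystem.sh-style loop over the add-on manifests.
# The kubectl commands are collected and printed rather than executed;
# manifest filenames are assumptions.
set -eu

out=$(for manifest in grafana heapster influxdb kube-dns kubernetes-dashboard; do
    echo "kubectl apply -f $manifest.yaml"
done)
printf '%s\n' "$out"
```

Dropping the echo (running kubectl directly inside the loop) would apply the manifests for real.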
User Creation
The buildadmin.sa.sh and builduser.sh shell scripts are part of the overall kubeadm script package.
Admins
This is the process in the buildadmin.sa.sh shell script.
- Create a serviceaccount. Since we don’t have an auth interface, users are created as service accounts.
- Create a clusterrolebinding to the cluster-admin clusterrole.
- Create a username.kubeconfig file in the /usr/local/admin/kubernetes/users directory.
- Generate a password from the password script.
- Create a zip file using the password.
- Save the file into the /var/www/html/kubernetes directory.
- Send the user an email.
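The admin steps map to kubectl commands roughly like these (echoed rather than executed so the sketch runs anywhere; the username is hypothetical, and the inline password generation merely stands in for the real password script):

```shell
#!/bin/sh
# Sketch of the buildadmin.sa.sh flow. The kubectl commands are
# echoed, not executed, and the username is a placeholder.
set -eu

user="jdoe"    # hypothetical admin username

echo kubectl create serviceaccount "$user" -n kube-system
echo kubectl create clusterrolebinding "$user-admin" \
    --clusterrole=cluster-admin --serviceaccount="kube-system:$user"

# Throwaway password for the zip file (stands in for the password
# script; strips base64 symbols and keeps 12 characters).
password=$(head -c 32 /dev/urandom | base64 | tr -d '/+=\n' | cut -c1-12)
echo "password length: ${#password}"
```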
Users
This is the process in the builduser.sh shell script.
- Create a serviceaccount. Since we don’t have an auth interface, users are created as service accounts.
- Create a namespace, as users are members of namespaces and only have read-only (or view) access to the cluster itself.
- Create a rolebinding so the user has edit access to the namespace.
- Create a username.kubeconfig file in the /usr/local/admin/kubernetes/users directory.
- Generate a password from the password script.
- Create a zip file using the password.
- Copy the file into /var/www/html/kubernetes.
- Send the user an email with their password and the location of their encrypted kubeconfig file.
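The namespace and RBAC steps for a regular user look roughly like this (the commands are collected and printed rather than executed, and the user and namespace names are placeholders):

```shell
#!/bin/sh
# Sketch of the builduser.sh flow: a namespace-scoped serviceaccount
# with edit access to its namespace. Commands are printed, not run;
# the user and namespace names are placeholders.
set -eu

user="jdoe"
namespace="team-a"

cmds=$(cat <<EOF
kubectl create namespace $namespace
kubectl create serviceaccount $user -n $namespace
kubectl create rolebinding $user-edit -n $namespace --clusterrole=edit --serviceaccount=$namespace:$user
EOF
)
printf '%s\n' "$cmds"
```

Using a rolebinding (rather than a clusterrolebinding) is what scopes the edit access to the user’s own namespace.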
Completion
This completes the installation process. When done, you should have a functioning cluster and access to the cluster.