Kubernetes Manual Upgrade to 1.18.8

Upgrading Kubernetes Clusters

This documentation describes the manual process for upgrading the server Operating Systems, upgrading Kubernetes to 1.18.8, and applying any additional upgrades. It provides example output and should help with troubleshooting should the automated processes experience a problem.

All of the steps required to prepare for an installation should be completed prior to starting this process.

Server and Kubernetes Upgrades

Patch Servers

As part of quarterly upgrades, the Operating Systems for all servers need to be upgraded.

For the control plane, there isn’t a “pool,” so just patch each server and reboot it. Do one server at a time and check the status of the cluster before moving on to the next master server in the control plane.

For the worker nodes, you’ll need to drain each worker before patching and rebooting. Run the following command to confirm both that the current version is 1.17.6 and that all nodes are in a Ready state to be patched:

$ kubectl get nodes
NAME                           STATUS   ROLES    AGE    VERSION
ndld0cuomkube1.internal.pri    Ready    master   259d   v1.17.6
ndld0cuomkube2.internal.pri    Ready    master   259d   v1.17.6
ndld0cuomkube3.internal.pri    Ready    master   259d   v1.17.6
ndld0cuomknode1.internal.pri   Ready    <none>   259d   v1.17.6
ndld0cuomknode2.internal.pri   Ready    <none>   259d   v1.17.6
ndld0cuomknode3.internal.pri   Ready    <none>   259d   v1.17.6

To drain a server, patch, and then return the server to the pool, follow the steps below:

$ kubectl drain [nodename] --delete-local-data --ignore-daemonsets

Then patch the server and reboot:

# yum upgrade -y
# shutdown -r now

Finally, bring the node back into the pool:

$ kubectl uncordon [nodename]
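
The whole drain/patch/uncordon cycle can also be scripted per worker. This is a minimal sketch, assuming passwordless SSH as root to each worker and a working kubeconfig on the host running the loop; the node names are from the listing above, and the sleep values are guesses that may need tuning:

for node in ndld0cuomknode1.internal.pri ndld0cuomknode2.internal.pri ndld0cuomknode3.internal.pri; do
  kubectl drain "$node" --delete-local-data --ignore-daemonsets
  ssh "root@$node" 'yum upgrade -y; shutdown -r now'
  sleep 60   # give the node time to actually go down before polling
  until kubectl get node "$node" | grep -qw Ready; do sleep 15; done
  kubectl uncordon "$node"
done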

Update Versionlock Information

Currently the clusters have locked kubernetes to version 1.17.6, kubernetes-cni to version 0.7.5, and docker to 1.13.1-161. The locks on each server need to be removed and new locks put into place for the new version of kubernetes, kubernetes-cni, and docker where appropriate.

Versionlock file location: /etc/yum/pluginconf.d/
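
To review the locks currently in place before making any changes:

# yum versionlock list
# cat /etc/yum/pluginconf.d/versionlock.list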

Simply delete the existing locks:

/usr/bin/yum versionlock delete "kubelet.*"
/usr/bin/yum versionlock delete "kubectl.*"
/usr/bin/yum versionlock delete "kubeadm.*"
/usr/bin/yum versionlock delete "kubernetes-cni.*"
/usr/bin/yum versionlock delete "docker.*"
/usr/bin/yum versionlock delete "docker-common.*"
/usr/bin/yum versionlock delete "docker-client.*"
/usr/bin/yum versionlock delete "docker-rhel-push-plugin.*"

And then add in the new locks at the desired levels:

/usr/bin/yum versionlock add "kubelet-1.18.8-0.*"
/usr/bin/yum versionlock add "kubectl-1.18.8-0.*"
/usr/bin/yum versionlock add "kubeadm-1.18.8-0.*"
/usr/bin/yum versionlock add "docker-1.13.1-162.*"
/usr/bin/yum versionlock add "docker-common-1.13.1-162.*"
/usr/bin/yum versionlock add "docker-client-1.13.1-162.*"
/usr/bin/yum versionlock add "docker-rhel-push-plugin-1.13.1-162.*"
/usr/bin/yum versionlock add "kubernetes-cni-0.8.6-0.*"

Then install the updated kubernetes and docker binaries. Note that the versionlocked versions and the installed version must match:

/usr/bin/yum install kubelet-1.18.8-0.x86_64
/usr/bin/yum install kubectl-1.18.8-0.x86_64
/usr/bin/yum install kubeadm-1.18.8-0.x86_64
/usr/bin/yum install docker-1.13.1-162.git64e9980.el7_8.x86_64
/usr/bin/yum install docker-common-1.13.1-162.git64e9980.el7_8.x86_64
/usr/bin/yum install docker-client-1.13.1-162.git64e9980.el7_8.x86_64
/usr/bin/yum install docker-rhel-push-plugin-1.13.1-162.git64e9980.el7_8.x86_64
/usr/bin/yum install kubernetes-cni-0.8.6-0.x86_64
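
One way to confirm the installed versions match the new locks:

# rpm -q kubelet kubectl kubeadm kubernetes-cni docker
# yum versionlock list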

Upgrade Kubernetes

Using the kubeadm command on the first master server, you can review the plan and then upgrade the cluster:

[root@ndld0cuomkube1 ~]# kubeadm upgrade plan
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[preflight] Running pre-flight checks.
[upgrade] Running cluster health checks
[upgrade] Fetching available versions to upgrade to
[upgrade/versions] Cluster version: v1.17.6
[upgrade/versions] kubeadm version: v1.18.8
I0901 16:37:26.141057   32596 version.go:252] remote version is much newer: v1.19.0; falling back to: stable-1.18
[upgrade/versions] Latest stable version: v1.18.8
[upgrade/versions] Latest stable version: v1.18.8
[upgrade/versions] Latest version in the v1.17 series: v1.17.11
[upgrade/versions] Latest version in the v1.17 series: v1.17.11

Components that must be upgraded manually after you have upgraded the control plane with 'kubeadm upgrade apply':
COMPONENT   CURRENT       AVAILABLE
Kubelet     9 x v1.17.6   v1.17.11

Upgrade to the latest version in the v1.17 series:

COMPONENT            CURRENT   AVAILABLE
API Server           v1.17.6   v1.17.11
Controller Manager   v1.17.6   v1.17.11
Scheduler            v1.17.6   v1.17.11
Kube Proxy           v1.17.6   v1.17.11
CoreDNS              1.6.5     1.6.7
Etcd                 3.4.3     3.4.3-0

You can now apply the upgrade by executing the following command:

	kubeadm upgrade apply v1.17.11

_____________________________________________________________________

Components that must be upgraded manually after you have upgraded the control plane with 'kubeadm upgrade apply':
COMPONENT   CURRENT       AVAILABLE
Kubelet     9 x v1.17.6   v1.18.8

Upgrade to the latest stable version:

COMPONENT            CURRENT   AVAILABLE
API Server           v1.17.6   v1.18.8
Controller Manager   v1.17.6   v1.18.8
Scheduler            v1.17.6   v1.18.8
Kube Proxy           v1.17.6   v1.18.8
CoreDNS              1.6.5     1.6.7
Etcd                 3.4.3     3.4.3-0

You can now apply the upgrade by executing the following command:

	kubeadm upgrade apply v1.18.8

_____________________________________________________________________

There are likely newer versions of Kubernetes control plane containers available. In order to maintain consistency across all clusters, only upgrade the masters to 1.18.8.

[root@ndld0cuomkube1 ~]# kubeadm upgrade apply 1.18.8
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[preflight] Running pre-flight checks.
[upgrade] Running cluster health checks
[upgrade/version] You have chosen to change the cluster version to "v1.18.8"
[upgrade/versions] Cluster version: v1.17.6
[upgrade/versions] kubeadm version: v1.18.8
[upgrade/confirm] Are you sure you want to proceed with the upgrade? [y/N]: y
[upgrade/prepull] Will prepull images for components [kube-apiserver kube-controller-manager kube-scheduler etcd]
[upgrade/prepull] Prepulling image for component etcd.
[upgrade/prepull] Prepulling image for component kube-apiserver.
[upgrade/prepull] Prepulling image for component kube-controller-manager.
[upgrade/prepull] Prepulling image for component kube-scheduler.
[apiclient] Found 0 Pods for label selector k8s-app=upgrade-prepull-etcd
[apiclient] Found 0 Pods for label selector k8s-app=upgrade-prepull-kube-scheduler
[apiclient] Found 3 Pods for label selector k8s-app=upgrade-prepull-kube-controller-manager
[apiclient] Found 3 Pods for label selector k8s-app=upgrade-prepull-kube-apiserver
[apiclient] Found 3 Pods for label selector k8s-app=upgrade-prepull-etcd
[apiclient] Found 3 Pods for label selector k8s-app=upgrade-prepull-kube-scheduler
[upgrade/prepull] Prepulled image for component kube-scheduler.
[upgrade/prepull] Prepulled image for component kube-controller-manager.
[upgrade/prepull] Prepulled image for component kube-apiserver.
[upgrade/prepull] Prepulled image for component etcd.
[upgrade/prepull] Successfully prepulled the images for all the control plane components
[upgrade/apply] Upgrading your Static Pod-hosted control plane to version "v1.18.8"...
Static pod: kube-apiserver-ndld0cuomkube1.internal.pri hash: bd6dbccfa412f07652db6f47485acd35
Static pod: kube-controller-manager-ndld0cuomkube1.internal.pri hash: 825ea808f14bdad0c2d98e038547c430
Static pod: kube-scheduler-ndld0cuomkube1.internal.pri hash: 1caf2ef6d0ddace3294395f89153cef9
[upgrade/etcd] Upgrading to TLS for etcd
[upgrade/etcd] Non fatal issue encountered during upgrade: the desired etcd version for this Kubernetes version "v1.18.8" is "3.4.3-0", but the current etcd version is "3.4.3". Won't downgrade etcd, instead just continue
[upgrade/staticpods] Writing new Static Pod manifests to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests766631209"
W0901 16:44:07.979317   10575 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[upgrade/staticpods] Preparing for "kube-apiserver" upgrade
[upgrade/staticpods] Renewing apiserver certificate
[upgrade/staticpods] Renewing apiserver-kubelet-client certificate
[upgrade/staticpods] Renewing front-proxy-client certificate
[upgrade/staticpods] Renewing apiserver-etcd-client certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-apiserver.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2020-09-01-16-44-07/kube-apiserver.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-apiserver-ndld0cuomkube1.internal.pri hash: bd6dbccfa412f07652db6f47485acd35
Static pod: kube-apiserver-ndld0cuomkube1.internal.pri hash: 19eda19deaac25d2bb9327b8293ac498
[apiclient] Found 3 Pods for label selector component=kube-apiserver
[upgrade/staticpods] Component "kube-apiserver" upgraded successfully!
[upgrade/staticpods] Preparing for "kube-controller-manager" upgrade
[upgrade/staticpods] Renewing controller-manager.conf certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-controller-manager.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2020-09-01-16-44-07/kube-controller-manager.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-controller-manager-ndld0cuomkube1.internal.pri hash: 825ea808f14bdad0c2d98e038547c430
Static pod: kube-controller-manager-ndld0cuomkube1.internal.pri hash: 9dda1d669f9a43cd117cb5cdf36b6582
[apiclient] Found 3 Pods for label selector component=kube-controller-manager
[upgrade/staticpods] Component "kube-controller-manager" upgraded successfully!
[upgrade/staticpods] Preparing for "kube-scheduler" upgrade
[upgrade/staticpods] Renewing scheduler.conf certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-scheduler.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2020-09-01-16-44-07/kube-scheduler.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-scheduler-ndld0cuomkube1.internal.pri hash: 1caf2ef6d0ddace3294395f89153cef9
Static pod: kube-scheduler-ndld0cuomkube1.internal.pri hash: cb2a7e4997f70016b2a80ff8f1811ca8
[apiclient] Found 3 Pods for label selector component=kube-scheduler
[upgrade/staticpods] Component "kube-scheduler" upgraded successfully!
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.18" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[addons] Migrating CoreDNS Corefile
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

[upgrade/successful] SUCCESS! Your cluster was upgraded to "v1.18.8". Enjoy!

[upgrade/kubelet] Now that your control plane is upgraded, please proceed with upgrading your kubelets if you haven't already done so.

Update Control Planes

On the second and third masters, run the kubeadm upgrade apply 1.18.8 command to upgrade the control plane on each.
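
Before moving on, one way to confirm all three masters are running the new control plane images (the component labels are the kubeadm defaults, as seen in the upgrade output above):

$ kubectl get pods -n kube-system \
    -l 'component in (kube-apiserver,kube-controller-manager,kube-scheduler)' \
    -o custom-columns=NAME:.metadata.name,IMAGE:.spec.containers[0].image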

Update File and Directory Permissions

Verify the permissions match the table below once the upgrade is complete:

Path or File    user:group    Permissions
/etc/kubernetes/manifests/etcd.yaml root:root 0644
/etc/kubernetes/manifests/kube-apiserver.yaml root:root 0644
/etc/kubernetes/manifests/kube-controller-manager.yaml root:root 0644
/etc/kubernetes/manifests/kube-scheduler.yaml root:root 0644
/var/lib/etcd root:root 0700
/etc/kubernetes/admin.conf root:root 0644
/etc/kubernetes/scheduler.conf root:root 0644
/etc/kubernetes/controller-manager.conf root:root 0644
/etc/kubernetes/pki root:root 0755
/etc/kubernetes/pki/ca.crt root:root 0644
/etc/kubernetes/pki/apiserver.crt root:root 0644
/etc/kubernetes/pki/apiserver-kubelet-client.crt root:root 0644
/etc/kubernetes/pki/front-proxy-ca.crt root:root 0644
/etc/kubernetes/pki/front-proxy-client.crt root:root 0644
/etc/kubernetes/pki/sa.pub root:root 0644
/etc/kubernetes/pki/ca.key root:root 0600
/etc/kubernetes/pki/apiserver.key root:root 0600
/etc/kubernetes/pki/apiserver-kubelet-client.key root:root 0600
/etc/kubernetes/pki/front-proxy-ca.key root:root 0600
/etc/kubernetes/pki/front-proxy-client.key root:root 0600
/etc/kubernetes/pki/sa.key root:root 0600
/etc/kubernetes/pki/etcd root:root 0755
/etc/kubernetes/pki/etcd/ca.crt root:root 0644
/etc/kubernetes/pki/etcd/server.crt root:root 0644
/etc/kubernetes/pki/etcd/peer.crt root:root 0644
/etc/kubernetes/pki/etcd/healthcheck-client.crt root:root 0644
/etc/kubernetes/pki/etcd/ca.key root:root 0600
/etc/kubernetes/pki/etcd/server.key root:root 0600
/etc/kubernetes/pki/etcd/peer.key root:root 0600
/etc/kubernetes/pki/etcd/healthcheck-client.key root:root 0600
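
A quick spot-check of ownership and permissions against the table above; a minimal sketch using stat:

for f in /etc/kubernetes/manifests/*.yaml /etc/kubernetes/*.conf \
         /etc/kubernetes/pki/* /etc/kubernetes/pki/etcd/*; do
  [ -f "$f" ] && stat -c '%a %U:%G %n' "$f"
done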

Update Manifests

During the kubeadm upgrade, the current control plane manifests are moved from /etc/kubernetes/manifests into /etc/kubernetes/tmp and new manifest files are deployed. There are multiple settings and permissions that need to be reviewed and updated before the task is considered complete.

The kubeadm-config configmap has been updated to point to bldr0cuomrepo1.internal.pri:5000; however, it and the various container configurations should be checked anyway. If it’s not updated or used, you’ll have to make the updates manually, including manually editing the kube-proxy daemonset configuration.

Note that when a manifest is updated, the kubelet automatically restarts the associated pod and pulls the new image; there is no need to manage the pods yourself once the manifests are updated.

etcd Manifest

Verify and update etcd.yaml:

  • Change imagePullPolicy to Always
  • Change the image line, replacing k8s.gcr.io with bldr0cuomrepo1.internal.pri:5000 (a sed sketch follows this list)
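
Both edits can be made with sed; a minimal sketch, assuming the stock kubeadm manifest layout (if the manifest has no imagePullPolicy line, add one under the image: line instead), with a backup taken first:

# cp /etc/kubernetes/manifests/etcd.yaml /root/etcd.yaml.bak
# sed -i -e 's|imagePullPolicy:.*|imagePullPolicy: Always|' \
      -e 's|k8s.gcr.io|bldr0cuomrepo1.internal.pri:5000|' /etc/kubernetes/manifests/etcd.yaml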

kube-apiserver Manifest

Verify and update kube-apiserver.yaml:

  • Add the AlwaysPullImages and ResourceQuota admission controllers to the --enable-admission-plugins line (an example follows this list)
  • Change imagePullPolicy to Always
  • Change the image line, replacing k8s.gcr.io with bldr0cuomrepo1.internal.pri:5000
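
For reference, the resulting flag line should look something like this, assuming the kubeadm default NodeRestriction plugin was already present (keep whatever plugins are already listed):

    - --enable-admission-plugins=NodeRestriction,AlwaysPullImages,ResourceQuota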

kube-controller-manager Manifest

Verify and update kube-controller-manager.yaml:

  • Add "- --cluster-name=kubecluster-[site]" after "- --cluster-cidr=192.168.0.0/16"
  • Change imagePullPolicy to Always
  • Change the image line, replacing k8s.gcr.io with bldr0cuomrepo1.internal.pri:5000

kube-scheduler Manifest

Verify and update kube-scheduler.yaml:

  • Change imagePullPolicy to Always
  • Change the image line, replacing k8s.gcr.io with bldr0cuomrepo1.internal.pri:5000

Update kube-proxy

You’ll need to edit the kube-proxy daemonset to change the imagePullPolicy. Check the image tag at the same time.

$ kubectl edit daemonset kube-proxy -n kube-system
  • Change imagePullPolicy to Always.
  • Change the image line, replacing k8s.gcr.io with bldr0cuomrepo1.internal.pri:5000.

Save the changes.
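
To confirm the daemonset now pulls from the local registry, a quick check:

$ kubectl -n kube-system get daemonset kube-proxy -o jsonpath='{.spec.template.spec.containers[0].image}{"\n"}'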

Update coredns

You’ll need to edit the coredns deployment to change the imagePullPolicy. Check the image tag at the same time.

$ kubectl edit deployment coredns -n kube-system
  • Change imagePullPolicy to Always
  • Change the image line, replacing k8s.gcr.io with bldr0cuomrepo1.internal.pri:5000

Save the changes.
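
The same check for the coredns deployment:

$ kubectl -n kube-system get deployment coredns -o jsonpath='{.spec.template.spec.containers[0].image}{"\n"}'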

Restart kubelet

Once done, kubelet and docker need to be restarted on all nodes.

systemctl daemon-reload
systemctl restart kubelet
systemctl restart docker
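
Across all nodes this can be done in one pass; a minimal sketch, assuming passwordless SSH as root from a host with a working kubeconfig (restart one node at a time if you want to be cautious):

for node in $(kubectl get nodes -o name | cut -d/ -f2); do
  ssh "root@$node" 'systemctl daemon-reload && systemctl restart kubelet && systemctl restart docker'
done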

Verify

Once kubelet has been restarted on all nodes, verify all nodes are at 1.18.8.

$ kubectl get nodes
NAME                          STATUS   ROLES    AGE    VERSION
ndld0cuomkube1.intrado.sqa    Ready    master   259d   v1.18.8
ndld0cuomkube2.intrado.sqa    Ready    master   259d   v1.18.8
ndld0cuomkube3.intrado.sqa    Ready    master   259d   v1.18.8
ndld0cuomknode1.intrado.sqa   Ready    <none>   259d   v1.18.8
ndld0cuomknode2.intrado.sqa   Ready    <none>   259d   v1.18.8
ndld0cuomknode3.intrado.sqa   Ready    <none>   259d   v1.18.8

Configuration Upgrades

Configuration files are on the tool servers (lnmt1cuomtool11) in the /usr/local/admin/playbooks/cschelin/kubernetes/configurations directory and the expectation is you’ll be in that directory when directed to apply configurations.

Calico Upgrade

In the calico directory, run the following command:

$ kubectl apply -f calico.yaml
configmap/calico-config configured
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org configured
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers unchanged
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers unchanged
clusterrole.rbac.authorization.k8s.io/calico-node unchanged
clusterrolebinding.rbac.authorization.k8s.io/calico-node unchanged
daemonset.apps/calico-node configured
serviceaccount/calico-node unchanged
deployment.apps/calico-kube-controllers configured
serviceaccount/calico-kube-controllers unchanged

After calico is applied, the calico-kube-controllers pod restarts, and then the calico-node pods restart to retrieve the updated image.

Pull the calicoctl binary and copy it to /usr/local/bin, then verify the version. Note that this has likely already been done on the tool server. Verify it before pulling the binary.

$ curl -O -L  https://github.com/projectcalico/calicoctl/releases/download/v3.16.0/calicoctl
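
Then make the binary executable, move it into place, and verify; calicoctl version reports the client version and, once it can reach the cluster, the cluster version:

$ chmod +x calicoctl
$ sudo mv calicoctl /usr/local/bin/
$ calicoctl version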

Verification

Verify the permissions of the files once the upgrade is complete.

Path or File    user:group    Permissions
/etc/cni/net.d/10-calico.conflist root:root 0644
/etc/cni/net.d/calico-kubeconfig root:root 0644

metrics-server Upgrade

In the metrics-server directory, run the following command:

$ kubectl apply -f components.yaml
clusterrole.rbac.authorization.k8s.io/system:aggregated-metrics-reader unchanged
clusterrolebinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator unchanged
rolebinding.rbac.authorization.k8s.io/metrics-server-auth-reader unchanged
apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io unchanged
serviceaccount/metrics-server unchanged
deployment.apps/metrics-server configured
service/metrics-server unchanged
clusterrole.rbac.authorization.k8s.io/system:metrics-server unchanged
clusterrolebinding.rbac.authorization.k8s.io/system:metrics-server unchanged

Once the metrics-server deployment has been updated, the pod will restart.
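
A quick way to confirm the metrics pipeline is healthy again once the new pod is Running:

$ kubectl top nodes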

kube-state-metrics Upgrade

In the kube-state-metrics directory, run the following command:

$ kubectl apply -f .
clusterrolebinding.rbac.authorization.k8s.io/kube-state-metrics configured
clusterrole.rbac.authorization.k8s.io/kube-state-metrics configured
deployment.apps/kube-state-metrics configured
serviceaccount/kube-state-metrics configured
service/kube-state-metrics configured

Once the kube-state-metrics deployment is updated, the pod will restart.

Filebeat Upgrade

Filebeat ships logs to Elastic Stack clusters in four environments. Filebeat itself is installed on all clusters. Ensure you’re managing the correct cluster when upgrading the filebeat container, as configurations are specific to each cluster.

Change to the appropriate cluster context directory and run the following command:

$ kubectl apply -f filebeat-kubernetes.yaml

Verification

Monitor each cluster; you should see the filebeat containers restart and return to a Running state.

$ kubectl get pods -n monitoring -o wide

Kubernetes Ansible Upgrade to 1.18.8

Upgrading Kubernetes Clusters

This document provides a guide to upgrading the Kubernetes clusters in the quickest manner. Much of the upgrade process can be done using Ansible playbooks. A few processes need to be done centrally on the tool server, and the OS and control plane updates are partly manual due to the requirement to manually remove servers from the Kubernetes API pool.

In most cases, examples are not provided, as it is assumed that you are familiar with the processes and can perform the updates without being reminded of how to verify them.

For any process that is performed with an Ansible Playbook, it is assumed you are on the lnmt1cuomtool11 server in the /usr/local/admin/playbooks/cschelin/kubernetes directory. All Ansible related steps expect to start from that directory. In addition, the application of pod configurations will be in the configurations subdirectory.

Perform Upgrades

Patch Servers

Patch the control plane master servers one at a time and ensure the cluster is healthy before continuing to the second and third master servers.

Drain each worker prior to patching and rebooting the worker node.

$ kubectl drain [nodename] --delete-local-data --ignore-daemonsets

Patch the server and reboot:

yum upgrade -y
shutdown -r now

Rejoin the worker node to the pool.

kubectl uncordon [nodename]

Update Versionlock And Components

In the upgrade directory, run the update -t [tag] script. This will install yum-plugin-versionlock if missing, remove the old versionlocks, create new versionlocks for kubernetes, kubernetes-cni, and docker, and then upgrade the components.
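
For example, to run it against one cluster (the tag here is an assumption; use the tag that matches the cluster you’re upgrading):

$ ./update -t bldr0-0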

Upgrade Kubernetes

Using the kubeadm command on the first master server, upgrade the first master server.

# kubeadm upgrade apply 1.18.8

Upgrade Control Planes

On the second and third masters, run the kubeadm upgrade apply 1.18.8 command to upgrade the control plane on each.

Update kube-proxy

Check the kube-proxy daemonset and update the image tag if required.

$ kubectl edit daemonset kube-proxy -n kube-system
  • Change the image line, replacing k8s.gcr.io with bldr0cuomrepo1.internal.pri:5000

Save the changes.

Update coredns

Check the coredns deployment and update the image tag if required.

$ kubectl edit deployment coredns -n kube-system
  • Change the image line, replacing k8s.gcr.io with bldr0cuomrepo1.internal.pri:5000

Save the changes.

Restart kubelet and docker

In the restart directory, run the update -t [tag] script. This will restart kubelet and docker on all servers.

Calico Upgrade

In the configurations/calico directory, run the following command:

$ kubectl apply -f calico.yaml

calicoctl Upgrade

Pull the updated calicoctl binary and copy it to /usr/local/bin.

$ curl -O -L  https://github.com/projectcalico/calicoctl/releases/download/v3.16.0/calicoctl

Update File and Directory Permissions and Manifests

In the postinstall directory, run the update -t [tag] script. This will perform the following steps:

  • Add the cluster-name to the kube-controller-manager.yaml file
  • Update the imagePullPolicy and image lines in all manifests
  • Add the AlwaysPullImages and ResourceQuota admission controllers to the kube-apiserver.yaml file
  • Update the permissions of all files and directories

Filebeat Upgrade

In the configurations directory, change to the appropriate cluster context directory (bldr0-0, cabo0-0, tato0-1, or lnmt1-2) and run the following command.

$ kubectl apply -f filebeat-kubernetes.yaml

Kubernetes Preparation Steps For 1.18.8

Upgrading Kubernetes Clusters

The purpose of this document is to provide background information on what is being upgraded, which versions, and the steps required to prepare for the upgrade itself. These steps are done only once. Once all these steps have been completed and all the configurations checked into gitlab, all clusters are ready to be upgraded.

Upgrade Preparation Steps

Upgrades to the sandbox environment are done a few weeks before the official release to allow more in-depth testing: checking the release docs, changelogs, and general operational status for the various tools that are in use.

Server Preparations

With the possibility of an upgrade to Spacewalk and to ensure the necessary software is installed prior to the upgrade, make sure all repositories are enabled and that the yum-plugin-versionlock software is installed.

Enable Repositories

Check the Spacewalk configuration and ensure that upgrades are coming from the local server and not from the internet.

Install yum versionlock

The critical components of Kubernetes are locked into place using the versionlock yum plugin. If not already installed, install it before beginning work.

# yum install yum-plugin-versionlock -y

Software Preparations

This section describes the updates that need to be made to the various containers that are installed in the Kubernetes clusters. Most of the changes involve updating the location to point to the local Docker repository vs pulling directly from the internet.

Ansible Playbooks

This section isn’t going to be instructions on setting up or using Ansible Playbooks. The updates to the various configurations are also saved with the Ansible Playbooks repo. You’ll make the appropriate changes to the updated configuration files and then push them back up to the gitlab server.

Update calico.yaml

In the calico directory, run the following command to get the current calico.yaml file.

$ curl https://docs.projectcalico.org/manifests/calico.yaml -O

Edit the file, search for image:, and insert the path to the local repository in front of calico:

bldr0cuomrepo1.internal.pri:5000/
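
The same edit as a one-liner; a sketch, assuming the images in calico.yaml are referenced as calico/... with no registry prefix:

$ sed -i 's|image: calico/|image: bldr0cuomrepo1.internal.pri:5000/calico/|' calico.yaml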

Make sure you follow the documentation to update calicoctl to 3.16.0.

Update metrics-server

In the metrics-server directory, run the following command to get the current components.yaml file:

$ wget https://github.com/kubernetes-sigs/metrics-server/releases/download/v0.3.7/components.yaml

Edit the file, search for image: and replace k8s.gcr.io with bldr0cuomrepo1.internal.pri:5000/
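
The same replacement as a one-liner:

$ sed -i 's|k8s.gcr.io|bldr0cuomrepo1.internal.pri:5000|' components.yaml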

Update kube-state-metrics

Updating kube-state-metrics is a bit more involved as there are several files in the distribution; however, you only need a small subset. You’ll need to clone the kube-state-metrics repo, or pull it if you already have it.

$ git clone https://github.com/kubernetes/kube-state-metrics.git

Once you have the repo, copy all the files from the kube-state-metrics/examples/standard directory into the playbooks kube-state-metrics directory.

Edit the deployment.yaml file and replace quay.io with bldr0cuomrepo1.internal.pri:5000/
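
Again, as a one-liner:

$ sed -i 's|quay.io|bldr0cuomrepo1.internal.pri:5000|' deployment.yaml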

Update filebeat-kubernetes.yaml

In the filebeat directory, run the following command to get the current filebeat-kubernetes.yaml file:

$ curl -L -O https://raw.githubusercontent.com/elastic/beats/7.9/deploy/kubernetes/filebeat-kubernetes.yaml

Change all references in the filebeat-kubernetes.yaml file from kube-system to monitoring. If this is a new installation, create the monitoring namespace.
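
Both steps as commands; the namespace only needs to be created once per cluster:

$ sed -i 's/namespace: kube-system/namespace: monitoring/' filebeat-kubernetes.yaml
$ kubectl create namespace monitoring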

Then copy the file into each of the cluster directories and make the following changes.

DaemonSet Changes

In the DaemonSet section, replace the image location docker.elastic.co/beats/filebeat:7.9.2 with bldr0cuomrepo1.internal.pri:5000/beats/filebeat:7.9.2. This pulls the image from our local repository instead of from the Internet.

In order for the search and replace script to work the best, make the following changes:

        - name: ELASTICSEARCH_HOST
          value: "<elasticsearch>"
        - name: ELASTICSEARCH_PORT
          value: "9200"
        - name: ELASTICSEARCH_USERNAME
          value: ""
        - name: ELASTICSEARCH_PASSWORD
          value: ""

In addition, remove the following lines. They confuse the container if they exist.

        - name: ELASTIC_CLOUD_ID
          value:
        - name: ELASTIC_CLOUD_AUTH
          value:

Add the default username and password to the following lines as noted:

    output.elasticsearch:
      hosts: ['${ELASTICSEARCH_HOST:elasticsearch}:${ELASTICSEARCH_PORT:9200}']
      username: ${ELASTICSEARCH_USERNAME:elastic}
      password: ${ELASTICSEARCH_PASSWORD:changeme}

ConfigMap Changes

In the ConfigMap section, activate the filebeat.autodiscover section by uncommenting it and delete the filebeat.inputs configuration section. In the filebeat.autodiscover section, make the following three changes as noted with comments.

filebeat.autodiscover:
  providers:
    - type: kubernetes
      host: ${NODE_NAME}                          # rename node to host
      hints.enabled: true
      hints.default_config.enabled: false         # add this line
      hints.default_config:
        type: container
        paths:
          - /var/log/containers/*${data.kubernetes.container.id}.log
        exclude_lines: ["^\\s+[\\-`('.|_]"]  # drop asciiart lines  # add this line

In the processors section, remove the cloud.id and cloud.auth lines, add the following uncommented lines, and change DEPLOY_ENV to the environment filebeat is being deployed to: dev, sqa, staging, or prod.

# Add deployment environment field to every event to make it easier to sort between Dev and SQA logs.
# DEPLOY_ENV values: dev, sqa, staging, or prod
   - add_fields:
       target: ''
       fields:
         environment: 'DEPLOY_ENV'
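
With the placeholders above in place, stamping out a cluster-specific copy becomes a simple substitution; a sketch for the Dev cluster, assuming the file has already been copied into the bldr0-0 directory:

$ sed -i -e 's/<elasticsearch>/bldr0cuomemstr1.internal.pri/' -e 's/DEPLOY_ENV/dev/' bldr0-0/filebeat-kubernetes.yaml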

Elastic Stack in Dev and QA

This Elastic Stack cluster is used by the Development and QA Kubernetes clusters. Update the files in the bldr0-0 and cabo0-0 subdirectories.

- name: ELASTICSEARCH_HOST
  value: bldr0cuomemstr1.internal.pri

Elastic Stack in Staging

This Elastic Stack cluster is used by the Staging Kubernetes cluster. Update the files in the tato0-1 subdirectory.

- name: ELASTICSEARCH_HOST
  value: tato0cuomelkmstr1.internal.pri

Elastic Stack in Production

This Elastic Stack cluster is used by the Production Kubernetes Cluster. Update the file in the lnmt1-2 subdirectory.

- name: ELASTICSEARCH_HOST
  value: lnmt1cuomelkmstr1.internal.pri

Kubernetes Upgrade to 1.18.8

Upgrading Kubernetes Clusters

The following lists what software and pods will be upgraded during this quarter.

  • Upgrade the Operating System
  • Upgrade Kubernetes
    • Upgrade kubeadm, kubectl, and kubelet RPMs from 1.17.6 to 1.18.8.
    • Upgrade kubernetes-cni RPM from 0.7.5-0 to 0.8.6-0.
    • kube-apiserver is upgraded from 1.17.6 to 1.18.8.
    • kube-controller-manager is upgraded from 1.17.6 to 1.18.8.
    • kube-scheduler is upgraded from 1.17.6 to 1.18.8.
    • kube-proxy is upgraded from 1.17.6 to 1.18.8.
    • coredns is upgraded from 1.6.5 to 1.6.7.
    • etcd stays at the current version, 3.4.3-0.
  • Upgrade Calico from 3.14.1 to 3.16.0.
  • Upgrade Filebeat from 7.8.0 to 7.9.2.
  • Upgrade docker from 1.13.1-161 to 1.13.1-162.
  • metrics-server is upgraded from 0.3.6 to 0.3.7.
  • kube-state-metrics is upgraded from 1.9.5 to 1.9.7.

Unchanged Products

There are no unchanged products this quarter.

Upgrade Notes

The following notes provide information on what changes might affect users of the clusters when upgrading from one version to the next. The notes I’m adding reflect what I think is relevant to the environment, so no notes on Azure or OpenShift will be listed. For more details, click on the provided links. If something is found that might be relevant, please respond and I’ll check it out and add it in.

Kubernetes Core

The following notes reflect changes that might be relevant between the currently installed 1.17.6 up through 1.18.8, the target upgrade for Q4. While I try not to miss anything, if you’re not sure, check the links to see if any changes apply to your product or project.

  • 1.17.7 – kubernetes-cni upgraded to 0.8.6.
  • 1.17.8 – Nothing of interest. Note that there’s a 1.17.8-rc1 as well.
  • 1.17.9 – Privilege escalation patch: CVE-2020-8559. DOS patch: CVE-2020-8557.
  • 1.17.10 – Do not use this release; artifacts are not complete.
  • 1.17.11 – A note that Kubernetes is built with go 1.13.15. No other updates.
  • 1.18.0 – Lots of notes as always. Most are cloud specific (Azure mainly). Some interesting bits though:
    • kubectl debug command added, permits the creation of a sidecar in a pod to assist with troubleshooting a problematic container.
    • IPv6 support is now beta in 1.18.
    • Deprecated APIs
      • apps/v1beta1, apps/v1beta2 – apps/v1
      • daemonsets, deployments, replicasets under extensions/v1beta1 – use apps/v1
    • New IngressClass resource added to enable better Ingress configuration
    • autoscaling/v2beta2 HPA added spec.behavior
    • startupProbe (beta) for slow starting containers.
  • 1.18.1 – Nothing much to note
  • 1.18.2 – Fix conversion error for HPA objects with invalid annotations
  • 1.18.3 – init containers are now considered for calculation of resource requests when scheduling
  • 1.18.4 – kubernetes-cni upgraded to 0.8.6
  • 1.18.5 – Nothing of interest. Note there’s a 1.18.5-rc1 as well.
  • 1.18.6 – Privilege escalation patch; CVE-2020-8559. DOS patch; CVE-2020-8557.
  • 1.18.7 – Do not use this release; artifacts are not complete.
  • 1.18.8 – Kubernetes now built with go 1.13.15. Nothing else.

kubernetes-cni

Still searching for release notes for the upgrade from 0.7.5 to 0.8.6.

coredns

  • 1.6.6 – Mainly a fix for DNS Flag Day 2020, the bufsize plugin. A fix related to CVE-2019-19794.
  • 1.6.7 – Adding an expiration jitter. Resolve TXT records via CNAME.

Calico

The major release notes are on a single page; the versions noted here describe the upgrade for each release. For example, 3.14.1 and 3.14.2 both point to the 3.14 Release Notes. Here I’m describing the changes, if relevant, between the .0, .1, and .2 releases.

Note that currently many features of Calico haven’t been implemented yet so improvements, changes, and fixes for Calico probably don’t impact the current clusters.

  • 3.14.1 – Fix CVE-2020-13597 – IPv6 rogue router advertisement vulnerability. Added port 6443 to failsafe ports.
  • 3.14.2 – Remove unnecessary packages from cni-plugin and pod2daemon images.
  • 3.15.0 – WireGuard enabled to secure on-the-wire in-cluster pod traffic. The ability to migrate datastore data from etcd to use the kube-apiserver.
  • 3.15.1 – Fix service IP advertisement breaking host service connectivity.
  • 3.15.2 – Add monitor-addresses option to calico-node to continually monitor IP addresses. Handle CNI plugin panics more gracefully. Remove unnecessary packages from cni-plugin and pod2daemon images to address CVEs.
  • 3.16.0 – Supports eBPF, which requires RHEL 8.2 (not currently available to my clusters). Removed more unnecessary packages from the pod2daemon image.

Filebeat

  • 7.8.1 – Corrected base64 encoding of the monitoring.elasticsearch.api_key. Added support for timezone offsets.
  • 7.9.0 – Fixed handling for Kubernetes Update and Delete watcher events. Fixed memory leak in tcp and unix input sources. Fixed file ownership in docker images so they can be used in a secure environment. Logstash module can automatically detect the log format and process accordingly.
  • 7.9.1 – Nothing really jumped out as relevant.
  • 7.9.2 – Nothing in the release notes yet.

docker

This release addresses a CVE for a vulnerability in 1.13.1-108.

metrics-server

  • 0.3.7 – New image location. Image run as a non-root user. Single file now vs a group of files (components.yaml).

kube-state-metrics

Like Calico, the CHANGELOG is a single file. The different bullet points all point to the same file but describe the changes where relevant.

  • 1.9.6 – Just a single change related to an API mismatch.
  • 1.9.7 – Switched an apiVersion to v1 for the mutatingwebhookconfiguration file.


Cinnamon Buns

I tried using the recipe on the website but there were so many ads making constant changes to the webpage that it was impossible to stay where the instructions were. As such, I’m copying the basic instructions here and I’ll use it for the baking attempt.

Dough

  • 1 cup warm milk
  • 2 1/2 teaspoons instant dry yeast
  • 2 large eggs at room temperature
  • 1/3 cup of salted butter (softened)
  • 4 1/2 cups all-purpose flour
  • 1 teaspoon salt
  • 1/2 cup granulated sugar
  1. Pour the warm milk in the bowl of a stand mixer and sprinkle the yeast over the top.
  2. Add the eggs, butter, salt, and sugar
  3. Add in 4 cups of the flour and mix using the beater blade just until the ingredients are barely combined. Allow the mixture to rest for 5 minutes for the ingredients to soak together.
  4. Scrape the dough off of the beater blade and remove it. Attach the dough hook.
  5. Beat the dough on medium speed, adding in up to 1/2 cup more flour if needed to form a dough. Knead for up to 7 minutes until the dough is elastic and smooth. The dough should be a little tacky and still sticking to the side of the bowl. Don’t add too much flour though.
  6. Spray a large bowl with cooking spray.
  7. Use a rubber spatula to remove the dough from the mixer bowl and place it in the greased large bowl.
  8. Cover the bowl with a towel or wax paper.
  9. Set the bowl in a warm place and allow the dough to rise until doubled. A good place might be to start the oven at a low setting, 100 degrees for example, turn it off when it’s warm, and then put the bowl into the oven. Figure about 30 minutes for the dough to rise.
  10. When ready, put the dough on a well floured pastry mat or parchment paper and sprinkle more flour on the dough.
  11. Flour up a rolling pin and spread the dough out. It should be about 2 feet by 1 1/2 feet when done.
  12. Smooth the filling evenly over the rectangle.
  13. Roll the dough up starting on the long, 2-foot end.
  14. Cut into 12 pieces and place in a greased baking pan.
  15. Cover the pan and let the rolls rise for 20 minutes or so.
  16. Preheat the oven to 375 degrees.
  17. Pour 1/2 cup of heavy cream over the risen rolls.
  18. Bake for 20-22 minutes or until the rolls are golden brown and the center cooked.
  19. Allow the rolls to cool.
  20. Spread the frosting over the rolls.

Filling

Simple enough. Combine the three ingredients in a bowl and mix until well combined.

  • 1/2 cup of salted butter (almost melted)
  • 1 cup packed brown sugar
  • 2 tablespoons of cinnamon

Frosting

  • 6 ounces of cream cheese (softened)
  • 1/3 cup salted butter (softened)
  • 2 cups of powdered sugar
  • 1/2 tablespoon of vanilla or maple extract
  1. Combine cream cheese and salted butter. Blend well.
  2. Add the powdered sugar and extract.


Homelab

Well, I figure I should list out my gear as I finally picked up my third R710 and got it running and attached. It’s a moderate setup compared to some I’ve seen here 🙂 I’m less a hardware/network guy and more an OS and now Kubernetes guy.

Network:

  • High-Speed WiFi connection to the Internet (the black box on the bottom left).
  • Linksys EA9500 Wifi hub for the house.
  • HP 1910 24 Port Gigabit Ethernet Switch.
  • HP 1910 48 Port Gigabit Ethernet Switch.

Servers:

  • Nikodemus: Dell R710, 2 Intel 5680’s, 288G Ram, 14 TB Raid 5.
  • Slash: Dell R710, 2 Intel 5660’s, 288G Ram, 14 TB Raid 5.
  • Monkey: Dell R710, 2 Intel 5670’s, 288G Ram, 14 TB Raid 5.
  • Willow: Dell R410, 2 Intel 5649’s, 16G Ram, 4 TB RAID 10.

Array:

  • Sun 2540 Fiber Array filled with 24 TB. It’s not on and I’ve not configured it yet other than to make sure it works as I haven’t needed the additional space yet.

UPS:

  • Two APC Back-UPS [XS]1500 split between the three servers for uninterrupted power. Lasts about 20 minutes. Sufficient time to run the Ansible playbooks to shut down all the servers before the power goes out.

Software:

I bought the VMware package from VMUG so I have license keys for a bunch of stuff. vCenter is limited to 6 CPUs so the third R710 finishes that up. I can get the 6.7 software but haven’t pulled the trigger on that yet. My next exploration is Distributed Switches and Ports (classes this past weekend) and then vSAN and VLANs.

All three servers are booting off an internal 16G USB thumb drive.

  • vSphere 6.5
  • vCenter 6.5

Most of what I’m doing fits into two categories. Personal stuff and a duplication of work stuff in order to improve my skills.

I have about 103 virtual machines as of the last time I checked. Most of my VMs are CentOS or Red Hat since I work in a Red Hat shop, and a few Ubuntu, one Solaris, and a couple of Windows workstations. I am going to add a few others like FreeBSD, Slackware, SUSE, and maybe Mint.

Personal:

  • pfSense. Firewall plus other internal stuff like DNS and Load Balancing. I have all servers cabled to the Internet and House Wifi so I can move pfSense to any of the three to restore access.
  • Jump Servers. I have three jump servers I use basically for utility type work. My Ansible playbooks are on these servers.
  • Hobby Software Development. This is kind of a dual purpose thing. I’m basically trying to duplicate how work builds projects by applying the same tools to my home development process. CI/CD: gitlab, jenkins, ansible tower, and artifactory. Development: original server, git server, and Laravel server for a couple of bigger programs
  • Identity Management server. Centralized user management. All servers are configured.
  • Spacewalk. I don’t want every server downloading packages from the ‘net. I’m on a high-speed wifi setup where I live. So I’m downloading the bulk of packages to the Spacewalk server and upgrading using it as the source.
  • Inventory. This is a local installation of the inventory program I wrote for my work servers. This has my local servers though. Basically it’s the ‘eat your own dog food’ sort of thing. 🙂
  • Plex Servers. With almost 8 TB of space in use, I have two servers to split them between the R710’s. If I activate the 2540, I may combine them into one but there’s no good reason at this time. I’ve disabled the software for now. They are very chatty and are overwhelming the log server. Movie Server. About 3 or so TB of movies I’ve ripped from my collection. Television Server. About 4 TB of television shows I’ve ripped from my collection.
  • Backups. I have two backup servers. One for local/desktop backups via Samba and one for remote backups of my physical server which is hosted in Florida.
  • Windows XP. I have two pieces of hardware that are perfectly fine but only work on XP so I have an XP workstation so I can keep using the hardware.
  • Windows 7. Just to see if I could really 🙂
  • Grafana. I’ve been graphing out portions of the environment but am still in the learning phase.

Work/Skills:

  • Work development server. The scripts and code I write at work, backed up to this server and also spread out amongst my servers.
  • Nagios Servers. I have 3. One to monitor my Florida server, One to monitor my personal servers (above), and one to monitor my Work type servers. All three monitor the other two servers so if one bails, I’ll get notifications.
  • Docker Server. Basically learning docker.
  • Kubernetes Servers. I have three Kubernetes clusters for various testing scenarios: three masters and three workers each.
  • Elastic Stack clusters. This is a Kibana, Logstash, and multiple Elastic Search servers. Basically centralized log management. Just like Kubernetes, three clusters for testing.
  • A Hashicorp Vault server for testing to see if it’ll work for what we need at work (secrets management).
  • Salt. One salt master for testing. All servers are connected.
  • Terraform. One for testing.
  • Jira server. Basically trying to get familiar with the software
  • Confluence. Again, trying to get used to it. I normally use a Wiki but work is transferring things over to Confluence.
  • Wiki. This is a duplicate of my work wikis, basically copying all the documentation I’ve written over the past few years.
  • Solaris 2540. This manages my 2540 array.

My wife is a DBA so I have a few database servers up, partly for her and partly for my use.

  • Cassandra
  • Postgresql.
  • Postgresql – This one I stood up for Jira and Confluence
  • Microsoft SQL Server
  • MySQL Master/Master cluster. This is mainly used by my applications but there for my wife to test things on as well.

I will note that I’m 63 and have been mucking about with computers for almost 40 years now. I have several expired and active certifications. 3Com 3Wizard, Sun System and Network Admin, Cisco CCNA and CCNP, Red Hat RHCSA, and most recently a CKA.


State of the Game Room – 2019

A reflection of the past 12 months of gaming. This includes board, card, and role playing.

In reviewing the Game Inventory I keep, I picked up some 259 games and expansions this year, not counting dice. The bulk of these are Role Playing; since RPGs tend to have lots of books, that’s really only a handful of actual RPGs with lots of books each. For unique titles, board games exceed everything else.

Role Playing

I’ve enjoyed RPGs since I was exposed to them in 1977 in an Army Recreation Center. I bought the box set of Dungeons and Dragons not long after and started making dungeons. I’ve run or played in numerous RPGs over the years with my main games being Advanced Dungeons and Dragons and Shadowrun. For AD&D, I snagged quite a few other RPGs in order to mine them for ideas for my main game. It wasn’t until I got into Shadowrun that I really explored other settings, with Paranoia being the third most run game for me. Over the past year, I’ve worked on Shadowrun 6th as a playtester and other Shadowrun books as a proofreader. I exposed my group to the Conan 2d20 RPG but we really didn’t get far into it. I think mainly the group (and me) just don’t have the time to read and prepare for RPGs any more. Especially new ones. The team has played Shadowrun in the past. Maybe a return to Shadowrun is in order, possibly sticking with the 20th Anniversary Edition as I’m most familiar with that one, but maybe going back to 2nd Edition.

I did pick up several RPG books over the past year. I keep up on Dungeons and Dragons, probably more like a collector than someone who’s going to actually run any D&D games, although I am a fan of the Adventures in Middle-Earth series so perhaps there’s hope. Of course I picked up a few of the Conan 2d20 RPG books since the team was interested. And Genesys, especially Shadow of the Beanstalk as it’s a Cyberpunk type setting.

Last Christmas my girlfriend bought me a box of miniatures for Shadowrun so that was quite cool. I also picked up several Shadowrun books and PDFs. The biggest purchase was getting my Star Wars RPGs updated. Turns out my Friendly Local Gaming Store (FLGS) hadn’t been keeping up on the releases. I stumbled on a posting somewhere and checked out Fantasy Flight Games to see what I was missing. And a close friend worked on the new Wrath and Glory Warhammer RPG so I got in on the kickstarter and I have the Collector’s Edition.

The ones in bold are new games that include the core rule book. The rest are expansions or items like miniatures or other non-rulebook accessories.

  • Alien – 2
  • Conan 2d20 – 16
  • Dungeons & Dragons – 15
  • Genesys – 6
  • Shadowrun – 23
  • Star Trek – 1
  • Star Wars – 61
  • Traveller – 2
  • Warhammer (Wrath & Glory) – 8

Card Games

We did play a few card games over the past year. Cards Against Humanity seems to pop up now and then plus others like Clank!, Race for the Galaxy, Love Letter, and Gloom in Space. I also kept up on my Arkham Horror Card Game even though we stopped playing that one last year.

For the Munchkin one below, I’d stumbled upon the Girl Genius online comic again and one of the cartoons referenced a special Girl Genius Munchkin pack. I’m a huge fan of Phil Foglio’s art so I special ordered it from my FLGS.

  • Arkham Horror – 15
  • Cards Against Humanity – 2
  • Clank! – 1
  • DC Deck Building Game – 1
  • Epic Spell Wars of the Battle Wizards – 1
  • Exploding Kittens – 1
  • Love Letter – 1
  • Munchkin – 1

Board Games

I did pick up quite a few board games this year and even played more than normal. The team seems to have more fun with a quick (or even lengthy) board game than spending time to learn and understand RPG rules.

By far, Zombicide had the most items come in this year. The team was interested in Zombicide and Jeanne and I even played a game that didn’t include the team. Zombicide is a several hours long game that can test your patience. Jeanne did an awesome job on our session saving her entire team when I was ready to abandon them and head on out.

A friend at work received two copies of the kickstarter Shadowrun Sprawl Ops, a Shadowrun specific board game. He gifted me with the second copy plus a copy of The 7th Continent. The Sprawl Ops rules weren’t the best and I had to do some research on the ‘net and Board Game Geek to get some clarity on the rules. Once we had it right, the team had quite a bit more fun with the game.

Wingspan was one of the more interesting purchases. My FLGS owner (Jamie) had saved a copy for me during all the hoopla over the distribution of the game. It had received a lot of attention because the game publisher had underestimated the demand for the game and had to make several print runs. The biggest issue was the FLGSs weren’t getting complete orders where Amazon.com was. The games were selling for the normal price and immediately being turned around for 3 and 4 times the cost over on E-Bay. I will say, the game was quite fun and we played it several times over the summer.

I’d picked up Clank! a couple or so years back but the name and that it was a deck building game was a bit of a turn off in general. Jeanne and I enjoyed the DC Deck Building game in the past but we really didn’t much like the Legendary Deck Building game so we were 50/50 on getting Clank!. I did pick up Clank! expansions and Clank! in Space. Jeanne and I played it and it turned out to be fun enough that Jeanne insisted on a second play (she’d lost and she’s very competitive). “Clank!” is simply the sound you’re making to alert the bad guy (Dragon) in the dungeon while you’re hunting through the caverns looking for artifacts. Clank! in Space is a similar game except that you’re on a spaceship stealing artifacts. I snagged Clank! Legacy this past week. There are several Legacy type games where as you play, you destroy cards, add stickers, and generally modify the board game as you complete missions. The games are ultimately playable however the 10 game series does make each person’s game somewhat unique. I look forward to running the team through the game. It might be a bit shorter than Zombicide, although that’s still on the list to be played.

Jeanne and I got married back in June and we had a board game themed wedding reception. I bought copies of Love Letter (of course), Ticket to Ride, and a new one for us, Sagrada. In this case, the box design was a bit of a turn-off. I’d seen it in the FLGS and Jamie recommended it so Jeanne and I snagged a copy so we knew how to play before the wedding. It’s not a bad game, kind of a Sudoku type game. You have a 6 space grid (6 across and 6 down) and roll dice to fill in the grids based on the underlying selected card, rules as defined by a couple of drawn cards, and general rules. It’s certainly a thinking game with less interaction with the rest of the players. We also had a copy of Cards Against Humanity.

Other games we’ve played over the past year: Bunny Kingdom, Gizmos, The Doom That Came To Atlantic City, Concept, Horrified, Isle of Skye, Photosynthesis, and Trains.

I’m not going to list all the board games, just the number of new and expansions.

  • Number of New Board Games: 36
  • Number of Expansions: 62

Pictures!

I need to get the games in order again and probably pick up another Kallax bookshelf.

Looking from the door to the window.


And looking from the window to the door

Shepherd’s Pie

Preparing the Potatoes

  • 1 1/2 lb. potatoes, peeled
  • Kosher salt
  • 4 tbsp. melted butter
  • 1/4 c. milk
  • 1/4 c. sour cream
  • Freshly ground black pepper

Preparing the beef filling

  • 1 tbsp. extra-virgin olive oil
  • 1 large onion, chopped
  • 2 carrots, peeled and chopped
  • 2 cloves garlic, minced
  • 1 tsp. fresh thyme
  • 1 1/2 lb. ground beef
  • 1 c. frozen peas
  • 1 c. frozen corn
  • 2 tbsp. all-purpose flour
  • 2/3 c. low-sodium chicken broth
  • 1 tbsp. freshly chopped parsley, for garnish

Directions

Preheat oven to 400 degrees.

Make mashed potatoes. In a large pot, cover potatoes with water and add a generous pinch of salt. Bring to a boil and cook until totally soft, about 16 to 18 minutes. Drain and return to the pot.

Use a potato masher to mash potatoes until they’re smooth. Add melted butter, milk, and sour cream. Mash together until fully incorporated, then season with salt and pepper. Set aside.

Make beef filling. In a large, ovenproof skillet over medium heat, heat the olive oil. Add onion, carrots, garlic, and thyme and cook until fragrant and softened, about 5 minutes. Add ground beef and cook until no longer pink, about 5 more minutes. Drain the fat.

Stir in the frozen peas and corn and cook until warmed through, about 3 minutes. Season with salt and pepper.

Sprinkle meat with flour and stir to evenly distribute. Cook 1 minute more and add the chicken broth. Bring to a simmer and let the mixture thicken slightly. About 5 minutes more.

Top the beef mixture with an even layer of mashed potatoes and bake in the oven until there is little moisture and the mashed potatoes are golden, about 20 minutes. Broiling at the end will make the potatoes a bit crispier.


My Tech Certifications – A History

I don’t as a rule chase technical certifications. As a technical person who’s been mucking about with computers since around 1981, and as someone who has been on the hiring side of the desk, certifications are similar to some college degrees. They might get you in the door, but you still have to pass the practical exam with the technical staff in order to get hired.

Don’t get me wrong, the certification at least gets you past the recruiter/HR rep. Probably. At least where I am, the recruiter has a list of questions plus you have to get past my manager’s review before it even gets into my hands for a yes/no vote.

I’ve earned several certifications over the years and some have been challenging. I generally have a goal when going after a certification, and usually it’s to validate my existing skills and maybe pick up a couple of extra bits that are outside my current work environment.

Back in the 80’s, I was installing and managing Novell and 3Com Local Area Networks (LANs). At one medium-sized company, I was the first full-time LAN Manager. In order to get access to the inner support network, I took quite a few 3Com classes and eventually went for the final certification. The certification would give me access to CompuServe and the support network I was after.

I did pass of course, and being a gamer, I enjoyed the heck out of the certification title.

Certification 1: 3Com 3Wizard

I’ve taken quite a few training courses over the years: IBM’s OS/2 class down in North Carolina, Novell training (remember the ‘Paper CNE’ 🙂 ), and even MS-DOS 5 classes. About this time (early 90’s), I’d been on Usenet for 4 or 5 years. I’d written a Usenet news reader (Metal) and was very familiar with RFCs and how the Usenet server specifically worked. I had stumbled on Linux when Linus released it, but I didn’t actually install a Linux server on an old 386 I had until Slackware came out with a crapload of diskettes. I had an internet connection at home on PSINet.

Basically I was poking at Linux.

In the mid 90’s, I was ready to change jobs. I had been moved from a single department to the overall organization (NASA HQ), and what I was going to be working on was going to be reduced from everything for the department to file and print and user management. I was approached by the Unix team and their manager: “Hey, how about you shift to the Unix team?” It honestly took me a week to consider it, but I eventually accepted. I was given ownership of the Usenet server 🙂 and the Essential Sysadmin book, and over 30 days I devoured the book and even sent in a correction to the author (credit in the next edition 🙂 ). After 2 years of digging in, documenting, and researching, plus attending a couple of Solaris classes, I went for the Sun certification. This was really just so I could validate my skills. I didn’t need the certs for anything as there wasn’t a deeper support network you gained access to when you got it.

Certification 2: Sun Certified Solaris Administrator

Certification 3: Sun Certified Network Administrator

A few years later the subcontractor I was working for lost the Unix position. They were a programming shop (databases) and couldn’t justify the position. I was interested in learning more about networking and wanted to take a couple of classes. The new subcontractor offered me a chance at a boot camp for Cisco. I accepted and attended the boot camp for several weeks. I wasn’t working on any Cisco gear, so I concentrated on networking concepts more than anything else. I barely even took any notes 🙂 But I also figured that since the company was paying for the class ($10,000), I should at least get the certifications. The CCNA certification was a single test on all the basics of Cisco devices and networking. The CCNP certification was multiple tests, each one focusing on a single category vs the overall test that the CCNA was. The farther away from the class I got, the harder it was to pass the tests. CCNA was quick and easy. I passed the next couple of exams on the first attempt. The next took a couple of attempts. The last took three. But I did pass and get my certifications.

Certification 4: Cisco Certified Network Associate

Certification 5: Cisco Certified Network Professional

I did actually manage firewalls after I got the certification, but I’m really a systems admin, and the Cisco command line and concepts were outside my wheelhouse. I tried to take the refresher certification, but they’d gone to hands-on testing vs multiple choice, and since I wasn’t actually managing Cisco gear, I failed.

I’d been running Red Hat Linux on my home firewall for a while, but I switched to Mandrake for a bit, then Mandriva, then Ubuntu. I also set up a remote server in Florida running OpenBSD, so I was still poking at operating systems and still a systems admin sort of person. At my current job, I was hired because of my enterprise knowledge: working with Solaris, AIX, Irix, and various Linux distros. Since Sun was purchased by Oracle and then abandoned, I’ve been moving more into Red Hat management, getting deeper and deeper into it. We’re also using HP-UX and had a few Tru64 servers, in addition to a FreeBSD server and Red Hat servers. I’d taken several Red Hat training courses (cluster management, performance tuning, etc.) and eventually decided to go for my certifications. It seems like I’ve been getting a cert or two every 10 years 🙂 3Wizard in the 80’s. Sun in the 90’s. And Cisco in the 00’s. So I signed up for the Red Hat Certified System Administrator test and the Red Hat Certified Engineer test. It took two tries to get the RHCSA certificate. The first part of the test is to break into the test server; it took me 30 minutes the first time to remember how to do that. The RHCE test was a bit different. You had to create servers vs just use them as in the RHCSA test. Shoot, if I need to create a server, I don’t need to memorize how to do it; I document the process after research. Anyway, after two tries at the RHCE test, I dropped trying.

Certification 6: Red Hat Certified System Administrator

With Red Hat 8 out, I’ll give it a year and, for the 20’s, try for the RHCSA and RHCE again.

Here’s an odd thing though. These are all Operating System certifications. I’m a Systems Admin. I manage servers. I enjoy managing servers. I’ve considered studying for and getting certifications for MySQL, for example, since I do a lot of coding for one of my applications (and several smaller ones) and would like to expand my knowledge of databases. I’m sure I’m doing some things the hard way 🙂 Work actually gave me (for free!) two Dell R710 servers as they were being replaced. The first one I set up to replace my Ubuntu gateway, so it was a full install of Linux and a firewall. Basically a replacement. All my code was on it, my backups, web sites, etc. But the second server showed up and the guys on the team talked me into installing VMware’s vSphere software to turn the server into a VMware host able to run multiple virtual servers. And I stepped up and signed up with the VMware Users Group (VMUG) because I could get a discount on vCenter, which lets me turn the two R710’s into a VMware cluster.

In addition, I took over control of the Kubernetes clusters at work. The Product Engineers had thrown it over the wall at Operations and it had sat idle. After I took it over, I updated the scripts I’d initially created to deploy the original clusters and started building new clusters. I’ve been digging deeper and deeper into Kubernetes in order to be able to support it. On the Product Engineering side, they’re building containers and pods to be deployed into the Kubernetes environments, so they’re familiar with Kubernetes with regards to deployments and some rudimentary management, but they’re not building or supporting the clusters. I am. I’m it. My boss recently asked me, “who’s our support contract with for Kubernetes?” and my answer was, “me, just me.”

So I decided to try to take the Kubernetes exams. This is the first non-Operating System exam and certification I’ve attempted; I considered it for MySQL and others, but never actually moved forward with them. For Kubernetes, since I’m it, I figured I should dig in deeper and get smarter. I took the exam and failed it. But I realized that they were looking for application development knowledge as well, which, as an Operations admin, I’m not involved in. So I took the Application Developer course, retook the CKA exam last week, and passed it. And since I was taking the AppDev course anyway, I figured I’d take the AppDev test too. I failed that as well, the first time. I expect I’ll be able to pass it the second time I try (I have a year for the free retake).

Certification 7: Certified Kubernetes Administrator

Over the past few days, I’ve been touting the CKA cert. I even have a printed copy of the cert at my desk. It’s the first one I’ve taken that’s not Operating System specific.

Certification 8: Certified Kubernetes Application Developer

A Year Later: I started receiving a few emails from the Linux Foundation: your second test opportunity is about to expire. So I sucked it up and spent a month studying for the CKAD. I’d done a lot more in the past year and felt I was better prepared to take the test. I retook the Linux Academy course and even picked up a separate book just for some extra, different guidance. The book did clarify one thing for me that I hadn’t quite grokked: Labels. I mean, I know what a label is, but I wasn’t fully clear on its function. Since containers and pods don’t have a stable identity, labels are how you associate them with an application. It clicked because I’d been using tags to group products together in order to run Ansible playbooks against them. The containers don’t have a set IP address and don’t have a DNS name; they just have a label, ‘app: llamas’. So anything carrying the ‘app: llamas’ label is treated as part of that application, since services and policies select it by that label. Anyway, I took the test and passed it, so one more certification.
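To make that concrete, here’s a minimal sketch of labels in action (the ‘app: llamas’ label is the one from above; the nginx image and the service name are placeholders I picked for illustration):

# Start a pod that carries the label
$ kubectl run llamas --image=nginx --labels="app=llamas"

# Select by label rather than by pod name or IP
$ kubectl get pods -l app=llamas

# Create a Service; it finds its pods via that same label selector
$ kubectl expose pod llamas --port=80 --name=llamas-svc

# Show the selector the Service is using
$ kubectl get svc llamas-svc -o jsonpath='{.spec.selector}'

If a deployment later spins up more pods with ‘app: llamas’, the service picks them up automatically. That’s the association labels give you.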

Certification 9: AWS Certified Cloud Practitioner

The AWS CCP exam is a core, entry-level exam. I started taking the Linux Academy course and it was basically a matter of matching up the Amazon terminology with how things work in Operations. Once I had them matched, I was able to take the test less than a week later and pass it. I’ve started studying for the AWS Certified SysOps Associate exam and will follow it up with the AWS Certified DevOps Professional and then the Security track. In the meantime though, I’m taking the OpenShift Administration classes. So who knows what the next certification will be?

Carl – 3Wizard, SCSA/SCNA, CCNA/CCNP, RHCSA, CKA, CKAD, AWS CCP

Posted in About Carl, Computers

Beef and Bean Taco Skillet

  • 1 lb beef
  • 1 1.4oz packet taco seasoning
  • 1 16oz can pinto beans
  • 1 10.75oz can condensed tomato soup
  • 1/2 cup salsa
  • 1/4 cup water
  • 1/2 cup shredded cheese
  • 6 6″ flour tortillas

Cook beef in a 10-inch skillet over medium-high heat until well browned; break up any clumps of beef. Drain fat. Stir in taco seasoning. Add beans, soup, salsa, and water. Reduce to low heat and simmer for 10 minutes, stirring occasionally. Top with cheese. Serve with flour tortillas.

Posted in Cooking