Kubernetes Preparation Steps for 1.19.6

Upgrading Kubernetes Clusters

This document provides background on what is being upgraded, to which versions, and the steps required to prepare for the upgrade itself. These steps are done only once. Once they have been completed and all the configurations are checked into GitHub and GitLab, all clusters are ready to be upgraded.

Reference links to product documentation are at the end of this document.

Upgrade Preparation Steps

Upgrades to the Sandbox environment are done a few weeks before the official release to allow more in-depth testing. This includes checking the release notes, changelogs, and general operational status of the various tools in use.

Server Preparations

Because Spacewalk itself may also be upgraded, and to ensure the necessary software is installed prior to the upgrade, make sure all repositories are enabled and that the yum-plugin-versionlock package is installed.

Enable Repositories

Check the Spacewalk configuration and ensure that upgrades are coming from the local server and not from the internet.
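
To verify, you can list the Spacewalk channels the server is subscribed to along with the repositories yum sees; the exact channel names will vary with your Spacewalk setup.

# spacewalk-channel --list
# yum repolist enabled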

Install yum versionlock

The critical components of Kubernetes are locked into place using the versionlock yum plugin. If not already installed, install it before beginning work.

# yum install yum-plugin-versionlock -y
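
With the plugin installed, the kubeadm, kubelet, and kubectl packages can be pinned so a stray yum update doesn't move them. A minimal sketch, assuming the 1.19.6 packages are the ones to pin (adjust the versions to match what's actually installed):

# yum versionlock add kubeadm-1.19.6 kubelet-1.19.6 kubectl-1.19.6
# yum versionlock list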

Load Images

The next step is to load all the necessary Kubernetes, etcd, and supporting images such as coredns into the local repository so that the clusters aren’t pulling images from the internet. Note that pause:3.1 has been upgraded to pause:3.2, so make sure you pull and push the new image as well.

# docker pull k8s.gcr.io/etcd:3.4.13-0
3.4.13-0: Pulling from etcd
4000adbbc3eb: Pull complete
d72167780652: Pull complete
d60490a768b5: Pull complete
4a4b5535d134: Pull complete
0dac37e8b31a: Pull complete
Digest: sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2
Status: Downloaded newer image for k8s.gcr.io/etcd:3.4.13-0
k8s.gcr.io/etcd:3.4.13-0

# docker pull k8s.gcr.io/kube-apiserver:v1.19.6
v1.19.6: Pulling from kube-apiserver
f398b465657e: Pull complete
cbcdf8ef32b4: Pull complete
1ba2da83d184: Pull complete
Digest: sha256:5cf4a3622acbde74406a6b292d88e6d033070fc0f6e4cd50c13c182ba7c7a1ca
Status: Downloaded newer image for k8s.gcr.io/kube-apiserver:v1.19.6
k8s.gcr.io/kube-apiserver:v1.19.6

# docker pull k8s.gcr.io/kube-controller-manager:v1.19.6
v1.19.6: Pulling from kube-controller-manager
f398b465657e: Already exists
cbcdf8ef32b4: Already exists
22e45a96b75b: Pull complete
Digest: sha256:96c29073b29003f58faec22912aed45de831b4393eb4c8722fe1c3f5e4c296be
Status: Downloaded newer image for k8s.gcr.io/kube-controller-manager:v1.19.6
k8s.gcr.io/kube-controller-manager:v1.19.6

# docker pull k8s.gcr.io/kube-scheduler:v1.19.6
v1.19.6: Pulling from kube-scheduler
f398b465657e: Already exists
cbcdf8ef32b4: Already exists
8eda9f73d5d9: Pull complete
Digest: sha256:d96fdb88d032df719f6fb832aaafd3b90c688c216b7f8d3d01cd7f48664b6f37
Status: Downloaded newer image for k8s.gcr.io/kube-scheduler:v1.19.6
k8s.gcr.io/kube-scheduler:v1.19.6

# docker pull k8s.gcr.io/kube-proxy:v1.19.6
v1.19.6: Pulling from kube-proxy
4ba180b702c8: Already exists
85b604bcc41a: Pull complete
fafe7e2b354a: Pull complete
b2c4667c1ca7: Pull complete
c93c6a0c3ea5: Pull complete
beea6d17d8e9: Pull complete
9401490890f6: Pull complete
Digest: sha256:b0cb8f17f251f311da0d5681c8aa08cba83d85e6c520bf4d842e3c457f46ce92
Status: Downloaded newer image for k8s.gcr.io/kube-proxy:v1.19.6
k8s.gcr.io/kube-proxy:v1.19.6

# docker pull k8s.gcr.io/coredns:1.7.0
1.7.0: Pulling from coredns
c6568d217a00: Pull complete
6937ebe10f02: Pull complete
Digest: sha256:73ca82b4ce829766d4f1f10947c3a338888f876fbed0540dc849c89ff256e90c
Status: Downloaded newer image for k8s.gcr.io/coredns:1.7.0
k8s.gcr.io/coredns:1.7.0

# docker pull k8s.gcr.io/pause:3.2
3.2: Pulling from pause
c74f8866df09: Pull complete
Digest: sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f
Status: Downloaded newer image for k8s.gcr.io/pause:3.2
k8s.gcr.io/pause:3.2

# docker image ls
REPOSITORY                              TAG                 IMAGE ID            CREATED             SIZE
k8s.gcr.io/kube-proxy                   v1.19.6             dbcc366449b0        6 days ago          118MB
k8s.gcr.io/kube-apiserver               v1.19.6             5522f5e5fd7d        6 days ago          119MB
k8s.gcr.io/kube-controller-manager      v1.19.6             9dc349037b41        6 days ago          111MB
k8s.gcr.io/kube-scheduler               v1.19.6             bf39b6341770        6 days ago          45.6MB
k8s.gcr.io/etcd                         3.4.13-0            0369cf4303ff        3 months ago        253MB
k8s.gcr.io/coredns                      1.7.0               bfe3a36ebd25        6 months ago        45.2MB
k8s.gcr.io/pause                        3.2                 80d28bedfe5d        10 months ago       683kB

Next up is to tag all the images so they’ll be hosted locally on the bldr0cuomrepo1.internal.pri server.

# docker tag k8s.gcr.io/etcd:3.4.13-0 bldr0cuomrepo1.internal.pri:5000/etcd:3.4.13-0
# docker tag k8s.gcr.io/kube-apiserver:v1.19.6 bldr0cuomrepo1.internal.pri:5000/kube-apiserver:v1.19.6
# docker tag k8s.gcr.io/kube-controller-manager:v1.19.6 bldr0cuomrepo1.internal.pri:5000/kube-controller-manager:v1.19.6
# docker tag k8s.gcr.io/kube-scheduler:v1.19.6 bldr0cuomrepo1.internal.pri:5000/kube-scheduler:v1.19.6
# docker tag k8s.gcr.io/kube-proxy:v1.19.6 bldr0cuomrepo1.internal.pri:5000/kube-proxy:v1.19.6
# docker tag k8s.gcr.io/coredns:1.7.0 bldr0cuomrepo1.internal.pri:5000/coredns:1.7.0
# docker tag k8s.gcr.io/pause:3.2 bldr0cuomrepo1.internal.pri:5000/pause:3.2

# docker image ls
REPOSITORY                                                 TAG                 IMAGE ID            CREATED             SIZE
bldr0cuomrepo1.internal.pri:5000/kube-proxy                v1.19.6             dbcc366449b0        6 days ago          118MB
k8s.gcr.io/kube-proxy                                      v1.19.6             dbcc366449b0        6 days ago          118MB
bldr0cuomrepo1.internal.pri:5000/kube-controller-manager   v1.19.6             9dc349037b41        6 days ago          111MB
k8s.gcr.io/kube-controller-manager                         v1.19.6             9dc349037b41        6 days ago          111MB
bldr0cuomrepo1.internal.pri:5000/kube-scheduler            v1.19.6             bf39b6341770        6 days ago          45.6MB
k8s.gcr.io/kube-scheduler                                  v1.19.6             bf39b6341770        6 days ago          45.6MB
bldr0cuomrepo1.internal.pri:5000/kube-apiserver            v1.19.6             5522f5e5fd7d        6 days ago          119MB
k8s.gcr.io/kube-apiserver                                  v1.19.6             5522f5e5fd7d        6 days ago          119MB
bldr0cuomrepo1.internal.pri:5000/etcd                      3.4.13-0            0369cf4303ff        3 months ago        253MB
k8s.gcr.io/etcd                                            3.4.13-0            0369cf4303ff        3 months ago        253MB
bldr0cuomrepo1.internal.pri:5000/coredns                   1.7.0               bfe3a36ebd25        6 months ago        45.2MB
k8s.gcr.io/coredns                                         1.7.0               bfe3a36ebd25        6 months ago        45.2MB
bldr0cuomrepo1.internal.pri:5000/pause                     3.2                 80d28bedfe5d        10 months ago       683kB
k8s.gcr.io/pause                                           3.2                 80d28bedfe5d        10 months ago       683kB

The final step is to push them all up to the local repository.

# docker push bldr0cuomrepo1.internal.pri:5000/etcd:3.4.13-0
The push refers to repository [bldr0cuomrepo1.internal.pri:5000/etcd]
bb63b9467928: Pushed
bfa5849f3d09: Pushed
1a4e46412eb0: Pushed
d61c79b29299: Pushed
d72a74c56330: Pushed
3.4.13-0: digest: sha256:bd4d2c9a19be8a492bc79df53eee199fd04b415e9993eb69f7718052602a147a size: 1372

# docker push bldr0cuomrepo1.internal.pri:5000/kube-apiserver:v1.19.6
The push refers to repository [bldr0cuomrepo1.internal.pri:5000/kube-apiserver]
3721be488f60: Pushed
597f1090d8e9: Pushed
e7ee84ae4d13: Pushed
v1.19.6: digest: sha256:165196f6df4953429054bad29571c4aee1700c5d370f6a7c4415293371320ca0 size: 949

# docker push bldr0cuomrepo1.internal.pri:5000/kube-controller-manager:v1.19.6
The push refers to repository [bldr0cuomrepo1.internal.pri:5000/kube-controller-manager]
d3d1d4836f26: Pushed
597f1090d8e9: Mounted from kube-apiserver
e7ee84ae4d13: Mounted from kube-apiserver
v1.19.6: digest: sha256:c6631f1624152013ec188ca11ce42580fe34bb83aaef521cdffc89909316207e size: 949

# docker push bldr0cuomrepo1.internal.pri:5000/kube-scheduler:v1.19.6
The push refers to repository [bldr0cuomrepo1.internal.pri:5000/kube-scheduler]
b468c9e3b9f6: Pushed
597f1090d8e9: Mounted from kube-controller-manager
e7ee84ae4d13: Mounted from kube-controller-manager
v1.19.6: digest: sha256:dfd0c6ea6ea3ce2ec29dad98b3891495f5df8271ca21bca8857cfee2ad18b66f size: 949

# docker push bldr0cuomrepo1.internal.pri:5000/kube-proxy:v1.19.6
The push refers to repository [bldr0cuomrepo1.internal.pri:5000/kube-proxy]
d4aabbee649e: Pushed
78dd6c0504a7: Pushed
061bfb5cb861: Pushed
1b55846906e8: Pushed
b9b82a97c787: Pushed
b4e54f331697: Pushed
91e3a07063b3: Mounted from kube-scheduler
v1.19.6: digest: sha256:c4c840cba79da1a61172f77af81173faf19a2f5ee58f3ff8be3ba68a279b14a0 size: 1786

# docker push bldr0cuomrepo1.internal.pri:5000/coredns:1.7.0
The push refers to repository [bldr0cuomrepo1.internal.pri:5000/coredns]
96d17b0b58a7: Pushed
225df95e717c: Pushed
1.7.0: digest: sha256:242d440e3192ffbcecd40e9536891f4d9be46a650363f3a004497c2070f96f5a size: 739

# docker push bldr0cuomrepo1.internal.pri:5000/pause:3.2
The push refers to repository [bldr0cuomrepo1.internal.pri:5000/pause]
ba0dae6243cc: Pushed
3.2: digest: sha256:4a1c4b21597c1b4415bdbecb28a3296c6b5e23ca4f9feeb599860a1dac6a0108 size: 526
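
Assuming the local repository is a standard Docker registry v2 listening on port 5000 over plain HTTP (adjust if TLS is enabled), a quick way to confirm the images landed is to query its catalog and a tag list:

# curl http://bldr0cuomrepo1.internal.pri:5000/v2/_catalog
# curl http://bldr0cuomrepo1.internal.pri:5000/v2/kube-apiserver/tags/list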

Software Preparations

This section describes the updates that need to be made to the various containers that are installed in the Kubernetes clusters. Most of the changes involve updating image locations to point to my Docker repository instead of pulling directly from the internet.

You’ll need to clone the playbook repo from GitLab if it’s new to you, or pull the latest changes if you already have it, as all the work is done in various directories under the kubernetes/configurations directory. Do that before continuing; all subsequent sections assume you’re in the kubernetes/configurations directory.

$ git clone git@lnmt1cuomgitlab.internal.pri:external-unix/playbooks.git
$ git pull git@lnmt1cuomgitlab.internal.pri:external-unix/playbooks.git

Make sure you add and commit the changes to your repo.

$ git add [file]
$ git commit [file] -m "commit comment"

And once done with all the updates, push the changes back up to gitlab.

$ git push

Update calico.yaml

In the calico directory, run the following command to get the current calico.yaml file.

$ curl https://docs.projectcalico.org/manifests/calico.yaml -O

Grep out the image: lines to see which images the manifest uses, then pull those images down so they can be pushed to the local repository.
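
For example, to list the image references:

$ grep "image:" calico.yaml | sort -u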

# docker pull docker.io/calico/cni:v3.17.1
v3.17.1: Pulling from calico/cni
3d42ab7fd2aa: Pull complete
a0a1a170563e: Pull complete
4d26d217f6ba: Pull complete
Digest: sha256:3dc2506632843491864ce73a6e73d5bba7d0dc25ec0df00c1baa91d17549b068
Status: Downloaded newer image for calico/cni:v3.17.1
docker.io/calico/cni:v3.17.1

# docker pull docker.io/calico/pod2daemon-flexvol:v3.17.1
v3.17.1: Pulling from calico/pod2daemon-flexvol
1099d2df204e: Pull complete
9aef96ab1093: Pull complete
583d1a1aef56: Pull complete
3eb06e0bf22b: Pull complete
8362899d2e86: Pull complete
9fb2be1a9d3e: Pull complete
c54d4908d08c: Pull complete
Digest: sha256:48f277d41c35dae051d7dd6f0ec8f64ac7ee6650e27102a41b0203a0c2ce6c6b
Status: Downloaded newer image for calico/pod2daemon-flexvol:v3.17.1
docker.io/calico/pod2daemon-flexvol:v3.17.1

# docker pull docker.io/calico/node:v3.17.1
v3.17.1: Pulling from calico/node
a019d9c0ce8b: Pull complete
fa31af8ad59c: Pull complete
Digest: sha256:25e0b0495c0df3a7a06b6f9e92203c53e5b56c143ac1c885885ee84bf86285ff
Status: Downloaded newer image for calico/node:v3.17.1
docker.io/calico/node:v3.17.1

# docker pull docker.io/calico/kube-controllers:v3.17.1
v3.17.1: Pulling from calico/kube-controllers
c36c2fad477a: Pull complete
38fb4366911a: Pull complete
d7deb0c84128: Pull complete
c710f3356d3b: Pull complete
Digest: sha256:d27dd1780b265406782578ae55b5ff885b94765a36b4df43cdaa4a8592eba2db
Status: Downloaded newer image for calico/kube-controllers:v3.17.1
docker.io/calico/kube-controllers:v3.17.1

Then tag the images for local storage.

# docker tag calico/cni:v3.17.1 bldr0cuomrepo1.internal.pri:5000/cni:v3.17.1
# docker tag calico/pod2daemon-flexvol:v3.17.1 bldr0cuomrepo1.internal.pri:5000/pod2daemon-flexvol:v3.17.1
# docker tag calico/node:v3.17.1 bldr0cuomrepo1.internal.pri:5000/node:v3.17.1
# docker tag calico/kube-controllers:v3.17.1 bldr0cuomrepo1.internal.pri:5000/kube-controllers:v3.17.1

Then push them up to the local repository.

# docker push bldr0cuomrepo1.internal.pri:5000/cni:v3.17.1
The push refers to repository [bldr0cuomrepo1.internal.pri:5000/cni]
23a79ec53bb3: Pushed
40663f1967b3: Pushed
a13bf69d4d98: Pushed
v3.17.1: digest: sha256:4f8cbbaf93ef9c549021423ac804ac3e15e366c8a61cf6008b4737d924fe65e2 size: 946

# docker push bldr0cuomrepo1.internal.pri:5000/pod2daemon-flexvol:v3.17.1
The push refers to repository [bldr0cuomrepo1.internal.pri:5000/pod2daemon-flexvol]
fee23ca43586: Pushed
db5b7a686992: Pushed
1e5330946944: Pushed
aeedaec3fa39: Pushed
d0dcbddd6708: Pushed
50295429f9b9: Pushed
4f676ac8854c: Pushed
v3.17.1: digest: sha256:0e63dd25602907c54e43f479d00ea83d7c4388f9a69b1457358ae043edbb56cd size: 1788

# docker push bldr0cuomrepo1.internal.pri:5000/node:v3.17.1
The push refers to repository [bldr0cuomrepo1.internal.pri:5000/node]
3633b710791b: Pushed
40cad2715d16: Pushed
v3.17.1: digest: sha256:304dd23bcda5216026f1601cb61395792249f1c58c98771198f6e517b0f5c96b size: 737

# docker push bldr0cuomrepo1.internal.pri:5000/kube-controllers:v3.17.1
The push refers to repository [bldr0cuomrepo1.internal.pri:5000/kube-controllers]
a945e41cb5e1: Pushed
cd7170f5d387: Pushed
b3cb3ad89824: Pushed
d7ecfe7ff366: Pushed
v3.17.1: digest: sha256:95b53efaad09a3d09f43c4f950a1675f932c25bd3781e4fa533c3a3f9a16958c size: 1155

Edit the file, search for image:, and change each image reference to point at the local repository so it matches the tags pushed above; for example, calico/cni:v3.17.1 becomes bldr0cuomrepo1.internal.pri:5000/cni:v3.17.1.
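
This can be scripted with sed; a sketch that handles the image references whether or not they carry the docker.io/ prefix, then shows the result for review:

$ sed -i -e 's|image: docker.io/calico/|image: bldr0cuomrepo1.internal.pri:5000/|' \
         -e 's|image: calico/|image: bldr0cuomrepo1.internal.pri:5000/|' calico.yaml
$ grep "image:" calico.yaml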

Make sure you follow the documentation to update calicoctl to 3.17.1.
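
For a standalone calicoctl binary, the update is typically just downloading the matching release; a sketch, with the release URL and asset name as assumptions to confirm against the Calico documentation:

# curl -L -o /usr/local/bin/calicoctl https://github.com/projectcalico/calicoctl/releases/download/v3.17.1/calicoctl
# chmod +x /usr/local/bin/calicoctl
# calicoctl version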

Update metrics-server

In the metrics-server directory, run the following command to get the current components.yaml file:

$ wget https://github.com/kubernetes-sigs/metrics-server/releases/download/v0.4.1/components.yaml

Edit the file, search for image:, and replace k8s.gcr.io/metrics-server with bldr0cuomrepo1.internal.pri:5000 so the image line matches the locally pushed bldr0cuomrepo1.internal.pri:5000/metrics-server:v0.4.1 tag.
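
As a one-line sketch, assuming the image line reads k8s.gcr.io/metrics-server/metrics-server:v0.4.1 as pulled below:

$ sed -i 's|image: k8s.gcr.io/metrics-server/|image: bldr0cuomrepo1.internal.pri:5000/|' components.yaml
$ grep "image:" components.yaml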

Download the new image and save it locally.

# docker pull k8s.gcr.io/metrics-server/metrics-server:v0.4.1
v0.4.1: Pulling from metrics-server/metrics-server
e59bd8947ac7: Pull complete
cdbcff7dade2: Pull complete
Digest: sha256:78035f05bcf7e0f9b401bae1ac62b5a505f95f9c2122b80cff73dcc04d58497e
Status: Downloaded newer image for k8s.gcr.io/metrics-server/metrics-server:v0.4.1
k8s.gcr.io/metrics-server/metrics-server:v0.4.1

Tag the image.

# docker tag k8s.gcr.io/metrics-server/metrics-server:v0.4.1 bldr0cuomrepo1.internal.pri:5000/metrics-server:v0.4.1

And push the newly tagged image.

# docker push bldr0cuomrepo1.internal.pri:5000/metrics-server:v0.4.1
The push refers to repository [bldr0cuomrepo1.internal.pri:5000/metrics-server]
7f4d330f3490: Pushed
7a5b9c0b4b14: Pushed
v0.4.1: digest: sha256:2009bb9ca86e8bdfc035a37561cf062f3e051c35823a5481fbd13533ce402fac size: 739

Update kube-state-metrics

The kube-state-metrics package isn’t updated this quarter.

Update filebeat-kubernetes.yaml

In the filebeat directory, run the following command to get the current filebeat-kubernetes.yaml file:

$ curl -L -O https://raw.githubusercontent.com/elastic/beats/7.10.0/deploy/kubernetes/filebeat-kubernetes.yaml

Change all references in the filebeat-kubernetes.yaml file from kube-system to monitoring. If this is a new installation, create the monitoring namespace first.
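
If you're making these edits by hand rather than relying on the update script described later, a sketch of both steps:

$ kubectl create namespace monitoring
$ sed -i 's/kube-system/monitoring/g' filebeat-kubernetes.yaml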

Update the local repository with the new docker image.

# docker pull docker.elastic.co/beats/filebeat:7.10.0
7.10.0: Pulling from beats/filebeat
f1feca467797: Pull complete
c88871268a93: Pull complete
10b07962f975: Pull complete
72140f4e331a: Pull complete
f0b0c2d74c55: Pull complete
f331a4a38275: Pull complete
85232249e0eb: Pull complete
cef8587fe8c4: Pull complete
0663fb8750a2: Pull complete
c573ab98e4ce: Pull complete
Digest: sha256:c8c612f37e093a4b7da6b0d5fbaf68a558642405d5be98c7ec76fb1169aa93fe
Status: Downloaded newer image for docker.elastic.co/beats/filebeat:7.10.0
docker.elastic.co/beats/filebeat:7.10.0

Tag the image appropriately.

# docker tag docker.elastic.co/beats/filebeat:7.10.0 bldr0cuomrepo1.internal.pri:5000/filebeat:7.10.0

Finally, push it up to the local repository.

# docker push bldr0cuomrepo1.internal.pri:5000/filebeat:7.10.0
The push refers to repository [bldr0cuomrepo1.internal.pri:5000/filebeat]
ea58304a2317: Pushed
0dafc9982491: Pushed
27faaca5907d: Pushed
e4f67691198f: Pushed
b1e4fb67465f: Pushed
bb10e40dd1a4: Pushed
a80f7773385e: Pushed
3ccd96885c69: Pushed
565a72108ad2: Pushed
613be09ab3c0: Pushed
7.10.0: digest: sha256:16f8b41f68920f94fdc101e5af06c658d3a846168c0b76738097fd19cf6e32b3 size: 2405

Once the image is hosted locally, copy the file into each of the cluster directories and make the following changes.
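
A sketch of that copy, assuming the per-cluster directories named later in this document:

$ for cluster in bldr0-0 cabo0-0 tato0-1 lnmt1-2; do cp filebeat-kubernetes.yaml $cluster/; done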

DaemonSet Changes

In the filebeat folder are two files, a config file and an update file, which automatically make changes to the filebeat-kubernetes.yaml file based on some of the edits performed below. The changes below prepare the file for that script, which populates the different clusters with the correct information (a sketch of the substitutions follows this list). The script:

  • Replaces the docker.elastic.co/beats image path with bldr0cuomrepo1.internal.pri:5000
  • Replaces <elasticsearch> with the actual ELK Master server name
  • Replaces the kube-system namespace with monitoring. You’ll need to ensure the monitoring namespace has been created before applying this .yaml file.
  • Replaces DEPLOY_ENV with the expected deployment environment name: dev, sqa, staging, or prod. These names are used in the ELK cluster to easily identify where the logs are sourced.
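
The config and update files in the repo drive these substitutions; purely as an illustration of the replacements involved (using the Development ELK master and the dev environment as example values), the edits amount to something like:

$ sed -i -e 's|docker.elastic.co/beats|bldr0cuomrepo1.internal.pri:5000|' \
         -e 's|<elasticsearch>|bldr0cuomifem1.internal.pri|' \
         -e 's|kube-system|monitoring|g' \
         -e 's|DEPLOY_ENV|dev|' filebeat-kubernetes.yaml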

Change the values in the following lines to match:

        - name: ELASTICSEARCH_HOST
          value: "<elasticsearch>"
        - name: ELASTICSEARCH_PORT
          value: "9200"
        - name: ELASTICSEARCH_USERNAME
          value: ""
        - name: ELASTICSEARCH_PASSWORD
          value: ""

In addition, remove the following lines. They confuse the container if they exist.

        - name: ELASTIC_CLOUD_ID
          value:
        - name: ELASTIC_CLOUD_AUTH
          value:

Add the default username and password to the following lines as noted:

    output.elasticsearch:
      hosts: ['${ELASTICSEARCH_HOST:elasticsearch}:${ELASTICSEARCH_PORT:9200}']
      username: ${ELASTICSEARCH_USERNAME:elastic}
      password: ${ELASTICSEARCH_PASSWORD:changeme}

ConfigMap Changes

In the ConfigMap section, activate the filebeat.autodiscover section by uncommenting it and delete the filebeat.inputs configuration section. Then, in the filebeat.autodiscover section, make the following three changes as noted in the comments.

filebeat.autodiscover:
  providers:
    - type: kubernetes
      host: ${NODE_NAME}                          # rename node to host
      hints.enabled: true
      hints.default_config.enabled: false         # add this line
      hints.default_config:
        type: container
        paths:
          - /var/log/containers/*${data.kubernetes.container.id}.log
        exclude_lines: ["^\\s+[\\-`('.|_]"]  # drop asciiart lines  # add this line

In the processors section, remove the cloud.id and cloud.auth lines, add the following lines, and change DEPLOY_ENV to the environment filebeat is being deployed to: dev, sqa, staging, or prod.

   - add_fields:
       target: ''
       fields:
         environment: 'DEPLOY_ENV'

Elastic Stack in Development

This Elastic Stack cluster is used by the Development Kubernetes clusters. Update the files in the bldr0-0 subdirectory.

- name: ELASTICSEARCH_HOST
  value: bldr0cuomifem1.internal.pri

Elastic Stack in QA

This Elastic Stack cluster is used by the QA Kubernetes clusters. Update the files in the cabo0-0 directory.

- name: ELASTICSEARCH_HOST
  value: cabo0cuomifem1.internal.pri

Elastic Stack in Staging

This Elastic Stack cluster is used by the Staging Kubernetes clusters. Update the files in the tato0-1 directory.

- name: ELASTICSEARCH_HOST
  value: tato0cuomifem1.internal.pri

Elastic Stack in Production

This Elastic Stack cluster is used by the Production Kubernetes cluster. Update the file in the lnmt1-2 directory.

- name: ELASTICSEARCH_HOST
  value: lnmt1cuelkmstr1.internal.pri