Jinja2: Encountered Unknown Tag

Overview

I was running an Ansible playbook and one of the templates failed with the following error:

AnsibleError: template error while templating string: Encountered unknown tag 'd'

Jinja2 uses a brace + percent ({%) and a percent + brace (%}) to identify Jinja2 commands that are used when preprocessing a template.

This error is simply telling you that there’s a brace + percent in the template that isn’t the start of a Jinja2 command or “tag”. Search the file for {% and you should be able to locate the offending line.
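
A quick grep will usually point right at it (the template path here is just a placeholder, use your own):

grep -n '{%' roles/myrole/templates/mytemplate.j2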

Typically this might be related to a printf()-type command in the templated file, where the line looks something like this:

message += "_e{%d,%d}:%s|%s" % (len(title), len(text), title, text)

Note specifically the {%d in the line: a brace + percent followed by d. This is what’s causing the problem.

Solution

Wrap the line (or lines) in a raw tag like so:

{% raw %}
			if title and text:
				message += "_e{%d,%d}:%s|%s" % (len(title), len(text), title, text)
{% endraw %}

And the application of the template succeeds.

Posted in ansible, Computers

Development and Branches

Many many many years ago, I learned to program. It was on a Radio Shack Color Computer. It had a BASIC plug-in pack and I think I could save to a cassette tape. I followed the BASIC programming book and learned to write code. My very first program was one that let you create a vehicle for Steve Jackson’s Car Wars. Heck, I was fortunate to discover the old BASIC program on an old backup CD from the ’90s and it’s now in my GitHub repository.

My first couple of jobs after getting out of the Army in 1982 were programming related in one way or another. But I moved away from being a developer and into building local area networks (LANs) and writing scripts and programs to help with that. It let me be creative without the restrictions that came with writing code professionally.

In the mid-90’s, I switched from managing LANs to managing Unix servers. Solaris primarily but some HP-UX, Irix, Tru64, and Red Hat Linux. This exposed me to the real basic Revision Control System (RCS) that we used to manage configuration files on the Unix servers.

I began using RCS for my personal programming projects. Many years later, I wrote an Inventory program to help manage our servers. I of course used RCS to manage the changes. I had my own way of deploying finished code to production systems, similar to how Jenkins works if you’re familiar with that tool. In order to become familiar with current tools, I gradually shifted over to git.

It’s been an interesting run and now I’m working directly with developers and it’s giving me a better understanding of their needs. Because I want to help them do a better job, I’m learning their techniques and applying them again to my projects. I’ve set up a much better Jenkins pipeline and even used gitlab’s CI/CD pipeline.

Because we’re using git and GitHub a lot more at the new job, I’m a bit more knowledgeable about development practices, one of which is branching: a release branch, a development branch, feature branches, and hot-fix branches.

For our automation practices, doing much more than a main branch and a feature branch is probably overkill. We aren’t writing code, we’re creating automation, and at times we need a much quicker turnaround to get changes applied so we can continue with the task.

But for my personal stuff like my photo library, inventory, and other programs, I can take advantage of what I’m learning and apply it there.

I don’t have everything following this technique but the main programs are gradually migrating in that direction.

For my environment, I have two configurations. An environment that’s similar to a work setup where I have a development, QA, staging, and production pipeline. And my personal environment where I have a local installation, one for my docker work, and one for my remote or production like environment.

For each environment, I have a couple of development like servers. One with website development and one with more utilities and playbooks.

As for the pipeline, each of my programs has a Main or Release branch, a Development branch, and under that, Feature branches. In Jenkins, I have a trigger for the local or development environments that is based on the dev branch. When I finish a feature branch, it’s merged into the dev branch, which then triggers the testing and deployment to my local development testing servers (bldr0cuomtool11 and ndld1cuomtool11).

When I’m ready to create a Release, I merge dev with the main branch and in Jenkins, the Docker and Remote servers are updated for the home environment. For the work like environment, code is deployed to the QA and Staging servers. The final deployment to Production is manually executed by me to simulate a live corporate configuration.
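
As a rough sketch, the flow looks something like this (branch names match what I described above; the feature name is just an example):

# start a feature off of dev
git checkout dev
git pull
git checkout -b feature/photo-tags

# work, commit, then fold it back into dev
git checkout dev
git merge --no-ff feature/photo-tags   # the merge to dev triggers the Jenkins dev pipeline
git push origin dev

# when dev is ready, cut a release
git checkout main
git merge --no-ff dev                  # the merge to main triggers the release deployments
git push origin main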

I like this process. My first stage is testing and, if that passes, a deployment that can be reviewed. Then a release is created and the rest of the pipeline is followed. Pretty interesting.

Posted in CI/CD, Computers, Git, Jenkins

Pork Tacos

This one is an easy and tasty meal and is awesome to the last taco.

This is a crock pot recipe. You’ll assemble the ingredients and put a 3 lb pork shoulder into the crock pot; adjust the amounts for the size of the pork shoulder.

  • 1 teaspoon kosher salt
  • 1 teaspoon black pepper
  • 1 tablespoon ground cumin
  • 2 teaspoons chili powder
  • 1 teaspoon garlic powder
  • 1 teaspoon onion powder
  • 1 teaspoon paprika
  • 1 teaspoon dried oregano
  • 2 teaspoons crushed red pepper

You’ll also need a cup of salsa.

Stir up the ingredients and pour them over the pork shoulder, then cover the shoulder with the salsa. It’s a crock pot, so set it to high for a few hours and then, for overnight, set it on low.

For the ingredients of the taco, note that these are to taste, so you can add cilantro, lime juice, or other taco-type ingredients.

  • 4 cups Cole slaw
  • 4 tablespoons Mayonnaise
  • 3 teaspoons Sriracha Sauce
  • Mexican Mix cheese
  • Soft Tortillas, 8” or so

Mix together the Mayonnaise and Sriracha Sauce then stir it into the Cole slaw. Lightly warm the Tortillas, combine and enjoy!

Posted in Cooking

Carl and The Llamas

In Concert

Practice

For the past 5 years, the group has played together. We’ve had almost weekly practice sessions and have added songs as we got better. We’re up to about 16 songs and still adding more. We even played at a couple of summer BBQs. We had planned on starting to gig in 2020, however with Covid we put it on hold. At times we didn’t even have practice.

When our drummer Jonathan bailed to Florida in 2021, we were disappointed but tried to find another drummer as a replacement. I created a Craigslist ad and received probably half a dozen or so responses but no takers.

I also decided to shift to a Longmont-specific place where we could practice, hoping that something more central would be more attractive to potential drummers. It cost a little money to rent the space but if we could get a drummer, it’d be worth it. I posted a second Craigslist ad and out of the half dozen responses, actually had a drummer decide to audition. She decided she wasn’t interested in our set list and bowed out after playing with us.

I joined the LeftHand Artist Group at the recommendation of Jensen Guitars in Longmont and posted up a request for a drummer but nothing came of it.

However, in the group someone posted that the Bootstrap Brewery in Longmont had an Open Mic Night every Monday night. I brought it up with the band and we decided to give it a try in an effort to attract a drummer.

Jeanne and I stopped in on a Monday and chatted with Dennis, the coordinator for music, and asked several questions, with what we need to bring and how to sign up being the main ones. We bring our amps, pedal boards, and instruments. They’ll provide a PA and microphones. We’ll have 20 minutes to play and signup starts at 5:30pm, sets start at 6pm. We’ll want to be there early in order to get a position as there are only 9 positions available and sometimes they can fill up quickly.

Preparation For The Show

We’ve been playing to YouTube drum tracks for the set list we have, starting in January. I started extracting the tracks from YouTube and creating a numbered set list on my phone to have better control over the tracks. Jen, our singer, presses play on the song and we play to the drum track. Some of them are harder to play to than others due to solo starts by the guitarists. It causes a timing shift and we have to listen and perhaps drop a note or two to get back into sync with the track.

For the show, since we had 20 minutes, I reviewed our set list and picked out songs where the drummer starts either early or at the same time. I ran it by the band to see if there were others we should consider. I initially picked 4 songs. I will note that Killing In The Name is our signature song and our name is partly due to the song. That one had to be played.

  • Killing In The Name
  • Breaking The Law
  • Stacy’s Mom
  • Living Dead Girl

Since the first and last songs are in Drop D tuning and the two middle songs are in E Standard tuning, I created a 10 second applause and retune track and a 5 second applause track to give us time to retune and let the audience applaud.

With the pauses and the length of the songs, it took a little over 12 minutes to play the four songs which gave us an additional 8 minutes for setup and tear down. I added a 5th song, Come Out And Play, and the total time was 17 minutes with the extra 5 second applause track. That gave us 3 minutes to set up and tear down. I think we can do that.

In preparation for the show, I’d created several T-Shirts promoting the band. Specific ones like Singer (Jen), Lead/Rhythm (Me), Rhythm/Lead (Andrew), and Bass (Eric) along with a Roadie/Groupie (Jeanne), Former Drummer (for Jon), and Drummer Needed shirts plus a Fan one specifically for Samantha. I even arranged for a Sound Technician (Morgan) to help us because we didn’t have a drummer. We’d want to make sure we sounded right with the drum tracks not too loud for us and us at a good volume for the venue. I also created a half page flyer to promote the band listing the band, Morgan’s shop, and where we were on the Internet.

The Sunday before the show, we practiced the five songs in addition to the tear down and set up, and came in under 20 minutes by a few seconds. It’d be tight but it could be done. Easy Peasy.

At The Venue

I knew we needed to be there early to make sure we would get a slot. We’d been telling all our friends, family, and even coworkers that we were going to be playing. We didn’t want to get there and not get a slot and potentially disappoint all our guests.

When we arrived at around 4:40pm, we went in and someone was sitting in a chair at the sign up board. So, a line 🙂 We asked if this was the line and she said yes, pull up a chair. She was the mom of a band, ‘Intermission’, and wanted to make sure she got a specific slot (7pm) because of other things the kids were doing. We were looking at 7pm as well as Eric was coming from a bit of a distance and didn’t expect to be there until around 6:30pm. This left us with slot 5, with her taking slot 4. Jeanne sat with her most of the time as I got my gear set up, stand, guitar, pedal board, and amp, with plugs into the pedal board and amp and patch cables in the guitar, ready to jump up, plug in, and play. I did notice a meter on the stage left wall but facing the stage. It would jump around the high 80s, low 90s. I assumed it was a decibel meter.

Andrew showed up a bit later, then after the sets started, Jen and Morgan arrived. Finally, around 6:30pm or so, Eric arrived and we were ready to go.

I spoke to Morgan and asked that we get a nod when it’s time for Stacy’s Mom because the applause might make hearing the start of the track difficult. I also spoke to Jen as she was to do some introduction for the band while we set up. And finally I spoke to Jeanne to make sure her tasks were understood. For the flyers, I spoke to one of the folks behind the bar as I wanted to pass out the flyers on the various tables. The owner (Tommy) agreed. Ultimately I wanted this to go smoothly and efficiently. Professionally.

At 5:30pm I actually took one of our fliers that had been folded up and taped it to slot 5. The lady signed up for slot 4 and the rest of the folks coming for Open Mic Night lined up to take a slot. There were so many that Dennis actually created 3 more slots.

Things were going pretty quickly in general. Intermission started a bit before their slot at 7pm. At around 7pm, Dennis came up and said Intermission was playing 4 songs and that he was going to let them complete it and that we might be starting right at our slot of 7:20pm. Perfectly fine, I said. I also said that we had 5 songs but we were tight and would be done in 20 minutes without a problem.

When the time came, we jumped up and started getting set up. We were in place in a minute or two max and ready to go.

But

We couldn’t hear the drum track. See, the PA is in front of the band so it’s harder to hear. Morgan said the sound was as loud as it could be. I asked a couple of times if the monitor could be turned on but it never seemed to work. Later, when I checked the picture of the mixer I sent to Morgan, the monitor wasn’t even plugged in. Morgan had downloaded our tracks from our website but in order to attach to the sound system via Bluetooth, he borrowed my phone which also had the tracks.

While Morgan and Dennis were trying to work it out, we were standing up there waiting. I started playing I Hate Everything About You just to keep my fingers going and spoke to the crowd about the sound issues.

I started playing I Hate Everything About You and Jen started singing it. Then the rest joined in. All of a sudden, someone from the crowd jumps up and is playing drums right behind me (the house set). I will say that acoustic drums can be very very loud!

We made it through the song and it looked like the sound was working but the drummer kept on playing as we started in on Killing In The Name. He didn’t do too badly but when we got to the end, the drum track we brought was also going and just a few beats behind us!

We kept going with Breaking The Law, then Come Out And Play, and then Stacy’s Mom. Sadly I was nervous enough that I fumbled around far too much. I was pretty disappointed with my playing and at least once, I had to stop because I didn’t know where we were. The drummer kept going with us and I had to tell our tech to kill the drum tracks.

We finished with Stacy’s Mom and Jeanne jumped up and said, we’re done, break it down, so we did and put things back by our table. We actually played 5 songs even though it wasn’t the 5 we had planned on, but we didn’t plan on the sound problems either.

Aftermath

There were apparently several separate conversations going on with regards to our efforts.

Jeanne said that someone commented that we were playing 4 songs. Since the venue said we had 20 minutes, I assumed we would be allowed to play as many as we could get in 20 minutes. Jeanne asked Dennis a couple of times if she should shut us down but he declined saying it was okay. Someone else said they’d pull us down but Jeanne said nope, he’s my husband, and took matters into her own hands, shutting us down. Apparently, and somewhat based on what Dennis said earlier, 3 songs was how many you were supposed to play. Good to know.

In addition, Jeanne was told by Dennis that Killing In The Name wasn’t appropriate. Without context it could be either the subject of the song or that the lyrics weren’t appropriate, “Fuck you, I won’t do what you tell me!” and “Motherfucker!”

Finally, we were too loud. Well, without knowing what the level max is, it’s difficult to adjust and honestly I wasn’t even looking at what I thought the decibel meter was showing. I was more interested in playing and trying to stop screwing up. Again, it’d be good to know what the level should be before the set so Morgan could adjust as necessary.

Jeanne asked if there was another venue he could recommend where our songs might be more acceptable, but Dennis said that other than the first one and the decibel level, the songs were fine. Jeanne told me the bartenders were dancing behind the bar 🙂

I will note that the Brewery has one basic goal. To sell craft beer. I noticed that about half the crowd had left. To me, that means we were costing the Brewery business which was absolutely not my goal and I feel bad about that happening.

Additionally, one of the other musicians came up to me as I was sitting and he said our sitting at the board was a “dick move”. That it put us into a bad light and other musicians wouldn’t be charitable about our playing because of it. I did let him know that there was someone there when we got there and being new to all this, we didn’t know it wasn’t permitted. That Dennis and the staff didn’t tell us we couldn’t do it and when asked, said we were fine.

Jeanne let me know though that the lady that was there before us said she’d tried to sign up for an Open Mic Night in the past and the musicians got into a scrum and were fighting over the marker so she’d decided to sit at the board to ensure she would get a slot.

Perhaps a better solution than a disorderly jump up to sign up and fight over the marker might be a hat. Folks who want to play would put a slip of paper into the hat and at 5:30, Dennis would get up, pull names from the hat, and have folks select a slot. When done, Dennis would just say, “all done, better luck next Monday!” and the sets would start.

There are two groups to be considerate of though. The Brewery as I noted above is there to sell craft beer. Running customers away is bad for business.

The second group are the musicians around Longmont. We need to stick together and help each other out. Being a dick about signing up unnecessarily alienated at least one musician. I did approach him a few minutes later and apologized, explained again about the lady already at the board, and asked that he understand that we were new and didn’t know. Let us make and learn from our mistakes; we’d be better next time. He seemed good with that and I got a fist bump.

I did find and thank the drummer for helping us. I also talked to Dennis and said we’d do better next time, apologizing for the issues.

Jeanne said to ignore the mean things said by the guy (apparently he told Eric that we were “a disgrace to the music industry”) and others said we did quite well.

I think as far as playing goes, we did a good job. But until now we had only ever played through our (or “our”) gear. This is the first time we played through someone else’s gear, gear we didn’t have time to properly set up. Plus playing with a drummer who was “faking it” to help out.

Lessons Learned

Number one. Get details from the owner or music manager. What’s the number of songs? Do we have 20 minutes, or are we to play 3 songs within a 20 minute window? What’s the decibel maximum? It shouldn’t be more than the venue can manage (and probably should let folks talk).

Number two. Understand the other unwritten rules such as forming a line for Open Mic Night. We don’t want to alienate other musicians in the area who might be able to help us get other gigs. We had one guy come up and provide feedback, however it was couched. Does that mean the other 20 folks who were there to play are just quietly annoyed with us and didn’t let us explain? I’d rather get the feedback we did get so we can try to explain than none at all.

Number three. In this case, I’m still a bit of a newbie but I think Morgan could have managed the sound a bit better. Not the problems we had, but either shutting down the drum tracks or chasing off the rogue drummer we had. And understanding the sound requirements, turning us down if we were too loud according to the venue. Morgan did say that our levels were fine so again, I’m a bit of a newbie 🙂

Finally

I will say that according to folks we spoke to, we actually played pretty well. It was the surrounding issues unrelated to the songs (mostly) that were the problem. With more knowledge and experience, we’ll do better next time.

Followup

I spoke to a few local musicians to get some feedback from folks with a bit more experience. One of the guys is a drummer for a tribute band and he’s been gigging for 23 years. In general they appreciated the professionalism we displayed and wouldn’t have a problem playing with us. I will note that Morgan has also said we play well and we wouldn’t have a problem getting a gig in Denver.

In general the comments were basically, you got up on stage and played. That’s more than some folks ever do so congrats there. I’m to remember that mostly, Open Mic Nights are amateur hour (which is us too of course) and to take any comments with a grain of salt. Learn from them but be careful about the criticism.

One guy did say we were his new favorite band, which honestly made me feel pretty good.

Posted in Bootstrap Brewery, Music

Convert From CentOS 8 to CentOS Streams

Overview

This article provides brief instructions on how to convert and upgrade a CentOS 8 system.

Background

In December 2021, Red Hat retired the CentOS 8 AppStream, BaseOS, Extras, and the other CentOS 8 repositories in favor of going to a Stream model. In this model, CentOS becomes part of the path from Fedora to Red Hat Enterprise Linux instead of a redistribution of Red Hat Enterprise Linux. There are alternatives, such as AlmaLinux and Rocky Linux, if we want to continue with the same model.

Conversion Process

It’s a pretty simple process to make the conversion. If the conversion is done after December 2021, you’ll need to modify the Extras repo first; otherwise you can skip that edit and simply run the commands that follow it.

Modify Extras

In the /etc/yum.repos.d directory, edit the CentOS-Linux-Extras.repo file, comment out the mirrorlist entry, uncomment the baseurl entry, and point the URL to one of the mirror sites. In my case, since it’s only a small package that needs to be pulled, I pointed it at mirror.clarkson.edu, but any mirror will do.

cd /etc/yum.repos.d
sed -i "s/enabled=1/enabled=0/g" *
sed -i "s/enabled=0/enabled=1/g" CentOS-Extras.repo
sed -i "s/^mirrorlist/#mirrorlist/g" CentOS-Extras.repo
sed -i "s/^#baseurl/baseurl/g" CentOS-Extras.repo
sed -i "s/mirror.centos.org/mirror.clarkson.edu/g" CentOS-Extras.repo

Install The Stream

Next, install the centos-release-stream rpm.

dnf install centos-release-stream -y

Swap Repositories

Swap from the Linux to the Stream repositories

dnf swap centos-{linux,stream}-repos -y

Sync Distributions

This step will update or downgrade the installed packages as appropriate.

dnf distro-sync -y
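
As a quick sanity check (not part of the official steps), the release file should now report Stream:

cat /etc/centos-release
CentOS Stream release 8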


Posted in Computers

Increase Ingress Routers

A problem I found with my OKD4 cluster is that the HAProxy statistics were claiming 5 of my 7 worker nodes were red, aka down. After some searching, I found HAProxy is reporting via the ingress router pods. Further checking of the cluster showed only two ingress routers were running. You would think this should be a daemonset so the ingress router would be available on every worker. Two seems sufficient, however consider that I have three physical hosts running my OKD4 cluster. If I have 7 workers spread across the three hosts, the two router pods land on one host, and that host fails, then the applications that use the ingress router will need to wait until OKD4 realizes the pods are gone and spins up two more ingress router pods.

At first I figured it was the deployment that needed to be updated. However, updating the deployment replicas from 2 to 7 failed; the number of replicas reverted back to 2.
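
The attempt was something along these lines (the ingress operator just reconciles the deployment right back to its own replica count):

oc -n openshift-ingress scale deployment/router-default --replicas=7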

After some hunting, I found the solution. You actually have to patch the ingresscontroller resource in the ingress operator’s namespace, not the deployment.

oc patch -n openshift-ingress-operator ingresscontroller/default --patch '{"spec":{"replicas": 7}}' --type=merge 

And success. Now there are 7 ingress pods running on my cluster.
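
Something like the following lists them (grepping across namespaces since the routers live in openshift-ingress):

oc get pods -A | grep router-default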

openshift-ingress   router-default-6b8b455c59-56gk5   1/1   Running   0   16d
openshift-ingress   router-default-6b8b455c59-6z678   1/1   Running   0   16d
openshift-ingress   router-default-6b8b455c59-dhrgx   1/1   Running   0   16d
openshift-ingress   router-default-6b8b455c59-kgs5n   1/1   Running   0   16d
openshift-ingress   router-default-6b8b455c59-ngvdx   1/1   Running   2   16d
openshift-ingress   router-default-6b8b455c59-t8zmd   1/1   Running   0   16d
openshift-ingress   router-default-6b8b455c59-wbh2z   1/1   Running   0   16d

References

  • https://access.redhat.com/solutions/5393521 – You need a Red Hat account to access this page.
  • https://docs.openshift.com/container-platform/4.9/networking/ingress-operator.html#nw-ingress-controller-configuration_configuring-ingress – Openshift Documentation
Posted in Computers, OpenShift

Migrating KVM Guests

Overview

This article describes the process of migrating a virtual machine from one physical host to another.

Background

There are two methods by which the virtual machines were built on the current hosts. The old way is to create an LVM slice on the disk and lay a base image over the top of it using dd. The second, more common process is where the image is created and stored as a file on the host.

Guest Shutdown

For any of the non-OpenShift (OCP) systems, you have a couple of methods of shutting down the systems. You can log in to the server and shut it down.

ssh tato0cuomifnag02
sudo su -
shutdown -t 0 now -h 

Or use virsh console from the underlying host to log in and shut it down. (Reminder the _domain and _pxe are assignments created by the new automation process):

virsh console tato0cuomifnag02
login: root
password:
shutdown -t 0 now -h  

Openshift/Kubernetes

An interesting difference between a Kubernetes Control Node and an OCP Control Node is the extra pods used to manage the OCP cluster. The oauth pod, registry pods, console pods, and others for example. This means that while a drain isn’t necessary on a Kubernetes Control Node, for an OCP Control Node you should drain it so any control pod such as oauth will continue to be available to the cluster.

This is a concern though: if a control node fails for whatever reason, the cluster may be unavailable until replacement pods are created. OCP should be aware of the loss of an important pod like oauth and start it up on a different master; I suspect it would occur eventually.

In any case, evict the control and worker node from the cluster before migrating it.

$ oc adm drain bldr0cuomocpwrk02.dev.internal.pri --delete-emptydir-data --ignore-daemonsets --force
node/bldr0cuomocpwrk02.dev.internal.pri evicted
WARNING: deleting Pods not managed by ReplicationController, ReplicaSet, Job, DaemonSet or StatefulSet: openshift-marketplace/redhat-operators-8kqpc; ignoring DaemonSet-managed Pods: openshift-cluster-node-tuning-operator/tuned-w5r84, openshift-dns/dns-default-th2ql, openshift-dns/node-resolver-vbw7d, openshift-image-registry/node-ca-j2nrk, openshift-ingress-canary/ingress-canary-d6l42, openshift-machine-config-operator/machine-config-daemon-z5hzf, openshift-monitoring/node-exporter-rqj52, openshift-multus/multus-additional-cni-plugins-h8vcd, openshift-multus/multus-mqg5z, openshift-multus/network-metrics-daemon-npcjh, openshift-network-diagnostics/network-check-target-lflxb, openshift-sdn/sdn-zgqrt
evicting pod openshift-monitoring/thanos-querier-7c8bb4cdbd-n97pv
evicting pod default/llamas-6-p84z2
evicting pod default/inventory-4-szhvw
evicting pod default/photo-manager-4-cqqbc
evicting pod openshift-marketplace/redhat-operators-8kqpc
evicting pod openshift-monitoring/alertmanager-main-1
evicting pod openshift-monitoring/prometheus-adapter-66ff97555b-x92r2
pod/redhat-operators-8kqpc evicted
pod/inventory-4-szhvw evicted
pod/alertmanager-main-1 evicted
pod/llamas-6-p84z2 evicted
pod/photo-manager-4-cqqbc evicted
pod/thanos-querier-7c8bb4cdbd-n97pv evicted
pod/prometheus-adapter-66ff97555b-x92r2 evicted
node/bldr0cuomocpwrk02.dev.internal.pri evicted
$ oc get nodes
NAME                                 STATUS                     ROLES    AGE   VERSION
bldr0cuomocpctl01.dev.internal.pri   Ready                      master   13d   v1.22.3+e790d7f
bldr0cuomocpctl02.dev.internal.pri   Ready                      master   13d   v1.22.3+e790d7f
bldr0cuomocpctl03.dev.internal.pri   Ready                      master   13d   v1.22.3+e790d7f
bldr0cuomocpwrk01.dev.internal.pri   Ready                      worker   13d   v1.22.3+e790d7f
bldr0cuomocpwrk02.dev.internal.pri   Ready,SchedulingDisabled   worker   13d   v1.22.3+e790d7f
bldr0cuomocpwrk03.dev.internal.pri   Ready                      worker   13d   v1.22.3+e790d7f
bldr0cuomocpwrk04.dev.internal.pri   Ready                      worker   13d   v1.22.3+e790d7f
bldr0cuomocpwrk05.dev.internal.pri   Ready,SchedulingDisabled   worker   13d   v1.22.3+e790d7f

The delete-emptydir-data option is used when a pod is using the emptyDir storage method. Moving a pod using this method deletes any data in that emptyDir location.

The ignore-daemonsets option is used because DaemonSet-managed pods run on every node and can’t be evicted. You’re just saying that, yes, you know there are pods managed by daemonsets and it’s fine if the node is cordoned with them still running.

The force option is used when there are pods that aren’t managed by a controller (like the one in the warning above) and would otherwise block the drain.

Once evicted, you’ll log into each OCP/K8S server that you will be migrating and shut it down.

ssh tato0cuomocpbld01
sudo su -
cd /home/ocp4
ssh -i id_rsa core@tato0cuomocpctl01
sudo su -
shutdown -t 0 now -h 

Migrate LVM Guests

This section details the process of migrating an LVM-built guest.

First identify the guests on the host so you know which ones to migrate; in this case, the physical hosts are being moved to a different data center.

# virsh list --all
 Id    Name                           State
----------------------------------------------------
 2     tato0cuomifnag01               running
 4     tato0cuomifnag02               running

For example, migrating tato0cuomifnag02. You’ll need to know what the device path is in order to get the LVM information.

# ls -la /dev/pool2
total 0
drwxr-xr-x.  2 root root  200 Feb  7 01:40 .
drwxr-xr-x. 23 root root 4180 Feb  7 02:07 ..
lrwxrwxrwx.  1 root root    8 Feb  7 01:40 tato0cuomifnag01 -> ../dm-44
lrwxrwxrwx.  1 root root    8 Feb  7 01:06 tato0cuomifnag02 -> ../dm-45

Now you can run lvdisplay to get the size of the image. The value you want is the Current LE value.

# lvdisplay /dev/pool2/tato0cuomifnag02
  --- Logical volume ---
  LV Path                /dev/pool2/tato0cuomifnag02
  LV Name                tato0cuomifnag02
  VG Name                pool2
  LV UUID                MFBxt1-8yFR-EOd4-TVZD-nQlh-RUIu-GweC8c
  LV Write Access        read/write
  LV Creation host, time tato0cuomifnag02, 2018-01-30 15:02:24 -0600
  LV Status              available
  # open                 1
  LV Size                20.00 GiB
  Current LE             5120
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:46

Create a new logical volume of the same size on the destination server.

lvcreate -l5120 -ntato0cuomifnag02 vg00

Run the following command to migrate the image. Obviously you need to be able to ssh to root on the destination server.

dd if=/dev/pool2/tato0cuomifnag02 | pv | ssh -C root@destination dd of=/dev/vg00/tato0cuomifnag02

The nice thing is that -C compresses the data in transit, and since it runs over ssh the copy is encrypted.

Migrate Images

This section details the process of migrating a file-backed guest image and starting it up on the other host.

When you shut down the guest, libvirt marks the guest as stopped, but you’ll also need to stop the storage pool.

virsh pool-destroy tato0cuomifnag01_pool

Now that both the guest and the storage pool have been stopped, copy the image from the /opt/libvirt_images/tato0cuomifnag01_pool directory to the destination server. Use the /opt/libvirt_images directory as the target as it has sufficient space for larger images such as the katello server.

scp commoninit.iso [yourusername]@nikodemus:/opt/libvirt_images/
scp tato0cuomifnag01_amd64.qcow2 [yourusername]@nikodemus:/opt/libvirt_images/

On the destination server, create the pool directory and move the images into the /opt/libvirt_images/tato0cuomifnag01_pool/ directory. You’ll need to set ownership and permissions as well.

mkdir /opt/libvirt_images/tato0cuomifnag01_pool
cd /opt/libvirt_images
mv commoninit.iso tato0cuomifnag01_pool/
mv tato0cuomifnag01_amd64.qcow2 tato0cuomifnag01_pool/
chown -R root:root tato0cuomifnag01_pool
find tato0cuomifnag01_pool -type f -exec chmod 644 {} \;

Extract Definitions

Once the images have been copied to the destination host, you’ll need to extract the domain definition and, for the guests that are images, the storage pool descriptions.

Extract the guest definition.

virsh dumpxml tato0cuomifnag01_domain > ~/tato0cuomifnag01.xml

For the guests that are images (the new automation process), extract the storage pool definition.

virsh pool-dumpxml tato0cuomifnag01_pool > ~/tato0cuomifnag01_pool.xml

Copy Definitions

Once you have the definitions, copy the xml files to the destination server.

scp tato0cuomifnag01.xml [yourusername]@nikodemus:/var/tmp
scp tato0cuomifnag01_pool.xml [yourusername]@nikodemus:/var/tmp

Import Definitions

Log into the destination server and import the domain definition. An LVM-based guest may require editing of the xml file in case the source LVM slice is different from the destination LVM slice.
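
For example, if the source host used the pool2 volume group and the destination uses vg00 (the names from the LVM example above), something like this updates the disk path before defining the guest:

sed -i 's|/dev/pool2/|/dev/vg00/|g' /var/tmp/[guest].xml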

virsh define /var/tmp/tato0cuomifnag01.xml

For the image based guests, import the storage pool definition as well.

virsh pool-define /var/tmp/tato0cuomifnag01_pool.xml

Activate Guests

For the image based guests, activate the storage pool first. The guest won’t start if the storage pool hasn’t been started. Also configure it to automatically start when the underlying host boots.

virsh pool-start tato0cuomifnag01_pool
virsh pool-autostart tato0cuomifnag01_pool

Then start the guest.

virsh start tato0cuomifnag01_domain

Openshift/Kubernetes

Rejoin the migrated node to the cluster.

$ oc adm uncordon bldr0cuomocpwrk02.dev.internal.pri
node/bldr0cuomocpwrk02.dev.internal.pri uncordoned

Then check the cluster status to see that the migrated node is up and Ready.

$ oc get nodes
NAME                                 STATUS  ROLES    AGE   VERSION
bldr0cuomocpctl01.dev.internal.pri   Ready   master   13d   v1.22.3+e790d7f
bldr0cuomocpctl02.dev.internal.pri   Ready   master   13d   v1.22.3+e790d7f
bldr0cuomocpctl03.dev.internal.pri   Ready   master   13d   v1.22.3+e790d7f
bldr0cuomocpwrk01.dev.internal.pri   Ready   worker   13d   v1.22.3+e790d7f
bldr0cuomocpwrk02.dev.internal.pri   Ready   worker   13d   v1.22.3+e790d7f
bldr0cuomocpwrk03.dev.internal.pri   Ready   worker   13d   v1.22.3+e790d7f
bldr0cuomocpwrk04.dev.internal.pri   Ready   worker   13d   v1.22.3+e790d7f
bldr0cuomocpwrk05.dev.internal.pri   Ready   worker   13d   v1.22.3+e790d7f

Cleanup

Finally remove the xml files.

rm /var/tmp/tato0cuomifnag01.xml
rm /var/tmp/tato0cuomifnag01_pool.xml

Recovery

The Recovery process is very similar. In the event the physical host was replaced, we’ll need to migrate all the guests back over to the replacement host.

In order to determine what guests belong on the replaced host, check the installation repositories. Both the terraform and pxeboot repositories are complete installs on all physical hosts for the site. The directory structure is based on the hostname of the physical host. Simply log in to the current hosts, navigate to the repo’s site/hostname directory for the replaced host, and determine which guests need to be migrated back to the replaced host.

Once that’s determined, follow the above process to migrate the guests back to the replaced host.

Removal

After all the guests have been migrated back to the replaced host, you’ll need to remove the guests from the holding physical hosts.

virsh undefine [guest]
virsh pool-undefine [guest]_pool
rm -rf /opt/libvirt_images/[guest]_pool

For LVM based guests, you’ll need to use the lvremove command.
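
Something along these lines, using the vg00 volume group from the LVM example above:

lvremove /dev/vg00/[guest]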

Troubleshooting

Some information that’s helpful during the work.

If you accidentally pool-destroy (stop) the wrong pool, the guest doesn’t stop working. Remember the command simply marks the storage pool as inactive. It doesn’t actually shut down storage. As long as the guest is running, the pool will remain active to the guest. If you stop the guest and try to start it again and the storage pool is inactive, the guest will not start. To restart the storage pool, run pool-start for the storage pool and it’s active again.

References

  • virt-backup.pl – Alternative Perl script to migrate LVM images.
  • https://docs.openshift.com/container-platform/4.9/nodes/nodes/nodes-nodes-working.html
Posted in Computers, KVM

Computers And Me

Over the past 12 months, I’ve taken a deep dive into automation. I’d been investigating this for some time prior to that, but this was work related. It involved research into using Terraform to automatically build virtual machines and Ansible to configure them. I’ve used Ansible in the past, but this again was a deep dive. Due to the method of building an OpenShift Container Platform (OCP) cluster, I also used TFTP and PXE to automatically build an OCP cluster.

As a result, I built 92 virtual machines, including three OCP clusters, in 2 hours.

For perspective, a relatively recent project where I built 100 virtual machines using a more manual process took 18 months.

To be clear, it took 12 months and a ton of experience in building machines manually to get to the point where I could build 92 machines in 120 minutes. But in that time I built the systems over and over again as I tested methods and broke environments. This also means I can now rebuild a system, several systems, or even a complete site in a very short period of time. Minutes instead of days.

I’ve been building systems for over 40 years now. From local area networks, personal gaming systems, systems for my clients, to various flavors of Unix and Linux, to cloud based systems such as Amazon and Google cloud services. I also have quite a few programming projects from back when I started all the way to present day. It’s great fun and keeps me on my toes.

My current home environment is pretty extensive. I use it as a lab where I can try things, break them, and try again. I’m running both a VMware vCenter cluster and a standalone server running Ubuntu to use KVM. Over 100 TB of storage, a TB of memory, and 144 CPU cores. I have several Kubernetes environments consisting of docker servers, docker repositories, Kubernetes clusters, Elastic Stack clusters, and tools like gitlab and jenkins. I’m currently researching some gitops tools such as ArgoCD and Flux. I also have quite a few underlying infrastructure type servers and development servers. Total of about 150 servers.

All this has helped me explore and gain experience in development practices and the current work I’m doing with automation and working with developers has increased my knowledge and skills. I look forward to continuing this path and exploring new technologies.

Posted in About Carl, Computers

Recognition

I’m a computer geek. I’ve worked as a programmer, local area network installer, Unix Admin, and now a DevOps Engineer. Over the past 40 years, it’s understood that folks that work in the computer industry don’t get a lot of recognition for the work we do.

In many places we can get some sort of peer recognition. A tchotchke like a hat, a t-shirt, or other small company branded toy. I’ve received a few over the years. Some are better than others. I have a nice gym bag and a couple of fleece blankets the cats like to sleep on. Occasionally we even get a nice letter from a customer.

But the larger rewards are reserved for customer facing teams such as sales or customer service. This is understandable. These positions are the face of the company. They’re what customers “see”. Other positions in the company are less visible so less likely, without effort on the manager or employee’s part, to be rewarded for the work they do.

Being paid, being employed is reward enough.

I’ve found that for computer professionals, these company-wide awards go to the folks who spend nights and weekends working on a project that’s very visible to management. And even then there’s a good chance the employee won’t be recognized.

It is expected though and appreciated when it does happen.

At the previous job, rewards were tuned to the recipient. Upper management would touch base with coworkers and even family to see what sorts of things might be a good reward. Some I recall were visits to the Vatican, Olympic level personal trainers for marathons, a new bull, a new pig, partial payment on an RV, and even flight school lessons.

These and others were awesome and inspired. But it made me, as a computer professional, a bit disconnected from the overall company. When the rewards are some cash thing, it can be ignored as it’s the same thing every time. But when it’s personal, it’s a lot more visible and you feel the lack.

Even worse, other things that you might ignore are more in your face. For example during an all hands, all the business units of the company were recognized but Operations isn’t a business unit and was totally ignored. Most of the time again it’s expected and generally not an issue. But in an environment where things are more personal, being ignored is kind of jarring.

So, what’s the point of this? I like the idea of personal presentations of awards. But ignoring the backbone of the company does generate some resentment especially when the only possible recognition is a result of many lost hours to weekends and after hours.

Posted in Computers

Gitlab Runners

Overview

This article provides local configuration details specific to the site. Links to the relevant documentation will also be provided.

Description

The gitlab-runner is a tool that uses the .gitlab-ci.yml file to build, test, and deploy to the target host. Each job is independent, but if one fails, all subsequent jobs do not run. A gitlab-runner is similar to a Jenkins agent. You want to install it on a server different from the main gitlab server so workloads don’t impact access to gitlab itself.

Runner Installation

Installation is pretty easy. After you create the new runner server, you pull and install the runner package on the server, then register it.

Pull the runner package.

# curl -LJO "https://gitlab-runner-downloads.s3.amazonaws.com/latest/rpm/gitlab-runner_amd64.rpm"
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  418M  100  418M    0     0  1583k      0  0:04:30  0:04:30 --:--:-- 1807k

Install the runner.

# rpm -ivh gitlab-runner_amd64.rpm
warning: gitlab-runner_amd64.rpm: Header V4 RSA/SHA512 Signature, key ID 35dfa027: NOKEY
Preparing...                          ################################# [100%]
Updating / installing...
   1:gitlab-runner-14.6.0-1           ################################# [100%]
GitLab Runner: creating gitlab-runner...
Home directory skeleton not used
Runtime platform                                    arch=amd64 os=linux pid=11354 revision=5316d4ac version=14.6.0
gitlab-runner: the service is not installed
Runtime platform                                    arch=amd64 os=linux pid=11363 revision=5316d4ac version=14.6.0
gitlab-ci-multi-runner: the service is not installed
Runtime platform                                    arch=amd64 os=linux pid=11387 revision=5316d4ac version=14.6.0
Runtime platform                                    arch=amd64 os=linux pid=11423 revision=5316d4ac version=14.6.0
INFO: Docker installation not found, skipping clear-docker-cache

Then register the runner (this is internal to my homelab so the token being displayed isn’t an issue).

# gitlab-runner register
Runtime platform                                    arch=amd64 os=linux pid=11468 revision=5316d4ac version=14.6.0
Running in system-mode.

Enter the GitLab instance URL (for example, https://gitlab.com/):
http://lnmt1cuomgitlab.internal.pri/
Enter the registration token:
Li7r2znM5yVedatwy7Uy
Enter a description for the runner:
[lnmt1cuomglrunr1.internal.pri]:
Enter tags for the runner (comma-separated):
local
Registering runner... succeeded                     runner=Li7r2znM
Enter an executor: docker, docker-ssh, virtualbox, docker+machine, docker-ssh+machine, kubernetes, custom, parallels, shell, ssh:
shell
Runner registered successfully. Feel free to start it, but if it's running already the config should be automatically reloaded!
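
The same registration can be done non-interactively, which is handy if the runner box ever gets rebuilt (the values are the same ones entered at the prompts above):

gitlab-runner register --non-interactive \
  --url http://lnmt1cuomgitlab.internal.pri/ \
  --registration-token Li7r2znM5yVedatwy7Uy \
  --description lnmt1cuomglrunr1.internal.pri \
  --tag-list local \
  --executor shell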

Final Configuration

Once it’s registered, you’ll need to create an rsa key pair and copy it to whatever target servers you intend to deploy jobs to. In my example, I have a local server where I can test to make sure things work, and the remote live site. Log in to the two servers once so their host keys get registered. I’m using php in this case, so the runner server also needs to have php installed in order to do my minimal lint test of the php scripts.
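
The key setup is the usual ssh routine, run as the gitlab-runner user since that’s who the shell executor runs jobs as (the account and hosts below are the ones from my .gitlab-ci.yml; adjust for your own targets):

sudo su - gitlab-runner
ssh-keygen -t rsa -b 4096
ssh-copy-id svcacct@ndld1cuomtool11
ssh-copy-id svcacct@remote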

Note that the same applies to any of the artifact servers you pull from. For example, I have artifacts on my two dev servers. That means all the gitlab-runner servers that pull from those two dev servers will need their rsa public keys added to the dev servers.

Create Jobs

You’ll need to create a .gitlab-ci.yml file in your repository that contains the steps required to build the project. In this case, I’m using my small Llamas band website for the example but it could be anything.

Here I define the stages I’ll be using for the deployment.

Each job is independent of the others. Any task you want done in every stage, such as removing the .git directory, will need to be repeated in each one. Using tags, you could point each stage to a different runner.

stages:
  - test
  - deploy-local
  - deploy-remote

test-job:
  tags:
    - test
  stage: test
  script:
    - |
      for i in $(find "${CI_PROJECT_DIR}" -type f -name \*.php -print)
      do
        php -l ${i}
      done

deploy-local-job:
  tags:
    - home
  stage: deploy-local
  script:
    - rm -rf "${CI_PROJECT_DIR}"/.git
    - rm -f "${CI_PROJECT_DIR}"/.gitlab-ci.yml
    - /usr/bin/rsync -av --delete --no-perms --no-owner --no-group --omit-dir-times --rsync-path=/usr/bin/rsync "${CI_PROJECT_DIR}"/ svcacct@ndld1cuomtool11:/var/www/html/llamas/

deploy-remote-job:
  tags:
    - remote
  stage: deploy-remote
  script:
    - rm -rf "${CI_PROJECT_DIR}"/.git
    - rm -f "${CI_PROJECT_DIR}"/.gitlab-ci.yml
    - /usr/bin/rsync -av --delete --no-perms --no-owner --no-group --omit-dir-times --rsync-path=/usr/bin/rsync "${CI_PROJECT_DIR}"/ svcacct@remote:/usr/local/httpd/llamas/

Pipelines

When you check in the .gitlab-ci.yml file, a pipeline starts. In the project, on the left side, click on CI/CD and then Pipelines to see the pipeline progress. Note that there is a CI Lint button where you can validate your .gitlab-ci.yml file.

You can see the failed pipeline, due to incorrect spacing for the test script (I verified in the CI Lint section). After fixing, the pipeline passed.

Clicking on the Passed or Failed button will take you to the Pipeline.

You can see the progress of the pipeline. Each stage can be rerun by clicking on the arrow-circle and you can see how the task worked by clicking on the stage.

This is the test-job stage. Line 2 shows it’s running on the dedicated runner server. It’s a ‘Shell’ executor. It pulls the git repo to the working directory. Then runs the quick php lint test on the three files.

Things to Think About

With each stage being a unique task, we could have a runner that only does testing. It would have all the necessary tools to test projects, such as php in this case. You could also have a dedicated runner that has access to the local QA box but no access to any other server. Same with remote access. You create tags for test-runner, local-qa, and remote-live for example. Then the three stages in the above example would each be routed to the appropriate runner.

References

  • https://docs.gitlab.com/runner/
  • https://docs.gitlab.com/runner/install/linux-manually.html
  • https://docs.gitlab.com/runner/register/index.html#linux
  • https://docs.gitlab.com/ee/ci/yaml/gitlab_ci_yaml.html

Posted in Computers, Git