Kubernetes Resource Management


Resource Management has two objectives: ensure sufficient resources are available for all deployed products, and determine when additional resources are required.

This document provides information based on the deployed Kubernetes cluster and is filtered for my specific configurations. For full details on Resource Management, be sure to follow the links in the References section at the end of this document.


There are two Objects created to manage resources in a namespace: the LimitRange Object and the ResourceQuota Object.

LimitRange Object

The LimitRange Object provides default resource values for containers in a namespace that don’t have their resources defined. To define it, you need to estimate the resources used by the containers in the namespace. The command kubectl top pods -n [namespace] will give you the idle value, but you’ll have to determine the maximum value by monitoring usage over time and be ready to adjust as necessary. A tool that monitors resources, such as Grafana, can provide that information.

ResourceQuota Object

The ResourceQuota Object is the total value of the defined resource requirements for the product. The product Architecture Document will define the minimum (Requests) and maximum (Limits) CPU and Memory values, plus the minimum and maximum number of pods in a product. You then create the ResourceQuota Object with those values, apply it to the product namespace, and restart the pods.

Resource Requests

Requests are the minimum values required for the container to start. They consist of two settings, CPU and Memory, which are slices of the cluster resources configured as millicpus (m) for CPU and mebibytes (Mi) or gibibytes (Gi) for Memory. These values reserve the resources so they are not available to other containers. When cluster Resource Requests reach 80%, additional worker nodes are required.

Resource Limits

Limits are the maximum values the container is expected to consume. Like Requests, they consist of two settings, CPU and Memory. Limits do not reserve cluster resources but should be used to determine cluster capacity. Since they don’t reserve cluster resources, the cluster can be overcommitted. When cluster Resource Limits reach 80%, additional worker nodes are recommended.
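For context, this is what Requests and Limits look like inside a container spec. The pod name, namespace, image, and values here are hypothetical examples:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example-app         # hypothetical pod
  namespace: inventory
spec:
  containers:
  - name: app
    image: example/app:1.0  # placeholder image
    resources:
      requests:             # reserved for the container; counts against quota
        cpu: 250m           # millicpus
        memory: 256Mi       # mebibytes
      limits:               # maximum the container may consume
        cpu: 500m
        memory: 512Mi
```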


The ResourceQuota Admission Controller needs to be added to the kube-apiserver manifest to enable Resource Management:

  - --enable-admission-plugins=ResourceQuota

Other Admission Controllers may already be configured. There is no required order, so the new ResourceQuota Admission Controller can appear anywhere in the option.
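For example, on a kubeadm-style installation the option lives in the static pod manifest. The path and the neighboring plugin shown here are assumptions that will vary by cluster:

```yaml
# /etc/kubernetes/manifests/kube-apiserver.yaml (excerpt; path varies by install)
spec:
  containers:
  - command:
    - kube-apiserver
    - --enable-admission-plugins=NodeRestriction,ResourceQuota
    # remaining existing options are unaffected
```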

Available Resources

In order to understand when additional resources are required, you must calculate the cluster’s total resource availability. This consists of the total CPUs and Memory of all worker nodes, less 20% for overhead (operating system, Kubernetes software, docker, and any additional agents). Other factors may increase the overhead on a worker node and will need to be taken into consideration.
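As a rough sketch of that calculation, the 20% overhead figure comes from above, while the node sizes are made-up examples:

```python
# Estimate schedulable cluster capacity: sum all worker node resources,
# then subtract 20% for overhead (OS, Kubernetes software, docker, agents).
# The node sizes below are hypothetical examples.

OVERHEAD = 0.20  # fraction of each node reserved for overhead

workers = [
    {"cpus": 16, "memory_gi": 64},  # worker-1
    {"cpus": 16, "memory_gi": 64},  # worker-2
    {"cpus": 16, "memory_gi": 64},  # worker-3
]

total_cpus = sum(n["cpus"] for n in workers)         # 48 CPUs raw
total_mem_gi = sum(n["memory_gi"] for n in workers)  # 192Gi raw

available_cpus = total_cpus * (1 - OVERHEAD)
available_mem_gi = total_mem_gi * (1 - OVERHEAD)

print(f"Available: {available_cpus:.1f} CPUs, {available_mem_gi:.1f}Gi memory")
```

Comparing the sum of all namespace Requests against these available figures tells you how close you are to the 80% threshold.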

LimitRange Sample Object

As with all Kubernetes definitions, the yaml file is fairly straightforward. You’ll need to determine the Requests and Limits values since they’re not defined in the containers, then apply the object to the cluster. In most cases this applies to a third-party container.

apiVersion: v1
kind: LimitRange
metadata:
  name: vendor-limit-range
  namespace: vendor-system
spec:
  limits:
  - default:
      memory: 512Mi
      cpu: 512m
    defaultRequest:
      memory: 256Mi
      cpu: 128m
    type: Container

ResourceQuota Sample Object

You’ll need to get the requirements from the Architecture Document and calculate the totals. You’ll note a slight difference in the definition between a LimitRange and a ResourceQuota Object.

apiVersion: v1
kind: ResourceQuota
metadata:
  name: inventory-rq
  namespace: inventory
spec:
  hard:
    limits.cpu: "24"
    limits.memory: 24Gi
    requests.cpu: "18"
    requests.memory: 18Gi

Resource Management

Important note: when a ResourceQuota has been configured for a namespace, pods that don’t have resources defined will not start. A recent example was a two-container deployment that defined resources for the main container but not the init container. The main container failed because the init container didn’t start.

  Type             Status  Reason
  ----             ------  ------
  Progressing      True    NewReplicaSetAvailable
  Available        False   MinimumReplicasUnavailable
  ReplicaFailure   True    FailedCreate

When cluster resources have been exhausted, a couple of errors can be noted in the pod description.

Warning  FailedScheduling  78m (x11 over 80m)  default-scheduler  0/4 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 3 Insufficient memory.


Error from server (Forbidden): error when creating "quota-mem-cpu-demo-2.yaml": pods "quota-mem-cpu-info-2" is forbidden: exceeded quota: mem-cpu-demo, requested: requests.memory=700Mi, used: requests.memory=600Mi, limited: requests.memory=1Gi

The events are informing you of a resource issue. Review the resource usage and adjust appropriately or add more resources to the cluster.
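The quota check behind the “exceeded quota” error above boils down to: a new pod is admitted only if used plus requested stays within the hard limit. This is an illustrative simplification, not the actual admission controller code:

```python
# Sketch of the ResourceQuota admission check, using the memory values
# from the error message above (600Mi used, 700Mi requested, 1Gi hard).

def parse_memory(qty: str) -> int:
    """Convert a Kubernetes memory quantity (e.g. '700Mi', '1Gi') to bytes."""
    units = {"Ki": 1024, "Mi": 1024**2, "Gi": 1024**3}
    for suffix, factor in units.items():
        if qty.endswith(suffix):
            return int(qty[: -len(suffix)]) * factor
    return int(qty)  # plain bytes

def fits_quota(used: str, requested: str, hard: str) -> bool:
    """Admit only if used + requested stays within the hard limit."""
    return parse_memory(used) + parse_memory(requested) <= parse_memory(hard)

# 600Mi + 700Mi = 1300Mi, which exceeds 1Gi (1024Mi) -> pod is forbidden.
print(fits_quota("600Mi", "700Mi", "1Gi"))  # False
```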

Finally, adding a ResourceQuota to a namespace does not immediately go into effect; you need to restart the containers. If, after a ResourceQuota Object is applied, all the values in the status.used section are at zero (0), the pods need to be restarted for the ResourceQuota to go into effect.

$ kubectl get resourcequota inventory-rq -n inventory -o yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
  creationTimestamp: "2020-02-11T22:12:30Z"
  name: inventory-rq
  namespace: inventory
  resourceVersion: "17256997"
  selfLink: /api/v1/namespaces/inventory/resourcequotas/inventory-rq
  uid: 1795b194-dc50-41e8-978d-28c7803aec5a
spec:
  hard:
    limits.cpu: "5"
    limits.memory: 4Gi
    requests.cpu: "3"
    requests.memory: 3Gi
status:
  hard:
    limits.cpu: "5"
    limits.memory: 4Gi
    requests.cpu: "3"
    requests.memory: 3Gi
  used:
    limits.cpu: "0"
    limits.memory: "0"
    requests.cpu: "0"
    requests.memory: "0"

After restarting the pods, the amount of resources used is then calculated.

$ kubectl get resourcequota inventory-rq -n inventory -o yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
  creationTimestamp: "2020-02-11T22:12:30Z"
  name: inventory-rq
  namespace: inventory
  resourceVersion: "17258299"
  selfLink: /api/v1/namespaces/inventory/resourcequotas/inventory-rq
  uid: 1795b194-dc50-41e8-978d-28c7803aec5a
spec:
  hard:
    limits.cpu: "5"
    limits.memory: 4Gi
    requests.cpu: "3"
    requests.memory: 3Gi
status:
  hard:
    limits.cpu: "5"
    limits.memory: 4Gi
    requests.cpu: "3"
    requests.memory: 3Gi
  used:
    limits.cpu: 2400m
    limits.memory: 2400Mi
    requests.cpu: 1200m
    requests.memory: 1800Mi
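To interpret the status section, convert the used and hard values to common units and compute utilization. A small sketch using the values above (the helper functions only handle the quantity suffixes shown here):

```python
# Compare status.used against status.hard for the quota above.
# cpu quantities: "2400m" is millicores, a bare "5" is whole cores.
# memory quantities: "2400Mi" is mebibytes, "4Gi" is gibibytes.

def cpu_to_millicores(qty: str) -> int:
    return int(qty[:-1]) if qty.endswith("m") else int(qty) * 1000

def mem_to_mib(qty: str) -> int:
    return int(qty[:-2]) * 1024 if qty.endswith("Gi") else int(qty[:-2])

cpu_pct = 100 * cpu_to_millicores("2400m") / cpu_to_millicores("5")
mem_pct = 100 * mem_to_mib("2400Mi") / mem_to_mib("4Gi")

print(f"limits.cpu used: {cpu_pct:.0f}%")     # 2400m of 5000m -> 48%
print(f"limits.memory used: {mem_pct:.1f}%")  # 2400Mi of 4096Mi -> 58.6%
```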


Posted in Computers, Kubernetes


Well, I figure I should list out my gear as I finally picked up my third R710 and got it running and attached. It’s a moderate setup compared to some I’ve seen here 🙂 I’m less a hardware/network guy and more an OS and now Kubernetes guy.


  • High-Speed WiFi connection to the Internet (the black box on the bottom left).
  • Linksys EA9500 Wifi hub for the house.
  • HP 1910 24 Port Gigabit Ethernet Switch.
  • HP 1910 48 Port Gigabit Ethernet Switch.


  • Nikodemus: Dell R710, 2 Intel 5680’s, 288G Ram, 14 TB Raid 5.
  • Slash: Dell R710, 2 Intel 5660’s, 288G Ram, 14 TB Raid 5.
  • Monkey: Dell R710, 2 Intel 5670’s, 288G Ram, 14 TB Raid 5.
  • Willow: Dell R410, 2 Intel 5649’s, 16G Ram, 4 TB RAID 10.


  • Sun 2540 Fiber Array filled with 24 TB. It’s not powered on and I haven’t configured it yet, other than making sure it works, since I haven’t needed the additional space yet.


  • Two APC Back-UPS [XS]1500 units split between the three servers for uninterrupted power. They last about 20 minutes, sufficient time to run the Ansible playbooks to shut down all the servers before the power goes out.


I bought the VMware package from VMUG so I have license keys for a bunch of stuff. vCenter is limited to 6 CPUs so the third R710 finishes that up. I can get the 6.7 software but haven’t pulled the trigger on that yet. My next exploration is Distributed Switches and Ports (classes this past weekend) and then vSAN and VLANs.

All three servers are booting off an internal 16G USB thumb drive.

  • vSphere 6.5
  • vCenter 6.5

Most of what I’m doing fits into two categories. Personal stuff and a duplication of work stuff in order to improve my skills.

I have about 103 virtual machines as of the last time I checked. Most of my VMs are CentOS or Red Hat since I work in a Red Hat shop, and a few Ubuntu, one Solaris, and a couple of Windows workstations. I am going to add a few others like FreeBSD, Slackware, SUSE, and maybe Mint.


  • pfSense. Firewall plus other internal stuff like DNS and Load Balancing. I have all servers cabled to the Internet and House Wifi so I can move pfSense to any of the three to restore access.
  • Jump Servers. I have three jump servers I use basically for utility type work. My Ansible playbooks are on these servers.
  • Hobby Software Development. This is kind of a dual-purpose thing. I’m basically trying to duplicate how work builds projects by applying the same tools to my home development process. CI/CD: gitlab, jenkins, ansible tower, and artifactory. Development: original server, git server, and Laravel server for a couple of bigger programs.
  • Identity Management server. Centralized user management. All servers are configured.
  • Spacewalk. I don’t want every server downloading packages from the ‘net. I’m on a high-speed wifi setup where I live. So I’m downloading the bulk of packages to the Spacewalk server and upgrading using it as the source.
  • Inventory. This is a local installation of the inventory program I wrote for my work servers. This has my local servers though. Basically it’s the ‘eat your own dog food’ sort of thing. 🙂
  • Plex Servers. With almost 8 TB of space in use, I have two servers split between the R710’s. If I activate the 2540, I may combine them into one, but there’s no good reason at this time. I’ve disabled the software for now; it’s very chatty and was overwhelming the log server. The movie server holds about 3 TB of movies I’ve ripped from my collection, and the television server about 4 TB of television shows.
  • Backups. I have two backup servers. One for local/desktop backups via Samba and one for remote backups of my physical server which is hosted in Florida.
  • Windows XP. I have two pieces of hardware that are perfectly fine but only work on XP so I have an XP workstation so I can keep using the hardware.
  • Windows 7. Just to see if I could really 🙂
  • Grafana. I’ve been graphing out portions of the environment but am still in the learning phase.


  • Work development server. The scripts and code I write at work, backed up to this server and also spread out amongst my servers.
  • Nagios Servers. I have 3. One to monitor my Florida server, One to monitor my personal servers (above), and one to monitor my Work type servers. All three monitor the other two servers so if one bails, I’ll get notifications.
  • Docker Server. Basically learning docker.
  • Kubernetes Servers. I have three Kubernetes clusters for various testing scenarios: three masters and three workers.
  • Elastic Stack clusters. These consist of Kibana, Logstash, and multiple Elasticsearch servers; basically centralized log management. Just like Kubernetes, three clusters for testing.
  • A Hashicorp Vault server for testing to see if it’ll work for what we need at work (secrets management).
  • Salt. One salt master for testing. All servers are connected.
  • Terraform. One for testing.
  • Jira server. Basically trying to get familiar with the software
  • Confluence. Again, trying to get used to it. I normally use a Wiki but work is transferring things over to Confluence.
  • Wiki. This is a duplicate of my work wikis, basically copying all the documentation I’ve written over the past few years.
  • Solaris 2540. This manages my 2540 array.

My wife is a DBA so I have a few database servers up, partly for her and partly for my use.

  • Cassandra
  • PostgreSQL
  • PostgreSQL – This one I stood up for Jira and Confluence
  • Microsoft SQL Server
  • MySQL Master/Master cluster. This is mainly used by my applications but there for my wife to test things on as well.

I will note that I’m 63 and have been mucking about with computers for almost 40 years now. I have several expired and active certifications. 3Com 3Wizard, Sun System and Network Admin, Cisco CCNA and CCNP, Red Hat RHCSA, and most recently a CKA.

Posted in Computers

State of the Game Room – 2019

A reflection of the past 12 months of gaming. This includes board, card, and role playing.

In reviewing the Game Inventory I keep, I picked up some 259 games and expansions this year, not counting dice. The bulk of these are Role Playing items; since RPGs tend to have lots of books, that count comes from only a handful of actual RPGs. For unique titles, Board games will exceed everything else.

Role Playing

I’ve enjoyed RPGs since I was exposed to them in 1977 in an Army Recreation Center. I bought the box set of Dungeons and Dragons not long after and started making dungeons. I’ve run or played in numerous RPGs over the years, with my main games being Advanced Dungeons and Dragons and Shadowrun. For AD&D, I snagged quite a few other RPGs in order to mine them for ideas for my main game. It wasn’t until I got into Shadowrun that I really explored other settings, with Paranoia being the third most run game for me. Over the past year, I’ve worked on Shadowrun 6th as a playtester and other Shadowrun books as a proofreader. I exposed my group to the Conan 2d20 RPG but we really didn’t get far into it. I think mainly the group (and me) just don’t have the time to read and prepare for RPGs any more. Especially new ones. The team has played Shadowrun in the past. Maybe a return to Shadowrun is in order, possibly sticking with the 20th Anniversary Edition as I’m most familiar with that one, but maybe going back to 2nd Edition.

I did pick up several RPG books over the past year. I keep up on Dungeons and Dragons, probably more like a collector than someone who’s going to actually run any D&D games, although I am a fan of the Adventures in Middle-Earth series so perhaps there’s hope. Of course I picked up a few of the Conan 2d20 RPG books since the team was interested. And Genesys, especially Shadow of the Beanstalk as it’s a Cyberpunk type setting.

Last Christmas my girlfriend bought me a box of miniatures for Shadowrun so that was quite cool. I also picked up several Shadowrun books and PDFs. The biggest purchase was getting my Star Wars RPGs updated. Turns out my Friendly Local Gaming Store (FLGS) hadn’t been keeping up on the releases. I stumbled on a posting somewhere and checked out Fantasy Flight Games to see what I was missing. And a close friend worked on the new Wrath and Glory Warhammer RPG so I got in on the kickstarter and I have the Collector’s Edition.

Some of these are new games that include the core rule book. The rest are expansions or items like miniatures or other non-rulebook accessories.

  • Alien – 2
  • Conan 2d20 – 16
  • Dungeons & Dragons – 15
  • Genesys – 6
  • Shadowrun – 23
  • Star Trek – 1
  • Star Wars – 61
  • Traveller – 2
  • Warhammer (Wrath & Glory) – 8

Card Games

We did play a few card games over the past year. Cards Against Humanity seems to pop up now and then plus others like Clank!, Race for the Galaxy, Love Letter, and Gloom in Space. I also kept up on my Arkham Horror Card Game even though we stopped playing that one last year.

For the Munchkin one below, I’d stumbled upon the Girl Genius online comic again and one of the strips referenced a special Girl Genius Munchkin pack. I’m a huge fan of Phil Foglio’s art so I special-ordered it from my FLGS.

  • Arkham Horror – 15
  • Cards Against Humanity – 2
  • Clank! – 1
  • DC Deck Building Game – 1
  • Epic Spell Wars of the Battle Wizards – 1
  • Exploding Kittens – 1
  • Love Letter – 1
  • Munchkin – 1

Board Games

I did pick up quite a few board games this year and even played more than normal. The team seems to have more fun with a quick (or even lengthy) board game than spending time to learn and understand RPG rules.

By far, Zombicide had the most items come in this year. The team was interested in Zombicide, and Jeanne and I even played a game that didn’t include the team. Zombicide is a several-hours-long game that can test your patience. Jeanne did an awesome job in our session, saving her entire team when I was ready to abandon them and head on out.

A friend at work received two copies of the kickstarter Shadowrun Sprawl Ops, a Shadowrun specific board game. He gifted me with the second copy plus a copy of The 7th Continent. The Sprawl Ops rules weren’t the best and I had to do some research on the ‘net and Board Game Geek to get some clarity on the rules. Once we had it right, the team had quite a bit more fun with the game.

Wingspan was one of the more interesting purchases. My FLGS owner (Jamie) had saved a copy for me during all the hoopla over the distribution of the game. It had received a lot of attention because the publisher had underestimated the demand for the game and had to make several print runs. The biggest issue was that FLGSs weren’t getting complete orders while Amazon.com was. The games were selling for the normal price and immediately being turned around for 3 to 4 times the cost over on eBay. I will say, the game was quite fun and we played it several times over the summer.

I’d picked up Clank! a couple or so years back, but the name and the fact that it was a deck building game was a bit of a turn-off in general. Jeanne and I enjoyed the DC Deck Building Game in the past, but we really didn’t much like the Legendary deck building game, so we were 50/50 on getting Clank!. I did pick up Clank! expansions and Clank! in Space. Jeanne and I played it and it turned out to be fun enough that Jeanne insisted on a second play (she’d lost and she’s very competitive). “Clank!” is simply the sound you’re making to alert the bad guy (Dragon) in the dungeon while you’re hunting through the caverns looking for artifacts. Clank! in Space is a similar game except that you’re on a spaceship stealing artifacts. I snagged Clank! Legacy this past week. There are several Legacy type games where, as you play, you destroy cards, add stickers, and generally modify the board game as you complete missions. The games are ultimately playable; however, the 10-game series does make each person’s game somewhat unique. I look forward to running the team through the game. It might be a bit shorter than Zombicide, although that’s still on the list to be played.

Jeanne and I got married back in June and we had a board game themed wedding reception. I bought copies of Love Letter (of course), Ticket to Ride, and a new one for us, Sagrada. In this case, the box design was a bit of a turn-off. I’d seen it in the FLGS and Jamie recommended it, so Jeanne and I snagged a copy so we knew how to play before the wedding. It’s not a bad game, kind of a Sudoku type game. You have a grid and roll dice to fill it in based on the underlying selected card, rules as defined by a couple of drawn cards, and general rules. It’s certainly a thinking game with less interaction with the rest of the players. We also had a copy of Cards Against Humanity.

Other games we’ve played over the past year: Bunny Kingdom, Gizmos, The Doom That Came To Atlantic City, Concept, Horrified, Isle of Skye, Photosynthesis, and Trains.

I’m not going to list all the board games, just the number of new and expansions.

  • Number of New Board Games: 36
  • Number of Expansions: 62


I need to get the games in order again and probably pick up another Kallax bookshelf.

Looking from the door to the window.

And looking from the window to the door.
Posted in Gaming

Shepherd’s Pie

Preparing the Potatoes

  • 1 1/2 lb. potatoes, peeled
  • Kosher salt
  • 4 tbsp. melted butter
  • 1/4 c. milk
  • 1/4 c. sour cream
  • Freshly ground black pepper

Preparing the beef filling

  • 1 tbsp. extra-virgin olive oil
  • 1 large onion, chopped
  • 2 carrots, peeled and chopped
  • 2 cloves garlic, minced
  • 1 tsp. fresh thyme
  • 1 1/2 lb. ground beef
  • 1 c. frozen peas
  • 1 c. frozen corn
  • 2 tbsp. all-purpose flour
  • 2/3 c. low-sodium chicken broth
  • 1 tbsp. freshly chopped parsley, for garnish


Preheat oven to 400 degrees.

Make mashed potatoes. In a large pot, cover potatoes with water and add a generous pinch of salt. Bring to a boil and cook until totally soft, about 16 to 18 minutes. Drain and return to the pot.

Use a potato masher to mash potatoes until they’re smooth. Add melted butter, milk, and sour cream. Mash together until fully incorporated, then season with salt and pepper. Set aside.

Make beef filling. In a large, ovenproof skillet over medium heat, heat the olive oil. Add onion, carrots, garlic, and thyme and cook until fragrant and softened, about 5 minutes. Add ground beef and cook until no longer pink, about 5 more minutes. Drain the fat.

Stir in the frozen peas and corn and cook until warmed through, about 3 minutes. Season with salt and pepper.

Sprinkle meat with flour and stir to evenly distribute. Cook 1 minute more and add the chicken broth. Bring to a simmer and let the mixture thicken slightly, about 5 minutes more.

Top the beef mixture with an even layer of mashed potatoes and bake in the oven until there is little moisture and the mashed potatoes are golden, about 20 minutes. A quick broil at the end will make the potatoes a bit crispier.

Posted in Cooking

My Tech Certifications – A History

I don’t as a rule chase technical certifications. As a technical person who’s been mucking about with computers since around 1981, and as someone who has been on the hiring side of the desk, certifications are similar to some college degrees. They might get you in the door, but you still have to pass the practical exam with the technical staff in order to get hired.

Don’t get me wrong, the certification at least gets you past the recruiter/HR rep. Probably. At least where I am, the recruiter has a list of questions plus you have to get past my manager’s review before it even gets into my hands for a yes/no vote.

I have several certifications over the years and some have been challenging. I basically have a goal for going after the certification and generally it’s to validate my existing skills and maybe pick up a couple of extra bits that are outside my current work environment.

Back in the 80’s, I was installing and managing Novell and 3Com Local Area Networks (LANs). At one medium-sized company, I was the first full-time LAN Manager. In order to get access to the inner support network, I took quite a few 3Com classes and eventually went for the final certification. The certification would give me access to CompuServe and the desired support network.

I did pass of course, and being a gamer, I enjoyed the heck out of the certification title.

Certification 1: 3Com 3Wizard

I’ve taken quite a few various training courses over the years. IBM’s OS/2 class down in North Carolina. Novell training (remember the ‘Paper CNE’ 🙂 ), and even MS-DOS 5 classes. About this time (early 90’s), I’d been on Usenet for 4 or 5 years. I’d written a Usenet news reader (Metal) and was very familiar with RFCs and how the Usenet server specifically worked. I had stumbled on Linux when Linus released it but I didn’t actually install a Linux server on an old 386 I had until Slackware came out with a crapload of diskettes. I had an internet connection at home on PSINet.

Basically I was poking at Linux.

In the mid 90’s, I was ready to change jobs. I had been moved from a single department to the overall organization (NASA HQ) and what I was going to be working on was going to be reduced from everything for the department to file and print and user management. I was approached by the Unix team and manager. “Hey, how about you shift to the Unix team?” It honestly took me a week to consider it but I eventually accepted. I was given ownership of the Usenet server 🙂 and the Essential Sysadmin book and over 30 days, I devoured the book and even sent in a correction to the author (credit in the next edition 🙂 ). After 2 years of digging in, documenting, and researching plus attending a couple of Solaris classes, I went for the Sun certification. This was really just so I could validate my skills. I didn’t need the certs for anything as there wasn’t a deeper support network you gained access to when you got it.

Certification 2: Sun Certified Solaris Administrator

Certification 3: Sun Certified Network Administrator

A few years later the subcontractor I was working for lost the Unix position. They were a programming shop (databases) and couldn’t justify the position. I was interested in learning more about networking and wanted to take a couple of classes. The new subcontractor offered me a chance at a boot camp for Cisco. I accepted and for several weeks, I attended the boot camp. I wasn’t working on any Cisco gear so basically concentrated on networking concepts more than anything else. I barely even took any notes 🙂  But I also figured that since the company was paying for the class ($10,000), I should at least get the certifications. The CCNA certification was a single test on all the basics of Cisco devices and networking. The CCNP certification was multiple tests, each one focusing on each category vs an overall test like the CCNA one was. The farther away from the class I got, the harder it was to pass the tests. CCNA was quick and easy. I passed the next couple with one test. The next took a couple of tests. The last took 3 tests. But I did pass and get my certifications.

Certification 4: Cisco Certified Network Associate

Certification 5: Cisco Certified Network Professional

I did actually manage firewalls after I got the certification, but I really am a systems admin and the command line and concepts were outside my wheelhouse. I tried to take the refresher certification but they’d gone to hands on testing vs multiple choice and since I wasn’t actually managing Cisco gear, I failed.

I’d been running Red Hat Linux on my home firewall for a while, but I switched to Mandrake for a bit, then Mandriva, then Ubuntu. I also set up a remote server in Florida running OpenBSD, so I was still poking at operating systems and still a system admin sort of person. At my current job, I was hired because of my enterprise knowledge, working with Solaris, AIX, Irix, and various Linux distros. Since Sun was purchased by Oracle and then abandoned, I’ve been moving more into Red Hat management, getting deeper and deeper into it. We’re also using HP-UX and had a few Tru64 servers in addition to a FreeBSD server and Red Hat servers.

I’d taken several Red Hat training courses (cluster management, performance tuning, etc.) and eventually decided to go for my certifications. It seems like I’ve been getting a cert or two every 10 years 🙂  3Wizard in the 80’s. Sun in the 90’s. And Cisco in the 00’s. So I signed up for the Red Hat Certified System Administrator test and the Red Hat Certified Engineer test. It took two tries to get the RHCSA certificate. The first part of the test is to break into the test server; it took me 30 minutes the first time to remember how to do that. The RHCE test was a bit different. You had to create servers vs just use them as in the RHCSA test. Shoot, if I need to create a server, I don’t need to memorize how to do it. I document the process after research. Anyway, after two tries at the RHCE test, I dropped trying.

Certification 6: Red Hat Certified System Administrator

With Red Hat 8 out, I’ll give it a year and for the 20’s try for the RHCSA and RHCE again.

Here’s an odd thing though. These are all Operating System certifications. I’m a Systems Admin. I manage servers. I enjoy managing servers. I’ve considered studying for and getting certifications for MySQL, for example, since I do a lot of coding for one of my applications (and several smaller ones) and would like to expand my knowledge of databases. I’m sure I’m doing some things the hard way 🙂  Work actually gave me (for free!) two Dell R710 servers as they were being replaced. The first one I set up to replace my Ubuntu gateway, so it was a full install of Linux and a firewall. Basically a replacement. All my code was on it, my backups, web sites, etc. But the second server showed up and the guys on the team talked me into installing VMware’s vSphere software to turn the server into a VMware host able to run multiple virtual servers. And I stepped up and signed up to the VMware Users Group (VMUG) because I could get a discount on vCenter, which lets me turn the two R710’s into a VMware cluster.

In addition, I took over control of the Kubernetes clusters at work. The Product Engineers had thrown it over the wall at Operations and it had sat idle. After I took it over, I updated the scripts I’d initially created to deploy the original clusters and started building new clusters. I’ve been digging deeper and deeper into Kubernetes in order to be able to support it. On the Product Engineering side, they’re building containers and pods to be deployed into the Kubernetes environments, so they’re familiar with Kubernetes with regards to deployments and some rudimentary management, but they’re not building or supporting the clusters. I am. I’m it. My boss recently asked me, “who’s our support contract with for Kubernetes?” and my answer was, “me, just me.”

So I decided to try and take the Kubernetes exams. This is the first non-operating-system exam and certification I’ve attempted. (I considered it for MySQL and others, but never actually moved forward with them.) For Kubernetes, since I’m it, I figured I should dig in deeper and get smarter. I took the CKA exam and failed it, but I realized that they were looking for application development knowledge as well, which as an Operations admin I’m not involved in. So I took the Application Developer course, took the CKA exam again last week, and passed it. And since I was taking the AppDev course, I figured I’d take the AppDev test too, but I failed that as well. The first time. I expect I’ll be able to pass it the second time I try (I have a year for the free retake).

Certification 7: Certified Kubernetes Administrator

Over the past few days, I’ve been touting the CKA cert. I even have a printed copy of the cert at my desk. It’s the first one I’ve taken that’s not Operating System specific.

Certification 8: Certified Kubernetes Application Developer

A Year Later: I started receiving a few emails from the Linux Foundation: your second test opportunity is about to expire. So I sucked it up and spent a month studying for the CKAD. I’d done a lot more in the past year and felt I was better prepared to take the test. I retook the Linux Academy course and even picked up a separate book just for some extra, different guidance. The book did clarify one thing for me that I hadn’t quite grokked: Labels. I mean, I know what a label is, but I wasn’t fully clear on the functionality. Since there’s no container or pod identity, labels are how you associate things with a task. I got it because I’d been using tags to group products together in order to run ansible playbooks against them. The containers don’t have a set IP address, don’t have a DNS name, they just have a label, ‘app: llamas’. So any container with the ‘app: llamas’ label has specific rights. Anyway, I took the test and passed it, so one more certification.

Certification 9: AWS Certified Cloud Practitioner

The AWS CCP exam is a core, entry-level exam. I started taking the Linux Academy course and it was basically a matter of matching up the Amazon terminology with how things work in Operations. Once I had them matched, I was able to take the test less than a week later and pass it. I’ve started studying for the AWS Certified SysOps Administrator Associate exam and will follow it up with the AWS Certified DevOps Engineer Professional and then the Security track. In the meantime, though, I’m taking the OpenShift Administration classes. So who knows what the next certification will be?


Posted in About Carl, Computers | Leave a comment

Beef and Bean Taco Skillet

  • 1 lb beef
  • 1 1.4oz packet taco seasoning
  • 1 16oz can pinto beans
  • 1 10.75oz can condensed tomato soup
  • 1/2 cup salsa
  • 1/4 cup water
  • 1/2 cup shredded cheese
  • 6 6” flour tortillas

Cook beef in a 10-inch skillet over medium-high heat until well browned, breaking up any clumps of beef. Drain the fat. Stir in taco seasoning. Add beans, soup, salsa, and water. Reduce to low heat and simmer for 10 minutes, stirring occasionally. Top with cheese. Serve with flour tortillas.

Posted in Cooking | Leave a comment

Recruiters Are Used Car Salesmen

For the past couple of years I’ve had my resume out and have occasionally interviewed. The interviews are actually particularly helpful as it gives me a view into what companies are looking for so I can focus on what skills I might need in the event of a lay off.

Of course, the recruiters swarm like paparazzi. Virtually none of them have actually read my resume or profile; they just see my name on LinkedIn, Dice, Indeed, Monster, or some other site, do a keyword search (sort of), and start spamming me with positions. These positions have little real relevance to my skill set.

They see ‘DevOps’ in my resume and spam me with every available DevOps position from Huntington Beach to the Jersey shore and every possible opportunity between 3 months to hire to 12 months and done.

Hey, I have a 3 month position in New Jersey for $20.00 an hour and a possibility to hire. Call me and we can talk!

Folks, I have a limit of about an hour for one-way commuting. This means I’m self-limited to these three locations. Telling me about a position that’s 90 minutes away one way, or located downtown, isn’t something that’s going to make me jump from my current job.

And skills: Windows Administration required. I haven’t done any Windows administration since the early ’90s. It’s not on my resume or profile anywhere.

Don’t worry about the Windows requirement. You can pick it up.

You don’t seem to understand. I’m not interested in learning how to manage Windows servers. I am interested in learning PowerShell, though; that looks interesting.

And Amazon Web Services. No AWS on my resume anywhere. Yes, I want to learn some cloud services but my only options are learning at home or getting into a position where I can learn AWS.

Recently I had some 15 different recruiters contact me about the same position. It was within commuting distance, but it was a short-term-only position with no option to be hired at the end of the contract. My profile does say “Full Time Only”. It must have had a pretty good hiring bonus.

Many of these recruiter queries are obviously form letters. I get the same exact job posting with a slightly-off-font insertion of the recruiter’s name and my name.

I’ve had a couple of interviews that were arranged by recruiters. One gentleman last year actually met me for lunch so we could chat. The interview he arranged looked good but fell through. He said he’d contact me again about new positions, and I’ve not heard from him since. Another continues to send me positions; when I reply back with my location preference, he says he just sends each position to everyone on his list in the hopes something will stick.

I do reply to many of these and get an occasional response thanking me for my reply so there’s someone there.

So yes, recruiters are Used Car Salesmen. Just trying to get you into a position regardless of fitness or need so they can get paid.

Posted in Uncategorized | 1 Comment

Award Winning Guacamole

  • 3 Haas avocados, halved, seeded and peeled
  • 1 lime, juiced
  • 1/2 teaspoon kosher salt
  • 1/2 teaspoon ground cumin
  • 1/2 teaspoon cayenne
  • 1/2 medium onion, diced
  • 1/2 jalapeno pepper, seeded and minced
  • 2 Roma tomatoes, seeded and diced
  • 1 tablespoon chopped cilantro
  • 1 clove garlic, minced

In a large bowl, place the scooped avocado pulp and lime juice and toss to coat. Drain, and reserve the lime juice, after all of the avocados have been coated. Add the salt, cumin, and cayenne and mash with a potato masher. Then fold in the onions, tomatoes, cilantro, and garlic. Add 1 tablespoon of the reserved lime juice. Let sit at room temperature for 1 hour and then serve.

Posted in Uncategorized | Leave a comment

Taco Soup

Recently I wanted to find something I could make with hamburger that wasn’t hamburgers or spaghetti. After some hunting online, Taco Soup sounded pretty tasty, actually. A couple of sessions later and I had what I thought was a pretty good recipe, and it’s simple to double for larger groups (when the band is over, for example 🙂 ).

  • 2 pounds of hamburger
  • 1 15oz can of sweet corn, drained
  • 1 15oz can of pinto beans, drained
  • 1 15oz can of fire roasted diced tomatoes
  • 1 medium onion
  • 1 package of Old El Paso Original taco seasoning mix
  • 1 heaping 1/4 teaspoon of cayenne pepper, to taste (medium hot; adjust as required)
  • 1 1/2 cups of water

In a medium pot, combine the corn, beans, tomatoes, taco seasoning, cayenne pepper, and water and cook on medium heat.

Chop up the onion. In a frying pan, add oil (I generally use olive oil) and cook the chopped onion for about 5 minutes, until it starts to turn translucent. Then add the hamburger and cook until done.

By then, the pot of ingredients should be slowly boiling. Add the hamburger and onion to the pot, cover and cook for 15 to 20 minutes.

Posted in Cooking | Leave a comment

Current Home Servers

This has come up several times and I can’t always remember all the servers I have set up for one reason or another. Many times the subject at hand has me listing the relevant servers plus a few others I remember. But I’d like to have, at least, a snapshot in time of what I have running in case it comes up again. I can just point to my blog post 🙂

The physical environment consists of:

  • Dell R710
      • 192 Gigs of RAM
      • 2 8 Core Xeon X5550 2.67 GHz CPUs
      • 2 143 Gig 10,000 RPM Drives set up as a RAID 1 (mirror)
      • 4 3 TB 7,200 RPM Drives set up as a RAID 5
      • 4 Onboard Ethernet Ports
      • 1 4 Port 1 Gigabit PCI Card
      • 1 10 Gigabit PCI Card
      • 2 2 Port Fiber HBA PCI Cards
  • Dell R710
      • 288 Gigs of RAM
      • 2 6 Core Xeon X5660 2.8 GHz CPUs
      • 2 143 Gig 10,000 RPM Drives set up as a RAID 1 (mirror)
      • 4 3 TB 7,200 RPM Drives set up as a RAID 5
      • 4 Onboard Ethernet Ports
      • 1 4 Port 1 Gigabit PCI Card
      • 1 10 Gigabit PCI Card
      • 2 2 Port Fiber HBA PCI Cards
  • Sun 2540 Drive Array
      • 12 3 TB Drives

I also have a couple of UPSs with enough power to keep the servers up for 20 minutes or so while I get things shut down, along with a Gigabit business switch. Since we’re on high-speed WiFi for our Internet, this is all internal networking and not intended for external use.

I’ve installed VMware vSphere on the two R710s and, through VMUG (the VMware User Group), I purchased a vCenter license to tie them both together.

I’ve since created some 45 servers (and destroyed a lot more over time) as I install servers to test various bits I’m interested in.

First off, my development environment. This consists of my source code and CI/CD stack.

  • Home Dev – Server using rcs and my scripts to manage code. This is my personal code. This also hosts the development review of my personal web sites. Testing changes and such.
  • Work Dev – Server using rcs and my scripts to manage code. This is a duplicate of my environment at work. I use the same scripts with just a different configuration file for the environment. Like Home Dev, this hosts the development review of my work web sites.
  • Git – Server using git for my code management. I’m gradually converting my code on both Home Dev and Work Dev to using git to manage code.
  • Gitlab – Part of the CI/CD stack, this is the destination for my git projects.
  • Artifactory – The artifacts server. This holds distribution packages, docker images, and general binaries for my web sites.
  • Jenkins – The orchestration tool. When changes occur on the Gitlab site, Jenkins pushes the changes up to my Production server hosted in Miami.
  • Photos – This is my old source code and picture site. Much of this has been migrated to my Home Dev server and is on the way to my Git server and CI/CD pipeline.

Next up are the database servers used for various tasks.

  • Cassandra – Used by Jeanne to learn the database. Several of the database servers are learning tools for either Jeanne or myself.
  • MySQL Cluster (2 Servers) – Used by me to learn and document creating a cluster and to start using it for my Docker and Kubernetes sessions.
  • Postgresql – Jeanne’s learning server.
  • Postgresql – Server used by both Jira and Confluence.
  • MS-SQL – Jeanne’s Microsoft learning server.

Monitoring Servers come up next.

  • Nagios Servers (3) – Used to monitor the three environments. The first monitors my remote Miami server. The second monitors site one. And the third monitors site two.

And the Docker and Kubernetes environment

  • Docker Server – Used to learn how to create containers using Docker.
  • Control Plane – The main Kubernetes server that manages the environment and Workers
  • Workers (3) – The workers that run Docker and the Kubernetes utilities.
  • ELK – The external logging server for Kubernetes and Docker. Since Docker containers are ephemeral, I wanted to have an external logging source to keep track of containers that might be experiencing problems.

Next Automation servers.

  • Ansible Tower – Site 1 Ansible server that also hosts Tower.
  • Ansible – Site 2 Ansible server. Used to automatically update the servers.
  • Salt – Configuration Management tool used to keep the server configurations consistent.
  • Terraform – Server for automatic builds of VMs.

Some utility or tool servers. Used to manage the environment.

  • Sun 2540 – VM used to manage the 2540 Drive Array
  • Jumpstart – Jumping off point to manage servers.
  • Tool – Site 1 Tool server. Scripts and such for the first site.
  • Tool – Site 2 Tool server. Scripts, etc…

More general administration servers.

  • Identity Management – A central login server.
  • Syslog – The main syslog server for the environments.
  • Spacewalk – The RPM Repository for the servers. Instead of each of the 45 servers going out to pull down updates, updates are pulled here and the servers pull from the Spacewalk server.
  • Jira – Agile server for managing workflow.
  • Confluence – Wiki like server. Tied into Jira.
  • Mail – Internal Mail server. Mainly as a destination for the other servers. Keeps me from sending email out into the ether.
  • Inventory – Server that keeps track of all the servers. Configuration management essentially.
  • pfSense – Firewall and gateway to the Internet

And finally the personal servers.

  • Plex Movie Server – Hosts about 3 TB of movies I’ve ripped from my collection.
  • Plex Television Server – Hosts about 3 TB of television shows I’ve ripped from my collection.
  • Backups – Backs up the remote Miami server.
  • Samba – Back up server for all the workstations here at Home.
  • Windows XP – A workstation used to be able to continue to use my HP scanner and Sony Handycam, both of which only work with XP.
  • Windows 7 – No real reason other than I can.
  • Status – My Status Management software. Not really used right now.
Posted in Computers | Leave a comment