Managing Dell Fan Speeds

Since the Dell servers are in a rack behind me, I wanted to better manage the fan speeds. Dell defines an Error temperature range, a Warning temperature range, and an Ambient target, which is basically where Dell tries to keep the server temperature. The Error thresholds are 2C on the low end and 47C on the high end, the Warning thresholds are 8C and 42C, and the Ambient target is 25C. Of course, this means the fans speed up and down throughout the day, which can be a little annoying if you’re trying to work 🙂 So I did some hunting online and found out how to manage the speeds manually.

First off, you need to have ipmitool installed; this lets you communicate with the Dell IPMI controller. Next, enable IPMI in the iDRAC: under iDRAC Settings, Network/Security, check the Enable IPMI Over LAN checkbox and click Apply.

ipmitool takes your iDRAC credentials. You use it first to get the current temperature, then to set the fan speed you want using hex values. I have two systems, a Dell R710 and a Dell R410, and the options are slightly different between the two.

# /bin/ipmitool -I lanplus -H [R710 ip address] -U [user] \
   -P [password] sensor reading 'Ambient Temp'
Ambient Temp      | 27

For the R410, you can’t query a single sensor directly on the ipmitool command line; you have to grep it out of the full sensor list. Three entries are returned in my case, but the last one is the important one.

# /bin/ipmitool -I lan -H [R410 ip address] -U [user] \
    -P [password] sensor | grep 'Ambient Temp' | tail -1
Ambient Temp     | 25.000     | degrees C  | ok    | na        | 3.000     | 8.000     | 42.000    | 47.000    | na
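
If you just need the number for scripting, one way (assuming the output format above) is to split on the pipe characters and let awk coerce the field to a number:

# /bin/ipmitool -I lan -H [R410 ip address] -U [user] \
    -P [password] sensor | grep 'Ambient Temp' | tail -1 \
    | awk -F'|' '{print $2+0}'
25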

I’m really only modifying the fan speeds, so the following table will let you make the necessary setting change. I basically don’t want the speeds to get above 50% of max, so I didn’t figure out the rest. Maybe later 🙂

R710 Fan speed settings
   Speed         Setting
13.0k = 100% - 0110:0100 0x64
11.7k =  90% - 0101:1010 0x5A
10.4k =  80% - 0101:0000 0x50
 9.1k =  70% - 0100:0110 0x46
 7.8k =  60% - 0011:1100 0x3C
 6.5k =  50% - 0011:0010 0x32 30C
 5.2k =  40% - 0010:1000 0x28 28C
 3.9k =  30% - 0001:1110 0x1E 25C
 2.6k =  20% - 0001:0100 0x14 22C
 1.3k =  10% - 0000:1010 0x0A 12C
   0k =   0% - 0000:0000 0x00 10C
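
If you want a percentage that isn’t in the table, note that the setting byte is just the percentage written in hex, so printf can compute it for you:

$ printf '0x%02X\n' 40
0x28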

For the R410, the fans run a bit faster but use the same settings and percentages.

R410 fan speeds
   Speed         Setting
18,720 = 100% = 0110:0100 0x64
16,848 =  90% = 0101:1010 0x5A
14,976 =  80% = 0101:0000 0x50
13,104 =  70% = 0100:0110 0x46
11,232 =  60% = 0011:1100 0x3C
 9,360 =  50% = 0011:0010 0x32 30C
 7,488 =  40% = 0010:1000 0x28 28C
 5,616 =  30% = 0001:1110 0x1E 25C
 3,744 =  20% = 0001:0100 0x14 22C
 1,872 =  10% = 0000:1010 0x0A 12C
     0 =   0% = 0000:0000 0x00 10C

I have a script I use to manage the fan speeds throughout a working day, or basically whenever I’m at the desktop practicing stuff, playing a game, or whatever. First it takes manual control of the IPMI fan management on the servers, then it makes the appropriate fan change when the servers reach certain temperatures. At 28C or higher, I kick the fans to 40%. At 24C or lower, I drop the fans to 20%. In between, I keep the fan speeds at 30%. This is the same regardless of which system I’m running the script against.

In addition, between 8pm and 7am, I revert back to automatic management of the fans. This lets the server manage the temperatures mainly because I’m likely not at my desk.

First, get the current temperature. I showed you this above. Next, take over control of the fans.

$ /bin/ipmitool -I lanplus -H [R710 ip address] -U [user] \
    -P [password] raw 0x30 0x30 0x01 0x00

Then set the fan speed based on the temperature as noted above.

/bin/ipmitool -I lanplus -H [R710 ip address] -U [user] \
    -P [password] raw 0x30 0x30 0x02 0xff [speed]

The fan speed is in hex as displayed in the table above: 0x28 for 40%, 0x1E for 30%, and 0x14 for 20%.

When you want to re-enable automatic management of the fans (my 8pm to 7am setting):

/bin/ipmitool -I lanplus -H [R710 ip address] -U [user] \
    -P [password] raw 0x30 0x30 0x01 0x01
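
Putting it all together, here’s a minimal sketch of the kind of script I’m describing, using the R710 commands from above. The bracketed values are placeholders for your own host and credentials, and for the R410 you’d swap in the grep variant of the temperature query:

#!/bin/bash
# Minimal fan-control sketch. Thresholds and times match the post:
# 40% at 28C or higher, 20% at 24C or lower, 30% in between,
# and automatic management between 8pm and 7am.
IPMI="/bin/ipmitool -I lanplus -H [R710 ip address] -U [user] -P [password]"

HOUR=$(date +%H)
if [ "$HOUR" -ge 20 ] || [ "$HOUR" -lt 7 ]; then
    # Overnight: hand control back to the iDRAC.
    $IPMI raw 0x30 0x30 0x01 0x01
    exit 0
fi

# Get the current ambient temperature.
TEMP=$($IPMI sensor reading 'Ambient Temp' | awk '{print $NF}')

# Take manual control of the fans.
$IPMI raw 0x30 0x30 0x01 0x00

# Pick a speed from the table above.
if [ "$TEMP" -ge 28 ]; then
    SPEED=0x28   # 40%
elif [ "$TEMP" -le 24 ]; then
    SPEED=0x14   # 20%
else
    SPEED=0x1E   # 30%
fi

$IPMI raw 0x30 0x30 0x02 0xff $SPEED

Run something like that from cron every few minutes and the fans track the room instead of bouncing around on Dell’s defaults.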

And that’s it. The R410 uses lan instead of lanplus but works the same in all other ways. And of course you can script out whatever works best for your fan speed desires and peace of mind (and sanity 🙂 ).


Adding Projects To Jenkins

A majority of my projects are simple websites or scripts. Nothing too complicated.

My binary files are located on my development server. Jenkins combines the git repo and binary files into a single distribution which is then sync’d up to the target host.

  1. Don’t forget, no spaces in project names.
  2. Create a standard Freestyle project (the first option).
  3. When the configure page comes up, select ‘git’ and enter in the git information.
  4. Select Poll SCM. Use * * * * * to check every minute, or some variation; my first project polls minute by minute, the current one every hour.
  5. Next, add a build step of Execute Shell and add the necessary lines to collect the site and then sync it to the target host (see the sketch after this list).
  6. Save it.
  7. Now on the target host, create a jenkins account and change the ownership of the target directory to ‘jenkins:jenkins’.
  8. Finally, select ‘Build now’ in Jenkins.
  9. On the project page, click the down arrow for the current build and view the console output.
  10. Resolve any errors.
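
For step 5, a minimal Execute Shell sketch might look like the following. The binary path and target hostname are hypothetical placeholders; $WORKSPACE is provided by Jenkins:

# Pull the binary files in next to the checked-out repo, then push
# the combined result to the target host, leaving git metadata behind.
rsync -a /data/binaries/mysite/ "$WORKSPACE/"
rsync -a --delete --exclude='.git' "$WORKSPACE/" jenkins@target-host:/var/www/mysite/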


Kubernetes WorkerNode Sizing

Overview

For the current Kubernetes clusters, I reviewed several industry best practices and went with a resource configuration of 4 CPUs and 16 gigabytes of RAM per worker node. In addition, I keep a spreadsheet that describes all the nodes in each cluster in order to understand the resource requirements and properly extend the cluster or provision larger worker nodes.

This document describes the Pros and Cons of sizing Worker Nodes to help in understanding the decision behind the worker node design I went with.

Note that I’ve expanded my original document, as the author of the article linked at the end of this post made me think through my own decision and confirmed my reasoning. In rewriting my document, I ignored the cloud considerations from the original article since the clusters I’m designing are on-prem and not hosted in the cloud at this time. We may revisit the article at a later date should we migrate to the cloud.

Considerations

Probably the key piece of information you’ll need to guide your decision is the resource requirements of the microservices that will be hosted on the clusters. If a monster “microservice” requires 2 vCPUs, then a 2 vCPU worker node won’t do the trick. Even a 4 vCPU node might be a bit problematic, since other microservices will likely need to run on the workers too, not to mention any replication requirements or autoscaling of the product. These are things you’ll want to be aware of when deciding on a worker node size.

Large Worker Nodes

Let’s consider the Pros and Cons of a large worker node.

Pro: Fewer Servers

Simple enough, there are fewer servers to manage. Fewer accounts, fewer agents, and less overhead overall in managing the environment. There are fewer servers that will need care and feeding.

Pro: Resource Requirements

The control plane and operating system overhead is smaller with fewer worker nodes. A 4 worker node cluster with 10 CPUs per node gives you a 40 CPU cluster, or 40,000 millicpus. If the operating system and associated services reserve 5% of CPU per worker node, a 4 worker node cluster loses about 2,000 millicpus, or 2 CPUs, bringing the cluster to 38 CPUs for the hosted microservices. In addition, the control plane has less work to do. Every worker node is registered with the control plane, so fewer workers means fewer networking requirements and a lighter load on the control plane.
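
One way to see that reservation on a live cluster (standard kubectl, nothing specific to my setup) is to compare each node’s raw capacity against what the scheduler can actually allocate:

$ kubectl get nodes -o custom-columns=NAME:.metadata.name,CPU:.status.capacity.cpu,ALLOCATABLE:.status.allocatable.cpu

A 10 CPU node reserving 5% would show roughly 9500m allocatable.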

Pro: Cost Savings

If you’re running a managed Kubernetes cluster such as rancher.io or OpenShift, you might be more efficient depending on the size of the smaller nodes; the number of CPUs in a cluster determines the benefit. Smaller nodes let you be more flexible but could increase the total CPU requirements and therefore cost more in the long term. On the other hand, putting in a fifth 10 CPU node might be less cost effective, especially if you only need 2 or 4 more CPUs.

Con: Lots of Pods

Kubernetes has a default limit of about 110 pods per node before the node starts having issues. With a few large nodes, the likelihood of too many pods on a worker node increases, which can affect the hosted services. The agents on the worker node also have a lot more work to do: Docker is busier, kubelet is busier, and in general the worker node works harder. Don’t forget the various probes performed by kubelet, such as liveness, readiness, and startup probes.

Con: Replication

If you’re using Horizontal Pod Autoscaling (HPA) or simply configure the deployment with multiple pods for replication purposes, fewer nodes means your effective replication is capped at the number of nodes. If you have a product with 4 replicas and only a 3 node cluster, you effectively have only 3 replicas. There are 4 running pods, but if a node hosting two of the replicas fails, you’ve just lost 50% of your service.
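
One mitigation worth noting here, as a hedged sketch with hypothetical names rather than anything from the linked article: pod anti-affinity can force replicas onto distinct nodes, so two replicas never share a node in the first place. On a 3 node cluster, the fourth replica of such a deployment simply stays Pending instead of doubling up:

kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myservice
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myservice
  template:
    metadata:
      labels:
        app: myservice
    spec:
      affinity:
        podAntiAffinity:
          # Never schedule two myservice pods on the same node.
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchLabels:
                app: myservice
            topologyKey: kubernetes.io/hostname
      containers:
      - name: myservice
        image: registry.example.com/myservice:latest
EOF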

Con: Bigger Impact

A pretty simple note here: with fewer nodes, if a node does fail, more pods are affected and the service impact is potentially greater.

Smaller Worker Nodes

In reading the above Pros and Cons, you can probably figure out the Pros and Cons of smaller worker nodes but let’s list them anyway.

Pro: Smaller Impact

With more smaller nodes, if a node goes away, fewer pods and services are impacted. This can be important when we want to manage nodes such as when patching and upgrading.

Pro: Replication

This assumes you have a high number of pods for a service. More, smaller nodes ensure you can both run multiple pods and use HPA to automatically scale out further if needed. And with smaller nodes, one node failing doesn’t significantly affect a service.

Con: More Servers

Yes, there are more servers to manage. More accounts, more agents, more resource usage. In an environment where Infrastructure as Code is the rule, more nodes shouldn’t have the same impact as if you were manually managing the nodes.

Con: Resource Availability

With more nodes, the control plane overhead increases and the overall usable cluster size might be affected. For example, with my design of 4 CPU worker nodes, a 10 node cluster gives up the same amount to overhead as a 4 node cluster of 10 CPU workers: both are 40 CPU clusters losing about 2 CPUs across the cluster. But as noted, the cluster management overhead increases as more worker nodes are added, and there’s more networking since every node talks to every node.

Con: Pod Limits

Again, this is related to how many resources a product uses. With smaller nodes, there’s a chance of enough resource fragmentation that pods won’t be able to start because the node is too small relative to the microservice requirements. If the microservices are truly “micro”, then smaller nodes could be fine, but if the microservices are moderately sized, nodes that are too small will have wasted resources and services may not start.

Conclusion

As noted initially, knowing the resource needs of your microservices will give you the best guidelines on worker node size. Consider other factors as well, such as the cost per CPU of a node when spread out versus all together. And don’t forget, you can add larger worker nodes to a cluster that turns out to be too small, especially if the nodes are virtual machines: power down a node, increase its CPU and RAM allocations, and bring it back up.

References

https://learnk8s.io/kubernetes-node-size


Guitar Dreams

Probably a common dream for guitarists 🙂

We’re loading out to go to a gig. I ask someone to grab my stuff and we head over.

Small hole in the wall place, narrow and a little dark, large window at the front and I can see across the street a corner shop.

We’re playing but I can’t seem to get any volume. I walk up to a short/wide amp, black face and a little beat up with worn corners, turn down my guitar volume and turn up the amp, I go back and try to turn up the guitar but it’s still muted sounding. We’re playing Bon Jovi’s You Give Love A Bad Name.

I can’t seem to remember how to play and I’m slow, the neck is insanely wide, almost like I’m trying to play the body of the guitar, so I have to reach around to press on the strings. I hear where I’m supposed to be but can’t seem to get there.

The guys are looking at me as if to ask, “What the fuck?”

I realize my pedal board is missing. The guy must have missed grabbing it when he grabbed my guitar. I’m playing clean but getting no volume. The guitar has knobs that add distortion but still no volume.

Okay, the owner is annoyed, the customers are annoyed and don’t want to hear Bon Jovi. Okay, Killing In The Name. I tune my guitar to drop D but without my board, I don’t have the starting flanger sound. I give it my best shot though.

What’s this though, I hit the lead in notes, *flang* *flang* *flang* but it changes to *dah* *dah* *dah-da* *dah-da* *dah* (Bon Jovi). And still a too wide neck, and no volume.

The customers are down to a handful and the guys break out an envelope with song requests, two typewritten columns, about 8 songs in each column, the owner turns up the music player. I’m looking over the guys’ shoulders at the songs, hoping I can catch the chord changes so I can muddle along.

It’s getting dark and the shop I can see across the street has their roll down cage in place.


State Of The Game Room – 2018

During the past year, because of my band taking off, the game room was moved from the central room to the music room and the music room moved to the central room. There’s a lot more room for the guys to play and there’s sufficient room in the game room, formerly music room, to game. Currently I have a Shadowrun group that plays every Sunday. It does seem a touch tight but we’re really sitting around the table so it’s not terrible.

Of course, moving everything was a bit of a pain but I got it done in a week, and the guys helped with the bigger Kallax shelves. Generally when I get new games, they’re stuck every which way on top of shelves or on the games in the shelves. Most recently, in preparation for the game room reorganization, I picked up a few more 1×4, 2×2, and 2×4 shelves to better organize the games. It just goes to show that games will fill every available space 🙂

I also took on the task of ensuring all my games were in my inventory, noting whether I had found the game (there are about 15 or so that I haven’t been able to find since the move) and whether or not I’d played it. I will remind folks who goggle at the list that it just shows that I’ve played the game, not how many times I’ve played it. Seeing a few hundred plays over 50 years doesn’t seem like a lot, but taking into consideration dozens if not hundreds of plays for some games will put some perspective in the data 🙂

Since I updated the inventory, I can’t give you a definitive number of new games for the year. I’ve picked Gloomhaven as a starting point for 2018 as it was released in mid January. Since Gloomhaven, I’ve added 338 items to the inventory.

List of the RPG game system purchases:

* Shadowrun – The bulk will be fill-it-in digital purchases as my gaming group started this past summer and I wanted to have all the digital stuff ready for the group (I tend to play with a laptop and iPad vs a gigantic collection of books; but the books are available if needed).
* Dungeons and Dragons.
* Paranoia – I kickstarted the new Paranoia and hit NobleKnight to fill in my collection. I’d only picked up a few Mongoose books and needed to fill things in.
* Starfinder
* Star Trek – New system that looks interesting. I may spring it on the group at a later date.
* Conan 2d20 – I’m a big fan of Conan anyway so this was a must have
* Star Wars
* Pathfinder
* Genesys

That drops the list down to 83. List of the card game purchases and updates:

* Arkham Horror the card game
* Netrunner the card game
* Clank
* DC Deck Building
* Exploding Kittens
* Joking Hazard
* Munchkin

This brings down the number of board games to 57. Of those, actual new ones (vs hitting the used game store) are:

* Arboretum
* Betrayal Legacy
* Captain Sonar
* Cave vs Cave (Caverna)
* Chicago Express
* Dragon Castle
* Forbidden Sky
* Gloomhaven
* Founders of Gloomhaven
* Sanctum of Twilight (Mansions of Madness)
* Roll for the Galaxy
* The Rise of Fenris (Scythe)
* Shadowrun Zero Day
* Spoils of War
* Colonies (Terraforming Mars)
* Prelude (Terraforming Mars)
* Rails and Sails (Ticket to Ride)
* Triplanetary (Kickstarter)
* Green Horde (Zombicide)

With everything else, we only got to play a few of the above games (bolded). My band had its first gig in August so we were practicing hard 🙂 I’ve also spent more time on my Shadowrun group and especially the program I created and have been frantically updating for Shadowrun 5th Edition.

Some of the Reddit Boardgaming questions answered here:

How long have you been involved in the hobby?

Since I was young, so about 50 years.

What would you change about your collection if you could?

Tough question. I get games because I’ve played them with others or they’re recommended. Since it’s generally just Jeanne and me, more two player games would be optimum. The only real change would be paying more attention to the number of players for the game. The Thing is an awesome game but really needs 6 or so players for it to work.

Which games might be leaving my collection soon?

Leave? You can’t leave me. None actually. I went through a phase back in the mid 90’s where I almost sold off what I had, but the company (since bought by Noble Knight) only offered me pennies on the dollar (and only a few), and I had to ship everything to them, killing even more of the value, so I basically just kept it all while I got more into video games. Now I’m glad I kept them, if only for nostalgia.

What haven’t I played?

Well, most of the games I bought before 2006 I’ve played. I gradually picked up more games between 2006 and 2012 and then really started exploring board games. I also have a pretty large collection of RPG books mainly because I used them for ideas for my AD&Dr1 game I ran for many years. In checking my game report, as of right now:

* Board Games: Inventory: 361, Played: 156, Percentage Played: 43.21
* Card Games: Inventory: 107, Played: 46, Percentage Played: 42.99
* Role Playing Games: Inventory: 276, Played: 31, Percentage Played: 11.23

The RPG stats are a bit misleading as well, since I generally have multiple copies of core rule books for use at the table or for various editions. For instance, I have probably 25 Shadowrun core books from 1st Edition up through 5th Edition. Same with D&D: I have several DM’s Guides from 1st Edition up through 5th Edition. Both include collectible editions (the numbered Shadowrun books, for instance) and even a few books in different languages; I have Spanish and German Shadowrun core books.

In looking at my report, including every game, module, and expansion, I have 3,005 items. This doesn’t include dice though which exceed that by at least another 1,000 🙂

What are your favorite games?

This shifts so much that it’s hard to pin down. Cosmic Encounters, Car Wars, Shadowrun, Ticket to Ride, Splendor, Red Dragon Inn, Elder Sign, Eldritch Horror, Netrunner, Castles of Burgundy, Pandemic Iberia, Bunny Kingdom, DC Deck Building, the list goes on.

Favorite Boardgame of the past year?

I’d say Ticket to Ride Rails and Sails was the one we enjoyed playing the most. We introduced it to several others who have also purchased the game.

Most played boardgame?

Probably something like Cosmic Encounters outside something like Risk or Monopoly 🙂

What is your least favorite game?

That’s a tough question in that most games are somewhat fun and/or interesting. Probably the one that disappointed me most was Fragged. It’s the Doom video game turned into a board game and it really doesn’t translate well. We played it once and decided we’d rather play the video game 🙂

I will say that we did try several of the Legendary card games and really had a hard time playing them. I particularly like Big Trouble in Little China but the Legendary game we played just killed the game itself for us and I haven’t picked up another since then.

Next boardgame purchase?

No real plans on a specific board game. I mainly look forward more to the next Shadowrun book.

On to the pictures! I have 4 5×5 Kallax shelves. Three 2×4 Kallax shelves. Two 2×2 Kallax shelves. And 5 1×4 Kallax shelves. That’s 152 Kallax squares (not all filled though). Pictures are basically 3 or 4 squares wide and two shelves high so there are a lot of pictures.

Full Gallery: Photo Gallery (Note that you can click on the pics here to see the full size pic.)

Come on in!

It is a bit narrow but it’s a game room. As long as tables and such fit, it should be good.

The top row here are some miscellaneous stuff including on the right, my Dad’s old chess set. This is the first shelf on the left as you enter the room.

Next shelf.

This is a side shelf to the second shelf. Just trying to make use of the space.

You can’t really see this shelf from the door due to the angle of the room.

One of the 2×4 + 2×2 shelves.

And the other 2×4 + 2×2 shelf.

This unit is one where I might push the top shelf back a little. Lots of RPG books in this one, so it’s a bit heavier than the others. That’s generally my Netrunner, Shadowrun, Arkham Horror, and Force of Will cards on the left.

And the final bookshelf. The top here also has miscellaneous stuff. Card decks I haven’t put into the above deck boxes, Cards Against Humanity, etc. On the right are a few more card decks including my Xxxenophile deck.

And behind the door, are my boxes for other miscellaneous stuff. Dice, card games, miniatures, Wings of War planes, playmats, etc.

And finally, my gaming table. Currently set up for Shadowrun. I have a second table I’m considering setting up for regular board gaming.

And that’s the list.


Differences between Shadowrun 4th/20th Anniversary Edition and Shadowrun 5th Edition

Just documenting the differences between the two since I ran 4th for quite a few years and am starting in on 5th. I’ll likely expand things as I further gather information but at least the initial parts will be bullet points.

* Lots of price changes in gear but mainly bringing 5th to 3rd Edition levels. Lots of the 4th prices were dropped quite a bit.
* Limits have been added. These put a cap on the total number of successes you can achieve when rolling dice. There are several, but Physical, Mental, and Social are the main ones, and they’re derived from your attributes. When you roll 15 dice for something, the limit is the maximum number of successes you can count regardless of the roll.
* Accuracy has been added to weapons. Accuracy is basically the same as Limits in that it dictates the maximum number of successes you can achieve.
* Spell Damage Resistance is Force – (some number) where in 4th it was (Force / 2) – (some number). The (some number) values are mostly positive, where in 4th some were positive and some were negative.
* Melee weapon Damage is Strength + (some number) vs 4th which was (Strength / 2) + (some number).
* Initiative in 4th was (Intuition + Logic) number of d6 dice. In 5th it’s (Intuition + Logic) as a base number and you roll 1d6 to increase the base.
* Cyberware and Bioware used to add Initiative Passes, giving you extra turns. In 5th they instead add extra d6 rolls to the Initiative dice pool, to a maximum of 5d6. Passes now work similar to 3rd, where the totals stand in place: 22, 19, 16, 13, 10, 9, 7 all act in the first Initiative pass, then 10 is subtracted and folks who still have positive Initiative values (12, 9, 6, 3) go again, then 10 is subtracted again and the rest (2) go one more time.
* Hacking in 4th was attempted at the User, Admin, and Security levels. In 5th you’re creating “Marks”, which give you similar levels of access (3 Marks max); 4 Marks means you’re the owner of the system.
* There are Cyberdecks in addition to Commlinks. Commlinks revert to being equivalent to smartphones with basic functionality and ‘decks have the extra common and hacking programs.
* Cyberdecks have four attributes however they’re reconfigurable on boot of the device and through a program, reconfigurable while active.
* In 4th, you’d combine a program’s rank and the appropriate commlink value to have a Dice Pool in order to complete a program task. In 5th, Programs simply provide bonuses to a separate list of actions. In 4th for example, you’d use the Data Search active skill plus the Browse program to create the dice pool and you’d need n successes to succeed. In 5th the ‘deck has attributes you’d use such as Data Processing and combine it with the Computer active skill to create the Dice Pool. The Browse program still exists but its only function is to cut the time searching the Matrix in half.
* Armor has been combined into a single stat
* There’s a “Grid Overwatch Division” (GOD) that keeps an eye on the Matrix. Using a Deck over a longer period of time can attract their attention which is a bad thing.
* Glitches have a slight change. Where it was “half of your dice pool or more”, it’s now “more than half your dice pool”. So it’s slightly harder to glitch.


Winning White Bean Chili

Allow 1 1/2 hours to prepare unless you’re a super chopper!
Makes about 6 large servings

* 1 large onion, chopped
* 5 cloves garlic, minced
* 2 jalapeno peppers, chopped
* 1 tomatillo pepper, chopped
* 1 1/2 pounds of turkey breast, chopped, or turkey hamburger
* 2 (4 ounce) cans of chopped green chile peppers
* 3 tablespoons ground cumin
* 2 tablespoons ground coriander
* 1 tablespoon dried oregano
* 1 tablespoon dried cilantro
* dash of bay leaves
* 1/4 teaspoon ground cayenne pepper (or to taste)
* 1/4 teaspoon ground white pepper (or to taste)
* 4 cans great northern beans, drained and rinsed
* 2-3 cups of chicken broth (add as needed if the chili is too thick or there isn’t enough fluid)
* shredded Monterey Jack cheese (topper)

Directions for the crock pot

Have the crock pot out. In a medium size frying pan, saute the onion until soft and caramelizing. Add the jalapeno, tomatillo, and garlic and saute a few minutes. Transfer to the crock pot. Add a little oil to the pan, add the turkey, cumin, coriander, white pepper, and cayenne and saute until the turkey turns white. Add the rest of the spices and the canned chile peppers and saute a couple of minutes.

Transfer to the crock pot.

Remove the pan from the burner and take one can of drained and rinsed beans and add it to the pan. Smash up the beans soaking up the oils and spices. Add to the crock pot. Add the rest of the beans to the crock pot and then add the broth until it’s about 1″ below the ingredients. Stir it all around until well mixed.

Cover and heat on low for 8 hours or high for 4 hours.


Kubernetes 1.9.0 Installation

Overview

After discussion between my manager and the SysEng management team, it’s been decided that I’m to take over the Kubernetes infrastructure. The one SysEng who’s still here has put in his time and the belief is that management of Kubernetes should be an OpsEng task. Since I have the most experience due to helping to manage the existing servers plus working with the SysEng, I’m owning the environment. This documentation has been created to provide insights into my installation process and as a starting point when building a new cluster.

Script Environment

Like the 1.2 environment, the current clusters are being built with shell scripts. Also like the 1.2 environment, the scripts are specific to the cluster being built; you have to modify the set_variables.sh script for each cluster.

SysEng Scripts:

  • set_variables.sh – Set the variables used by the following scripts
  • functions.lib – Functions used by the following scripts
  • generate_certs.sh – Generate the ca and sub certs for all systems
  • generate_configs.sh – Generate the configuration files
  • copyfiles.sh – Copy the zipped files to the servers
  • bootstrap_cluster.sh – Dialog system to start etcd and the control plane
  • configure_cluster.sh – Dialog system to let you manage users and RBAC
  • create_users.sh – Create users
  • deploy_addons.sh – Deploy the additional tools
  • kubectl_remote_access.sh – Setting up a config for the users to access the dashboard
  • nuke_env.sh – Remove all parts of Kubernetes

When I received the 1.9 scripts from SysEng, I created new scripts based on them, but cluster agnostic and using a .config file to separate the different clusters. This also let me check them into our revision control infrastructure. As we’re using certificates in two different locations, I’ve split the process into three packages: the cert generation scripts, the cluster build scripts, and the cluster admin scripts.

  • kubecerts – This package contains all the scripts used to build the CA along with all the server and user certificates. These files are initially located on the build server and are part of the kubernetes and kubeadm packages.
  • kubernetes – This package contains all the scripts used to build the cluster. These scripts are located in /var/tmp/kubernetes.
  • kubeadm – This package contains all the scripts used to manage users and namespaces. These files are located on the management server in the /usr/local/admin/kubernetes/[sitename] directory.

Kubernetes The Hard Way

I used Kelsey Hightower’s Kubernetes The Hard Way site to better understand the intricacies of Kubernetes that I hadn’t picked up from managing the existing clusters and from working with SysEng. Kelsey Hightower works for Google and started his instruction with 1.8, so his site was spot on and exactly what I needed to move forward.

Environment

As my workload is expanding, here are the environments. The non-Production sites are all in the local offices, of course. For Production, the remote DR site is the remote DR site 🙂

  • Sandbox site
  • Dev local and remote DR site
  • QA local and remote DR site
  • Integration Lab local and remote DR site
  • Production local and remote DR site.

The Sandbox, Dev, and QA sites are three masters and three worker nodes. Integration Lab and Production sites are three masters and seven worker nodes.

I needed to create a new local Integration Lab site but was able to reuse the old Integration Lab DR site, which had been spec’d for the old 1.2 installation but never used. I rebuilt those systems for the 1.9 installation.

Additional tools installed are as follows.

  • kubedns
  • heapster
  • influxdb
  • grafana
  • dashboard

The Configuration Files

Each site has its own configuration file located in the config directory.

There is a .config file for each of the packages, since the kubernetes.config file might have a few extra settings that the kubecerts.config file doesn’t need.

You’ll use the install script located in the root of each of the packages to copy the site specific configuration file into the root as .config which is then used by all the scripts.

The kubecerts Scripts And Configurations

The kubecerts scripts use the Cloud Flare binaries.

Configurations

  • .config – This file contains the site name, CA information, Load Balancer information, Service IP, and a list of master and worker server names and IP addresses. This file is located in the installation root directory.

Scripts

  • version – contains the Kubernetes version.
  • install – shell script that selects the correct site configuration file from the config directory for the cluster you’re on.
  • bin/buildcerts.sh – shell script that uses the .config data file to create master and worker directories and create certificates.
  • bin/cleanup – script that deletes it all so you can do it again.
  • bin/config – directory that contains the csr files, formatted in json which are used by the buildcerts.sh script to create certificates.
  • bin/config/encryption-config.yaml – which is needed by the kube-apiserver manifest.
  • config – directory that contains all the site config files.

Installation

You’ll run these commands on the build server.

  1. Run the install script to select the site you’ll be building certificates for.
  2. In the bin directory, run the buildcerts.sh shell script.
  3. The encryption key used by the encryption-config.yaml file occasionally ends up with an invalid character in the final string and returns an error. If that happens, run the cleanup script to remove all the certs and start over with step 1.
  4. Using the tar command, create a site specific tar file (named after the apiserver hostname).
  5. Copy the site.tar file into the /opt/static/kubernetes/sitecerts directory (see the sketch below).
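
A sketch of steps 4 and 5, with a hypothetical apiserver hostname (your tar file will be named for your own site):

$ tar cf kube-api.site.example.com.tar master/ worker/
$ cp kube-api.site.example.com.tar /opt/static/kubernetes/sitecerts/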

When the kubernetes script is run, it copies the site.tar file into the kubernetes directory and untars the certificates. Then the entire package is scp’d over to all nodes in the cluster where you then begin the kubernetes installation process.

The kubernetes Scripts and Configurations

The kubernetes static directory contains all the necessary binary files used when building the clusters. Make sure you have the various kubectl binaries; the kube-apiserver, kube-controller-manager, kube-scheduler, and kube-proxy binaries; the cri and cni archives; and the etcd and etcdctl binaries, used to manage etcd. As we’re also using the Cloud Flare binaries, make sure the cfssl and cfssljson binaries are in place for the certificates.

Configurations

This configuration is a bit more complicated but very similar to the kubecerts one. It has the site name, CA information, Load Balancer information, Service IP, a list of master and worker nodes, plus path information for the various kubernetes certs and configurations, and some configuration options. Obviously see the .config base file for the information.

Scripts

  • version – contains the Kubernetes version
  • install – shell script that selects the correct site configuration file from the config directory for the cluster you’re on.
  • bin/buildadmin.sa.sh – used to build the first cluster admin.
  • bin/installcerts.sh – installs all the generated certificates into their appropriate directories.
  • etcd/buildetcd.sh – installs and configures the etcd binary.
  • etcd/config – directory with the etcd.service configuration file.
  • master/buildmaster.sh – shell script that installs and configures a master server.
  • master/config – directory with the configuration files for the core services.
  • tools – various scripts used for managing the cluster.
  • worker/buildrepo.sh – this script pulls the certificate from the Artifactory server so images can be pulled.
  • worker/buildworker.sh – script that installs and configures a worker node.
  • worker/config – various configuration files used when configuring a worker node.

Installation

You’ll be running scripts on every node of the cluster.

  1. Run the install script to select the site you’ll be installing the cluster for.
  2. In the bin directory, run the installcerts.sh shell script. This copies all the certs into the appropriate directories.

Master Servers

  1. In the etcd directory, run the buildetcd.sh shell script. This installs etcd.
  2. In the master directory, run the buildmaster.sh shell script. This installs the kube-apiserver, kube-controller-manager, and kube-scheduler services.

Once done, in the bin directory run the buildadmin.sa.sh shell script. This creates your serviceaccount and a kubeconfig file you use to access the servers from the management server.

Worker Nodes

  1. In the worker directory, run the buildworker.sh shell script.
  2. Then run the buildrepo.sh script. This ensures the certificate from the Artifactory server is installed on the node; otherwise you can’t download images from Artifactory.

The kubeadm Scripts and Configurations

There are two parts to this process. This section describes both, but the Installation steps below complete the installation of the cluster; user creation is covered afterward.

Configurations

The configuration file is pretty minor, containing the cluster name, the certificate details (users need new certs), the Load Balancer, and the Artifactory credentials.

Scripts

  • version – contains the Kubernetes version.
  • install – shell script that selects the correct site configuration file from the config directory for the cluster you’re on.
  • bin/buildadmin.sa.sh – script used to create the cluster admin serviceaccount.
  • bin/builduser.sh – script used to create namespace specific users.
  • bin/config – email text files used to notify users and admins of cluster management.
  • bindings/clusterrolebinding.yaml – the main file used to tie service accounts to the cluster.
  • config – directory with csr files for admins and users.
  • roles/clusterrole.yaml – role used to manage the cluster.
  • tools – directory with tools used to manage the cluster.
  • yaml/buildsystem.sh – shell script used to apply the additional yaml files to install tools.

Installation

  1. Run the install script to select the site you’ll be managing the cluster with.
  2. In the bindings directory, run the kubectl apply -f clusterrolebinding command.
  3. In the roles directory, run the kubectl apply -f clusterrole command.
  4. In the roles directory, run the kubectl apply -f dashboard command.
  5. In the yaml directory, run the buildsystem.sh shell script to install grafana, heapster, influxdb, kube-dns, and the kubernetes-dashboard.

User Creation

The buildadmin.sa.sh and builduser.sh shell scripts are part of the overall kubeadm script package.

Admins

This is the process in the buildadmin.sa.sh shell script; a rough command sketch follows the list.

  1. Create a serviceaccount. Since we don’t have an auth interface, users are created as service accounts.
  2. Create a clusterrolebinding to the cluster-admin clusterrole.
  3. Create a username.kubeconfig file in the /usr/local/admin/kubernetes/users directory.
  4. Generate a password from the password script.
  5. Create a zip file using the password.
  6. Save the file into the /var/www/html/kubernetes directory.
  7. Send the user an email.
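
Boiled down, the heart of that flow is a handful of kubectl commands. This is a hedged reconstruction rather than the script itself; the variable names, the kube-system namespace, the ca.pem path, and the load balancer placeholder are my own assumptions:

# Create the admin service account and bind it to cluster-admin.
kubectl -n kube-system create serviceaccount "$ADMIN"
kubectl create clusterrolebinding "$ADMIN-cluster-admin" \
    --clusterrole=cluster-admin --serviceaccount="kube-system:$ADMIN"

# Pull the service account's token (1.9 auto-creates the secret).
SECRET=$(kubectl -n kube-system get serviceaccount "$ADMIN" \
    -o jsonpath='{.secrets[0].name}')
TOKEN=$(kubectl -n kube-system get secret "$SECRET" \
    -o jsonpath='{.data.token}' | base64 -d)

# Build the kubeconfig the user will receive.
KC="/usr/local/admin/kubernetes/users/$ADMIN.kubeconfig"
kubectl config set-cluster site --server="https://[load balancer]:6443" \
    --certificate-authority=ca.pem --embed-certs=true --kubeconfig="$KC"
kubectl config set-credentials "$ADMIN" --token="$TOKEN" --kubeconfig="$KC"
kubectl config set-context default --cluster=site --user="$ADMIN" \
    --kubeconfig="$KC"
kubectl config use-context default --kubeconfig="$KC"
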
Users

This is the process in the builduser.sh shell script; again, a sketch follows the list.

  1. Create a serviceaccount. Since we don’t have an auth interface, users are created as service accounts.
  2. Create a namespace; users are members of namespaces and only have read-only (view) access to the rest of the cluster.
  3. Create a rolebinding so the user has edit access to the namespace.
  4. Create a username.kubeconfig file in the /usr/local/admin/kubernetes/users directory.
  5. Generate a password from the password script.
  6. Create a zip file using the password.
  7. Copy the file into /var/www/html/kubernetes.
  8. Send the user an email with their password and the location of their encrypted kubeconfig file.
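
The builduser.sh flow differs mainly in scope: a namespace with edit rights instead of cluster-admin. Again a hedged sketch with placeholder names:

# Create the namespace and the user's service account inside it.
kubectl create namespace "$NS"
kubectl -n "$NS" create serviceaccount "$USERNAME"

# Read-only view of the cluster as a whole...
kubectl create clusterrolebinding "$USERNAME-view" \
    --clusterrole=view --serviceaccount="$NS:$USERNAME"

# ...and edit rights inside their own namespace.
kubectl -n "$NS" create rolebinding "$USERNAME-edit" \
    --clusterrole=edit --serviceaccount="$NS:$USERNAME"

Token extraction and kubeconfig generation then follow the same pattern as the admin sketch above.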

Completion

This completes the installation process. When done, you should have a functioning cluster and access to the cluster.


Stuffed Chicken

Ingredients

  • 4 chicken breasts
  • 1 tablespoon olive oil
  • 1 teaspoon paprika
  • 1 teaspoon salt, divided
  • ¼ teaspoon garlic powder
  • ¼ teaspoon onion powder
  • 4 ounces cream cheese, softened
  • ¼ cup grated Parmesan
  • 2 tablespoons mayonnaise
  • 1 ½ cups chopped fresh spinach
  • 1 teaspoon garlic, minced
  • ½ teaspoon red pepper flakes

Instructions

Preheat oven to 375 degrees.

Place the chicken breasts on a cutting board and drizzle with olive oil.

Add the paprika, 1/2 teaspoon salt, garlic powder, and onion powder to a small bowl and stir to combine. Sprinkle evenly over both sides of the chicken.

Use a sharp knife to cut a pocket into the side of each chicken breast. Set chicken aside.

Add cream cheese, Parmesan, mayonnaise, spinach, garlic, red pepper and remaining ½ teaspoon of salt to a small mixing bowl and stir well to combine.

Spoon the spinach mixture into each chicken breast evenly.

Place the chicken breasts in a 9×13 baking dish. Bake, uncovered, for 25 minutes or until chicken is cooked through.


DevOps Interview

Did my interview Friday. Interesting in general. No real OS questions, but questions on Kubernetes and Docker and interest in Helm. Then an actual hands-on test: configure Jenkins to upload a file from a GitHub repo to Amazon Web Services (AWS).

Git is a revision control tool; GitHub and GitLab are management sites for git. I’ve used revision control for 21 years but only started poking at git last year.

Jenkins is a deployment tool. You set up a configuration that pulls code from github/gitlab when something changes and copies it up to a hosting site like a web server. I’ve used Jenkins for a couple of simple web sites I own.

I have an account for AWS but only poked at it a little. I created a Kubernetes site on Google as a learning project but that’s it.

So I spent 90 minutes on Google getting this going, searching for how to upload to AWS from Jenkins, etc. I had a couple of false starts and at least one “sit back and think a minute” moment. They finally called time at 4pm with the task incomplete. I’m pretty sure I would have gotten it within a few more minutes. They had “copy directory blah up”, but the directory was a level down from the workspace directory.

I will say that I learned quite a bit from this test 🙂 Even for my personal sites. I used the test to rework my Jenkins process to exclude .git and use a shell command vs a plugin (less complicated).

Probably a no on the job (nothing heard so far), but I took something away and need to do a little more poking.
