Shepherd’s Pie

Preparing the Potatoes

  • 1 1/2 lb. potatoes, peeled
  • Kosher salt
  • 4 tbsp. melted butter
  • 1/4 c. milk
  • 1/4 c. sour cream
  • Freshly ground black pepper

Preparing the Beef Filling

  • 1 tbsp. extra-virgin olive oil
  • 1 large onion, chopped
  • 2 carrots, peeled and chopped
  • 2 cloves garlic, minced
  • 1 tsp. fresh thyme
  • 1 1/2 lb. ground beef
  • 1 c. frozen peas
  • 1 c. frozen corn
  • 2 tbsp. all-purpose flour
  • 2/3 c. low-sodium chicken broth
  • 1 tbsp. freshly chopped parsley, for garnish

Directions

Preheat oven to 400 degrees.

Make mashed potatoes. In a large pot, cover potatoes with water and add a generous pinch of salt. Bring to a boil and cook until totally soft, about 16 to 18 minutes. Drain and return to the pot.

Use a potato masher to mash potatoes until they’re smooth. Add melted butter, milk, and sour cream. Mash together until fully incorporated, then season with salt and pepper. Set aside.

Make beef filling. In a large, ovenproof skillet over medium heat, heat the olive oil. Add onion, carrots, garlic, and thyme and cook until fragrant and softened, about 5 minutes. Add ground beef and cook until no longer pink, about 5 more minutes. Drain the fat.

Stir in the frozen peas and corn and cook until warmed through, about 3 minutes. Season with salt and pepper.

Sprinkle meat with flour and stir to evenly distribute. Cook 1 minute more, then add the chicken broth. Bring to a simmer and let the mixture thicken slightly, about 5 minutes more.

Top the beef mixture with an even layer of mashed potatoes and bake until there is little moisture left and the mashed potatoes are golden, about 20 minutes. A quick broil at the end will make the potatoes a bit crispier.

Posted in Cooking | 1 Comment

My Tech Certifications – A History

I don’t as a rule chase technical certifications. As a technical person who’s been mucking about with computers since around 1981, and as someone who has been on the hiring side of the desk, I see certifications as similar to some college degrees. They might get you in the door, but you still have to pass the practical exam with the technical staff in order to get hired.

Don’t get me wrong, the certification at least gets you past the recruiter/HR rep. Probably. At least where I am, the recruiter has a list of questions, plus you have to get past my manager’s review before your resume even gets into my hands for a yes/no vote.

I have picked up several certifications over the years and some have been challenging. I basically have a goal when going after a certification, and generally it’s to validate my existing skills and maybe pick up a couple of extra bits that are outside my current work environment.

Back in the 80’s, I was installing and managing Novell and 3Com Local Area Networks (LANs). At one medium-sized company, I was the first full-time LAN Manager. In order to get access to the inner support network, I took quite a few 3Com classes and eventually went for the final certification, which would give me access to the desired support network on CompuServe.

I did pass of course, and being a gamer, I enjoyed the heck out of the certification title.

Certification 1: 3Com 3Wizard

I’ve taken quite a few training courses over the years: IBM’s OS/2 class down in North Carolina, Novell training (remember the ‘Paper CNE’ 🙂 ), and even MS-DOS 5 classes. About this time (early 90’s), I’d been on Usenet for 4 or 5 years. I’d written a Usenet news reader (Metal) and was very familiar with RFCs and how the Usenet server specifically worked. I had stumbled on Linux when Linus released it, but I didn’t actually install a Linux server on an old 386 I had until Slackware came out with a crapload of diskettes. I had an internet connection at home on PSINet.

Basically I was poking at Linux.

In the mid 90’s, I was ready to change jobs. I had been moved from a single department to the overall organization (NASA HQ), and my work was going to be reduced from everything for the department to just file, print, and user management. I was approached by the Unix team and its manager: “Hey, how about you shift to the Unix team?” It honestly took me a week to consider it, but I eventually accepted. I was given ownership of the Usenet server 🙂 and the Essential Sysadmin book, and over 30 days I devoured the book and even sent in a correction to the author (credit in the next edition 🙂 ). After 2 years of digging in, documenting, and researching, plus attending a couple of Solaris classes, I went for the Sun certification. This was really just so I could validate my skills; I didn’t need the certs for anything, as there wasn’t a deeper support network you gained access to when you got them.

Certification 2: Sun Certified Solaris Administrator

Certification 3: Sun Certified Network Administrator

A few years later the subcontractor I was working for lost the Unix position. They were a programming shop (databases) and couldn’t justify the position. I was interested in learning more about networking and wanted to take a couple of classes. The new subcontractor offered me a chance at a boot camp for Cisco. I accepted and attended the boot camp for several weeks. I wasn’t working on any Cisco gear, so I basically concentrated on networking concepts more than anything else. I barely even took any notes 🙂  But I also figured that since the company was paying for the class ($10,000), I should at least get the certifications. The CCNA certification was a single test on all the basics of Cisco devices and networking. The CCNP certification was multiple tests, each one focusing on a single category versus the overall test the CCNA was. The farther away from the class I got, the harder it was to pass the tests. The CCNA was quick and easy. I passed the next couple of exams on the first attempt, the next took a couple of attempts, and the last took three. But I did pass and get my certifications.

Certification 4: Cisco Certified Network Associate

Certification 5: Cisco Certified Network Professional

I did actually manage firewalls after I got the certification, but I really am a systems admin, and the Cisco command line and concepts were outside my wheelhouse. I tried to take the refresher certification, but they’d gone to hands-on testing versus multiple choice, and since I wasn’t actually managing Cisco gear, I failed.

I’d been running Red Hat Linux on my home firewall for a while, but I switched to Mandrake for a bit, then Mandriva, then Ubuntu. I also set up a remote server in Florida running OpenBSD, so I was still poking at operating systems and still a systems admin sort of person. At my current job, I was hired because of my enterprise knowledge, working with Solaris, AIX, Irix, and various Linux distros. Since Sun was purchased by Oracle and then abandoned, I’ve been moving more into Red Hat management, getting deeper and deeper into it. We’re also using HP-UX and had a few Tru64 servers, in addition to a FreeBSD server and the Red Hat servers.

I’d taken several Red Hat training courses (cluster management, performance tuning, etc.) and eventually decided to go for my certifications. It seems like I’ve been getting a cert or two every 10 years 🙂  3Wizard in the 80’s, Sun in the 90’s, and Cisco in the 00’s. So I signed up for the Red Hat Certified System Administrator test and the Red Hat Certified Engineer test.

It took two tries to get the RHCSA certificate. The first part of the test is to break into the test server; it took me 30 minutes the first time to remember how to do that. The RHCE test was a bit different. You had to create servers versus just use them as in the RHCSA test. Shoot, if I need to create a server, I don’t need to memorize how to do it; I document the process after research. Anyway, after two tries at the RHCE test, I stopped trying.

Certification 6: Red Hat Certified System Administrator

With Red Hat 8 out, I’ll give it a year and then, for the 20’s, try for the RHCSA and RHCE again.

Here’s an odd thing though. These are all Operating System certifications. I’m a Systems Admin. I manage servers. I enjoy managing servers. I’ve considered studying for and getting certifications for MySQL, for example, since I do a lot of coding for one of my applications (and several smaller ones) and would like to expand my knowledge of databases. I’m sure I’m doing some things the hard way 🙂

Work actually gave me (for free!) two Dell R710 servers as they were being replaced. The first one I set up to replace my Ubuntu gateway, so it was a full install of Linux and a firewall, basically a replacement. All my code was on it, my backups, web sites, etc. But the second server showed up and the guys on the team talked me into installing VMware’s vSphere software to turn the server into a VMware host able to run multiple virtual servers. I stepped up and joined the VMware Users Group (VMUG) because I could get a discount on vCenter, which lets me turn the two R710s into a VMware cluster.

In addition, I took over control of the Kubernetes clusters at work. The Product Engineers had thrown it over the wall at Operations and it had sat idle. After I took it over, I updated the scripts I’d initially created to deploy the original clusters and started building new clusters. I’ve been digging deeper and deeper into Kubernetes in order to be able to support it. On the Product Engineering side, they’re building containers and pods to be deployed into the Kubernetes environments, so they’re familiar with Kubernetes with regards to deployments and some rudimentary management, but they’re not building or supporting the clusters. I am. I’m it. My boss recently asked me, “Who’s our support contract with for Kubernetes?” and my answer was, “Me, just me.”

So I decided to try to take the Kubernetes exams. This is the first non-operating-system exam and certification I’ve attempted; note that I considered it for MySQL and others, but never actually moved forward with them. For Kubernetes, since I’m it, I figured I should dig in deeper and get smarter. I took the exam and failed it. But I realized that they were also looking for application development knowledge, which, as an Operations admin, I’m not involved in. So I took the Application Developer course, took the exam again last week, and passed it. And since I was taking the AppDev course anyway, I figured I’d take the AppDev test too, but I failed that as well. The first time. I expect I’ll be able to pass it the second time I try (I have a year for the free retake).

Certification 7: Certified Kubernetes Administrator

Over the past few days, I’ve been touting the CKA cert. I even have a printed copy of the cert at my desk. It’s the first one I’ve taken that’s not Operating System specific.

Certification 8: Certified Kubernetes Application Developer

A Year Later: I started receiving a few emails from the Linux Foundation: “Your second test opportunity is about to expire.” So I sucked it up and spent a month studying for the CKAD. I’d done a lot more in the past year and felt I was better prepared to take the test. I retook the Linux Academy course and even picked up a separate book just for some extra, different guidance. The book did clarify one thing for me that I hadn’t quite grokked: Labels. I mean, I know what a label is, but I wasn’t fully clear on the functionality of it. Since there’s no fixed container or pod identity, there’s otherwise no way to associate things with a task. It clicked because I’d been using tags to group products together in order to run Ansible playbooks against them. The containers don’t have a set IP address, don’t have a DNS name; they just have a label, ‘app: llamas’. So any container with the ‘app: llamas’ label has specific rights. Anyway, I took the test and passed it, so one more certification.

Certification 9: AWS Certified Cloud Practitioner

The AWS CCP exam is a core or entry-level exam. I started taking the Linux Academy course, and it was basically a matter of matching up the Amazon terminology with how things work in Operations. Once I had them matched, I was able to take the test less than a week later and pass it. I’ve started studying for the AWS Certified SysOps Associate exam and will follow it up with the AWS Certified DevOps Professional and then the Security track. In the meantime, though, I’m taking the OpenShift Administration classes. So who knows what the next certification will be?

Carl – 3Wizard, SCSA/SCNA, CCNA/CCNP, RHCSA, CKA, CKAD, AWS CCP

Posted in About Carl, Computers | Leave a comment

Beef and Bean Taco Skillet

  • 1 lb ground beef
  • 1 1.4oz packet taco seasoning
  • 1 16oz can pinto beans
  • 1 10.75oz can condensed tomato soup
  • 1/2 cup salsa
  • 1/4 cup water
  • 1/2 cup shredded cheese
  • 6 6” flour tortillas

Cook beef in a 10-inch skillet over medium-high heat until well browned; break up any clumps of beef. Drain fat. Stir in taco seasoning. Add beans, soup, salsa, and water. Reduce to low heat and simmer for 10 minutes, stirring occasionally. Top with cheese. Serve with flour tortillas.

Posted in Cooking | Leave a comment

Recruiters Are Used Car Salesmen

For the past couple of years I’ve had my resume out and have occasionally interviewed. The interviews are actually particularly helpful, as they give me a view into what companies are looking for so I can focus on what skills I might need in the event of a layoff.

Of course the recruiters swarm like paparazzi. Virtually none of them have actually read my resume or profile, they just see my name on Linkedin, Dice, Indeed, Monster, or some other site, do a keyword search (sort of), and start spamming me with positions. These positions have little real relevance to my skill set.

They see ‘DevOps’ in my resume and spam me with every available DevOps position from Huntington Beach to the Jersey shore and every possible opportunity between 3 months to hire to 12 months and done.

Hey, I have a 3 month position in New Jersey for $20.00 an hour and a possibility to hire. Call me and we can talk!

Folks, I have a limit of about an hour for one-way commuting. This means I’m self-limited to these three locations. Telling me about a position that’s 90 minutes away one way, or located downtown, isn’t going to make me jump from my current job.

And skills. Windows Administration required. I haven’t done any Windows administration since the early 90’s. It’s not on my resume or profile anywhere.

Don’t worry about the Windows requirement. You can pick it up.

You don’t seem to understand. I’m not interested in learning how to manage Windows servers. I am interested in learning PowerShell, though; it looks interesting.

And Amazon Web Services. No AWS on my resume anywhere. Yes, I want to learn some cloud services but my only options are learning at home or getting into a position where I can learn AWS.

Recently I had some 15 different recruiters contact me for the same position within commuting distance, but it was a short-term-only position with no option to be hired at the end of the contract. My profile does say “Full Time Only”. It must have had a pretty good hiring bonus.

Many of these recruiter queries are obviously form letters. I get the exact same job posting with the recruiter’s name and my name inserted in a slightly off font.

I’ve had a couple of interviews that were arranged by recruiters. For one last year, the gentleman actually met me for lunch so we could chat. The interview he arranged looked good but fell through. He said he’d contact me again for new positions, and I’ve not heard from him since. Another continues to send me positions. When I reply back with my location preference, he says that he just sends out the position to everyone on his list in the hopes something will stick.

I do reply to many of these and get an occasional response thanking me for my reply so there’s someone there.

So yes, recruiters are Used Car Salesmen. Just trying to get you into a position regardless of fitness or need so they can get paid.

Posted in Uncategorized | 1 Comment

Award Winning Guacamole

  • 3 Hass avocados, halved, seeded and peeled
  • 1 lime, juiced
  • 1/2 teaspoon kosher salt
  • 1/2 teaspoon ground cumin
  • 1/2 teaspoon cayenne
  • 1/2 medium onion, diced
  • 1/2 jalapeno pepper, seeded and minced
  • 2 Roma tomatoes, seeded and diced
  • 1 tablespoon chopped cilantro
  • 1 clove garlic, minced

In a large bowl, place the scooped avocado pulp and lime juice and toss to coat. Drain, and reserve the lime juice, after all of the avocados have been coated. Using a potato masher, add the salt, cumin, and cayenne and mash. Then fold in the onions, tomatoes, cilantro, and garlic. Add 1 tablespoon of the reserved lime juice. Let sit at room temperature for 1 hour and then serve.

Posted in Uncategorized | Leave a comment

Taco Soup

Recently I wanted to find something I could make with hamburger that wasn’t hamburgers or spaghetti. After some hunting online, Taco Soup sounded pretty tasty actually. A couple of sessions later and I had what I thought was a pretty good recipe, and it’s simple to double for larger groups (when the band is over, for example 🙂 ).

  • 2 pounds of hamburger
  • 1 15oz can of sweet corn, drained
  • 1 15 oz can of pinto beans, drained
  • 1 15 oz can of fire roasted diced tomatoes
  • 1 medium onion
  • 1 package of Old El Paso Original taco seasoning mix
  • 1 heaping 1/4 teaspoon of cayenne pepper to taste (medium hot; adjust as required)
  • 1 1/2 cup of water

In a medium pot, combine the corn, beans, tomatoes, taco seasoning, cayenne pepper, and water and cook on medium heat.

Chop up the onion. In a frying pan, add oil (I use olive oil generally) and cook the chopped onion for about 5 minutes, until it starts to turn translucent. Then add the hamburger and cook until done.

By then, the pot of ingredients should be slowly boiling. Add the hamburger and onion to the pot, cover and cook for 15 to 20 minutes.

Posted in Cooking | Leave a comment

Current Home Servers

This has come up several times and I can’t always remember all the servers I have set up for one reason or another. Many times it’s just because the subject I’m on has me listing the relevant servers plus a few I remember. But I’d like to have, at least, a snapshot in time of what I have running in case it comes up again. I can just point to my blog post 🙂

The physical environment consists of:

  • Dell R710
      • 192 Gigs of RAM
      • 2 x 8 Core Xeon X5550 2.67 GHz CPUs
      • 2 x 143 Gig 10,000 RPM Drives set up as a RAID 1 (mirror)
      • 4 x 3 TB 7,200 RPM Drives set up as a RAID 5
      • 4 Onboard Ethernet Ports
      • 1 x 4 Port 1 Gigabit PCI Card
      • 1 x 10 Gigabit PCI Card
      • 2 x 2 Port Fiber HBA PCI Cards
  • Dell R710
      • 288 Gigs of RAM
      • 2 x 6 Core Xeon X5660 2.8 GHz CPUs
      • 2 x 143 Gig 10,000 RPM Drives set up as a RAID 1 (mirror)
      • 4 x 3 TB 7,200 RPM Drives set up as a RAID 5
      • 4 Onboard Ethernet Ports
      • 1 x 4 Port 1 Gigabit PCI Card
      • 1 x 10 Gigabit PCI Card
      • 2 x 2 Port Fiber HBA PCI Cards
  • Sun 2540 Drive Array
      • 12 x 3 TB Drives

I also have a couple of UPSs, which have enough power to keep the servers up for 20 minutes or so while I get things shut down, along with a Gigabit business switch. Since we’re on High Speed WiFi for our Internet, this is all internal networking and not intended for external use.

I’ve installed VMware vSphere on the two R710s and, through VMUG (the VMware Users Group), I purchased a vCenter license to tie them both together.

I’ve since created some 45 servers (and destroyed a lot more over time) as I install servers to test various bits I’m interested in.

First off, my development environment. This consists of my source code and CI/CD stack.

  • Home Dev – Server using rcs and my scripts to manage code. This is my personal code. This also hosts the development review of my personal web sites. Testing changes and such.
  • Work Dev – Server using rcs and my scripts to manage code. This is a duplicate of my environment at work. I use the same scripts with just a different configuration file for the environment. Like Home Dev, this hosts the development review of my work web sites.
  • Git – Server using git for my code management. I’m gradually converting my code on both Home Dev and Work Dev to using git to manage code.
  • Gitlab – Part of the CI/CD stack, this is the destination for my git projects.
  • Artifactory – The artifacts server. This holds distribution packages, docker images, and general binaries for my web sites.
  • Jenkins – The orchestration tool. When changes occur on the Gitlab site, Jenkins pushes the changes up to my Production server hosted in Miami.
  • Photos – This is my old source code and picture site. Much of this has been migrated to my Home Dev server and is on the way to my Git server and CI/CD pipeline.

Next up are the database servers used for various tasks.

  • Cassandra – Used by Jeanne to learn the database. Several of the database servers are learning tools for either Jeanne or myself.
  • MySQL Cluster (2 Servers) – Used by me to learn and document creating a cluster and to start using it for my Docker and Kubernetes sessions.
  • Postgresql – Jeanne’s learning server.
  • Postgresql – Server used by both Jira and Confluence.
  • MS-SQL – Jeanne’s Microsoft learning server.

Monitoring Servers come up next.

  • Nagios Servers (3) – Used to monitor the three environments. The first monitors my remote Miami server. The second monitors site one. And the third monitors site two.

And the Docker and Kubernetes environment

  • Docker Server – Used to learn how to create containers using Docker.
  • Control Plane – The main Kubernetes server that manages the environment and Workers
  • Workers (3) – The workers that run Docker and the Kubernetes utilities.
  • ELK – The external logging server for Kubernetes and Docker. Since Docker containers are ephemeral, I wanted to have an external logging source to keep track of containers that might be experiencing problems.

Next Automation servers.

  • Ansible Tower – Site 1 Ansible server that also hosts Tower.
  • Ansible – Site 2 Ansible server. Used to automatically update the servers.
  • Salt – Configuration Management tool used to keep the server configurations consistent.
  • Terraform – Server for automatic builds of VMs.

Some utility or tool servers. Used to manage the environment.

  • Sun 2540 – VM used to manage the 2540 Drive Array
  • Jumpstart – Jumping off point to manage servers.
  • Tool – Site 1 Tool server. Scripts and such for the first site.
  • Tool – Site 2 Tool server. Scripts, etc…

More general administration servers.

  • Identity Management – A central login server.
  • Syslog – The main syslog server for the environments.
  • Spacewalk – The RPM Repository for the servers. Instead of each of the 45 servers going out to pull down updates, updates are pulled here and the servers pull from the Spacewalk server.
  • Jira – Agile server for managing workflow.
  • Confluence – Wiki like server. Tied into Jira.
  • Mail – Internal Mail server. Mainly as a destination for the other servers. Keeps me from sending email out into the ether.
  • Inventory – Server that keeps track of all the servers. Configuration management essentially.
  • pfSense – Firewall and gateway to the Internet

And finally the personal servers.

  • Plex Movie Server – Hosts about 3 TB of movies I’ve ripped from my collection.
  • Plex Television Server – Hosts about 3 TB of television shows I’ve ripped from my collection.
  • Backups – Backs up the remote Miami server.
  • Samba – Back up server for all the workstations here at Home.
  • Windows XP – A workstation used to be able to continue to use my HP scanner and Sony Handycam, both of which only work with XP.
  • Windows 7 – No real reason other than because I can.
  • Status – My Status Management software. Not really used right now.

Posted in Computers | Leave a comment

Managing Dell Fan Speeds

Since the Dell servers are in a rack behind me, I wanted to better manage the fan speeds. Dell defines an Error temperature, a Warning temperature, and the Ambient temperature, which is basically where Dell tries to keep the server. Errors are 2C on the low end and 47C on the high end, Warnings are 8C and 42C, and Ambient is 25C. Of course, this means the fans change speed up and down throughout the day, which can be a little annoying if you’re trying to work 🙂 So I did some hunting online and found out how I can manually manage the speeds.

First off, you need to have ipmitool installed. This lets you communicate with the Dell IPMI controller. Next up you need to enable IPMI in the Dell DRAC. It’s in the iDRAC under iDRAC Settings, Network/Security. Check the checkbox to Enable IPMI Over LAN and Apply.

It takes your iDRAC credentials; you then use ipmitool to first get the current temperature and then, using hex values, set the fan speed you desire. I have two systems, a Dell R710 and a Dell R410, and the options are slightly different between the two.

# /bin/ipmitool -I lanplus -H [R710 ip address] -U [user] \
   -P [password] sensor reading 'Ambient Temp'
Ambient Temp      | 27

For the R410, you can’t query a single sensor on the ipmitool command line; you have to grep it out of the full sensor list. Plus, three entries are returned in my case, but the last one is the important one.

# /bin/ipmitool -I lan -H [R410 ip address] -U [user] \
    -P [password] sensor | grep 'Ambient Temp' | tail -1
Ambient Temp     | 25.000     | degrees C  | ok    | na        | 3.000     | 8.000     | 42.000    | 47.000    | na

I’m really only modifying the fan speeds so the following table will let you make the necessary setting change. I basically don’t want the speeds to get above 50% of max so I didn’t figure out the rest. Maybe later 🙂

R710 Fan speed settings
   Speed         Setting
13.0k = 100% - 0110:0100 0x64
11.7k =  90% - 0101:1010 0x5A
10.4k =  80% - 0101:0000 0x50
 9.1k =  70% - 0100:0110 0x46
 7.8k =  60% - 0011:1100 0x3C
 6.5k =  50% - 0011:0010 0x32 30C
 5.2k =  40% - 0010:1000 0x28 28C
 3.9k =  30% - 0001:1110 0x1e 25C
 2.6k =  20% - 0001:0100 0x14 22C
 1.3k =  10% - 0000:1010 0x0a 12C
   0k =   0% - 0000:0000 0x00 10C

For the R410, the fans run a bit faster but use the same settings and percentages.

R410 fan speeds
   Speed         Setting
18,720 = 100% = 0110:0100 0x64
16,848 =  90% = 0101:1010 0x5A
14,976 =  80% = 0101:0000 0x50
13,104 =  70% = 0100:0110 0x46
11,232 =  60% = 0011:1100 0x3C
 9,360 =  50% = 0011:0010 0x32 30C
 7,488 =  40% = 0010:1000 0x28 28C
 5,616 =  30% = 0001:1110 0x1E 25C
 3,744 =  20% = 0001:0100 0x14 22C
 1,872 =  10% = 0000:1010 0x0A 12C
    0k =   0% = 0000:0000 0x00 10C

I have a script I use to manage the fan speeds throughout a working day, or basically whenever I’m at the desktop practicing stuff, playing a game, or whatever. First I take manual control of the IPMI configuration on the servers. Then I make the appropriate fan change when the servers reach certain temperatures. At 28C or higher, I kick the fans to 40%. At 24C or lower, I drop the fans to 20%. In between I keep the fan speeds at 30%. This is the same regardless of which system I’m running the script against.

In addition, between 8pm and 7am, I revert back to automatic management of the fans. This lets the server manage the temperatures mainly because I’m likely not at my desk.

First, get the current temperature. I showed you this above. Next, take over control of the fans.

$ /bin/ipmitool -I lanplus -H [R710 ip address] -U [user] \
    -P [password] raw 0x30 0x30 0x01 0x00

Then set the fan speed based on the temperature as noted above.

$ /bin/ipmitool -I lanplus -H [R710 ip address] -U [user] \
    -P [password] raw 0x30 0x30 0x02 0xff [speed]

Fan speed is in hex as displayed in the above table. 0x28 for 40%, 0x1E for 30% and 0x14 for 20%.

When you want to re-enable automatic management of the fans (the 8pm to 7am setting):

$ /bin/ipmitool -I lanplus -H [R710 ip address] -U [user] \
    -P [password] raw 0x30 0x30 0x01 0x01

And that’s it. The R410 uses lan instead of lanplus but works the same in all other ways. And of course you can script out whatever works best for your fan speed desires and peace of mind (and sanity 🙂 ).
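Putting the pieces together, here’s a minimal sketch of the sort of script I’m describing, assuming an R710 (lanplus) and the thresholds above. The host, user, and password values are placeholders for your own iDRAC settings, and you’d swap in lan and your own temperatures for an R410.

#!/bin/bash
# Sketch: set the fan speed based on the current ambient temperature (R710).
HOST="192.0.2.10"      # placeholder: your iDRAC IP address
USER="idrac_user"      # placeholder: your iDRAC user
PASS="idrac_password"  # placeholder: your iDRAC password
IPMI="/bin/ipmitool -I lanplus -H $HOST -U $USER -P $PASS"

# Current ambient temperature (the integer part of the sensor reading).
TEMP=$($IPMI sensor reading 'Ambient Temp' | awk -F'|' '{print int($2)}')

# Take manual control of the fans.
$IPMI raw 0x30 0x30 0x01 0x00

# 28C or higher: 40% (0x28). 24C or lower: 20% (0x14). Otherwise: 30% (0x1e).
if [ "$TEMP" -ge 28 ]; then
  SPEED=0x28
elif [ "$TEMP" -le 24 ]; then
  SPEED=0x14
else
  SPEED=0x1e
fi

$IPMI raw 0x30 0x30 0x02 0xff $SPEED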

Posted in Computers | Leave a comment

Adding Projects To Jenkins

A majority of my projects are simple websites or scripts. Nothing too complicated.

My binary files are located on my development server. Jenkins combines the git repo and binary files into a single distribution which is then sync’d up to the target host.

  1. Don’t forget, no spaces in project names.
  2. Create a standard Freestyle project (the first option).
  3. When the configure page comes up, select ‘git’ and enter in the git information.
  4. Select Poll SCM. Either * * * * * for checking every minute or some derivation. My first project is minute by minute, the current one is every hour.
  5. Next, add a build step of Execute Shell and add the necessary lines to collect the site and then sync it to the target host (see the sketch after this list).
  6. Save it
  7. Now on the target host, create a jenkins account and change the ownership of the target directory to ‘jenkins:jenkins’
  8. Finally, select ‘Build now’ in Jenkins.
  9. In the project page, click the down arrow for the current build and view the console.
  10. Resolve any errors
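For what it’s worth, here’s a minimal sketch of what that Execute Shell build step might look like for one of my simple website projects. The host names and paths are placeholders, not my actual configuration.

#!/bin/bash
# Hypothetical Execute Shell build step: combine the git checkout with the
# binary files from the development server, then rsync the result to the
# target host. Hosts and paths below are placeholders.
set -e

BINARIES="devhost:/data/binaries/mysite"   # placeholder: where the binaries live
TARGET="targethost:/var/www/mysite"        # placeholder: the deployment directory

STAGE=$(mktemp -d)
cp -r "$WORKSPACE"/. "$STAGE"/             # the repo Jenkins just checked out
rsync -av "$BINARIES"/ "$STAGE"/           # pull in the binary files
rsync -av --delete "$STAGE"/ "$TARGET"/    # sync the combined site to the target
rm -rf "$STAGE"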

Posted in Computers, Jenkins | Leave a comment

Kubernetes WorkerNode Sizing

Overview

For the current Kubernetes clusters, I reviewed several industry best practices and went with a Worker Node resource configuration of 4 CPUs and 16 Gigabytes of RAM per worker node. In addition, I have a spreadsheet that describes all the nodes in each cluster in order to understand the resource requirements and properly extend the cluster or provision larger worker nodes.

This document describes the Pros and Cons of sizing Worker Nodes to help in understanding the decision behind the worker node design I went with.

Note that I’ve expanded my original document because the author of the article linked at the end of this post made me think about my own decision and confirmed my reasoning behind my choices. In rewriting my document, I ignored the cloud considerations from the original article, since the clusters I’m designing are on-prem and not being hosted in the cloud at this time. We may revisit the article at a later date should we migrate to the cloud.

Considerations

Probably the key piece of information you’ll need to guide your decision is the resource requirements of the microservices that will be hosted on the clusters. If a monster “microservice” requires 2 vCPUs, then a 2 vCPU worker node won’t do the trick. Even a 4 vCPU node might be a bit problematic, since there will likely be other microservices that need to run on the workers. Not to mention any replication requirements or autoscaling of the product. These are all things you’ll want to be aware of when deciding on a worker node size.
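One quick way to sanity-check this (a sketch, assuming you have kubectl access to an existing cluster) is to compare what a worker node can actually allocate against what the workloads are requesting:

# What a worker node can allocate after system reservations (CPU, memory, pods).
kubectl describe node [worker node] | grep -A 6 Allocatable

# Or, for all nodes at once.
kubectl get nodes -o custom-columns=NAME:.metadata.name,CPU:.status.allocatable.cpu,MEM:.status.allocatable.memory,PODS:.status.allocatable.pods

# What the running pods are requesting, to see how they would pack onto a node.
kubectl get pods -A -o custom-columns=NAME:.metadata.name,CPU:.spec.containers[*].resources.requests.cpu,MEM:.spec.containers[*].resources.requests.memory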

Large Worker Nodes

Let’s consider the Pros and Cons of a large worker node.

Pro: Fewer Servers

Simple enough, there are fewer servers to manage. Fewer accounts, fewer agents, and less overhead overall in managing the environment. There are fewer servers that will need care and feeding.

Pro: Resource Requirements

The control plane’s resource needs are lower with fewer worker nodes. A 4 worker node cluster with 10 CPUs per node gives you a 40 CPU cluster, or 40,000 millicpus. If the Operating System and associated services reserve 5% of CPU per worker node, a 4 worker node cluster will lose about 2,000 millicpus, or 2 CPUs, bringing the cluster to 38 CPUs for the hosted microservices. In addition, the control plane has less work to do. Every worker node is registered with the control plane, so fewer workers means fewer networking requirements and a lighter load on the control plane.

Pro: Cost Savings

If you’re running a managed Kubernetes cluster such as rancher.io or OpenShift, you might be more efficient depending on the size of the smaller nodes; the number of CPUs in the cluster determines the benefit. Smaller nodes let you be more flexible, but could increase the CPU requirements and therefore cost more in the long term; however, putting in a fifth 10 CPU node might be less cost effective, especially if you only need 2 or 4 more CPUs.

Con: Lots of Pods

Kubernetes does have a limit of about 110 pods on a node before the node starts having issues. With multiple large nodes, the likelihood of too many pods on a worker node increases, which can affect the hosted services. The agents on the worker node also have a lot more work to do: Docker is busier, kubelet is busier, and in general the worker node works harder. Don’t forget the various probes performed by kubelet, such as liveness, readiness, and startup probes.

Con: Replication

If you’re using Horizontal Pod Autoscaling (HPA) or just configure the deployment to have multiple pods for replication purposes, fewer nodes means your replication is reduced to the number of nodes. If you have a product with 4 replicas and only a 3 node cluster, you effectively only have 3 replicas. There are 4 running pods but if a node fails and it’s hosting two of the replicas, you’ve just lost 50% of your service.
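A quick way to see whether your replicas are actually spread out (a sketch; app=llamas is just a placeholder selector, swap in your own label) is to check which node each pod landed on:

# Show which node each replica is running on.
kubectl get pods -l app=llamas -o wide

# Or count replicas per node for the same selector.
kubectl get pods -l app=llamas -o jsonpath='{range .items[*]}{.spec.nodeName}{"\n"}{end}' | sort | uniq -c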

Con: Bigger Impact

A pretty simple note here, with fewer nodes, if a node does fail more pods are affected. The service impact is potentially greater.

Smaller Worker Nodes

In reading the above Pros and Cons, you can probably figure out the Pros and Cons of smaller worker nodes but let’s list them anyway.

Pro: Smaller Impact

With more smaller nodes, if a node goes away, fewer pods and services are impacted. This can be important when we want to manage nodes such as when patching and upgrading.

Pro: Replication

This assumes you have a high number of pods for a service. More smaller nodes ensures you can both have multiple pods and can use HPA to automatically scale it out further if needed. And with smaller nodes, one node failing doesn’t significantly affect a service.

Con: More Servers

Yes, there are more servers to manage. More accounts, more agents, more resource usage. In an environment where Infrastructure as Code is the rule, more nodes shouldn’t have the same impact as if you were manually managing the nodes.

Con: Resource Availability

With more nodes, the control plane overhead increases and the overall usable cluster size might be affected. For example, with my design of 4 CPU worker nodes, a 10 node cluster loses the same amount of resources to overhead as a 4 node, 40 CPU cluster: about 2 CPUs across the cluster. But as noted, the cluster management overhead increases as more worker nodes are added, with more networking as every node talks to every node.

Con: Pod Limits

Again, this is related to how many resources a product uses. With smaller nodes, there’s a chance of enough resource fragmentation that pods might not be able to start if the node is too small relative to the microservice requirements. If the microservices are truly “micro”, then smaller nodes could be fine, but if the microservices are moderately sized, nodes that are too small will have wasted resources and services may not start.

Conclusion

As noted initially, knowing what the microservices need will give you the best guideline on worker node size. Consider other factors as well, such as the cost per CPU of a node when spread out versus all together. And don’t forget, you can add larger worker nodes to a cluster that might be too small to begin with, especially if they’re virtual machines: power down a node, increase the CPU and RAM, and bring it back up.

References

https://learnk8s.io/kubernetes-node-size

Posted in Computers, Kubernetes | Leave a comment