Winning White Bean Chili

Allow 1 1/2 hours to prepare unless you’re a super chopper!
Makes about 6 large servings

* 1 large onion, chopped
* 5 cloves garlic, minced
* 2 jalapeno peppers, chopped
* 1 tomatillo pepper, chopped
* 1 1/2 pounds of turkey breast, chopped, or turkey hamburger
* 2 (4 ounce) cans of chopped green chile peppers
* 3 tablespoons ground cumin
* 2 tablespoons ground coriander
* 1 tablespoon dried oregano
* 1 tablespoon dried cilantro
* dash of bay leaves
* 1/4 teaspoon ground cayenne pepper (or to taste)
* 1/4 teaspoon ground white pepper (or to taste)
* 4 cans great northern beans, drained and rinsed
* 2-3 cups of chicken broth (add as needed if the chili is too thick or there isn’t enough fluid)
* shredded Monterey Jack cheese (topper)

Directions for the crock pot

Have the crock pot out. In a medium size frying pan, saute the onion until soft and caramelizing. Add the jalapeno, tomatillo, and garlic and saute a few minutes. Transfer to the crock pot. Add a little oil to the pan, add the turkey, cumin, coriander, white pepper, and cayenne, and saute until the turkey turns white. Add the rest of the spices and the canned chile peppers and saute a couple of minutes.

Transfer to the crock pot.

Remove the pan from the burner and take one can of drained and rinsed beans and add it to the pan. Smash up the beans soaking up the oils and spices. Add to the crock pot. Add the rest of the beans to the crock pot and then add the broth until it’s about 1″ below the ingredients. Stir it all around until well mixed.

Cover and heat on low for 8 hours or high for 4 hours.

Posted in Cooking

Kubernetes 1.9.0 Installation

Overview

After discussion between my manager and the SysEng management team, it’s been decided that I’m to take over the Kubernetes infrastructure. The one SysEng who’s still here has put in his time and the belief is that management of Kubernetes should be an OpsEng task. Since I have the most experience due to helping to manage the existing servers plus working with the SysEng, I’m owning the environment. This documentation has been created to provide insights into my installation process and as a starting point when building a new cluster.

Script Environment

Like the 1.2 environment, the current clusters are being built with shell scripts. Also like the 1.2 environment, the scripts are specific to the cluster being built. You have to modify the set_variables.sh script for each cluster.

SysEng Scripts:

  • set_variables.sh – Set the variables used by the following scripts
  • functions.lib – Functions used by the following scripts
  • generate_certs.sh – Generate the ca and sub certs for all systems
  • generate_configs.sh – Generate the configuration files
  • copyfiles.sh – Copy the zipped files to the servers
  • bootstrap_cluster.sh – Dialog system to start etcd and the control plane
  • configure_cluster.sh – Dialog system to let you manage users and RBAC
  • create_users.sh – Create users
  • deploy_addons.sh – Deploy the additional tools
  • kubectl_remote_access.sh – Setting up a config for the users to access the dashboard
  • nuke_env.sh – Remove all parts of Kubernetes

When I received the 1.9 scripts from SysEng, I created new scripts based on them, but cluster agnostic and using a .config file to separate the different clusters. This also let me check them into our revision control infrastructure. As we're using certificates in two different locations, I've split the process into three packages: the cert generation scripts, the cluster build scripts, and the cluster admin scripts.

  • kubecerts – This package contains all the scripts used to build the CA along with all the server and user certificates. These files are initially located on the build server and are part of the kubernetes and kubeadm packages.
  • kubernetes – This package contains all the scripts used to build the cluster. These scripts are located in /var/tmp/kubernetes.
  • kubeadm – This package contains all the scripts used to manage users and namespaces. These files are located on the management server in the /usr/local/admin/kubernetes/[sitename] directory.

Kubernetes The Hard Way

I used Kelsey Hightower’s Kubernetes The Hard Way site to better understand the intricacies of Kubernetes that I didn’t know from managing the existing clusters and from working with SysEng. Kelsey Hightower works for Google and wrote his instructions against 1.8, so his site was spot on and exactly what I needed to move forward.

Environment

As my workload is expanding, here are the environments. For the non-Production sites, everything is in the local offices of course. For Production, the remote DR site really is a remote DR site 🙂

  • Sandbox site
  • Dev local and remote DR site
  • QA local and remote DR site
  • Integration Lab local and remote DR site
  • Production local and remote DR site.

The Sandbox, Dev, and QA sites are three masters and three worker nodes. Integration Lab and Production sites are three masters and seven worker nodes.

I needed to create a new local Integration Lab site but was able to reuse the old Integration Lab DR site, as it was spec’d for the old 1.2 installation but never used. I rebuilt those servers for the 1.9 installation.

Additional tools installed are as follows.

  • kubedns
  • heapster
  • influxdb
  • grafana
  • dashboard

The Configuration Files

Each site has its own configuration file located in the config directory.

There is a separate .config file for each of the packages, as the kubernetes.config file may have a few extra settings that the kubecerts.config file doesn’t.

You’ll use the install script located in the root of each of the packages to copy the site specific configuration file into the root as .config, which is then used by all the scripts.
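
For illustration, the install script doesn’t need to be anything more than a copy; this is a minimal sketch of the idea, not the actual script, and the site name argument is an assumption:

#!/bin/bash
# Hypothetical sketch of the install script: make a site's config the active .config
site="$1"
if [ -z "$site" ] || [ ! -f "config/${site}.config" ]; then
  echo "Usage: $0 <sitename>   (config/<sitename>.config must exist)" >&2
  exit 1
fi
cp "config/${site}.config" .config
echo "Active configuration is now ${site}"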

The kubecerts Scripts And Configurations

The kubecerts scripts use the Cloudflare cfssl binaries.

Configurations

  • .config – This file contains the site name, CA information, Load Balancer information, Service IP, and a list of master and worker server names and IP addresses. This file is located in the installation root directory.
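
As a purely illustrative example (every name and address below is made up), the .config amounts to a set of shell variables along these lines that the scripts can source:

# Hypothetical kubecerts .config - all values are made up for illustration
SITENAME="sandbox"
CA_COUNTRY="US"
CA_ORG="Example Corp"
LB_NAME="kube-lb.sandbox.example.com"
LB_IP="10.0.0.10"
SERVICE_IP="10.32.0.1"
MASTERS="master1:10.0.0.11 master2:10.0.0.12 master3:10.0.0.13"
WORKERS="worker1:10.0.0.21 worker2:10.0.0.22 worker3:10.0.0.23"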

Scripts

  • version – contains the Kubernetes version.
  • install – shell script that selects the correct site configuration file from the config directory for the cluster you’re on.
  • bin/buildcerts.sh – shell script that uses the .config data file to create master and worker directories and generate the certificates (see the sketch after this list).
  • bin/cleanup – script that deletes it all so you can do it again.
  • bin/config – directory that contains the CSR files, formatted in JSON, which are used by the buildcerts.sh script to create certificates.
  • bin/config/encryption-config.yaml – encryption configuration needed by the kube-apiserver manifest.
  • config – directory that contains all the site config files.
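
The heart of buildcerts.sh is the standard Cloudflare cfssl pattern. A minimal sketch, assuming a ca-csr.json, a ca-config.json with a kubernetes profile, and per-host CSR files in bin/config (the actual file names may differ):

# Create the CA, then sign a certificate for one host (repeated for every master and worker)
cfssl gencert -initca config/ca-csr.json | cfssljson -bare ca

cfssl gencert \
  -ca=ca.pem -ca-key=ca-key.pem \
  -config=config/ca-config.json \
  -profile=kubernetes \
  -hostname=master1,10.0.0.11 \
  config/master1-csr.json | cfssljson -bare master1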

Installation

You’ll run these commands on the build server.

  1. Run the install script to select the site you’ll be building certificates for.
  2. In the bin directory, run the buildcerts.sh shell script.
  3. The encryption key written into the encryption-config.yaml file occasionally contains an invalid character in the final string and causes an error. If that happens, run the cleanup script to remove all the certs and start over with step 1.
  4. Using the tar command, create a site specific tar file named after the apiserver hostname (see the sketch after this list).
  5. Copy the site.tar file into the /opt/static/kubernetes/sitecerts directory.
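
Steps 4 and 5 amount to something like the following; the hostname, the directory names, and which files go into the tar file are assumptions:

# Bundle the generated certs into a tar file named after the apiserver hostname and stage it
tar cf kube-apiserver.sandbox.example.com.tar ca*.pem masters/ workers/
cp kube-apiserver.sandbox.example.com.tar /opt/static/kubernetes/sitecerts/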

When the kubernetes script is run, it copies the site.tar file into the kubernetes directory and untars the certificates. Then the entire package is scp’d over to all nodes in the cluster where you then begin the kubernetes installation process.

The kubernetes Scripts and Configurations

The kubernetes static directory contains all the necessary binary files used when building the clusters. Make sure you have the various kubectl binaries, the kube-apiserver, kube-controller-manager, kube-scheduler, and kube-proxy binaries. In addition, you’ll need the cri and cni archives, plus the etcd and etcdctl binaries used to run and manage etcd. As we’re also using the Cloudflare binaries, make sure the cfssl and cfssljson binaries are also in place for the certificates.
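
The build scripts presumably stage these binaries on each node along these lines; the destination path is an assumption, not necessarily what the scripts actually do:

# Make the binaries executable and put them on the PATH
chmod +x kubectl kube-apiserver kube-controller-manager kube-scheduler kube-proxy etcd etcdctl
sudo cp kubectl kube-apiserver kube-controller-manager kube-scheduler kube-proxy etcd etcdctl /usr/local/bin/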

Configurations

This configuration is a bit more complicated but very similar to the kubecerts one. It has the site name, CA information, Load Balancer information, Service IP, a list of master and worker nodes, plus path information for the various kubernetes certs and configurations, and some configuration options. See the .config base file for the details.

Scripts

  • version – contains the Kubernetes version
  • install – shell script that selects the correct site configuration file from the config directory for the cluster you’re on.
  • bin/buildadmin.sa.sh – used to build the first cluster admin.
  • bin/installcerts.sh – installs all the generated certificates into their appropriate directories.
  • etcd/buildetcd.sh – installs and configures the etcd binary.
  • etcd/config – directory with the etcd.service configuration file.
  • master/buildmaster.sh – shell script that installs and configures a master server.
  • master/config – directory with the configuration files for the core services.
  • tools – various scripts used for managing the cluster.
  • worker/buildrepo.sh – this script pulls the certificate from the Artifactory server so images can be pulled.
  • worker/buildworker.sh – script that installs and configures a worker node.
  • worker/config – various configuration files used when configuring a worker node.

Installation

You’ll be running scripts on every node of the cluster.

  1. Run the install script to select the site you’ll be installing the cluster for.
  2. In the bin directory, run the installcerts.sh shell script. This copies all the certs into the appropriate directories.

Master Servers

  1. In the etcd directory, run the buildetcd.sh shell script. This installs etcd.
  2. In the master directory, run the buildmaster.sh shell script. This installs the kube-apiserver, kube-controller-manager, and kube-scheduler services.

Once done, in the bin directory run the buildadmin.sa.sh shell script. This creates your serviceaccount and a kubeconfig file you use to access the servers from the management server.
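
A quick way to confirm the new kubeconfig works from the management server (the file name here is just a placeholder):

# Sanity checks using the generated kubeconfig; componentstatuses should show etcd and the
# controller services healthy, and nodes will start showing up once the workers are built
kubectl --kubeconfig admin.kubeconfig get componentstatuses
kubectl --kubeconfig admin.kubeconfig get nodes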

Worker Nodes

  1. In the worker directory, run the buildworker.sh shell script.
  2. Then run the buildrepo.sh script. This ensures the certificate from the Artifactory server is installed on the node; otherwise you can’t download images from Artifactory. A sketch of what that amounts to follows this list.
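
On a CentOS/RHEL style node, trusting a registry certificate generally looks like the following; the certificate name and paths are assumptions, not necessarily what buildrepo.sh actually does:

# Add the Artifactory certificate to the system trust store, then restart Docker to pick it up
sudo cp artifactory.example.com.crt /etc/pki/ca-trust/source/anchors/
sudo update-ca-trust extract
sudo systemctl restart docker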

The kubeadm Scripts and Configurations

There are two parts to this process: finishing the cluster installation and managing users. This section describes both, but the Installation steps below only cover completing the cluster; user creation is covered in the next section.

Configurations

The configuration file is pretty minor, containing the cluster name, the certificate details (users need new certs), the Load Balancer, and the Artifactory credentials.

Scripts

  • version – contains the Kubernetes version.
  • install – shell script that selects the correct site configuration file from the config directory for the cluster you’re on.
  • bin/buildadmin.sa.sh – script used to create the cluster admin serviceaccount.
  • bin/builduser.sh – script used to create namespace specific users.
  • bin/config – email text files used to notify users and admins of cluster management.
  • bindings/clusterrolebinding.yaml – the main file used to tie service accounts to the cluster.
  • config – directory with csr files for admins and users.
  • roles/clusterrole.yaml – role used to manage the cluster.
  • tools – directory with tools used to manage the cluster.
  • yaml/buildsystem.sh – shell script used to apply the additional yaml files to install tools.

Installation

  1. Run the install script to select the site you’ll be managing the cluster with.
  2. In the bindings directory, run the kubectl apply -f clusterrolebinding command.
  3. In the roles directory, run the kubectl apply -f clusterrole command.
  4. In the roles directory, run the kubectl apply -f dashboard command.
  5. In the yaml directory, run the buildsystem.sh shell script (sketched below) to install grafana, heapster, influxdb, kube-dns, and the kubernetes-dashboard.
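
buildsystem.sh presumably just applies the add-on manifests in order; a minimal sketch of that idea, with illustrative file names:

#!/bin/bash
# Apply each add-on manifest in the yaml directory
for manifest in kube-dns.yaml heapster.yaml influxdb.yaml grafana.yaml kubernetes-dashboard.yaml; do
  kubectl apply -f "$manifest"
done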

User Creation

The buildadmin.sa.sh and builduser.sh shell scripts are part of the overall kubeadm script package.

Admins

This is the process in the buildadmin.sa.sh shell script.

  1. Create a serviceaccount. Since we don’t have an auth interface, users are created as service accounts.
  2. Create a clusterrolebinding to the cluster-admin clusterrole.
  3. Create a username.kubeconfig file in the /usr/local/admin/kubernetes/users directory.
  4. Generate a password from the password script.
  5. Create a zip file using the password.
  6. Save the file into the /var/www/html/kubernetes directory.
  7. Send the user an email.
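
A hedged sketch of what those steps look like in kubectl terms. The account name, cluster name, server URL, and the password command are all stand-ins rather than the actual contents of buildadmin.sa.sh; the token-in-kubeconfig pattern is the standard one for 1.9 service accounts.

# Steps 1-3: service account, cluster-admin binding, and a token-based kubeconfig
kubectl create serviceaccount jdoe -n kube-system
kubectl create clusterrolebinding jdoe-admin --clusterrole=cluster-admin \
  --serviceaccount=kube-system:jdoe

SECRET=$(kubectl -n kube-system get serviceaccount jdoe -o jsonpath='{.secrets[0].name}')
TOKEN=$(kubectl -n kube-system get secret "$SECRET" -o jsonpath='{.data.token}' | base64 -d)
KCFG=/usr/local/admin/kubernetes/users/jdoe.kubeconfig
kubectl config set-cluster sandbox --server=https://kube-lb.sandbox.example.com:6443 \
  --certificate-authority=ca.pem --embed-certs=true --kubeconfig="$KCFG"
kubectl config set-credentials jdoe --token="$TOKEN" --kubeconfig="$KCFG"
kubectl config set-context default --cluster=sandbox --user=jdoe --kubeconfig="$KCFG"
kubectl config use-context default --kubeconfig="$KCFG"

# Steps 4-6: password, encrypted zip, and drop it where the web server can serve it
PASS=$(openssl rand -base64 12)    # stand-in for the real password script
zip -P "$PASS" jdoe.zip "$KCFG"
cp jdoe.zip /var/www/html/kubernetes/
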
Users

This is the process in the builduser.sh shell script.

  1. Create a serviceaccount. Since we don’t have an auth interface, users are created as service accounts.
  2. Create a namespace; users are members of namespaces and only have read-only (or view) access to the cluster itself.
  3. Create a rolebinding so the user has edit access to the namespace.
  4. Create a username.kubeconfig file in the /usr/local/admin/kubernetes/users directory.
  5. Generate a password from the password script.
  6. Create a zip file using the password.
  7. Copy the file into the /var/www/html/kubernetes directory.
  8. Send the user an email with their password and the location of their encrypted kubeconfig file.
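
The per-user version differs mainly in the namespace and the rolebinding; a sketch under the same assumptions as the admin example above:

# Service account in its own namespace with edit rights limited to that namespace,
# plus read-only (view) access to the cluster as a whole
kubectl create namespace jdoe-project
kubectl create serviceaccount jdoe -n jdoe-project
kubectl create rolebinding jdoe-edit --clusterrole=edit \
  --serviceaccount=jdoe-project:jdoe -n jdoe-project
kubectl create clusterrolebinding jdoe-view --clusterrole=view \
  --serviceaccount=jdoe-project:jdoe
# The kubeconfig, password, zip, and email steps then follow the same pattern as the admin script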

Completion

This completes the installation process. When done, you should have a functioning cluster and access to it.

Posted in Computers, Kubernetes

Stuffed Chicken

Ingredients

  • 4 chicken breasts
  • 1 tablespoon olive oil
  • 1 teaspoon paprika
  • 1 teaspoon salt, divided
  • ¼ teaspoon garlic powder
  • ¼ teaspoon onion powder
  • 4 ounces cream cheese, softened
  • ¼ cup grated Parmesan
  • 2 tablespoons mayonnaise
  • 1 ½ cups chopped fresh spinach
  • 1 teaspoon garlic, minced
  • ½ teaspoon red pepper flakes

Instructions

Preheat oven to 375 degrees.

Place the chicken breasts on a cutting board and drizzle with olive oil.

Add the paprika, 1/2 teaspoon salt, garlic powder, and onion powder to a small bowl and stir to combine. Sprinkle evenly over both sides of the chicken.

Use a sharp knife to cut a pocket into the side of each chicken breast. Set chicken aside.

Add cream cheese, Parmesan, mayonnaise, spinach, garlic, red pepper flakes, and the remaining ½ teaspoon of salt to a small mixing bowl and stir well to combine.

Spoon the spinach mixture into each chicken breast evenly.
Place the chicken breasts in a 9×13 baking dish. Bake, uncovered, for 25 minutes or until chicken is cooked through.

Posted in Cooking

DevOps Interview

Did my interview Friday. Interesting in general. No real OS questions but questions on Kubernetes and Docker and interest in Helm. Then an actual hands on test: configure Jenkins to upload a file from a Github site to Amazon Web Services (AWS).

Git is a revision control software tool, github and gitlab are management sites for git. I’ve used revision control for 21 years but started poking at git last year.

Jenkins is a deployment tool. You set up a configuration that pulls code from github/gitlab when something changes and copies it up to a hosting site like a web server. I’ve used Jenkins for a couple of simple web sites I own.

I have an account for AWS but only poked at it a little. I created a Kubernetes site on Google as a learning project but that’s it.

So I spent 90 minutes on google getting this going, searching for how to upload to AWS from Jenkins, etc. I had a couple of false starts and at least one “sit back and think a minute” moment. They finally called time at 4pm with the task incomplete. I’m pretty sure I would have gotten it within a few more minutes. They had “copy directory blah up” but directory was a level down from the workspace directory.

I will say that I learned quite a bit from this test 🙂 Even for my personal sites. I used the test to rework my Jenkins process to exclude .git and use a shell command vs a plugin (less complicated).
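
For what it’s worth, the kind of shell step I mean is nothing fancier than this; the host and path are placeholders, and rsync over ssh is just one way to do it:

# Jenkins "Execute shell" build step: push the workspace to the web server, skipping .git
rsync -av --exclude='.git' ./ deploy@www.example.com:/var/www/html/mysite/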

Probably no on the job (nothing heard so far) but I took something away and need to do a little more poking.

Posted in Computers

Bacon Wrapped Chicken

Two chicken breasts
6 slices of bacon

Preheat oven to 400F

Combine:

1 1/2 teaspoon thyme
1/2 teaspoon sugar
Heaping 1/8 teaspoon cayenne
1 teaspoon salt
1/2 teaspoon black pepper

Mix it up and use it as a rub for the chicken.

Fold one slice of bacon in half and place it under each chicken breast. Wrap the other two around the chicken with the ends under the folded bacon on the bottom.

Put the two pieces into a non-stick frying pan with the heat on medium. You’ll flip it periodically to crisp the bacon on both the top and bottom of the chicken. The first cook should keep the bottom together for the flip.

Once the bacon is crispy/cooked, put the breasts onto a cookie sheet and slide into the oven. Cook them for approximately 15 minutes, or until an inserted thermometer reads 160F.

The main problem with breasts is people cook them too long, which is why you test with a thermometer. While the juice from my chicken was a little pink, it was cooked and nice and juicy.

I also cooked some potatoes for about 30 minutes, probably overcooking them a touch but it kept the potatoes moist even for leftovers.

Posted in Cooking

Installing Container Linux on VMware

I have a VMware vCenter configuration at home. Two R710 servers connected as a cluster (VMUG subscription).

I was having a little trouble with installing CoreOS (aka Container Linux) and in general have issues due to most tutorials using Vagrant, VirtualBox, AWS, GCS, or even VMware Workstation. None of them are a problem honestly but I do have a bit more of an involved local setup and there doesn't seem to be much in the way of how to's if you don't have a VMware configuration.

Anyway, setting up CoreOS took a couple of tries but I got it done. It’s not all that hard honestly.

1. Download the CoreOS iso at https://coreos.com/releases
2. In vCenter, Create a new virtual machine.
3. Configure with 2 CPUs, 2 Gigs of RAM, and 40 Gigs of disk. It only needs 8 Gigs apparently, but updates are applied by snapshot to new partitions so it's nice to have space.
4. Under the virtual machine settings, VM Options, Boot Options, click the ‘Force BIOS setup’ checkbox.
5. Open a console.
6. Boot the VM.
7. You’ll be at a BIOS screen, under the VMRC menu, select the downloaded coreos iso.
8. Under boot, make sure you’re booting to CD
9. Save and boot.
10. Once it's up and at a core@localhost prompt, you'll need to create a password hash for the cloud config file.

sudo openssl passwd -1 > cloud-config-file

11. Edit the cloud-config-file like so. The interpreter must find the first line exactly as written or it'll fail:

#cloud-config
users:
  - name: 'login-name'
    passwd: 'openssl generated passwd'
    groups:
      - sudo
      - docker
hostname: hostname.domain.name

12. Install coreos by running this command:

sudo coreos-install -d /dev/sda -C stable -c cloud-config-file

13. Under VMRC, unmount the iso.
14. Reboot
15. Log in.
16. The system starts in DHCP mode if you didn't configure it in the cloud config file. To give it a static IP, in /etc/systemd/network, create a file called 00-ens192.network. It can be any file name but starting with numbers will order the startup if you configure other bits of the system.

[Match]
Name=[interfacename]

[Network]
DNS=[dns]
Address=[ipaddress]
Gateway=[gatewayaddress]
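
To pick the new file up without a reboot, restarting networkd should be enough:

# Apply the new network configuration
sudo systemctl restart systemd-networkd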

And done

Posted in Computers

Presenting Homelabs To Potential Employers

A recent posting mentioned documenting your HomeLab on your Resume to show you’re going above and beyond what your job requires. Shows initiative and interest. I know when we’re looking at candidates, we want to find people who have external geeky interests like a HomeLab. In some interviews, people have been asked to diagram out their HomeLab and are dinged if they don’t have one or can’t document it on the fly.

But an alternative view was also posted: is there a work/life balance? Are you taking the time to do other things outside of tech? Is tech your sole hobby?

Personally my main hobby is table top gaming. Board Games, War Games, Card Games, Dice Games, and Role Playing Games. Games. In fact, it partially got me into computers. That and I was a typesetter on a computerized typesetting machine back in 1982 or so. Typesetting up a Dungeons & Dragons character sheet got me thinking about it and then creating a Car Wars vehicle generation program in Basic got me a job. But I also tour on a Motorcycle, Ski, Snowshoe, and yes, learn new technologies with my HomeLab.

Since the mid-90’s, my home environment has consisted of at least a separate Firewall system (generally my previous desktop when I upgraded). Back in the early 00’s, I signed up for an external server hosted in Miami where I could put pictures and continue to maintain my Unix knowledge (an OpenBSD server, now a CentOS one).

A few years back I was gifted two Dell R710s that were being replaced at work. I’ve since installed vSphere to have a virtualized environment, pfSense (firewall plus) as the first Virtual server, and then a bunch of servers to isolate the things I do for fun and to learn new tech. Recently I signed up for the VMware Users Group (VMUG) and their Advantage program. This gives me access to the vCenter software and the Operations options to let me turn my two ESX servers into a cluster (and mimicking work much better). We’re moving towards a Private Cloud using vRealize technology so I have access to that too (not installed… yet).

I have a bunch of different servers, syslog, Space Walk, Samba, a couple of development servers, MySQL, several web servers, a CI/CD pipeline is being staged (Jenkins, Artifactory, Ansible, and GitLab), and I’m in the process of rebuilding my Kubernetes servers (3 Masters/3 Workers).

I’ve recently been tasked with taking ownership of the Kubernetes development process, in part due to my explorations of Kubernetes at home. I’m moving the server management scripts over to a GitLab server at work that I built. Again, in part due to me setting it up at home (I have a Revision Control System (RCS) system at home and work now and moved to git).

It’s helping me get more familiar with the new DevOps ideas plus the CI/CD pipeline tools, Orchestration with Kubernetes, and Microservices with Docker.

The question though is, should this kind of information be better documented on a resume? Should there be a HomeLab section or a Learning section, something that shows your interest, motivation, and desire even if your job doesn’t require you to use these tools?

There’s also the idea that such things be documented in a Cover Letter. You see the position, supply your resume, and then in the Cover, call out your HomeLab. It might be a better place for some positions where you’re looking at a specific job. Over the years though, I’ve seen positions where you supply only your resume to a company database which is then mined when a position opens. Tough to supply a cover for a job you don’t know about.

It’s something to think about.

Posted in Computers

If A Tree Falls In The Forest

I write scripts to better manage the Unix servers we’re responsible for. Shell scripts, perl scripts, whatever it takes to get the information needed to stay on top of the environment. Be proactive.

Generally scripts are written when a problem is discovered. As we’re responsible for the support lab and production servers (not the development or engineering labs) and since only production servers are monitored, there are many scripts that duplicate the functions of monitoring. Unlike production, we don’t need to immediately respond to a failed device in the support lab. But we do need to eventually respond. Scripts are then written to handle that.

The monitoring environment has its own unique issues as well. It’s a reactive configuration in general. You can configure things to provide warnings but it can only warn or alert on issues it’s aware of. And still there’s the issue of not available in the support lab.

I’ve been programming in one manner or another (basic, C, perl, shell, php, etc) since 1982 or so (Timex/Sinclair). One of the things I always tried to understand and correct were compile time warnings and errors. Of course errors needed to be taken care of, but to me, warnings weren’t to be ignored. As such, my server management scripts are more like code than 6 or 7 lines of whatever commands are needed to accomplish the task: variables, comments, white space, indentation, etc. The Inventory program I wrote (Solaris/Apache/MySQL/PHP; the new one is LAMP) looks like this:

Creating list of files in the Code Repository
Blank Lines: 25687
Comments: 10121
Actual Code: 100775
Total Lines: 136946
Number of Scripts: 702

That’s 136,000 lines of code, of which about 25% are blank lines or comments. The data gathering scripts are up to 11,000 or so lines across 72 scripts, plus another 100 or so scripts that aren’t managed via a source code repository.

The process generally consists of data captures on each server which are pulled to a central server; then reports are written and made available. One of the more common methods is a comparison file: a verified file compared to the current file. A difference greater than 0 means something’s changed and should be looked at. With some of the data being gathered pretty fluid, checking for every little issue can be daunting, which is why it’s a comparison. This works pretty well overall, but of course there’s setup (every server data file has to be reviewed and confirmed) and regular review of the files vs just an alert.
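
The comparison itself is as simple as it sounds; a minimal sketch with made-up file names:

# Compare the verified baseline against today's capture; any difference gets flagged for review
if ! diff -u /data/verified/server01.packages /data/current/server01.packages > /tmp/server01.diff; then
  echo "server01: change detected, review /tmp/server01.diff"
fi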

Another tool is the log review process. Logs are pulled from each server to a central location, then processed and filtered to remove the noise (the filter file is editable of course), and the final result is concatenated into a single log file which can then be reviewed for potential issues. In most cases the log entries are inconsequential. Noise. As such they are added to the filter file, which reduces the final log file size. This can become valuable in situations like the lab where monitoring isn’t installed, or for situations where the team doesn’t want to get alerted (paged) but does want to be aware.
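
The filter step is essentially a grep against a patterns file; again a sketch with assumed paths:

# Concatenate the pulled logs and drop the known-noise lines listed in the filter file
cat /data/logs/*/messages | grep -v -f /data/logs/filter.patterns > /data/logs/daily-review.log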

But there’s a question. Are the scripts actually valuable? Is the time spent reviewing the daily output greater than the time fixing problems should they occur? These things start off as good intentions but over time become a monolith of scripts and data. At what point do you step back and think, “there should be a better way”?

What made me ask was being moved to a new team. In the past, I wrote the scripts to help, but if I’m the only one looking at the output, is it really helping the team? In moving to the new team, I still have access to the tools but I’m hands off. I can see an error that happened 18 months ago that hasn’t been corrected. I found a failed drive in the support lab 6 months later. I found a failed system fan that had failed who knows how long ago.

There was an attempt to even use Nagios as a view into the environment but there are so many issues that again, working on them becomes overwhelming.

The newest process is to have a master script check quite a few things on individual servers and present that to admins who log in to work on other tasks. Reviewing that shows over 100,000 checks on 1,200 systems and about 23,000 things that need to be investigated and corrected.

But is the problem the scripts aren’t well known enough? Did I fail to transition the use of them? Certainly I’ve passed the knowledge along in email notifications (how the failure was determined) over time and the scripts are internally documented as well as documented on the script wiki.

If a tree falls in the forest and no one is around, does it make a sound?

If a script reports an error and no one looks at the output, is there a problem?

The question then becomes, how do I transfer control to the team? I’ve never said, “don’t touch” and have always said, “make changes, feel free” but I suspect there’s still hesitation to make changes to the scripts I’ve created.

The second question is related more to if the scripts are useful. Just because I found a use for them doesn’t mean the signal to noise ratio is valuable, same as the time to review vs the time to research and resolve.

Finally, if the scripts are useful but the resulting data unwieldy, what’s an alternate method of presenting the information that’s more useful? The individual server check scripts seem to be a better solution than a centralized master report with 23,000+ lines, but a review shows limited use of the review process upon login.

Is it time for a meeting to review the scripts? Time to see if there is a use, whether they’re valuable, and whether they can be trimmed down. Or is there just so much work to manage it all that the scripts, while useful, just can’t be addressed due to the reduction of staff (3 admins for 1,200 systems)?

Posted in Computers

Yearly State of the Game Room 2017

And here we are again. The yearly State of the Game Room. With the move to a new place, I got to move all the games into a single room which is a positive thing and lets me display them all rather than jumping between a couple of rooms to locate a game. I also have a lot better lighting in this room making gaming a lot easier.

I’ve added a good 6 Kallax shelf squares of games over the year. Bunny Kingdom and The Thing are two of the ones we enjoyed the most and the ones we’ll be taking to the end of year bash. Another one we played was The Others. It was a pretty interesting game with lots of bits but it really got a bit long a couple of times. I can do a couple or three hours of a game before I start wanting to end it.

List of games I can remember for the year by looking at the shelves. I’m sure I missed a couple especially since we moved this year. I do have an inventory system so I can keep track but I haven’t been keeping up on it. I’m working on getting it updated over the next few weeks or so. Maybe I’ll have a better list for next year 🙂

* 7th Sea RPG
* 7 Wonders: Duel Pantheon
* Apocalypse Prevention, Inc RPG
* Arkham Horror the Card Game
* Blood Bowl
* Bottom of the 9th
* Bunny Kingdom
* a few Call of Cthulhu books
* Castles of Burgundy Card Game
* Charterstone
* Clank
* Conan RPG
* Conan Board Game
* Dark Souls
* DC Deck Building Multiverse Box
* Dead of Winter Warring Colonies
* Dice Forge
* a few Dungeons and Dragons books
* Elder Sign expansions
* Eldritch Horror expansions
* Evolution Climate
* Exit
* Fallout
* Fate of the Elder Gods
* Feast For Odin
* First Martian
* Five Tribes expansions
* Fragged RPG
* Gaia Project
* Jaipur
* Jump Drive
* Kemet
* Kingdom Builder
* Kingdomino
* Kingsport expansion
* Legendary expansion
* Lovecraft Letters
* Magic Maze
* Mansions of Madness 2nd Edition expansions
* Massive Darkness
* Mountains of Madness
* My Little Pony RPG
* Netrunner packs
* New Angeles
* The Others
* Pandemic Second Season
* Pandemic Iberia
* Pandemic Rising Tide
* Paranoia RPG (Kickstarter)
* a few Pathfinder books
* Queendomino
* a couple of Red Dragon Inn expansions
* a few Savage Worlds books
* Scythe expansions
* Secret Hitler
* a few Shadowrun 5th books
* Sherlock Holmes expansions
* Cities of Splendor
* Talon
* Terraforming Mars
* The Thing
* Ticket to Ride Germany
* Time Stories
* Twilight Imperium
* via Nebula
* Villages of Valeria
* Whistle Stop
* Whitechapel expansion
* Whitehall
* a few X-Wing minis.

After getting things organized and making room for the next year, I have pictures.

Posted in Gaming

‘If You Were Smart, You’d Be In Engineering’

This is a comment I heard a few years back from a manager on the Engineering side of the house. Of course it’s stated in a bad way, but the idea behind it is that to advance in technology, your next step should be in Engineering (or Development) and you should be working towards that.

As a Systems Admin for many many years, I’ve found that I’m good at the job. Managing systems and providing solutions in Operations to improve the environment. It also plays to my own strengths and weaknesses. Troubleshooting problems, fixing them, and moving on vs spending all my time on a single product as a programmer or engineer. I’ve had discussions with prior managers about advancement both in Engineering and in Management. I’ve even attended Management Training. After discussion, it was felt that I should work with my strengths.

A few years back, I was moved into an Operations Engineer role. The company started implementing a Plan, Build, Run model and I was moved into the Build team. No real reason was given for the choices. The concept was to move a portion of the team away from day to day maintenance duties in order to focus on Business related Product deployments. It’s taking too long for the entire team to get Products into the field, what with being on call and dealing with maintenance issues like failing hardware so let’s pull three people away from the team to be dedicated on Business related Projects.

Sadly there was no realization that the delays are in part due to multiple groups in Operations having their own priorities. Delays were more due to the inability of the groups to fully align on a project. With three business groups each with their own priorities, the time to deploy really didn’t change much and there were fewer people to do the work.

As we moved forward, the company redefined the Build role. Titles changed and we’re now Enterprise Systems Engineers. One of the new goals was to move away from server builds and into server automation: build one or two servers for the project in order to create automation, and then turn the rest of the project over to the Run team to complete as a Turn Key solution.

In addition, a new project came along and was designed to follow the Continuous Integration/Continuous Delivery (CI/CD) process. The concept is to deploy software using an incremental process vs the current 6 month to 18 month process.

The current process is to gather together all the change requests that identify problems in the product plus the product enhancement requests and spend time creating the upgraded product. It can take months to get this assembled, get code written and tested, fix problems that crop up, get it through QA testing, and then into Operations hands. And as noted before, deployment itself can take several months due to other projects on my plate and tasks on other groups plates.

The CI/CD process means that there’s a pipeline between Development into Operations. A flow, picture a stream. A problem is discovered, it’s passed back to development, they fix it, put it into the flow, automated testing is performed to make sure it doesn’t break anything, it’s put into the production support environment, then production. Rather than taking months to assemble and fix a large release, you have a single minor release and you can identify, fix, and deploy it into production in a matter of minutes. Even better, the Developer or Engineer would be on call for their product. If something happened, they’d be paged along with the teams, could potentially fix it right then, put it into the flow and let the fix get deployed. Automated testing is key here though and part of the fix is to fix or add new tests to make sure it doesn’t happen again or even with other products.

This process is pretty similar to how a single programmer working on a personal project for example might manage their program(s). I have a project I’ve been working on for years and has reached about 140,000 lines of code (actually about 98,000 lines of actual program with the remaining 42,000 lines being comments or blank lines). I’ve created a personal pipeline, without automated testing though. And an identified problem can be fixed and deployed in a few minutes. New requests can generally take about 30 minutes to complete and push out to the production locations.

This CI/CD oriented project was interesting. Plus it included creating a Kubernetes environment. I’d had someone comment about DevOps several years earlier so I’d been poking about at articles and even reading The Phoenix Project, a book about CI/CD and DevOps. A second project followed using CI/CD and Kubernetes. A part of the CI/CD process was utilizing Ansible and Ansible Tower, a tool I’ve been interested in using for a few years.

As a Systems Administrator, one of our main responsibilities is to automate. If you’re doing a task more than once, automate it. Write a script. I started out hobby programming, then working part time and then full time, back in the early 80’s. Even though I moved into system administration, I’ve been hobby programming and writing programs to help with work ever since. I believe it’s been helpful in my career as a Server Administrator. I have deeper knowledge about how programs work and can write more effective scripts to manage the environment.

In the new Build environment, it’s been even more helpful. It lets me focus even more on learning automation techniques with cloud computing (I’ve created Amazon Web Services and Google Cloud Services accounts) and working towards automating server builds. Years back I started creating “Gold Image” virtual machine templates. I also started in on scripting server installations. The rest of the team has jumped in and we have an even better installation script framework. Lately I’ve been working on a server validation script. For the almost 1,000 systems we manage, the script is reporting on 85,000 successful server checks with errors called out that can be corrected incrementally by the team (a local login report plus a central reporting structure). As part of this, I’ve started creating Ansible Playbooks and added scripts to our Script framework to make changes to systems. Eventually we’ll be able to use this script to create an automation process to make sure servers are built correctly the first time.

It’s been fun. It’s a challenge and sort of a new way of doing things. I say “sort of” because it still falls into the “automate everything” aspect of being a Systems Admin. With Virtual Machines and Docker Containers, we should be able to further automate server builds. With scripts defining how the environment should look, we should be able to create an automatic build process. Plus we should be able to create a configuration management environment as well. A central location defining a server configuration that ensures the server configuration is stable. You make a change on the server and the configuration script changes it back to match the defined configuration. Changes have to be made centrally.

DevOps though. It seems like this is really “Development learns how to do Operations”. “If you were smart, you’d be in Engineering.” Many of the articles I read, many of the posts on the various forums, many of the discussions I’ve followed on Youtube and in person really seem to be focused on Development becoming Operations. There doesn’t seem to be a lot of work on how to make Operations part of the process. It seems to be assumed that a Systems Admin will move into Engineering or Development or will move on to an environment that still uses the old methods of managing servers. Alternately we’ll be more like COBOL programmers. A niche group that gets called in for Y2K like problems.

I like writing code and I’m certainly pursuing this knowledge. I have a couple of ESX servers at home with almost 100 VMs to test Kubernetes/Docker, work on how to automate server deployments, work on different pipeline like products like Jenkins and Gitlab. I use it for my own sites in an effort to get knowledgeable about such a role and am starting to use them at work. Not DevOps as that’s not a position or role. That’s a philosophy of working together. A “DevOps Engineer” is more along the lines of an Engineer working on pipeline like products. Working to get Product automated testing in place but also working on getting Operations related automated testing in place.

If you’re not taking time to learn this on your own and the company isn’t working on getting you knowledgeable, then you’re going to fall by the wayside.

Posted in Computers