Recruiters Are Used Car Salesmen

For the past couple of years I’ve had my resume out and have occasionally interviewed. The interviews are actually particularly helpful as they give me a view into what companies are looking for, so I can focus on the skills I might need in the event of a layoff.

Of course the recruiters swarm like paparazzi. Virtually none of them have actually read my resume or profile; they just see my name on LinkedIn, Dice, Indeed, Monster, or some other site, do a keyword search (sort of), and start spamming me with positions that have little real relevance to my skill set.

They see ‘DevOps’ in my resume and spam me with every available DevOps position from Huntington Beach to the Jersey shore, and every possible arrangement from 3-months-to-hire to 12-months-and-done.

Hey, I have a 3 month position in New Jersey for $20.00 an hour and a possibility to hire. Call me and we can talk!

Folks, I have a limit of about an hour for one-way commuting. This means I’m self-limited to these three locations. A position that’s 90 minutes away one way, or located downtown, isn’t something I’m going to jump from my current job for.

And skills. Windows Administration required. I haven’t done any Windows administration since the early ’90s. It’s not on my resume or profile anywhere.

Don’t worry about the Windows requirement. You can pick it up.

You don’t seem to understand. I’m not interested in learning how to manage Windows servers. I am interested in learning PowerShell though; it looks interesting.

And Amazon Web Services. No AWS on my resume anywhere. Yes, I want to learn some cloud services but my only options are learning at home or getting into a position where I can learn AWS.

Recently some 15 different recruiters contacted me about the same position within commuting distance, but it was a short-term-only position with no option to be hired at the end of the contract. My profile does say “Full Time Only”. It must have had a pretty good placement fee.

Many of these recruiter queries are obviously form letters. I get the exact same job posting with a slightly-off-font insertion of the recruiter’s name and mine.

I’ve had a couple of interviews that were arranged by recruiters. For one last year, the gentleman actually met me for lunch so we could chat. The interview he arranged looked good but fell through. He said he’d contact me again with new positions and I’ve not heard from him since. Another continues to send me positions. When I reply back with my location preference, he says he just sends every position to everyone on his list in the hopes something will stick.

I do reply to many of these and get an occasional response thanking me for my reply so there’s someone there.

So yes, recruiters are Used Car Salesmen: just trying to get you into a position, regardless of fitness or need, so they can get paid.


Award Winning Guacamole

  • 3 Hass avocados, halved, seeded and peeled
  • 1 lime, juiced
  • 1/2 teaspoon kosher salt
  • 1/2 teaspoon ground cumin
  • 1/2 teaspoon cayenne
  • 1/2 medium onion, diced
  • 1/2 jalapeno pepper, seeded and minced
  • 2 Roma tomatoes, seeded and diced
  • 1 tablespoon chopped cilantro
  • 1 clove garlic, minced

In a large bowl, place the scooped avocado pulp and lime juice and toss to coat. Once all of the avocados are coated, drain and reserve the lime juice. Add the salt, cumin, and cayenne and mash with a potato masher. Then fold in the onions, tomatoes, cilantro, and garlic. Add 1 tablespoon of the reserved lime juice. Let sit at room temperature for 1 hour and then serve.


Taco Soup

Recently I wanted to find something I could make with hamburger that wasn’t hamburgers or spaghetti. After some hunting online, Taco Soup sounded pretty tasty. A couple of sessions later, I had what I thought was a pretty good recipe, and one that’s simple to double for larger groups (when the band is over, for example 🙂 ).

  • 2 pounds of hamburger
  • 1 15oz can of sweet corn, drained
  • 1 15 oz can of pinto beans, drained
  • 1 15 oz can of fire roasted diced tomatoes
  • 1 medium onion
  • 1 package of Old El Paso Original taco seasoning mix
  • 1 heaping 1/4 teaspoon of cayenne pepper to taste (medium hot; adjust as required)
  • 1 1/2 cup of water

In a medium pot, combine the corn, beans, tomatoes, taco seasoning, cayenne pepper, and water and cook on medium heat.

Chop up the onion. In a frying pan, add oil (I generally use olive oil) and cook the chopped onion for about 5 minutes, until it starts to turn translucent. Then add the hamburger and cook it until done.

By then, the pot of ingredients should be slowly boiling. Add the hamburger and onion to the pot, cover and cook for 15 to 20 minutes.


Current Home Servers

This has come up several times and I can’t always remember all the servers I have set up for one reason or another. Usually whatever subject I’m on has me listing the relevant servers plus a few others I remember. But I’d like to have, at least, a snapshot in time of what I have running in case it comes up again. I can just point to my blog post 🙂

The physical environment consists of:

  • Dell R710
      • 192 GB of RAM
      • 2 8-core Xeon X5550 2.67 GHz CPUs
      • 2 143 GB 10,000 RPM drives set up as a RAID 1 (mirror)
      • 4 3 TB 7,200 RPM drives set up as a RAID 5
      • 4 onboard Ethernet ports
      • 1 4-port 1 Gigabit PCI card
      • 1 10 Gigabit PCI card
      • 2 2-port Fiber HBA PCI cards
  • Dell R710
      • 288 GB of RAM
      • 2 6-core Xeon X5660 2.8 GHz CPUs
      • 2 143 GB 10,000 RPM drives set up as a RAID 1 (mirror)
      • 4 3 TB 7,200 RPM drives set up as a RAID 5
      • 4 onboard Ethernet ports
      • 1 4-port 1 Gigabit PCI card
      • 1 10 Gigabit PCI card
      • 2 2-port Fiber HBA PCI cards
  • Sun 2540 Drive Array
      • 12 3 TB drives

I also have a couple of UPSs with enough power to keep the servers up for 20 minutes or so while I get things shut down, along with a Gigabit business switch. Since we’re on high-speed WiFi for our Internet, this is all internal networking and not intended for external use.

I’ve installed VMware vSphere on the two R710’s and, through VMUG (the VMware User Group), I purchased a vCenter license to tie them both together.

I’ve since created some 45 servers (and destroyed a lot more over time), spinning up servers to test various bits I’m interested in.

First off, my development environment. This consists of my source code and CI/CD stack.

  • Home Dev – Server using rcs and my scripts to manage code. This is my personal code. This also hosts the development review of my personal web sites. Testing changes and such.
  • Work Dev – Server using rcs and my scripts to manage code. This is a duplicate of my environment at work. I use the same scripts with just a different configuration file for the environment. Like Home Dev, this hosts the development review of my work web sites.
  • Git – Server using git for my code management. I’m gradually converting my code on both Home Dev and Work Dev to using git to manage code.
  • Gitlab – Part of the CI/CD stack, this is the destination for my git projects.
  • Artifactory – The artifacts server. This holds distribution packages, docker images, and general binaries for my web sites.
  • Jenkins – The orchestration tool. When changes occur on the Gitlab site, Jenkins pushes the changes up to my Production server hosted in Miami.
  • Photos – This is my old source code and picture site. Much of this has been migrated to my Home Dev server and is on the way to my Git server and CI/CD pipeline.

Next up are the database servers used for various tasks.

  • Cassandra – Used by Jeanne to learn the database. Several of the database servers are learning tools for either Jeanne or myself.
  • MySQL Cluster (2 Servers) – Used by me to learn and document creating a cluster and to start using it for my Docker and Kubernetes sessions.
  • Postgresql – Jeanne’s learning server.
  • Postgresql – Server used by both Jira and Confluence.
  • MS-SQL – Jeanne’s Microsoft learning server.

Monitoring Servers come up next.

  • Nagios Servers (3) – Used to monitor the three environments. The first monitors my remote Miami server. The second monitors site one. And the third monitors site two.

And the Docker and Kubernetes environment

  • Docker Server – Used to learn how to create containers using Docker.
  • Control Plane – The main Kubernetes server that manages the environment and Workers
  • Workers (3) – The workers that run Docker and the Kubernetes utilities.
  • ELK – The external logging server for Kubernetes and Docker. Since Docker containers are ephemeral, I wanted an external logging source to keep track of containers that might be experiencing problems.

Next Automation servers.

  • Ansible Tower – Site 1 Ansible server that also hosts Tower.
  • Ansible – Site 2 Ansible server. Used to automatically update the servers.
  • Salt – Configuration Management tool used to keep the server configurations consistent.
  • Terraform – Server for automatic builds of VMs.

Some utility or tool servers. Used to manage the environment.

  • Sun 2540 – VM used to manage the 2540 Drive Array
  • Jumpstart – Jumping off point to manage servers.
  • Tool – Site 1 Tool server. Scripts and such for the first site.
  • Tool – Site 2 Tool server. Scripts, etc…

More general administration servers.

  • Identity Management – A central login server.
  • Syslog – The main syslog server for the environments.
  • Spacewalk – The RPM Repository for the servers. Instead of each of the 45 servers going out to pull down updates, updates are pulled here and the servers pull from the Spacewalk server.
  • Jira – Agile server for managing workflow.
  • Confluence – Wiki like server. Tied into Jira.
  • Mail – Internal Mail server. Mainly as a destination for the other servers. Keeps me from sending email out into the ether.
  • Inventory – Server that keeps track of all the servers. Configuration management essentially.
  • pfSense – Firewall and gateway to the Internet

And finally the personal servers.

  • Plex Movie Server – Hosts about 3 TB of movies I’ve ripped from my collection.
  • Plex Television Server – Hosts about 3 TB of television shows I’ve ripped from my collection.
  • Backups – Backs up the remote Miami server.
  • Samba – Back up server for all the workstations here at Home.
  • Windows XP – A workstation that lets me continue to use my HP scanner and Sony Handycam, both of which only work with XP.
  • Windows 7 – No real reason other than I can.
  • Status – My Status Management software. Not really used right now.

Managing Dell Fan Speeds

Since the Dell servers are in a rack behind me, I wanted to better manage the fan speeds. Dell defines an Error temperature, a Warning temperature, and an Ambient temperature, which is basically where Dell tries to keep the server. Error triggers below 3C or above 47C, Warning triggers below 8C or above 42C, and Ambient is 25C. Of course, this means the fans speed up and down throughout the day, which can be a little annoying if you’re trying to work 🙂 So I did some hunting online and found out how I can manually manage the speeds.

First off, you need to have ipmitool installed. This lets you communicate with the Dell IPMI controller. Next, you need to enable IPMI in the iDRAC: under iDRAC Settings, Network/Security, check the Enable IPMI Over LAN checkbox and Apply.
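
Once that’s enabled, a quick sanity check that the controller is answering over the LAN (host, user, and password are placeholders, as in the examples below):

# /bin/ipmitool -I lanplus -H [R710 ip address] -U [user] \
   -P [password] mc info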

With your iDRAC credentials in hand, you use ipmitool to first get the current temperature and then, using hex values, set the fan speeds you desire. I have two systems, a Dell R710 and a Dell R410. The options are slightly different between the two.

# /bin/ipmitool -I lanplus -H [R710 ip address] -U [user] \
   -P [password] sensor reading 'Ambient Temp'
Ambient Temp      | 27

For the R410, you can’t query the sensor directly on the ipmitool command line; you have to grep it out of the full sensor list. Three entries are returned in my case, but the last one is the important one.

# /bin/ipmitool -I lan -H [R410 ip address] -U [user] \
    -P [password] sensor | grep 'Ambient Temp' | tail -1
Ambient Temp     | 25.000     | degrees C  | ok    | na        | 3.000     | 8.000     | 42.000    | 47.000    | na

I’m really only modifying the fan speeds so the following table will let you make the necessary setting change. I basically don’t want the speeds to get above 50% of max so I didn’t figure out the rest. Maybe later 🙂

R710 fan speed settings
   Speed         Setting (binary / hex / my temp threshold)
13.0k = 100% - 0110:0100 0x64
11.7k =  90% - 0101:1010 0x5A
10.4k =  80% - 0101:0000 0x50
 9.1k =  70% - 0100:0110 0x46
 7.8k =  60% - 0011:1100 0x3C
 6.5k =  50% - 0011:0010 0x32 30C
 5.2k =  40% - 0010:1000 0x28 28C
 3.9k =  30% - 0001:1110 0x1E 25C
 2.6k =  20% - 0001:0100 0x14 22C
 1.3k =  10% - 0000:1010 0x0a 12C
   0k =   0% - 0000:0000 0x00 10C

For the R410, the fans run a bit faster but use the same settings and percentages.

R410 fan speed settings
   Speed         Setting (binary / hex / my temp threshold)
18,720 = 100% = 0110:0100 0x64
16,848 =  90% = 0101:1010 0x5A
14,976 =  80% = 0101:0000 0x50
13,104 =  70% = 0100:0110 0x46
11,232 =  60% = 0011:1100 0x3C
 9,360 =  50% = 0011:0010 0x32 30C
 7,488 =  40% = 0010:1000 0x28 28C
 5,616 =  30% = 0001:1110 0x1E 25C
 3,744 =  20% = 0001:0100 0x14 22C
 1,872 =  10% = 0000:1010 0x0A 12C
     0 =   0% = 0000:0000 0x00 10C

I have a script I use to manage the fan speeds throughout a working day or basically when I’m at the desktop practicing stuff, playing a game or whatever. First I take manual control of the IPMI configuration on the servers. Then make the appropriate fan change when servers reach certain temperatures. At 28C or higher, I kick the fans to 40%. At 24C or lower, I drop the fans to 20%. In between I keep the fan speeds at 30%. This is the same regardless of which system I’m running the script against.

In addition, between 8pm and 7am, I revert back to automatic management of the fans. This lets the server manage the temperatures mainly because I’m likely not at my desk.

First, get the current temperature. I showed you this above. Next, take over control of the fans.

$ /bin/ipmitool -I lanplus -H [R710 ip address] -U [user] \
    -P [password] raw 0x30 0x30 0x01 0x00

Then set the fan speed based on the temperature as noted above.

$ /bin/ipmitool -I lanplus -H [R710 ip address] -U [user] \
    -P [password] raw 0x30 0x30 0x02 0xff [speed]

Fan speed is in hex as displayed in the above table. 0x28 for 40%, 0x1E for 30% and 0x14 for 20%.
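
Handily, the setting is just the percentage written in hex, so you can compute any speed you want rather than memorizing the table:

$ printf '0x%02X\n' 40
0x28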

When you want to re-enable automatic management of the fans (the 8pm to 7am setting):

$ /bin/ipmitool -I lanplus -H [R710 ip address] -U [user] \
    -P [password] raw 0x30 0x30 0x01 0x01

And that’s it. The R410 uses lan instead of lanplus but works the same in all other ways. And of course you can script out whatever works best for your fan speed desires and peace of mind (and sanity 🙂 ).
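
For reference, here’s a minimal sketch of the kind of script I described, with the same placeholder host and credentials as above (an R410 version would use -I lan and the grep/tail sensor query instead):

#!/bin/bash
# Sketch: set R710 fan speeds based on the ambient temperature.
IPMI="/bin/ipmitool -I lanplus -H [R710 ip address] -U [user] -P [password]"

HOUR=$(date +%H)
# Between 8pm and 7am, give control back to the iDRAC and exit.
if [ "$HOUR" -ge 20 ] || [ "$HOUR" -lt 7 ]; then
    $IPMI raw 0x30 0x30 0x01 0x01
    exit 0
fi

# Current ambient temperature (the last field of the sensor line).
TEMP=$($IPMI sensor reading 'Ambient Temp' | awk '{print $NF}')

# Take manual control of the fans.
$IPMI raw 0x30 0x30 0x01 0x00

# 40% at 28C or higher, 20% at 24C or lower, 30% in between.
if [ "$TEMP" -ge 28 ]; then
    SPEED=0x28
elif [ "$TEMP" -le 24 ]; then
    SPEED=0x14
else
    SPEED=0x1E
fi

$IPMI raw 0x30 0x30 0x02 0xff $SPEED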


Adding Projects To Jenkins

A majority of my projects are simple websites or scripts. Nothing too complicated.

My binary files are located on my development server. Jenkins combines the git repo and binary files into a single distribution which is then sync’d up to the target host.

  1. Don’t forget: no spaces in project names.
  2. Create a standard Freestyle project (the first option).
  3. When the configure page comes up, select ‘git’ and enter the git information.
  4. Select Poll SCM. Either * * * * * for checking every minute or some variation. My first project is minute by minute; the current one is every hour.
  5. Next, add a build step of Execute Shell and add the necessary lines to collect the site and then sync it to the target host (see the example after this list).
  6. Save it.
  7. Now on the target host, create a jenkins account and change the ownership of the target directory to ‘jenkins:jenkins’.
  8. Finally, select ‘Build now’ in Jenkins.
  9. In the project page, click the down arrow for the current build and view the console.
  10. Resolve any errors.
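
For step 5, the Execute Shell step might look something like this; the dev server name and paths here are examples, not my actual setup:

# Pull the binary files in next to the checked-out repo,
# then sync the combined distribution to the target host.
rsync -av devserver:/opt/binaries/mysite/ "$WORKSPACE/binaries/"
rsync -av --delete "$WORKSPACE/" jenkins@targethost:/var/www/mysite/

$WORKSPACE is the job’s checkout directory, which Jenkins sets for every build.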


Kubernetes WorkerNode Sizing

Overview

For the current Kubernetes clusters, I reviewed several industry best practices and went with a Worker Node resource configuration of 4 CPUs and 16 Gigabytes of RAM per worker node. In addition, I have a spreadsheet that describes all the nodes in each cluster in order to understand the resource requirements and properly extend the cluster or provision larger worker nodes.

This document describes the Pros and Cons of sizing Worker Nodes to help in understanding the decision behind the worker node design I went with.

Note that I’ve expanded my original document, as the author of the article linked at the end of this post made me think about my own decision and confirmed my reasoning behind my choices. In rewriting my document, I ignored the cloud considerations from the original article since the clusters I’m designing are on-prem and not hosted in the cloud at this time. We may revisit the article at a later date should we migrate to the cloud.

Considerations

Probably the key piece of information you’ll need to guide your decision is the resource requirements of the microservices that will be hosted on the clusters. If a monster “microservice” requires 2 vCPUs, then a 2 vCPU worker node won’t do the trick. Even a 4 vCPU node might be a bit problematic, since other microservices will likely need to run on the same workers. Not to mention any replication requirements or autoscaling of the product. These are things you’ll want to be aware of when deciding on a worker node size.
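
If you want those requirements stated explicitly so the scheduler can do the math for you, requests and limits on the deployment are the place to put them. A hypothetical example (the deployment name and values are made up):

kubectl set resources deployment monster-service \
    --requests=cpu=2,memory=4Gi --limits=cpu=2,memory=4Gi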

Large Worker Nodes

Let’s consider the Pros and Cons of a large worker node.

Pro: Fewer Servers

Simple enough, there are fewer servers to manage. Fewer accounts, fewer agents, and less overhead overall in managing the environment. There are fewer servers that will need care and feeding.

Pro: Resource Requirements

The control plane and overhead resource needs shrink with fewer worker nodes. A 4 worker node cluster with 10 CPUs per node gives you a 40 CPU cluster, or 40,000 millicpus. If the Operating System and associated services reserve 5% of CPU per worker node, a 4 worker node cluster loses about 2,000 millicpus, or 2 CPUs, bringing the cluster to 38 CPUs for the hosted microservices. In addition, the control plane has less work to do. Every worker node is registered with the control plane, so fewer workers means fewer networking requirements and a lighter load on the control plane.
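
You can see that reservation on a live cluster by comparing each node’s raw capacity against what Kubernetes considers allocatable (the column names here are just labels):

kubectl get nodes -o custom-columns=\
NAME:.metadata.name,CPU:.status.capacity.cpu,ALLOC:.status.allocatable.cpu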

Pro: Cost Savings

If you’re running a managed Kubernetes cluster such as Rancher or OpenShift, costs generally track the number of CPUs in the cluster, so node size determines how efficiently you can buy capacity. Smaller nodes let you be more flexible, but could increase the total CPU count and therefore cost more in the long term; on the other hand, putting in a 5th 10 CPU node might be less cost effective, especially if you only need 2 or 4 more CPUs.

Con: Lots of Pods

Kubernetes does have a limit of about 110 pods on a node before the node starts having issues. With a few large nodes, the likelihood of too many pods landing on a single worker increases, which can affect the hosted services. The agents on the worker node also have a lot more work to do: Docker is busier, kubelet is busier, and in general the worker node works harder. Don’t forget the various probes performed by kubelet, such as liveness, readiness, and startup probes.
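
A quick way to see how close each node is to that limit is to count the running pods per node; with -o wide and --all-namespaces, the node is the 8th column, so something like:

kubectl get pods --all-namespaces -o wide --field-selector=status.phase=Running \
    | awk 'NR>1 {print $8}' | sort | uniq -c | sort -rn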

Con: Replication

If you’re using Horizontal Pod Autoscaling (HPA) or just configure the deployment with multiple pods for replication purposes, fewer nodes means your effective replication is capped at the number of nodes. If you have a product with 4 replicas and only a 3 node cluster, you effectively only have 3 replicas. There are 4 running pods, but if a node fails while hosting two of the replicas, you’ve just lost 50% of your service.
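
A simple way to check how the replicas actually landed across nodes (the label selector here is hypothetical):

kubectl get pods -l app=myproduct -o wide

Pod anti-affinity or topology spread constraints can push the scheduler to keep replicas on separate nodes, but with fewer nodes than replicas there’s simply nothing left to spread onto.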

Con: Bigger Impact

A pretty simple note here, with fewer nodes, if a node does fail more pods are affected. The service impact is potentially greater.

Smaller Worker Nodes

In reading the above Pros and Cons, you can probably figure out the Pros and Cons of smaller worker nodes but let’s list them anyway.

Pro: Smaller Impact

With more, smaller nodes, if a node goes away, fewer pods and services are impacted. This is important when nodes need maintenance, such as patching and upgrades.

Pro: Replication

This assumes you have a high number of pods for a service. More, smaller nodes ensure you can both run multiple pods and use HPA to automatically scale out further if needed. And with smaller nodes, one node failing doesn’t significantly affect a service.

Con: More Servers

Yes, there are more servers to manage. More accounts, more agents, more resource usage. In an environment where Infrastructure as Code is the rule, more nodes shouldn’t have the same impact as if you were manually managing the nodes.

Con: Resource Availability

With more nodes, the control plane overhead increases and the overall cluster size might be affected. For example, with my design of 4 CPU worker nodes, a 10 node cluster reserves the same amount of resources as a 4 node, 10 CPU-per-node cluster: about 2 CPUs of overhead across the cluster. But as noted, the cluster management overhead increases as more worker nodes are added, with more networking as every node talks to every node.

Con: Pod Limits

Again, this is related to how many resources a product uses. With smaller nodes, there’s a chance of enough resource fragmentation that pods might not be able to start if the node is too small relative to the microservice requirements. If the microservices are truly “micro”, then smaller nodes could be fine, but if the microservices are moderately sized, nodes that are too small will strand resources and services may not start.

Conclusion

As noted initially, knowing the resource needs of your microservices will give you the best guideline on worker node size. Consider other factors as well, such as the cost per CPU of a node when spread out vs all together. And don’t forget, you can swap larger worker nodes into a cluster that turns out to be too small, especially if they’re virtual machines: power down a node, increase its CPU and RAM, and bring it back up.

References

https://learnk8s.io/kubernetes-node-size


Guitar Dreams

Probably a common dream for guitarists 🙂

We’re loading out to go to a gig. I ask someone to grab my stuff and we head over.

Small hole in the wall place, narrow and a little dark, large window at the front and I can see across the street a corner shop.

We’re playing but I can’t seem to get any volume. I walk up to a short, wide amp, black-faced and a little beat up with worn corners, turn down my guitar volume and turn up the amp. I go back and try to turn up the guitar but it’s still muted sounding. We’re playing Bon Jovi’s You Give Love A Bad Name.

I can’t seem to remember how to play and I’m slow, the neck is insanely wide, almost like I’m trying to play the body of the guitar, so I have to reach around to press on the strings. I hear where I’m supposed to be but can’t seem to get there.

The guys are looking at me as if to ask, “What the fuck?”

I realize my pedal board is missing. The guy must have missed grabbing it when he grabbed my guitar. I’m playing clean but getting no volume. The guitar has knobs that add distortion but still no volume.

Okay, the owner is annoyed, the customers are annoyed and don’t want to hear Bon Jovi. Okay, Killing In The Name. I tune my guitar to drop D, but without my board I don’t have the opening flanger sound. I give it my best shot though.

What’s this though? I hit the lead-in notes, *flang* *flang* *flang*, but it changes to *dah* *dah* *dah-da* *dah-da* *dah* (Bon Jovi). And still a too-wide neck, and no volume.

The customers are down to a handful and the guys break out an envelope with song requests, two typewritten columns, about 8 songs in each, and the owner turns up the music player. I’m looking over the guys’ shoulders at the songs, hoping I can catch the chord changes so I can muddle along.

It’s getting dark and the shop I can see across the street has their roll down cage in place.


State Of The Game Room – 2018

During the past year, because of my band taking off, the game room was moved from the central room to the music room and the music room moved to the central room. There’s a lot more room for the guys to play and there’s sufficient room in the game room, formerly music room, to game. Currently I have a Shadowrun group that plays every Sunday. It does seem a touch tight but we’re really sitting around the table so it’s not terrible.

Of course, moving everything was a bit of a pain, but I got it done in a week and the guys helped with the bigger Kallax shelves. Generally when I get new games, they’re stuck every which way on top of the shelves or on the games in the shelves. Most recently, in preparation for the game room reorganization, I picked up a few more 1×4, 2×2, and 2×4 shelves to better organize the games. It just goes to show that games will fill every available space 🙂

I also took on the task of ensuring all my games were in my inventory, noting for each whether I had actually found it (there are about 15 or so that I haven’t been able to find since the move) and whether or not I’d played it. I will remind folks who goggle at the list that it just shows that I’ve played a game, not how many times I’ve played it. A few hundred plays over 50 years doesn’t seem like a lot, but taking into consideration dozens if not hundreds of plays for some games puts some perspective in the data 🙂

Since I updated the inventory, I can’t give you a definitive number of new games for the year. I’ve picked Gloomhaven as a starting point for 2018 as it was released in mid January. Since Gloomhaven, I’ve added 338 items to the inventory.

List of the RPG game system purchases:

* Shadowrun – The bulk will be fill-it-in digital purchases as my gaming group started this past summer and I wanted to have all the digital stuff ready for the group (I tend to play with a laptop and iPad vs a gigantic collection of books; but the books are available if needed).
* Dungeons and Dragons.
* Paranoia – I kickstarted the new Paranoia and hit NobleKnight to fill in my collection. I’d only picked up a few Mongoose books and needed to fill things in.
* Starfinder
* Star Trek – New system that looks interesting. I may spring it on the group at a later date.
* Conan 2d20 – I’m a big fan of Conan anyway so this was a must have
* Star Wars
* Pathfinder
* Genesys

That drops the list down to 83. List of the card game purchases and updates:

* Arkham Horror the card game
* Netrunner the card game
* Clank
* DC Deck Building
* Exploding Kittens
* Joking Hazard
* Munchkin

This brings down the number of board games to 57. Of those, actual new ones (vs hitting the used game store) are:

* Arboretum
* Betrayal Legacy
* Captain Sonar
* Cave vs Cave (Caverna)
* Chicago Express
* Dragon Castle
* Forbidden Sky
* Gloomhaven
* Founders of Gloomhaven
* Sanctum of Twilight (Mansions of Madness)
* Roll for the Galaxy
* The Rise of Fenris (Scythe)
* Shadowrun Zero Day
* Spoils of War
* Colonies (Terraforming Mars)
* Prelude (Terraforming Mars)
* Rails and Sails (Ticket to Ride)
* Triplanetary (Kickstarter)
* Green Horde (Zombicide)

With everything else, we only got to play a few of the above games (bolded). My band had its first gig in August so we were practicing hard 🙂 I’ve also spent more time on my Shadowrun group and especially the program I created and have been frantically updating for Shadowrun 5th Edition.

Some of the Reddit Boardgaming questions answered here:

How long have you been involved in the hobby?

Since I was young, so about 50 years.

What would you change about your collection if you could?

Tough question. I get games because I’ve played them with others or they’re recommended. Since it’s generally just Jeanne and me, more two player games would be optimum. The only real change would be paying more attention to the number of players for the game. The Thing is an awesome game but really needs 6 or so players for it to work.

Which games might be leaving my collection soon?

Leave? You can’t leave me. None actually. I went through a phase back in the mid ’90s where I almost sold off what I had, but the company (since bought by Noble Knight) only offered me pennies on the dollar (and only a few), and I would have had to ship everything to them, killing even more of the value, so I basically just kept them while I got more into video games. Now I’m glad I kept them, if only for nostalgia.

What haven’t I played?

Well, most of the games I bought before 2006 I’ve played. I gradually picked up more games between 2006 and 2012 and then really started exploring board games. I also have a pretty large collection of RPG books, mainly because I used them for ideas for the AD&D 1st Edition game I ran for many years. In checking my game report, as of right now:

* Board Games: Inventory: 361, Played: 156, Percentage Played: 43.21
* Card Games: Inventory: 107, Played: 46, Percentage Played: 42.99
* Role Playing Games: Inventory: 276, Played: 31, Percentage Played: 11.23

The RPG stats are a bit misleading as well, since I generally have multiple copies of core rule books for use at the table or for various editions. For instance, I have probably 25 Shadowrun core books from 1st Edition up through 5th Edition. Same with D&D: I have several DM’s Guides from 1st Edition up through 5th Edition. Both include collectable editions (the numbered Shadowrun books, for instance) and even a few books in different languages. I have Spanish and German Shadowrun core books.

In looking at my report, including every game, module, and expansion, I have 3,005 items. This doesn’t include dice though which exceed that by at least another 1,000 🙂

What are your favorite games?

This shifts so much that it’s hard to pin down. Cosmic Encounters, Car Wars, Shadowrun, Ticket to Ride, Splendor, Red Dragon Inn, Elder Sign, Eldritch Horror, Netrunner, Castles of Burgundy, Pandemic Iberia, Bunny Kingdom, DC Deck Building, the list goes on.

Favorite Boardgame of the past year?

I’d say Ticket to Ride Rails and Sails was the one we enjoyed playing the most. We introduced it to several others who have also purchased the game.

Most played boardgame?

Probably something like Cosmic Encounters outside something like Risk or Monopoly 🙂

What is your least favorite game?

That’s a tough question in that most games are somewhat fun and/or interesting. Probably the one that disappointed me most was Fragged. It’s the Doom video game turned into a board game and it really doesn’t translate well. We played it once and decided we’d rather play the video game 🙂

I will say that we did try several of the Legendary card games and really had a hard time playing them. I particularly like Big Trouble in Little China but the Legendary game we played just killed the game itself for us and I haven’t picked up another since then.

Next boardgame purchase?

No real plans on a specific board game. I mainly look forward more to the next Shadowrun book.

On to the pictures! I have 4 5×5 Kallax shelves. Three 2×4 Kallax shelves. Two 2×2 Kallax shelves. And 5 1×4 Kallax shelves. That’s 152 Kallax squares (not all filled though). Pictures are basically 3 or 4 squares wide and two shelves high so there are a lot of pictures.

Full Gallery: Photo Gallery

Come on in!

It is a bit narrow but it’s a game room. As long as tables and such fit, it should be good.

The top row here holds some miscellaneous stuff including, on the right, my Dad’s old chess set. This is the first shelf on the left as you enter the room.

Next shelf.

This is a side shelf to the second shelf. Just trying to make use of the space.

You can’t really see this shelf from the door due to the angle of the room.

One of the 2×4 + 2×2 shelves.

And the other 2×4 + 2×2 shelf.

This unit is one where I might push the top shelf back a little. Lots of RPG books in this one, so it’s a bit heavier than the others. That’s generally my Netrunner, Shadowrun, Arkham Horror, and Force of Will cards on the left.

And the final bookshelf. The top here also has miscellaneous stuff. Card decks I haven’t put into the above deck boxes, Cards Against Humanity, etc. On the right are a few more card decks including my Xxxenophile deck.

And behind the door, are my boxes for other miscellaneous stuff. Dice, card games, miniatures, Wings of War planes, playmats, etc.

And finally, my gaming table. Currently set up for Shadowrun. I have a second table I’m considering setting up for regular board gaming.

And that’s the list.


Differences between Shadowrun 4th/20th Anniversary Edition and Shadowrun 5th Edition

Just documenting the differences between the two, since I ran 4th for quite a few years and am starting in on 5th. I’ll likely expand things as I gather more information, but at least the initial parts will be bullet points.

* Lots of price changes in gear but mainly bringing 5th to 3rd Edition levels. Lots of the 4th prices were dropped quite a bit.
* Limits have been added. These put a cap on the total number of successes you can achieve when rolling dice. There are several, but Physical, Mental, and Social are the main ones; they’re derived from your Attributes. When you roll 15 dice for something, the limit is the maximum number of successes you can count, regardless of the roll.
* Accuracy has been added to weapons. Accuracy is basically the same as Limits in that it dictates the maximum number of successes you can achieve.
* Spell Damage Resistance is Force – (some number) where in 4th it was (Force / 2) – (some number). The (some number) values are mostly positive where in 4th there were some positive and some negative (some numbers).
* Melee weapon Damage is Strength + (some number) vs 4th which was (Strength / 2) + (some number).
* Initiative in 4th was (Intuition + Logic) number of d6 dice. In 5th it’s (Intuition + Logic) as a base number and you roll 1d6 to increase the base.
* Cyberware and Bioware used to add Initiative Passes, giving you extra turns. In 5th they add extra d6 rolls to the Initiative dice pool, to a maximum of 5d6. Passes now work like 3rd Edition: say the rolled totals are 22, 19, 16, 13, 10, 9, and 7 for the first Initiative pass; then 10 is subtracted and the folks who still have positive Initiative values (12, 9, 6, 3) go again; then 10 is subtracted again and the rest (2) go one more time.
* Hacking in 4th was attempted at the User, Admin, and Security levels. In 5th you’re placing “Marks”, which give you the equivalent levels of access (3 Marks max), with 4 Marks representing the owner of the system.
* There are Cyberdecks in addition to Commlinks. Commlinks revert to being equivalent to smartphones with basic functionality and ‘decks have the extra common and hacking programs.
* Cyberdecks have four attributes however they’re reconfigurable on boot of the device and through a program, reconfigurable while active.
* In 4th, you’d combine a program’s rank and the appropriate commlink value to have a Dice Pool in order to complete a program task. In 5th, Programs simply provide bonuses to a separate list of actions. In 4th for example, you’d use the Data Search active skill plus the Browse program to create the dice pool and you’d need n successes to succeed. In 5th the ‘deck has attributes you’d use such as Data Processing and combine it with the Computer active skill to create the Dice Pool. The Browse program still exists but its only function is to cut the time searching the Matrix in half.
* Armor has been combined into a single stat
* There’s a “Grid Overwatch Division” (GOD) that keeps an eye on the Matrix. Using a Deck over a longer period of time can attract their attention which is a bad thing.
* Glitches have a slight change. Where the rule was “half of your dice pool or more”, it’s now “more than half your dice pool”, so it’s slightly harder to glitch.
