Setting Up Kubernetes

In a series on my home environment, I’m next working on the Kubernetes sandbox. It’s defined as a 3 master, 5 minion cluster. The instructions I currently have seem to work only with a 1 master, n minion cluster (as many minions as I want to install). I need to figure out how to add 2 more masters.

Anyway, on to the configurations.

For the Master, you need to add in the CentOS repository for docker:

# vi /etc/yum.repos.d/virt7-docker-common-release.repo
[virt7-docker-common-release]
name=virt7-docker-common-release
baseurl=http://cbs.centos.org/repos/virt7-docker-common-release/x86_64/os/
gpgcheck=0

Once in place, enable the repo and then install kubernetes, etcd, and flannel:

# yum -y install --enablerepo=virt7-docker-common-release kubernetes etcd flannel

Update the kubernetes config file. The server names must be in either DNS or /etc/hosts (mine are in DNS):

# vi /etc/kubernetes/config
# Comma separated list of nodes in the etcd cluster
KUBE_ETCD_SERVERS="--etcd-servers=http://kube1:2379"

# logging to stderr means we get it in the systemd journal
KUBE_LOGTOSTDERR="--logtostderr=true"

# journal message level, 0 is debug
KUBE_LOG_LEVEL="--v=0"

# Should this cluster be allowed to run privileged docker containers
KUBE_ALLOW_PRIV="--allow-privileged=false"

# How the replication controller and scheduler find the kube-apiserver
KUBE_MASTER="--master=http://kube1:8080"

Edit the etcd.conf file. There are a bunch of entries, but most are commented out. Either edit the lines in place, or copy each line, comment out the original, and add the new value:

# vi /etc/etcd/etcd.conf
# [member]
ETCD_NAME=default
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379"

# [cluster]
ETCD_ADVERTISE_CLIENT_URLS="http://0.0.0.0:2379"

Edit the kubernetes apiserver file:

# vi /etc/kubernetes/apiserver
# The address on the local server to listen to.
KUBE_API_ADDRESS="--address=0.0.0.0"

# The port on the local server to listen on.
KUBE_API_PORT="--port=8080"

# Port kubelets listen on
KUBELET_PORT="--kubelet-port=10250"

# Address range to use for services
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"

# Add your own!
KUBE_API_ARGS=""

Configure etcd to hold the network overlay on the master. Use an unused network:

$ etcdctl mkdir /kube-centos/network
$ etcdctl mk /kube-centos/network/config "{ \"Network\": \"172.30.0.0/16\", \"SubnetLen\": 24, \"Backend\": { \"Type\": \"vxlan\" } }"
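If the escaped quotes in that one-liner are error-prone, a minimal alternative is to write the JSON to a file first and sanity-check it before loading. Same key and network values as above; the /tmp path is my own choice:

```shell
# Write the overlay config to a file to avoid the shell escaping; same values as above.
cat > /tmp/flannel-config.json <<'EOF'
{ "Network": "172.30.0.0/16", "SubnetLen": 24, "Backend": { "Type": "vxlan" } }
EOF
# Quick sanity check that the backend made it in; prints 1 on success.
grep -c '"Type": "vxlan"' /tmp/flannel-config.json
# Then load it and read it back (run against the etcd on the master):
# etcdctl mk /kube-centos/network/config "$(cat /tmp/flannel-config.json)"
# etcdctl get /kube-centos/network/config
```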

Update the flannel configuration:

# vi /etc/sysconfig/flanneld
# etcd url location.  Point this to the server where etcd runs
FLANNEL_ETCD="http://kube1:2379"

# etcd config key.  This is the configuration key that flannel queries
# For address range assignment
FLANNEL_ETCD_KEY="/kube-centos/network"

# Any additional options that you want to pass
FLANNEL_OPTIONS=""

Finally, start the services. You should see a green “active (running)” in the status output for each of the services.

for SERVICES in etcd kube-apiserver kube-controller-manager kube-scheduler flanneld
do
  systemctl restart $SERVICES
  systemctl enable $SERVICES
  systemctl status $SERVICES
done

Everything worked perfectly following the above instructions.

On the Minions, or worker nodes, you’ll need to follow these steps. Many are the same as for the Master, but I split them out to make them easier to follow. You can conceivably copy the necessary configuration files from the Master to all the Minions, with the exception of the kubelet file.
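Since those shared files are identical everywhere, the copy can be scripted. A hypothetical sketch: the knode1 through knode5 names and root ssh access are assumptions, and the scp line is commented out so it dry-runs:

```shell
# Loop over the five minions and push the shared config files to each.
for N in $(seq 1 5); do
  NODE="knode${N}"
  echo "would sync ${NODE}"
  # scp /etc/kubernetes/config /etc/sysconfig/flanneld root@${NODE}:/tmp/
done
```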

For the Minions, you need to add in the CentOS repository for docker:

# vi /etc/yum.repos.d/virt7-docker-common-release.repo
[virt7-docker-common-release]
name=virt7-docker-common-release
baseurl=http://cbs.centos.org/repos/virt7-docker-common-release/x86_64/os/
gpgcheck=0

Once in place, enable the repo and then install kubernetes, etcd, and flannel:

# yum -y install --enablerepo=virt7-docker-common-release kubernetes etcd flannel

Update the kubernetes config file. The server names must be in either DNS or /etc/hosts (mine are in DNS):

# vi /etc/kubernetes/config
# Comma separated list of nodes in the etcd cluster
KUBE_ETCD_SERVERS="--etcd-servers=http://kube1:2379"

# logging to stderr means we get it in the systemd journal
KUBE_LOGTOSTDERR="--logtostderr=true"

# journal message level, 0 is debug
KUBE_LOG_LEVEL="--v=0"

# Should this cluster be allowed to run privileged docker containers
KUBE_ALLOW_PRIV="--allow-privileged=false"

# How the replication controller and scheduler find the kube-apiserver
KUBE_MASTER="--master=http://kube1:8080"

Update the flannel configuration:

# vi /etc/sysconfig/flanneld
# etcd url location.  Point this to the server where etcd runs
FLANNEL_ETCD="http://kube1:2379"

# etcd config key.  This is the configuration key that flannel queries
# For address range assignment
FLANNEL_ETCD_KEY="/kube-centos/network"

# Any additional options that you want to pass
FLANNEL_OPTIONS=""

Edit the kubelet file on each of the Minions. The main thing to note here is KUBELET_HOSTNAME. You can either leave it blank, if the Minion hostnames are fine as they are, or enter the names you want to use. Leaving it blank lets you copy the file to all the nodes without having to edit it, again assuming each node's hostname is the one you want it registered under:

# vi /etc/kubernetes/kubelet
# The address for the info server to serve on
KUBELET_ADDRESS="--address=0.0.0.0"

# The port for the info server to serve on
KUBELET_PORT="--port=10250"

# You may leave this blank to use the actual hostname
KUBELET_HOSTNAME="--hostname-override=knode1"                    # <-------- Check the node number!

# Location of the api-server
KUBELET_API_SERVER="--api-servers=http://kube1:8080"

# Add your own!
KUBELET_ARGS=""
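To make the “check the node number” step harder to miss, the per-node kubelet files can be generated from a template. A sketch, assuming the knode1..knode5 names; the /tmp paths are stand-ins for the real /etc/kubernetes/kubelet:

```shell
# One-line template standing in for the kubelet file's hostname entry.
cat > /tmp/kubelet.tmpl <<'EOF'
KUBELET_HOSTNAME="--hostname-override=NODENAME"
EOF
# Stamp each minion's name into its own copy, ready to push out.
for NODE in knode1 knode2 knode3 knode4 knode5; do
  sed "s/NODENAME/${NODE}/" /tmp/kubelet.tmpl > /tmp/kubelet-${NODE}
done
grep hostname-override /tmp/kubelet-knode2   # KUBELET_HOSTNAME="--hostname-override=knode2"
```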

And start the services on the nodes:

for SERVICES in kube-proxy kubelet flanneld docker
do
    systemctl restart $SERVICES
    systemctl enable $SERVICES
    systemctl status $SERVICES
done

The final step is to configure kubectl:

kubectl config set-cluster default-cluster --server=http://kube1:8080
kubectl config set-context default-context --cluster=default-cluster --user=default-admin
kubectl config use-context default-context

Once that's done on the Master and all the Minions, you should be able to get a node listing:

# kubectl get nodes
NAME     STATUS    AGE
knode1   Ready     1h
knode2   Ready     1h
knode3   Ready     1h
knode4   Ready     1h
knode5   Ready     1h

Configuring The Home Environment

I’m using the new drive space and systems to basically mirror the work environment. Part of it is to have a playground or sandbox where I can try new things and learn how to use the tools we have, and part is just “Because It’s There” 🙂 There’s satisfaction in being able to recreate the basic work setup at home.

As noted in previous posts, I have a pretty decent computer network now and I’ve created four environments.

Site 1. CentOS 7 based and hosts my personal, more live stuff like a movie and music server, development environment (2 servers), and backups. I also have a couple of Windows Workstation installations and Server installs for Jeanne. Plus of course the firewall. 13 Servers in total.
Site 2. CentOS 5, 6, and 7 based and hosts the Ansible and Kubernetes/Docker environments. In addition, there’s now an Ansible Tower server and a Spacewalk server. 24 Servers in total.
Site 3. Red Hat 6 and 7 based for Ansible testing. 11 Servers in total.
Site 4. Miscellaneous operating systems for further Ansible testing. 16 Servers in total.

16 Servers on the main ESX host.
48 Servers on the sandbox ESX host.

Total Servers: 64 Servers.

Red Hat

One of the nice things is that Red Hat has a Developer network which provides self-supported Red Hat Enterprise Linux (RHEL) to anyone who signs up. The little-known bit, though, is that you can run unlimited copies of RHEL if you’re running them virtually. Sign-up is simple: go to Red Hat, sign up for the Developer Network, then download RHEL and install it. Run the following command to register a server:

# subscription-manager register --auto-attach

Note that you will need to renew your registration every year.

Spacewalk

Spacewalk is the freely available tool used for managing your servers. Red Hat’s paid version is Satellite. For ours at work, it’s $10,000 a year for a license. So Spacewalk it is 🙂

I use Satellite at work and it works pretty well. We have about 300 servers registered since the start of the year and are working to add more. I am finding Spacewalk, even though it’s older, to be quite a bit easier to use compared to Satellite. It’s quicker and the tasks are more obvious. Not perfect of course but it seems to be a simpler system to use. I set up CentOS 5, 6, and 7 repositories (repos) to sync and download updates each week.

Before you can connect a client, you need to create a channel for the operating system.

1. You need to create a Channel to provide an anchor for any underlying repos. I created ‘hcs-centos54’, ‘hcs-centos65’, and ‘hcs-centos7’ channels. Create a Channel: Channels -> Manage Software Channels -> Create Channel
2. You need to create repositories. You can create a one-for-one relationship or add multiple repos to a channel. I did mine one-for-one for now. I had to locate URLs for repositories. For the ‘centos7_mirror’, I used the mirror.centos.org site. For older versions, I had to use the vault.centos.org site. Create a Repository: Channels -> Manage Software Channels -> Manage Repositories
3. Now associate the repo with a channel. Simply go to the channel and click on the Repositories tab. Check the appropriate repo(s) and click the Update Repositories button.

The command to associate a server requires an activation key. This lets you auto-register clients so you don’t have to pop into Spacewalk to manually associate servers. The only things needed are a name (I used ‘centos5-base’ for one) and an associated channel. The key itself is created automatically once you click the button. Create an Activation Key: Systems -> Activation Keys -> Create Key -> Description, Base Channel, click Update Activation Key

You’ll need the ‘1-‘ at the beginning of the key to activate a client.
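A tiny guard in a registration script can catch a key pasted without that prefix; the key value here is made up:

```shell
# Hypothetical activation key; real ones look like 1-<hex string>.
ACTKEY="1-0123456789abcdef0123456789abcdef"
case "$ACTKEY" in
  1-*) echo "key format OK" ;;
  *)   echo "key is missing the 1- prefix" >&2 ;;
esac
```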

There’s a set of tools needed in order to support the activation and what gets installed depends on the OS version. For my purposes, the following are needed:

RHEL5/CentOS 5

rpm -Uvh http://yum.spacewalkproject.org/2.5-client/RHEL/5/x86_64/spacewalk-client-repo-2.5-3.el5.noarch.rpm
BASEARCH=$(uname -i)
rpm -Uvh http://dl.fedoraproject.org/pub/epel/epel-release-latest-5.noarch.rpm
yum install -y rhn-client-tools rhn-check rhn-setup rhnsd m2crypto yum-rhn-plugin

RHEL6/CentOS 6

rpm -Uvh http://yum.spacewalkproject.org/2.5-client/RHEL/6/x86_64/spacewalk-client-repo-2.5-3.el6.noarch.rpm
BASEARCH=$(uname -i)
rpm -Uvh http://dl.fedoraproject.org/pub/epel/epel-release-latest-6.noarch.rpm
yum install -y rhn-client-tools rhn-check rhn-setup rhnsd m2crypto yum-rhn-plugin

RHEL7/CentOS 7

rpm -Uvh http://yum.spacewalkproject.org/2.5-client/RHEL/7/x86_64/spacewalk-client-repo-2.5-3.el7.noarch.rpm
BASEARCH=$(uname -i)
rpm -Uvh http://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
yum install -y rhn-client-tools rhn-check rhn-setup rhnsd m2crypto yum-rhn-plugin

Once that’s installed (if there’s an error, you’ll need to install the epel-release package and try again), register the system.

rpm -Uvh http://192.168.1.5/pub/rhn-org-trusted-ssl-cert-1.0-1.noarch.rpm
rhnreg_ks --serverUrl=http://192.168.1.5/XMLRPC --sslCACert=/usr/share/rhn/RHN-ORG-TRUSTED-SSL-CERT --activationkey=1-[key]

Once done, log in to Spacewalk and click on Systems -> Systems to see the newly registered system. If you’re running different OSs under different channels, you’ll need different keys for the various OSs.

In order to activate kickstarts, you need to sync a kickstart with the Spacewalk server. It’s not complicated but it’s not obvious 🙂 Get the channel name for the kickstart repo you want to create and run the following command:

# spacewalk-repo-sync -c [Channel Name] --sync-kickstart

My CentOS 7 channel name is hcs-centos7, so the command on my system would be:

# spacewalk-repo-sync -c hcs-centos7 --sync-kickstart

I plan on taking the kickstart configurations I built for the servers and adding them to Spacewalk to see how that works and maybe kickstart some systems to play with kickstarting.

Configuration

I also have the scripts I wrote for work deployed on all the servers, plus the added accounts. I needed to change the times to Mountain Time, as the scheduled nightly and weekly tasks were kicking off in the early evening and slowing down access to the ‘net for Jeanne and me. This involved updating the time zones and starting the ntp daemon.

RHEL7/CentOS 7

# timedatectl set-timezone America/Denver

RHEL6/CentOS 6

Use a symlink rather than copying the file: if an update changes the zone information, such as a daylight-saving date change, the system is automatically correct.

# rm /etc/localtime
# ln -s /usr/share/zoneinfo/America/Denver /etc/localtime

RHEL5/CentOS 5

# rm /etc/localtime
# ln -s /usr/share/zoneinfo/America/Denver /etc/localtime

Time

And related to time, I need to ensure either ntp or chrony is properly configured and started. Kubernetes especially requires consistent time.

chronyd and chronyc are the replacements for ntpd and ntpq. The configuration is similar, though, and works the same way conceptually. As I have a time server running on pfSense, I’m making sure all the servers are enabled and pointing to the local time server. There’s no point in generating a bunch of unnecessary traffic through Comcast; just keep pfSense itself updated.

chronyd

Edit /etc/chrony.conf, comment out the existing pool servers and add in this line:

server 192.168.1.1 iburst

Enable and start chronyd if it’s not running and restart it if it’s already up. Then run the chronyc command to verify the change.

# systemctl enable chronyd
# systemctl start chronyd

or

# systemctl restart chronyd

Results:

# chronyc sources
210 Number of sources = 1
MS Name/IP address         Stratum Poll Reach LastRx Last sample
===============================================================================
^* pfSense.internal.pri          3   6    17     1    -13us[ -192us] +/-  109ms

ntpd

Edit /etc/ntp.conf, comment out the existing pool servers and add in this line:

server 192.168.1.1

Enable and start ntpd if it’s not running and restart it if it’s already up. Then run the ntpq command to verify the change.

# service ntpd start
# chkconfig ntpd on

or

# service ntpd restart

Results:

# ntpq -p
     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
 pfSense.interna 42.91.213.246    3 u   11   64    1    0.461   -5.037   0.001
 LOCAL(0)        .LOCL.          10 l   10   64    1    0.000    0.000   0.001

Nagios

I started setting up a Nagios server. Nagios is a tool used to monitor various aspects of servers. At work we’re using it as a basic ping test, just to make sure with a quick look that servers are up. Other bits are being added in as time permits. Here I also installed net-snmp and net-snmp-utils in order to build the check_snmp plugin. This gives me lots and lots of options on what to check and might let me replace some of the scripts I have in place.

SNMP Configuration

####
# First, map the community name "public" into a "security name"

#       sec.name  source          community
com2sec AllUser   default         CHANGEME

####
# Second, map the security name into a group name:

#       groupName      securityModel securityName
group   notConfigGroup v1            notConfigUser
group   AllGroup       v2c           AllUser

####
# Third, create a view for us to let the group have rights to:

# Make at least  snmpwalk -v 1 localhost -c public system fast again.
#       name           incl/excl     subtree         mask(optional)
view    systemview     included      .1.3.6.1.2.1.1
view    systemview     included      .1.3.6.1.2.1.25.1.1
view    AllView        included      .1

####
# Finally, grant the group read-only access to the systemview view.

#       group    context sec.model sec.level prefix read    write  notif
access  AllGroup ""      any       noauth    exact  AllView none   none

Unfortunately the default check_snmp command in the commands.cfg file was a bit off.

Old:

# 'check_snmp' command definition
define command{
        command_name    check_snmp
        command_line    $USER1$/check_snmp -H $HOSTADDRESS$ $ARG1$
        }

Running the plugin with the -h option, I found the correct options for what I needed to do:

New:

# 'check_snmp' command definition
define command{
        command_name    check_snmp
        command_line    $USER1$/check_snmp -H $HOSTADDRESS$ -C $ARG1$ -o $ARG2$ -P 2c
        }

Per the configuration, I’m using SNMP version 2c. Other than that, just pass the appropriate community string (-C) and object ID (-o) for the check you want to perform.

Uptime:

In the linux.cfg file I created, I added the following check_snmp block:

define service{
        use                             local-service         ; Name of service template to use
        host_name                       [comma separated list of hosts]
        service_description             Uptime
        check_command                   check_snmp!CHANGEME!.1.3.6.1.2.1.1.3.0!
        }

Possibly Interesting OIDs:

Network Interface Statistics

  • List NIC names: .1.3.6.1.2.1.2.2.1.2
  • Get Bytes IN: .1.3.6.1.2.1.2.2.1.10
  • Get Bytes IN for NIC 4: .1.3.6.1.2.1.2.2.1.10.4
  • Get Bytes OUT: .1.3.6.1.2.1.2.2.1.16
  • Get Bytes OUT for NIC 4: .1.3.6.1.2.1.2.2.1.16.4

Load

  • 1 minute Load: .1.3.6.1.4.1.2021.10.1.3.1
  • 5 minute Load: .1.3.6.1.4.1.2021.10.1.3.2
  • 15 minute Load: .1.3.6.1.4.1.2021.10.1.3.3

CPU times

  • percentages of user CPU time: .1.3.6.1.4.1.2021.11.9.0
  • percentages of system CPU time: .1.3.6.1.4.1.2021.11.10.0
  • percentages of idle CPU time: .1.3.6.1.4.1.2021.11.11.0
  • raw user cpu time: .1.3.6.1.4.1.2021.11.50.0
  • raw system cpu time: .1.3.6.1.4.1.2021.11.52.0
  • raw idle cpu time: .1.3.6.1.4.1.2021.11.53.0
  • raw nice cpu time: .1.3.6.1.4.1.2021.11.51.0

Memory Statistics

  • Total Swap Size: .1.3.6.1.4.1.2021.4.3.0
  • Available Swap Space: .1.3.6.1.4.1.2021.4.4.0
  • Total RAM in machine: .1.3.6.1.4.1.2021.4.5.0
  • Total RAM used: .1.3.6.1.4.1.2021.4.6.0
  • Total RAM Free: .1.3.6.1.4.1.2021.4.11.0
  • Total RAM Shared: .1.3.6.1.4.1.2021.4.13.0
  • Total RAM Buffered: .1.3.6.1.4.1.2021.4.14.0
  • Total Cached Memory: .1.3.6.1.4.1.2021.4.15.0

Disk Statistics

  • Path where the disk is mounted: .1.3.6.1.4.1.2021.9.1.2.1
  • Path of the device for the partition: .1.3.6.1.4.1.2021.9.1.3.1
  • Total size of the disk/partition (kBytes): .1.3.6.1.4.1.2021.9.1.6.1
  • Available space on the disk: .1.3.6.1.4.1.2021.9.1.7.1
  • Used space on the disk: .1.3.6.1.4.1.2021.9.1.8.1
  • Percentage of space used on disk: .1.3.6.1.4.1.2021.9.1.9.1
  • Percentage of inodes used on disk: .1.3.6.1.4.1.2021.9.1.10.1

System Uptime OID

  • .1.3.6.1.2.1.1.3.0

One problem with these OIDs is that they’re point-in-time statistics, and not much use without a trigger. They’re really more useful with MRTG, where you can see what things look like over a period of time. What you really want is a check that fires when a stat exceeds the expected norms.
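One way to get that trigger is check_snmp’s own -w and -c threshold options. A sketch, not something I’ve deployed: it assumes a second command definition that takes threshold arguments, and it uses the integer load OID (.1.3.6.1.4.1.2021.10.1.5.1, the 1-minute load times 100) since the load OIDs above return strings; the 400/800 thresholds are arbitrary examples.

```
# commands.cfg: like check_snmp above, plus warning/critical thresholds
define command{
        command_name    check_snmp_thresh
        command_line    $USER1$/check_snmp -H $HOSTADDRESS$ -C $ARG1$ -o $ARG2$ -P 2c -w $ARG3$ -c $ARG4$
        }

# linux.cfg: warn when the 1 minute load exceeds 4.00, critical at 8.00
define service{
        use                             local-service
        host_name                       [comma separated list of hosts]
        service_description             Load-1min
        check_command                   check_snmp_thresh!CHANGEME!.1.3.6.1.4.1.2021.10.1.5.1!400!800
        }
```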

MRTG

This is primarily a network traffic monitoring type tool but I’ve configured it to track other system statistics regarding disk space, swap, memory, and whatnot. It’s not configured just yet but that’s my next configuration task.


Home Network and Internet Access

Over the years I’ve had a few network configurations.

In the mid-80’s I ran a BBS and connected to other BBSs through a transfer scheme. My system would be accessed at all hours and I had the biggest collection of utilities and games in the area at the time.

In 1989 I got a job at Johns Hopkins APL, which came with direct internet access. With that, I started poking around for a way to get access at home without going through a pay service like AOL, CompuServe, or Prodigy. One of my coworkers at JHUAPL recommended PSINet, and I was finally able to get direct access to the Internet.

In the mid-90’s Comcast started offering access with a mixed cable/dial-up configuration and I switched over. Faster download speeds were a big draw, and it cost less than PSINet.

Comcast eventually offered full cable access. At that point I repurposed an older system as an internet gateway. It gave me the ability to play with Red Hat at home plus I’d just become a full time Unix admin at NASA. I had a couple of 3Com ethernet cards in the old computer and was running Red Hat 3 I think. It worked well and I was able to get access to the Internet and Usenet.

One of the problems, though, was that either Red Hat or the 3Com cards didn’t support CIDR. I’d gone through a system upgrade again and put the old system to the side. I built a new system using Linksys cards (I still have one in a package 🙂 ) and Mandrake Linux as a gateway and firewall. I learned about iptables and built a reasonably safe system. I had my schelin.org website on the server, using DynDNS to make sure it was accessible, and hosted pictures there, but since it was on Comcast user space, not everyone was able to see the pictures. It was about this time that I started looking into a hosted system and went with ServerPronto for a server in a data center in Florida. I configured the local system (now running Mandriva, after the merge) to access the remote server, and configured that server with OpenBSD.

In 2004 I bought an Apple Extreme wireless access point (WAP) for my new MacBook G4. I added a third network interface to the gateway and blocked the laptop from the internal network (only permitted direct internet access). By this time I also had a Linksys 10/100 switch so other systems could directly access the ‘net.

In 2008 it was time to switch systems again. My old XP system, which was reasonably beefy with 300 Gigs of disk space mirrored and 16 gigs of RAM was converted over to be a firewall and the old box wiped and disposed of at the computer recycling place. I installed Ubuntu to muck around with it and configured its firewall. Still running three network cards and a new Apple Extreme as the old one tanked.

In November 2015, I was caught by one of the Microsoft “Upgrade to Windows 10” dialog boxes and upgraded my Windows 7 Pro system with Windows 10. For months there were problems with some of the games I played and other issues with drivers.

Around February 2016, the Virtualization folks at work were in the process of replacing their old VMWare systems. These systems are pretty beefy as they run hundreds of virtual machines. A Virtual Machine is a fully installed Unix or Windows box but only using a slice of the underlying physical system. It severely reduces time and cost in that I can get a VM stood up pretty quickly and don’t have to add in the purchase and racking of a physical system. Very efficient.

Anyway, they were decommissioning the old gear and one of the guys asked me if I was interested in one of the servers. I was a bit puzzled because I didn’t know they were decommissioning old systems and thought it would be on my desk or something. But I said sure and looked at my desk to see where I’d put it. But I received the paperwork and it was actually transferring the system to my ownership. Belongs to me, take it home. Woah! I signed off and picked up the system from the dock a few days later.

The system is a Dell R710 with 192 Gigs of RAM, 2 8-core Xeon X5550 2.67GHz CPUs, 2 143 Gig 10,000 RPM drives set up as a RAID 1 (mirror), 4 750 Gig 7,200 RPM drives, 4 onboard Ethernet ports, 1 4-port 1 Gigabit PCI card, 1 10 Gigabit PCI card, and 2 2-port Fibre HBA PCI cards.

Holy crap!

I immediately set it up as a gateway and firewall using CentOS 7. I’d recently received my Red Hat Certified System Admin certification and need to try for my Red Hat Certified Engineer certification. This gave me something to play on. I set up the firewall and got all my files transferred over. The old box (XP then Ubuntu) is sitting under my desk.

In March of 2016, I bought a new system entirely. In part because of the issues I was having with my 2008 system and in part because of a decent tax check refund. Over the years I’d added a couple of 2 TB drives to the 2008 system so they were transferred over to the new system. I also have an external 3TB drive that stopped working for some reason.

I moved the 2008 system away and got the new one up and running. I had to do some troubleshooting as there were video issues but it’s now working very well.

But in mid-summer the Virtualization folks asked me again if I wanted a system. They were still decommissioning systems. Initially I declined, as the one I have actually works very well for my needs, but one of my coworkers strongly suggested snagging it and setting up an ESX host running VMware’s vSphere 6. I’d been mucking about with Red Hat’s KVM without much success, so I went back and changed my mind. Sure, I’ll take the second system.

The new system is a Dell R710 again. It had 192 Gigs of RAM, but the guy gave me enough to fully populate the system to 288 Gigs. It also had 2 6-core Xeon X5660 2.8 GHz CPUs, 2 143 Gig 10,000 RPM drives, 4 750 Gig 7,200 RPM drives, 4 onboard Ethernet ports, 1 4-port 1 Gigabit PCI card, 1 10 Gigabit PCI card, and 2 2-port Fibre HBA PCI cards, just like the first one. One of the drives had failed, though. At a coworker’s suggestion, I purchased 5 3 TB SATA drives. This gave me 8 TB of space and a spare SATA drive in case one fails, plus the remaining 3 750 Gig drives are available to the first system in case a drive fails there.

I configured the new system with VMware and created a virtual machine firewall. I created a VM to replace the full physical system and copied all the files from the physical server over to VMs on the new system. With all that redundancy, the files should be safe over there. They’re plugged into a UPS, as is my main system. I’ve been using UPSs for years, ever since I kept losing hardware to brownouts and such in Virginia.

Once everything was copied over, I converted the first system into an ESX host. That one has my sandbox environment now. Mirroring the work environment in order to test out scripts, Ansible, and Kubernetes/Docker.

My sandbox consists of VMs which have been set up for three environments, and use a naming scheme that tells me what major OS version is running.

For Ansible testing, at site 1, I have an Ansible server, 2 utility servers, and 2 pairs each of 2 CentOS 5, 6, and 7 servers (2 db and 2 web). 15 servers.

At site 2, I also have an Ansible server, 2 utility servers, 2 pairs each of 2 Red Hat 6 and 7 servers (2 db and 2 web). 11 servers. And Red Hat will let you register Red Hat servers on an ESX host for free which is excellent.

For off the wall Ansible testing, at site 3, I have just a pair of servers for current versions of Fedora, Ubuntu, Slackware, SUSE, FreeBSD, OpenBSD, Solaris 10, and Solaris 11 (1 db and 1 web). 16 servers.

For Kubernetes and Docker testing, I have 3 master servers and 5 minion servers. 8 servers.

So far, 50 servers for the sandbox environment.

For my personal sites, I have a firewall, a development server, 3 host servers for site testing, a staging server, a remote backup server, a local Samba backup server, a movie and music server, a Windows XP server, a Windows 7 server, and 2 Windows 2012 servers for Jeanne’s test environment. In general, these were all on the XP server I had before I got the R710. The ability to set up VMs lets me better manage the various tasks including rebooting a server or even powering it off when I’m not using it.

And 13 servers for 63 total servers.

But wait, there’s more 🙂

In September, the same coworker made available a Sun 2540. This is a drive array he got from work under the same circumstances as the R710s I have. He’s a big storage guy, so he had a lot of storage gear at home. I picked it up along with instructions, drive trays (no drives, though), and fiber to connect it to the two ESX systems. Fully populated with 3 TB drives, this would give me 36 TB of raw disk space. As RAID 5 loses a drive, a single RAID 5 would give me 33 TB; however, the purpose of this array is to present space to both systems, so I’d need to slice it up. I checked online and purchased 6 3 TB drives for 18 TB raw, RAIDed to 15 TB, and the same guy pointed us to someone in Colorado Springs selling 18 2 TB drives. I snagged 8 and finished populating the array with 6 of the 2 TB drives for 12 TB raw, RAIDed to 10 TB, giving me 25 TB of available space for the two ESX systems. Because of the age of the 2540, I have Solaris 10 installed so I can run the Oracle management software.
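The capacity figures in that paragraph follow from the usual RAID 5 rule, usable = (drives - 1) × drive size; the shell arithmetic below just reruns that math:

```shell
echo $(( (12 - 1) * 3 ))               # one 12-drive RAID 5 of 3 TB disks: 33 TB
echo $(( (6 - 1) * 3 ))                # six 3 TB drives: 15 TB usable
echo $(( (6 - 1) * 2 ))                # six 2 TB drives: 10 TB usable
echo $(( (6 - 1) * 3 + (6 - 1) * 2 ))  # total presented to the ESX hosts: 25 TB
```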

As the 3 TB external drive had tanked for some reason, I extracted it from the case and installed it into the Windows 10 system. I wanted to mirror the 2 2 TB drives but couldn’t as long as there was data on them. With the 3 TB drive in place, I can move everything off one of the 2 TB drives and then mirror the pair for safety.

I’ve come a long way from a single system with maybe 60 Megabytes of disk space up to a Home Environment with 70 Terabytes of raw disk space.

And of course, more will come as tech advances.


Setting up pfSense

Over here I configured a firewall to replace the old single-system firewall. One of the guys at work recommended installing pfSense as a VM to replace it. On the firewall side, pfSense is wrapped around pf, the BSD packet filter. I used pf on my old OpenBSD system, so I’m somewhat familiar with its ruleset. Plus I already have a firewall running, so it wasn’t a big deal to at least give it a shot. I downloaded the FreeBSD-based .iso and created a virtual machine with a basic configuration of 1 CPU, 1 Gig of RAM, and 20 Gigs of space (mainly for logs). As it’s a VM, I can add resources as needed. I also added the three interfaces I’d configured for the firewall.

Booting to the ISO comes up with a simple menu for access to other features or by default starts installing the package. Next up is to configure the console. Next I selected the ‘Quick/Easy Install’ as I didn’t have all that much experience with the tool and really don’t have a complicated environment except for the Wireless Access Point for the third interface. “Are you SURE?” is next and basically says it’ll just install pfSense and erase the disk. Again, no problem as it’s a VM. If it tanks, I kill it and build a new one (case in point, I’m writing this after I’ve had pfSense up for a bit and am building a second VM to remind me what I did for this posting 🙂 ).

Okay and it’s on its way (wait, it is asking a question about the Kernel; just continue).

And it’s done. Took about 2 minutes even with swapping between there and here.

Reboot and let it start up. I’ve disabled all three interfaces on this one, as I didn’t want it interfering with the currently running firewall. It comes up with a start menu which lets me configure it. WAN is em0; LAN is the internal network, defaulting to 192.168.1.1 (in this case, re-IPed to 192.168.1.2). The LAN address is also the web interface, which is where I can configure the system.

Login for a new system is ‘admin’, ‘pfsense’. Change it of course.

Once in, a wizard starts up to help configure the system. First, do you want to upgrade to Gold 🙂 Next is to configure the hostname and gateway. Set the ntp server and zone. Next is configuring the WAN DHCP information. Since it’s DHCP and no special settings, Next. Configure the LAN Interface is next. As it’s already configured, I left it at the initial settings. Next, reset the admin account password. And done, click the Reload button and the firewall is ready to use.

As I have a wireless connection as well, I needed to add that interface in. Under Interfaces, select (assign) to show the existing three. It shows the first two, WAN and LAN, and an available network port for the third interface. Click Add and it becomes 'OPT1'. Click on it and it takes you to the configuration page for the interface. Initially it's not configured. I selected Static IPv4 from the IPv4 Configuration Type drop-down and entered the new IP address for the IPv4 Address (192.168.10.2 for purposes of this post). I did not check the Reserved Networks checkboxes. Click the Enable checkbox at the top and click the Save button. It tells you the configuration has changed and that you need to Apply Changes. Note that this reminder stays even if you close the browser tab.

Next is the Firewall drop down. As I have no reason to permit inbound traffic to my system, I left the WAN configuration at default of ‘All incoming connections’…’will be dropped’. LAN configuration was also left at default.

The OPT1 (Wireless) interface had two rules added.

As I wanted it to pass traffic to and from the 'net, I added an Allow to Any rule: Interface: OPT1, Address Family: IPv4, Protocol: any (note the default is TCP; it caught me initially 🙂 ). Finally, Source: OPT1 net, Destination: any. Description: 'Default allow OPT1 to any rule'.

Not done though. I don't want wireless traffic permitted on the internal network, so I added a second rule: Action: Reject, Protocol: any (don't forget, the default is TCP), Source: OPT1 net, Destination: LAN net. Description: 'Drop inbound traffic to Internal'.

And as far as the firewall configuration is concerned, I'm done. I did want to use some of the other features, so I started poking around the menus a bit. I set up three services: DHCP, DNS, and NTP.

I enabled DHCP on the LAN interface and added a network range of 192.168.1.150 to 192.168.1.199. I set the DNS server to 192.168.1.1 (the pfSense server). The default gateway is also the pfSense server, so that can be left blank. I added a domain search list of 'internal.pri' as I use that for all my behind-the-firewall domains. I also enabled the RRD statistics graphs as I'm used to using rrdtool.

I could also enable DHCP on the wireless LAN, but since I have an Apple AirPort Extreme WAP, it already handles that for me, so I left it alone.

For DNS, I only needed the General Settings as it wasn't all that complicated. I did enable Forwarding Mode but left everything else alone. As I started adding VMs, though, I added their IPs to DNS.

In order for Cygwin on my Windows 10 system to use it for DNS, I had to manually enter 'internal.pri' in the 'DNS suffix for this connection' box:

Settings
Network and Internet
Ethernet
Change adapter options
Click on Network 2
Change settings of this connection
Click on the Internet Protocol Version 4 item
Click Properties
Click Advanced
Make sure the IP addresses and Default gateways are right (should be)
Click the DNS tab
Make sure the DNS servers are correct (again should be)
Under the DNS suffix for this connection add internal.pri.

For NTP, I added a few more pool servers as it works best with at least 3 and I set up 5. I did enable RRD graphs for NTP.

And that’s it. You can check out stats for the various services by looking under Status and troubleshoot under Diagnostics.

All in all it seems to be working as desired.


Colonoscopy Time!

Per all the recommendations, it's something that really should be done starting at age 50, or earlier if you have family history. I just had mine done and I'm 6 months away from 60.

One of the problems I’ve had with doing it at 50 was the cost. At around $4,000 and not always covered by whatever plan you’re on at the time, it wasn’t something I was financially prepared for. I’ve been saving over the past 2 years though through an HSA so I can have it done now.

I went in a month or so back for a different medical problem (first time seeing a doctor in many years) and the doctor said I should schedule the procedure. The big plus: as of last year, Medicaid pays fully for the procedure, making it cost me nothing. Even with a high copay right now, my out-of-pocket is zero bucks.

Sign me up then!

I heard the horror stories about making sure you're always a few feet from the toilet because the preparation meds really, really clean you out (really!) and quickly. I got the 4-liter container with the preparation chemicals plus a small packet of lemon flavoring. I will say lemon does leave a film on the teeth. I started prep last night at 6pm per instructions, after eating nothing since 7am (the procedure was scheduled for this morning at 9:30am). It really wasn't as bad as folks say, in my experience. The fluid felt a bit thick, like taking Alka-Seltzer Cold, but a touch on the salty side, maybe due to the electrolytes mixed in. While I did have to hit the toilet several times before going to bed and again in the morning, I mostly had a sense of fullness vs the urgency you feel with actual diarrhea when you eat something that disagrees with you.

I had to drink a glass every 15 to 30 minutes and drink until the liquid leaving the body was clear. You also must drink a lot of water because you will be dehydrated by the fluid (I drank about 5 liters of water throughout the day in addition to the 2 liters of laxative starting at 6). The nurse also said to drink half starting at 6 and the remainder before 7am this morning. By 7am I’d finished it off and exiting fluids were mainly clear with a yellow tinge to it.

Another thing folks mention is being very hungry. I have to say I felt almost no hunger throughout the day. I did spend my time hacking code and playing Doom after adjusting the Gamma to 1.0 (much much better).

Getting to the clinic was no big deal and the nurse said they were ready right now (8:45 vs 9:30) so I was brought back, given instructions, and signed the waiver that let them remove any polyps found. She did say at my age, about 30% of patients have polyps and that the most they’d ever removed was 49 in one session! She said smokers typically had more polyps than non-smokers.

I stripped, got dressed in the gown, lay down, and she covered me with a warm blanket. She did the blood pressure cuff, the finger clip for heart rate, and then tried to locate a vein for the drugs. Since I was dehydrated even with the water I'd drunk, she had a hard time. After many slaps trying to raise a vein, she finally tried my right hand. Insertion was pretty damned painful and it felt like she hit a tendon (the funny-bone feeling along the finger). She pulled it out and observed "no bleeding"; even with the large amount of water, I was still dehydrated. I pointed out that when I give blood, my left arm is generally the best place. She removed the cuff and was able to (again, painfully) successfully insert the needle.

And we’re ready to go!

I was wheeled down the hall into the procedure room. The room itself was pretty cool, with a large screen to my left and several screens and dangly bits to my right. I was instructed to lie on my left side and bring my legs up as if I were sitting. She started the drugs and within 30 seconds I was feeling dizzy. The drug was a conscious anesthetic, so I was awake but, as friends say, I didn't care. 🙂

Per the nurse, I wasn’t to drive, drink alcohol, or sign important documents for 24 hours and that I would have trouble remembering the procedure.

I do remember watching part of the procedure on the screen and a few instructions, or just a push by the doc or nurse to shift around a little, but that was it. I don't remember being wheeled out or being taken to the reception area. I do remember farting though and getting a "good job" from someone. 🙂 My girlfriend was there to drive me home, and while I don't recall asking her to take any pictures of me in the reception area still on the gurney, she has a few pics 🙂 Apparently I told her about the hand and tendon a few times. I don't remember getting dressed. I slightly remember leaving the office but not the elevator, and vaguely recall the parking lot but not getting in the car. I do remember her wandering around the parking lot looking for a gas station so she could get me a soda (I wanted the caffeine before headaches kicked in) and a few "there's a long runway to get back on the road" comments.

Later Jeanne mentioned that in the recovery room the nurse had removed the cuff and finger clip, but I don't remember the needle being removed. Jeanne did say I got up from the gurney in just a T-shirt and socks, and the nurse walked in on me: "oh my" 🙂 I remember the nurse asking if I wanted something to drink or eat: apple juice, orange juice, graham crackers, or animal crackers. I wanted apple juice. It was warm, but I drank it down and threw the cup away. Even with Jeanne reminding me, I don't remember getting dressed.

I took a slight nap in the car, and as we were getting close to home, I recall feeling cold sweats and nausea, probably because I'd been fasting for more than 24 hours. Jeanne was willing to get me something to eat but I only wanted some toast, unbuttered, and drank some of the soda. I tucked into the couch and slept for about 4 hours (starting around 1pm). I felt a lot better after that, and Jeanne headed out to get lunch.

Oh, and they removed 2 polyps, so less than the record, but I'm in that 30% of patients. I do remember that.

It’ll be about a week before they return the results of the biopsies. The results will dictate how often I’ll need to return for further cleanup. 10 years is the default but it could be less if there are problems with the biopsies.


Using a VMWare Virtual Machine As An Internet Gateway

I've been using my old cast-off computers as internet gateways for a couple of decades now. I'd upgrade my current system, build a new one, and stick the old one in as the new firewall/gateway. I seem to learn better when I have something to do with it. I started with Red Hat Linux until it stopped supporting 3Com Ethernet cards (3c503) (an issue with CIDR). Then Mandrake for several years. Then OpenBSD for a bit. Then Ubuntu. And most recently CentOS 7. I've used iptables a couple of times, the OpenBSD pf firewall ruleset, then back to iptables on Ubuntu, and now firewall-cmd.

A few months back, one of the teams at work was in the process of replacing their old gear with new gear. Standard lifecycle type stuff. A few of us were asked if we wanted to keep any of these cast-off systems. I was a bit puzzled, but sure, I'll use it as a sandbox type thing at home. These are bigger systems, as they were the older ESX servers used by the Virtualization Team: a Dell R710 (a rack-mount server vs a desktop or laptop system).

Dell R710, 192 GB of RAM, 2 8-core Xeon X5550 2.67 GHz CPUs, 2 143 GB 10,000 RPM drives, 4 750 GB 7,200 RPM drives, 4 onboard Ethernet ports, 4 PCI Ethernet ports.

Whoa. Big box. The two 143 GB drives are mirrored as RAID 1. The four 750 GB drives are configured as RAID 5. This gives me 143 GB for boot and about 2 TB for data.

I built it up as a replacement for my old system. Part of this was learning how to use CentOS 7 with the new systemd and *ctl commands. I snagged my rules for the old system (iptables) and learned how to set up the new system.

I’ve used 3 network interfaces for quite a few years. One for Comcast or Internet traffic. One for my Wireless traffic. And one for my internal network. My Wireless has always had a hidden access point with security turned on. Plus it isn’t permitted access to the internal network. It uses the firewall as strictly a pass through to the ‘net. I copied all my files from the old system to the new one and have spent the past few months making sure things work as expected.

But there’s more 🙂

A few months later and they’re still decommissioning old gear. This time I was asked if I wanted a second system. Honestly I was fine with the first one. Lots and lots of power. But one of the other guys suggested I use it as a VMWare ESX host. We use VMWare a lot at work with some 400 or so virtual machines that my team manages plus these were ESX hosts so they’re really already built for this sort of thing.

While I initially said no, I went back and agreed to take the second one. I'd been trying to use KVM to learn Kubernetes, Docker, and Ansible, but it's a bit rough to use and I was having the most trouble getting the network working correctly without dicking up my firewall. So, new system.

Dell R710, 288 GB of RAM, 2 6-core Xeon X5660 2.8 GHz CPUs, 2 143 GB 10,000 RPM drives, 4 750 GB 7,200 RPM drives, 4 onboard Ethernet ports, 4 PCI Ethernet ports.

Well, a _bit_ more RAM. More power per processor, but fewer cores. One of the 750 GB drives had failed as well. At the recommendation of the same guy at work 🙂 I popped online and chased down the maximum size drives these things would take: 3 terabytes! I picked up 5 of them, 4 as replacements for the existing 750s and one spare just in case. I pulled the four 750s, put in the four 3 TB drives, and the new system now has 8 terabytes of disk space in a RAID 5.

I installed vSphere from VMWare and started building virtual systems. First, an 8-node system for my Kubernetes and Docker testing. Next, three nodes for my development environment. And most recently, 3 nodes for my Ansible environment plus 12 nodes for the Ansible sandbox (4 CentOS 5.4, 4 CentOS 6.5, and 4 CentOS 7.2 systems to mirror the basic work environment).

But you know, since the system is up and running, I should be able to create a fairly thin virtual system with all the power I need for a firewall. Updates to the firewall system can take a good bit of time if I have to reboot, say for a kernel upgrade. I have the configuration from the current system, of course, to apply to the new one. And I would just need to move the cable from the current physical gateway to the virtual gateway to test, then back if it's not working as expected.

But first, let’s move the existing web sites off the firewall to the virtual environment. They don’t need to be up all the time. Just the backup for the forums and blogs.

The photo site takes about 250 GB: pictures (35 GB), site backups (42 GB), and system backups (175 GB). So a new VM with 500 GB will at least hold all this while I build up a firewall.

vSphere

The vSphere client has its own idiosyncrasies, such as making sure I use the E1000 interface with CentOS 5 systems, or making sure I disconnect the ISO before resetting a system, or the ISO gets locked and I have to close the vSphere client. Also, at the same recommendation, I enabled SSH access to the host. Even better: now I can poke around with the esxcfg utilities. I believe everything you can do in vSphere can be done from the CLI, which makes me happier.

Next up, I need to create a couple of vSwitches (Virtual Switches). With these, I can isolate the three networks and ultimately set up the second R710 as a second ESX host in a cluster.

In the vSphere client, select the host system and the Configuration tab. Click on Networking and you'll see your current configuration. If you wanted to add network adapters to your current switch, you'd click on Properties, add adapters, and then 'Team' them. In this case I want to create separate networks, so click on 'Add Networking', then Next as you're adding a new network. Select the NIC you want to use for the new vSwitch and click Next. Give it a good label; I used 'Wireless' for one and 'External – Comcast' for the other. Then Finish. Do the same for the second switch if you're adding one; I'm adding two switches, so I added the second one.
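The esxcfg utilities mentioned earlier can do the same thing from the ESXi shell. A rough sketch follows; the vmnic1/vmnic2 uplink names are assumptions (check yours with `esxcfg-nics -l` first), and since esxcfg only exists on the host, I just write the script out and syntax-check it here:

```shell
# Sketch of the vSwitch setup via SSH on the ESXi host. The uplink names
# vmnic1/vmnic2 are stand-ins; your hardware will likely differ.
cat > create-vswitches.sh <<'EOF'
#!/bin/sh
# New vSwitch for the wireless network, uplinked to vmnic1
esxcfg-vswitch -a vSwitch1             # add the vSwitch
esxcfg-vswitch -L vmnic1 vSwitch1      # link the physical NIC as an uplink
esxcfg-vswitch -A "Wireless" vSwitch1  # add the port group VMs attach to

# Second vSwitch for the Comcast-facing network, uplinked to vmnic2
esxcfg-vswitch -a vSwitch2
esxcfg-vswitch -L vmnic2 vSwitch2
esxcfg-vswitch -A "External - Comcast" vSwitch2

esxcfg-vswitch -l                      # list switches to confirm
EOF
sh -n create-vswitches.sh && echo "syntax OK"
```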

For the VM, add the two additional networks to the base configuration. One for Wireless and one for the External – Comcast network. I of course upgraded the system once it was booted from kickstart. Next up, setting up the firewall.

Firewall-cmd

Make sure you have IP forwarding enabled, as this box will be a router. In /etc/sysctl.d, add the following line to '99-ipforward.conf':

net.ipv4.ip_forward = 1
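To pick that up without a reboot, the file can be loaded with sysctl. A minimal sketch, writing to a scratch directory here for illustration since the real /etc/sysctl.d needs root:

```shell
# Illustration in a scratch directory; on the firewall the file belongs at
# /etc/sysctl.d/99-ipforward.conf and the sysctl commands need root.
dir=$(mktemp -d)
printf 'net.ipv4.ip_forward = 1\n' > "$dir/99-ipforward.conf"
cat "$dir/99-ipforward.conf"

# On the real system, apply it immediately and verify:
#   sysctl -p /etc/sysctl.d/99-ipforward.conf
#   sysctl net.ipv4.ip_forward
```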

Next of course, start firewalld.

# systemctl enable firewalld
Created symlink from /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service to /usr/lib/systemd/system/firewalld.service.
Created symlink from /etc/systemd/system/basic.target.wants/firewalld.service to /usr/lib/systemd/system/firewalld.service.
# systemctl start firewalld
#

Now let’s see what’s in place by default.

# firewall-cmd --get-zones
block dmz drop external home internal public trusted work

For firewall-cmd, I only need three of the available zones. So that I don't have to define new zones, I'll associate each of the three interfaces with an appropriate existing zone: dmz for wireless, internal for the home network, and external for Comcast.

And let’s see what the default zone is.

# firewall-cmd --get-default-zone
public

The default zone is 'public', so first I want to change it. In general, always add --permanent when configuring the system; that writes the change to the files in /etc/firewalld. (set-default-zone is an exception: it doesn't take --permanent, as it's persistent on its own.)

# firewall-cmd --set-default-zone=internal

I have three interfaces. Sadly, they come up as eno16777984, eno33557248, and eno50336512. I want to rename them to something shorter and easier to manage. In /usr/lib/udev/rules.d, edit the 60-net.rules file and add the following line for each of your interfaces. You'll need to get the MAC address of each interface beforehand, so run ifconfig or ip addr first.

ACTION=="add", SUBSYSTEM=="net", DRIVERS=="?*", ATTR{address}=="00:50:56:8e:3f:a7", NAME="ens192"

Replace the ATTR{address} value with the interface's MAC and NAME with what you want to call it. I called mine ens192, ens193, and ens194, although you could call them internal, external, and wireless, I suppose. Don't forget to rename the matching ifcfg files in /etc/sysconfig/network-scripts and update the files themselves (the NAME and DEVICE keywords), along with the rest of the correct information for each interface. My wireless network interface is 192.168.10.1. External (ens193) just uses DHCP to connect to Comcast.
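A quick way to harvest the MACs and emit those rule lines is a short awk pass over `ip -o link show` output. This is a sketch: the sample output below is stand-in data (on the firewall you'd pipe the real command instead), and the ens192+ numbering just mirrors the names I chose.

```shell
# Sample "ip -o link show" output (stand-in data for illustration).
cat > ip-link-sample.txt <<'EOF'
2: eno16777984: <BROADCAST,MULTICAST,UP> mtu 1500 ... link/ether 00:50:56:8e:3f:a7 brd ff:ff:ff:ff:ff:ff
3: eno33557248: <BROADCAST,MULTICAST,UP> mtu 1500 ... link/ether 00:50:56:8e:3f:a8 brd ff:ff:ff:ff:ff:ff
EOF

# Emit one 60-net.rules line per interface, numbering them ens192, ens193, ...
awk '/link\/ether/ {
  for (i = 1; i <= NF; i++) if ($i == "link/ether") mac = $(i+1)
  n++
  printf "ACTION==\"add\", SUBSYSTEM==\"net\", DRIVERS==\"?*\", ATTR{address}==\"%s\", NAME=\"ens%d\"\n", mac, 191 + n
}' ip-link-sample.txt > 60-net.rules
cat 60-net.rules
```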

With the interfaces more sanely named, I want to bind them to the appropriate zones:

# firewall-cmd --permanent --zone=internal --add-interface=ens192
# firewall-cmd --permanent --zone=external --add-interface=ens193
# firewall-cmd --permanent --zone=dmz --add-interface=ens194

Unfortunately this doesn't survive a reboot even with --permanent. I had to add the zone to each of the ifcfg files as 'ZONE=' so that when I reboot, the interfaces are still attached to the correct zones.
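Since the same ZONE= line goes into each ifcfg file, it can be scripted. A sketch using scratch copies of the files (the real ones live in /etc/sysconfig/network-scripts, and ens192/ens193/ens194 are the names I chose above):

```shell
# Scratch copies for illustration; the real files are
# /etc/sysconfig/network-scripts/ifcfg-ens192 (and ens193, ens194).
mkdir -p scratch-netscripts
printf 'NAME=ens192\nDEVICE=ens192\n' > scratch-netscripts/ifcfg-ens192
printf 'NAME=ens193\nDEVICE=ens193\n' > scratch-netscripts/ifcfg-ens193
printf 'NAME=ens194\nDEVICE=ens194\n' > scratch-netscripts/ifcfg-ens194

# Interface-to-zone mapping from above: internal, external, dmz.
for pair in ens192:internal ens193:external ens194:dmz; do
  dev=${pair%%:*}
  zone=${pair##*:}
  f=scratch-netscripts/ifcfg-$dev
  grep -q '^ZONE=' "$f" || echo "ZONE=$zone" >> "$f"   # idempotent append
done
grep ZONE scratch-netscripts/ifcfg-*
```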

Next, check the default services assigned to each zone:

# firewall-cmd --list-services --zone=dmz
ssh
# firewall-cmd --list-services --zone=external
ssh
# firewall-cmd --list-services --zone=internal
dhcpv6-client ipp-client mdns samba-client ssh

On my current physical system, I have the following final configuration

# firewall-cmd --list-services --zone=dmz
# firewall-cmd --list-services --zone=external
# firewall-cmd --list-services --zone=internal
dhcpv6-client ipp-client mdns samba-client ssh

I want to make sure the appropriate services are configured. By default, I don't want wireless or external listening for anything, but internal should be listening. firewall-cmd is aware of quite a few services:

# firewall-cmd --get-services
RH-Satellite-6 amanda-client bacula bacula-client dhcp dhcpv6 dhcpv6-client dns freeipa-ldap freeipa-ldaps freeipa-replication ftp high-availability http https imaps ipp ipp-client ipsec iscsi-target kerberos kpasswd ldap ldaps libvirt libvirt-tls mdns mountd ms-wbt mysql nfs ntp openvpn pmcd pmproxy pmwebapi pmwebapis pop3s postgresql proxy-dhcp radius rpc-bind rsyncd samba samba-client smtp ssh telnet tftp tftp-client transmission-client vdsm vnc-server wbem-https

For my purposes though, I only care about ssh. Since it's a predefined service, I don't have to create a special rule for it. First off, remove any existing services that may be pre-configured on the zones that shouldn't be listening:

# firewall-cmd --zone=dmz --remove-service=ssh --permanent
# firewall-cmd --zone=external --remove-service=ssh --permanent

Next, if ssh isn't already part of the internal zone (it is by default), add it:

# firewall-cmd --zone=internal --add-service=ssh --permanent

And masquerading is required in order for the internal networks to reach the Internet through this box. Add it to the external zone:

# firewall-cmd --zone=external --add-masquerade --permanent

Once done, you’ll need to reload the firewall configuration.

# firewall-cmd --reload

Confirm by listing the services available to the zones now.

# firewall-cmd --zone=dmz --list-services
# firewall-cmd --zone=internal --list-services
dhcpv6-client ipp-client mdns samba-client ssh
# firewall-cmd --zone=external --list-services

That should be it. Don't forget to re-IP the public interface on the VM before rebooting. And I had to reconfigure my external server to accept connections from the new DHCP IP I received from Comcast.
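Pulling the firewall-cmd steps together, the whole configuration can be captured as one script for rebuilding the VM later. This is a sketch: it needs root and a running firewalld, so here I only write it out and syntax-check it.

```shell
cat > vm-firewall-setup.sh <<'EOF'
#!/bin/sh
# Zones: internal = home LAN, external = Comcast, dmz = wireless.
firewall-cmd --set-default-zone=internal

firewall-cmd --permanent --zone=internal --add-interface=ens192
firewall-cmd --permanent --zone=external --add-interface=ens193
firewall-cmd --permanent --zone=dmz --add-interface=ens194

# Nothing should listen on the wireless or Comcast side.
firewall-cmd --permanent --zone=dmz --remove-service=ssh
firewall-cmd --permanent --zone=external --remove-service=ssh
firewall-cmd --permanent --zone=internal --add-service=ssh

# NAT for the internal networks out to Comcast.
firewall-cmd --permanent --zone=external --add-masquerade

firewall-cmd --reload
EOF
sh -n vm-firewall-setup.sh && echo "syntax OK"
```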


Weeds and Lawns

The annoying broadleaf weed is Common Mallow. Get the ground damp and pull.

http://www.colostate.edu/Depts/CoopExt/4DMG/Weed/mallow.htm

Pointy leaves, purple flowers and dense seed puffs is Canadian Thistle. Paint leaves with herbicide.

http://www.colostate.edu/Depts/CoopExt/4DMG/Weed/canad1.htm

The succulent in the cracks and edge of the grass is Purslane. Easily pulled when young.

http://www.colostate.edu/Depts/CoopExt/4DMG/Weed/pursl1.htm

The quick-growing vine that climbs fences and plants is Bindweed. Paint with herbicide.

http://www.colostate.edu/Depts/CoopExt/4DMG/Weed/bindw1.htm

The clover-looking weed with dark leaves is Black Medic. Easily pulled from damp lawns.

http://www.colostate.edu/Depts/CoopExt/4DMG/Weed/black.htm

Lawn care fact sheet.

http://www.ext.colostate.edu/pubs/garden/07202.html

The "paint leaves" approach is for when you want to keep the underlying or surrounding vegetation. For gravel or fences, just spray. The blue Bayer stuff works well.


Building a New Gaming Table

As a tl;dr post, this will be even more of a highlight of the process of building the gaming table 🙂

Several years back, I was testing a new woodworking tool and created a quick and dirty gaming table. Bigger than a dining room table and less likely to require moving stuff for dinner 🙂 As time progressed, I added a felt top, fixed the base (from an 'X' to a square base), added a rim for more support, and painted it. I used it for RPG gaming and board gaming. It was big enough to hold all the expansions for Arkham Horror 🙂 It was a little off in height though: just a touch too tall for a dining room chair but a touch too short for bar stools.

As time went by though, I wanted to make a better table. A bit more sturdy, but also more portable, or at least transportable. I'd been looking at tables of one sort or another for years, from the extremely expensive ones at $20,000 down to home-built ones. At the local gaming shop, they had a board game demo table, about 3′ on a side, but two of the sides had a deep tray for things like bags of dice and a center cup holder for a dice cup or a cup of something to drink. Inspiring.

I whipped out my pen and paper and started measuring things to make a table. I liked the deep tray (or trough) and size but I needed a larger table for bigger games. I had other ideas as well and eventually got them written down.

Four 3’x3′ modules with a corner and troughs for each side and section.

The lip on the left shows where I wanted to put the edge of a book or clipboard and after some thought, I added a slot for a support piece.

I decided to go with a hardwood since people will be leaning on the troughs, and picked Poplar, in part because of its softer texture compared to Oak (and it's a bit cheaper).

The base

Starting to look like a table

I used a wood dye vs a stain and selected a darker Cherry. After the dye, I covered it with shellac

Finally I coated it all with lacquer

For the table tops, I selected a grey felt and used some wood glue to mount the felt

Once everything was dry, Jeanne and I brought it all into the basement and started assembling it. This should look familiar

We mounted the two opposite sides loosely so they leaned out a little and slid the tops under the slots. When tightened, nothing should move.

Jeanne was an excellent helper 🙂

And the table assembled. I made the dice towers as well.

One of the things I wanted were trays in the troughs. I added a couple of pieces of plywood as a base to the trays (runners).

As you can see, you put a support into the slot, either vertically or horizontally, and it supports a clipboard

Or a book

Ready for Shadowrun in this case.

Behind the screen

And of course, the slots work well for holding cards 😀

I have more bits to add but the core of the table is done.


3dMark Video Card Testing

At a suggestion from Kevin, I snagged the 3DMark tool to test my system's video.

I ran the program to run tests on the system just to see if it encountered any issues. Nope, everything seems just peachy 🙂

I got a 3,611 Time Spy score which isn’t spectacular (better than 32% of systems).

3DMark Time Spy is a new DirectX 12 benchmark test for Windows 10 gaming PCs. Time Spy is one of the first DirectX 12 apps to be built “the right way” from the ground up to fully realize the performance gains that the new API offers. With its pure DirectX 12 engine, which supports new API features like asynchronous compute, explicit multi-adapter, and multi-threading, Time Spy is the ideal test for benchmarking the latest graphics cards.

But I got a 10,089 Fire Strike score, which is really good (better than 72% of systems).

Fire Strike is a showcase DirectX 11 benchmark designed for today’s high-performance gaming PCs. It is our most ambitious and technical benchmark ever, featuring real-time graphics rendered with detail and complexity far beyond what is found in other benchmarks and games today.

I did put the system into SLI mode (connecting the 2 graphics cards for better performance) but the scores didn't change; per the site (and for $30), I'd have to run the Fire Strike Extreme test for SLI and the Fire Strike Ultra benchmark for the 4K monitor.

3DMark Fire Strike Extreme is an enhanced version of Fire Strike designed for high-end multi-GPU systems (SLI / Crossfire) and future hardware generations.

In addition to raising the rendering resolution, additional visual quality improvements increase the rendering load to ensure accurate performance measurements for truly extreme hardware setups.

Can your PC handle the world’s first 4K gaming benchmark? Fire Strike Ultra’s 4K UHD rendering resolution is four times larger than the 1080p resolution used in Fire Strike.

A 4K monitor is not required, but your graphics card must have at least 3GB of memory to run this monstrously demanding benchmark.

So, sweeeeet 😀


Concert: Free Fallin’

From here

[Intro]
        D/Dsus/Dsus/D/A        D/Dsus/Dsus/D/A
[Verse]
        D    Dsus  Dsus  D   A
she's a good girl, loves her mama
      D D sus    Dsus D   A
loves Je-sus and Amer-ica too
        D    Dsus  Dsus   D    A
she's a good girl, crazy 'bout Elvis 
      D   Dsus        Dsus  D      A
loves hor-ses and her boy - friend too

   D/Dsus/Dsus/D/A

           D    Dsus Dsus   D  A
and it's a long day  livin' in Reseda
          D    Dsus Dsus    D           A
there's a free-way  runnin' through the yard
          D   Dsus         Dsus  D    A
and I'm a bad boy, cause I don't even miss her
     D   Dsus    Dsus     D   A
im a bad boy for breakin' her heart


[Chorus] 
        D    Dsus/Dsus/D/A       D    Dsus/Dsus/D/A
and im free,                free fallin'
        D    Dsus/Dsus/D/A       D    Dsus/Dsus/D/A
ya, im free,                free fallin'

[Verse]

        D   Dsus  Dsus    D           A
all the vam-pires walkin' through the valley 
     D    Dsus     Dsus  D     A
move west down Ven-tura  Boule-vard.
            D   Dsus     Dsus     D      A
and all the bad boys are standing in the shadows 
        D    Dsus         Dsus      D      A
and the good girls are at home with broken hearts


[Chorus]
       D    Dsus/Dsus/D/A        D    Dsus/Dsus/D/A
and im free,                free fallin'
        D    Dsus/Dsus/D/A        D    Dsus/Dsus/D/A
ya, im free,                free fallin'

[Bridge]
   D/Dsus/Dsus/D/A
   D/Dsus/Dsus/D/A
   D/Dsus/Dsus/D/A
   D/Dsus/Dsus/D/A

        D     Dsus Dsus  D   A
i wanna glide down over  Mul-holland    
        D     Dsus Dsus D      A
i wanna write her  name in the sky
         D    Dsus Dsus D    A
im gonna free fall out  into nothin'
      D     Dsus  Dsus  D     A
gonna leave this  world for a while

[Chorus]
       D    Dsus/Dsus/D/A        D    Dsus/Dsus/D/A
and im free,                free fallin'
       D    Dsus/Dsus/D/A        D    Dsus/Dsus/D/A
ya, im free,                free fallin'

[Outro]
       D/Dsus/Dsus/D/A           D/Dsus/Dsus/D/A
       D    Dsus/Dsus/D/A        D    Dsus/Dsus/D/A
ya, im free,                free fallin'
       D    Dsus/Dsus/D/A        D    Dsus/Dsus/D/A
oh!                         free fallin'
       D    Dsus/Dsus/D/A        D    Dsus/Dsus/D/A
ya, im free,            oh! free fallin'
       D    Dsus/Dsus/D/A        D    Dsus/Dsus/D/A