Career Day

In thinking about my last 40 years in the workforce, I thought about the reasons I changed jobs or even left a company.

Marine Corps Reserve

Signed up as a grunt in the 10th grade with parental permission, a standard line animal. When I left high school, I went on active duty in the Army.

US Army

Started off as a Military Policeman and then as a Dispatcher. Said something threatening about a Sergeant that was overheard and reported, and was transferred into working as a Graphic Artist for the Battalion. Worked at Ft. Meade MD as an MP then Graphic Artist, Erlangen Germany as a Battalion Graphic Artist, and then Ft. Belvoir VA as a Post Graphic Artist. Unable to reenlist due to being 5 lbs overweight.

Graphic Artist

I worked part time as a Graphic Artist in Alexandria VA while in the Army, then when I got out, I worked as a Typesetter. Work declined, so I was laid off.

Car Salesman

For 2 months, talked to over 300 people but only sold 7.5 cars. Let go.

Security Guard

Gate duty. Let go when the contract ended.


BASIC Programmer

Worked part time as a BASIC programmer, mainly on Leading Edge (IBM compatible) and Franklin (Apple compatible) computers. Left briefly for some additional programming training, then returned, but left again when I was working more hours than the owner could afford.

Programmer/System Installer

Started off as a programmer working on Funeral Home and Point of Sale software. Then when the system installer left to start her own company, I took that over. When the company had issues with employee taxes and the IRS, I bailed.

System Installer/LAN Admin

Worked installing networks and administering the company LAN. Also assembled computers when needed. Company went out of business.

Tech Support/DBA/LAN Admin

Started off as a telephone tech support person, then moved briefly to a DBA position, then became the company’s first full time LAN Admin. IT was outsourced.

LAN Admin

Company indicated I was to get a raise, but when I mentioned to the others just that I was getting a raise (not how much), the raise was pulled. I left the company.

LAN Admin/Trainer

Hired as a contractor to manage the LAN, refresh and provide support to others who had their own LANs, and support the Token Ring network folks. Company bought by another company.

LAN Admin/Trainer

Company advised me that when the contract ended, I’d be let go, so I started a job search and moved on.

LAN Admin

Lots of changes as a contractor at this government agency. Started off as a LAN Admin, but the company, ultimately a programming company, lost the position when the networks were consolidated into a centrally managed group.

Tech Support/LAN Admin/Unix Admin

Briefly provided Tech Support while the LAN Admin position opened up. Once it was available, I transferred into that group. After a bit, I was not happy with the work and planned on leaving but the company talked me into taking a Unix Admin position.

Unix Admin

Company was bought by a second company. As part of the transition, I was sent to Cisco training to get networking knowledge.

Unix Admin/Operations Engineer/Network Admin

Company left the contract and I was hired by the Prime Contractor. Started off as a Unix Admin but was brought in as an Operations Engineer to assist with the new contract with the customer. Then transitioned into being a Network Admin before leaving Virginia and moving to Colorado.

Tech Support/Network Admin

Worked in Athens Greece for 30 days assisting with the 2004 Olympics. Left as it was a 30 day contract.

Unix Admin

Managed Unix systems. Contract ended.

Unix Admin

Managed Unix systems. Company was very “cog in a machine” and folks were let go without warning. As I’m not comfortable with that sort of environment, I left.

Unix Admin

Managed Unix systems. The company had been purchased by a larger company a few years before I started, but the new owner hadn’t exerted any control other than budgetary. Eventually the larger company started pushing for more control.

Operations Engineer

Moved into the ‘Build’ role as part of the company’s transition to a ‘Plan/Build/Run’ model. The definition of Build matches an Operations Engineering type role.


Development Environments

I have a few big projects at work that I’ve created over the years. An inventory application, status management, and of course all the scripts (some 200 of them).

Initially I’d work on the inventory and status apps on the live server. Changes were minor and new features could be added without disruption. I used RCS, the simple revision control system that comes with Unix and that I’ve used extensively in the past to manage DNS files. But as time went on, some of the updates might take a couple of days to get working, which impacted the application. As a result, I took the two desktops I’d salvaged, installed Red Hat on one and Ubuntu on the other, and copied the source code (PHP scripts) over from the server to give me a simple place to work on the scripts.

About 2 years back, I went through a major revision to the Inventory and started looking at creating an actual development environment, where there’s a central store of ready-to-use code and a separate code directory where I could work on stuff without impacting the STABLE code base. It took a bit of work and several changes to make things work cleanly (and of course I’m still updating things).

With the virtualization I now have at home, I’m busily moving that code base over to the new environment. The nice thing is all the test bits I’ve done over the years can be moved over to the test location and the actual sites can be pared down to just the necessary bits. Cleaner and it protects the site a bit.

Environment now:

code – This is a directory with all the code for the web sites. I have a good 15 sites at home and 5 or 6 sites at work.
archive – Old stuff I don’t need but want to hold on to for historical references.
static – Any non-source code bits. Images for the most part but some data files that are imported regularly.
stage – The staging area for the code. The site is assembled into this directory and then sync’d with the production server.
html – The working web directory. All work is done here.

In the code site directory are the following utility scripts:

findcount – This script runs the find -print command to generate a list of all the files in the source which is stored in the countall file.
fixsettings – This script recreates the link to all the settings files to ensure every file has the same settings information. This script is in the html working area.
searchall – This script lets you search all the scripts for a string (pass -i to ignore case). This script is also in the html working area.
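As a sketch, searchall can be little more than a recursive grep; this hypothetical version (the real script lives in the html working area) takes the same optional -i flag:

```shell
#!/bin/sh
# searchall sketch: search every PHP script under the current directory
# for a string. Pass -i before the pattern to ignore case,
# e.g. "searchall -i widget".
searchall() {
    grep -rn "$@" --include='*.php' .
}
```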

In the code directory are two files for each site, three if you include the log file.

make[site] – This is a script that builds the site located in the staging area.
– Runs the findcount script and compares it to the countall.backup file. If a new script has been added, this reminds you to add it to the manifest file.
– Parses the manifest file and creates directories and installs files as listed.
– Uses rsync to copy any static files into the staging area.
– Compares the manifest output with the countall output files to list any scripts that were added but failed to get added to the manifest file.
– A flag is created indicating the site has been updated and needs to be sync’d with the production site.
manifest.[site] – List of directories that need to be created and files that are copied from the source code directory into the staging area.
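A stripped-down sketch of what a make[site] script does. The directory layout and the manifest format (“dir <path>” and “file <path>” lines) are simplified assumptions here; the real scripts also run findcount, rsync the static files, and compare the manifest against the countall output:

```shell
#!/bin/sh
# Sketch of the make[site] idea: parse the manifest, create directories,
# install listed files into the staging area, then leave the sync flag.
make_site() {
    SITE="$1"
    SRC="code/$SITE"
    STAGE="stage/$SITE"
    mkdir -p "$STAGE"

    # Create directories and install files as listed in the manifest
    while read -r type path; do
        case "$type" in
            dir)  mkdir -p "$STAGE/$path" ;;
            file) mkdir -p "$STAGE/$(dirname "$path")"
                  cp "$SRC/$path" "$STAGE/$path" ;;
        esac
    done < "manifest.$SITE"

    # Leave the flag so the sync[site] job knows an update is pending
    touch "stage/sync.$SITE"
}
```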

In the staging area are also three files, four if you count the log file.

sync[site] – This script rsync’s the site to all the target production sites. For work, I have the Inventory going to three servers right now due to transitioning to a new server.
sync.[site] – This is the flag file created by the make[site] script in the code directory. The sync[site] script only sync’s with the target server when this file exists.
exclude.[site] – This is a list of files and/or directories to exclude from the sync process.

For my account, I have two sets of lines in cron. The first set runs at 1am and runs the make[site] script for each listed site. This lets me automatically update the sites each night even if there are no changes (rsync is set with --delete so files that shouldn’t be there are deleted). The second set of lines runs the sync[site] commands every minute. At 1am when the make[site] scripts run, they create the sync.[site] flag files and a minute later the sync[site] script runs and updates the site. But also, when I make a change to a site and manually run the make[site] script, the sync[site] script runs after a minute and updates the site.
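In crontab form, with hypothetical site names and paths, the two sets look something like:

```shell
# Rebuild each site at 1am; the make scripts set the sync flag
0 1 * * * $HOME/code/inventory/makeinventory
0 1 * * * $HOME/code/status/makestatus

# Every minute, push any flagged site to production
* * * * * $HOME/stage/inventory/syncinventory
* * * * * $HOME/stage/status/syncstatus
```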

Other scripts I have are the ‘check’ scripts.

checkout – Uses the co -l command to check out a script from the source code into the working directory.
checkin – Uses the ci -u command to check in a script.
checkrw – Checks the source code site to see which files have been checked out and are being worked on.
checkmanifest – Checks working site against the manifest and provides a difference between any checked out scripts and STABLE scripts.
checkdiff – Runs a diff between a specific checked out script and the STABLE script.
checkinventory – Tells you what files are in the working directory that aren’t in the site directory.
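Since co -l leaves the checked-out working file writable, a checkrw-style listing can be as simple as finding writable sources. This is a sketch, not the real script, which inspects the RCS state:

```shell
#!/bin/sh
# checkrw sketch: checked-out (RCS-locked) files are the ones left
# user-writable in the source tree, so just find those.
checkrw() {
    find "$1" -name '*.php' -perm -u+w -print
}
```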

There’s also an import script that retrieves the MySQL database from the target server and imports the data into the local MySQL.

And that’s the environment. It works pretty well for what I’m trying to accomplish. I would like to use git to manage sites but I haven’t been able to find a good tutorial on how to pull files for sites.


State of the Game Room: 2017

Got my gaming table done last year.

Building a New Gaming Table

And plans for the Dice Towers.

Building Dice Towers

And last year I picked up a storage unit from Ikea (first picture).

Yesterday we headed down to Ikea again and picked up 2 more; one 4×4 for under the window (5′ height) and one 5×5 like the first. From looking at the games in the computer room, I have sufficient space for another 5×5 in the game room, which will help with being able to get my books out of the closet again. But one of my main goals was to get the games out of the computer room and off the desktops specifically. I’m not ever going to have all the games in the game room; there’s just not the space without moving the TV and couch out (and even then there’s not the space). It takes about 4 storage boxes for a shelf, and there are 3 larger shelves, which takes about 12 boxes or half a 5×5 unit. The shorter shelves are about 3 boxes each, and there are 6 shelves, which is 18 boxes, for a total of 30 boxes potentially. Another 5×5 unit plus a narrower 5×2 maybe for 35 boxes. Maybe next to the A rack (first picture).

7th Sea through Pandemic in the Game room and Pandemic Legacy through Zpocalypse in the computer room.



Setting Up Kubernetes

In a series on my home environment, I’m next working on the Kubernetes sandbox. It’s defined as a 3 master, 5 minion cluster. The instructions I currently have seem to work only with a 1 master, n minion cluster (as many minions as I want to install). I need to figure out how to add 2 more masters.

Anyway, on to the configurations.

For the Master, you need to add in the Centos repository for docker:

# vi /etc/yum.repos.d/virt7-docker-common-release.repo
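The repo file itself is short; this is the stock definition from the CentOS Kubernetes guide (verify the baseurl against the current CentOS build system before relying on it):

```ini
[virt7-docker-common-release]
name=virt7-docker-common-release
baseurl=http://cbs.centos.org/repos/virt7-docker-common-release/x86_64/os/
gpgcheck=0
```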

Once in place, enable the repo and then install kubernetes, etcd, and flannel:

# yum -y install --enablerepo=virt7-docker-common-release kubernetes etcd flannel

Update the kubernetes config file. The server names must be in either DNS or /etc/hosts (mine are in DNS). The values below are the stock ones from the CentOS guide, with kube1 standing in for the master:

# vi /etc/kubernetes/config
# Comma separated list of nodes in the etcd cluster
KUBE_ETCD_SERVERS="--etcd-servers=http://kube1:2379"

# logging to stderr means we get it in the systemd journal
KUBE_LOGTOSTDERR="--logtostderr=true"

# journal message level, 0 is debug
KUBE_LOG_LEVEL="--v=0"

# Should this cluster be allowed to run privileged docker containers
KUBE_ALLOW_PRIV="--allow-privileged=false"

# How the replication controller and scheduler find the kube-apiserver
KUBE_MASTER="--master=http://kube1:8080"

Edit the etcd.conf file. There are a bunch of entries but a majority are commented out. Either edit the lines or copy and comment out the original and add new lines. The settings that matter (stock single-node values):

# vi /etc/etcd/etcd.conf
# [member]
ETCD_NAME=default
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379"

# [cluster]
ETCD_ADVERTISE_CLIENT_URLS="http://0.0.0.0:2379"

Edit the kubernetes apiserver file (again, the stock values):

# vi /etc/kubernetes/apiserver
# The address on the local server to listen to.
KUBE_API_ADDRESS="--address=0.0.0.0"

# The port on the local server to listen on.
KUBE_API_PORT="--port=8080"

# Port kubelets listen on
KUBELET_PORT="--kubelet-port=10250"

# Address range to use for services
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"

# Add your own!
KUBE_API_ARGS=""

Configure etcd to hold the network overlay on the master. Use an unused network:

$ etcdctl mkdir /kube-centos/network
$ etcdctl mk /kube-centos/network/config "{ \"Network\": \"\", \"SubnetLen\": 24, \"Backend\": { \"Type\": \"vxlan\" } }"

Update the flannel configuration, pointing it at the etcd server and the key created above:

# vi /etc/sysconfig/flanneld
# etcd url location.  Point this to the server where etcd runs
FLANNEL_ETCD_ENDPOINTS="http://kube1:2379"

# etcd config key.  This is the configuration key that flannel queries
# For address range assignment
FLANNEL_ETCD_PREFIX="/kube-centos/network"

# Any additional options that you want to pass
FLANNEL_OPTIONS=""

Finally start the services. You should see a green “active (running)” for the status of each of the services.

for SERVICES in etcd kube-apiserver kube-controller-manager kube-scheduler flanneld; do
  systemctl restart $SERVICES
  systemctl enable $SERVICES
  systemctl status $SERVICES
done

Everything worked perfectly following the above instructions.

On the Minions or worker nodes, you’ll need to follow these steps. Many are the same as for the master but I split them out to make it easier to follow. Conceivably you can copy the necessary configuration files from the master to all the minions with the exception of the kubelet file.

For the Minions, you need to add in the Centos repository for docker:

# vi /etc/yum.repos.d/virt7-docker-common-release.repo

Once in place, enable the repo and then install kubernetes, etcd, and flannel:

# yum -y install --enablerepo=virt7-docker-common-release kubernetes etcd flannel

Update the kubernetes config file with the same values as on the master. The server names must be in either DNS or /etc/hosts (mine are in DNS):

# vi /etc/kubernetes/config
# Comma separated list of nodes in the etcd cluster
KUBE_ETCD_SERVERS="--etcd-servers=http://kube1:2379"

# logging to stderr means we get it in the systemd journal
KUBE_LOGTOSTDERR="--logtostderr=true"

# journal message level, 0 is debug
KUBE_LOG_LEVEL="--v=0"

# Should this cluster be allowed to run privileged docker containers
KUBE_ALLOW_PRIV="--allow-privileged=false"

# How the replication controller and scheduler find the kube-apiserver
KUBE_MASTER="--master=http://kube1:8080"

Update the flannel configuration, the same as on the master:

# vi /etc/sysconfig/flanneld
# etcd url location.  Point this to the server where etcd runs
FLANNEL_ETCD_ENDPOINTS="http://kube1:2379"

# etcd config key.  This is the configuration key that flannel queries
# For address range assignment
FLANNEL_ETCD_PREFIX="/kube-centos/network"

# Any additional options that you want to pass
FLANNEL_OPTIONS=""

Edit the kubelet file on each of the Minions. The main thing to note here is the KUBELET_HOSTNAME. You can either leave it blank if the Minion hostnames are fine or enter in the names you want to use. Leaving it blank lets you copy it to all the nodes without having to edit it, again assuming the hostname is the one you’ll be using for the variable:

# vi /etc/kubernetes/kubelet
# The address for the info server to serve on
KUBELET_ADDRESS="--address=0.0.0.0"

# The port for the info server to serve on
KUBELET_PORT="--port=10250"

# You may leave this blank to use the actual hostname
KUBELET_HOSTNAME="--hostname-override=knode1"                    # <-------- Check the node number!

# Location of the api-server
KUBELET_API_SERVER="--api-servers=http://kube1:8080"

# Add your own!
KUBELET_ARGS=""

And start the services on the nodes:

for SERVICES in kube-proxy kubelet flanneld docker; do
    systemctl restart $SERVICES
    systemctl enable $SERVICES
    systemctl status $SERVICES
done

The final step is to configure kubectl:

kubectl config set-cluster default-cluster --server=http://kube1:8080
kubectl config set-context default-context --cluster=default-cluster --user=default-admin
kubectl config use-context default-context

Once that's done on the Master and all the Minions, you should be able to get a node listing:

# kubectl get nodes
NAME      STATUS    AGE
knode1    Ready     1h
knode2    Ready     1h
knode3    Ready     1h
knode4    Ready     1h
knode5    Ready     1h

Configuring The Home Environment

I’m using the new drive space and systems to basically mirror the work environment. Part of it is in order to have a playground or sandbox where I can try new things and learn how to use the tools we have and part is just “Because It’s There” 🙂 There’s satisfaction in being able to recreate the basic work setup at home.

As noted in previous posts, I have a pretty decent computer network now and I’ve created four environments.

Site 1. CentOS 7 based and hosts my personal, more live stuff like a movie and music server, development environment (2 servers), and backups. I also have a couple of Windows Workstation installations and Server installs for Jeanne. Plus of course the firewall. 13 Servers in total.
Site 2. CentOS 5, 6, and 7 based and hosts the Ansible and Kubernetes/Docker environments. In addition, there’s now an Ansible Tower server and a Spacewalk server. 24 Servers in total.
Site 3. Red Hat 6 and 7 based for Ansible testing. 11 Servers in total.
Site 4. Miscellaneous operating systems for further Ansible testing. 16 Servers in total.

16 Servers on the main ESX host.
48 Servers on the sandbox ESX host.

Total Servers: 64 Servers.

Red Hat

One of the nice things is Red Hat has a Developer network which provides self-support for Red Hat Enterprise Linux (RHEL) to someone who’s signed up. The little-known bit, though, is that you can have unlimited copies of RHEL if you’re running them virtually. Signing up is simple: go to Red Hat and sign up for the Developer Network, then download RHEL and install it. Run the following command to register a server:

# subscription-manager register --auto-attach

Note that you will need to renew your registration every year.


Spacewalk

Spacewalk is the freely available tool for managing your servers; Red Hat’s paid version is Satellite. For ours at work, it’s $10,000 a year for a license. So Spacewalk it is 🙂

I use Satellite at work and it works pretty well. We have about 300 servers registered since the start of the year and are working to add more. I am finding Spacewalk, even though it’s older, to be quite a bit easier to use compared to Satellite. It’s quicker and the tasks are more obvious. Not perfect of course but it seems to be a simpler system to use. I set up CentOS 5, 6, and 7 repositories (repos) to sync and download updates each week.

Before you can connect a client, you need to create a channel for the operating system.

1. You need to create a Channel to provide an anchor for any underlying repos. I created ‘hcs-centos54’, ‘hcs-centos65’, and ‘hcs-centos7’ channels. Create a Channel: Channels -> Manage Software Channels -> Create Channel
2. You need to create repositories. You can create a one-for-one relationship or add multiple repos to a channel. I did mine one-for-one for now. I had to locate URLs for the repositories: for the ‘centos7_mirror’ I used the CentOS mirror site, and for older versions I had to use the vault site. Create a Repository: Channels -> Manage Software Channels -> Manage Repositories
3. Now associate the repo with a channel. Simply go to the channel and click on the Repositories tab. Check the appropriate repo(s) and click the Update Repositories button.

The command to associate a server requires an activation key. This lets you auto-register clients so you don’t have to pop into Spacewalk to manually associate servers. The only things needed are a name (I used ‘centos5-base’ for one) and an associated channel. The key is created automatically once you click the button. Create an Activation Key: Systems -> Activation Keys -> Create Key -> Description, Base Channel, click Update Activation Key

You’ll need the ‘1-‘ at the beginning of the key to activate a client.

There’s a set of tools needed in order to support the activation and what gets installed depends on the OS version. For my purposes, the following are needed:

RHEL5/CentOS 5

rpm -Uvh
BASEARCH=$(uname -i)
rpm -Uvh
yum install -y rhn-client-tools rhn-check rhn-setup rhnsd m2crypto yum-rhn-plugin

RHEL6/CentOS 6

rpm -Uvh
BASEARCH=$(uname -i)
rpm -Uvh
yum install -y rhn-client-tools rhn-check rhn-setup rhnsd m2crypto yum-rhn-plugin

RHEL7/CentOS 7

rpm -Uvh
BASEARCH=$(uname -i)
rpm -Uvh
yum install -y rhn-client-tools rhn-check rhn-setup rhnsd m2crypto yum-rhn-plugin

Once that’s installed (if there’s an error, you’ll need to install the epel-release package and try again), register the system.

rpm -Uvh
rhnreg_ks --serverUrl= --sslCACert=/usr/share/rhn/RHN-ORG-TRUSTED-SSL-CERT --activationkey=1-[key]

Once done, log in to Spacewalk and click on Systems -> Systems to see the newly registered system. If you’re running different OSs under different channels, you’ll need different keys for the various OSs.

In order to activate kickstarts, you need to sync a kickstart with the Spacewalk server. It’s not complicated but it’s not obvious 🙂 Get the channel name for the kickstart repo you want to create and run the following command:

# spacewalk-repo-sync -c [Channel Name] --sync-kickstart

My channel name is hcs-centos7, so the command on my system would be:

# spacewalk-repo-sync -c hcs-centos7 --sync-kickstart

I plan on taking the kickstart configurations I built for the servers and adding them to Spacewalk to see how that works and maybe kickstart some systems to play with kickstarting.


I also have the scripts I wrote for work deployed on all the servers, plus the added accounts. I needed to update the times to Mountain Time, as the scheduled nightly and weekly tasks were kicking off in the early evening and slowing down access to the ‘net for Jeanne and me. This involved updating the timezones and starting the ntp daemon.

RHEL7/CentOS 7

# timedatectl set-timezone America/Denver

RHEL6/CentOS 6

You use a symlink so that if an update changes the zone information, such as a daylight saving change, the system is automatically correct.

# rm /etc/localtime
# ln -s /usr/share/zoneinfo/America/Denver /etc/localtime

RHEL5/CentOS 5

# rm /etc/localtime
# ln -s /usr/share/zoneinfo/America/Denver /etc/localtime


And related to time, I need to ensure either ntp or chrony is properly configured and started. Kubernetes especially requires consistent time.

chronyd and chronyc are the replacements for ntpd and ntpq. The configuration is similar, though, with the same understanding of how it works. As I have a time server running on pfSense, I’m ensuring the servers are all enabled and pointing to the local time server. There’s no point in generating a bunch of unnecessary traffic through Comcast; just keep pfSense updated.


Edit /etc/chrony.conf, comment out the existing pool servers and add in this line:

server pfSense.internal.pri iburst

Enable and start chronyd if it’s not running and restart it if it’s already up. Then run the chronyc command to verify the change.

# systemctl enable chronyd
# systemctl start chronyd


# systemctl restart chronyd


# chronyc sources
210 Number of sources = 1
MS Name/IP address         Stratum Poll Reach LastRx Last sample
^* pfSense.internal.pri          3   6    17     1    -13us[ -192us] +/-  109ms


Edit /etc/ntp.conf, comment out the existing pool servers and add in this line (the same local time server):

server pfSense.internal.pri iburst

Enable and start ntpd if it’s not running and restart it if it’s already up. Then run the ntpq command to verify the change.

# service ntpd start
# chkconfig ntpd on


# service ntpd restart


# ntpq -p
     remote           refid      st t when poll reach   delay   offset  jitter
 pfSense.interna    3 u   11   64    1    0.461   -5.037   0.001
 LOCAL(0)        .LOCL.          10 l   10   64    1    0.000    0.000   0.001


I started setting up a Nagios server. Nagios is a tool used to monitor various aspects of servers. At work we’re using it as a basic ping test, just to make sure we know servers are up with a quick look. Other bits are being added in as time permits. Here I did install net-snmp and net-snmp-utils in order to build the check_snmp plugin. This gives me lots and lots of options on what to check and might let me replace some of the scripts I have in place.

SNMP Configuration

# First, map the community name "public" into a "security name"

#  source          community
com2sec AllUser   default         CHANGEME

# Second, map the security name into a group name:

#       groupName      securityModel securityName
group   notConfigGroup v1            notConfigUser
group   AllGroup       v2c           AllUser

# Third, create a view for us to let the group have rights to:

# Make at least  snmpwalk -v 1 localhost -c public system fast again.
#       name           incl/excl     subtree         mask(optional)
view    systemview     included      .1.3.6.1.2.1.1
view    systemview     included      .1.3.6.1.2.1.25.1.1
view    AllView        included      .1

# Finally, grant the group read-only access to the systemview view.

#       group    context sec.model sec.level prefix read    write  notif
access  AllGroup ""      any       noauth    exact  AllView none   none

Unfortunately the default check_snmp command in the commands.cfg file was a bit off.


# 'check_snmp' command definition
define command{
        command_name    check_snmp
        command_line    $USER1$/check_snmp -H $HOSTADDRESS$ $ARG1$
        }

In running the program with a -h command, I found the correct options for what I needed to do:


# 'check_snmp' command definition
define command{
        command_name    check_snmp
        command_line    $USER1$/check_snmp -H $HOSTADDRESS$ -C $ARG1$ -o $ARG2$ -P 2c
        }

Per the configuration, I’m using SNMP version 2c. Other than that, just pass the appropriate community string (-C) and object ID (-o) for the check you want to run.


In the linux.cfg file I created, I added the following check_snmp block:

define service{
        use                             local-service         ; Name of service template to use
        host_name                       [comma separated list of hosts]
        service_description             Uptime
        check_command                   check_snmp!CHANGEME!.1.3.6.1.2.1.1.3.0
        }

Possibly Interesting OIDs (these are the standard MIB-2 and UCD-SNMP values):

Network Interface Statistics

  • List NIC names: .1.3.6.1.2.1.2.2.1.2
  • Get Bytes IN: .1.3.6.1.2.1.2.2.1.10
  • Get Bytes IN for NIC 4: .1.3.6.1.2.1.2.2.1.10.4
  • Get Bytes OUT: .1.3.6.1.2.1.2.2.1.16
  • Get Bytes OUT for NIC 4: .1.3.6.1.2.1.2.2.1.16.4


CPU Load

  • 1 minute Load: .1.3.6.1.4.1.2021.10.1.3.1
  • 5 minute Load: .1.3.6.1.4.1.2021.10.1.3.2
  • 15 minute Load: .1.3.6.1.4.1.2021.10.1.3.3

CPU times

  • percentages of user CPU time: .1.3.6.1.4.1.2021.11.9.0
  • percentages of system CPU time: .1.3.6.1.4.1.2021.11.10.0
  • percentages of idle CPU time: .1.3.6.1.4.1.2021.11.11.0
  • raw user cpu time: .1.3.6.1.4.1.2021.11.50.0
  • raw system cpu time: .1.3.6.1.4.1.2021.11.52.0
  • raw idle cpu time: .1.3.6.1.4.1.2021.11.53.0
  • raw nice cpu time: .1.3.6.1.4.1.2021.11.51.0

Memory Statistics

  • Total Swap Size: .1.3.6.1.4.1.2021.4.3.0
  • Available Swap Space: .1.3.6.1.4.1.2021.4.4.0
  • Total RAM in machine: .1.3.6.1.4.1.2021.4.5.0
  • Total RAM used: .1.3.6.1.4.1.2021.4.6.0
  • Total RAM Free: .1.3.6.1.4.1.2021.4.11.0
  • Total RAM Shared: .1.3.6.1.4.1.2021.4.13.0
  • Total RAM Buffered: .1.3.6.1.4.1.2021.4.14.0
  • Total Cached Memory: .1.3.6.1.4.1.2021.4.15.0

Disk Statistics

  • Path where the disk is mounted: .1.3.6.1.4.1.2021.9.1.2.1
  • Path of the device for the partition: .1.3.6.1.4.1.2021.9.1.3.1
  • Total size of the disk/partition (kBytes): .1.3.6.1.4.1.2021.9.1.6.1
  • Available space on the disk: .1.3.6.1.4.1.2021.9.1.7.1
  • Used space on the disk: .1.3.6.1.4.1.2021.9.1.8.1
  • Percentage of space used on disk: .1.3.6.1.4.1.2021.9.1.9.1
  • Percentage of inodes used on disk: .1.3.6.1.4.1.2021.9.1.10.1

System Uptime OID’s

  • .1.3.6.1.2.1.1.3.0

One problem with the OIDs is that they’re statistics, not much use without a trigger. They’re really more useful with MRTG, where you can see what things look like over a period of time. What you really want is to check when stats exceed expected norms.


MRTG

This is primarily a network traffic monitoring tool, but I’ve configured it to track other system statistics: disk space, swap, memory, and whatnot. It’s not configured just yet, but that’s my next configuration task.


Home Network and Internet Access

Over the years I’ve had a few network configurations.

In the mid-80’s I ran a BBS and connected to other BBSs through a transfer scheme. My system would be accessed at all hours and I had the biggest collection of utilities and games in the area at the time.

In 1989 I got a job at Johns Hopkins APL and got direct internet access. With that, I started poking around on how to get access at home without going through a pay service like AOL, CompuServe, or Prodigy. One of my coworkers at jhuapl recommended PSINet, and I was finally able to get direct access to the Internet.

In the mid-90’s Comcast started offering access with a mixed cable/dial-up configuration and I switched over. Faster download speeds were a big draw, and it cost less than PSINet.

Comcast eventually offered full cable access. At that point I repurposed an older system as an internet gateway. It gave me the ability to play with Red Hat at home plus I’d just become a full time Unix admin at NASA. I had a couple of 3Com ethernet cards in the old computer and was running Red Hat 3 I think. It worked well and I was able to get access to the Internet and Usenet.

One of the problems, though, was that Red Hat or the 3Com cards didn’t support CIDR. I’d gone through a system upgrade again and put the old system to the side. I built a new system using Linksys cards (I still have one in a package 🙂 ) and Mandrake Linux as a gateway and firewall. I learned about iptables and built a reasonably safe system. I had my website on the server, using DynDNS to make sure it was accessible, and hosted pictures there, but since it was on Comcast user space, not everyone was able to see the pictures. It was about this time that I started looking into a hosted system and went to ServerPronto for a server hosted in a data center in Florida. I configured the local system (now running Mandriva after the merge) to access the remote server and configured that server with OpenBSD.

In 2004 I bought an Apple Extreme wireless access point (WAP) for my new MacBook G4. I added a third network interface to the gateway and blocked the laptop from the internal network (only permitted direct internet access). By this time I also had a Linksys 10/100 switch so other systems could directly access the ‘net.

In 2008 it was time to switch systems again. My old XP system, which was reasonably beefy with 300 Gigs of disk space mirrored and 16 gigs of RAM was converted over to be a firewall and the old box wiped and disposed of at the computer recycling place. I installed Ubuntu to muck around with it and configured its firewall. Still running three network cards and a new Apple Extreme as the old one tanked.

In November 2015, I was caught by one of the Microsoft “Upgrade to Windows 10” dialog boxes and upgraded my Windows 7 Pro system with Windows 10. For months there were problems with some of the games I played and other issues with drivers.

Around February 2016, the Virtualization folks at work were in the process of replacing their old VMWare systems. These systems are pretty beefy as they run hundreds of virtual machines. A Virtual Machine is a fully installed Unix or Windows box but only using a slice of the underlying physical system. It severely reduces time and cost in that I can get a VM stood up pretty quickly and don’t have to add in the purchase and racking of a physical system. Very efficient.

Anyway, they were decommissioning the old gear and one of the guys asked me if I was interested in one of the servers. I was a bit puzzled because I didn’t know they were decommissioning old systems and thought it would be on my desk or something. But I said sure and looked at my desk to see where I’d put it. But I received the paperwork and it was actually transferring the system to my ownership. Belongs to me, take it home. Woah! I signed off and picked up the system from the dock a few days later.

The system is a Dell R710 with 192 Gigs of RAM, 2 8-core Xeon X5550 2.67GHz CPUs, 2 143 Gig 10,000 RPM drives set up as a RAID 1 (mirror), 4 750 Gig 7,200 RPM drives, 4 onboard Ethernet ports, a 4-port 1 Gigabit PCI card, a 10 Gigabit PCI card, and 2 2-port Fiber HBA PCI cards.

Holy crap!

I immediately set it up as a gateway and firewall using CentOS 7. I’d recently received my Red Hat Certified System Admin certification and needed to try for my Red Hat Certified Engineer certification, so this gave me something to play on. I set up the firewall and got all my files transferred over. The old box (XP then Ubuntu) is sitting under my desk.

In March of 2016, I bought a new system entirely, in part because of the issues I was having with my 2008 system and in part because of a decent tax refund. Over the years I’d added a couple of 2 TB drives to the 2008 system, so they were transferred over to the new system. I also have an external 3 TB drive that stopped working for some reason.

I moved the 2008 system away and got the new one up and running. I had to do some troubleshooting as there were video issues but it’s now working very well.

But in midsummer the Virtualization folks asked me again if I wanted a system. They were still decommissioning systems. Initially I declined, as the one I have actually works very well for my needs, but one of my coworkers strongly suggested snagging it and setting up an ESX host running VMware’s vSphere 6. I’d been mucking about with Red Hat’s KVM without much success, so I went back and changed my mind. Sure, I’ll take the second system.

The new system is a Dell R710 again. It had 192 Gigs of RAM, but the guy gave me enough to fully populate the system to 288 Gigs. It also had two 6-core Xeon X5660 2.8 GHz CPUs, two 143 Gig 10,000 RPM drives, four 750 Gig 7,200 RPM drives, 4 onboard Ethernet ports, one 4-port 1 Gigabit PCI card, one 10 Gigabit PCI card, and two 2-port Fibre HBA PCI cards, just like the first one. One of the drives had failed though. At suggestion, I purchased five 3 TB SATA drives. This gave me 8 TB of space and a spare SATA drive in case one fails, plus the remaining three 750 Gig drives are available to the first system in case a drive fails there.

I configured the new system with VMware and created a virtual machine firewall. I created VMs to replace the full physical system and copied all the files from the physical server over to them. With all that redundancy, the files should be safe over there. They’re plugged into a UPS, as is my main system. I’ve been using UPSs for years, since I kept losing hardware to brownouts and such in Virginia.

Once everything was copied over, I converted the first system into an ESX host. That one has my sandbox environment now. Mirroring the work environment in order to test out scripts, Ansible, and Kubernetes/Docker.

My sandbox consists of VMs which have been set up for three environments, and use a naming scheme that tells me what major OS version is running.

For Ansible testing, at site 1, I have an Ansible server, 2 utility servers, and 2 pairs of servers (2 db and 2 web) for each of CentOS 5, 6, and 7. 15 servers.

At site 2, I also have an Ansible server, 2 utility servers, and 2 pairs of servers (2 db and 2 web) for each of Red Hat 6 and 7. 11 servers. And Red Hat will let you register Red Hat servers on an ESX host for free, which is excellent.

For off-the-wall Ansible testing, at site 3, I have just a pair of servers (1 db and 1 web) for current versions of Fedora, Ubuntu, Slackware, SUSE, FreeBSD, OpenBSD, Solaris 10, and Solaris 11. 16 servers.

For Kubernetes and Docker testing, I have 3 master servers and 5 minion servers. 8 servers.

So far, 50 servers for the sandbox environment.
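The tallies above work out like this (a quick shell sketch of the arithmetic):

```shell
# VM counts per the sandbox layout above
site1=$((1 + 2 + 3 * 4))   # Ansible + 2 utility + CentOS 5/6/7 (2 db + 2 web each)
site2=$((1 + 2 + 2 * 4))   # Ansible + 2 utility + Red Hat 6/7 (2 db + 2 web each)
site3=$((8 * 2))           # 8 OSes, 1 db + 1 web each
kube=$((3 + 5))            # 3 masters + 5 minions
echo $((site1 + site2 + site3 + kube))   # prints 50
```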

For my personal sites, I have a firewall, a development server, 3 host servers for site testing, a staging server, a remote backup server, a local Samba backup server, a movie and music server, a Windows XP server, a Windows 7 server, and 2 Windows 2012 servers for Jeanne’s test environment. In general, these were all on the XP server I had before I got the R710. The ability to set up VMs lets me better manage the various tasks including rebooting a server or even powering it off when I’m not using it.

And 13 servers for 63 total servers.

But wait, there’s more 🙂

In September, the same coworker made available a Sun 2540. This is a drive array he got from work under the same circumstances as the R710s I have. He’s a big storage guy, so he had a lot of storage-type stuff at home. I picked it up along with instructions, drive trays (no drives though), and fiber to connect it to the two ESX systems. Fully populated with 3 TB drives, this would give me 36 TB of raw disk space. As RAID 5 loses a drive, a single RAID 5 would give me 33 TB; however, the purpose of this is to present space to both systems, so I’d need to slice it up. I checked online and purchased six 3 TB drives for 18 TB raw, RAIDed to 15 TB, and the same guy pointed us to a guy in Colorado Springs selling 18 2 TB drives. I snagged 8 and completely populated the array with six of the 2 TB drives for 12 TB raw, RAIDed to 10 TB, giving me 25 TB of available space for the two ESX systems. Because of the age of the 2540, I have Solaris 10 installed so I can run the Oracle management software.
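The capacity math on the array, spelled out (RAID 5 gives up one drive's worth per group):

```shell
# two RAID 5 groups in the 12-bay 2540
g1=$(( (6 - 1) * 3 ))   # six 3 TB drives -> 15 TB usable
g2=$(( (6 - 1) * 2 ))   # six 2 TB drives -> 10 TB usable
echo "raw $((6 * 3 + 6 * 2)) TB, usable $((g1 + g2)) TB"   # raw 30 TB, usable 25 TB
```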

As the 3 TB external drive had tanked for some reason, I extracted it from the case and installed it into the Windows 10 system. I wanted to mirror the two 2 TB drives but couldn’t as long as there was data on one of them. With the 3 TB drive, I can move everything off the 2 TB drive and then mirror the pair for safety.

I’ve come a long way from a single system with maybe 60 Megabytes of disk space up to a Home Environment with 70 Terabytes of raw disk space.

And of course, more will come as tech advances.

Posted in Computers | Leave a comment

Setting up pfSense

Over here I configured a firewall to replace the old single-system firewall. One of the guys at work recommended installing pfSense as a VM to replace it. pfSense, from the firewall aspect, is wrapped around pf, the BSD Packet Filter software. I’ve used pf on my old OpenBSD system, so I’m somewhat familiar with its ruleset. Plus I already have the firewall running, so it wasn’t a big deal to at least give it a shot. I downloaded the FreeBSD-based .iso and created a Virtual Machine using a basic configuration of 1 CPU, 1 Gig of RAM, and 20 Gigs of space (mainly for logs). As it’s a VM, I can add resources as needed. I also added the three interfaces I configured for the firewall.

Booting to the ISO comes up with a simple menu for access to other features or, by default, starts installing the package. Next up is configuring the console. Then I selected the ‘Quick/Easy Install’ as I didn’t have all that much experience with the tool and really don’t have a complicated environment except for the Wireless Access Point on the third interface. “Are you SURE?” is next and basically says it’ll just install pfSense and erase the disk. Again, no problem as it’s a VM. If it tanks, I kill it and build a new one (case in point, I’m writing this after I’ve had pfSense up for a bit and am building a second VM to remind me what I did for this posting 🙂 ).

Okay and it’s on its way (wait, it is asking a question about the Kernel; just continue).

And it’s done. Took about 2 minutes even with swapping between there and here.

Reboot and let it start up. I’ve disabled all three interfaces for this one as I didn’t want it interfering with the currently running firewall. It does come up with a start menu which will let me configure it. WAN is em0. LAN is the internal network; it comes up with a default address, which in this case I re-IPed to match my internal network. The LAN address is also where the web interface lives, which is where I can configure the system.

Login for a new system is ‘admin’, ‘pfsense’. Change it of course.

Once in, a wizard starts up to help configure the system. First, do you want to upgrade to Gold 🙂 Next is configuring the hostname and gateway. Set the NTP server and time zone. Next is configuring the WAN DHCP information. Since it’s DHCP and there are no special settings, Next. Configure the LAN Interface is next. As it’s already configured, I left it at the initial settings. Next, reset the admin account password. And done; click the Reload button and the firewall is ready to use.

As I have a Wireless connection as well, I needed to add that interface in. Under Interfaces, select (assign) to show the existing three. It shows the first two, WAN and LAN, and an Available network port for the third interface. Click Add and it becomes ‘Opt1’. Click on it and it takes you to the configuration page for the interface. Initially it’s not configured. I selected Static IPv4 from the IPv4 Configuration Type drop-down and entered the new IP address in the IPv4 Address field. I did not check the Reserved Networks checkboxes. Click the Enable checkbox at the top and click the Save button. It tells you the configuration has changed and that you need to Apply Changes. Note that this stays as a reminder even if you close the browser tab.

Next is the Firewall drop down. As I have no reason to permit inbound traffic to my system, I left the WAN configuration at default of ‘All incoming connections’…’will be dropped’. LAN configuration was also left at default.

The OPT1 (Wireless) interface had two rules added.

As I wanted it to pass traffic to and from the ‘net, I added an Allow to Any rule. Interface: Opt1, Address Family: IPv4, Protocol: any (note the default is tcp; it caught me initially 🙂 ). Finally Source Opt1 Net, Destination any. Description is ‘Default allow OPT1 to any rule’.

Not done though. I don’t want Wireless traffic permitted on the internal network so I added a second rule. This one is Action Reject, Protocol any (don’t forget, default is TCP), Source OPT1 net, Destination LAN net. Description: Drop inbound traffic to Internal.
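Since pfSense is pf underneath, those two OPT1 rules correspond roughly to this pf-style sketch (illustrative only, not the config pfSense actually generates; the macro names are made up, and raw pf is last-match-wins unless `quick` is used, which is why it appears here while the pfSense GUI evaluates rules top-down on its own):

```text
# drop wireless traffic aimed at the internal LAN first
block in quick on $opt1_if from $opt1_net to $lan_net
# then pass everything else from wireless out to the net
pass  in quick on $opt1_if from $opt1_net to any
```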

And as far as the firewall configuration is concerned, I’m done. I did want to use some of the other features, so I started poking around the menus a bit. I set up three services: DHCP, DNS, and NTP.

I set up DHCP to be enabled on the LAN interface and added an address range for it to hand out. I set the DNS server to be the pfSense server itself. The default gateway is also the pfSense server, so that can be left blank. I added a Domain search list of ‘internal.pri’ as I use that for all my behind-the-firewall domains. I also enabled the RRD statistics graphs as I’m used to using rrdtool.

I can also enable DHCP on the Wireless LAN, but since I have an Apple Extreme WAP, it already handles that for me, so I ignored it.

For DNS, I only needed to use the General Settings as it wasn’t all that complicated. I did Enable Forwarding Mode but left everything else alone. As I started adding VMs though, I added the IPs to DNS.

In order for CygWin on my Windows 10 system to use it for DNS, I had to manually enter ‘internal.pri’ in the ‘DNS suffix for this connection’ box:

Network and Internet
Change adapter options
Click on Network 2
Change settings of this connection
Click on the Internet Protocol Version 4 item
Click Properties
Click Advanced
Make sure the IP addresses and Default gateways are right (should be)
Click the DNS tab
Make sure the DNS servers are correct (again should be)
Under the DNS suffix for this connection add internal.pri.

For NTP, I added a few more pool servers as it works best with at least 3 and I set up 5. I did enable RRD graphs for NTP.

And that’s it. You can check out stats for the various services by looking under Status and troubleshoot under Diagnostics.

All in all it seems to be working as desired.

Posted in Computers | Leave a comment

Colonoscopy Time!

Per all the recommendations, it’s something that really should be done starting at age 50, and earlier if you have a family history. I just had mine done and I’m 6 months away from 60.

One of the problems I had with doing it at 50 was the cost. At around $4,000 and not always covered by whatever plan you’re on at the time, it wasn’t something I was financially prepared for. I’ve been saving over the past 2 years through an HSA, though, so I can have it done now.

I went in a month or so back due to a different medical problem (first time seeing a doctor in many years) and the doctor said I should schedule myself for the procedure. The big plus: as of last year, Medicaid pays fully for the procedure, making it cost me nothing. Even with a high copay right now, my cost and copay are zero bucks.

Sign me up then!

I heard the horror stories about making sure you were always a few feet away from the toilet because the preparation meds really, really clean you out (really!) and quickly. I did get the 4 liter container with the preparation chemicals plus a small packet of lemon flavoring. I will say lemon does leave a film on the teeth. I started prep last night at 6pm per instructions, having eaten nothing since 7am (the procedure was scheduled for this morning at 9:30am). It really wasn’t as bad as folks say, in my experience. The fluid felt a bit thick, like taking the Alka-Seltzer Cold, but a touch on the salty side, maybe due to it having electrolytes mixed in. While I did have to hit the toilet several times before going to bed and again in the morning, I mostly had a sense of fullness vs the urgency you feel with actual diarrhea when you eat something that disagrees with you.

I had to drink a glass every 15 to 30 minutes until the liquid leaving the body was clear. You also must drink a lot of water because the fluid will dehydrate you (I drank about 5 liters of water throughout the day in addition to the 2 liters of laxative starting at 6). The nurse also said to drink half starting at 6 and the remainder before 7am this morning. By 7am I’d finished it off and the exiting fluids were mainly clear with a yellow tinge.

Another thing folks mention is being very hungry. I have to say I felt almost no hunger throughout the day. I did spend my time hacking code and playing Doom after adjusting the Gamma to 1.0 (much much better).

Getting to the clinic was no big deal and the nurse said they were ready right now (8:45 vs 9:30) so I was brought back, given instructions, and signed the waiver that let them remove any polyps found. She did say at my age, about 30% of patients have polyps and that the most they’d ever removed was 49 in one session! She said smokers typically had more polyps than non-smokers.

I stripped, got dressed in the gown, lay down, and she covered me with a warm blanket. Did the blood pressure cuff, the finger cuff for heart rate, and then tried to locate a vein for the drugs. Being dehydrated even with the water I drank, she had a hard time. After many slaps trying to raise a vein, she finally tried my right hand. Insertion was pretty damned painful and it felt like she hit a tendon (the funny bone feeling along the finger). She pulled it out and observed “no bleeding”. Even with the large amount of water, I was still dehydrated. I pointed out that when I give blood, my left arm is generally the best place. She removed the cuff and was able to (again, painfully) successfully insert the needle.

And we’re ready to go!

I was wheeled down the hall into the room. The room itself was pretty cool, with a large screen to my left and several screens and dangly bits to my right. I was instructed to lie on my left side and bring my legs up as if I were sitting down. She started the drugs, and within 30 seconds I was feeling dizzy. The drug was a conscious anesthetic, so I was awake but, as friends say, I didn’t care. 🙂

Per the nurse, I wasn’t to drive, drink alcohol, or sign important documents for 24 hours and that I would have trouble remembering the procedure.

I do remember watching part of the procedure on the screen and a few instructions, or just a push by the doc or nurse to shift around a little, but that was it. I don’t remember being wheeled out or being taken to the reception area. I do remember farting though and getting a “good job” from someone. 🙂 My girlfriend was there to drive me home, and I don’t recall asking her to take any pictures of me in the reception area still on the gurney, but she has a few pics 🙂 Apparently I told her about the hand and tendon a few times. I don’t remember getting dressed. I slightly remember leaving the office but not the elevator, and vaguely recall the parking lot but not getting in the car. I do remember her wandering around in the parking lot looking for a gas station so she could get me a soda (I wanted the caffeine before headaches kicked in) and a few “there’s a long runway to get back on the road” comments.

Later Jeanne mentioned I was in the recovery room and the nurse had removed the cuff and finger clip, but I don’t remember the needle being removed. Jeanne did say I got up from the gurney in just a T-shirt and socks, and the nurse walked in on me: “oh my” 🙂 I remember the nurse asking if I wanted something to drink or eat: apple juice, orange juice, graham crackers, or animal crackers. I wanted apple juice. It was warm, but I drank it down and threw the cup away. Even with Jeanne reminding me, I don’t remember getting dressed.

I did take a slight nap in the car, and as we were getting close to home, I recall cold sweats and feeling nauseous, probably because I’d been fasting for more than 24 hours. Jeanne was willing to get me something to eat, but I only wanted some toast, unbuttered, and drank some of the soda. I tucked in on the couch and slept for about 4 hours (starting around 1pm). I felt a lot better after that, and Jeanne headed out to get lunch.

Oh, and they removed 2 polyps, so fewer than the record, but I’m in the 30% of patients. I do remember that.

It’ll be about a week before they return the results of the biopsies. The results will dictate how often I’ll need to return for further cleanup. 10 years is the default, but it could be sooner if there are problems with the biopsies.

Posted in About Carl | 1 Comment

Using a VMWare Virtual Machine As An Internet Gateway

I’ve been using my old cast-off computers as internet gateways for a couple of decades now. I’d upgrade my current system, build a new one, and press the old one into service as a new Firewall/Gateway. I seem to learn better when I have something to do with it. I started with Red Hat Linux until it stopped supporting my 3Com Ethernet cards (3c503) (an issue with CIDR). Then Mandrake for several years. Then OpenBSD for a bit. Then Ubuntu. And most recently CentOS 7. I’ve used iptables a couple of times, the OpenBSD pf firewall set, back to iptables on Ubuntu, and now firewall-cmd.

A few months back, one of the teams at work was in the process of replacing their old gear with new gear. Standard lifecycle type stuff. A few of us were asked if we wanted to keep any of these cast-off systems. I was a bit puzzled, but sure, I’ll use it as a sandbox type thing at home. These are bigger systems, as they were the older ESX servers used by the Virtualization Team: a Dell R710 (a rack mount server vs a desktop or laptop system).

Dell R710, 192 Gigs of RAM, two 8-core Xeon X5550 2.67GHz CPUs, two 143 Gig 10,000 RPM drives, four 750 Gig 7,200 RPM drives, 4 onboard Ethernet ports, 4 PCI Ethernet ports.

Woah. Big box. The two 143 Gig drives are mirrored as RAID 1. The four 750 Gig drives are configured as RAID 5. This gives me 143 Gig for boot and about 2 TB for data.

I built it up as a replacement for my old system. Part of this was learning how to use CentOS 7 with the new systemd and *ctl commands. I snagged my rules for the old system (iptables) and learned how to set up the new system.

I’ve used 3 network interfaces for quite a few years. One for Comcast or Internet traffic. One for my Wireless traffic. And one for my internal network. My Wireless has always had a hidden access point with security turned on. Plus it isn’t permitted access to the internal network. It uses the firewall as strictly a pass through to the ‘net. I copied all my files from the old system to the new one and have spent the past few months making sure things work as expected.

But there’s more 🙂

A few months later and they’re still decommissioning old gear. This time I was asked if I wanted a second system. Honestly I was fine with the first one. Lots and lots of power. But one of the other guys suggested I use it as a VMware ESX host. We use VMware a lot at work, with some 400 or so virtual machines that my team manages, plus these were ESX hosts, so they’re really already built for this sort of thing.

While I initially said no, I went back and agreed to take the second one. I’ve been trying to use KVM to learn Kubernetes, Docker, and Ansible but it’s a bit rough to use and I was having the most trouble with getting the network working correctly without dicking up my firewall. So new system.

Dell R710, 288 Gigs of RAM, two 6-core Xeon X5660 2.8 GHz CPUs, two 143 Gig 10,000 RPM drives, four 750 Gig 7,200 RPM drives, 4 onboard Ethernet ports, 4 PCI Ethernet ports.

Well, a _bit_ more RAM. More power per processor, but fewer cores. One of the 750 Gig drives had failed as well. At the recommendation of the same guy at work 🙂 I popped online and chased down the maximum size drives these things would take. 3 Terabytes! I picked up 5 of them, 4 as replacements for the existing 750s and one spare just in case. I pulled the four 750s, put in the four 3 TB drives, and the new system now has 8 Terabytes of disk space in a RAID 5.
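The 8 Terabyte figure checks out once you account for units: RAID 5 across four drives keeps three drives' worth of data, and drives are sold in decimal TB (10^12 bytes) while the OS reports binary TiB (2^40 bytes):

```shell
bytes=9000000000000    # 3 data drives x 3 TB (decimal): 9 TB
tib=1099511627776      # one TiB = 2^40 bytes
echo $((bytes / tib))  # prints 8
```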

I installed vSphere from VMware and started building virtual systems. First, an 8-node setup for my Kubernetes and Docker testing. Next, three nodes for my development environment. And most recently, 3 nodes for my Ansible environment plus 12 nodes for the Ansible sandbox (4 CentOS 5.4, 4 CentOS 6.5, and 4 CentOS 7.2 systems to mirror the basic work environment).

But you know, since the system is up and running, I should be able to create a virtual system that’s pretty thin but with all the power I need for a firewall. Updates to the firewall system can take a good bit of time if I have to reboot, say for a kernel upgrade. I have the configuration from the current system, of course, to apply to the new system. And I would just need to move the cable from the current physical gateway to the virtual gateway to test, then back if it’s not working as expected.

But first, let’s move the existing web sites off the firewall to the virtual environment. They don’t need to be up all the time. Just the backup for the forums and blogs.

The photo site takes about 250 gigs; pictures (35 Gigs), site backups (42 Gigs), and system backups (175 Gigs). So a new VM with 500 gigs will at least hold all this while I build up a firewall.


The vSphere client has its own idiosyncrasies, such as needing the E1000 interface with CentOS 5 systems, or needing to disconnect the ISO before resetting the system or the ISO gets locked and I have to close the vSphere client. Also, at recommendation, I enabled ssh access to the system. Even better; now I can poke around with the esxcfg utilities. I believe everything you can do in vSphere can be done with the CLI, which makes me happier.

Next up, I need to create a couple of vSwitches (Virtual Switches). With these, I can isolate the three networks and ultimately set up the second R710 as a second ESX host in a cluster.

In the vSphere client, select the home system and the Configuration tab. Click on Networking and you’ll see your current configuration. If you want to add network adapters to your current switch, you’d click on Properties, add adapters, and then ‘Team’ them. In this case I want to create unique networks, so click on ‘Add Networking’, then Next as you’re adding a new network. Select the NIC you want to use for the new vSwitch and click Next. Give it a good label; I used ‘Wireless’ for one and ‘External – Comcast’ for the third switch. Then Finish. Do the same for the second switch if that’s what you’re adding. I’m adding two switches, so I added the second one.

For the VM, add the two additional networks to the base configuration. One for Wireless and one for the External – Comcast network. I of course upgraded the system once it was booted from kickstart. Next up, setting up the firewall.


Make sure you have forwarding enabled as this will be a router. In /etc/sysctl.d, add the following line to ’99-ipforward.conf’.

net.ipv4.ip_forward = 1
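No reboot is needed to pick the setting up; sysctl can load the drop-in directly, and the live value is visible under /proc (the -p load needs root):

```shell
# load the drop-in without rebooting (run as root)
sysctl -p /etc/sysctl.d/99-ipforward.conf
# confirm the kernel is forwarding: 1 means on
cat /proc/sys/net/ipv4/ip_forward
```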

Next of course, start firewalld.

# systemctl enable firewalld
Created symlink from /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service to /usr/lib/systemd/system/firewalld.service.
Created symlink from /etc/systemd/system/ to /usr/lib/systemd/system/firewalld.service.
# systemctl start firewalld

Now let’s see what’s in place by default.

# firewall-cmd --get-zones
block dmz drop external home internal public trusted work

For firewall-cmd, I need to utilize only three of the available zones. So I don’t have to define new zones, I’ll associate each of the three interfaces with an appropriate existing zone: dmz for wireless, internal for the home network, and external for Comcast.

And let’s see what the default zone is.

# firewall-cmd --get-default-zone

The default zone is ‘public’, so first I want to change it. In general, always add --permanent when configuring the system; that writes the change to the files in /etc/firewalld. You can’t for this particular command, but keep it in mind for future ones.

# firewall-cmd --set-default-zone=internal

I have three interfaces. Sadly they come up as eno16777984, eno33557248, and eno50336512. I want to rename them to be a bit shorter and easier to manage. In /usr/lib/udev/rules.d, edit the 60-net.rules file and add the following line for each of your interfaces. You’ll need to get the MAC address of each interface beforehand, so run ifconfig or ip addr first.

ACTION=="add", SUBSYSTEM=="net", DRIVERS=="?*", ATTR{address}=="00:50:56:8e:3f:a7", NAME="ens192"

Replace the ATTR value with the interface’s MAC and NAME with what you want to call it. I called mine ens192, ens193, and ens194, although you could call them internal, external, and wireless I suppose. Don’t forget to rename the ifcfg files in /etc/sysconfig/network-scripts and update the files themselves (the NAME and DEVICE keywords). And of course update the three files with the additional correct information. My External interface (ens193) would just use DHCP to connect to Comcast.
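Collecting the MACs for those rules can be done in one pass over /sys rather than eyeballing ip addr output (a small sketch):

```shell
# print "interface MAC" pairs for every NIC except loopback
for dev in /sys/class/net/*; do
  name=$(basename "$dev")
  [ "$name" = "lo" ] && continue
  printf '%s %s\n' "$name" "$(cat "$dev/address")"
done
```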

With the interfaces more sanely named, I want to bind them to the appropriate zones:

# firewall-cmd --permanent --zone=internal --add-interface=ens192
# firewall-cmd --permanent --zone=external --add-interface=ens193
# firewall-cmd --permanent --zone=dmz --add-interface=ens194

Unfortunately this doesn’t persist even with --permanent. I had to add the zone information to each of the ifcfg files as ‘ZONE=’ so when I reboot, they’re still attached to the correct zone.
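For reference, each ifcfg file ends up with the zone line in it; a sketch for the internal interface (the other keywords are whatever your install already has):

```text
# /etc/sysconfig/network-scripts/ifcfg-ens192 (excerpt)
NAME=ens192
DEVICE=ens192
ONBOOT=yes
ZONE=internal
```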

Next, check the default services as assigned:

# firewall-cmd --list-services --zone dmz
# firewall-cmd --list-services --zone external
# firewall-cmd --list-services --zone internal
dhcpv6-client ipp-client mdns samba-client ssh

On my current physical system, I have the following final configuration:

# firewall-cmd --list-services --zone=dmz
# firewall-cmd --list-services --zone=external
# firewall-cmd --list-services --zone=internal
dhcpv6-client ipp-client mdns samba-client ssh

I want to make sure the appropriate services are configured. By default, I don’t want wireless or external listening for anything, but internal should be listening. firewall-cmd is aware of quite a few services:

# firewall-cmd --get-services
RH-Satellite-6 amanda-client bacula bacula-client dhcp dhcpv6 dhcpv6-client dns freeipa-ldap freeipa-ldaps freeipa-replication ftp high-availability http https imaps ipp ipp-client ipsec iscsi-target kerberos kpasswd ldap ldaps libvirt libvirt-tls mdns mountd ms-wbt mysql nfs ntp openvpn pmcd pmproxy pmwebapi pmwebapis pop3s postgresql proxy-dhcp radius rpc-bind rsyncd samba samba-client smtp ssh telnet tftp tftp-client transmission-client vdsm vnc-server wbem-https

For purposes of this though, I only care about ssh. Since it’s there, I don’t have to create a special rule for it. First off, remove any existing services that may be pre-configured.

# firewall-cmd --zone=dmz --remove-service=ssh --permanent
# firewall-cmd --zone=external --remove-service=ssh --permanent

Next if ssh isn’t part of the zone already (and it is by default) add ssh to your internal zone.

# firewall-cmd --zone=internal --add-service=ssh --permanent

And Masquerading is required in order for networking to work. Add it to the external zone.

# firewall-cmd --zone=external --add-masquerade --permanent

Once done, you’ll need to reload the firewall configuration.

# firewall-cmd --reload

Confirm by listing the services available to the zones now.

# firewall-cmd --zone=dmz --list-services
# firewall-cmd --zone=internal --list-services
dhcpv6-client ipp-client mdns samba-client ssh
# firewall-cmd --zone=external --list-services

That should be it. Don’t forget to re-IP the public interface on the VM before rebooting. And I had to reconfigure my external server to accept connections from the new DHCP IP I received from Comcast.

Posted in Computers | Tagged | 2 Comments

Weeds and Lawns

The annoying broadleaf weed is Common Mallow. Get ground damp and pull.

Pointy leaves, purple flowers and dense seed puffs is Canadian Thistle. Paint leaves with herbicide.

The succulent in the cracks and edge of the grass is Purslane. Easily pulled when young.

The quick growing vine that climbs fences and plants is Bindweed. Paint with herbicide.

The clover-looking weed with dark leaves is Black Medic. Easily pulled from damp lawns.

Lawn care fact sheet.

“Paint leaves” is for where you want to keep the underlying or surrounding vegetation. For gravel or fence lines, just spray. The blue Bayer stuff works well.

Posted in Home Improvement | Leave a comment