Migrating KVM Guests


This article describes the process of migrating a virtual machine from one physical host to another.


There are two methods by which the virtual machines were built on the current hosts. The older way is to create an LVM slice on the disk and lay a base image over the top of it using dd. The second, more common process creates and stores the images as files on the host.

Guest Shutdown

For any of the non-OpenShift (OCP) systems, you have a couple of methods of shutting down the systems. You can log in to the server and shut it down.

ssh tato0cuomifnag02
sudo su -
shutdown -t 0 now -h 

Or use virsh console from the underlying host to log in and shut it down. (Reminder: the _domain and _pxe suffixes are assignments created by the new automation process.)

virsh console tato0cuomifnag02
login: root
shutdown -t 0 now -h  


An interesting difference between a Kubernetes Control Node and an OCP Control Node is the extra pods used to manage the OCP cluster: the oauth pod, registry pods, console pods, and others. This means that while a drain isn’t necessary on a Kubernetes Control Node, you should drain an OCP Control Node so that any control pod such as oauth continues to be available to the cluster.

This is a concern because if a control node fails for whatever reason, the cluster may be unavailable until replacement pods are created. OCP should notice the loss of an important pod like oauth and start it up on a different master; I suspect it would occur eventually.

In any case, evict the control and worker node from the cluster before migrating it.

$ oc adm drain bldr0cuomocpwrk02.dev.internal.pri --delete-emptydir-data --ignore-daemonsets --force
node/bldr0cuomocpwrk02.dev.internal.pri evicted
WARNING: deleting Pods not managed by ReplicationController, ReplicaSet, Job, DaemonSet or StatefulSet: openshift-marketplace/redhat-operators-8kqpc; ignoring DaemonSet-managed Pods: openshift-cluster-node-tuning-operator/tuned-w5r84, openshift-dns/dns-default-th2ql, openshift-dns/node-resolver-vbw7d, openshift-image-registry/node-ca-j2nrk, openshift-ingress-canary/ingress-canary-d6l42, openshift-machine-config-operator/machine-config-daemon-z5hzf, openshift-monitoring/node-exporter-rqj52, openshift-multus/multus-additional-cni-plugins-h8vcd, openshift-multus/multus-mqg5z, openshift-multus/network-metrics-daemon-npcjh, openshift-network-diagnostics/network-check-target-lflxb, openshift-sdn/sdn-zgqrt
evicting pod openshift-monitoring/thanos-querier-7c8bb4cdbd-n97pv
evicting pod default/llamas-6-p84z2
evicting pod default/inventory-4-szhvw
evicting pod default/photo-manager-4-cqqbc
evicting pod openshift-marketplace/redhat-operators-8kqpc
evicting pod openshift-monitoring/alertmanager-main-1
evicting pod openshift-monitoring/prometheus-adapter-66ff97555b-x92r2
pod/redhat-operators-8kqpc evicted
pod/inventory-4-szhvw evicted
pod/alertmanager-main-1 evicted
pod/llamas-6-p84z2 evicted
pod/photo-manager-4-cqqbc evicted
pod/thanos-querier-7c8bb4cdbd-n97pv evicted
pod/prometheus-adapter-66ff97555b-x92r2 evicted
node/bldr0cuomocpwrk02.dev.internal.pri evicted
$ oc get nodes
NAME                                       STATUS                     ROLES    AGE   VERSION
bldr0cuomocpctl01.dev.internal.pri   Ready                      master   13d   v1.22.3+e790d7f
bldr0cuomocpctl02.dev.internal.pri   Ready                      master   13d   v1.22.3+e790d7f
bldr0cuomocpctl03.dev.internal.pri   Ready                      master   13d   v1.22.3+e790d7f
bldr0cuomocpwrk01.dev.internal.pri   Ready                      worker   13d   v1.22.3+e790d7f
bldr0cuomocpwrk02.dev.internal.pri   Ready,SchedulingDisabled   worker   13d   v1.22.3+e790d7f
bldr0cuomocpwrk03.dev.internal.pri   Ready                      worker   13d   v1.22.3+e790d7f
bldr0cuomocpwrk04.dev.internal.pri   Ready                      worker   13d   v1.22.3+e790d7f
bldr0cuomocpwrk05.dev.internal.pri   Ready,SchedulingDisabled   worker   13d   v1.22.3+e790d7f

The --delete-emptydir-data option is needed when a pod uses the emptyDir storage method. Moving a pod that uses it deletes any data in that emptyDir location.

The --ignore-daemonsets option acknowledges DaemonSet-managed pods. They run on every node and can’t be evicted; you’re simply confirming that you know such pods exist and that it’s fine to cordon the node anyway.

The --force option deletes pods that aren’t managed by a controller such as a ReplicationController, ReplicaSet, Job, DaemonSet, or StatefulSet.

Once evicted, log in to each OCP/K8S server being migrated and shut it down.

ssh tato0cuomocpbld01
sudo su -
cd /home/ocp4
ssh -i id_rsa core@tato0cuomocpctl01
sudo su -
shutdown -t 0 now -h 

Migrate LVM Guests

This section details migrating an LVM-built guest.

First identify the guests on the host so you know which ones to migrate; for example, for the upcoming event where the physical hosts are being moved to a different data center.

# virsh list --all
 Id    Name                           State
 2     tato0cuomifnag01               running
 4     tato0cuomifnag02               running

For example, migrating tato0cuomifnag02. You’ll need to know the device path in order to get the LVM information.

# ls -la /dev/pool2
total 0
drwxr-xr-x.  2 root root  200 Feb  7 01:40 .
drwxr-xr-x. 23 root root 4180 Feb  7 02:07 ..
lrwxrwxrwx.  1 root root    8 Feb  7 01:40 tato0cuomifnag01 -> ../dm-44
lrwxrwxrwx.  1 root root    8 Feb  7 01:06 tato0cuomifnag02 -> ../dm-45

Now you can run lvdisplay to get the size of the image. The value you want is the Current LE value.

# lvdisplay /dev/pool2/tato0cuomifnag02
  --- Logical volume ---
  LV Path                /dev/pool2/tato0cuomifnag02
  LV Name                tato0cuomifnag02
  VG Name                pool2
  LV UUID                MFBxt1-8yFR-EOd4-TVZD-nQlh-RUIu-GweC8c
  LV Write Access        read/write
  LV Creation host, time tato0cuomifnag02, 2018-01-30 15:02:24 -0600
  LV Status              available
  # open                 1
  LV Size                20.00 GiB
  Current LE             5120
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:46

Create a new logical volume of the same size on the destination server.

lvcreate -l5120 -ntato0cuomifnag02 vg00
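The -l5120 here matches the Current LE value from lvdisplay; with LVM’s default 4 MiB extent size that works out to the same 20 GiB (an equivalent option would be -L20G). A quick sanity check of the arithmetic:

```shell
# 5120 extents x 4 MiB per extent = 20480 MiB = 20 GiB
echo "$(( 5120 * 4 / 1024 )) GiB"
```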

Run the following command to migrate the image. You’ll need to be able to ssh as root to the destination server.

dd if=/dev/pool2/tato0cuomifnag02 | pv | ssh -C root@destination dd of=/dev/vg00/tato0cuomifnag02

The nice thing is that -C compresses the data in transit, and since it travels over ssh the copy is encrypted.
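Before cutting over, it’s worth confirming the copy is bit-for-bit identical. A minimal sketch, assuming the same source and destination paths as the dd command above and that root ssh to the destination still works:

```shell
# Checksum the source LV locally and the copied LV remotely; the sums must match.
src_sum=$(dd if=/dev/pool2/tato0cuomifnag02 bs=4M status=none | md5sum | awk '{print $1}')
dst_sum=$(ssh root@destination "dd if=/dev/vg00/tato0cuomifnag02 bs=4M status=none | md5sum" | awk '{print $1}')
if [ "$src_sum" = "$dst_sum" ]; then
  echo "checksums match"
else
  echo "MISMATCH - re-run the dd copy" >&2
fi
```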

Migrate Images

This section describes migrating a file-backed image and starting it up on the other host.

Per libvirt, shutting down the guest stops it, but you’ll also need to stop the storage pool.

virsh pool-destroy tato0cuomifnag01_pool

Now that both the guest and the storage pool have been stopped, copy the image from the /opt/libvirt_images/tato0cuomifnag01_pool directory to the destination server. Use the /opt/libvirt_images directory as the target as it has sufficient space for larger images such as the katello server.

scp commoninit.iso [yourusername]@nikodemus:/opt/libvirt_images/
scp tato0cuomifnag01_amd64.qcow2 [yourusername]@nikodemus:/opt/libvirt_images/

On the destination server, create the pool directory and move the images into the /opt/libvirt_images/tato0cuomifnag01_pool/ directory. You’ll need to set ownership and permissions as well.

mkdir /opt/libvirt_images/tato0cuomifnag01_pool
cd /opt/libvirt_images
mv commoninit.iso tato0cuomifnag01_pool/
mv tato0cuomifnag01_amd64.qcow2 tato0cuomifnag01_pool/
chown -R root:root tato0cuomifnag01_pool
find tato0cuomifnag01_pool -type f -exec chmod 644 {} \;

Extract Definitions

Once the images have been copied to the destination host, you’ll need to extract the domain definitions and, for the guests that are images, the storage pool descriptions.

Extract the guest definition.

virsh dumpxml tato0cuomifnag01_domain > tato0cuomifnag01.xml

For the guests that are images (the new automation process), extract the storage pool definition.

virsh pool-dumpxml tato0cuomifnag01_pool > ~/tato0cuomifnag01_pool.xml

Copy Definitions

Once you have the definitions, copy the xml files to the destination server.

scp tato0cuomifnag01.xml [yourusername]@nikodemus:/var/tmp
scp tato0cuomifnag01_pool.xml [yourusername]@nikodemus:/var/tmp

Import Definitions

Log into the destination server and import the domain definition. The LVM-based guest may require editing the xml file in case the source LVM slice differs from the destination LVM slice.

virsh define /var/tmp/tato0cuomifnag01.xml

For the image based guests, import the storage pool definition as well.

virsh pool-define /var/tmp/tato0cuomifnag01_pool.xml

Activate Guests

For the image based guests, activate the storage pool first. The guest won’t start if the storage pool hasn’t been started. Also configure it to automatically start when the underlying host boots.

virsh pool-start tato0cuomifnag01_pool
virsh pool-autostart tato0cuomifnag01_pool

Then start the guest.

virsh start tato0cuomifnag01_domain
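The pool-autostart above only covers the storage. If you also want the guest itself to come back up when the underlying host reboots, virsh supports the same idea for domains (shown here with the example domain name):

```shell
# Mark the domain to start automatically when libvirtd starts at host boot.
virsh autostart tato0cuomifnag01_domain
```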


Rejoin the migrated node to the cluster.

$ oc adm uncordon bldr0cuomocpwrk02.dev.internal.pri
node/bldr0cuomocpwrk02.dev.internal.pri uncordoned

Then check the cluster status to see that the migrated node is up and Ready.

$ oc get nodes
NAME                                 STATUS  ROLES    AGE   VERSION
bldr0cuomocpctl01.dev.internal.pri   Ready   master   13d   v1.22.3+e790d7f
bldr0cuomocpctl02.dev.internal.pri   Ready   master   13d   v1.22.3+e790d7f
bldr0cuomocpctl03.dev.internal.pri   Ready   master   13d   v1.22.3+e790d7f
bldr0cuomocpwrk01.dev.internal.pri   Ready   worker   13d   v1.22.3+e790d7f
bldr0cuomocpwrk02.dev.internal.pri   Ready   worker   13d   v1.22.3+e790d7f
bldr0cuomocpwrk03.dev.internal.pri   Ready   worker   13d   v1.22.3+e790d7f
bldr0cuomocpwrk04.dev.internal.pri   Ready   worker   13d   v1.22.3+e790d7f
bldr0cuomocpwrk05.dev.internal.pri   Ready   worker   13d   v1.22.3+e790d7f


Finally remove the xml files.

rm /var/tmp/tato0cuomifnag01.xml
rm /var/tmp/tato0cuomifnag01_pool.xml


The Recovery process is very similar. In the event the physical host was replaced, we’ll need to migrate all the guests back over to the replacement host.

In order to determine what guests belong on the replaced host, check the installation repositories. Both the terraform and pxeboot repositories are complete installs on all physical hosts for the site. The directory structure is based on the hostname of the physical host. Simply log in to the current hosts, navigate to the repo’s site/hostname directory for the replaced host, and determine which guests need to be migrated back to the replaced host.

Once that’s determined, follow the above process to migrate the guests back to the replaced host.


After all the guests have been migrated back to the replaced host, you’ll need to remove the guests from the holding physical hosts.

virsh undefine [guest]
virsh pool-undefine [guest]_pool
rm -rf /opt/libvirt_images/[guest]_pool

For LVM based guests, you’ll need to use the lvremove command.
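A minimal sketch, assuming the guest’s logical volume landed in vg00 as in the dd example above; double-check the path with lvdisplay first, since lvremove destroys the data:

```shell
# Confirm you have the right LV, then remove it (destructive!).
lvdisplay /dev/vg00/tato0cuomifnag02
lvremove -y /dev/vg00/tato0cuomifnag02
```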


Some information that’s helpful during the work.

If you accidentally pool-destroy (stop) the wrong pool, the guest doesn’t stop working. Remember the command simply marks the storage pool as inactive; it doesn’t actually shut down storage, and as long as the guest is running, the pool remains available to the guest. However, if you stop the guest and try to start it again while the storage pool is inactive, the guest will not start. Run pool-start for the storage pool and it’s active again.


References

  • virt-backup.pl – Alternative Perl script to migrate LVM images.
  • https://docs.openshift.com/container-platform/4.9/nodes/nodes/nodes-nodes-working.html

Computers And Me

Over the past 12 months, I’ve deep dived into automation. I’d been investigating this for some time prior to that but this was work related. This involved research into using Terraform to automatically build virtual machines and Ansible to configure the virtual machines. I’ve used Ansible in the past but this again was a deep dive. Due to the method of building an Openshift Container Platform (OCP), I also used tftp and pxe to automatically build an OCP cluster.

As a result, I built 92 virtual machines, including three OCP clusters, in 2 hours.

For perspective, a relatively recent project where I built 100 virtual machines using a more manual process, took 18 months.

To be clear, it took 12 months and a ton of experience in building machines manually to get to the point where I could build 92 machines in 120 minutes. But in that time I built the systems over and over again as I tested methods and broke environments. This also means I can now rebuild a system, several systems, or even a complete site in a very short period of time. Minutes instead of days.

I’ve been building systems for over 40 years now. From local area networks, personal gaming systems, systems for my clients, to various flavors of Unix and Linux, to cloud based systems such as Amazon and Google cloud services. I also have quite a few programming projects from back when I started all the way to present day. It’s great fun and keeps me on my toes.

My current home environment is pretty extensive. I use it as a lab where I can try things, break them, and try again. I’m running both a VMware vCenter cluster and a standalone server running Ubuntu to use KVM. Over 100 TB of storage, a TB of memory, and 144 CPU cores. I have several Kubernetes environments consisting of docker servers, docker repositories, Kubernetes clusters, Elastic Stack clusters, and tools like gitlab and jenkins. I’m currently researching some gitops tools such as ArgoCD and Flux. I also have quite a few underlying infrastructure type servers and development servers. Total of about 150 servers.

All this has helped me explore and gain experience in development practices and the current work I’m doing with automation and working with developers has increased my knowledge and skills. I look forward to continuing this path and exploring new technologies.


Gitlab Runners


This article provides local configuration details specific to the site. Links to the relevant documentation will also be provided.


The gitlab-runner is a tool that uses the .gitlab-ci.yml file to build, test, and deploy to the target host. Each job runs independently, but if one fails, all subsequent jobs do not run. A gitlab-runner is similar to a Jenkins Agent. You’ll want to install it on a server other than the main gitlab server so workloads don’t impact access to gitlab itself.

Runner Installation

Installation is pretty easy. After you create the new runner server, you pull and install the runner binary, then register it.

Pull the runner binary.

# curl -LJO "https://gitlab-runner-downloads.s3.amazonaws.com/latest/rpm/gitlab-runner_amd64.rpm"
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  418M  100  418M    0     0  1583k      0  0:04:30  0:04:30 --:--:-- 1807k

Install the runner.

# rpm -ivh gitlab-runner_amd64.rpm
warning: gitlab-runner_amd64.rpm: Header V4 RSA/SHA512 Signature, key ID 35dfa027: NOKEY
Preparing...                          ################################# [100%]
Updating / installing...
   1:gitlab-runner-14.6.0-1           ################################# [100%]
GitLab Runner: creating gitlab-runner...
Home directory skeleton not used
Runtime platform                                    arch=amd64 os=linux pid=11354 revision=5316d4ac version=14.6.0
gitlab-runner: the service is not installed
Runtime platform                                    arch=amd64 os=linux pid=11363 revision=5316d4ac version=14.6.0
gitlab-ci-multi-runner: the service is not installed
Runtime platform                                    arch=amd64 os=linux pid=11387 revision=5316d4ac version=14.6.0
Runtime platform                                    arch=amd64 os=linux pid=11423 revision=5316d4ac version=14.6.0
INFO: Docker installation not found, skipping clear-docker-cache

Then register the runner (this is internal to my homelab so the token being displayed isn’t an issue).

# gitlab-runner register
Runtime platform                                    arch=amd64 os=linux pid=11468 revision=5316d4ac version=14.6.0
Running in system-mode.

Enter the GitLab instance URL (for example, https://gitlab.com/):
Enter the registration token:
Enter a description for the runner:
Enter tags for the runner (comma-separated):
Registering runner... succeeded                     runner=Li7r2znM
Enter an executor: docker, docker-ssh, virtualbox, docker+machine, docker-ssh+machine, kubernetes, custom, parallels, shell, ssh:
Runner registered successfully. Feel free to start it, but if it's running already the config should be automatically reloaded!

Final Configuration

Once it’s registered, you’ll need to create an RSA key pair and copy the public key to whatever target servers you intend to deploy jobs to. In my example, I have a local server where I can test to make sure things work, and the remote live site. Log in to the two servers once to register their host keys. I’m using php in this case, so the runner server also needs php installed in order to run my minimal lint test of the php scripts.

Note also to get the RSA public keys for any of the artifact servers you pull from. For example, I have artifacts on my two dev servers. This means all the gitlab-runner servers that pull from those two dev servers will need their RSA public keys added to the dev servers.
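A sketch of the key setup, assuming the runner runs as the gitlab-runner user created by the package and using my local deploy target as the example; substitute your own hosts and service account:

```shell
# Generate a key pair for the gitlab-runner user (no passphrase), then
# copy the public key to each deploy target and artifact server.
sudo -u gitlab-runner ssh-keygen -t rsa -b 4096 -N "" -f /home/gitlab-runner/.ssh/id_rsa
sudo -u gitlab-runner ssh-copy-id -i /home/gitlab-runner/.ssh/id_rsa.pub svcacct@ndld1cuomtool11
```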

Create Jobs

You’ll need to create a .gitlab-ci.yml file in your repository that contains the steps required to build the project. In this case, I’m using my small Llamas band website for the example but it could be anything.

Here I define the stages I’ll be using for the deployment.

Each job is unique to the other jobs. Any task you want to do, such as removing the .git directory, will need to be done in each stage. Using tags, you could point each stage at different runners.

stages:
  - test
  - deploy-local
  - deploy-remote

test-job:
  tags:
    - test
  stage: test
  script:
    - |
      for i in $(find "${CI_PROJECT_DIR}" -type f -name \*.php -print)
      do
        php -l ${i}
      done

deploy-local-job:
  tags:
    - home
  stage: deploy-local
  script:
    - rm -rf "${CI_PROJECT_DIR}"/.git
    - rm -f "${CI_PROJECT_DIR}"/.gitlab-ci.yml
    - /usr/bin/rsync -av --delete --no-perms --no-owner --no-group --omit-dir-times --rsync-path=/usr/bin/rsync "${CI_PROJECT_DIR}"/ svcacct@ndld1cuomtool11:/var/www/html/llamas/

deploy-remote-job:
  tags:
    - remote
  stage: deploy-remote
  script:
    - rm -rf "${CI_PROJECT_DIR}"/.git
    - rm -f "${CI_PROJECT_DIR}"/.gitlab-ci.yml
    - /usr/bin/rsync -av --delete --no-perms --no-owner --no-group --omit-dir-times --rsync-path=/usr/bin/rsync "${CI_PROJECT_DIR}"/ svcacct@remote:/usr/local/httpd/llamas/


When you check in the .gitlab-ci.yml file, a pipeline starts. In the project, on the left side, click on the CI/CD and then Pipelines to see the pipeline progress. Note that there is a CI Lint button where you can validate your .gitlab-ci.yml file.

You can see the failed pipeline, due to incorrect spacing for the test script (I verified in the CI Lint section). After fixing, the pipeline passed.

Clicking on the Passed or Failed button will take you to the Pipeline.

You can see the progress of the pipeline. Each stage can be rerun by clicking on the arrow-circle and you can see how the task worked by clicking on the stage.

This is the test-job stage. Line 2 shows it’s running on the dedicated runner server. It’s a ‘Shell’ executor. It pulls the git repo to the working directory. Then runs the quick php lint test on the three files.

Things to Think About

With each stage being a unique task, we could have a runner that only does testing. It would have all the necessary tools to test projects such as php in this case. You could also have a dedicated runner that has access to the local QA box but no access to any other server. Same with remote access. You create tags for test-runner, local-qa, and remote-live for example. Then the three stages in the above example would have appropriate functions.


  • https://docs.gitlab.com/runner/
  • https://docs.gitlab.com/runner/install/linux-manually.html
  • https://docs.gitlab.com/runner/register/index.html#linux
  • https://docs.gitlab.com/ee/ci/yaml/gitlab_ci_yaml.html


Resize KVM Images


In order to properly support the environment, one set of images will be retrieved from the Red Hat OpenShift reference site for Debian, Ubuntu, and CentOS. These images will then be modified in order to support the necessary installations. Based on the requirements, an image will be created to provide sufficient space for the deployed product to operate efficiently.

This document will provide instructions on how to make changes to such images in order to prepare them for use.

Preparing Access

The Cloud images don’t have credentials by default. The intention is to use the cloud-init process to inject the account information for the service account, which then permits access to the image. Because the images aren’t configured to use the Linux Volume Manager (LVM), we’ll need to extend the file systems the old fashioned way.

The tool to use is guestfish. It permits access to the image and the ability to mount a file system which can then be edited. In this case, we’ll want to either create a password for the root user or copy your credentials from the local system. In addition, in order for root to be able to log in, you may need to edit the /etc/ssh/sshd_config file and set the PermitRootLogin option to yes. With those two changes, you can then log in to the image to make any updates. Example session below.

# guestfish -a centos8.qcow2

Welcome to guestfish, the guest filesystem shell for
editing virtual machine filesystems and disk images.

Type: ‘help’ for help on commands
      ‘man’ to read the manual
      ‘quit’ to quit the shell

><fs> run
><fs> list-filesystems
/dev/sda1: xfs
><fs> mount /dev/sda1 /
><fs> vi /etc/shadow
><fs> vi /etc/ssh/sshd_config
><fs> quit

When you’re finished, don’t forget to use guestfish again to access the image and replace the password hash with an asterisk.

Extending Images

There are multiple products being deployed and all have different disk space requirements. You’ll use the following command to access the consoles and create the necessary images. In preparation, copy each of the images to a specific disk size based on the requirements. If we have consistencies and the product has no underlying Operating System requirements, keep the changes to a minimum.

The sizes below are based on a reasonable base size of 20 Gigabytes and then a review of the existing environment, both as configured and the current utilization.

  • DNS Server – 20 Gigabytes, Any Operating System
  • FreeSwitch Server – 20 Gigabytes, Debian 10 Operating System
  • NFS Server – 50 Gigabytes, Any Operating System
  • MongoDB Server – 75 Gigabytes, XFS requirement for the WiredTiger Storage Engine mandating using the CentOS Operating System
  • HAProxy Server – 20 Gigabytes, Any Operating System
  • Provisioning Server – 50 Gigabytes, Any Operating System
  • OpenShift Boot Node – 20 Gigabytes, CoreOS Operating System
  • OpenShift Master Node – 100 Gigabytes, CoreOS Operating System
  • OpenShift Worker Node – 100 Gigabytes, CoreOS Operating System

You’ll use the qemu-img command to extend the images as noted above.

# qemu-img resize debian10.qcow2 20G

Accessing a Debian Console

By default, grub on a Debian 9 and 10 Cloud image has console access disabled. This is a security measure to prevent Out of Band (OOB) access to an image. This does mean you need the ability to run an X server on your laptop. Personally, I use cygwin on my Windows laptops and XQuartz on the Mac. Once prepared, bring up a terminal window and run startx. This should bring up the X server and a graphical terminal console. From there, you’ll need to use Secure Shell with a specific switch in order to access the target server: ssh -Y (target server). You can verify successful access by checking your DISPLAY variable (echo $DISPLAY). If it is set, you should be able to access the Debian image. Don’t forget to change the image flag to Spice or VNC when opening the console.

Copy the retrieved debian10.qcow2 image into a common location where you’ll make the necessary changes, such as /var/lib/libvirt/images/debian10/. Run the following command to bring up a graphical console session. Note that the --graphics flag is spice.

# virt-install \
    --memory 2048 \
    --vcpus 2 \
    --name dbtst \
    --disk /var/lib/libvirt/images/debian10/debian10.qcow2,device=disk \
    --os-type Linux \
    --os-variant debian10 \
    --virt-type kvm \
    --graphics spice \
    --network default

Once the terminal is up, edit the /etc/default/grub file and uncomment the GRUB_TERMINAL=console line. Save it and run the update-grub command. Once that is done, you will be able to bring up a text console in the future to troubleshoot any issues. In this case, you will continue the disk space modifications through the graphical console.

Accessing the CentOS and Ubuntu Consoles

For the CentOS and Ubuntu images, copy the centos8.qcow2 and ubuntu18.img images into the image directory, in this example the /var/lib/libvirt/images/centos8|ubuntu18/ directory. You’ll need to give each a unique name when starting it, as noted in the examples below. Once in the image, you can make the necessary changes, such as increasing the available disk space, then shut the image down.

For CentOS

# virt-install \
    --memory 2048 \
    --vcpus 2 \
    --name cotst \
    --disk /var/lib/libvirt/images/centos8/centos8.qcow2,device=disk \
    --os-type Linux \
    --os-variant centos8 \
    --virt-type kvm \
    --graphics none \
    --network default

And for Ubuntu

# virt-install \
    --memory 2048 \
    --vcpus 2 \
    --name ubtst \
    --disk /var/lib/libvirt/images/ubuntu18/ubuntu18.img,device=disk \
    --os-type Linux \
    --os-variant ubuntu18.04 \
    --virt-type kvm \
    --graphics none \
    --network default

Extending Debian EXT4 File System

By default the Debian image is 2 Gigs in size. This process extends the file system as required. Start the console and log in. This is an EXT4 file system, so you’ll use fdisk along with partprobe and resize2fs to update the partition and file system.

# df -k
Filesystem     1K-blocks   Used Available Use% Mounted on
udev             1014152      0   1014152   0% /dev
tmpfs             204548   2948    201600   2% /run
/dev/vda1        2030416 991160    918068  52% /
tmpfs            1022720      0   1022720   0% /dev/shm
tmpfs               5120      0      5120   0% /run/lock
tmpfs            1022720      0   1022720   0% /sys/fs/cgroup
tmpfs             204544      0    204544   0% /run/user/0

Run fdisk to see that 20 Gigs is available to the system now.

# fdisk -l
Disk /dev/vda: 20 GiB, 21474836480 bytes, 41943040 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0xe5c3b0d8

Device     Boot Start     End Sectors Size Id Type
/dev/vda1  *     2048 4194303 4192256   2G 83 Linux
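fdisk’s Size column is derived directly from the sector counts, which makes it easy to verify the disk really grew:

```shell
# 41,943,040 sectors x 512 bytes = 21,474,836,480 bytes = 20 GiB
echo "$(( 41943040 * 512 / 1024 / 1024 / 1024 )) GiB"
```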

For an EXT4 file system, you’ll need to delete the partition and add it back in at the full available size.

# fdisk /dev/vda

Welcome to fdisk (util-linux 2.33.1).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.

Command (m for help): p
Disk /dev/vda: 20 GiB, 21474836480 bytes, 41943040 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0xe5c3b0d8

Device     Boot Start     End Sectors Size Id Type
/dev/vda1  *     2048 4194303 4192256   2G 83 Linux

Command (m for help): d
Selected partition 1
Partition 1 has been deleted.

Command (m for help): n
Partition type
   p   primary (0 primary, 0 extended, 4 free)
   e   extended (container for logical partitions)
Select (default p): p
Partition number (1-4, default 1): 1
First sector (2048-41943039, default 2048):
Last sector, +/-sectors or +/-size{K,M,G,T,P} (2048-41943039, default 41943039):

Created a new partition 1 of type 'Linux' and of size 20 GiB.
Partition #1 contains a ext4 signature.

Do you want to remove the signature? [Y]es/[N]o: n

Command (m for help): p

Disk /dev/vda: 20 GiB, 21474836480 bytes, 41943040 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0xe5c3b0d8

Device     Boot Start      End  Sectors Size Id Type
/dev/vda1        2048 41943039 41940992  20G 83 Linux

Command (m for help): w
The partition table has been altered.
Syncing disks.

Unfortunately, partprobe isn’t part of the Debian installation. Simply install the parted package and partprobe will be installed in /sbin.

# aptitude install parted
The following NEW packages will be installed:
  libparted2{a} parted
0 packages upgraded, 2 newly installed, 0 to remove and 0 not upgraded.
Need to get 473 kB of archives. After unpacking 809 kB will be used.
Do you want to continue? [Y/n/?] y
Get: 1 http://deb.debian.org/debian buster/main amd64 libparted2 amd64 3.2-25 [277 kB]
Get: 2 http://deb.debian.org/debian buster/main amd64 parted amd64 3.2-25 [196 kB]
Fetched 473 kB in 1s (458 kB/s)
Selecting previously unselected package libparted2:amd64.
(Reading database ... 27035 files and directories currently installed.)
Preparing to unpack .../libparted2_3.2-25_amd64.deb ...
Unpacking libparted2:amd64 (3.2-25) ...
Selecting previously unselected package parted.
Preparing to unpack .../parted_3.2-25_amd64.deb ...
Unpacking parted (3.2-25) ...
Setting up libparted2:amd64 (3.2-25) ...
Setting up parted (3.2-25) ...
Processing triggers for libc-bin (2.28-10) ...

Now run partprobe to register the new partition in the kernel.

# partprobe

Finally use resize2fs to extend the filesystem.

# resize2fs /dev/vda1
resize2fs 1.44.5 (15-Dec-2018)
Filesystem at /dev/vda1 is mounted on /; on-line resizing required
old_desc_blocks = 1, new_desc_blocks = 3
[ 4014.025845] EXT4-fs (vda1): resizing filesystem from 524032 to 5242624 blocks
[ 4014.172547] EXT4-fs (vda1): resized filesystem to 5242624

And we’re now at 20 Gigs.

# df -k
Filesystem     1K-blocks    Used Available Use% Mounted on
udev             1014152       0   1014152   0% /dev
tmpfs             204548    2948    201600   2% /run
/dev/vda1       20608592 1008764  18723724   6% /
tmpfs            1022720       0   1022720   0% /dev/shm
tmpfs               5120       0      5120   0% /run/lock
tmpfs            1022720       0   1022720   0% /sys/fs/cgroup
tmpfs             204544       0    204544   0% /run/user/0

Extending CentOS XFS File System

By default, the cloud image for CentOS 8 is 8 Gigs. The file system is XFS and not EXT4 so you’ll use the XFS tools.

# df -k
Filesystem     1K-blocks    Used Available Use% Mounted on
devtmpfs          897776       0    897776   0% /dev
tmpfs             930128       0    930128   0% /dev/shm
tmpfs             930128   16856    913272   2% /run
tmpfs             930128       0    930128   0% /sys/fs/cgroup
/dev/vda1        8181760 1404372   6777388  18% /
tmpfs             186024       0    186024   0% /run/user/0

When running fdisk, you’ll see the current /dev/vda1 partition size of 16,384,000 sectors and the available sectors at 41,943,040 sectors.

# fdisk -l
Disk /dev/vda: 20 GiB, 21474836480 bytes, 41943040 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0xada233c8

Device     Boot Start      End  Sectors  Size Id Type
/dev/vda1  *     2048 16386047 16384000  7.8G 83 Linux

Grow the partition to the available size.

# growpart /dev/vda 1
CHANGED: partition=1 start=2048 old: size=16384000 end=16386047 new: size=41940992 end=41943039

And then extend the file system to use the entire partition.

# xfs_growfs -d /dev/vda1
meta-data=/dev/vda1              isize=512    agcount=4, agsize=512000 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=1, sparse=1, rmapbt=0
         =                       reflink=1
data     =                       bsize=4096   blocks=2048000, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0, ftype=1
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
data blocks changed from 2048000 to 5242624

When done, the file system is the new size.

# df -k
Filesystem     1K-blocks    Used Available Use% Mounted on
devtmpfs          897776       0    897776   0% /dev
tmpfs             930128       0    930128   0% /dev/shm
tmpfs             930128   16856    913272   2% /run
tmpfs             930128       0    930128   0% /sys/fs/cgroup
/dev/vda1       20960256 1493912  19466344   8% /
tmpfs             186024       0    186024   0% /run/user/0
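
As a recap, the two XFS commands can be sketched the same way: growpart grows the partition and xfs_growfs extends the mounted file system. DEVICE and the partition number are assumptions, and DRY_RUN=1 (the default here) only prints the commands.

```shell
#!/bin/sh
# Sketch of the CentOS/XFS grow steps above.
# DRY_RUN=1 (the default) prints each command; set DRY_RUN=0 to execute.
DRY_RUN=${DRY_RUN:-1}
DEVICE=/dev/vda     # assumed disk
PARTNUM=1           # assumed root partition number
run() {
  if [ "$DRY_RUN" = "1" ]; then echo "$*"; else "$@"; fi
}
run growpart "$DEVICE" "$PARTNUM"        # grow the partition to fill the disk
run xfs_growfs -d "${DEVICE}${PARTNUM}"  # grow XFS online (must be mounted)
```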

Extending Ubuntu GPT File System

The default Ubuntu image is only 2 Gigs in size. You'll need to use the gdisk command in the console rather than fdisk, as this fdisk doesn't work on GPT partitions. The process is similar, though.

# df -k
Filesystem     1K-blocks    Used Available Use% Mounted on
udev             1007580       0   1007580   0% /dev
tmpfs             204072     680    203392   1% /run
/dev/vda1        2058100 1072008    969708  53% /
tmpfs            1020348       0   1020348   0% /dev/shm
tmpfs               5120       0      5120   0% /run/lock
tmpfs            1020348       0   1020348   0% /sys/fs/cgroup
/dev/vda15        106858    3696    103162   4% /boot/efi
tmpfs             204068       0    204068   0% /run/user/0

In gdisk, you’ll need to delete the existing partition and recreate it to the new size.

# gdisk /dev/vda
GPT fdisk (gdisk) version 1.0.3

Partition table scan:
  MBR: protective
  BSD: not present
  APM: not present
  GPT: present

Found valid GPT with protective MBR; using GPT.

Command (? for help): p
Disk /dev/vda: 41943040 sectors, 20.0 GiB
Sector size (logical/physical): 512/512 bytes
Disk identifier (GUID): E1A6C9DD-012D-4943-8697-0FE02F412F36
Partition table holds up to 128 entries
Main partition table begins at sector 2 and ends at sector 33
First usable sector is 34, last usable sector is 41943006
Partitions will be aligned on 2048-sector boundaries
Total free space is 37332958 sectors (17.8 GiB)

Number  Start (sector)    End (sector)  Size       Code  Name
   1          227328         4612062   2.1 GiB     8300
  14            2048           10239   4.0 MiB     EF02
  15           10240          227327   106.0 MiB   EF00

Command (? for help): d
Partition number (1-15): 1

Command (? for help): n
Partition number (1-128, default 1): 1
First sector (34-41943006, default = 227328) or {+-}size{KMGTP}:
Last sector (227328-41943006, default = 41943006) or {+-}size{KMGTP}:
Current type is 'Linux filesystem'
Hex code or GUID (L to show codes, Enter = 8300):
Changed type of partition to 'Linux filesystem'

Command (? for help): p
Disk /dev/vda: 41943040 sectors, 20.0 GiB
Sector size (logical/physical): 512/512 bytes
Disk identifier (GUID): E1A6C9DD-012D-4943-8697-0FE02F412F36
Partition table holds up to 128 entries
Main partition table begins at sector 2 and ends at sector 33
First usable sector is 34, last usable sector is 41943006
Partitions will be aligned on 2048-sector boundaries
Total free space is 2014 sectors (1007.0 KiB)

Number  Start (sector)    End (sector)  Size       Code  Name
   1          227328        41943006   19.9 GiB    8300  Linux filesystem
  14            2048           10239   4.0 MiB     EF02
  15           10240          227327   106.0 MiB   EF00

Command (? for help): w

Final checks complete. About to write GPT data. THIS WILL OVERWRITE EXISTING
PARTITIONS!!

Do you want to proceed? (Y/N): y
OK; writing new GUID partition table (GPT) to /dev/vda.
Warning: The kernel is still using the old partition table.
The new table will be used at the next reboot or after you
run partprobe(8) or kpartx(8)
The operation has completed successfully.

You’ll need to refresh the partition table in the kernel by running the partprobe command.

# partprobe

With the partition recognized, resize the file system to fill the new partition.

# resize2fs /dev/vda1
resize2fs 1.44.1 (24-Mar-2018)
Filesystem at /dev/vda1 is mounted on /; on-line resizing required
old_desc_blocks = 1, new_desc_blocks = 3
The filesystem on /dev/vda1 is now 5214459 (4k) blocks long.

And we’re at 20 Gigs.

# df -k
Filesystem     1K-blocks    Used Available Use% Mounted on
udev             1007580       0   1007580   0% /dev
tmpfs             204072     680    203392   1% /run
/dev/vda1       20145724 1077020  19052320   6% /
tmpfs            1020348       0   1020348   0% /dev/shm
tmpfs               5120       0      5120   0% /run/lock
tmpfs            1020348       0   1020348   0% /sys/fs/cgroup
/dev/vda15        106858    3696    103162   4% /boot/efi
tmpfs             204068       0    204068   0% /run/user/0
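
As an aside, the interactive gdisk delete/recreate dance can usually be replaced by growpart, which also handles GPT disks. This is a hedged sketch with assumed device names; it prints the commands by default rather than running them.

```shell
#!/bin/sh
# Non-interactive alternative to the gdisk session above (sketch).
# DRY_RUN=1 (the default) prints each command; set DRY_RUN=0 to execute.
DRY_RUN=${DRY_RUN:-1}
DEVICE=/dev/vda     # assumed disk
PARTNUM=1           # assumed root partition number
run() {
  if [ "$DRY_RUN" = "1" ]; then echo "$*"; else "$@"; fi
}
run growpart "$DEVICE" "$PARTNUM"    # recreate partition 1 at full size (GPT-aware)
run partprobe "$DEVICE"              # refresh the kernel's partition table
run resize2fs "${DEVICE}${PARTNUM}"  # extend ext4 to fill the partition
```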
Posted in Computers, KVM

Docker Best Practices

Use the Official Docker Image for your Base Image. Download the image from the docker.io site.

Never use the latest tag for an image. For consistency, pin the specific version you want to use. This also prevents pulling in potentially breaking changes.

Use the smallest image that satisfies your requirements. Many times something like Alpine, which is a tiny image, has all the functionality you need. Something like Ubuntu/Debian or CentOS will have a ton of extra, unnecessary tools. It's similar to when we build servers: only start the services that are needed and disable or even remove the ones that aren't used.

Optimize caching of image layers. If you check the Dockerfile for an image, you can see what's been installed. Each command adds a layer to the final container. To optimize, consider how your container is built: whatever changes most often should come later in the Dockerfile. Your application code, for example, might be best at the end. That way the unchanged layers won't need to be rebuilt, since every layer after a changed one is also rebuilt.

docker history image:tag
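
As a sketch of that layer ordering for a Node app (the file names package.json and index.js are assumptions for illustration), the rarely changing dependency install goes early and the frequently changing application code goes last:

```dockerfile
# Layers that rarely change come first so they stay cached.
FROM node:18-alpine
WORKDIR /app

# Dependency manifests change less often than code: install deps here.
COPY package*.json ./
RUN npm ci

# Application code changes most often, so it goes last; only the layers
# from here down are rebuilt on a code change.
COPY . .
CMD ["node", "index.js"]
```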

Exclude unnecessary content to reduce the size of the image. Use a .dockerignore file to skip things like .git.

Remove unnecessary files after the container is built by using multi-stage builds. These let you use staging images to keep development files out of the final image. For example, when you compile a C program you have makefiles and .obj files. With a multi-stage build, the first stage compiles the program and the final image contains only the compiled binary.
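
A minimal multi-stage sketch for the C example (the program and file names are assumptions): the first stage carries the compiler and intermediate build files, and only the binary is copied into the final image.

```dockerfile
# Stage 1: build environment with make, gcc, headers, object files, etc.
FROM gcc:12 AS build
WORKDIR /src
COPY . .
RUN make          # assumed to produce ./app

# Stage 2: runtime image containing only the compiled binary.
FROM debian:bookworm-slim
COPY --from=build /src/app /usr/local/bin/app
CMD ["app"]
```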

Set up an appropriate user to run the final application instead of root; running as root is a security bad practice.

# create group and user
RUN groupadd -r tom && useradd -g tom tom

# set ownership and permissions
RUN chown -R tom:tom /app

# switch to user
USER tom

CMD node index.js

Scan images for vulnerabilities. Use docker scan from Docker Hub: docker scan image:tag.

Posted in Computers, Docker

Replacing An OKD Master

I’m running an OKD4 (aka an upstream Red Hat OpenShift Container Platform v4) cluster at home. Recently my bldr0cuomokdmst1 master node crapped out. I couldn’t even log in to determine the problem. For my current Kubernetes clusters, I’m casting the logs to my ELK clusters but hadn’t done that for the OKD4 cluster yet. So no idea why it failed. Now I need to delete the old master, clear out the configuration, and add it back in again.


First off, you should have already replaced the certificate in your installation process. See my OKD4 Installation post for details on how to do that. As a reminder, the certificate is only good for 24 hours for a new cluster build. After that you need to retrieve the cluster certificate and add it to the ignition file.

High Availability Proxy (haproxy)

I removed the failed master node from the cluster in part because it was causing timeout problems with managing the cluster while I was researching the solution. Sometimes the console would work; other times I suspect it was trying to reach the failed master and timing out. However, deleting the master and adding it back in is a pretty quick process, so removing it from the haproxy configuration might not be necessary.

Log in to your cluster haproxy server (bldr0cuomokdhap1) and in /etc/haproxy, edit the haproxy.cfg file to comment out the bldr0cuomokdmst1 server entry from the configuration. Don’t delete it as we’ll be uncommenting it when the master has been recovered.

Remove Master

Drain the node from the cluster. While the pods won’t be removed, the node will be cleared from the system so it’s not accepting incoming connections any more.

$ oc drain bldr0cuomokdmst1 \
--delete-emptydir-data \
--ignore-daemonsets

This will take some time as requests to the failed master need to time out. Now verify bldr0cuomokdmst1 has been removed as an endpoint in the cluster; its IP should be missing from the output below.

$ oc describe svc kubernetes -n default 
Name: kubernetes
Namespace: default
Labels: component=apiserver
Annotations: <none>
Selector: <none>
Type: ClusterIP
IP Families: <none>
Port: https 443/TCP
TargetPort: 6443/TCP
Session Affinity: None
Events: <none>

Finally delete bldr0cuomokdmst1 from the cluster.

$ oc delete node bldr0cuomokdmst1
node "bldr0cuomokdmst1" deleted

Clear etcd

Since etcd is on the masters and not a separate cluster, we’ll need to remove the etcd configuration as well. You’ll log in to a working etcd pod, remove bldr0cuomokdmst1, then remove any secrets that belong to the master node from the cluster.

Verify Status

$ oc get etcd -o=jsonpath='{range .items[0].status.conditions[?(@.type=="EtcdMembersAvailable")]}{.message}{"\n"}'
2 of 3 members are available, bldr0cuomokdmst1 is unhealthy

Working Pods

$ oc get pods -n openshift-etcd | grep -v etcd-quorum-guard | grep etcd
etcd-bldr0cuomokdmst2                3/3     Running     0          33d
etcd-bldr0cuomokdmst3                3/3     Running     0          33d

Clear Configuration

In this step, you’ll log in to one of the above working pods (2 or 3) and remove bldr0cuomokdmst1 from the etcd configuration.

$ oc rsh -n openshift-etcd etcd-bldr0cuomokdmst3
Defaulting container name to etcdctl.
Use 'oc describe pod/etcd-bldr0cuomokdmst3 -n openshift-etcd' to see all of the containers in this pod.

Get the member list

# etcdctl member list -w table
| e43c9f92fda4af5 | started | bldr0cuomokdmst3 | | | false |
| ac4ca03e8d200e17 | started | bldr0cuomokdmst2 | | | false |
| c7804a193b578f80 | started | bldr0cuomokdmst1 | | | false |

You can see that bldr0cuomokdmst1 is in the configuration. Now remove it.

# etcdctl member remove c7804a193b578f80
Member c7804a193b578f80 removed from cluster 617aed10ec5206e3

Finished. You can exit the etcd pod now.

Remove Secrets

There are three secrets for each master node. You’ll need to get the list and then remove them from the cluster.

$ oc get secrets -n openshift-etcd | grep bldr0cuomokdmst1
etcd-peer-bldr0cuomokdmst1 kubernetes.io/tls 2 205d
etcd-serving-bldr0cuomokdmst1 kubernetes.io/tls 2 205d
etcd-serving-metrics-bldr0cuomokdmst1 kubernetes.io/tls 2 205d
$ oc delete secret -n openshift-etcd etcd-peer-bldr0cuomokdmst1
secret "etcd-peer-bldr0cuomokdmst1" deleted
$ oc delete secret -n openshift-etcd etcd-serving-bldr0cuomokdmst1
secret "etcd-serving-bldr0cuomokdmst1" deleted
$ oc delete secret -n openshift-etcd etcd-serving-metrics-bldr0cuomokdmst1
secret "etcd-serving-metrics-bldr0cuomokdmst1" deleted
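
The three deletions above follow a fixed naming pattern, so they can be scripted. This sketch prints the commands for a dry run (drop the echo to actually delete), taking the failed node's name as an argument.

```shell
#!/bin/sh
# Sketch: remove the three etcd secrets for a failed master node.
# The etcd-peer/etcd-serving/etcd-serving-metrics pattern comes from the
# listing above. "echo" makes this a dry run; remove it to really run oc.
delete_etcd_secrets() {
  node="$1"
  for prefix in etcd-peer etcd-serving etcd-serving-metrics; do
    echo oc delete secret -n openshift-etcd "${prefix}-${node}"
  done
}
delete_etcd_secrets bldr0cuomokdmst1
```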

And the failed bldr0cuomokdmst1 server has been completely removed from the cluster.

Rebuild Master

This process follows the initial build process except with a single node. You’ll boot the server to an ISO image, update the boot line, approve the Certificate Signing Request (csr), and monitor the node.

haproxy Node

If you cleared the master server from haproxy, you'll need to uncomment that line in the haproxy.cfg file. You'll also need the boot node, so uncomment that entry as well. Log in to bldr0cuomokdhap1 and edit the haproxy.cfg file in /etc/haproxy. After updating it, restart haproxy.

Start Guest Image

Update the guest's boot settings so it enters the BIOS when started. Once started, attach the Fedora CoreOS ISO, make sure the system boots from the CD-ROM, then save and start the image.

In the Live Image, tab to the boot line and enter the following parameters at the end.


Monitor the console and you’ll see the system start up, then start downloading the image, and it’ll boot. Remove the ISO from the image after it starts and reboot the system. Initially it’ll retrieve the newest image so you may see it reboot again.

Approve CSRs

Now you need to review and approve any outstanding CSRs.

$ oc get csr
csr-k258q 11m kubernetes.io/kube-apiserver-client-kubelet system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending

There should really be only one, but if there are more, feel free to investigate. Once done, approve the outstanding CSR and pods will start on the bldr0cuomokdmst1 node.

$ oc get csr -ojson | jq -r '.items[] | select(.status == {} ) | .metadata.name' | xargs oc adm certificate approve
certificatesigningrequest.certificates.k8s.io/csr-k258q approved

At this point, the replacement master node is part of the cluster and is creating pods. As there are quite a few (43 at my last count) on a single master node and I’m on high speed WiFi, it will take several minutes.

Verify etcd Configuration

Once all the pods have started and the new cluster member is Ready, verify etcd is also working. First off, check the health of the cluster.

$ oc get etcd -o=jsonpath='{range .items[0].status.conditions[?(@.type=="EtcdMembersAvailable")]}{.message}{"\n"}'
3 members are available

That looks good. Log in to the same etcd pod you did at the start and check the table output.

$ oc rsh -n openshift-etcd etcd-bldr0cuomokdmst3
Defaulting container name to etcdctl.
Use 'oc describe pod/etcd-bldr0cuomokdmst3 -n openshift-etcd' to see all of the containers in this pod
# etcdctl member list -w table
| e43c9f92fda4af5 | started | bldr0cuomokdmst3 | | | false |
| 630bbe550c81b877 | started | bldr0cuomokdmst1 | | | false |
| ac4ca03e8d200e17 | started | bldr0cuomokdmst2 | | | false |

haproxy Update

The final task is to remove the boot server from the haproxy configuration. Log in to bldr0cuomokdhap1 and edit /etc/haproxy/haproxy.cfg. Comment out the boot server lines, save, and restart haproxy.


Posted in Computers, OpenShift

Original Car Wars Play

Morning! Jeanne and I played the old Car Wars Saturday night. Took a bit to track down some of the bits and there are a few others that I haven’t found yet but I’m still hunting.

Top pic is my chit carrying case. Under are the bags of chits from the game and various expansions.

Top pic is the vehicle record sheet I created back then. I was a graphics artist so that’s an ink and paper creation. Under pic is the same record sheet but after spending about a half hour filling it out. I used one of the sample cars but still had to track down information so we had damage points, ammo, etc.

Amazingly I found my first arena, the top pic. All the post-its are holding chits down where they were. Apparently we were playing a game who knows how long ago and intended on continuing. Clearly not. The lower pic is after all the post-its have been removed leaving the original damage and car in place. I suspect the other vehicle dropped off sometime in the past and it may be in a box somewhere.

To the game itself. Jeanne and I have the same car and I didn’t want to add extra bits to the game so no dropped debris from hits. In the pic you can see the original arena, our vehicles, flaming oil chits, and the official turning key. We both have linked rocket launchers with a targeting laser (+1). To hit is 8 or higher on two dice by default and a 7 with the laser and 2d6 damage. To the rear is a Flaming Oil Dispenser, 3d6 damage. We’re coming in at 15 and 20 mph, both with a handling class of 3 and acceleration of 5mph. We also have the Control Table and Speed Table close at hand.

In the first pic, after some maneuvering, Jeanne cut a hard left in front of me, just missing a collision (by car dimensions, not chit dimensions). In the next pic, however, she learned and dropped a flaming oil patch in front of me that I couldn't avoid! 11 points of damage to my undercarriage!

In the final pic, you see the end of the game. She decided to crash head-on, and the damage left her with 2 points of front armor. I fired my rockets into her front end (and took damage too; 2″ radius explosive damage). I still had armor after the explosion, but I breached her armor, destroying both her rocket launchers, her engine, and her targeting laser, leaving her to scramble out of the wreckage and hotfoot it over to the exit.

In comparing the two games, Jeanne enjoyed playing both. The issue with the chit-based game is that the chits were small and tended to stick to the fingers, so moving them around was a bit annoying at times, something 6th fixed with miniatures. We liked the original turning key better than the new turning key that slides under the bases of the miniatures. Setup was certainly significantly quicker with 6th, with cards for the various weapons and accessories. If we had pre-generated vehicle record sheets, setup would have been about as quick, I guess. Having to hunt through the book for stats took quite a bit of time, and I had to look up the Difficulty values for some things like taking damage, but once known it was easy enough to manage (taking damage: 1-5 D1, 6-9 D2, 10+ D3). We really liked the mph speeds versus the 1-5 increments; it made things feel a bit more realistic.

Overall 6th is better just because of the miniatures and having cards for the weapons and accessories. 1st-4th is better though if part of your joy is building the best car. Think of it as closer to a Collectible Card Game like MtG where you’re creating the “Best” deck. And the earlier game has a ton of adventures. But we like both and Jeanne is looking forward to breaking out either game.

Posted in Gaming

Car Wars and Car Wars 6E, A Comparison

I played Car Wars back with the pocket edition and a bunch (not quite all) of the expansions. I even ran an AADA chapter in Virginia for a bit, and when we got into a car accident one Sunday afternoon, the idiots who started the problem pointed to my AADA Autodueling bumper sticker to prove I was driving up and down I-95 causing trouble (with my wife, BIL, and two kids in the car, of course).

For those who haven't played the original Car Wars, it's basically Mad Max, Zelazny's Why Johnny Can't Speed, junkyard car battles, and Robot Wars mixed together.

Car Wars 1st Edition

Car Wars is based on a dystopian future with shortages and minimal government safety in the badlands. Much like junkyard battles with the addition of weapons like machine guns, rocket launchers, and recoilless rifles.

Basically you select a pregenerated car which has a chassis, body, wheels, engine, armor, accessories, and weapons (and a crew) and take the car to an arena and battle it out. Winner takes all.

To keep things fair, you set a price limit. For arena combat, you have ‘Division’ level combats. A Division level 10 area might mean you’re limited to a car that’s worth $10,000. There might be Division 12, 15, 20, 25, and even Unlimited arenas.

This also means you can build your own car (like Robot Wars). There are rules on how to build a car in the original Car Wars. To me, this was half the fun. My first ever computer program back in the early 80’s was a Car Wars Vehicle Generation Program. I recently found the source code and it’s up on my github site now 🙂 You had several tables with the things you need to build a vehicle. Chassis, body type, engine, etc.

For movement, there was a movement key: an 'L'-shaped key with Difficulty (D) markings, D1-D6. The sharper the turn, the more D you accumulated. You compared this against your speed on a table. The lower your speed, the less likely you'd crash. As you went faster, control was more easily lost when maneuvering, to the point that you needed to roll a die to see if you lost control. If you did, or if you exceeded your control value by too much, you went to the crash tables. You could skid, slide, or even roll the car over, one side for every 10 mph you were going.

One of the very cool maneuvers was a Bootlegger Reverse. You’d use the key to turn your car sideways and then point back in the opposite direction at 0mph. Cool stuff.

Combat was basically rolling two dice based on your weapon; meeting or beating a specific target number meant you hit. However, there were a ton of modifiers: distance, speed, cover, etc. would increase or decrease the target number. If you hit, the target would take damage, generally to the armor first, then to components or even the crew, killing the driver and/or gunner.

There were several rules on collisions, running into or over obstacles, pedestrians, plus lots of new vehicles over time like motorcycles, trailers, helicopters, boats, and even tanks.

Subsequent Editions

I didn’t continue with the game after the 1st edition. From what I can discover, 2nd, 3rd, and 4th editions were pretty similar if pared down. 1st had a ton of expansions and gear. Arenas, road sections, adventures, even books on background of various areas of the US and world. Subsequent editions just improved the rules but you could still use the old expansions.

5th Edition

5th Edition made some significant changes. You had a Division booklet with two pregenerated vehicles and fought each other. I looked at it back then, and my preference was to create the vehicle, not use a pregenerated one, regardless of who created it.

6th Edition Kickstarter

When the 6th Edition kickstarter was announced, I was skeptical. But in reviewing the information, it seemed like it was returning to the roots of the game to some extent. And it had vehicle miniatures. I put up my money and got everything I could. I like Car Wars and if it’s going back to where I can build vehicles, I’m in.

We received the first package yesterday morning and played last night. The first box is the Double Ace box.

  • Core game with two boxes of miniatures (red/yellow and blue/green sets) and a rules box with cards, 5 sets of dice, chits, and pregenerated cars.
  • Miniatures Set 1 with 5 more miniatures
  • Miniatures Set 3 with 5 more miniatures
  • Dice Bag
  • Road Tiles with more chits (Control, Ace, and Damage)
  • Crew Pack with more crew options and weapons
  • Dropped Weapons pack with mines, paint, spikes and other debris you can drop from your vehicle
  • Linked Weapons Pack with 8 linked weapons and turreted weapons
  • Armory Pack with 8 more weapons like Gauss and Railgun and turreted weapons

We basically played just the core set last night. My wife playing green and me in blue.

The core game has a dashboard now. It has armor tracks for all four sides of the vehicle, plus a tire track, engine track, and speed track. Tires and engine start at 10 points. Then you select a Build Point target for the game. On the tire track there are control markers; you get a control chit at the start of your turn for each marker still visible on the dashboard. As you lose points, you get fewer control chits. Four is the max for tires, plus if you're going slower you can pick up an extra control chit, for a max of five. In addition, your speed runs 0-5, and tire damage can reduce your max speed, down to a max of 1.

You have a set of cards for your vehicle. It has accessories, upgrades, weapons, and crew cards which also have accessories and weapons.

The vehicle cards have build point values in blue in the upper right of each card, from 0 to 8. And damage values in the lower right. Crew cards have crew points in red, from 0 to 8, in the upper right and damage values as well.

We selected 16 Build Points (BP) per the example in the rules, a small car setup. In addition to 4 Armor Points (AP) and 4 Crew Points (CP). You have to have two crew. Rookies are worth 0 so you can get two rookies; driver and gunner, or better crew with abilities.

There are 5 sets of dice. A yellow set, green, red, black, and white set. The dice have three types of faces. Damage (a star), shields, and mechanical. It can be 1 or 2 items (like 2 stars).

For movement, the turn key has five Difficulty (D) maneuvers, D0 through D6. D1 is a green die, D2 is yellow, D3 is red, and D6 is white. For weapons, the cards show how many of each die you roll for damage: two yellow, one red, and one black, for example.

In maneuvering, if you turn, you roll your speed number of yellow dice (so if speed 3, you roll 3 yellow dice) plus the die for each of the D for the maneuver you’re attempting. For example, if you make a D3 move (45 degree turn) and are at speed 3, you roll 3 yellow dice for speed plus an additional green, yellow, and red die. For every shield face that results, you lose one control chit (remember you start with 4 or 5 chits each turn). If you run out of control chits and still have shields, you lose control.

For combat, you roll the dice indicated on the weapon plus any additional dice listed by equipment. The number of stars is the damage dealt. If a mechanical face shows and the weapon has extra abilities like explosion, fire, or tire damage, those take effect. The defensive player rolls a number of yellow dice equal to their speed plus any additional dice from upgrades. Each shield blocks one point of damage.

For every two car lengths, the defensive player can reroll their dice and if they have any Ace tokens, they can spend them to reroll any single die.


The original Car Wars had a lot of tables which added complexity to the game. What’s your Handling Class, speed, what difficulty was the move, look things up, okay roll the dice and check the crash table depending on your vehicle.

The 6th Edition has replaced much of that with chits and dice. It does speed up the game and my wife, who’s not really into the more complex table lookup type games, actually enjoyed the quick game last night.


The price will be the stopping point for all but the most fervent Car Wars fans. At $310.00 it might be just a bit out of reach. 🙂 You can get the starter sets without all the extra bits for much less though. For us fans though, I think this is a good upgrade to the original game. Especially if my wife enjoyed the game. That makes all the difference. I can’t wait to get more games going with her. I’m considering running a few games at my FLGS just to see if there’s interest. Maybe Jamie will stock a game.

Kickstarter vs Retail

Okay, I asked a question on the forums about one of my purchases and have compiled a list of what the Kickstarter items cost versus their retail price.

Site: https://carwars.sjgames.com/products/

Kickstarter/Retail Price:

  • Double Ace – $200/375 (bought separately from the store, below items are 442.55)
    • Core Box – 75/149.95
    • Rules Box – /30
    • Miniatures Box A – /60
    • Miniatures Box B – /60
    • Miniatures Set 1 – 40/59.95
    • Miniatures Set 3 – 40/59.95
    • Dropped Weapons Pack – 20/29.95
    • Crew Pack – 15/29.95
    • Armory Pack – 10/24.95
    • Linked Weapons Pack – 15/24.95
    • Road Tiles – 30/49.95
    • Dice Bag – 10/12.95
  • Double Ace Upgrade Bundle – 165/ (bought separately from the store, below items are 199.80)
    • Miniatures Set 2 – /59.95
    • Miniatures Set 4 – /59.95
    • Vehicle Guide – Not available in Store
    • Wrecks – Not available in Store
    • Playmat #2 – /59.95
    • Dice Pack – /19.95
  • Two Player Starter Set – Red/Yellow – 70/79.95
  • Two Player Starter set – Blue/Green – 70/79.95
  • Uncle Al’s Upgrade Pack – 15/29.95
  • Playmat #1 – 45/59.95
  • Playmat #3 – 45/59.95

Not part of kickstarter (as far as I know of course):

  • Uncle Al’s Arena Supplies #1 – /10
  • Car Wars Clear Bases – /12.95
  • Car Wars Colored Bases – /12.95


  • Kickstarter: $610.00
  • Shop: $1,035.50

So I saved $425.50 by going All-In on the Kickstarter. Which only helps if I actually get folks playing it. 🙂

Posted in Gaming

Recommended Games

Jeanne and I have played several games in the library (see prior post). I did a quick walk-through for someone who asked for a list of good games. While there are quite a few games that are pretty good, for the walk-through I made a quick selection rather than an in-depth review of all the games. So here's the list I provided.

Ace of Aces You're flying WWI airplanes. Your opponent has a book with pictures of your plane, and you have pictures of theirs. You select a maneuver, which has a page number, go to an intermediate page, then to the final page to see what your opponent is doing (including shooting at you).

B-Movie Card Games You’re creating a B-Movie. There are 8 or 10 different decks. You lay down a location, characters, and accessories like a whip. Then your opponents throw monsters at your movie to prevent you from creating the best B-Movie.

Bunny Kingdom Very little conflict. Gridded and numbered board. You pick cards and place your bunnies. There are a couple of squatter cards hoping the official card doesn’t come up but mostly there isn’t a way to take over a space that has a bunny on it.

Castles of Burgundy Manage and increase your hold by rolling dice and selecting resources in order to add farm animals and buildings. This is a very well balanced game for 2 players, 3 players, or 4 players.

Cosmic Encounters Take over alien races with skill, no dice rolling. However you’re playing races that modify the core rules. I think this might have influenced the creation of Magic: The Gathering. Core rules then an alien power that changes a rule which makes the game different every time you play.

DC Deck Building Basically a deck builder set in the DC comics universe: you use your current cards to 'take over' cards from the display. I find this much better than Legendary, BTW.

Discoveries of Lewis and Clark Explore the west and gather routes. Gather native tribes to help in gathering routes.

The Doom That Came To Atlantic City Basically reverse Monopoly. The board, similar to a Monopoly board, is set up with two houses per property. You play an old god and create portals by destroying houses. The person with 6 portals to the nether realms wins.

Elder Sign I like all three of the Cthulhu type games (Arkham Horror and Eldritch Horror) but this is the quicker of the three to play. In this one you’re at a portal and trying to block the coming of the elder gods. Arkham Horror takes like 6 hours to play, Eldritch Horror about 3 hours, and Elder Sign about 90 minutes. There is a second edition Arkham Horror we haven’t tried yet so maybe it’s more streamlined.

Tiny Epic There are several different games in the series: dinosaurs, space, strategy, etc. We’ve played the space one several times. Quick and easy to set up and quick to understand. We enjoy these.

Everdell You have workers and place them on locations, events, and such to build up your own settlement.

Five Tribes You lay out a grid of tiles, then place three meeples on each tile. You pick up all the meeples from one tile (three, more or less), drop one on each tile as you move, and the last one you drop has to match the color of a meeple already on that final tile. Then you collect whatever that color grants.

Formula D This one is cool. You have multiple different dice, from a 4-sided up to a 30-sided, each tied to a gear of your car (the slowest gear uses the 4-sided, the fastest the 30-sided). You have to slow down for corners, dropping gears, or you could spin out and end up in the bushes. Pretty fun game.

Gizmos You’re building engines using marbles from a central pot. Whoever has the best engine at the end of the game, wins.

Gloom You’re trying to kill off your ‘family’ by telling horrible stories about their lives based on the cards drawn. The cards are transparent, with text and pictures, so your negative points can be blocked if someone places a positive card over them. Pretty good storytelling-type game.

Horrified Actually a game that’s much easier for younger kids. One or two classic monsters are loose — Invisible Man, Wolfman, Mummy, etc. — and you and the villagers are trying to stop them.

Mountains of Madness You’re flying a plane into Antarctica and to the Mountains of Madness. You’ll flip cards and deal with the results.

New York 1901 You’re creating buildings. Over time you can replace your bronze tiles with silver and gold to grow from a small building to a skyscraper.

The Others This is based on the 7 deadly sins. You are trying to defeat the core monster of each of the sins. Lots of setup but it can be fun.

Pandemic Mainly Iberia, although base Pandemic itself is pretty good with the expansion.

Photosynthesis You are building trees. The sun rotates around the board so you only get points if your trees aren’t blocked by other, taller trees.

The Red Dragon Inn Card game where four folks have returned from an adventure and are in a tavern enjoying the spoils, drinking, and playing cards. The last one to pass out wins 🙂

Resident Evil This is a card game similar to DC Deck Building or Legendary and does a pretty good job matching the Resident Evil video game.

Robo Rally Kind of a programming game: you lay down cards to create a path that moves your robot around the board.

Splendor A pretty simple game. You have three rows of cards: raw gems, polished gems, and final gems (or jewelry, I forget 🙂 ). You’re building up your cards and gems in order to buy cards from the higher rows. 15 points wins the game.

Star Wars X-Wing A miniatures game, with very large models in some versions. You set your movement on a dial and, at the end of each turn, you could be destroyed.

Talisman You have three realms. An outer, inner, and the final center task to become the ruler of all the realms. You travel around the board (or expansions, I like the City one but the space one is pretty interesting), pick up sufficient gear to beat other players to the center. The main problem here is folks tend to get a lot more than they really need to win so the final encounter is pretty much a done deal. Still a fun game.

The Thing One of several “one of the team is the bad guy” sort of games 🙂 You’re exploring, trying to discover who the bad guy is. If the bad guy escapes on the helicopter, you lose.

Ticket To Ride Mainly Rails and Sails. Train placement to complete routes. The more routes completed, the more points. Rails and Sails lets you place ship routes too on a world map or map of the Great Lakes.

Trains Another deck building type game except you’re building train routes.

Tzolk’in One of the multiple-paths-to-victory type games. You place workers on several interlocking gears that rotate during the game.

Wings of War WWI air combat. Like X-Wing, you have planes that move around a board. You select your maneuvers (three cards) and flip them. At the end, you might get shot down.

Wingspan This is a pretty cool game, especially for the bird cards. Great pictures. You’re trying to attract birds to your location. You get food, eggs, and birds. Most points wins.

Zombicide A zombie game. Lots of setup and lots of expansions. You’re trying to get from point A to point B, collecting keys or whatnot in order to pass into the prison (for example) while defeating zombies. My wife did an excellent job escaping the zombies and bringing out the other players while I was at the exit, ready to just abandon everyone. She’s great 😀

Posted in Gaming

Mid Year Review

This is more of a redo-of-the-game-room-into-a-game-library type post. I generally do a COMC as an end-of-year review, so this one skips the extra information that’s typically added.

In this case the old game room had the Kallax shelves against the walls, leaving space in the middle of the room for a table and chairs so my wife and I could play a game now and then. Last year, though, we bought a really big dining room table (an 8-seater), and we’ve been gaming up in the dining room instead of the game room.

Last week I whipped up a room layout with the existing Kallax shelves so I could move them around on the drawing and see if things would fit the way I desired. I currently have four 5×5 shelves, two 4×4 shelves, three 2×4 shelves, two 2×2 shelves, and five 1×4 shelves.

With the change from a game room to a game library, I was able to add two more 5×5 shelves, which gives me a ton more space, plus I have room to add two more 1×4 shelves on top of the 5×5 shelves and another 4×4+2×4 combo.

The process was to build the new 5×5 shelves and put them in the hallway, then move the games from the existing shelves onto the hallway shelves. I also had to pull the 1×4 shelves down from on top of the 5×5s into the hallway before we could reposition the 5×5 and 4×4 shelves. Once the existing shelves were rearranged, the games went back onto the shelves wherever they’d fit. Then we moved the new shelves into the room (the white ones in the back) and finally moved the second 4×4 shelf to the left side. It holds the mini games and card games, plus some display stuff on top.

My wife was a great help in moving things and had a few suggestions for the move and the final layout. I originally had the two 4×4 shelves at the front of the room, but because of the angle of the wall, some of the games would be inaccessible and the door couldn’t fully open. We moved the single 4×4 back enough to let the door shut and put the second 4×4 against the wall, as you can see.

The pictures progress from the hallway: first a view of the first shelf, then the side-wall shelf, followed by the shelves all the way to the back, then the last shelves to the left. I was also able to put up a few of my posters. I have tons more, but I’m a Shadowrun geek, so it’s Shadowrun 🙂

My wife’s main question, “are the new shelves enough space to hold the next 5 years of games?” 😀

My final intention for this year’s End of Year Review is a video review discussing my 50 years of gaming, pointing out specific games, and generally providing a bit more targeted information.

And all the pictures are over here if you want to see larger versions.

Gameroom Pictures
Posted in Gaming