Cygwin, KVM, and X Window

With the conversion of systems from VMware to KVM, and the ongoing management of the KVM servers, I need to document the process for accessing the servers and viewing consoles.

I’ve been using Cygwin for many years. I’ve even used the X Window System to access X applications on servers. For the purpose of accessing KVM VMs, we’ll bring up a terminal and start X:

startx

This brings up a full-screen window. Generally not bad, but I have a 43″ monitor so it’s pretty large. Resize it down to a more manageable size and, in the term window, ssh over to the KVM server. Mine is my Nikodemus system at 192.168.5.10. Pass the -Y option to tell ssh to enable trusted X11 forwarding for the X Window System.

ssh -Y 192.168.5.10

To make sure it worked, verify your DISPLAY environment variable. It should show something like this:

printenv | grep -i display
DISPLAY=localhost:10.0

Since it’s a tunnel, it’ll show localhost.
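Before launching any GUI tools, it’s worth confirming the forwarding actually took. A tiny pure-shell check (the message wording is mine):

```shell
# Confirm X forwarding is active before starting GUI tools.
# Prints a status line either way; the warning goes to stderr.
if [ -n "${DISPLAY:-}" ]; then
  echo "X forwarding OK: DISPLAY=$DISPLAY"
else
  echo "DISPLAY is not set - reconnect with ssh -Y" >&2
fi
```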

Now you can start virt-manager, which lets you view all the VMs regardless of status, or virt-viewer, which shows only the running VMs. Note that resizing is done by clicking the double-box icon in the upper right corner of the window.

Posted in KVM

VMware to KVM

I’ve been a member of VMUG and user of VMware on my Dell R710 and then Dell R720XDs for almost 10 years now. It’s been interesting and valuable in helping me understand VMware.

A couple of jobs back, I was reintroduced to KVM. I’d tried it before getting into VMware and couldn’t get the hang of it, but with it being my job, and with my new virtual machine skills, I had a better grasp and was even able to build Terraform scripts to build sites. Cool stuff.

Broadcom restricting access for VMUG members, my license expiring, and a move have all left my existing VMs out in the cold. While I have Terraform scripts and documentation (and backups) for most of my systems, I do need to get data off some of them where I either failed to retrieve the data, backed up the operations but not my home directories, or just need to verify that the backups I have are complete enough.

Some of the files aren’t a big deal. Jenkins and Gitlab, I’d just reinstall and reimport or rebuild the processes. I’m not in a production or developer environment where I need to bring in all the changes over the past 10 years. Just recreate the setup, git push the files, and move forward. Heck, I’ll even have clean installations. When I first installed Jenkins, I installed everything that was suggested. With experience now, I’ll just install what I need.

The first step is to pull them off the VMware systems. I can bring the hosts up; I just can’t start any VMs. I enabled SSH access to the systems and, on my R710 KVM box, simply scp’d the ones I wanted to review over to a /opt/vms directory. I reviewed the system specs in order to properly start them and off we go.

Once copied, convert the images to the qcow2 format. Install the qemu-img package and run the following command:

qemu-img convert -f vmdk -O qcow2 /opt/vms/monkey/bldr0cuomaws1/bldr0cuomaws1.vmdk bldr0cuomaws1.qcow2

Next up is to import it into QEMU. This creates a libvirt domain, making the image visible to KVM so it can run the system. I used the settings I retrieved in order to properly configure the domain.

virt-install --name bldr0cuomaws1 --ram 4096 --vcpus 2 --disk bldr0cuomaws1.qcow2,format=qcow2 --import

Here’s the tricky part. For most of the systems, I just wanted to retrieve the data. Once the domain has been configured, you can try to start the new server, but I wasn’t having much success. I did find I could use the guestmount command to simply mount the image at /mnt and copy the data off the system.

guestmount -d bldr0cuomaws1 --ro -i /mnt

Once done, I changed over to /mnt and simply copied the data from my home directory to a central location. After that, I didn’t really need this image any more, so I deactivated the pool and removed it.

virsh pool-destroy bldr0cuomaws_pool
virsh pool-delete bldr0cuomaws_pool

Next up, I need to see how this will work with multiple disks.

Oh, one thing: before you delete the VM from VMware, you can make sure the image was copied and converted properly with qemu-img info.

# qemu-img info bldr0cuomrepo1.qcow2
image: bldr0cuomrepo1.qcow2
file format: qcow2
virtual size: 100 GiB (107374182400 bytes)
disk size: 16.8 GiB
cluster_size: 65536
Format specific information:
    compat: 1.1
    lazy refcounts: false
    refcount bits: 16
    corrupt: false

For a multi-disk image setup, we’ll need to convert the disks and then attach them to the primary image. The guests are all LVM, though, so you can’t really mount the entire system using guestmount; you’d only mount individual mount points.

For LVM systems, when you add the disks, you’ll need to attach them as the appropriate sd devices. They’re added sequentially, so for a system with 4 extra drives you’d add sdb, sdc, sdd, and sde.

Once that’s done, you’ll need to look at the mount points and then mount them individually, as guestmount won’t actually mount everything.

We’ll walk this through from conversion to mounting.

First off, here’s the original directory and files for this server. It basically held a bunch of Linux images used when kickstarting servers; kind of an automatic build process from the past. Do I need everything? Probably not. In this case, I’m just seeing what’s there, maybe in the home directory, and copying it off:

# ls -al
total 1656836168
drwxr-xr-x 2 root root         4096 Oct 30 23:02 .
drwxr-xr-x 6 root root         4096 Oct 31 23:30 ..
-rw-r--r-- 1 root root         1298 Oct 30 16:49 lnmt1cuomjs1-52466e53.hlog
-rw------- 1 root root  85899345920 Oct 30 17:08 lnmt1cuomjs1-flat.vmdk
-rw------- 1 root root         8684 Oct 30 23:02 lnmt1cuomjs1.nvram
-rw------- 1 root root          508 Oct 30 17:08 lnmt1cuomjs1.vmdk
-rw-r--r-- 1 root root            0 Oct 30 23:02 lnmt1cuomjs1.vmsd
-rwxr-xr-x 1 root root         4084 Oct 30 23:02 lnmt1cuomjs1.vmx
-rw------- 1 root root 536870912000 Oct 30 19:06 lnmt1cuomjs1_1-flat.vmdk
-rw------- 1 root root          511 Oct 30 19:06 lnmt1cuomjs1_1.vmdk
-rw------- 1 root root 536870912000 Oct 30 21:04 lnmt1cuomjs1_2-flat.vmdk
-rw------- 1 root root          511 Oct 30 21:04 lnmt1cuomjs1_2.vmdk
-rw------- 1 root root 536870912000 Oct 30 23:02 lnmt1cuomjs1_3-flat.vmdk
-rw------- 1 root root          457 Oct 30 23:02 lnmt1cuomjs1_3.vmdk
-rw------- 1 root root       186501 Oct 30 23:02 vmware-89.log
-rw------- 1 root root       413874 Oct 30 23:02 vmware-90.log
-rw-r--r-- 1 root root       309911 Oct 30 23:02 vmware-91.log
-rw-r--r-- 1 root root       226039 Oct 30 23:02 vmware-92.log
-rw-r--r-- 1 root root       281301 Oct 30 23:02 vmware-93.log
-rw-r--r-- 1 root root       334962 Oct 30 23:02 vmware-94.log
-rw-r--r-- 1 root root       191944 Oct 30 23:02 vmware.log
-rw------- 1 root root     85983232 Oct 30 23:02 vmx-lnmt1cuomjs1-24cae1aa722f12da9b70e188df14347036fca212-2.vswp

The files we’re interested in are just the vmdk files. These have a description of each disk, like so:

# cat lnmt1cuomjs1.vmdk
# Disk DescriptorFile
version=1
encoding="UTF-8"
CID=bbd23a17
parentCID=ffffffff
createType="vmfs"

# Extent description
RW 167772160 VMFS "lnmt1cuomjs1-flat.vmdk"

# The Disk Data Base
#DDB

ddb.adapterType = "lsilogic"
ddb.geometry.cylinders = "10443"
ddb.geometry.heads = "255"
ddb.geometry.sectors = "63"
ddb.longContentID = "876a84261b8e8ba71481a111bbd23a17"
ddb.toolsInstallType = "4"
ddb.toolsVersion = "11269"
ddb.uuid = "60 00 C2 9c 1b 27 9b c2-8a a4 da a6 3e ae eb 89"
ddb.virtualHWVersion = "11"

Honestly, to me, these don’t mean a whole lot. Once I have a list of the vmdk files (initial, 1, 2, and 3), I can convert them. First I created a directory for the files in /opt/libvirt_images, which is where I have all the pool files, then ran the qemu-img commands to convert all the disk images.

qemu-img convert -f vmdk -O qcow2 /opt/vms/morgan/lnmt1cuomjs1/lnmt1cuomjs1.vmdk lnmt1cuomjs1.qcow2
qemu-img convert -f vmdk -O qcow2 /opt/vms/morgan/lnmt1cuomjs1/lnmt1cuomjs1_1.vmdk lnmt1cuomjs1_disk1.qcow2
qemu-img convert -f vmdk -O qcow2 /opt/vms/morgan/lnmt1cuomjs1/lnmt1cuomjs1_2.vmdk lnmt1cuomjs1_disk2.qcow2
qemu-img convert -f vmdk -O qcow2 /opt/vms/morgan/lnmt1cuomjs1/lnmt1cuomjs1_3.vmdk lnmt1cuomjs1_disk3.qcow2
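Since the four conversions only differ by suffix, they can also be generated with a short loop. A dry-run sketch; echo prints each command for review, and removing it would run the conversions:

```shell
# Print the qemu-img convert command for the boot disk and each data disk
# (echo = dry run; remove it to perform the conversions).
src=/opt/vms/morgan/lnmt1cuomjs1
echo qemu-img convert -f vmdk -O qcow2 "$src/lnmt1cuomjs1.vmdk" lnmt1cuomjs1.qcow2
for n in 1 2 3; do
  echo qemu-img convert -f vmdk -O qcow2 "$src/lnmt1cuomjs1_${n}.vmdk" "lnmt1cuomjs1_disk${n}.qcow2"
done
```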

Once everything is converted, you’ll need to import the main qcow2 file, then attach the additional disks.

You’ll have to get the domain created before you can attach the disks. To do that, you use virt-install.

virt-install --name lnmt1cuomjs1 --ram 4096 --vcpus 2 --disk lnmt1cuomjs1.qcow2,format=qcow2 --import

You can run a virsh list to see the domain once it’s created. Now attach the three disks to the domain.

virsh attach-disk lnmt1cuomjs1 /opt/libvirt_images/lnmt1cuomjs1_pool/lnmt1cuomjs1_disk1.qcow2 sdb --type disk --config
virsh attach-disk lnmt1cuomjs1 /opt/libvirt_images/lnmt1cuomjs1_pool/lnmt1cuomjs1_disk2.qcow2 sdc --type disk --config
virsh attach-disk lnmt1cuomjs1 /opt/libvirt_images/lnmt1cuomjs1_pool/lnmt1cuomjs1_disk3.qcow2 sdd --type disk --config
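The attach commands follow a mechanical pattern (disk N maps to the Nth drive letter after sda), so they can be generated as well. A dry-run sketch; echo prints each command for review, and removing it would attach the disks:

```shell
# Generate the virsh attach-disk commands, assigning sdb, sdc, sdd
# sequentially (echo = dry run; remove it to actually attach the disks).
domain=lnmt1cuomjs1
pool=/opt/libvirt_images/${domain}_pool
n=1
for letter in b c d; do
  echo virsh attach-disk "$domain" "$pool/${domain}_disk${n}.qcow2" "sd${letter}" --type disk --config
  n=$((n+1))
done
```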

Run the virt-filesystems command to verify that the filesystems in the new image are all visible.

virt-filesystems -a /opt/libvirt_images/lnmt1cuomjs1_pool/lnmt1cuomjs1.qcow2
/dev/sda1
/dev/vg00/home
/dev/vg00/opt
/dev/vg00/root
/dev/vg00/tmp
/dev/vg00/usr
/dev/vg00/var
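With the LV list in hand, each logical volume can be mounted read-only in turn, since guestmount won’t mount them all at once. A dry-run sketch; all of the guest’s disks are passed with repeated -a options so LVM can assemble the volume group, and the image paths and chosen LVs are illustrative:

```shell
# Print a read-only guestmount command for each logical volume of interest.
# All disks are supplied so the LVM volume group can be assembled.
# (echo = dry run; remove it to mount each LV on /mnt in turn.)
disks="-a lnmt1cuomjs1.qcow2 -a lnmt1cuomjs1_disk1.qcow2 -a lnmt1cuomjs1_disk2.qcow2 -a lnmt1cuomjs1_disk3.qcow2"
for lv in /dev/vg00/home /dev/vg00/opt /dev/vg00/var; do
  echo guestmount $disks -m "$lv" --ro /mnt
done
```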
Posted in Qemu, Virtualization

Prostate Cancer

Alrighty Friends and Neighbors. I’m going to detail my journey here from start to current. I’m a big information sharing junkie. Stay tuned for the finish as even I don’t know what that will be.

First off of course, FUCK CANCER!!!

There, with that out of the way, let’s dive right in 🙂

Primary Care Physician

Over the past three years, since I started Medicare Advantage with United Healthcare, I’ve actually had a regular Primary Care Physician. This is unusual as with healthcare provided through business health plans, doctors would change, sometimes often. At times I’ve even not gone until the doctor I trust is back on the plan. Anyway, now I’ve gone in for yearly checkups and been admonished about my bloodwork and general health. Overall I’m better than most but I could be better.

This year, I asked to see a dermatologist as I’d last gone years back and lost a small bit of skin off my shoulder. I wanted to get a checkup, which turned out to be just fine.

My doctor also recommended I see a Urologist as my PSA over the past 3 years was 5.9, 5.9, 6.0 where the range is 1 to 4. So high of course, but consistent.

Urologist Exams

I made the appointment and went to see the doctor, here in Longmont. We had a discussion about overall health, he did a digital exam; which is about what you’d expect (if you’re a guy anyway 🙂 ). We all have to do it, it’s for our benefit. But of course it’s not comfortable. My prostate appeared, at least from touch, to be in good shape. No nodules or abnormalities. He also said I had to donate to a urine test and return for the results.

When following up, the doctor said he wanted to do a more in-depth prostate exam. Mainly a slight massage of the prostate to have fluids appear in my urine and do a second urine test.

The results of that were serious enough that the doctor wanted me to get an MRI of the region around my prostate. A more targeted view. I headed over to Boulder Community Health and got the MRI. When I went in for a followup, I was told a lesion was found. This is a shadow on the MRI that isn’t there for the rest of the prostate, and there was a good chance (like 95%) that I had cancer in my prostate. Next up though, a biopsy taken of the prostate to be 100% sure.

Biopsy Doctor

This is a different doctor over in Superior. The procedure is I get a couple of antibiotics; one taken 2 hours before the procedure, then one 24 hours after, and a third 48 hours after.

See, the doctor is going through the rectum to get the samples for the biopsy. Joy! He was an older gentleman and we had a discussion about the procedure. He also said that generally he sees men where the PSA is increasing quite a bit, so the fact that mine is fairly steady is a good sign.

The day of the biopsy, the doctor had me lie on my left side, knees up, and injected several shots in the area to reduce the pain levels. Then he inserted a cucumber-sized laser to make sure he was getting samples from the right place, which was followed up with the tool that retrieves the samples. He retrieved 17 samples from various places in the prostate. This gave him samples of the cancer but also samples to compare healthy tissue with the cancerous tissue. Sound-wise, the “grab” was a bit of a loud pop. The actual extraction was kind of an internal itch feeling, with one of them being a bit of an ouchie.

Urologist Discussion

Back to the Urologist. We had a discussion about the results of the biopsy. The range is generally 6 to 10 where 6 is the lowest severity. But there were two questions in the results where the two of us concluded it was more of a 6.5, but officially a 6.

The doctor wanted my permission to do a gene test against the samples. Basically they would look for cancer markers in the samples and compare them against men going through the same situation. He’d provide the information to the Oncologist, who was the next person I was to see.

Oncologist

This is the moment of truth. Yes, I have prostate cancer. Now then, what’s the level and what’s the path forward.

We had a good discussion as to what the results meant. Stage 1 Prostate Cancer. Stage 1 is very low as to the level of risk. Slow growing. But with the gene comparison test, I was .7 on a scale of .5 to .9 so middle of the road.

The doctor was very patient with Jeanne and me, with our questions and how to make an informed decision on what to do.

There are four options.

  1. Do nothing. The current situation is early and low risk, so I could wait and deal with it later. If there were other health issues, this might be optimal, as those might claim me before the prostate cancer does. The problem is it may get worse and have to be dealt with more urgently, with a lower chance of survival.
  2. Hormones. The doctor said this was probably a bad idea. Hormones didn’t seem to make much difference, and since the idea is to reduce testosterone (the cancer is fed by testosterone), they would fog the brain and make you sleepy and depressed.
  3. Radiation. The procedure is relatively short, 28 sessions over 5 weeks and had good results.
  4. Removal of the Prostate. This would be the most invasive and could result in urinary tract issues and, if a nerve is nicked, loss of the ability to get an erection.

The option that seemed to be best would be radiation treatment.

After the initial discussion, the doctor took me to an exam room and checked my lymph nodes to see if they were involved. No pain in any of them so no issue there. He then did a digital exam of the prostate and again, no nodules or abnormalities. Ready now for the preparation.

Radiation Preparation

There are three preparation tasks that need to be done before starting radiation treatment.

Stabilizing Platform

I need to head down to Greenwood Village to get fitted for a platform that keeps me secure when getting treatment. This is basically a mold around my ass and thighs which will keep me in place during the procedure. There will be three tiny tattoos on the belt line. One front and center and two on the hips at the rear.

Registration Markers

Next is a visit for the markers for the radiation gun. This is an insertion of three gold chips, about the size of a grain of rice, into the prostate. These will be used by the radiation gun to ensure a precise location where radiation will be focused. In addition, a small, 2″x2″ or so gel pad will be injected between the prostate and the wall of the large intestine.

Physicist Planning

The information will then be submitted to a Physicist to create a program for the radiation gun so it will precisely target the cancerous cells and destroy them.

Radiation Treatment

This is the final result of all this. I’ll go to the Oncologist’s office every workday for 5 weeks. It’s about 10 minutes to prepare, 15 minutes under the gun, and probably 5 minutes to prepare to head out. As it’ll be in the January time frame, we may have issues with snow. The doc said no problem, we can skip days without issue and will just add them to the end.

The gun itself is a high radiation gun. It’s fairly far away from me so I can’t get in the way. The table itself moves in order for the registration markings to line up so the gun can target the areas exactly. The gun rotates 360 degrees around my hip area. This reduces the amount of radiation around the body. It’s still strong but the cancerous tissue will feel the full effect.

Side Effects

Per the Oncologist, side effects are minimal. An increase in the frequency of urination including a couple more times each night. This can last for a few months.

Final For Now

That’s it for now. The assistant will provide times I’m to do the prep work. I’ll head in either before work or at night on the way home. And crossing fingers, it’ll be clear. This will likely begin in January of 2026 so stay tuned.

Posted in Health

AWX And Requirements

Overview

This article provides information and instructions on the use of the requirements.yaml (or .yml) and requirements.txt files when running playbooks from AWX or Ansible Automation Platform (AAP).

Galaxy Collections and Roles

When running Ansible on the command line interface (CLI), you may need to install a Galaxy Collection or Role for a task that the default Ansible collections and roles don’t provide. It’s a positive feature of Ansible that it’s extendable.

You can view which collections and roles are installed by using the ansible-galaxy commands. For more information, pass the -h flag.

ansible-galaxy collection list
ansible-galaxy role list

Use the -p flag to indicate the installation is in a non-standard location. For example, in the automation container in AWX, ansible collections are located in the /runner/requirements_collections directory and not in the .ansible directory.

You’ll run the ansible-galaxy command to install the needed collection. For example, for vmware, you’d run the following command.

ansible-galaxy collection install vmware.vmware

For a role, you’d run the following command.

ansible-galaxy role install geerlingguy.docker

If you need to make sure another maintainer of your playbooks has the proper collections and roles installed before running the playbooks, you can list them in a README.md file and have them manually install them, or simply create a requirements.yaml (or .yml; both work) file.

For the CLI, there are three places where the requirements.yaml file can be located.

[project]/requirements.yaml
[project]/collections/requirements.yaml
[project]/roles/requirements.yaml

When I ran the playbook using a galaxy collection and the requirements.yaml file was in the roles directory, it failed to locate the collection.

The requirements.yaml file has several available parameters. See the documentation for all the details. This is a basic requirements.yaml file.

---
collections:
  - vmware.vmware
roles:
  - geerlingguy.docker
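Both entries can also be pinned to versions, which keeps every maintainer (and AWX) on the same releases. An illustrative example using the same collection and role; the version ranges below are placeholders, not recommendations:

```yaml
---
collections:
  - name: vmware.vmware
    version: ">=1.0.0"
roles:
  - name: geerlingguy.docker
    version: ">=7.0.0"
```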

Python Libraries

In some cases, you’ll also have Python library dependencies. These can be installed using the Ansible pip module. Put the requirements.txt file into your repository. Since the pip module specifies the path, the file can be anywhere; however, putting it where the requirements.yaml file is located makes it easier to find and manage.

The file itself is just a list of modules you need in order for your playbooks to run. There are several available options; see the documentation to explore the capabilities.

Example requirements.txt file located in the project root.

certifi
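The file accepts standard pip version specifiers if you need reproducible runs. An illustrative variant of the same file (the pin shown is a placeholder):

```
# pip version specifiers work here too; this pin is illustrative
certifi>=2023.7.22
```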

When you want to load the modules, use the ansible pip module. In AWX, the project is located in /runner/project. It’s best to use the tilde though as other automation containers might have a different home directory.

---
- name: Install dependencies
  pip:
    requirements: "~/project/requirements.txt"

When this runs, the modules are installed in the /runner/.local/lib/python3.9/site-packages directory.

Final

I write these articles in large part because I can’t find all the information I personally am looking for in one place. This is my view, my project, and of course my documentation. 🙂


Posted in ansible, Computers

Ansible Automation Platform Workflows

Overview

AWX Workflows let you chain tasks together and act on the outcome. This article provides instructions on how to create an AWX Workflow.

Templates

An AWX Workflow is a series of playbooks that are created in Templates to run a task. In this case, I have a pair of HAProxy servers configured as load balancers for my Kubernetes cluster. The servers use keepalived to ensure the VIP is always available.

keepalived monitors the live server and, if it goes offline, configures the idle server to take over the VIP until the live server comes back online.

In addition, I install monit, a tool that monitors the configured service and restarts it should it fail. It has a notification process and a web server, so we’ll know if the service was restarted and can investigate.

This gives us the ideal chain of Templates to try out AWX Workflows.

Workflows

The expectation before you create the AWX Workflow is that you’ve run each task individually and they all run successfully.

Under Templates, click the Add drop-down and select Add workflow template.

Fill out the information in the form.

  • Name – I added HAProxy Workflow
  • Description – Installs and configures HAProxy, keepalived, and monit
  • Organization – Since this is Development, I selected my Dev organization
  • Inventory – I only have the Development Inventory.

The remainder I left for another time. I clicked Save and it brought me into the Workflow Details page. I clicked Launch and the workflow started with the Visualizer.

Visualizer

In the Visualizer, you begin with a Start block. Click it to begin creating your workflow.

You are now presented with an Add Node dialog box with all of your Templates.

The Node Type lets you do pre-run actions such as synchronizing your Project or Inventory before the run, identifying someone that needs to Approve the next task before proceeding, and even merging in another Workflow. In this case, we’ll simply use the default Job Template and build a simple Workflow.

For this example, select the HAProxy Install Template and click Save.

Now we’re presented with the Visualizer showing the Start box plus the first Node we created, the HAProxy Install node. When hovering over the node, multiple options become available.

  • Plus – Add a new node
  • i – See details about this node
  • Pencil – Edit this node
  • Link – Link in a node
  • Trashcan – Delete this node

Click the Plus and you’ll be presented with an Add Node dialog box. This one first lets you select how to proceed: On Success, On Failure, or Always. In this case we want to simply continue, so select On Success.

Click Next and the second task is available. Like the first time, you can select Approve, sync Project or Inventory, link in a Workflow, or simply add a new Job Template. Select the HAProxy Config Job Template and click Save.

Continue until you have a Workflow that consists of HAProxy, keepalived, and monit. There doesn’t seem to be a way to rearrange the Workflow tasks, so it’s laid out as a straight line. You can pan the Workflow to see the rightmost task and continue to add Nodes.

When done, click the Save button at the top right and you’re ready to rock!

Run Workflow

When you’re ready to run the new Workflow, simply go to the Templates task and click the Launch icon next to the Workflow.

Sibling Nodes

In the example, we created a long chain of events, basically running each task after the prior task completed. But do these really need to be run in such a way? AWX Workflows let you create Sibling Nodes. These Nodes are in the same column, so they run simultaneously. For our example, we could create Sibling Nodes for the three binary installations and then Child Nodes to configure the software.

Errors

Of course errors can occur. When they do, the Node will indicate an error status and if you selected On Success, the next Node will not start.

In case of error, simply click on the Node tile and it’ll take you to the job so you can troubleshoot.

My issue in this case was that the image couldn’t be pulled from quay.io. This is a problem where I live, in that I’m on high-speed WiFi, which isn’t always sufficient to pull the necessary image before the attempt times out. I do want these containers (awx-ee:latest) to be local so the image is pulled locally versus from quay.io every time I run a job, but I’ve been unable to identify where this is defined in the AWX manifest files.

Kubernetes

Just for some background, the AWX process creates multiple containers in the AWX namespace in Kubernetes. When you execute a Template, be it a Job Template or a Workflow Template, an automation-job-[job id]-[unique id] container is created. This lets the orchestration environment start containers on nodes where there are sufficient resources to run them.

References

In the Visualizer Editor, there’s a link to the Visualizer documentation that provides more detail on the process of creating and running AWX Workflows. I’ve added the link here as well.

Posted in ansible, Computers

Banana Nut Bread

I’ve made Banana Nut Bread multiple times over the years. I thought I’d post up the recipe so I can make sure I have the ingredients when I’m out shopping. Of course I scraped it off of the ‘net so it’s a different one most times. This time though, again I thought I’d just throw the latest one up so I have it handy.

Preparation

You’ll want to have a stick (half a cup) of butter sitting on the counter warming up plus about 3 “normal” sized bananas that have been sitting out for a week or two. The skins should be just about black all the way around. Watch out for skins that have split as the banana under that spot will have dried out. And warm up the oven to 350 degrees. Heck, by the time I got it all mixed up, the oven had just hit 350.

Ingredients

  • 1/2 cup of butter
  • 1 1/4 cups of sugar
  • 1 teaspoon of vanilla
  • 2 eggs
  • 3 ripe bananas, about a cup more or less
  • 1/4 cup of milk
  • 2 cups of flour
  • 1/2 teaspoon of salt
  • 1/2 teaspoon of baking soda
  • 3/4 cup of pecans or walnuts. I basically just dumped a bunch in without measuring but you do you 🙂

Directions

You want to start with the wet ingredients first, then blend in the bananas, then the flour which turns the fairly liquid mixture a bit more firm.

Wipe down a bread pan or two, depending on how big you want it, with some shortening or butter. This gives the sides and bottom some crispiness and makes the bread a little easier to remove.

You can also make muffins, same process other than use a 1/2 sized cup to fill in the tin. Fill it to just below the edge of the cup and it shouldn’t overflow.

Pour in the mixture and you’re ready to bake. Slide it into the oven and set the timer for 60 minutes (30 minutes for muffins). Check with a toothpick for doneness. Add 15 minutes to the bread and 5 minutes for the muffins if not quite done yet. It took mine 1 hour and 15 minutes for a full bread pan.

Posted in Cooking

Ansible Web Executable Logging In

Overview

This article will describe the methodology used to manage user and team access in Ansible Web Executable (AWX).

Terminology

Ansible Web Executable (AWX) is the upstream open source software that is used in Ansible Automation Platform (AAP). Prior versions were also called Ansible Tower. I may use AWX, AAP or even Tower in this and following related articles.

Environment Methodology

The AWX Quickstart documentation describes the process in configuring AWX by creating an Organization, Users and Teams, Inventory, Credentials, Projects, and a Job Template.

The problem with this approach is that objects created by a User are only visible to that User until they are added as a Role to a Team. This task would be done by the AWX automation admin, someone on the automation team. For smaller organizations, this could be acceptable; however, as the organization grows, it’s going to require more members of the automation team to process tickets.

One of the problems with Roles is they can only be assigned for existing objects. Under the various tasks such as Credentials, there is no overall admin Role. This means you can’t give an admin privileges to just manage Credentials within the Roles.

However there is a way around this in AWX which is how my environments have been configured. I did follow the process to create an Organization, Users, and two Teams; an Admin team and a User team. This is all described below.

For permissions though, I decided to work at the Organization level: I gave the Admin Team full access to the Organization via Roles, and the Users Team the ability to view objects and run Job Templates. This takes the burden of working every team’s tickets off the automation admin and gives it to the admins of each group that uses AWX.

I was reading an article on User access and the proposal was that Users and Teams would be part of the Default Organization. This would give anyone who’s in the Default Organization the ability to view objects in any Organization. And the Organization itself would only be used to manage objects. This keeps things tidy but also permits troubleshooting without having to be a member of 1 or more Organizations.

AWX Logins

There are three instances of AWX here on my homelab.

Organizations

Within each instance, there is a Default Organization and an instance-specific Organization for the Unix Admins.

  • HCS-AWX-DEV-EXUX
  • HCS-AWX-QA-EXUX
  • HCS-AWX-PROD-EXUX

Teams

There are two Teams in each Organization: one for users who administer the objects in the Organization and one for users who are tasked with running jobs.

  • HCS-AWX-DEV-EXUX-ADMINS
  • HCS-AWX-DEV-EXUX-USERS
  • HCS-AWX-QA-EXUX-ADMINS
  • HCS-AWX-QA-EXUX-USERS
  • HCS-AWX-PROD-EXUX-ADMINS
  • HCS-AWX-PROD-EXUX-USERS


Posted in ansible, Computers

Gitlab Troubleshooting

Up here in the mountains, we get an occasional power outage. I have the servers on UPSs but generally only have 5 minutes or so to snag my laptop, log in, and start shutting down VMs before the power fails. And sometimes I’m not even around so servers basically crash.

In this case, gitlab failed to start. When checking the gitlab-ctl status output, redis is identified as down.

-bash-4.2# gitlab-ctl status
run: alertmanager: (pid 4395) 215617s; run: log: (pid 2085) 231156s
run: gitaly: (pid 4416) 215615s; run: log: (pid 2084) 231156s
run: gitlab-exporter: (pid 4446) 215614s; run: log: (pid 2088) 231156s
run: gitlab-kas: (pid 4600) 215604s; run: log: (pid 2093) 231153s
run: gitlab-workhorse: (pid 4623) 215601s; run: log: (pid 2079) 231156s
run: grafana: (pid 4658) 215598s; run: log: (pid 2086) 231156s
run: logrotate: (pid 6103) 2852s; run: log: (pid 2090) 231155s
run: nginx: (pid 4731) 215580s; run: log: (pid 2080) 231156s
run: node-exporter: (pid 4750) 215578s; run: log: (pid 2077) 231156s
run: postgres-exporter: (pid 4757) 215578s; run: log: (pid 2078) 231156s
run: postgresql: (pid 23262) 26s; run: log: (pid 2083) 231156s
run: prometheus: (pid 4782) 215576s; run: log: (pid 2075) 231156s
run: puma: (pid 23226) 28s; run: log: (pid 2087) 231156s
down: redis: 0s, normally up, want up; run: log: (pid 2082) 231156s
run: redis-exporter: (pid 4816) 215574s; run: log: (pid 2076) 231156s
run: sidekiq: (pid 23237) 27s; run: log: (pid 2081) 231156s

Okay. So I checked the logs and ran gitlab-redis-cli stat, but again got an error.

-bash-4.2# gitlab-redis-cli stat
Could not connect to Redis at /var/opt/gitlab/redis/redis.socket: Connection refused

The socket does exist but since redis is down, there’s nothing to connect to. After a bit more sleuthing, I tried a gitlab-ctl reconfigure but that kicked out an error as well.

[2024-03-17T13:03:34+00:00] FATAL: RuntimeError: redis_service[redis] (redis::enable line 19) had an error: RuntimeError: ruby_block[warn pending redis restart] (redis::enable line 68) had an error: RuntimeError: Execution of the command `/opt/gitlab/embedded/bin/redis-cli -s /var/opt/gitlab/redis/redis.socket INFO` failed with a non-zero exit code (1)
stdout:
stderr: Error: Connection reset by peer

At this point, redis is clearly the problem. I did a gitlab-ctl tail to watch the logs; redis keeps trying to start but kicks out an RDB error.

2024-03-17_13:09:09.00014 25036:C 17 Mar 2024 13:09:08.999 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
2024-03-17_13:09:09.00020 25036:C 17 Mar 2024 13:09:08.999 # Redis version=6.2.8, bits=64, commit=423c78f4, modified=1, pid=25036, just started
2024-03-17_13:09:09.00022 25036:C 17 Mar 2024 13:09:08.999 # Configuration loaded
2024-03-17_13:09:09.00208 25036:M 17 Mar 2024 13:09:09.001 * monotonic clock: POSIX clock_gettime
2024-03-17_13:09:09.00394                 _._
2024-03-17_13:09:09.00397            _.-``__ ''-._
2024-03-17_13:09:09.00398       _.-``    `.  `_.  ''-._           Redis 6.2.8 (423c78f4/1) 64 bit
2024-03-17_13:09:09.00399   .-`` .-```.  ```\/    _.,_ ''-._
2024-03-17_13:09:09.00400  (    '      ,       .-`  | `,    )     Running in standalone mode
2024-03-17_13:09:09.00401  |`-._`-...-` __...-.``-._|'` _.-'|     Port: 0
2024-03-17_13:09:09.00402  |    `-._   `._    /     _.-'    |     PID: 25036
2024-03-17_13:09:09.00406   `-._    `-._  `-./  _.-'    _.-'
2024-03-17_13:09:09.00407  |`-._`-._    `-.__.-'    _.-'_.-'|
2024-03-17_13:09:09.00408  |    `-._`-._        _.-'_.-'    |           https://redis.io
2024-03-17_13:09:09.00409   `-._    `-._`-.__.-'_.-'    _.-'
2024-03-17_13:09:09.00410  |`-._`-._    `-.__.-'    _.-'_.-'|
2024-03-17_13:09:09.00411  |    `-._`-._        _.-'_.-'    |
2024-03-17_13:09:09.00411   `-._    `-._`-.__.-'_.-'    _.-'
2024-03-17_13:09:09.00412       `-._    `-.__.-'    _.-'
2024-03-17_13:09:09.00413           `-._        _.-'
2024-03-17_13:09:09.00417               `-.__.-'
2024-03-17_13:09:09.00417
2024-03-17_13:09:09.00418 25036:M 17 Mar 2024 13:09:09.003 # Server initialized
2024-03-17_13:09:09.00419 25036:M 17 Mar 2024 13:09:09.003 # WARNING Memory overcommit must be enabled! Without it, a background save or replication may fail under low memory condition. Being disabled, it can can also cause failures without low memory condition, see https://github.com/jemalloc/jemalloc/issues/1328. To fix this issue add 'vm.overcommit_memory = 1' to /etc/sysctl.conf and then reboot or run the command 'sysctl vm.overcommit_memory=1' for this to take effect.
2024-03-17_13:09:09.00443 25036:M 17 Mar 2024 13:09:09.004 * Loading RDB produced by version 6.2.8
2024-03-17_13:09:09.00445 25036:M 17 Mar 2024 13:09:09.004 * RDB age 264959 seconds
2024-03-17_13:09:09.00448 25036:M 17 Mar 2024 13:09:09.004 * RDB memory usage when created 6.59 Mb
2024-03-17_13:09:09.05154 25036:M 17 Mar 2024 13:09:09.051 # Short read or OOM loading DB. Unrecoverable error, aborting now.
2024-03-17_13:09:09.05159 25036:M 17 Mar 2024 13:09:09.051 # Internal error in RDB reading offset 0, function at rdb.c:2750 -> Unexpected EOF reading RDB file
2024-03-17_13:09:09.08898 [offset 0] Checking RDB file dump.rdb
2024-03-17_13:09:09.08905 [offset 26] AUX FIELD redis-ver = '6.2.8'
2024-03-17_13:09:09.08906 [offset 40] AUX FIELD redis-bits = '64'
2024-03-17_13:09:09.08907 [offset 52] AUX FIELD ctime = '1710415990'
2024-03-17_13:09:09.08908 [offset 67] AUX FIELD used-mem = '6911800'
2024-03-17_13:09:09.08908 [offset 83] AUX FIELD aof-preamble = '0'
2024-03-17_13:09:09.08909 [offset 85] Selecting DB ID 0
2024-03-17_13:09:09.08910 --- RDB ERROR DETECTED ---
2024-03-17_13:09:09.08910 [offset 966671] Unexpected EOF reading RDB file
2024-03-17_13:09:09.08911 [additional info] While doing: read-object-value
2024-03-17_13:09:09.08912 [additional info] Reading key 'cache:gitlab:flipper/v1/feature/ci_use_run_pipeline_schedule_worker'
2024-03-17_13:09:09.08913 [additional info] Reading type 0 (string)
2024-03-17_13:09:09.08913 [info] 4828 keys read
2024-03-17_13:09:09.08914 [info] 3570 expires
2024-03-17_13:09:09.08915 [info] 42 already expired

Unexpected EOF reading RDB file: the dump is truncated, almost certainly from the power loss. The usual fix is to delete /var/opt/gitlab/redis/dump.rdb. I’m not a fan of just deleting files, so I backed it up by moving it out of the way, then restarted redis.

-bash-4.2# mv dump.rdb ~
-bash-4.2# gitlab-ctl stop redis
ok: down: redis: 0s, normally up
-bash-4.2# gitlab-ctl start redis
ok: run: redis: (pid 26114) 1s
-bash-4.2# gitlab-ctl status
run: alertmanager: (pid 4395) 216115s; run: log: (pid 2085) 231654s
run: gitaly: (pid 4416) 216113s; run: log: (pid 2084) 231654s
run: gitlab-exporter: (pid 4446) 216112s; run: log: (pid 2088) 231654s
run: gitlab-kas: (pid 4600) 216102s; run: log: (pid 2093) 231651s
run: gitlab-workhorse: (pid 4623) 216099s; run: log: (pid 2079) 231654s
run: grafana: (pid 4658) 216096s; run: log: (pid 2086) 231654s
run: logrotate: (pid 6103) 3350s; run: log: (pid 2090) 231653s
run: nginx: (pid 4731) 216078s; run: log: (pid 2080) 231654s
run: node-exporter: (pid 4750) 216076s; run: log: (pid 2077) 231654s
run: postgres-exporter: (pid 4757) 216076s; run: log: (pid 2078) 231654s
run: postgresql: (pid 26191) 2s; run: log: (pid 2083) 231654s
run: prometheus: (pid 4782) 216074s; run: log: (pid 2075) 231654s
run: puma: (pid 26123) 27s; run: log: (pid 2087) 231654s
run: redis: (pid 26114) 28s; run: log: (pid 2082) 231654s
run: redis-exporter: (pid 4816) 216072s; run: log: (pid 2076) 231654s
run: sidekiq: (pid 26162) 10s; run: log: (pid 2081) 231654s

And that seems to have done the trick for redis. I ran gitlab-ctl stop to completely stop gitlab and then rebooted the server.
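One more thing worth noting from the redis startup log above: the WARNING that memory overcommit is disabled, which can also make background saves fail. The fix, straight from the message itself, is to set vm.overcommit_memory to 1 as root. You can check the current value without root:

```shell
# 0 = heuristic overcommit (the kernel default), 1 = always allow (what
# the redis warning asks for), 2 = strict accounting.
cat /proc/sys/vm/overcommit_memory
# As root, the fix the redis log suggests:
#   sysctl vm.overcommit_memory=1
#   echo 'vm.overcommit_memory = 1' >> /etc/sysctl.conf
```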

Once up though, postgresql failed to start. Checking the logs, I found the following error:

PANIC:  could not locate a valid checkpoint record

It took a bit of hunting to find a solution. Basically, the write-ahead log needed to be reset by running the pg_resetwal program (under /opt/gitlab/) as the gitlab-psql user. First stop gitlab, then become gitlab-psql and run the program against the data directory. Resetting the WAL can discard recent transactions, so it’s not a great solution, but the corruption was the result of the server dropping during the power outage.

-bash-4.2# su - gitlab-psql
Last login: Sun Mar 17 17:52:00 UTC 2024 on pts/0
-sh-4.2$ pg_resetwal -f /var/opt/gitlab/postgresql/data
Write-ahead log reset

Then I started the server again and checked the status.

-bash-4.2# gitlab-ctl status
run: alertmanager: (pid 22057) 35s; run: log: (pid 1780) 16764s
run: gitaly: (pid 22068) 35s; run: log: (pid 1801) 16764s
run: gitlab-exporter: (pid 22085) 34s; run: log: (pid 1781) 16764s
run: gitlab-kas: (pid 22087) 34s; run: log: (pid 1814) 16763s
run: gitlab-workhorse: (pid 22098) 33s; run: log: (pid 1798) 16764s
run: grafana: (pid 22108) 33s; run: log: (pid 1779) 16764s
run: logrotate: (pid 22118) 32s; run: log: (pid 1808) 16764s
run: nginx: (pid 22125) 32s; run: log: (pid 1796) 16764s
run: node-exporter: (pid 22133) 32s; run: log: (pid 1787) 16764s
run: postgres-exporter: (pid 22139) 31s; run: log: (pid 1785) 16764s
run: postgresql: (pid 22147) 31s; run: log: (pid 1799) 16764s
run: prometheus: (pid 22150) 30s; run: log: (pid 1786) 16764s
run: puma: (pid 22166) 30s; run: log: (pid 1778) 16764s
run: redis: (pid 22172) 29s; run: log: (pid 1797) 16764s
run: redis-exporter: (pid 22178) 29s; run: log: (pid 1784) 16764s
run: sidekiq: (pid 22185) 29s; run: log: (pid 1800) 16764s

Resources:

  • https://gitlab.com/gitlab-org/omnibus-gitlab/-/issues/293
  • https://forum.gitlab.com/t/postgresql-down-after-upgrade-from-13-10-to-13-12/57018
Posted in Computers, gitlab | Tagged , , | Leave a comment

Ansible Automation Platform Projects

Overview

This article provides instructions on how to configure Ansible Automation Platform (AAP) and get your Project working. Links covering the various fields that I don’t need in my environment are provided at the end of this article.

Organizations

Before you can do anything, you need to create an Organization as all additional information is associated with an Organization.

In the AAP console, click on Organizations, then Add.

  • Name – This is selected for every additional task done in AAP.
  • Description – A reasonable description of the Organization.
  • Instance Group – A collection of Container Instances. This lets you run jobs on specific isolated containers. Otherwise all jobs run on the AAP container.
  • Execution Environment – This replaces the Python virtual environment, giving you a customized image to run jobs in so one job’s dependencies aren’t impacted by the requirements of another.
  • Galaxy Credentials

User Management

While the Admin can make changes, the Admin should really only be administering the cluster and not dealing with the day-to-day work.

The next step then is to add the users. Under Access, click Users and fill in the form. Under User Type, there are three options: Normal User, System Auditor, and System Administrator.

  • Normal User – General access used by all members.
  • System Auditor – Read-only access to the entire environment.
  • System Administrator – Admin access to the cluster.

Group Management

In AAP, Teams manage access to the various parts of an Organization. When a team member creates something such as Credentials, only that team member can use that Credential. If the Credential is to be used by all, then the Team would need to be given permission to access that Credential. Then all members of the team have access.

The next step is to create the necessary team(s). Under Access, click Teams and create the new team.

To add members to the team, under Access, click the Users link. Select the user you want to add. Under the Teams tab, select the team(s) the member should belong to.

GitLab Personal Access Token

Next up, we need to access GitLab in order to pull a repository into AAP. To do so, we need to create a Group Access Token. In GitLab, under the group (External-Unix in my case), open the group’s settings, click on the Access Tokens link, and create the token. I don’t need AAP to write back to the repository as it’s just applying configuration information for Kubernetes, so just select the read_repository scope plus a reasonable name and expiration date.

In AAP, click on the Credentials link and click the Add button. Add a Name (I used Gitlab Access Token), a Description, and of course the Organization you created. Under Credential Type, select Gitlab Personal Access Token; while ours is actually a Group Access Token, there isn’t an option for that in the menu. When you select it, a Token field is displayed. You’ll add the Group Access Token here.

Machine Credential

I’m using an SSH private/public RSA keypair and a service account to run ansible playbooks. My service account has passwordless sudo to root on all servers, so I don’t need to pass in a service account password.
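If the service account doesn’t have a keypair yet, generating one is quick. This is a sketch; the key file name and comment are placeholders. The private key is what gets pasted into the AAP credential, and the public key goes into the service account’s authorized_keys on each server (ssh-copy-id can handle that part).

```shell
# Generate a dedicated RSA keypair for the service account. Written to a
# scratch directory here; the name aap_service is a placeholder. Use -N
# with a real passphrase if you want AAP to supply one.
keydir=$(mktemp -d)
ssh-keygen -q -t rsa -b 4096 -N '' -C 'aap-service' -f "$keydir/aap_service"
ls "$keydir"   # aap_service (private) and aap_service.pub (public)
```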

I need to create a Machine Credential for my service account. In the Credentials link, select the Machine Credential Type; several fields are now available. If you use password access, fill in the necessary information. If you’re using an SSH key, you can enter your passphrase here as well. In my case, I add the SSH private key for the service account and it’s ready to go.

Source Control Credential

In order to pull code from GitLab using the ssh link, we need to create a Source Control Credential. Click Add to create a new Source Control Credential. Similar to the Machine Credential, fill out the fields and then add the private ssh key used when you push git updates to GitLab.

Projects

Now that all the Credentials have been created, we can access the repository and bring in the Ansible playbooks. First though, you’ll need the ssh link to your repository. In GitLab, navigate to that repository, click on Clone, and copy the Clone with SSH link.

In AAP, click on Projects and Add. Enter a Project Name and Description. Select the Organization. If you’re validating content, select the appropriate Content Signature Validation Credential. For the Source Control Type, select Git; this brings up several additional fields specific to the Git selection. Paste the SSH clone link into the Source Control URL field. In the Source Control Branch/Tag/Commit field, add the branch, tag, or commit id you want to use for the Project. And under the Source Control Credential field, select the SSH Access to GitLab credential you created earlier.

Branch/Tag/Commit

A quick aside here on this field. If you are deploying to multiple environments, you might consider different strategies for the Projects. For example, the Development environment might be better using a branch strategy as every change then gets applied to the development server(s) for review. For pre-Production environments such as a QA or Staging environment, you might use a git tag to lock in a release. And for Production, you’d use a commit id. This locks Production so even an accidental push to the repository won’t cause Production to update until you’re absolutely ready.
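On the git side, producing the refs for each strategy is straightforward. A sketch, demonstrated in a scratch repository (the tag name is made up); in practice you’d run the tag and rev-parse commands inside the playbook repository and push the tag so AAP can see it:

```shell
# Scratch repo so the commands below have something to act on.
cd "$(mktemp -d)"
git init -q .
git config user.name demo
git config user.email demo@example.com
git commit -q --allow-empty -m "init"

git tag -a qa-2024-03 -m "lock QA to this release"   # tag for the QA Project
git rev-parse HEAD                                   # commit id for Production
# then: git push origin qa-2024-03
```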

Inventories

One of the things I want to do is have git manage the inventory files; in my environment, that’s GitLab. There are three options when adding an Inventory, but for inventories managed in a repository, select plain Inventory from the drop-down menu, fill out the Name, Description, and Organization, then Save the Inventory.

Once the inventory is created, select it. There are several tabs, one is Sources. Click the Sources tab and click on Add. Enter a Name and Description for the inventory, then select Sourced from Project from the Source drop down. Additional fields will be displayed.

Under Project, select the Playbooks project. If inventory/hosts appears in the Inventory file drop-down, select it; if not, you can enter it manually. Check the Overwrite checkbox to clear out old systems that have been retired or otherwise removed from the hosts file. Upon the next sync, they should disappear from the list of Hosts. Verify by checking the Hosts link; if they still exist, you can click the slider next to a host to remove it from consideration until AAP clears the entry.
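For reference, a stripped-down inventory/hosts file of the sort AAP reads might look like this; the group and host names here are made up for illustration (only the cabo0 prefix comes from my QA naming):

```shell
# Write an example inventory into a scratch directory.
cd "$(mktemp -d)"
mkdir -p inventory
cat > inventory/hosts <<'EOF'
[qa]
cabo0web1
cabo0db1
EOF
cat inventory/hosts
```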

I will note that my inventory files are automatically created by my Inventory web application. This file is then copied into the inventory directory of the Playbooks repository. Since it’s unlikely to happen often, you don’t have to set up a job to sync regularly.

Click Save and then either click the Sync all button or if more than one inventory exists, click the circling arrows to the right of the Inventory. Depending on the number of hosts, it may take a few minutes for all the hosts to register.

Templates

Finally in order to run your playbooks, you need to create a Template. Select Templates and click on Add. There are two options on the drop down menu; a Job Template for a single task or a Workflow Template for multiple tasks such as a pipeline. For this article, select a Job Template.

Several fields are now available to create the template. Enter a Name and Description. The Job Type drop-down lets you Run the job or Check the job before running (the -C or --check flag on the command line). Select the Inventory to use, the Project, and within the Project, the Playbook you want to run. For Credentials, I’m selecting the Machine SSH one, as it’s my service account that has authority to run the playbooks.

For this article, I’m using my Unixsuite playbook which ensures my standard scripts are on all my servers. I do want to have each environment run separately though so I’m creating multiple Templates. I typically pass the variable as ‘pattern=[env]’. Since this isn’t the command line, I’ll have to add it in the Variables box. My QA environment uses the ‘cabo0’ prefix and pattern is the variable in the playbook so the following should be entered in the Variables box:

---
pattern: "cabo0"
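On the command line this would have been -e 'pattern=cabo0'. Since I'm creating one Template per environment, only the prefix changes; a quick sketch of generating the Variables contents for each environment (the prefixes other than cabo0 are placeholders):

```shell
# cabo0 is my QA prefix; dev0 and prod0 here are placeholders.
cd "$(mktemp -d)"
for env in cabo0 dev0 prod0; do
    printf -- '---\npattern: "%s"\n' "$env" > "vars-$env.yml"
done
cat vars-cabo0.yml
```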

Once saved, click the Template, then click the Launch button to make sure it works. Once you’re sure it’s working as expected, go back to the Templates and select the newly created Template. Click on Schedules and Add. Create a Name and Description. Select a Repeat frequency, I selected weekly. This brings up a few more fields where you can further customize the schedule. I selected Saturday and Never for when it Ends. Then a Start date/time. Since I’m doing it weekly on Saturdays, I selected 11:15pm. Then Save it.

Teams Management

Now that you’ve created all these entries to have your project run regularly, no one else on the team will be able to manage the bits you’ve created. Not until the Team has access.

Under Access, click Teams and then click on the Team you want to modify. Click the Roles tab to begin to assign Roles.

You’ll be presented with multiple options for what can be added to the team. I’ve selected Credentials in this case as I want the team to be able to access GitLab. Click Next to select the Credentials to add.

As you can see, the last three are the Credentials we created in this article. Click the three checkboxes and click Next.

Finally, select the rights the team has to the selected credentials. In general, Use access is sufficient; it lets team members use the credentials without being able to modify them. Once you’ve selected the rights, click Save and all members of the team have access.

You’ll want to do the same thing with other permissions. Access to inventories, playbooks, templates, and so on are granted through the teams interface. By default only the creator has access to the tasks they created.

References

This section provides links to the Ansible documentation. The steps in this article provide the order in which to configure my Ansible Automation Platform (AAP) installation. The links provide more detail that might be beyond the scope of my requirements.

Posted in ansible, Computers, Git | Tagged , , | 1 Comment

Game Store Move: Todo List

I created a shared Google Doc document and shared it with the team. I am popping over and updating this list but the Google Doc is the final doc.

Work Estimates

  • Asbestos Testing. I called Rex Environmental. 5 Samples need to be taken. Same day testing, 24-48 hour results.
  • Architectural Drawings. The city requires drawings of the before and after. I called F9 Productions Inc; they suggested calling the City Inspector to see if drawings were actually needed. The City FAQ says yes, but I got a fast reply: no drawings are needed.
  • Demolition Permit. Longmont charges $50 for the permit.
  • Demolition. I called Gorilla Demolition.
  • Carpeting. I called Family Carpet One Floor & Home. These folks did the current shop’s carpet. After a review of the space, we’ll put carpeting in the retail space and Luxury Plank in the gaming space.
  • Shop Signage. I called Rabbit Hill Graphics and got a very nice sign.
  • Moving Company. I called Johnson Moving and Storage as we used them to move to the house.
  • Electrical Update. The shop needs 12 new outlets. The north wall has none at all. I called Leading Edge Electric.
  • Window Tinting. The current tinting is old, degraded, and peeling off in places. I called Spotshots Windows. It might be delayed depending on other costs.

Bathroom Work

At the moment, the bathroom is ADA compliant space-wise. There are currently no grab bars, so we’ll be adding them sometime during the construction work.

Construction

  • Asbestos Testing. Landlord has someone coming out.
  • Permit for Demolition work.
  • Verify walls are removed and ready for paint.
  • Longmont needs to inspect the changes.

Preparation

  • Need paint and paint gear for the walls.
  • Bring over the two spare sheets of slatwall and mount. See how many more are needed and procure them.
  • For the storage behind the north extension, get some general utility shelves. Customers and gamers won’t see these shelves (generally).
  • Get IKEA Kallax shelf units for the used board games and for terrain in the miniatures gaming space.

Installation

  • Empty the storage shed and get set up in the shop.
  • Empty the storage space in the game shop and get it moved over.
  • Bring over non-retail and easy to carry assets. Posters, pictures, board games, anything else that isn’t needed at the shop.

Utilities and Services

  • Transfer phone number and internet to my LLC.
  • Start Electricity
  • Start Longmont Services (water, trash, sewer).
  • Investigate Security Services including cameras
  • Investigate a Cleaning Service. Carpets, planks, windows, bathroom?
  • Investigate card singles insurance requirements.

Moving Company

Basically moving the shop to the new space. Wrapping up the shelf displays and loading them into the truck. Boxing up wall games and fixtures.

Final Move

  • Notify current lessor that we will not be extending the lease.
  • Box up games and accessories on the slatwall.
  • Remove slatwall and mount in the new shop.
  • Remove security system, cameras, cables, and mirror.
  • Remove wall hangings.
  • Remove posters and stickers and clean front windows.
  • Stop electricity
  • Stop phone/internet (change to new space)
  • Stop Longmont services (water, trash, sewer)

Finals to not forget

  • Purchase safe and have installed
  • Review lease for what needs to be done at the old shop
  • Update distributor and publisher addresses (see wiki)
  • Update USPS (mailbox) and turn in key.
  • Update Google address so folks can find us
  • Update business license and deliver to vendors.
  • Notify the IRS
  • Update business cards
  • Get a drink cooler as we can sell refreshments at the new place.
  • Replace the phone
  • Replace the POS as it’s end of life.
  • Build Miniatures tables.
  • Shut down storage unit if empty
  • Need a desk and chair for the office.
  • Find a better chair for the POS area.
  • Notify bookkeeper
  • Replace the 12′ ladder. The current one is owned by James.
  • Look into demo tables for games.

Posted in Game Store, Gaming | Tagged , | Leave a comment