This article describes how to use the requirements.yml (or .yaml) and requirements.txt files when running playbooks from AWX or Ansible Automation Platform (AAP).
Galaxy Collections and Roles
When running Ansible from the command line interface (CLI), you may need to install a Galaxy Collection or Role for a task that the default collections and roles don't provide. One of Ansible's strengths is that it's extensible.
You can view which collections and roles are installed by using the ansible-galaxy commands. For more information, pass the -h flag.
ansible-galaxy collection list
ansible-galaxy role list
Use the -p flag to point at a non-standard installation location. For example, in the AWX automation container, Ansible collections are located in the /runner/requirements_collections directory rather than in the .ansible directory.
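For example, to list what AWX has installed for a job, point the command at that directory:
ansible-galaxy collection list -p /runner/requirements_collections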
You’ll run the ansible-galaxy command to install the needed collection. For example, for vmware, you’d run the following command.
ansible-galaxy collection install vmware.vmware
For a role, you’d run the following command.
ansible-galaxy role install geerlingguy.docker
If you need to make sure other maintainers of your playbooks have the proper collections and roles installed before running them, you can list the dependencies in a README.md file and have them install everything manually, or simply create a requirements.yml (or .yaml; both work) file.
For the CLI, there are three places where the requirements.yaml file can be located.
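Wherever it lives, the format is the same. As a minimal sketch, a requirements.yml covering the collection and role from the examples above would look something like this:
---
collections:
  - name: vmware.vmware
roles:
  - name: geerlingguy.docker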
In some cases, you'll also have Python library dependencies. These can be installed using the Ansible pip module. Put the requirements.txt file into your repository. Since the pip module takes the path as a parameter, the file can live anywhere; however, putting it where the requirements.yml file is located makes it easier to find and manage.
The file itself is just a list of the Python packages your playbooks need in order to run. There are several available options; see the documentation to explore the capabilities.
Example requirements.txt file located in the project root.
certifi
When you want to install the packages, use the Ansible pip module. In AWX, the project is located in /runner/project. It's best to use the tilde, though, as other automation containers might have a different home directory.
When this runs, the packages are installed into the /runner/.local/lib/python3.9/site-packages directory.
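As a rough sketch, the task looks something like the following. A couple of choices to note: I'm using the playbook_dir variable rather than a tilde or hard-coded path (it resolves to /runner/project in AWX when the playbook sits in the project root), --user puts the packages under the runner's home as noted above, and delegate_to: localhost installs them on the execution node rather than the managed hosts. Adjust to suit your layout.
- name: Install Python requirements for the playbooks
  ansible.builtin.pip:
    requirements: "{{ playbook_dir }}/requirements.txt"
    extra_args: "--user"
  delegate_to: localhost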
Final
I write these articles in large part because I can’t find all the information I personally am looking for in one place. This is my view, my project, and of course my documentation. 🙂
References
Galaxy Documentation – Describes the options available in a requirements.yml file.
Article on the requirements.yaml file – This article also covers the requirements.txt file, though not as completely as it covers requirements.yml.
AWX Workflows let you chain tasks together and act on the outcome. This article provides instructions on how to create an AWX Workflow.
Templates
An AWX Workflow is a series of playbooks that are created in Templates to run a task. In this case, I have a pair of HAProxy servers configured as load balancers for my Kubernetes cluster. The servers use keepalived to ensure the VIP is always available.
keepalived monitors the live server and, if it goes offline, configures the idle server to take over the VIP until the live server comes back online.
In addition, I install monit, a tool that monitors the configured service and restarts it should it fail. It has a notification process and a web server, so we'll know if the service was restarted and can investigate.
This gives us the ideal chain of Templates to try out AWX Workflows.
Workflows
The expectation before you create the AWX Workflow is that you’ve run each task individually and they all run successfully.
Under Templates, click the Add drop-down and select Add workflow template.
Fill out the information in the form.
Name – I added HAProxy Workflow
Description – Installs and configures HAProxy, keepalived, and monit
Organization – Since this is Development, I selected my Dev organization
Inventory – I only have the Development Inventory.
The remaining fields I left for another time. Clicking Save brought me to the Workflow Details page. I clicked Launch and the workflow started with the Visualizer.
Visualizer
In the Visualizer, you begin with a Start block. Click it to begin creating your workflow.
You are now presented with an Add Node dialog box with all of your Templates.
The Node Type lets you do pre-run actions such as synchronizing your Project or Inventory before the run, identifying someone that needs to Approve the next task before proceeding, and even merging in another Workflow. In this case, we’ll simply use the default Job Template and build a simple Workflow.
For this example, select the HAProxy Install Template and click Save.
Now we're presented with the Visualizer, which shows the Start box plus the first Node we created, the HAProxy Install node. When hovering over the node, multiple options become available.
Plus – Add a new node
i – See details about this node
Pencil – Edit this node
Link – Link in a node
Trashcan – Delete this node
Click the Plus and you'll be presented with an Add Node dialog box. This one first lets you select how to proceed: On Success, On Failure, or Always. In this case we want to simply continue, so select On Success.
Click Next and the second task is available. Like the first time, you can select Approve, sync Project or Inventory, link in a Workflow, or simply add a new Job Template. Select the HAProxy Config Job Template and click Save.
Continue until you have a Workflow that consists of HAProxy, keepalived, and Monit. There doesn't seem to be a way to rearrange the Workflow tasks, so it's a straight line. You can pan the Workflow view to see the rightmost task and continue to add Nodes.
When done, click the Save button at the top right and you’re ready to rock!
Run Workflow
When you're ready to run the new Workflow, simply go to the Templates page and click the Launch icon next to the Workflow.
Sibling Nodes
In the example, we created a long chain of events, basically running each task after the prior task completed. But do these really need to be run that way? AWX Workflows let you create Sibling Nodes. These Nodes are in the same column and run simultaneously. For our example, we create Sibling Nodes for the three binary installations and then Child Nodes to configure the software.
Errors
Of course errors can occur. When they do, the Node will indicate an error status and if you selected On Success, the next Node will not start.
In case of error, simply click on the Node tile and it’ll take you to the job so you can troubleshoot.
My issue in this case was that the image couldn't be pulled from quay.io. This is a problem where I live: I'm on high-speed WiFi, which isn't always sufficient to pull the necessary image before the request times out. I do want these containers (awx-ee:latest) to be local so the image is pulled locally rather than from quay.io every time I run a job, but I've been unable to identify where this is defined in the AWX manifest files.
Kubernetes
Just for some background, the AWX process creates multiple containers in the AWX namespace in Kubernetes. When you execute a Template, be it a Job Template or Workflow Template, an automation-job-[job id]-[unique id] container is created. This lets the orchestration environment start containers where they have sufficient resources to run.
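If you're curious, you can watch those pods come and go while a Template runs. Assuming the operator installed AWX into a namespace named awx, something like this will show them:
kubectl -n awx get pods --watch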
References
In the Visualizer Editor, there’s a link to the Visualizer documentation that provides more detail on the process of creating and running AWX Workflows. I’ve added the link here as well.
I’ve made Banana Nut Bread multiple times over the years. I thought I’d post up the recipe so I can make sure I have the ingredients when I’m out shopping. Of course I scraped it off of the ‘net so it’s a different one most times. This time though, again I thought I’d just throw the latest one up so I have it handy.
Preparation
You’ll want to have a stick (half a cup) of butter sitting on the counter warming up plus about 3 “normal” sized bananas that have been sitting out for a week or two. The skins should be just about black all the way around. Watch out for skins that have split as the banana under that spot will have dried out. And warm up the oven to 350 degrees. Heck, by the time I got it all mixed up, the oven had just hit 350.
Ingredients
1/2 cup of butter
1 1/4 cups of sugar
1 teaspoon of vanilla
2 eggs
3 ripe bananas, about a cup more or less
1/4 cup of milk
2 cups of flour
1/2 teaspoon of salt
1/2 teaspoon of baking soda
3/4 cup of pecans or walnuts. I basically just dumped a bunch in without measuring but you do you 🙂
Directions
You want to start with the wet ingredients first, then blend in the bananas, then the flour which turns the fairly liquid mixture a bit more firm.
Wipe down a bread pan or two, depending on how big you want it, with some shortening or butter. This gives the sides and bottom some crispiness and makes the bread a little easier to remove.
You can also make muffins; it's the same process other than using a half-size measuring cup to fill the tin. Fill each cup to just below the edge and it shouldn't overflow.
Pour in the mixture and you’re ready to bake. Slide it into the oven and set the timer for 60 minutes (30 minutes for muffins). Check with a toothpick for doneness. Add 15 minutes to the bread and 5 minutes for the muffins if not quite done yet. It took mine 1 hour and 15 minutes for a full bread pan.
This article describes the methodology used to manage user and team access in AWX.
Terminology
AWX is the upstream open source project used in Ansible Automation Platform (AAP). Prior versions were also called Ansible Tower. I may use AWX, AAP, or even Tower in this and the following related articles.
Environment Methodology
The AWX Quickstart documentation describes the process of configuring AWX by creating an Organization, Users and Teams, an Inventory, Credentials, Projects, and a Job Template.
The problem with this approach is that objects created by a User are only visible to that User until they are added as a Role to a Team. That task would be done by the AWX automation admin, someone on the automation team. For smaller organizations this could be acceptable; however, as the organization grows, it's going to require more members on the automation team just to process tickets.
One of the problems with Roles is that they can only be assigned for existing objects. Under the various tasks, such as Credentials, there is no overall admin Role. This means you can't use Roles to give someone admin privileges over Credentials as a whole.
However, there is a way around this in AWX, and it's how my environments have been configured. I did follow the process to create an Organization, Users, and two Teams: an Admin team and a User team. This is all described below.
For permissions, though, I decided to work at the Organization level and gave the Admin Team full access to the Organization via Roles, and the Users Team the ability to view objects and run Job Templates. This takes the task of working tickets away from the automation admin and gives it to the admins of the group that uses AWX.
I was reading an article on User access, and the proposal was that Users and Teams would be part of the Default Organization. This would give anyone in the Default Organization the ability to view objects in any Organization, while each Organization itself would only be used to manage objects. This keeps things tidy but also permits troubleshooting without having to be a member of one or more Organizations.
AWX Logins
There are three instances of AWX here on my homelab.
Within each instance, there is a Default Organization and an instance specific Organization for the Unix Admins.
HCS-AWX-DEV-EXUX
HCS-AWX-QA-EXUX
HCS-AWX-PROD-EXUX
Teams
There are two Teams in each Organization: one for users who administer the objects in the Organization and one for users who are tasked with running jobs.
Up here in the mountains, we get an occasional power outage. I have the servers on UPSs but generally only have 5 minutes or so to snag my laptop, log in, and start shutting down VMs before the power fails. And sometimes I’m not even around so servers basically crash.
In this case, GitLab failed to start. When checking the gitlab-ctl status output, Redis was identified as down.
Okay. So I checked the logs and ran gitlab-redis-cli stat, but again got an error.
-bash-4.2# gitlab-redis-cli stat
Could not connect to Redis at /var/opt/gitlab/redis/redis.socket: Connection refused
The socket does exist but since redis is down, there’s nothing to connect to. After a bit more sleuthing, I tried a gitlab-ctl reconfigure but that kicked out an error as well.
[2024-03-17T13:03:34+00:00] FATAL: RuntimeError: redis_service[redis] (redis::enable line 19) had an error: RuntimeError: ruby_block[warn pending redis restart] (redis::enable line 68) had an error: RuntimeError: Execution of the command `/opt/gitlab/embedded/bin/redis-cli -s /var/opt/gitlab/redis/redis.socket INFO` failed with a non-zero exit code (1)
stdout:
stderr: Error: Connection reset by peer
At this point, Redis is clearly the problem. I did a gitlab-ctl tail to watch the logs; Redis keeps trying to start but kicks out an RDB error.
2024-03-17_13:09:09.00014 25036:C 17 Mar 2024 13:09:08.999 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
2024-03-17_13:09:09.00020 25036:C 17 Mar 2024 13:09:08.999 # Redis version=6.2.8, bits=64, commit=423c78f4, modified=1, pid=25036, just started
2024-03-17_13:09:09.00022 25036:C 17 Mar 2024 13:09:08.999 # Configuration loaded
2024-03-17_13:09:09.00208 25036:M 17 Mar 2024 13:09:09.001 * monotonic clock: POSIX clock_gettime
2024-03-17_13:09:09.00394 _._
2024-03-17_13:09:09.00397 _.-``__ ''-._
2024-03-17_13:09:09.00398 _.-`` `. `_. ''-._ Redis 6.2.8 (423c78f4/1) 64 bit
2024-03-17_13:09:09.00399 .-`` .-```. ```\/ _.,_ ''-._
2024-03-17_13:09:09.00400 ( ' , .-` | `, ) Running in standalone mode
2024-03-17_13:09:09.00401 |`-._`-...-` __...-.``-._|'` _.-'| Port: 0
2024-03-17_13:09:09.00402 | `-._ `._ / _.-' | PID: 25036
2024-03-17_13:09:09.00406 `-._ `-._ `-./ _.-' _.-'
2024-03-17_13:09:09.00407 |`-._`-._ `-.__.-' _.-'_.-'|
2024-03-17_13:09:09.00408 | `-._`-._ _.-'_.-' | https://redis.io
2024-03-17_13:09:09.00409 `-._ `-._`-.__.-'_.-' _.-'
2024-03-17_13:09:09.00410 |`-._`-._ `-.__.-' _.-'_.-'|
2024-03-17_13:09:09.00411 | `-._`-._ _.-'_.-' |
2024-03-17_13:09:09.00411 `-._ `-._`-.__.-'_.-' _.-'
2024-03-17_13:09:09.00412 `-._ `-.__.-' _.-'
2024-03-17_13:09:09.00413 `-._ _.-'
2024-03-17_13:09:09.00417 `-.__.-'
2024-03-17_13:09:09.00417
2024-03-17_13:09:09.00418 25036:M 17 Mar 2024 13:09:09.003 # Server initialized
2024-03-17_13:09:09.00419 25036:M 17 Mar 2024 13:09:09.003 # WARNING Memory overcommit must be enabled! Without it, a background save or replication may fail under low memory condition. Being disabled, it can can also cause failures without low memory condition, see https://github.com/jemalloc/jemalloc/issues/1328. To fix this issue add 'vm.overcommit_memory = 1' to /etc/sysctl.conf and then reboot or run the command 'sysctl vm.overcommit_memory=1' for this to take effect.
2024-03-17_13:09:09.00443 25036:M 17 Mar 2024 13:09:09.004 * Loading RDB produced by version 6.2.8
2024-03-17_13:09:09.00445 25036:M 17 Mar 2024 13:09:09.004 * RDB age 264959 seconds
2024-03-17_13:09:09.00448 25036:M 17 Mar 2024 13:09:09.004 * RDB memory usage when created 6.59 Mb
2024-03-17_13:09:09.05154 25036:M 17 Mar 2024 13:09:09.051 # Short read or OOM loading DB. Unrecoverable error, aborting now.
2024-03-17_13:09:09.05159 25036:M 17 Mar 2024 13:09:09.051 # Internal error in RDB reading offset 0, function at rdb.c:2750 -> Unexpected EOF reading RDB file
2024-03-17_13:09:09.08898 [offset 0] Checking RDB file dump.rdb
2024-03-17_13:09:09.08905 [offset 26] AUX FIELD redis-ver = '6.2.8'
2024-03-17_13:09:09.08906 [offset 40] AUX FIELD redis-bits = '64'
2024-03-17_13:09:09.08907 [offset 52] AUX FIELD ctime = '1710415990'
2024-03-17_13:09:09.08908 [offset 67] AUX FIELD used-mem = '6911800'
2024-03-17_13:09:09.08908 [offset 83] AUX FIELD aof-preamble = '0'
2024-03-17_13:09:09.08909 [offset 85] Selecting DB ID 0
2024-03-17_13:09:09.08910 --- RDB ERROR DETECTED ---
2024-03-17_13:09:09.08910 [offset 966671] Unexpected EOF reading RDB file
2024-03-17_13:09:09.08911 [additional info] While doing: read-object-value
2024-03-17_13:09:09.08912 [additional info] Reading key 'cache:gitlab:flipper/v1/feature/ci_use_run_pipeline_schedule_worker'
2024-03-17_13:09:09.08913 [additional info] Reading type 0 (string)
2024-03-17_13:09:09.08913 [info] 4828 keys read
2024-03-17_13:09:09.08914 [info] 3570 expires
2024-03-17_13:09:09.08915 [info] 42 already expired
Unexpected EOF reading RDB file. The final solution is to delete /var/opt/gitlab/redis/dump.rdb. I'm not a fan of just deleting the file, so I backed it up and restarted Redis.
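For reference, the recovery amounted to something along these lines (the backup filename is just my choice):
cd /var/opt/gitlab/redis
mv dump.rdb dump.rdb.bad
gitlab-ctl restart redis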
And that seems to have done the trick for Redis. I ran gitlab-ctl stop to completely stop GitLab and then rebooted the server.
Once up, though, PostgreSQL failed to start. In checking, I received the following error:
PANIC: could not locate a valid checkpoint record
It took a bit of hunting to find a basic solution. Basically, I needed to reset the write-ahead log by running the pg_resetwal program (shipped under /opt/gitlab/) as the gitlab-psql user. First I had to stop GitLab, then become the gitlab-psql user and run the program to reset the checkpoint record. It's not a great solution, but it was needed because the server reset due to the power outage.
-bash-4.2# su - gitlab-psql
Last login: Sun Mar 17 17:52:00 UTC 2024 on pts/0
-sh-4.2$ pg_resetwal -f /var/opt/gitlab/postgresql/data
Write-ahead log reset
Then I started the server again and checked the status.
This article provides instructions on how to configure Ansible Automation Platform (AAP) and how to get your Project working. Links for various fields that I don't need in my environment are provided at the end of this article.
Organizations
Before you can do anything, you need to create an Organization, as all additional information is associated with an Organization.
In the AAP console, click on Organizations, then Add.
Name – The Organization name; it's selected for every additional task done in AAP.
Description – A reasonable description of the Organization.
Instance Group – A collection of Container Instances. This lets you run jobs on specific isolated containers. Otherwise all jobs run on the AAP container.
Execution Environment – This replaces the Python virtual environment. It gives you a customized image to run jobs in, so one job's dependencies aren't impacted by the requirements of a different job.
Galaxy Credentials
User Management
While the Admin can make changes, the Admin should really only be administering the cluster and not dealing with the day-to-day work.
The next step, then, is to add the users. Under Access, click Users and fill in the form. Under User Type, there are three options: Normal User, System Auditor, and System Administrator.
Normal User – General access used by all members
System Auditor – Read-Only access to the entire environment
System Administrator – Admin access to the cluster.
Group Management
In AAP, Teams manage access to the various parts of an Organization. When a team member creates something, such as a Credential, only that team member can use it. If the Credential is to be used by everyone, the Team needs to be given permission to access it; then all members of the team have access.
The next step is to create the necessary team(s). Under Access, click Teams and create the new team.
To add members to the team, under Access, click the Users link. Select the user you want to add. Under the Teams tab, select the team(s) the member should belong to.
GitLab Personal Access Token
Next up, we need to access GitLab in order to pull a repository into AAP. To do so, we need to create a Group Access Token. In GitLab, under the group account (External-Unix in my case), open the settings, click on the Access Tokens link, and create the token. I don't need AAP to write back to the repository as it's just applying configuration information for Kubernetes, so just select the read_repository option plus a reasonable name and expiration date.
In AAP, click on the Credentials link and click the Add button. Add a Name (I used Gitlab Access Token), a Description, and of course the Organization you created. Under Credential Type, select GitLab Personal Access Token. While it's actually a Group Access Token, there isn't an option for that in the menu. When you select it, a Token field is displayed. You'll add the Group Access Token here.
Machine Credential
I'm using an SSH private/public RSA key pair and a service account to run Ansible playbooks. My service account has passwordless sudo access to root on all servers, so I don't need to pass in a service account password.
I need to create a Machine Credential for my Service Account. In the Credentials link, select the Machine Credential Type. Several fields are now available. If you use password access, fill in the necessary information. If you're using an SSH key, you can enter your passphrase here as well. In my case, I add my SSH private key for the service account and it's ready to go.
Source Control Credential
In order to pull code from GitLab using the ssh link, we need to create a Source Control Credential. Click Add to create a new Source Control Credential. Similar to the Machine Credential, fill out the fields and then add the private ssh key used when you push git updates to GitLab.
Projects
Now that all the Credentials have been created, we can access the repository and bring in the Ansible playbooks. First though, you’ll need the ssh link to your repository. In GitLab, navigate to that repository, click on Clone, and copy the Clone with SSH link.
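The link will look something like this (the hostname and project path here are made up for illustration):
git@gitlab.example.com:external-unix/playbooks.git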
In AAP, click on Projects and Add. Enter a Project Name and Description. Select the Organization. If you're validating content, select the appropriate Content Signature Validation Credential. For the Source Control Type, select Git. This brings up several additional fields specific to the Git selection. Paste the SSH clone link into the Source Control URL field. Under the Source Control Branch/Tag/Commit field, add the branch, tag, or commit id you want to use for the Project. And under the Source Control Credential field, select the SSH Access to GitLab credential you created earlier.
Branch/Tag/Commit
A quick aside here on this field. If you are deploying to multiple environments, you might consider different strategies for the Projects. For example, the Development environment might be better using a branch strategy as every change then gets applied to the development server(s) for review. For pre-Production environments such as a QA or Staging environment, you might use a git tag to lock in a release. And for Production, you’d use a commit id. This locks Production so even an accidental push to the repository won’t cause Production to update until you’re absolutely ready.
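For example, cutting a tag for a QA release and grabbing the commit id to pin Production might look something like this (the tag name is only an example):
git tag -a qa-2024.03 -m "QA release"
git push origin qa-2024.03
git rev-parse --short HEAD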
Inventories
One of the things I want to do is have git manage the inventory files; for my environment, that's GitLab. There are three options for an Inventory, but for inventories managed in a repository, select just Inventory from the drop-down menu, fill out the Name, Description, and Organization, then Save the Inventory.
Once the inventory is created, select it. There are several tabs, one of which is Sources. Click the Sources tab and click Add. Enter a Name and Description for the inventory source, then select Sourced from Project from the Source drop-down. Additional fields will be displayed.
Under Project, select the Playbooks project. If it's listed, select inventory/hosts from the Inventory file drop-down; if it's not, you can enter it manually. Check the Overwrite checkbox to clear old systems that have been retired or otherwise removed from the hosts file. Upon the next sync, they should disappear from the list of Hosts. Verify it by checking the Hosts link. If they still exist, you can click the slider next to the host to remove it from consideration until AAP clears the entry.
I will note that my inventory files are automatically created by my Inventory web application. That file is then copied into the inventory directory of the Playbooks repository. Since it changes infrequently, you don't have to set up a job to sync regularly.
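For illustration, a trimmed-down inventory/hosts file might look something like this; the hostnames and group are made up, with the cabo0 prefix matching the QA naming used later in this article:
[qa]
cabo0ansible1.example.com
cabo0haproxy1.example.com
cabo0haproxy2.example.com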
Click Save and then either click the Sync all button or if more than one inventory exists, click the circling arrows to the right of the Inventory. Depending on the number of hosts, it may take a few minutes for all the hosts to register.
Templates
Finally, in order to run your playbooks, you need to create a Template. Select Templates and click Add. There are two options on the drop-down menu: a Job Template for a single task, or a Workflow Template for multiple tasks such as a pipeline. For this article, select a Job Template.
Several fields are now available to create the template. Enter a Name and Description. A Job Type drop-down lets you Run the job or Check the job before running (the -C or --check option on the command line). Select the Inventory to use, the Project, and, within the Project, the Playbook you want to run. For Credentials, I'm selecting the Machine SSH one, as it's my service account that has authority to run the playbooks.
For this article, I’m using my Unixsuite playbook which ensures my standard scripts are on all my servers. I do want to have each environment run separately though so I’m creating multiple Templates. I typically pass the variable as ‘pattern=[env]’. Since this isn’t the command line, I’ll have to add it in the Variables box. My QA environment uses the ‘cabo0’ prefix and pattern is the variable in the playbook so the following should be entered in the Variables box:
---
pattern: "cabo0"
Once saved, click the Template, then click the Launch button to make sure it works. Once you're sure it's working as expected, go back to Templates and select the newly created Template. Click on Schedules and Add. Create a Name and Description. Select a Repeat frequency; I selected weekly. This brings up a few more fields where you can further customize the schedule. I selected Saturday, and Never for when it Ends, then a Start date/time. Since I'm doing it weekly on Saturdays, I selected 11:15pm. Then Save it.
Teams Management
Now that you’ve created all these entries to have your project run regularly, no one else on the team will be able to manage the bits you’ve created. Not until the Team has access.
Under Access, click Teams and then click on the Team you want to modify. Click the Roles tab to begin to assign Roles.
You’ll be presented with multiple options for what can be added to the team. I’ve selected Credentials in this case as I want the team to be able to access GitLab. Click Next to select the Credentials to add.
As you can see, the last three are the Credentials we created in this article. Click the three checkboxes and click Next.
Finally, select the rights the team has to the selected credentials. In general, Use access is sufficient, assuming you don't want anyone else to manage the selected credentials. Once you've selected the rights, click Save and all members of the team have access.
You’ll want to do the same thing with other permissions. Access to inventories, playbooks, templates, and so on are granted through the teams interface. By default only the creator has access to the tasks they created.
References
This section provides links to the Ansible documentation. The steps in this article provide the order in which to configure my Ansible Automation Platform (AAP) installation. The links provide more detail that might be beyond the scope of my requirements.
I created a Google Doc and shared it with the team. I pop over and update this list from time to time, but the Google Doc is the authoritative copy.
Work Estimates
Asbestos Testing. I called Rex Environmental. 5 Samples need to be taken. Same day testing, 24-48 hour results.
Architectural Drawings. The city requires drawings of the before and after. I called F9 Productions Inc; he did suggest calling the City Inspector to see if it was needed. The City FAQ says yes, but I got a fast reply: no drawings are needed.
Demolition Permit. Longmont charges $50 for the permit.
Demolition. I called Gorilla Demolition.
Carpeting. I called Family Carpet One Floor & Home. These folks did the current shop’s carpet. After a review of the space, we’ll put carpeting in the retail space and Luxury Plank in the gaming space.
Shop Signage. I called Rabbit Hill Graphics and got a very nice sign.
Moving Company. I called Johnson Moving and Storage as we used them to move to the house.
Electrical Update. The shop needs 12 new outlets. The north wall has none at all. I called Leading Edge Electric.
Window Tinting. The current tinting is old, degraded, and peeling off in places. I called Spotshots Windows. It might be delayed depending on other costs.
Bathroom Work
At the moment, space-wise, the bathroom is ADA compliant. Currently there are no grab bars, so we'll be adding them at some point during the construction work.
Construction
Asbestos Testing. Landlord has someone coming out.
Permit for Demolition work.
Verify walls are removed and ready for paint.
Longmont needs to inspect the changes.
Preparation
Need paint and paint gear for the walls.
Bring over the two spare sheets of slatwall and mount. See how many more are needed and procure them.
For the storage behind the north extension, get some general utility shelves. Customers and gamers won’t see these shelves (generally).
Get IKEA Kallax shelf units for the used board games and for terrain in the miniatures gaming space.
Installation
Empty the storage shed and get set up in the shop.
Empty the storage space in the game shop and get it moved over.
Bring over non-retail and easy to carry assets. Posters, pictures, board games, anything else that isn’t needed at the shop.
Utilities and Services
Transfer phone number and internet to my LLC.
Start Electricity
Start Longmont Services (water, trash, sewer).
Investigate Security Services including cameras
Investigate a Cleaning Service. Carpets, planks, windows, bathroom?
Investigate card singles insurance requirements.
Moving Company
Basically moving the shop to the new space. Wrapping up the shelf displays and loading them into the truck. Boxing up wall games and fixtures.
Final Move
Notify current lessor that we will not be extending the lease.
Box up games and accessories on the slatwall.
Remove slatwall and mount in the new shop.
Remove security system, cameras, cables, and mirror.
Remove wall hangings.
Remove posters and stickers and clean front windows.
Stop electricity
Stop phone/internet (change to new space)
Stop Longmont services (water, trash, sewer)
Finals to not forget
Purchase safe and have installed
Review lease for what needs to be done at the old shop
Update distributor and publisher addresses (see wiki)
Update USPS (mailbox) and turn in key.
Update Google address so folks can find us
Update business license and deliver to vendors.
Notify the IRS
Update business cards
Get a drink cooler as we can sell refreshments at the new place.
Replace the phone
Replace the POS as it’s end of life.
Build Miniatures tables.
Shut down storage unit if empty
Need a desk and chair for the office.
Find a better chair for the POS area.
Notify bookkeeper
Replace the 12′ ladder. The current one is owned by James
Now the fun begins. I passed the lease on to my lawyer and spent the evening reviewing it myself to be familiar with what's being said. In general most of it seemed okay; however, I pushed back on the requirement to provide financial details of my business every year.
I got the updated lease back from my lawyer. He said the phrasing was a bit outdated and several bits needed clarification. I was able to update the definitions for Lease Commencement and Rent Commencement. Here are the pieces my lawyer either excluded or recommended review.
There was a statement about future assigned parking spaces, making sure I had at least 2.
For the pylon sign (next to the street that has several retail space signs), they had a one time $500 charge. I’d asked if they were creating a sign for that fee since maintenance was included in the triple net fee (external maintenance fee; calculate the costs over the past year, divide by 12, and again by the space I occupy).
Some assignment questions (that’s when you sell your business to someone else for example). They wanted me to continue to be liable and to pay them 50% of what I would receive if I sold the business. Both statements were removed.
Then, if I overstay the term of my lease, there's a 150% monthly rent, which my lawyer wanted reduced to 120% if possible.
Then that I was not getting interest on the security deposit. My lawyer said it’s somewhat common but we can ask.
One of the problems right now is the current tenant is waiting on his space to be cleared of kitchen gear so he can move in. Since I’m supposed to take possession Oct 1st, and the current tenant hasn’t moved a thing yet, this weekend should prove interesting. The lease does have a statement that due to unforeseen circumstances, possession might be delayed. I’d sent an email over to ask if there’s a timeline as I need to schedule several work crews to get the space ready.
The following announcement has been posted to social media!
Important announcement. The Atomic Goblin, after being in one place for over 10 years, has exploded in popularity to the point that we absolutely must move into larger digs. Over the next 5 months we’ll be doing construction and preparing the new spot for all of our gaming friends. We’ll have twice as much space for gaming and the ability to bring in more of your favorite games and accessories. We look forward to continuing to see all of our friendly faces at the new location. Expect to see flyers with every purchase at The Goblin providing more details on the move. Stay tuned!
On the assumption we'd get the new space, I created a 5-page todo list of things we'd need to do. I had a business dinner with the team last Saturday, bringing folders for everyone with the todo list, the possible business lines list, the above floor plans plus some alternatives, and a roughed-out business plan (I'm going through a class to make it more professional and focused).
The todo list also contained several estimates on getting work done such as an electrician to add more outlets, demolition company to remove the walls, carpet company, and a few more things to think about, cost wise such as painting the walls. I also reached out to the graphics firm I’ve used for flyers and business cards in the past.
I sent the Letter of Intent off to my lawyer for review after I made a few changes to the original one, got updates from him and sent it off to the landlords.
And we got the place! We take possession October 1st and begin work on getting it ready for moving in in January and being 100% the gaming destination February 1st.
Yesterday I received a proposed sign for the new shop. I did add some text in the top windows and the store logo to the front doors. I’m not sure yet if those will be approved so we’ll see.
Here’s the current store front.
And here’s a mock up of the proposed new sign. I passed it along to the landlords and they approve!