Game Shop Move

Okay, if you recall, I bought my Friendly Local Game Store (FLGS) last year; in a couple of weeks, it'll be exactly a year. As a reminder, I stumbled onto an ad for a board game cafe up in Saskatoon, Canada. It looked interesting, and I spoke to my good friend, who owned my FLGS along with his wife. Just to be clear, I do visit many of the surrounding game shops, but Atomic Goblin Games is my FLGS. After discussion, months of review, getting financials together, getting a loan, and approval from my wife, I purchased my FLGS, which is now actually my FLGS.

I did hire the former owner as my store manager/purchasing manager. I also kept on the other employees and gave them all raises; they're now paid above minimum wage (just a little, though they also get a 30% discount on games). I also brought in the former owner of the game store where my manager used to work, as a contractor for the occasional couple hours of work when needed and as a mentor to help me find my way as a retail game store owner. These guys have been great, let me just say. Plus the number 1 'senior associate' is in line to be assistant manager.

Anyway, we’re now bursting at the seams: lots of gamers, lots of customers, and lots of product. Humorously, last year I told the “management team” that I’d want to open a second Atomic Goblin Games Too or move to a bigger location, and they were all, “well, get some experience and let’s see where we are in a year or two”.

In the year since I took over and implemented changes, we’ve had the best 12 months in the almost 11-year history of the store. Most of those months were each the best month in the store's history, and right now sales are more than 50% higher than at this time last year.

The problem with where we are is that the management companies (yes, two) have not been helpful with expanding into the mostly empty space next door. That would add another 600 or so square feet, taking us from roughly 1,400 to 2,000 square feet (I don’t have an exact number, so close enough). It’s being used as storage for the end unit, a gas station/convenience store. On top of that, I’ve been trying to get the lease in my name for over a year, and a couple of vendors won’t let me take over the old account until it is. I can’t even replace the carpet (it was supposed to last 3 years and it’s been 10) without permission from the owner due to the cost (anything over $10,000 requires approval).

Anyway, enough of that. What’s new?

Well, since I can’t get the lease and can’t expand, and we must expand, we’ve been checking out possibilities in the city. There are three other game stores: one downtown (5th Street) and two close to each other at the corner of Main Street and 119, another major road through Longmont. The downtown shop is mainly comics, with games and puzzles. The two on the corner are mainly Magic specific and Warhammer specific, respectively. We are the board game shop in town.

We had one really good choice. A bit larger than we needed, but with some leeway from the owners, who were very positive, we could grow into the space. It’s a bit over 5,000 square feet, not counting the upstairs they were throwing in for free. Unfortunately, they were working with a nationwide salon chain first. But we would have meant lower initial maintenance costs: no need to drill holes in the foundation for water drainage, just a bathroom upgrade and replacing the carpet (which the salon would need anyway). It’s also a pretty good price in general: $12/sq ft NNN ($7.50/sq ft). NNN is basically the exterior maintenance costs: parking lot, grass, trees, roof, etc.

It was the spot Hobbytown occupied, so it was already a gaming/hobby destination in town. Win-win, but unfortunately, again, not really available. I did send a couple of emails asking if there was any way we could slide in, but in the event we couldn’t, we would keep looking around.

Link to the listing if you’re interested: https://www.coloradogroup.com/property/1935-main-street-suite-b-longmont-co/ (it may be gone in the future of course 🙂 ).

Here’s the floor plan. The lower piece is the part they were throwing in for free, as it’s inaccessible otherwise.

And here’s one of the pictures.

The nice thing is it’s on Main Street: there’s a BBQ place in the end unit, with Dairy Queen and Wendy’s next door. Pretty good for the gamers and such coming into the shop.

Anyway, my wife and I went driving around town today to check out places based on several commercial listings we found online. One is a former Big 5 Sporting Goods big-box store: 10,000 square feet. We checked it out, and it’s really 5,000 of retail plus a second floor with an additional 5,000 of office space. Could be interesting; it’s really quite large, but I will call to check.

There were several other possible places. A couple were in a kind of skeevy strip mall. Not horrible in general; there are several restaurants and fast food places (like Five Guys and Chipotle), but I dunno, it had a kind of off-putting feel. Plus there were 4 open spaces, which doesn’t inspire confidence in customers.

A second block was better: a former K-Mart converted into several shops, like a Big Lots, a Cricket, and an Arc thrift shop. It looked good peering through the window, but when I checked online, it’s almost 8,000 square feet.

We went into one strip mall where, from the listing, it looked like the open space was taken. We continued on, but my wife spotted another open space that wasn’t in the listing. We were able to locate the commercial listing on another site.

Listing here: https://www.loopnet.com/viewer/pdf?…Bridge%20Park%20Plaza15151517%20Main%20St.pdf

In the picture, it’s 1517 units A and B.

It’s a touch over 3,000 square feet, which fits nicely into our expansion needs. Unit A is separated from Unit B by a wall with a double-wide, door-like opening (it was a single unit at one point). The front part is 1,500 square feet, the back the same. With the front part as retail, we could use Unit B as the gaming area: more room for tables and such. Plus, if we’re able, we might pick up Unit C and add it as an RPG gaming area, with two walls forming a corridor so RPGers could have quiet to play.

The cool thing? There’s a sliding garage-type door in the gap in the wall between Unit B and Unit C. That would be great on nicer days: open the door and give the gamers some fresh air.

The bottom of the pic faces Main Street. If you clicked the link, there’s a taco place, a Pizza Hut, and an ice cream place. It feels so much nicer, neighborhood-wise, with outside seating at the taco place. It just feels like a good place to move to. At 3,000 square feet (or more if we take Unit C, or even Units D and E, the office space), it would let us grow a little more sanely than the 5,600 or 10,000 square foot places, and in 5 years maybe move into a much larger space.

Cool beans.

Looking forward to seeing how this shakes out.

Posted in Game Store, Gaming

Pan Fried Rainbow Trout

Basically, I do this when I find a recipe online that I like and want to make sure I can find it again 🙂

Ingredients

  • 2 Rainbow Trout fillets, boned and with skin.
  • 1/2 teaspoon kosher salt
  • 1/4 teaspoon black pepper
  • 1/4 teaspoon garlic granules
  • 1/4 cup butter (half a stick)
  • 1/2 tablespoon fresh lemon juice
  • 2 tablespoons minced parsley

Instructions

  • Rinse off the trout and pat dry with a paper towel.
  • Sprinkle the skin side with half of the mixture of the salt and pepper. I find doing it as a pinch makes sure it’s spread evenly.
  • Heat the butter in a 12 inch or so nonstick skillet. When the butter starts bubbling, swirl it around to make sure it’s spread evenly around the pan.
  • Add the two fillets skin side down and sprinkle the rest of the salt and pepper mixture on the fleshy side. Press on the fish to make sure the skin touches the pan.
  • Cook time is about 3 minutes.
  • When the skin side is done, sprinkle the garlic on the fillets and flip them over. Cook for about 3 minutes.
  • After removing the fish, add the lemon juice to the pan and stir. Then drizzle the sauce over the fish.
  • Sprinkle with parsley and serve.
Posted in Cooking

Quick and Dirty Kubernetes

Overview

At times you want to quickly spin up a Kubernetes cluster for some quick test or another. While I do have several Kubernetes clusters in my homelab, once in a while you want to do a quick test or even follow a tutorial to get familiar with this or that tool.

This time I’m using a tool called kind. See the References below to find the link to the site.

Installation

First off, you’ll need a Docker server for the cluster nodes to run on, plus the Go toolchain since we're installing via go install. Next, install the kind tool.

# go install sigs.k8s.io/kind@v0.17.0
go: downloading sigs.k8s.io/kind v0.17.0
go: downloading github.com/spf13/pflag v1.0.5
go: downloading github.com/spf13/cobra v1.4.0
go: downloading github.com/pkg/errors v0.9.1
go: downloading github.com/alessio/shellescape v1.4.1
go: downloading github.com/mattn/go-isatty v0.0.14
go: downloading golang.org/x/sys v0.0.0-20210630005230-0f9fa26af87c
go: downloading github.com/pelletier/go-toml v1.9.4
go: downloading github.com/BurntSushi/toml v1.0.0
go: downloading github.com/evanphx/json-patch/v5 v5.6.0
go: downloading gopkg.in/yaml.v3 v3.0.1
go: downloading sigs.k8s.io/yaml v1.3.0
go: downloading gopkg.in/yaml.v2 v2.4.0
go: downloading github.com/google/safetext v0.0.0-20220905092116-b49f7bc46da2

Once kind is installed, start up your test cluster. Note that kind lands under your home directory in go/bin. Either add $HOME/go/bin to your PATH or use the full path in the command.

# go/bin/kind create cluster --name nginx-ingress --image kindest/node:v1.23.5
Creating cluster "nginx-ingress" ...
 ✓ Ensuring node image (kindest/node:v1.23.5) 🖼
 ✓ Preparing nodes 📦
 ✓ Writing configuration 📜
 ✓ Starting control-plane 🕹️
 ✓ Installing CNI 🔌
 ✓ Installing StorageClass 💾
Set kubectl context to "kind-nginx-ingress"
You can now use your cluster with:

kubectl cluster-info --context kind-nginx-ingress

Thanks for using kind! 😊

See if the cluster is up.

# kubectl get nodes
NAME                          STATUS   ROLES                  AGE     VERSION
nginx-ingress-control-plane   Ready    control-plane,master   3m21s   v1.23.5

And it’s ready to be used. Pretty interesting.
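By default, kind builds a single-node cluster. If a test needs worker nodes, kind accepts a config file; here's a minimal sketch based on kind's documented Cluster config (the node counts are just an example):

```yaml
# kind-config.yaml: one control plane plus two workers
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
- role: worker
- role: worker
```

Pass it in with `go/bin/kind create cluster --name test --config kind-config.yaml`, and clean up afterwards with `go/bin/kind delete cluster --name test`.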

References

  • https://kind.sigs.k8s.io/
Posted in Computers, Kubernetes

Ansible Automation Platform Installation

Overview

In order to use AWX, the upstream project of Ansible Automation Platform (formerly Ansible Tower), we need a working cluster. This article provides instructions on how to install and use AWX.

Installation

Before doing the installation, you’ll either need to install the PostgreSQL server or simply create the postgres user account on the storage server (NFS in my case). Otherwise, the postgres container won’t start without mucking with directory permissions.

# groupadd -g 26 postgres
# useradd -c "PostgreSQL Server" -d /var/lib/pgsql -s /bin/bash \
  -u 26 -g 26 -m postgres

The installation process for AWX is pretty simple. In the Gitops repo under [dev/qa/prod]/cluster/awx, you’ll apply registry-pv.yaml first. This defines the persistent volume backed by the NFS server's /srv/nfs4/registry export, where the postgres container will store its data.

apiVersion: v1
kind: PersistentVolume
metadata:
  name: registry-pv
spec:
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 20Gi
  nfs:
    path: /srv/nfs4/registry
    server: 192.168.101.170
  persistentVolumeReclaimPolicy: Retain
  volumeMode: Filesystem

And it worked. The PVC was allocated and the pods started.

Once the registry PV has been created, you might want to update the AWX tag in the kustomization.yaml file. The one in the Gitops repo is at 2.16.1; however, when you kick it off, AWX will upgrade itself, so as long as you’re not too far off the next version, you can probably just apply as defined. Note that it’s ‘-k’ to tell kubectl this is a kustomization file and not a “normal” YAML file.

$ kubectl apply -k .
namespace/awx created
customresourcedefinition.apiextensions.k8s.io/awxbackups.awx.ansible.com created
customresourcedefinition.apiextensions.k8s.io/awxmeshingresses.awx.ansible.com created
customresourcedefinition.apiextensions.k8s.io/awxrestores.awx.ansible.com created
customresourcedefinition.apiextensions.k8s.io/awxs.awx.ansible.com created
serviceaccount/awx-operator-controller-manager created
role.rbac.authorization.k8s.io/awx-operator-awx-manager-role created
role.rbac.authorization.k8s.io/awx-operator-leader-election-role created
clusterrole.rbac.authorization.k8s.io/awx-operator-metrics-reader created
clusterrole.rbac.authorization.k8s.io/awx-operator-proxy-role created
rolebinding.rbac.authorization.k8s.io/awx-operator-awx-manager-rolebinding created
rolebinding.rbac.authorization.k8s.io/awx-operator-leader-election-rolebinding created
clusterrolebinding.rbac.authorization.k8s.io/awx-operator-proxy-rolebinding created
configmap/awx-operator-awx-manager-config created
service/awx-operator-controller-manager-metrics-service created
deployment.apps/awx-operator-controller-manager created
awx.awx.ansible.com/awx-demo created

Postgres Database and Storage

The postgres container has a default configuration that uses attached storage (PV and PVC) for the database. This is an 8 GB slice. The problem is that it creates a [share]/data/pgdata directory for the postgres database, which means you have to ensure you have a unique PV for each postgres container.

Of course, if you’re using an external postgres server, make sure you make the appropriate updates to the configuration.
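For reference, the awx-operator takes external database settings via a Secret referenced from the AWX resource's postgres_configuration_secret field. A sketch (the hostname and credentials below are placeholders; the key names follow the operator's documented format):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: awx-demo-postgres-configuration
  namespace: awx
stringData:
  host: postgres.dev.internal.pri   # placeholder hostname
  port: "5432"
  database: awx
  username: awx
  password: changeme                # placeholder
  sslmode: prefer
  type: unmanaged                   # tells the operator not to deploy its own postgres
type: Opaque
```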

Ingress Access

In addition to the pods, we need to create a DNS entry plus an ingress route.

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: awx-demo
  namespace: awx
  annotations:
    kubernetes.io/ingress.class: haproxy
spec:
  rules:
  - host: awx.dev.internal.pri
    http:
      paths:
      - backend:
          service:
            name: awx-demo-service
            port:
              number: 80
        path: /
        pathType: Prefix
  tls:
  - hosts:
    - awx.dev.internal.pri

Running!

And now AWX should be up and running. It may do an upgrade so you may have to wait a few minutes.

$ kubectl get pods -n awx
NAME                                               READY   STATUS    RESTARTS   AGE
awx-demo-postgres-13-0                             1/1     Running   0          24m
awx-demo-task-857c895bf9-rt2h8                     4/4     Running   0          23m
awx-demo-web-6c4df77799-6mn9p                      3/3     Running   0          21m
awx-operator-controller-manager-6544864fcd-tbpbm   2/2     Running   0          2d13h

Local Repository

One of the issues with being on a high-speed wireless internet connection is that we don’t want to keep pulling images from the internet. I have a local Docker Registry where I identify images that Kubernetes or OpenShift uses, pull them to my docker server, tag them locally, push them up to the local repository, and then update the various configurations, either before adding them to the cluster or after they’ve been installed. The main container we want locally is the automation one (awx-ee:latest), as it’s pulled every time a job runs.

Here is the list of images found while parsing through the awx namespace. Note we also update the imagePullPolicy to Always as a security measure.

  • quay.io/ansible/awx:24.3.1
  • quay.io/ansible/awx-ee:24.3.1
  • quay.io/ansible/awx-ee:latest
  • quay.io/ansible/awx-operator:2.16.1
  • kubebuilder/kube-rbac-proxy:v0.15.0
  • docker.io/redis:7
  • quay.io/sclorg/postgresql-15-c9s:latest

For the local docker repository, I’ll pull the image, tag it locally, then push it to the local server. Like so:

# docker pull quay.io/ansible/awx-ee:latest
latest: Pulling from ansible/awx-ee
a0a261c93a1a: Already exists
aa356d862a3f: Pull complete
ba1bfe7741c6: Pull complete
60791ff3f035: Pull complete
af540118867f: Pull complete
907a534f67c1: Pull complete
d6e203406e7e: Pull complete
e752d2a39a5b: Pull complete
08538fc74157: Pull complete
1b5fcec6379e: Pull complete
3924e18106c9: Pull complete
660e2ba9814c: Pull complete
4f4fb700ef54: Pull complete
8bd6dd298579: Pull complete
278858ab53b1: Pull complete
9c8ccd66c6ea: Pull complete
bfcd72bbf16f: Pull complete
Digest: sha256:53a523b6257abc3bf142244f245114e74af8eac17065fce3eaca7b7d9627fb0d
Status: Downloaded newer image for quay.io/ansible/awx-ee:latest
quay.io/ansible/awx-ee:latest
# docker tag quay.io/ansible/awx-ee:latest bldr0cuomrepo1.dev.internal.pri:5000/ansible/awx-ee:latest
# docker push bldr0cuomrepo1.dev.internal.pri:5000/ansible/awx-ee:latest
The push refers to repository [bldr0cuomrepo1.dev.internal.pri:5000/ansible/awx-ee]
5487431a4ee2: Layer already exists
5f70bf18a086: Layer already exists
f0a66f6d7663: Pushed
08832c5d2e19: Pushed
a6dcafa3a2bc: Pushed
2f3c22398896: Pushed
9f7095066a3e: Pushed
a39187c28f90: Pushed
e0f7e9d6d56c: Pushed
6c9e2e4049d1: Pushed
1c32be0b7423: Pushed
fad1f4d67115: Pushed
2755eb3d7410: Pushed
a3f08e56aa71: Pushed
7823a8ff9e75: Pushed
e679d25c8a79: Pushed
8a2b10aa0981: Mounted from sclorg/postgresql-15-c9s
latest: digest: sha256:242456e2ac473887c3ac385ff82cdb04574043ab768b70c1f597bf3d14e83e99 size: 4083
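The pull/tag/push dance can be scripted for the whole image list. Here's a sketch; the registry host comes from above, but the helper function and the dry-run echoes are mine, so drop the echoes to actually mirror the images:

```shell
#!/bin/sh
# Local registry from the article; adjust to yours.
LOCAL=bldr0cuomrepo1.dev.internal.pri:5000

# Map an upstream reference to its local tag by stripping the source
# registry (the first path component containing a dot or colon).
local_tag() {
  ref=$1
  case ${ref%%/*} in
    *.*|*:*) ref=${ref#*/} ;;
  esac
  echo "$LOCAL/$ref"
}

# Dry run: echo the commands. Remove the echoes to mirror for real.
for image in quay.io/ansible/awx-ee:latest docker.io/redis:7; do
  echo docker pull "$image"
  echo docker tag "$image" "$(local_tag "$image")"
  echo docker push "$(local_tag "$image")"
done
```

References without a registry prefix (like kubebuilder/kube-rbac-proxy) pass through unchanged, which matches how docker resolves them.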

Admin Password

You’ll have to get the admin password; run the following command to retrieve it. Once retrieved, log in to https://awx.dev.internal.pri (or whatever you’re using) as admin with that password. When you log in, the password is cleared, so make sure you save it somewhere.

$ kubectl get secret awx-demo-admin-password -o jsonpath="{.data.password}" -n awx | base64 --decode; echo
G4XfwRsfk9MycbxnS9cE8CDfqSKIuNMW

If you forget the admin password or simply want to reset it, log into the web container and reset it there.

$ kubectl exec awx-demo-web-6c4df77799-6mn9p -n awx --stdin --tty -- /bin/bash
bash-5.1$ awx-manage changepassword admin
Changing password for user 'admin'
Password:
Password (again):
Password changed successfully for user 'admin'
bash-5.1$

Troubleshooting

Persistent Volumes

The one issue I had was that the persistent volume claim (PVC) failed to find appropriate storage.

$ kubectl describe pvc postgres-13-awx-demo-postgres-13-0 -n awx
Name:          postgres-13-awx-demo-postgres-13-0
Namespace:     awx
StorageClass:
Status:        Pending
Volume:
Labels:        app.kubernetes.io/component=database
               app.kubernetes.io/instance=postgres-13-awx-demo
               app.kubernetes.io/managed-by=awx-operator
               app.kubernetes.io/name=postgres-13
Annotations:   <none>
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:
Access Modes:
VolumeMode:    Filesystem
Used By:       awx-demo-postgres-13-0
Events:
  Type    Reason         Age                        From                         Message
  ----    ------         ----                       ----                         -------
  Normal  FailedBinding  3m17s (x14344 over 2d11h)  persistentvolume-controller  no persistent volumes available for this claim and no storage class is set

However, I do have a persistent volume.

$ kubectl get pv
NAME         CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
storage-pv   100Gi      RWX            Retain           Available                                   165d

It took just a little digging, but I figured out the problem.

$ kubectl get pvc postgres-13-awx-demo-postgres-13-0 -n awx -o yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  creationTimestamp: "2023-09-11T02:03:34Z"
  finalizers:
  - kubernetes.io/pvc-protection
  labels:
    app.kubernetes.io/component: database
    app.kubernetes.io/instance: postgres-13-awx-demo
    app.kubernetes.io/managed-by: awx-operator
    app.kubernetes.io/name: postgres-13
  name: postgres-13-awx-demo-postgres-13-0
  namespace: awx
  resourceVersion: "54733870"
  uid: 1574b79e-1e17-4825-bc25-d70ac4021af7
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 8Gi
  volumeMode: Filesystem
status:
  phase: Pending

Note the spec.accessModes setting is ReadWriteOnce; however, the storage-pv persistent volume is configured as ReadWriteMany (RWX), so the claim can never bind to it.
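One way to resolve it is to provide a volume the claim can actually bind. A sketch, reusing the NFS export from earlier in the article (the PV name and size are mine; the accessModes line is what matters for the bind):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: postgres-pv
spec:
  accessModes:
  - ReadWriteOnce        # matches the mode the PVC requests
  capacity:
    storage: 8Gi         # at least the PVC's 8Gi request
  nfs:
    path: /srv/nfs4/registry
    server: 192.168.101.170
  persistentVolumeReclaimPolicy: Retain
  volumeMode: Filesystem
```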

Conclusion

When you get to the website and log in, you’re ready to add your projects!

References

Posted in ansible, Computers, Kubernetes

Kubernetes Issues

Overview

This article lists a couple of issues that occurred while I was building this environment. They don’t fit into any of the specific articles, mainly because they were likely due to my testing rather than anything in the installation itself. The final articles should be accurate and just work; during testing, though, these issues came up and I had to track down fixes.

Terminating Pods

I had some pods that got stuck terminating for a long period of time. There were a couple of suggestions, but the two that seemed to work best were to either force the deletion or clear any finalizers on the pod. A finalizer is basically a task that needs to run; the pod waits for successful completion of the task before it's removed.
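For the finalizer route, the same JSON patch used for the ArgoCD application later in this article works on pods too. A sketch (the pod name comes from the transcript below; the command is echoed as a dry run since it needs a live cluster):

```shell
#!/bin/sh
# JSON patch that removes all finalizers from an object's metadata.
patch='[{"op":"remove","path":"/metadata/finalizers"}]'

# Dry run; drop the echo to run it against a live cluster.
echo kubectl patch pod llamas-f8448d86c-br4z8 -n llamas --type json -p "$patch"
```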

The main solution I used was to force the delete. Make sure you know which node the pod is on before deleting it.

$ kubectl get pods -llamas -o wide
NAMESPACE            NAME                                                        READY   STATUS        RESTARTS      AGE    IP                NODE                                 NOMINATED NODE   READINESS GATES
llamas               llamas-f8448d86c-br4z8                                      1/1     Terminating   0             3d1h   10.42.232.197     tato0cuomknode3.stage.internal.pri   <none>           <none>
$ kubectl delete pod llamas-f8448d86c-br4z8 -n llamas --grace-period=0 --force
Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
pod "llamas-f8448d86c-br4z8" force deleted

This can be an issue because, as noted, the underlying resource the pod was waiting on could also be stuck. The main fix for that was to restart kubelet on the node the pod was running on; this is why you get the node name before forcing the deletion. Otherwise, it’s safest to restart kubelet on all worker nodes, which is no fun if there are more than a few. And if you don’t have privileged access to the nodes, you’ll have to get with your sysadmin team.

Application Deletion

While working with projects in ArgoCD, I mucked up one of the applications so badly it couldn’t be deleted. This was due to some process that needed to complete but had (apparently) been removed outside the normal deletion process. This time I removed the finalizer, and the application was simply deleted.

$ kubectl get application -A
NAMESPACE   NAME     SYNC STATUS   HEALTH STATUS
argocd      llamas   Unknown       Unknown
$ kubectl patch application/llamas -n argocd --type json --patch='[ { "op": "remove", "path": "/metadata/finalizers" } ]'
application.argoproj.io/llamas patched
$ kubectl get application -A
No resources found


Posted in Computers, Kubernetes

Llamas Band and Green/Blue Deployments

Overview

My last two jobs indicated they were interested in, or already doing, green/blue deployments. There are multiple methods of deploying in this manner. Kubernetes has two strategies in the deployment context for gradually rolling out a new image and rolling back in the event of a problem. Argo provides a Rollouts deployment process that adds further deployment strategies via a controller and CRDs.

Kubernetes Strategies

There are two update strategies available in Kubernetes by default: the Recreate strategy and the RollingUpdate strategy.

The Recreate strategy essentially deletes all existing pods before starting up the new pods.

The RollingUpdate strategy uses maxUnavailable and maxSurge to do a rolling update. The default for both is 25%, although you can set them to absolute pod counts instead.
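As a sketch, the strategy block sits in the Deployment spec like this (the llamas naming is illustrative, and the values shown are the defaults just described):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: llamas
spec:
  replicas: 4
  strategy:
    type: RollingUpdate      # or Recreate to delete all pods first
    rollingUpdate:
      maxUnavailable: 25%    # default; an absolute pod count also works
      maxSurge: 25%          # default
  selector:
    matchLabels:
      app: llamas
  template:
    metadata:
      labels:
        app: llamas
    spec:
      containers:
      - name: llamas
        image: php:7-apache
```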

Argo Rollouts Strategies

There are two update strategies provided by the Argo Rollouts tooling: the blueGreen strategy and the Canary strategy. Note that the Kubernetes documentation does describe how you can perform a canary update with stock Kubernetes; in Argo Rollouts, Canary is an actual strategy.

blueGreen Strategy

The blueGreen strategy from Argo Rollouts has two services: an Active service and a Preview service. When an update is made to the Rollout context, new pods are started and the Preview service points to the new ReplicaSet. You can use an Analysis run to verify the new ReplicaSet is ready for traffic. Once the tests pass, the Active service is promoted to the new ReplicaSet and the old ReplicaSet is scaled down to zero.

Canary Strategy

The Canary strategy lets you configure the rollout so that only a small percentage of users see the new ReplicaSet at first. You define steps in the strategy that must complete before migrating more users to the new ReplicaSet. The steps can be simple pause statements, where you wait a specific period of time before migrating more users, or checks on the number of active pods. You can even have it pause until someone manually promotes the ReplicaSet.
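A sketch of those steps in a Rollout manifest, per the Argo Rollouts CRD (the weights, durations, and llamas naming are just examples):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Rollout
metadata:
  name: llamas
spec:
  replicas: 5
  strategy:
    canary:
      steps:
      - setWeight: 20          # send 20% of traffic to the new ReplicaSet
      - pause: {duration: 10m} # wait before widening the rollout
      - setWeight: 50
      - pause: {}              # wait indefinitely for a manual promote
  selector:
    matchLabels:
      app: llamas
  template:
    metadata:
      labels:
        app: llamas
    spec:
      containers:
      - name: llamas
        image: php:7-apache
```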

Green/Blue Process

The problem with these strategies is that they don’t address the need for testing in the same environment. Some customers don’t have access to lower environments, so having a second deployment in every environment means new changes can be tested before they go live. If the requirement is to have two sites up and running, one live and one test, then the process needs something different to accommodate it.

For the Llamas band website, there are two namespaces: llamas-blue and llamas-green. If we identify llamas-blue as the live site, its ingress context would have llamas.internal.pri as the ingress URL, while in the llamas-green namespace we might define the ingress URL as llamas-green.internal.pri for testing. That way we can access https://llamas-green.internal.pri, test new features, and ensure the site continues to work as expected in a production environment. When ready to activate the llamas-green namespace, we update the llamas-blue ingress context’s URL to llamas-blue.internal.pri and the llamas-green ingress context’s URL to llamas.internal.pri.

This gives us the capability of immediately switching back to the llamas-blue project in case something goes wrong. Otherwise all future updates now apply to the llamas-blue project.
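The swap itself is just two host edits. A sketch using kubectl patch (the helper function is mine, and the commands are echoed as a dry run since they need a live cluster; note the TLS host gets the same change as the rule host):

```shell
#!/bin/sh
# Build a JSON patch that repoints an Ingress rule host and its TLS host.
host_patch() {
  printf '[{"op":"replace","path":"/spec/rules/0/host","value":"%s"},{"op":"replace","path":"/spec/tls/0/hosts/0","value":"%s"}]' "$1" "$1"
}

# Dry run of the swap: green goes live, blue becomes the fallback.
echo kubectl patch ingress llamas -n llamas-blue --type json -p "$(host_patch llamas-blue.internal.pri)"
echo kubectl patch ingress llamas -n llamas-green --type json -p "$(host_patch llamas.internal.pri)"
```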

Here’s the blue ingress route.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: llamas
  namespace: llamas-blue
  annotations:
    kubernetes.io/ingress.class: haproxy
spec:
  rules:
  - host: llamas.internal.pri
    http:
      paths:
      - backend:
          service:
            name: llamas
            port:
              number: 80
        path: /
        pathType: Prefix
  tls:
  - hosts:
    - llamas.internal.pri

And the green ingress route.

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: llamas
  namespace: llamas-green
  annotations:
    kubernetes.io/ingress.class: haproxy
spec:
  rules:
  - host: llamas-green.internal.pri
    http:
      paths:
      - backend:
          service:
            name: llamas
            port:
              number: 80
        path: /
        pathType: Prefix
  tls:
  - hosts:
    - llamas-green.internal.pri

References

Posted in CI/CD, Computers

Llamas Band and Continuous Delivery

Overview

In this article, I’ll provide details on how to configure ArgoCD for the Llamas band project, including deploying to the other sites.

Continuous Delivery

With ArgoCD installed and the Llamas container CI pipeline completed, we’ll use this configuration to ensure any changes that are made to the Llamas website are automatically deployed when the container image is updated or any other configuration changes are made.

Project

In my homelab, there really isn’t a requirement for projects; however, in a more professional environment, you’ll create an ArgoCD project for your application.

In this project, you’re defining which Kubernetes clusters the applications in the project have access to. I have four environments, and since this one project spans all four clusters, we need to configure the access under spec.destinations. These are the same cluster links you created using the argocd CLI.

Since this is an ArgoCD configuration, it goes in the argocd namespace.

apiVersion: argoproj.io/v1alpha1
kind: AppProject
metadata:
  name: llamas
  namespace: argocd
spec:
  clusterResourceWhitelist:
  - group: '*'
    kind: '*'
  description: Project to install the llamas band website
  destinations:
  - namespace: 'llamas'
    server: https://kubernetes.default.svc
  - namespace: 'llamas'
    server: https://cabo0cuomvip1.qa.internal.pri:6443
  - namespace: 'llamas'
    server: https://tato0cuomvip1.stage.internal.pri:6443
  - namespace: 'llamas'
    server: https://lnmt1cuomvip1.internal.pri:6443
  sourceRepos:
  - git@lnmt1cuomgitlab.internal.pri:external-unix/gitops.git

Application Configuration

When configuring the applications, since I have four sites, I’m creating each with an extension, llamas-dev for example. The project defines which clusters each application can access under the spec.destinations data set.

Much of the configuration should be easy enough to understand.

  • name – I used the site type to extend the name
  • namespace – It’s an ArgoCD configuration file so argocd
  • project – The name of the project (see above)
  • repoURL – The URL where the repo resides. I’m using an ssh like access method so git@ for this
  • targetRevision – The branch to monitor
  • path – The path to the files that belong to the llamas website
  • recurse – I’m using directories to manage files so I want argocd to check all subdirectories for changes
  • destination.server – One of the spec.destinations from the project
  • destination.namespace – The namespace for the project

The ignoreDifferences block is needed because I’m using Horizontal Pod Autoscaling (HPA) to manage replicas. While HPA does update the deployment, there could be a gap where argocd terminates pods before it catches up with the deployment. With this, we’re just ignoring that field to prevent the conflict.

You can test this easily by setting the deployment’s spec.replicas to 1 (the default) and then adding the HPA configuration. When you check the deployment afterwards, you’ll see it’s now set to 3.
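For reference, a sketch of the HPA involved (the CPU target and maxReplicas are examples; minReplicas: 3 matches the behavior just described):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: llamas
  namespace: llamas
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: llamas
  minReplicas: 3
  maxReplicas: 6
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 80
```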

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: llamas-dev
  namespace: argocd
spec:
  project: llamas
  source:
    repoURL: git@lnmt1cuomgitlab.internal.pri:external-unix/gitops.git
    targetRevision: dev
    path: dev/llamas/
    directory:
      recurse: true
  destination:
    server: https://kubernetes.default.svc
    namespace: llamas
  syncPolicy:
    automated:
      prune: false
      selfHeal: false
  ignoreDifferences:
    - group: apps
      kind: Deployment
      name: llamas
      namespace: llamas
      jqPathExpressions:
        - .spec.replicas

And when checking the status.

$ kubectl get application -A
NAMESPACE   NAME         SYNC STATUS   HEALTH STATUS
argocd      llamas-dev   Synced        Healthy

Remote Clusters

Of course, we also want the Dev ArgoCD instance to manage the other site installations rather than installing ArgoCD at every site. Basically, ArgoCD will need permission to apply the configuration files to the remote clusters.

For the Llamas site, we’ll need a second, slightly different ArgoCD Application file. The differences are only the metadata.name, where I use qa instead of dev; the spec.destination.server, which is the API server of the cabo cluster; the spec.source.targetRevision of main instead of dev; and of course spec.source.path, the path to the Llamas files. The rest of the information is the same.

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: llamas-qa
  namespace: argocd
spec:
  project: llamas
  source:
    repoURL: git@lnmt1cuomgitlab.internal.pri:external-unix/gitops.git
    targetRevision: main
    path: qa/llamas/
    directory:
      recurse: true
  destination:
    server: https://cabo0cuomvip1.qa.internal.pri
    namespace: llamas
  syncPolicy:
    automated:
      prune: false
      selfHeal: false
  ignoreDifferences:
    - group: apps
      kind: Deployment
      name: llamas
      namespace: llamas
      jqPathExpressions:
        - .spec.replicas

Next up, a blue/green deployment for the llamas website. What fun!

Posted in CI/CD, Computers, Git | Tagged , | 1 Comment

Llamas Band Website

Overview

This article provides instructions on how to build my llamas container and how to deploy it into my Kubernetes cluster. In addition, a Horizontal Pod Autoscaling configuration is applied.

Container Build

The llamas website is automatically installed in /opt/docker/llamas/llamas by the GitLab CI/CD pipeline whenever I make a change to the site. The docker configuration files for building the image are already created.

The 000-default.conf file. This file configures the Apache virtual host for the web server.

<VirtualHost *:80>
  ServerAdmin cschelin@localhost
  DocumentRoot /var/www/html

  <Directory /var/www>
    Options Indexes FollowSymLinks
    AllowOverride All
    Require all granted
  </Directory>
</VirtualHost>

The docker-compose.yaml file.

version: "3.7"
services:
  webapp:
    build:
      context: .
      dockerfile: ./Dockerfile.development
    ports:
      - "8000:80"
    environment:
      - APP_ENV=development
      - APP_DEBUG=true

And the Dockerfile.development file. It copies the configuration and site content into the image and starts the web server.

FROM php:7-apache

COPY 000-default.conf /etc/apache2/sites-available/000-default.conf
COPY ./llamas/ /var/www/html/

RUN a2enmod rewrite

CMD ["apache2-foreground"]

When done, all you need to do is run the podman-compose command and your image is built.

podman-compose build

Access the running image via the docker server on port 8000, as defined in the docker-compose.yaml file, to confirm the image was built as desired.

Manage Image

You’ll need to tag the image and then push it up to the local repository.

podman tag llamas_webapp:latest bldr0cuomrepo1.dev.internal.pri:5000/llamas:v1.2
podman push bldr0cuomrepo1.dev.internal.pri:5000/llamas:v1.2

Now it’s ready to be added to the Kubernetes cluster.

GitLab Pipeline

Basically, whatever server you’re using as a GitLab Runner will need to have podman and podman-compose installed. Once that’s done, it can build images automatically. I’m also using tagging to make sure I only rebuild the image when I’m ready, rather than every time I make an update. Since it’s also a website, I can check my changes without building an image.

You’ll use the git tag command to tag the version, then git push --tags to push the tag. For example, I just updated my .gitlab-ci.yml file, which is the pipeline file, to fix the deployment. It has nothing to do with the site, so I won’t tag it and therefore the image won’t be rebuilt.
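As a self-contained sketch of that workflow (run in a throwaway repo; the version number is illustrative), the tag is what GitLab later exposes to the pipeline as CI_COMMIT_TAG:

```shell
# Create a scratch repo, commit a change, and tag it like a release.
dir=$(mktemp -d)
cd "${dir}"
git init -q
git -c user.name=test -c user.email=test@example.com commit -q --allow-empty -m "site update"
git tag v1.2
# In the real repo, the next step would be: git push --tags
git describe --tags
```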

Here’s the snippet of pipeline used to create the image. Remember, I’m using a local repository so I’m using tag and push to deploy it locally. And note that I’m also still using a docker server to build images manually hence the extra lines.

deploy-docker-job:
  tags:
    - docker
  stage: deploy-docker
  script:
    - env
    - /usr/bin/rsync -av --rsync-path=/usr/bin/rsync unixsvc@ndld1cuomdev1.home.internal.pri:/opt/static/llamas/ "${CI_PROJECT_DIR}"/llamas/
    - |
      if [[ ! -z ${CI_COMMIT_TAG} ]]
      then
        podman-compose build
        podman tag localhost/llamas_webapp:latest bldr0cuomrepo1.dev.internal.pri:5000/llamas:${CI_COMMIT_TAG}
        podman push bldr0cuomrepo1.dev.internal.pri:5000/llamas:${CI_COMMIT_TAG}
      fi
    - rm -rf "${CI_PROJECT_DIR}"/.git
    - rm -f "${CI_PROJECT_DIR}"/.gitlab-ci.yml
    - /usr/bin/rsync -av --delete --no-perms --no-owner --no-group --omit-dir-times --rsync-path=/usr/bin/rsync "${CI_PROJECT_DIR}"/ jenkins@bldr0cuomdock1.dev.internal.pri:/opt/docker/llamas/llamas/

Configure Kubernetes

We’ll be creating a DNS entry and applying five files to deploy the Llamas container and make it available for viewing.

DNS

Step one is to create the llamas.dev.internal.pri DNS CNAME.
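As a sketch, the record might look like this in a BIND zone file; pointing it at a worker node running the haproxy ingress is an assumption about this cluster’s setup:

```
; Hypothetical zone entry for the dev site.
llamas.dev.internal.pri.    IN  CNAME  bldr0cuomknode1.dev.internal.pri.
```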

Namespace

Apply the namespace.yaml file to create the llamas namespace.

apiVersion: v1
kind: Namespace
metadata:
  name: llamas

Service

Apply the service.yaml file to manage how to access the website.

apiVersion: v1
kind: Service
metadata:
  labels:
    app: llamas
  name: llamas
  namespace: llamas
spec:
  ports:
  - nodePort: 31200
    port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: llamas
  type: NodePort

Deployment

Apply the deployment.yaml file to deploy the llamas image. I set replicas to 1, which is the default, but since HPA is being applied it doesn’t really matter as HPA replaces the value. Under spec.template.spec I added the extra configurations from the PriorityClass article and the ResourceQuota article.

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: llamas
  name: llamas
  namespace: llamas
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: llamas
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: llamas
    spec:
      priorityClassName: business-essential
      containers:
      - image: bldr0cuomrepo1.dev.internal.pri:5000/llamas:v1.2
        imagePullPolicy: Always
        name: llamas
        resources:
          limits:
            cpu: "40m"
            memory: "30Mi"
          requests:
            cpu: "30m"
            memory: "20Mi"

Ingress

Now apply the ingress.yaml file to permit remote access to the website.

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: llamas
  namespace: llamas
  annotations:
    kubernetes.io/ingress.class: haproxy
spec:
  rules:
  - host: llamas.dev.internal.pri
    http:
      paths:
      - backend:
          service:
            name: llamas
            port:
              number: 80
        path: /
        pathType: Prefix
  tls:
  - hosts:
    - llamas.dev.internal.pri

Horizontal Pod Autoscaling

HPA lets you configure your application to respond to increases and decreases in how busy the site is. You define parameters that indicate a pod is getting busy, and Kubernetes reacts to that by creating new pods. Once things get less busy, Kubernetes removes pods until it reaches the minimum you’ve defined. The controller checks on a 15 second cycle, which can be adjusted via the kube-controller-manager --horizontal-pod-autoscaler-sync-period flag if you need it to respond more (or less) often.

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: llamas
  namespace: llamas
spec:
  maxReplicas: 10
  metrics:
  - resource:
      name: cpu
      target:
        averageUtilization: 50
        type: Utilization
    type: Resource
  minReplicas: 3
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: llamas

In this configuration, we’ve set the target average CPU utilization to 50%. Once applied, check the status.

$ kubectl get hpa -n llamas
NAME     REFERENCE           TARGETS         MINPODS   MAXPODS   REPLICAS   AGE
llamas   Deployment/llamas   <unknown>/50%   3         10        3          9h

Since it’s a pretty idle site, the current usage is identified as unknown. Once it starts receiving traffic, it’ll spin up more pods to address the increased requirements.
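Under the hood, the controller uses the autoscaling/v2 formula desiredReplicas = ceil(currentReplicas × currentUtilization ÷ targetUtilization). A quick sketch with hypothetical utilization numbers:

```shell
# Hypothetical numbers: 3 pods averaging 120% CPU against a 50% target.
current_replicas=3
current_utilization=120
target_utilization=50
# Integer ceiling division: ceil(3 * 120 / 50) = 8
desired=$(( (current_replicas * current_utilization + target_utilization - 1) / target_utilization ))
echo "desired replicas: ${desired}"
```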

Success!

And the site is up and running. Accessing https://llamas.dev.internal.pri gives us access to the one page website.

llamas               llamas-65785c7b99-2cxl7                                   1/1     Running   0            58s     10.42.80.15       bldr0cuomknode3.dev.internal.pri   <none>           <none>
llamas               llamas-65785c7b99-92tbz                                   1/1     Running   0            25s     10.42.31.139      bldr0cuomknode2.dev.internal.pri   <none>           <none>
llamas               llamas-65785c7b99-lgmk4                                   1/1     Running   0            42s     10.42.251.140     bldr0cuomknode1.dev.internal.pri   <none>           <none>

Posted in Computers, Docker, Kubernetes | Tagged , , , | 1 Comment

GitLab CI/CD Pipeline

Overview

This article provides details on my use of the GitLab Runners in order to deploy websites and then automatically build, tag, and push images to my local docker repository.

Runner Installation

I’ve been using Jenkins for most of my work, but as someone who’s continually learning, I also look at other tools to see how they work. In this case, the place I was working was going to install a local GitLab server, so I wanted to dig into GitLab Runners.

We’ll need to create servers for the Runners in each of the environments. In addition, for security reasons, I’ll want separate Runners that have access to my remote server in Florida.

Configuration wise, I’ll have one Runner each in Dev, QA, and Stage, and two Runners in Production: one for the local environment and one with access to the remote server. My Home environment also gets two Runners, again one for local installations and one for the remote server. What my Runners mainly do is deploy websites (only my Llamas website is set up right now) to my Tool servers and the remote server.

Each Runner server will have 2 CPUs, 4 Gigs of RAM, and 140 Gigs of disk.

Installation itself is simple enough. You retrieve the appropriate binary from the gitlab-runner site and install it.

rpm -ivh gitlab-runner_amd64.rpm

You’ll then need to register the runner with GitLab. To get the registration token, click on the ‘pancake icon’, then Admin, CI/CD, Runners, and click the Register an instance runner drop down. With that token, you can register the runner.

gitlab-runner register --url http://lnmt1cuomgitlab.internal.pri/ \
--registration-token [registration token] \
--name bldr0cuomglrunr1.dev.internal.pri \
--tag-list "development,docker" --executor shell

I do have multiple gitlab-runner servers. This one is the development one that also processes containers. Other gitlab-runner servers test code or push code to various target servers.

GitLab CI/CD File

Now within your application, you can set up your pipeline to process your project on this new gitlab-runner. You do this in the .gitlab-ci.yml file. For this example, I’m again using my Llamas band website in part because it builds containers plus pushes out to two web sites so there’s some processing that needs to be done for each step. Let’s check out this process.

Test Stage

In the Test Stage, the gitlab-runner server has various testing tools installed. In this specific case, I’m testing my php scripts to make sure they all at least pass a lint test. There are other tests I have installed or can install to test other features of my projects. Note that I use the ${CI_PROJECT_DIR} variable in every command to make sure I’m working in the right directory.

test-job:
  tags:
    - test
  stage: test
  script:
    - |
      for i in $(find "${CI_PROJECT_DIR}" -type f -name '*.php' -print)
      do
        php -l "${i}"
      done

Docker Stage

In this section, I’m building the container, retagging it for my local registry, and then pushing it there. But only if the commit’s been tagged; I only tag when I’m actually releasing a site version, so if there’s no tag for this push, the build is skipped. I do push the files out to the separate docker server regardless. I keep all binary information on the two dev servers, bldr0cuomdev1 and ndld1cuomdev1, in the /opt/static directory structure. And unlike the other stages, there is no need to clear out the .git files and directories as they aren’t part of the llamas directory so won’t end up in the container.

deploy-docker-job:
  tags:
    - docker
  stage: deploy-docker
  script:
    - env
    - /usr/bin/rsync -av --rsync-path=/usr/bin/rsync unixsvc@ndld1cuomdev1.home.internal.pri:/opt/static/llamas/ "${CI_PROJECT_DIR}"/llamas/
    - |
      if [[ ! -z ${CI_COMMIT_TAG} ]]
      then
        podman-compose build
        podman tag localhost/llamas_webapp:latest bldr0cuomrepo1.dev.internal.pri:5000/llamas:${CI_COMMIT_TAG}
        podman push bldr0cuomrepo1.dev.internal.pri:5000/llamas:${CI_COMMIT_TAG}
      fi
    - rm -rf "${CI_PROJECT_DIR}"/.git
    - rm -f "${CI_PROJECT_DIR}"/.gitlab-ci.yml
    - /usr/bin/rsync -av --delete --no-perms --no-owner --no-group --omit-dir-times --rsync-path=/usr/bin/rsync "${CI_PROJECT_DIR}"/ jenkins@bldr0cuomdock1.dev.internal.pri:/opt/docker/llamas/llamas/

Local Stage

The next stage cleans up the git and docker information and moves the website from the llamas directory down to the documentroot. Then the site is pushed out to the local web server for review.

deploy-local-job:
  tags:
    - home
  stage: deploy-local
  script:
    - /usr/bin/rsync -av --rsync-path=/usr/bin/rsync unixsvc@ndld1cuomdev1.home.internal.pri:/opt/static/llamas/ "${CI_PROJECT_DIR}"/llamas/
    - rm -f "${CI_PROJECT_DIR}"/000-default.conf
    - rm -f "${CI_PROJECT_DIR}"/docker-compose.yaml
    - rm -f "${CI_PROJECT_DIR}"/Dockerfile.development
    - rm -f "${CI_PROJECT_DIR}"/readme.md
    - rm -rf "${CI_PROJECT_DIR}"/.git
    - rm -f "${CI_PROJECT_DIR}"/.gitlab-ci.yml
    - mv "${CI_PROJECT_DIR}"/llamas/* "${CI_PROJECT_DIR}"/
    - rmdir "${CI_PROJECT_DIR}"/llamas
    - /usr/bin/rsync -av --delete --no-perms --no-owner --no-group --omit-dir-times --rsync-path=/usr/bin/rsync "${CI_PROJECT_DIR}"/ unixsvc@ndld1cuomtool11.home.internal.pri:/var/www/html/llamas/

Remote Stage

The last stage pushes the website out to my remote server. As it’s effectively the same as the local stage, there’s no need to duplicate the listing.

Pipeline

This is actually at the top of the .gitlab-ci.yml file and lists the steps involved in the pipeline build. If any of these stages fails, the process stops until it’s resolved. You can monitor the status in gitlab by going to the project and clicking on CI/CD. The most recent job and stages will be listed. Click on the stage to see the output of the task.

stages:
  - test
  - deploy-docker
  - deploy-local
  - deploy-remote

Podman Issue

Well, when running the pipeline with a change and a tag, gitlab-runner is unable to build the image. Basically, when run from GitLab, the gitlab-runner account isn’t actually logged in, so there’s an error:

$ podman-compose build
['podman', '--version', '']
using podman version: 4.2.0
podman build -t llamas_webapp -f ././Dockerfile.development .
Error: error creating tmpdir: mkdir /run/user/984: permission denied
exit code: 125

See, when someone logs in, systemd creates a runtime directory under /run/user named for the user id. But the gitlab-runner account isn’t actually logging in, so the uid 984 directory isn’t being created. I created it manually and was able to use podman-compose successfully, but after a short time Linux removed the directory, and rebooting caused it to disappear as well.

I did eventually find an article (linked below) where the person having the problem finally got an answer. Heck, I didn’t even know there was a loginctl command.

loginctl enable-linger gitlab-runner

From the man page:

Enable/disable user lingering for one or more users. If enabled for a specific user, a user manager is spawned for the user at boot and kept around after logouts. This allows users who are not logged in to run long-running services. Takes one or more user names or numeric UIDs as argument. If no argument is specified, enables/disables lingering for the user of the session of the caller.

And it worked! Now to add that to the ansible script and give it a try.
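A sketch of what that Ansible task might look like; the task wording and playbook placement are assumptions about my setup. systemd records lingering under /var/lib/systemd/linger, which keeps the task idempotent:

```yaml
# Hypothetical task for the gitlab-runner playbook.
- name: Enable lingering for the gitlab-runner account
  ansible.builtin.command: loginctl enable-linger gitlab-runner
  args:
    creates: /var/lib/systemd/linger/gitlab-runner
```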

References

Posted in CI/CD, Computers, Docker, Git, Kubernetes | Tagged , , , , | 1 Comment

ArgoCD CLI Commands

Overview

This article lists the argocd CLI commands I used to review and manage my ArgoCD installation. A lot of times finding useful commands isn’t easy, so these are the commands I used most when getting things set up.

Help

Once you log in, simply typing argocd will give you a pretty complete list of commands and flags you can use. Generally I’m checking my project and applications but other commands are available.

Available Commands:
  account     Manage account settings
  admin       Contains a set of commands useful for Argo CD administrators and requires direct Kubernetes access
  app         Manage applications
  appset      Manage ApplicationSets
  cert        Manage repository certificates and SSH known hosts entries
  cluster     Manage cluster credentials
  completion  output shell completion code for the specified shell (bash or zsh)
  context     Switch between contexts
  gpg         Manage GPG keys used for signature verification
  help        Help about any command
  login       Log in to Argo CD
  logout      Log out from Argo CD
  proj        Manage projects
  relogin     Refresh an expired authenticate token
  repo        Manage repository connection parameters
  repocreds   Manage repository connection parameters
  version     Print version information

Logging In

As noted in a prior article, there were a few issues with logging in to the cluster, and you have to log in in order to use the CLI tool. Mainly because I didn’t set up a set of certificates, I needed to use the --insecure flag to log in.

$ argocd login argocd.dev.internal.pri --insecure
WARN[0000] Failed to invoke grpc call. Use flag --grpc-web in grpc calls. To avoid this warning message, use flag --grpc-web.
Username: admin
Password:
'admin:login' logged in successfully
Context 'argocd.dev.internal.pri' updated

And I’m in. From here I can now manage my ArgoCD projects. Since I use gitops to manage my Kubernetes clusters, the commands I use here generally are getting information and not setting up the project. See my github repo for those configurations.

Project Information

First off, you need to know what projects are installed. There is always a default project. The two projects I have installed are the blue and green projects.

First, the available commands.

Available Commands:
  add-destination          Add project destination
  add-orphaned-ignore      Add a resource to orphaned ignore list
  add-signature-key        Add GnuPG signature key to project
  add-source               Add project source repository
  allow-cluster-resource   Adds a cluster-scoped API resource to the allow list and removes it from deny list
  allow-namespace-resource Removes a namespaced API resource from the deny list or add a namespaced API resource to the allow list
  create                   Create a project
  delete                   Delete project
  deny-cluster-resource    Removes a cluster-scoped API resource from the allow list and adds it to deny list
  deny-namespace-resource  Adds a namespaced API resource to the deny list or removes a namespaced API resource from the allow list
  edit                     Edit project
  get                      Get project details
  list                     List projects
  remove-destination       Remove project destination
  remove-orphaned-ignore   Remove a resource from orphaned ignore list
  remove-signature-key     Remove GnuPG signature key from project
  remove-source            Remove project source repository
  role                     Manage a project's roles
  set                      Set project parameters
  windows                  Manage a project's sync windows

Then you can use the list command to view the projects.

$ argocd proj list
NAME          DESCRIPTION                                 DESTINATIONS    SOURCES                                                    CLUSTER-RESOURCE-WHITELIST  NAMESPACE-RESOURCE-BLACKLIST  SIGNATURE-KEYS  ORPHANED-RESOURCES
default                                                   *,*             *                                                          */*                         <none>                        <none>          disabled
llamas-blue   Project to install the llamas band website  4 destinations  git@lnmt1cuomgitlab.internal.pri/external-unix/gitops.git  */*                         <none>                        <none>          disabled
llamas-green  Project to install the llamas band website  4 destinations  git@lnmt1cuomgitlab.internal.pri/external-unix/gitops.git  */*                         <none>                        <none>          disabled

If I wanted to check out the details of a project, I’d run the get command:

$ argocd proj get llamas-blue
Name:                        llamas-blue
Description:                 Project to install the llamas band website
Destinations:                https://kubernetes.default.svc,llamas-blue
                             https://cabo0cuomvip1.qa.internal.pri:6443,llamas-blue
                             https://tato0cuomvip1.stage.internal.pri:6443,llamas-blue
                             https://lnmt1cuomvip1.internal.pri:6443,llamas-blue
Repositories:                git@lnmt1cuomgitlab.internal.pri/external-unix/gitops.git
Scoped Repositories:         <none>
Allowed Cluster Resources:   */*
Scoped Clusters:             <none>
Denied Namespaced Resources: <none>
Signature keys:              <none>
Orphaned Resources:          disabled

If you compare the two, the only real difference is that the get command lists the remote K8S clusters rather than just indicating there are 4 destinations.

Application Information

The main thing I’m checking is the application status. Let’s see the options first.

Available Commands:
  actions         Manage Resource actions
  create          Create an application
  delete          Delete an application
  delete-resource Delete resource in an application
  diff            Perform a diff against the target and live state.
  edit            Edit application
  get             Get application details
  history         Show application deployment history
  list            List applications
  logs            Get logs of application pods
  manifests       Print manifests of an application
  patch           Patch application
  patch-resource  Patch resource in an application
  resources       List resource of application
  rollback        Rollback application to a previous deployed version by History ID, omitted will Rollback to the previous version
  set             Set application parameters
  sync            Sync an application to its target state
  terminate-op    Terminate running operation of an application
  unset           Unset application parameters
  wait            Wait for an application to reach a synced and healthy state

For that I’d run the following list command:

$ argocd app list
NAME                       CLUSTER                                        NAMESPACE     PROJECT       STATUS  HEALTH   SYNCPOLICY  CONDITIONS  REPO                                                       PATH                 TARGET
argocd/llamas-blue-dev     https://kubernetes.default.svc                 llamas-blue   llamas-blue   Synced  Healthy  Auto        <none>      git@lnmt1cuomgitlab.internal.pri/external-unix/gitops.git  dev/llamas-blue/     dev
argocd/llamas-blue-prod    https://lnmt1cuomvip1.internal.pri:6443        llamas-blue   llamas-blue   Synced  Healthy  Auto        <none>      git@lnmt1cuomgitlab.internal.pri/external-unix/gitops.git  prod/llamas-blue/    main
argocd/llamas-blue-qa      https://cabo0cuomvip1.qa.internal.pri:6443     llamas-blue   llamas-blue   Synced  Healthy  Auto        <none>      git@lnmt1cuomgitlab.internal.pri/external-unix/gitops.git  qa/llamas-blue/      main
argocd/llamas-blue-stage   https://tato0cuomvip1.stage.internal.pri:6443  llamas-blue   llamas-blue   Synced  Healthy  Auto        <none>      git@lnmt1cuomgitlab.internal.pri/external-unix/gitops.git  stage/llamas-blue/   main
argocd/llamas-green-dev    https://kubernetes.default.svc                 llamas-green  llamas-green  Synced  Healthy  Auto        <none>      git@lnmt1cuomgitlab.internal.pri/external-unix/gitops.git  dev/llamas-green/    dev
argocd/llamas-green-prod   https://lnmt1cuomvip1.internal.pri:6443        llamas-green  llamas-green  Synced  Healthy  Auto        <none>      git@lnmt1cuomgitlab.internal.pri/external-unix/gitops.git  prod/llamas-green/   main
argocd/llamas-green-qa     https://cabo0cuomvip1.qa.internal.pri:6443     llamas-green  llamas-green  Synced  Healthy  Auto        <none>      git@lnmt1cuomgitlab.internal.pri/external-unix/gitops.git  qa/llamas-green/     main
argocd/llamas-green-stage  https://tato0cuomvip1.stage.internal.pri:6443  llamas-green  llamas-green  Synced  Healthy  Auto        <none>      git@lnmt1cuomgitlab.internal.pri/external-unix/gitops.git  stage/llamas-green/  main

There is a lot of information here, in part because my ArgoCD instance is connected to and manages applications on four Kubernetes clusters, so you’ll see the blue and green applications four times each.

Getting details though provides a ton of information.

$ argocd app get argocd/llamas-blue-dev
Name:               argocd/llamas-blue-dev
Project:            llamas-blue
Server:             https://kubernetes.default.svc
Namespace:          llamas-blue
URL:                https://argocd.dev.internal.pri/applications/llamas-blue-dev
Repo:               git@lnmt1cuomgitlab.internal.pri/external-unix/gitops.git
Target:             dev
Path:               dev/llamas-blue/
SyncWindow:         Sync Allowed
Sync Policy:        Automated
Sync Status:        Synced to dev (1ef2090)
Health Status:      Healthy

GROUP                      KIND                     NAMESPACE    NAME                        STATUS   HEALTH   HOOK  MESSAGE
                           ResourceQuota            llamas-blue  llamas-rq                   Synced                  resourcequota/llamas-rq unchanged
                           LimitRange               llamas-blue  llamas-lr                   Synced                  limitrange/llamas-lr unchanged
                           ServiceAccount           llamas-blue  cschelin-admin              Synced                  serviceaccount/cschelin-admin unchanged
                           ServiceAccount           llamas-blue  cschelin                    Synced                  serviceaccount/cschelin unchanged
rbac.authorization.k8s.io  ClusterRoleBinding       llamas-blue  cschelin-view-llamas-blue   Running  Synced         clusterrolebinding.rbac.authorization.k8s.io/cschelin-view-llamas-blue reconciled. clusterrolebinding.rbac.authorization.k8s.io/cschelin-view-llamas-blue unchanged
rbac.authorization.k8s.io  ClusterRoleBinding       llamas-blue  cschelin-admin-llamas-blue  Running  Synced         clusterrolebinding.rbac.authorization.k8s.io/cschelin-admin-llamas-blue reconciled. clusterrolebinding.rbac.authorization.k8s.io/cschelin-admin-llamas-blue unchanged
                           Service                  llamas-blue  llamas                      Synced   Healthy        service/llamas unchanged
autoscaling                HorizontalPodAutoscaler  llamas-blue  llamas                      Synced   Healthy        horizontalpodautoscaler.autoscaling/llamas unchanged
networking.k8s.io          Ingress                  llamas-blue  llamas                      Synced   Healthy        ingress.networking.k8s.io/llamas configured
argoproj.io                Rollout                  llamas-blue  llamas                      Synced   Healthy        rollout.argoproj.io/llamas unchanged
rbac.authorization.k8s.io  ClusterRoleBinding                    cschelin-admin-llamas-blue  Synced
rbac.authorization.k8s.io  ClusterRoleBinding                    cschelin-view-llamas-blue   Synced



Posted in CI/CD, Computers | Tagged , , | 1 Comment