Ansible Automation Platform Installation

Overview

In order to use AWX, the upstream project of Ansible Automation Platform (formerly Ansible Tower), we need to have a working Kubernetes cluster. This article provides instructions on how to install and use AWX.

Installation

Before doing the installation, you’ll either need to install the PostgreSQL server or simply create the postgres user account on the storage server (NFS in my case). Otherwise the postgres container won’t start without mucking with the directory permissions.

# groupadd -g 26 postgres
# useradd -c "PostgreSQL Server" -d /var/lib/pgsql -s /bin/bash \
  -u 26 -g 26 -G postgres -m postgres
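
With the account in place, make sure the NFS export the postgres PV will use is owned by it; the postgres container typically runs as UID 26 and otherwise can’t write to the share. A minimal sketch on the NFS server, assuming the export path used below:

# mkdir -p /srv/nfs4/registry
# chown postgres:postgres /srv/nfs4/registry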

The installation process for AWX is pretty simple. In the Gitops repo under [dev/qa/prod]/cluster/awx, you’ll apply the registry-pv.yaml first. This registers the NFS export at /srv/nfs4/registry with the cluster; the postgres container will store its data there.

apiVersion: v1
kind: PersistentVolume
metadata:
  name: registry-pv
spec:
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 20Gi
  nfs:
    path: /srv/nfs4/registry
    server: 192.168.101.170
  persistentVolumeReclaimPolicy: Retain
  volumeMode: Filesystem

And it worked: the PVC was allocated and the pods started.

Once the registry PV has been created, you might want to update the AWX tag in the kustomization.yaml file. The one in the Gitops repo is at 2.16.1. As a note, AWX will upgrade itself when you kick it off, so as long as you’re not too far behind the next version, you can probably apply it as defined. Note the ‘-k’ flag, which tells kubectl this is a kustomization directory and not a “normal” YAML file.
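
For reference, a minimal kustomization.yaml along the lines the awx-operator documentation suggests; the ref and tag here are assumptions matching the 2.16.1 version mentioned above, and awx-demo.yaml holds the AWX custom resource:

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: awx
resources:
  # Pull the operator manifests at a pinned release.
  - github.com/ansible/awx-operator/config/default?ref=2.16.1
  # The AWX custom resource (awx-demo) lives alongside this file.
  - awx-demo.yaml
images:
  # Pin the operator image to the same release.
  - name: quay.io/ansible/awx-operator
    newTag: 2.16.1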

$ kubectl apply -k .
namespace/awx created
customresourcedefinition.apiextensions.k8s.io/awxbackups.awx.ansible.com created
customresourcedefinition.apiextensions.k8s.io/awxmeshingresses.awx.ansible.com created
customresourcedefinition.apiextensions.k8s.io/awxrestores.awx.ansible.com created
customresourcedefinition.apiextensions.k8s.io/awxs.awx.ansible.com created
serviceaccount/awx-operator-controller-manager created
role.rbac.authorization.k8s.io/awx-operator-awx-manager-role created
role.rbac.authorization.k8s.io/awx-operator-leader-election-role created
clusterrole.rbac.authorization.k8s.io/awx-operator-metrics-reader created
clusterrole.rbac.authorization.k8s.io/awx-operator-proxy-role created
rolebinding.rbac.authorization.k8s.io/awx-operator-awx-manager-rolebinding created
rolebinding.rbac.authorization.k8s.io/awx-operator-leader-election-rolebinding created
clusterrolebinding.rbac.authorization.k8s.io/awx-operator-proxy-rolebinding created
configmap/awx-operator-awx-manager-config created
service/awx-operator-controller-manager-metrics-service created
deployment.apps/awx-operator-controller-manager created
awx.awx.ansible.com/awx-demo created

Postgres Database and Storage

The postgres container has a default configuration that uses attached storage (a PV and PVC) for the database. This is an 8Gi slice. The problem is that it creates a [share]/data/pgdata directory holding the postgres database, which means you have to ensure you have a unique PV for each postgres container.
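
If more than one postgres-backed application uses the same storage server, giving each instance its own PV avoids that pgdata collision. A minimal sketch modeled on the registry-pv above; the export path here is hypothetical:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: awx-postgres-pv
spec:
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 8Gi
  nfs:
    path: /srv/nfs4/awx-postgres
    server: 192.168.101.170
  persistentVolumeReclaimPolicy: Retain
  volumeMode: Filesystem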

Of course, if you’re using an external postgres server, make sure you make the appropriate updates to the postgres configuration (the awx-operator reads the connection details from a secret).
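
A minimal sketch of that configuration secret, following the awx-operator’s <name>-postgres-configuration convention; the host, credentials, and database name are placeholders:

apiVersion: v1
kind: Secret
metadata:
  name: awx-demo-postgres-configuration
  namespace: awx
stringData:
  host: postgres.dev.internal.pri   # placeholder external server
  port: "5432"
  database: awx
  username: awx
  password: changeme                # placeholder; keep this out of the Gitops repo
  sslmode: prefer
  type: unmanaged                   # tells the operator not to deploy its own postgres
type: Opaque

The AWX resource then references it via postgres_configuration_secret: awx-demo-postgres-configuration.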

Ingress Access

In addition to the pods, we need to create a DNS entry plus an ingress route.

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: awx-demo
  namespace: awx
  annotations:
    kubernetes.io/ingress.class: haproxy
spec:
  rules:
  - host: awx.dev.internal.pri
    http:
      paths:
      - backend:
          service:
            name: awx-demo-service
            port:
              number: 80
        path: /
        pathType: Prefix
  tls:
  - hosts:
    - awx.dev.internal.pri

Running!

And now AWX should be up and running. It may do an upgrade first, so you may have to wait a few minutes.

$ kubectl get pods -n awx
NAME                                               READY   STATUS    RESTARTS   AGE
awx-demo-postgres-13-0                             1/1     Running   0          24m
awx-demo-task-857c895bf9-rt2h8                     4/4     Running   0          23m
awx-demo-web-6c4df77799-6mn9p                      3/3     Running   0          21m
awx-operator-controller-manager-6544864fcd-tbpbm   2/2     Running   0          2d13h

Local Repository

One of the issues with being on a high-speed WiFi internet connection is that we don’t want to keep pulling images from the internet. I have a local Docker Registry where I identify images that Kubernetes or OpenShift uses, pull them to my docker server, tag them locally, push them up to the local registry, and then update the various configurations, either before adding them to the cluster or after they’ve been installed. The main container we want to have locally is the automation one (awx-ee:latest) as it’s created every time a job runs.

Here is the list of images found while parsing through the awx namespace. Note that we also update the imagePullPolicy to Always as a security measure; see the sketch after this list for pointing the AWX resource at the local copies.

  • quay.io/ansible/awx:24.3.1
  • quay.io/ansible/awx-ee:24.3.1
  • quay.io/ansible/awx-ee:latest
  • quay.io/ansible/awx-operator:2.16.1
  • kubebuilder/kube-rbac-proxy:v0.15.0
  • docker.io/redis:7
  • quay.io/sclorg/postgresql-15-c9s:latest
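
Once the images are mirrored, the AWX resource can be pointed at the local copies. A minimal sketch, assuming the default awx-demo resource and the registry host from the push example that follows; image, image_version, image_pull_policy, ee_images, and control_plane_ee_image are the awx-operator’s image override fields:

apiVersion: awx.ansible.com/v1beta1
kind: AWX
metadata:
  name: awx-demo
  namespace: awx
spec:
  service_type: ClusterIP
  # Pull fresh copies each time (see the note above).
  image_pull_policy: Always
  # Web/task image, mirrored to the local registry.
  image: bldr0cuomrepo1.dev.internal.pri:5000/ansible/awx
  image_version: 24.3.1
  # Execution environment image created for each job run.
  ee_images:
    - name: awx-ee
      image: bldr0cuomrepo1.dev.internal.pri:5000/ansible/awx-ee:latest
  control_plane_ee_image: bldr0cuomrepo1.dev.internal.pri:5000/ansible/awx-ee:latest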

For the local docker registry, I’ll pull the image, tag it locally, then push it to the local server, like so:

# docker pull quay.io/ansible/awx-ee:latest
latest: Pulling from ansible/awx-ee
a0a261c93a1a: Already exists
aa356d862a3f: Pull complete
ba1bfe7741c6: Pull complete
60791ff3f035: Pull complete
af540118867f: Pull complete
907a534f67c1: Pull complete
d6e203406e7e: Pull complete
e752d2a39a5b: Pull complete
08538fc74157: Pull complete
1b5fcec6379e: Pull complete
3924e18106c9: Pull complete
660e2ba9814c: Pull complete
4f4fb700ef54: Pull complete
8bd6dd298579: Pull complete
278858ab53b1: Pull complete
9c8ccd66c6ea: Pull complete
bfcd72bbf16f: Pull complete
Digest: sha256:53a523b6257abc3bf142244f245114e74af8eac17065fce3eaca7b7d9627fb0d
Status: Downloaded newer image for quay.io/ansible/awx-ee:latest
quay.io/ansible/awx-ee:latest
# docker tag quay.io/ansible/awx-ee:latest bldr0cuomrepo1.dev.internal.pri:5000/ansible/awx-ee:latest
# docker push bldr0cuomrepo1.dev.internal.pri:5000/ansible/awx-ee:latest
The push refers to repository [bldr0cuomrepo1.dev.internal.pri:5000/ansible/awx-ee]
5487431a4ee2: Layer already exists
5f70bf18a086: Layer already exists
f0a66f6d7663: Pushed
08832c5d2e19: Pushed
a6dcafa3a2bc: Pushed
2f3c22398896: Pushed
9f7095066a3e: Pushed
a39187c28f90: Pushed
e0f7e9d6d56c: Pushed
6c9e2e4049d1: Pushed
1c32be0b7423: Pushed
fad1f4d67115: Pushed
2755eb3d7410: Pushed
a3f08e56aa71: Pushed
7823a8ff9e75: Pushed
e679d25c8a79: Pushed
8a2b10aa0981: Mounted from sclorg/postgresql-15-c9s
latest: digest: sha256:242456e2ac473887c3ac385ff82cdb04574043ab768b70c1f597bf3d14e83e99 size: 4083

Admin Password

You’ll have to get the admin password; run the following command to retrieve it. Once retrieved, log in to https://awx.dev.internal.pri (or whatever hostname you’re using) as admin and use the password. When you log in, the password is cleared, so make sure you save it somewhere.

$ kubectl get secret awx-demo-admin-password -o jsonpath="{.data.password}" -n awx | base64 --decode; echo
G4XfwRsfk9MycbxnS9cE8CDfqSKIuNMW

If you forget the admin password or simply want to reset it, log into the web container and reset it there.

$ kubectl exec awx-demo-web-6c4df77799-6mn9p -n awx --stdin --tty -- /bin/bash
bash-5.1$ awx-manage changepassword admin
Changing password for user 'admin'
Password:
Password (again):
Password changed successfully for user 'admin'
bash-5.1$

Troubleshooting

Persistent Volumes

The one issue I had was that the persistent volume claim (PVC) failed to find appropriate storage.

$ kubectl describe pvc postgres-13-awx-demo-postgres-13-0 -n awx
Name:          postgres-13-awx-demo-postgres-13-0
Namespace:     awx
StorageClass:
Status:        Pending
Volume:
Labels:        app.kubernetes.io/component=database
               app.kubernetes.io/instance=postgres-13-awx-demo
               app.kubernetes.io/managed-by=awx-operator
               app.kubernetes.io/name=postgres-13
Annotations:   <none>
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:
Access Modes:
VolumeMode:    Filesystem
Used By:       awx-demo-postgres-13-0
Events:
  Type    Reason         Age                        From                         Message
  ----    ------         ----                       ----                         -------
  Normal  FailedBinding  3m17s (x14344 over 2d11h)  persistentvolume-controller  no persistent volumes available for this claim and no storage class is set

However, I have a persistent volume.

$ kubectl get pv
NAME         CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
storage-pv   100Gi      RWX            Retain           Available                                   165d

It took just a little digging but I figured out the problem.

$ kubectl get pvc postgres-13-awx-demo-postgres-13-0 -n awx -o yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  creationTimestamp: "2023-09-11T02:03:34Z"
  finalizers:
  - kubernetes.io/pvc-protection
  labels:
    app.kubernetes.io/component: database
    app.kubernetes.io/instance: postgres-13-awx-demo
    app.kubernetes.io/managed-by: awx-operator
    app.kubernetes.io/name: postgres-13
  name: postgres-13-awx-demo-postgres-13-0
  namespace: awx
  resourceVersion: "54733870"
  uid: 1574b79e-1e17-4825-bc25-d70ac4021af7
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 8Gi
  volumeMode: Filesystem
status:
  phase: Pending

Note the spec.accessModes setting is ReadWriteOnce; however, the storage-pv persistent volume only offers ReadWriteMany (RWX), so the claim can never bind to it. This is why the registry-pv shown earlier is created with ReadWriteOnce.
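
A quick way to spot such a mismatch is to pull just the access modes from both objects; a minimal check using the names above (output formatting varies a bit between kubectl versions):

$ kubectl get pv storage-pv -o jsonpath='{.spec.accessModes}{"\n"}'
["ReadWriteMany"]
$ kubectl get pvc postgres-13-awx-demo-postgres-13-0 -n awx -o jsonpath='{.spec.accessModes}{"\n"}'
["ReadWriteOnce"]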

Conclusion

When you get to the website and log in, you’re ready to add your projects!
