Ansible Automation Platform Installation

Overview

In order to use AWX, the upstream project for Ansible Automation Platform (formerly Ansible Tower), we need a working Kubernetes cluster. This article provides instructions on how to install and use AWX.

Installation

Before doing the installation, you’ll either need to install the PostgreSQL server or simply create the postgres user account; otherwise the postgres container won’t start.

# groupadd -g 26 postgres
# useradd -c "PostgreSQL Server" -d /var/lib/pgsql -s /bin/bash \
  -u 26 -g 26 -G postgres -m postgres

The installation process for AWX is pretty simple. In the GitOps repo under [dev/qa/prod]/cluster/awx, apply registry-pv.yaml first. This creates a persistent volume backed by the NFS server’s /srv/nfs4/registry export, which is where the postgres container will store its data.

apiVersion: v1
kind: PersistentVolume
metadata:
  name: registry-pv
spec:
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 20Gi
  nfs:
    path: /srv/nfs4/registry
    server: 192.168.101.170
  persistentVolumeReclaimPolicy: Retain
  volumeMode: Filesystem

And it worked: the PVC was allocated and the pods started.

Once the registry PV has been created, you might want to update the AWX tag in the kustomization.yaml file. The one in the GitOps repo is at 2.16.1; however, when you kick it off, AWX will upgrade itself, so as long as you’re not too far behind the next version, you can probably apply it as defined. Note that it’s ‘-k’ that tells kubectl this is a kustomization directory and not a “normal” YAML file.
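For reference, the kustomization.yaml typically follows the upstream AWX operator example; the github ref, file name, and tag below are assumptions based on the 2.16.1 version mentioned above:

```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: awx
resources:
  # Operator manifests pinned to a release tag
  - github.com/ansible/awx-operator/config/default?ref=2.16.1
  # The AWX custom resource (creates awx.awx.ansible.com/awx-demo)
  - awx-demo.yaml
images:
  - name: quay.io/ansible/awx-operator
    newTag: 2.16.1
```

Bumping the tag means changing both the `?ref=` and `newTag` values together so the manifests and the operator image stay in sync.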

$ kubectl apply -k .
namespace/awx created
customresourcedefinition.apiextensions.k8s.io/awxbackups.awx.ansible.com created
customresourcedefinition.apiextensions.k8s.io/awxmeshingresses.awx.ansible.com created
customresourcedefinition.apiextensions.k8s.io/awxrestores.awx.ansible.com created
customresourcedefinition.apiextensions.k8s.io/awxs.awx.ansible.com created
serviceaccount/awx-operator-controller-manager created
role.rbac.authorization.k8s.io/awx-operator-awx-manager-role created
role.rbac.authorization.k8s.io/awx-operator-leader-election-role created
clusterrole.rbac.authorization.k8s.io/awx-operator-metrics-reader created
clusterrole.rbac.authorization.k8s.io/awx-operator-proxy-role created
rolebinding.rbac.authorization.k8s.io/awx-operator-awx-manager-rolebinding created
rolebinding.rbac.authorization.k8s.io/awx-operator-leader-election-rolebinding created
clusterrolebinding.rbac.authorization.k8s.io/awx-operator-proxy-rolebinding created
configmap/awx-operator-awx-manager-config created
service/awx-operator-controller-manager-metrics-service created
deployment.apps/awx-operator-controller-manager created
awx.awx.ansible.com/awx-demo created

Postgres Database and Storage

The postgres container has a default configuration that uses attached storage (a PV and PVC) for the database files; this is an 8Gi slice. The problem is that it creates a [share]/data/pgdata directory holding the postgres database, which means you have to ensure you have a unique PV for each postgres container.
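One way to handle this is a dedicated PV per AWX instance, each pointing at its own NFS export. The name, path, and server below are illustrative, following the registry-pv.yaml pattern above:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: awx-demo-postgres-pv
spec:
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 8Gi
  nfs:
    path: /srv/nfs4/awx-demo-postgres   # unique export per postgres instance
    server: 192.168.101.170
  persistentVolumeReclaimPolicy: Retain
  volumeMode: Filesystem
```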

Of course if you’re using an external postgres server, make sure you make the appropriate updates to the configmap.

Ingress Access

In addition to the pods, we need to create a DNS entry plus an ingress route.

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: awx-demo
  namespace: awx
  annotations:
    kubernetes.io/ingress.class: haproxy
spec:
  rules:
  - host: awx.dev.internal.pri
    http:
      paths:
      - backend:
          service:
            name: awx-demo-service
            port:
              number: 80
        path: /
        pathType: Prefix
  tls:
  - hosts:
    - awx.dev.internal.pri

Running!

And now AWX should be up and running. It may perform an upgrade, so you may have to wait a few minutes.

$ kubectl get pods -n awx
NAME                                               READY   STATUS    RESTARTS   AGE
awx-demo-postgres-13-0                             1/1     Running   0          24m
awx-demo-task-857c895bf9-rt2h8                     4/4     Running   0          23m
awx-demo-web-6c4df77799-6mn9p                      3/3     Running   0          21m
awx-operator-controller-manager-6544864fcd-tbpbm   2/2     Running   0          2d13h

Local Repository

One of the issues with being on a high-speed WiFi internet connection is that we don’t want to keep pulling images from the internet. I have a local Docker Registry where I identify the images Kubernetes uses, pull them to my docker server, tag them for the local registry, push them up, and then update the various configurations, either before adding them to the cluster or after they’ve been installed. The main container we want to have locally is the automation (execution environment) one, as it’s created every time a job runs.
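The pull/tag/push cycle can be sketched as follows. The registry host is a placeholder for your local registry, and the docker commands are shown commented since they require the docker daemon and network access:

```shell
# Hypothetical local registry; substitute your own.
REGISTRY=registry.internal.pri:5000
SRC=quay.io/ansible/awx-ee:24.3.1

# Derive the local name by swapping the upstream registry prefix
# (strips "quay.io/") for the local registry host.
LOCAL="$REGISTRY/${SRC#*/}"
echo "$LOCAL"    # registry.internal.pri:5000/ansible/awx-ee:24.3.1

# docker pull "$SRC"
# docker tag "$SRC" "$LOCAL"
# docker push "$LOCAL"
```

The same derived name is what goes into the deployment or AWX resource configuration afterwards.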

Here is a list of the deployments and statefulsets that reference images. Note we also update the imagePullPolicy to Always as a security measure.

# kubectl get deploy -n awx
NAME                              READY   UP-TO-DATE   AVAILABLE   AGE
awx-demo-task                     1/1     1            1           20h
awx-demo-web                      1/1     1            1           20h
awx-operator-controller-manager   1/1     1            1           20h
# kubectl get deploy awx-demo-task -n awx -o yaml | grep -i image
        image: docker.io/redis:7
        imagePullPolicy: IfNotPresent
        image: quay.io/ansible/awx:24.3.1
        imagePullPolicy: IfNotPresent
        image: quay.io/ansible/awx-ee:24.3.1
        imagePullPolicy: IfNotPresent
        image: quay.io/ansible/awx:24.3.1
        imagePullPolicy: IfNotPresent
        image: quay.io/ansible/awx:24.3.1
        imagePullPolicy: IfNotPresent
        image: quay.io/ansible/awx-ee:24.3.1
        imagePullPolicy: IfNotPresent
# kubectl get deploy awx-demo-web -n awx -o yaml | grep -i image
        image: docker.io/redis:7
        imagePullPolicy: IfNotPresent
        image: quay.io/ansible/awx:24.3.1
        imagePullPolicy: IfNotPresent
        image: quay.io/ansible/awx:24.3.1
        imagePullPolicy: IfNotPresent
# kubectl get deploy -n awx awx-operator-controller-manager -o yaml | grep -i image
        image: gcr.io/kubebuilder/kube-rbac-proxy:v0.15.0
        imagePullPolicy: IfNotPresent
        image: quay.io/ansible/awx-operator:2.16.1
        imagePullPolicy: IfNotPresent
# kubectl get statefulset awx-demo-postgres-15 -n awx -o yaml | grep -i image
        image: quay.io/sclorg/postgresql-15-c9s:latest
        imagePullPolicy: IfNotPresent
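Since the operator manages these deployments, editing them directly tends to get reverted; the supported route is to set the image overrides on the AWX custom resource. The field names below come from the AWX operator documentation, and the registry host is a placeholder for your local registry:

```yaml
apiVersion: awx.ansible.com/v1beta1
kind: AWX
metadata:
  name: awx-demo
  namespace: awx
spec:
  image: registry.internal.pri:5000/ansible/awx
  image_version: 24.3.1
  image_pull_policy: Always
  redis_image: registry.internal.pri:5000/redis
  redis_image_version: "7"
  postgres_image: registry.internal.pri:5000/sclorg/postgresql-15-c9s
  postgres_image_version: latest
  control_plane_ee_image: registry.internal.pri:5000/ansible/awx-ee:24.3.1
```

Apply the updated resource and the operator reconciles the deployments and statefulset to use the local images.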


Admin Password

You’ll have to get the admin password; run the following command to retrieve it. Once retrieved, log in to https://awx.dev.internal.pri (or whatever hostname you’re using) as admin using that password. When you log in, the password is cleared, so make sure you save it somewhere first.

$ kubectl get secret awx-demo-admin-password -o jsonpath="{.data.password}" -n awx | base64 --decode; echo
G4XfwRsfk9MycbxnS9cE8CDfqSKIuNMW

If you forget the admin password or simply want to reset it, log into the web container and reset it there.

$ kubectl exec awx-demo-web-6c4df77799-6mn9p -n awx --stdin --tty -- /bin/bash
bash-5.1$ awx-manage changepassword admin
Changing password for user 'admin'
Password:
Password (again):
Password changed successfully for user 'admin'
bash-5.1$

Troubleshooting

Persistent Volumes

The one issue I had was the persistent volume claim (PVC) failed to find appropriate storage.

$ kubectl describe pvc postgres-13-awx-demo-postgres-13-0 -n awx
Name:          postgres-13-awx-demo-postgres-13-0
Namespace:     awx
StorageClass:
Status:        Pending
Volume:
Labels:        app.kubernetes.io/component=database
               app.kubernetes.io/instance=postgres-13-awx-demo
               app.kubernetes.io/managed-by=awx-operator
               app.kubernetes.io/name=postgres-13
Annotations:   <none>
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:
Access Modes:
VolumeMode:    Filesystem
Used By:       awx-demo-postgres-13-0
Events:
  Type    Reason         Age                        From                         Message
  ----    ------         ----                       ----                         -------
  Normal  FailedBinding  3m17s (x14344 over 2d11h)  persistentvolume-controller  no persistent volumes available for this claim and no storage class is set

However, I do have a persistent volume.

$ kubectl get pv
NAME         CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
storage-pv   100Gi      RWX            Retain           Available                                   165d

It took just a little digging but I figured out the problem.

$ kubectl get pvc postgres-13-awx-demo-postgres-13-0 -n awx -o yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  creationTimestamp: "2023-09-11T02:03:34Z"
  finalizers:
  - kubernetes.io/pvc-protection
  labels:
    app.kubernetes.io/component: database
    app.kubernetes.io/instance: postgres-13-awx-demo
    app.kubernetes.io/managed-by: awx-operator
    app.kubernetes.io/name: postgres-13
  name: postgres-13-awx-demo-postgres-13-0
  namespace: awx
  resourceVersion: "54733870"
  uid: 1574b79e-1e17-4825-bc25-d70ac4021af7
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 8Gi
  volumeMode: Filesystem
status:
  phase: Pending

Note that the PVC’s spec.accessModes is ReadWriteOnce, while the storage-pv persistent volume is configured as ReadWriteMany (RWX). A claim only binds to a PV whose access modes include every mode the claim requests, so this claim can never match storage-pv and stays Pending.

Conclusion

When you get to the website and log in, you’re ready to add your projects!
