Overview
This article provides instructions on how to build my llamas container and then how to deploy it into my Kubernetes cluster. In addition, a Horizontal Pod Autoscaling configuration is applied.
Container Build
The llamas website is automatically installed in /opt/docker/llamas/llamas by the GitLab CI/CD pipeline whenever I make a change to the site. The Docker configuration files for building the image are already in place.
The 000-default.conf file configures the web server.
<VirtualHost *:80>
    ServerAdmin cschelin@localhost
    DocumentRoot /var/www/html
    <Directory /var/www>
        Options Indexes FollowSymLinks
        AllowOverride All
        Require all granted
    </Directory>
</VirtualHost>
The docker-compose.yaml file.
version: "3.7"
services:
webapp:
build:
context: .
dockerfile: ./Dockerfile.development
ports:
- "8000:80"
environment:
- APP_ENV=development
- APP_DEBUG=true
And the Dockerfile.development file. This copies the configuration and site content into the web server image and starts Apache.
FROM php:7-apache
COPY 000-default.conf /etc/apache2/sites-available/000-default.conf
COPY ./llamas/ /var/www/html/
RUN a2enmod rewrite
CMD ["apache2-foreground"]
When done, all you need to do is run the podman-compose command and your image is built.
podman-compose build
Access the running container on the Docker server at port 8000, as defined in the docker-compose.yaml file, to confirm the image was built as desired.
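For a quick spot check from the command line, something like the following should work; the hostname is my Docker build server from later in this article, so substitute your own:

podman-compose up -d
# expect an HTTP 200 response from Apache
curl -I http://bldr0cuomdock1.dev.internal.pri:8000/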
Manage Image
You’ll need to tag the image and then push it up to the local repository.
podman tag llamas_webapp:latest bldr0cuomrepo1.dev.internal.pri:5000/llamas:v1.2
podman push bldr0cuomrepo1.dev.internal.pri:5000/llamas:v1.2
Now it’s ready to be added to the Kubernetes cluster.
GitLab Pipeline
Whatever server you’re using as a GitLab runner will need to have podman and podman-compose installed. Once that’s done, images can be built automatically. I’m also using tagging to make sure I only rebuild the image when I’m ready, rather than on every update. Since it’s also a website, I can check the status without building an image.
You’ll use the git tag command to tag the version, then use git push --tags to push the tag. For example, I just updated my .gitlab-ci.yml file, which is the pipeline file, to fix the deployment. It has nothing to do with the site itself, so I won’t tag the commit and therefore the image won’t be rebuilt.
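As a quick sketch, the tagging flow looks like this; the version string is just an example:

# tag the current commit and push the tag so the pipeline rebuilds the image
git tag v1.2
git push --tags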
Here’s the snippet of the pipeline used to create the image. Remember, I’m using a local repository, so I’m using tag and push to deploy it locally. Note that I’m also still using a Docker server to build images manually, hence the extra lines.
deploy-docker-job:
  tags:
    - docker
  stage: deploy-docker
  script:
    - env
    - /usr/bin/rsync -av --rsync-path=/usr/bin/rsync unixsvc@ndld1cuomdev1.home.internal.pri:/opt/static/llamas/ "${CI_PROJECT_DIR}"/llamas/
    - |
      if [[ ! -z ${CI_COMMIT_TAG} ]]
      then
        podman-compose build
        podman tag localhost/llamas_webapp:latest bldr0cuomrepo1.dev.internal.pri:5000/llamas:${CI_COMMIT_TAG}
        podman push bldr0cuomrepo1.dev.internal.pri:5000/llamas:${CI_COMMIT_TAG}
      fi
    - rm -rf "${CI_PROJECT_DIR}"/.git
    - rm -f "${CI_PROJECT_DIR}"/.gitlab-ci.yml
    - /usr/bin/rsync -av --delete --no-perms --no-owner --no-group --omit-dir-times --rsync-path=/usr/bin/rsync "${CI_PROJECT_DIR}"/ jenkins@bldr0cuomdock1.dev.internal.pri:/opt/docker/llamas/llamas/
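For completeness, a job like this assumes a matching stage is declared at the top of the .gitlab-ci.yml file; a minimal sketch:

stages:
  - deploy-docker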
Configure Kubernetes
We’ll be creating a DNS entry and applying five files, which will deploy the Llamas container and make it available for viewing.
DNS
Step one is to create the llamas.dev.internal.pri DNS CNAME.
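How the record gets created depends on your DNS server, but once it’s in place you can confirm it resolves before moving on:

# should return the CNAME target and its address
dig +short llamas.dev.internal.pri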
Namespace
Apply the namespace.yaml file to create the llamas namespace.
apiVersion: v1
kind: Namespace
metadata:
  name: llamas
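Applying the manifest is a single command, and the same pattern applies to each of the remaining files in this article:

kubectl apply -f namespace.yaml
kubectl get namespace llamas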
Service
Apply the service.yaml file to define how the website is accessed.
apiVersion: v1
kind: Service
metadata:
  labels:
    app: llamas
  name: llamas
  namespace: llamas
spec:
  ports:
  - nodePort: 31200
    port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: llamas
  type: NodePort
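Since this is a NodePort service, once the deployment below is running it will answer on port 31200 on any worker node; the node name here is one of mine, so substitute your own:

kubectl apply -f service.yaml
curl -I http://bldr0cuomknode1.dev.internal.pri:31200/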
Deployment
Apply the deployment.yaml file to deploy the llamas image. I set replicas to 1, which is the default, but since HPA is being applied it doesn’t really matter, as HPA replaces the value. Under spec.template.spec I added the extra configurations from the PriorityClass article and the ResourceQuota article.
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: llamas
  name: llamas
  namespace: llamas
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: llamas
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: llamas
    spec:
      containers:
      - image: bldr0cuomrepo1.dev.internal.pri:5000/llamas:v1.2
        imagePullPolicy: Always
        name: llamas
        resources:
          limits:
            cpu: "40m"
            memory: "30Mi"
          requests:
            cpu: "30m"
            memory: "20Mi"
      priorityClassName: business-essential
Ingress
Now apply the ingress.yaml file to permit remote access to the website.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: llamas
  namespace: llamas
  annotations:
    kubernetes.io/ingress.class: haproxy
spec:
  rules:
  - host: llamas.dev.internal.pri
    http:
      paths:
      - backend:
          service:
            name: llamas
            port:
              number: 80
        path: /
        pathType: Prefix
  tls:
  - hosts:
    - llamas.dev.internal.pri
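Once applied, and with the DNS entry from earlier resolving, the site should answer through the ingress. The -k flag is used here because this internal setup doesn’t present a publicly trusted certificate:

kubectl apply -f ingress.yaml
kubectl -n llamas get ingress
curl -k -I https://llamas.dev.internal.pri/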
Horizontal Pod Autoscaling
HPA lets you configure your application to be responsive to increases and decreases in how busy the site is. You define parameters that indicate a pod is getting busy, and Kubernetes reacts by creating new pods. Once things get less busy, Kubernetes removes pods until it reaches the minimum you’ve defined. Checks run on a 15 second cycle, which can be adjusted via the kube-controller-manager --horizontal-pod-autoscaler-sync-period flag if you need it to respond quicker (or less often).
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: llamas
  namespace: llamas
spec:
  maxReplicas: 10
  metrics:
  - resource:
      name: cpu
      target:
        averageUtilization: 50
        type: Utilization
    type: Resource
  minReplicas: 3
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: llamas
In this configuration, we’ve set the target average CPU utilization to 50%. Once applied, check the status.
$ kubectl get hpa -n llamas
NAME     REFERENCE           TARGETS         MINPODS   MAXPODS   REPLICAS   AGE
llamas   Deployment/llamas   <unknown>/50%   3         10        3          9h
Since it’s a pretty idle site, the current usage is identified as unknown. Once it starts receiving traffic, it’ll spin up more pods to address the increased requirements.
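If you don’t want to wait for real traffic, a throwaway load generator shows the scale-up in action. This is a sketch assuming a busybox image is pullable from the cluster and the default cluster.local DNS domain:

# hammer the service from inside the cluster, then watch the HPA react
kubectl -n llamas run load-gen --rm -it --image=busybox --restart=Never -- \
  /bin/sh -c 'while true; do wget -q -O- http://llamas.llamas.svc.cluster.local/ >/dev/null; done'
kubectl -n llamas get hpa llamas --watch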
Success!
And the site is up and running. Browsing to https://llamas.dev.internal.pri brings up the one-page website, and the three pods are spread across the worker nodes:
NAMESPACE   NAME                      READY   STATUS    RESTARTS   AGE   IP              NODE                               NOMINATED NODE   READINESS GATES
llamas      llamas-65785c7b99-2cxl7   1/1     Running   0          58s   10.42.80.15     bldr0cuomknode3.dev.internal.pri   <none>           <none>
llamas      llamas-65785c7b99-92tbz   1/1     Running   0          25s   10.42.31.139    bldr0cuomknode2.dev.internal.pri   <none>           <none>
llamas      llamas-65785c7b99-lgmk4   1/1     Running   0          42s   10.42.251.140   bldr0cuomknode1.dev.internal.pri   <none>           <none>