Pan Fried Rainbow Trout

Basically I do this when I find a recipe online that I like and want to make sure I can find it again 🙂

Ingredients

  • 2 Rainbow Trout fillets, boned and with skin.
  • 1/2 teaspoon kosher salt
  • 1/4 teaspoon black pepper
  • 1/4 teaspoon garlic granules
  • 1/2 cup butter (half a stick)
  • 1/2 tablespoon fresh lemon juice
  • 2 tablespoons minced parsley

Instructions

  • Rinse off the trout and pat dry with a paper towel.
  • Sprinkle the skin side with half of the mixture of the salt and pepper. I find doing it as a pinch makes sure it’s spread evenly.
  • Heat the butter in a 12 inch or so nonstick skillet. When the butter starts bubbling, swirl it around to make sure it’s spread evenly around the pan.
  • Add the two fillets skin side down and sprinkle the rest of the salt and pepper mixture on the fleshy side. Press on the fish to make sure the skin touches the pan.
  • Cook time is about 3 minutes.
  • When the skin side is done, sprinkle the garlic on the fillets and flip them over. Cook for about 3 minutes.
  • After removing the fish, add the lemon juice to the pan and stir. Then drizzle the sauce over the fish.
  • Sprinkle with parsley and serve.

Quick and Dirty Kubernetes

Overview

At times you want to quickly throw up a Kubernetes cluster for a quick test. While I do have several Kubernetes clusters in my homelab, once in a while you want something disposable to run a test against or to follow a tutorial to get familiar with this or that tool.

This time I’m using a tool called kind. See the References below to find the link to the site.

Installation

First off you’ll need to have a docker server in order to install the various tools. Next you’ll have to install the kind tool.

# go install sigs.k8s.io/kind@v0.17.0
go: downloading sigs.k8s.io/kind v0.17.0
go: downloading github.com/spf13/pflag v1.0.5
go: downloading github.com/spf13/cobra v1.4.0
go: downloading github.com/pkg/errors v0.9.1
go: downloading github.com/alessio/shellescape v1.4.1
go: downloading github.com/mattn/go-isatty v0.0.14
go: downloading golang.org/x/sys v0.0.0-20210630005230-0f9fa26af87c
go: downloading github.com/pelletier/go-toml v1.9.4
go: downloading github.com/BurntSushi/toml v1.0.0
go: downloading github.com/evanphx/json-patch/v5 v5.6.0
go: downloading gopkg.in/yaml.v3 v3.0.1
go: downloading sigs.k8s.io/yaml v1.3.0
go: downloading gopkg.in/yaml.v2 v2.4.0
go: downloading github.com/google/safetext v0.0.0-20220905092116-b49f7bc46da2

Once kind is installed, start up your test cluster. Note that go install puts the kind binary in the go/bin directory under your home directory, so either add that to your PATH or use the full path in the command.

# go/bin/kind create cluster --name nginx-ingress --image kindest/node:v1.23.5
Creating cluster "nginx-ingress" ...
 ✓ Ensuring node image (kindest/node:v1.23.5) 🖼
 ✓ Preparing nodes 📦
 ✓ Writing configuration 📜
 ✓ Starting control-plane 🕹️
 ✓ Installing CNI 🔌
 ✓ Installing StorageClass 💾
Set kubectl context to "kind-nginx-ingress"
You can now use your cluster with:

kubectl cluster-info --context kind-nginx-ingress

Thanks for using kind! 😊

See if the cluster is up.

# kubectl get nodes
NAME                          STATUS   ROLES                  AGE     VERSION
nginx-ingress-control-plane   Ready    control-plane,master   3m21s   v1.23.5

And it’s ready to be used. Pretty interesting.
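When you're done with the test, tearing the cluster back down is just as quick. A sketch, assuming the same binary location as above:

# go/bin/kind delete cluster --name nginx-ingress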

References

  • https://kind.sigs.k8s.io/

Kubernetes Issues

Overview

This article lists a couple of issues that occurred while I was building this environment. They don't fit into any of the specific articles, mainly because they were likely caused by my testing rather than by anything in the installation itself. The final articles should be accurate and just work, but during testing these came up and I had to track down fixes.

Terminating Pods

I had some pods that got stuck terminating for a long period of time. There were a couple of suggestions, but the two that seemed to work best were to either force the deletion or clear any finalizers on the pod. A finalizer is basically a task that needs to run; the pod waits for that task to complete successfully before it's removed.
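If you go the finalizer route, clearing them on a stuck pod is a one-line patch. A sketch, shown here with the stuck llamas pod from the example below:

$ kubectl patch pod llamas-f8448d86c-br4z8 -n llamas -p '{"metadata":{"finalizers":null}}'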

The main solution I used for this was to force the delete. Make sure you know what node the pod is on before deleting it.

$ kubectl get pods -llamas -o wide
NAMESPACE            NAME                                                        READY   STATUS        RESTARTS      AGE    IP                NODE                                 NOMINATED NODE   READINESS GATES
llamas               llamas-f8448d86c-br4z8                                      1/1     Terminating   0             3d1h   10.42.232.197     tato0cuomknode3.stage.internal.pri   <none>           <none>
$ kubectl delete pod llamas-f8448d86c-br4z8 -n llamas --grace-period=0 --force
Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
pod "llamas-f8448d86c-br4z8" force deleted

As the warning notes, this can be an issue: the underlying resource the pod was waiting on could also be stuck. The main thing I did was restart kubelet on the node the pod was running on, which is why you want the node name before forcing the deletion. Otherwise the safest bet is to restart kubelet on all worker nodes, which is no fun if there are more than a few. And if you don't have privileged access to the nodes, you'll have to get with your sysadmin team.
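Restarting kubelet is just a systemd restart on the node in question; a sketch, assuming you have SSH and sudo access to that node:

$ ssh tato0cuomknode3.stage.internal.pri
$ sudo systemctl restart kubelet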

Application Deletion

In working with projects in ArgoCD, I mucked up one of the projects so badly it couldn't be deleted. This was due to some process that needed to complete but had apparently been removed outside the normal deletion process. This time I removed the finalizer and the application was simply deleted.

$ kubectl get application -A
NAMESPACE   NAME     SYNC STATUS   HEALTH STATUS
argocd      llamas   Unknown       Unknown
$ kubectl patch application/llamas -n argocd --type json --patch='[ { "op": "remove", "path": "/metadata/finalizers" } ]'
application.argoproj.io/llamas patched
$ kubectl get application -A
No resources found



Llamas Band and Continuous Delivery

Overview

In this article, I’ll be providing details on how to configure ArgoCD for the Llamas Band project including deploying to the other sites.

Continuous Delivery

With ArgoCD installed and the Llamas container CI pipeline completed, we’ll use this configuration to ensure any changes that are made to the Llamas website are automatically deployed when the container image is updated or any other configuration changes are made.

Project

In my homelab there really isn't a requirement for projects to be created; however, in a more professional environment you'll create an ArgoCD project for your application.

In the project, you're defining which Kubernetes clusters the applications in the project have access to. I have four environments, and since this one project spans all four clusters, we need to configure the access under spec.destinations. The server URLs are the same ones registered when you added the clusters with the argocd CLI.
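If you want to double-check which cluster URLs ArgoCD already knows about (the same values spec.destinations has to reference), the argocd CLI can list them:

argocd cluster list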

Since this is an ArgoCD configuration, it goes in the argocd namespace.

apiVersion: argoproj.io/v1alpha1
kind: AppProject
metadata:
  name: llamas
  namespace: argocd
spec:
  clusterResourceWhitelist:
  - group: '*'
    kind: '*'
  description: Project to install the llamas band website
  destinations:
  - namespace: 'llamas'
    server: https://kubernetes.default.svc
  - namespace: 'llamas'
    server: https://cabo0cuomvip1.qa.internal.pri:6443
  - namespace: 'llamas'
    server: https://tato0cuomvip1.stage.internal.pri:6443
  - namespace: 'llamas'
    server: https://lnmt1cuomvip1.internal.pri:6443
  sourceRepos:
  - git@lnmt1cuomgitlab.internal.pri:external-unix/gitops.git

Application Configuration

When configuring the application, since I have four sites, I'm creating each one with a site suffix, llamas-dev for example. The project defines which clusters each application can deploy to under the spec.destinations data set.

Much of the configuration should be easy enough to understand.

  • name – I used the site type to extend the name
  • namespace – It’s an ArgoCD configuration file so argocd
  • project – The name of the project (see above)
  • repoURL – The URL where the repo resides. I’m using an ssh like access method so git@ for this
  • targetRevision – The branch to monitor
  • path – The path to the files that belong to the llamas website
  • recurse – I’m using directories to manage files so I want argocd to check all subdirectories for changes
  • destination.server – One of the spec.destinations from the project
  • destination.namespace – The namespace for the project

The ignoreDifferences block is needed because I'm using Horizontal Pod Autoscaling (HPA) to manage replicas. HPA updates the deployment's replica count, and there could be a gap where ArgoCD sees the difference and terminates pods before it catches up with the deployment. With this block we simply ignore that field to prevent the conflict.

You can test this easily by setting the deployment's spec.replicas to 1 (the default) and then applying the HPA configuration. When you check the deployment afterwards, you'll see it's been set to 3.
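One quick way to watch that happen is to query the replica count on the llamas deployment directly (deployment and namespace as defined later in this series):

kubectl get deployment llamas -n llamas -o jsonpath='{.spec.replicas}'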

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: llamas-dev
  namespace: argocd
spec:
  project: llamas
  source:
    repoURL: git@lnmt1cuomgitlab.internal.pri:external-unix/gitops.git
    targetRevision: dev
    path: dev/llamas/
    directory:
      recurse: true
  destination:
    server: https://kubernetes.default.svc
    namespace: llamas
  syncPolicy:
    automated:
      prune: false
      selfHeal: false
  ignoreDifferences:
    - group: apps
      kind: Deployment
      name: llamas
      namespace: llamas
      jqPathExpressions:
        - .spec.replicas

And when checking the status.

$ kubectl get application -A
NAMESPACE   NAME         SYNC STATUS   HEALTH STATUS
argocd      llamas-dev   Synced        Healthy

Remote Clusters

Of course we also want the Dev ArgoCD instance to manage the installations at the other sites rather than installing ArgoCD in every cluster. For that, ArgoCD needs permission to apply the configuration files on those clusters.

For the Llamas site, we'll need a second, slightly different ArgoCD Application file. The only differences are metadata.name, where I use qa instead of dev; spec.destination.server, which is the API server of the cabo cluster; spec.source.targetRevision, main instead of dev; and of course spec.source.path, the path to the Llamas files for that site. The rest of the information is the same.

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: llamas-qa
  namespace: argocd
spec:
  project: llamas
  source:
    repoURL: git@lnmt1cuomgitlab.internal.pri:external-unix/gitops.git
    targetRevision: main
    path: qa/llamas/
    directory:
      recurse: true
  destination:
    server: https://cabo0cuomvip1.qa.internal.pri:6443
    namespace: llamas
  syncPolicy:
    automated:
      prune: false
      selfHeal: false
  ignoreDifferences:
    - group: apps
      kind: Deployment
      name: llamas
      namespace: llamas
      jqPathExpressions:
        - .spec.replicas

Next, a blue/green deployment for the llamas website. What fun!


Llamas Band Website

Overview

This article provides instructions on how to build my llamas container and then deploy it into my Kubernetes cluster. In addition, a Horizontal Pod Autoscaling configuration is applied.

Container Build

The llamas website is automatically installed in /opt/docker/llamas/llamas using the GitLab CI/CD pipeline whenever I make a change to the site. I have the docker configuration files for building the image already created.

The 000-default.conf file. This file configures the web server.

<VirtualHost *:80>
  ServerAdmin cschelin@localhost
  DocumentRoot /var/www/html

  <Directory /var/www>
    Options Indexes FollowSymLinks
    AllowOverride All
    Require all granted
  </Directory>
</VirtualHost>

The docker-compose.yaml file.

version: "3.7"
services:
  webapp:
    build:
      context: .
      dockerfile: ./Dockerfile.development
    ports:
      - "8000:80"
    environment:
      - APP_ENV=development
      - APP_DEBUG=true

And the Dockerfile.development file. This copies the configuration to the webserver and starts it.

FROM php:7-apache

COPY 000-default.conf /etc/apache2/sites-available/000-default.conf
COPY ./llamas/ /var/www/html/

RUN a2enmod rewrite

CMD ["apache2-foreground"]

When done, all you need to do is run the compose command (I'm using podman-compose) and your image is built.

podman-compose build

Start the container and access it via the docker server on port 8000, as defined in the docker-compose.yaml file, to confirm the image was built as desired.
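A minimal check, run on the docker server itself (adjust the hostname if you're checking remotely):

podman-compose up -d
curl http://localhost:8000/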

Manage Image

You’ll need to tag the image and then push it up to the local repository.

podman tag llamas_webapp:latest bldr0cuomrepo1.dev.internal.pri:5000/llamas:v1.2
podman push bldr0cuomrepo1.dev.internal.pri:5000/llamas:v1.2

Now it’s ready to be added to the Kubernetes cluster.

GitLab Pipeline

Basically, whatever server you're using as a GitLab runner will need podman and podman-compose installed. Once that's done, you can automatically build images. I'm also using tagging to make sure I only rebuild the image when I'm ready rather than on every update. Since it's also a website, I can check the content without building an image.

You'll use the git tag command to tag the version, then git push --tags to push the tag up. For example, I just updated my .gitlab-ci.yml file, which is the pipeline file, to fix the deployment. It has nothing to do with the site itself, so I won't tag it and therefore the image won't be rebuilt.
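For example, tagging a hypothetical v1.3 release would look like this:

git tag v1.3
git push --tags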

Here’s the snippet of pipeline used to create the image. Remember, I’m using a local repository so I’m using tag and push to deploy it locally. And note that I’m also still using a docker server to build images manually hence the extra lines.

deploy-docker-job:
  tags:
    - docker
  stage: deploy-docker
  script:
    - env
    - /usr/bin/rsync -av --rsync-path=/usr/bin/rsync unixsvc@ndld1cuomdev1.home.internal.pri:/opt/static/llamas/ "${CI_PROJECT_DIR}"/llamas/
    - |
      if [[ ! -z ${CI_COMMIT_TAG} ]]
      then
        podman-compose build
        podman tag localhost/llamas_webapp:latest bldr0cuomrepo1.dev.internal.pri:5000/llamas:${CI_COMMIT_TAG}
        podman push bldr0cuomrepo1.dev.internal.pri:5000/llamas:${CI_COMMIT_TAG}
      fi
    - rm -rf "${CI_PROJECT_DIR}"/.git
    - rm -f "${CI_PROJECT_DIR}"/.gitlab-ci.yml
    - /usr/bin/rsync -av --delete --no-perms --no-owner --no-group --omit-dir-times --rsync-path=/usr/bin/rsync "${CI_PROJECT_DIR}"/ jenkins@bldr0cuomdock1.dev.internal.pri:/opt/docker/llamas/llamas/

Configure Kubernetes

We’ll be creating a DNS entry and applying five files which will deploy the Llamas container making it available for viewing.

DNS

Step one is to create the llamas.dev.internal.pri DNS CNAME.

Namespace

Apply the namespace.yaml file to create the llamas namespace.

apiVersion: v1
kind: Namespace
metadata:
  name: llamas

Service

Apply the service.yaml file to manage how to access the website.

apiVersion: v1
kind: Service
metadata:
  labels:
    app: llamas
  name: llamas
  namespace: llamas
spec:
  ports:
  - nodePort: 31200
    port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: llamas
  type: NodePort

Deployment

Apply the deployment.yaml file to deploy the llamas image. I set replicas to 1, which is the default, but since HPA is being applied it doesn't really matter as HPA replaces the value. Under spec.template.spec I added the extra configuration from the PriorityClass article and the ResourceQuota article.

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: llamas
  name: llamas
  namespace: llamas
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: llamas
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: llamas
    spec:
      priorityClassName: business-essential
      containers:
      - image: bldr0cuomrepo1.dev.internal.pri:5000/llamas:v1.2
        imagePullPolicy: Always
        name: llamas
        resources:
          limits:
            cpu: "40m"
            memory: "30Mi"
          requests:
            cpu: "30m"
            memory: "20Mi"

Ingress

Now apply the ingress.yaml file to permit access to the websites remotely.

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: llamas
  namespace: llamas
  annotations:
    kubernetes.io/ingress.class: haproxy
spec:
  rules:
  - host: llamas.dev.internal.pri
    http:
      paths:
      - backend:
          service:
            name: llamas
            port:
              number: 80
        path: /
        pathType: Prefix
  tls:
  - hosts:
    - llamas.dev.internal.pri

Horizontal Pod Autoscaling

HPA lets you configure your application to respond to increases and decreases in how busy the site is. You define the parameters that indicate a pod is getting busy, and Kubernetes reacts by creating new pods. Once things quiet down, Kubernetes removes pods until it reaches the minimum you've defined. Checks run on a 15 second cycle, which can be adjusted in the kube-controller-manager if you need it to respond more (or less) often.

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: llamas
  namespace: llamas
spec:
  maxReplicas: 10
  metrics:
  - resource:
      name: cpu
      target:
        averageUtilization: 50
        type: Utilization
    type: Resource
  minReplicas: 3
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: llamas

In this configuration, we've set the CPU utilization target to 50%. Once applied, check the status.

$ kubectl get hpa -n llamas
NAME     REFERENCE           TARGETS         MINPODS   MAXPODS   REPLICAS   AGE
llamas   Deployment/llamas   <unknown>/50%   3         10        3          9h

Since it’s a pretty idle site, the current usage is identified as unknown. Once it starts receiving traffic, it’ll spin up more pods to address the increased requirements.
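If you want to see the scaling in action without waiting for real traffic, a throwaway load generator works; a sketch, assuming the cluster can pull a busybox image and using the llamas service created earlier:

kubectl run load-generator -n llamas --image=busybox --restart=Never -- /bin/sh -c "while true; do wget -q -O- http://llamas.llamas.svc.cluster.local; done"
kubectl get hpa -n llamas -w

Delete the load-generator pod when you're done and the replica count will drop back toward the minimum.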

Success!

And the site is up and running. Accessing https://llamas.dev.internal.pri gives us access to the one page website.

llamas               llamas-65785c7b99-2cxl7                                   1/1     Running   0            58s     10.42.80.15       bldr0cuomknode3.dev.internal.pri   <none>           <none>
llamas               llamas-65785c7b99-92tbz                                   1/1     Running   0            25s     10.42.31.139      bldr0cuomknode2.dev.internal.pri   <none>           <none>
llamas               llamas-65785c7b99-lgmk4                                   1/1     Running   0            42s     10.42.251.140     bldr0cuomknode1.dev.internal.pri   <none>           <none>

References


GitLab CI/CD Pipeline

Overview

This article provides details on my use of GitLab Runners to deploy websites and to automatically build, tag, and push images to my local docker repository.

Runner Installation

I've been using Jenkins for most of my work, but as someone who's continually learning, I also look at other tools to see how they work. In this case, the place I was working was going to install a local GitLab server, so I wanted to dig into GitLab Runners.

We’ll need to create servers for the Runners in each of the environments. In addition, for security reasons, I’ll want separate Runners that have access to my remote server in Florida.

Configuration-wise, I'll have one Runner each in Dev, QA, and Stage, and two Runners in Production: one for the local environment and one with access to the remote server. My Home environment also gets two Runners, again one for local installations and one for the remote server. What my Runners will do is mainly deploy websites (only my Llamas website is set up right now) to my Tool servers and the remote server.

Each Runner server will have 2 CPUs, 4 Gigs of RAM, and 140 Gigs of disk.

Installation itself is simple enough. You retrieve the appropriate binary from the gitlab-runner site and install it.

rpm -ivh gitlab-runner_amd64.rpm

You will then need to register the runner with GitLab. To get the registration token, click the 'pancake icon', then Admin, CI/CD, Runners, and open the Register an instance runner drop down. With that token, you can register the runner.

gitlab-runner register --url http://lnmt1cuomgitlab.internal.pri/ \
--registration-token [registration token] \
--name bldr0cuomglrunr1.dev.internal.pri \
--tag-list "development,docker" --executor shell

I do have multiple gitlab-runner servers. This one is the development one that also processes containers. Other gitlab-runner servers test code or push code to various target servers.

GitLab CI/CD File

Now within your application, you can set up your pipeline to process your project on this new gitlab-runner. You do this in the .gitlab-ci.yml file. For this example, I’m again using my Llamas band website in part because it builds containers plus pushes out to two web sites so there’s some processing that needs to be done for each step. Let’s check out this process.

Test Stage

In the Test Stage, the gitlab-runner server has various testing tools installed. In this specific case, I’m testing my php scripts to make sure they all at least pass a lint test. There are other tests I have installed or can install to test other features of my projects. Note that I will use the CI_PROJECT_DIR for every command to make sure I’m working in the right directory.

test-job:
  tags:
    - test
  stage: test
  script:
    - |
      for i in $(find "${CI_PROJECT_DIR}" -type f -name "*.php" -print)
      do
        php -l ${i}
      done

Docker Stage

In this section, I'm building the container, retagging it for my local registry, and then pushing it to the registry, but only if the commit has been tagged. I only tag when I'm actually releasing a site version, so if there's no tag for this push, the image build is skipped. I do still push the files out to the separate docker server though. I keep all binary content on the two dev servers, bldr0cuomdev1 and ndld1cuomdev1, in the /opt/static directory structure. And unlike the other stages, there's no need to clear out the .git files and directories before the build since they aren't under the llamas directory and so won't end up in the container.

deploy-docker-job:
  tags:
    - docker
  stage: deploy-docker
  script:
    - env
    - /usr/bin/rsync -av --rsync-path=/usr/bin/rsync unixsvc@ndld1cuomdev1.home.internal.pri:/opt/static/llamas/ "${CI_PROJECT_DIR}"/llamas/
    - |
      if [[ ! -z ${CI_COMMIT_TAG} ]]
      then
        podman-compose build
        podman tag localhost/llamas_webapp:latest bldr0cuomrepo1.dev.internal.pri:5000/llamas:${CI_COMMIT_TAG}
        podman push bldr0cuomrepo1.dev.internal.pri:5000/llamas:${CI_COMMIT_TAG}
      fi
    - rm -rf "${CI_PROJECT_DIR}"/.git
    - rm -f "${CI_PROJECT_DIR}"/.gitlab-ci.yml
    - /usr/bin/rsync -av --delete --no-perms --no-owner --no-group --omit-dir-times --rsync-path=/usr/bin/rsync "${CI_PROJECT_DIR}"/ jenkins@bldr0cuomdock1.dev.internal.pri:/opt/docker/llamas/llamas/

Local Stage

The next stage cleans up the git and docker information and moves the website from the llamas directory down to the documentroot. Then the site is pushed out to the local web server for review.

deploy-local-job:
  tags:
    - home
  stage: deploy-local
  script:
    - /usr/bin/rsync -av --rsync-path=/usr/bin/rsync unixsvc@ndld1cuomdev1.home.internal.pri:/opt/static/llamas/ "${CI_PROJECT_DIR}"/llamas/
    - rm -f "${CI_PROJECT_DIR}"/000-default.conf
    - rm -f "${CI_PROJECT_DIR}"/docker-compose.yaml
    - rm -f "${CI_PROJECT_DIR}"/Dockerfile.development
    - rm -f "${CI_PROJECT_DIR}"/readme.md
    - rm -rf "${CI_PROJECT_DIR}"/.git
    - rm -f "${CI_PROJECT_DIR}"/.gitlab-ci.yml
    - mv "${CI_PROJECT_DIR}"/llamas/* "${CI_PROJECT_DIR}"/
    - rmdir "${CI_PROJECT_DIR}"/llamas
    - /usr/bin/rsync -av --delete --no-perms --no-owner --no-group --omit-dir-times --rsync-path=/usr/bin/rsync "${CI_PROJECT_DIR}"/ unixsvc@ndld1cuomtool11.home.internal.pri:/var/www/html/llamas/

Remote Stage

The last stage pushes the website out to my remote server. As it’s effectively the same as the local stage, there’s no need to duplicate the listing.

Pipeline

This is actually at the top of the .gitlab-ci.yml file and lists the stages in the pipeline. If any stage fails, the process stops until it's resolved. You can monitor the status in GitLab by going to the project and clicking on CI/CD. The most recent job and its stages will be listed; click on a stage to see the output of the task.

stages:
  - test
  - deploy-docker
  - deploy-local
  - deploy-remote

Podman Issue

Well, when running the pipeline with a change and a tag, gitlab-runner was unable to build the image. Basically, when run from GitLab, the gitlab-runner account isn't actually logged in, so there's an error:

$ podman-compose build
['podman', '--version', '']
using podman version: 4.2.0
podman build -t llamas_webapp -f ././Dockerfile.development .
Error: error creating tmpdir: mkdir /run/user/984: permission denied
exit code: 125

See, when someone logs in, a runtime directory is created under /run/user with their user id. But the gitlab-runner account isn't actually logging in, so the /run/user/984 directory isn't being created. I created it manually and podman-compose worked, but after a short time the directory is cleaned up by the system, and a reboot removes it as well.

I did eventually find an article (linked below) where the person having the problem finally got an answer. Heck, I didn’t even know there was a loginctl command.

loginctl enable-linger gitlab-runner

From the man page:

Enable/disable user lingering for one or more users. If enabled for a specific user, a user manager is spawned for the user at boot and kept around after logouts. This allows users who are not logged in to run long-running services. Takes one or more user names or numeric UIDs as argument. If no argument is specified, enables/disables lingering for the user of the session of the caller.

And it worked! Now to add that to the ansible script and give it a try.

References


Continuous Delivery With ArgoCD

Overview

This article provides instructions on installing and configuring ArgoCD in Kubernetes.

Installation

The main motivation here is that OpenShift uses ArgoCD, so we should be familiar with how ArgoCD works.

Images

Installation-wise, it's pretty easy, but there are a couple of changes you'll need to make. First, review the install.yaml file to see what images will be loaded. Bring them into the local repository following those instructions, then update the install.yaml file to point to the local repository.

Next, make sure the imagePullPolicy is set to Always for security reasons, which is one of the reasons we host the images locally so we're not constantly pulling from the internet.

Private Repository

In order to access our private GitLab server and private projects, we'll want to create an SSH public/private key pair. Simply press Enter at the passphrase prompt for passwordless access. Note you'll want to save the keypair somewhere safe in case you need to use it again. For ArgoCD, you'll be creating repository entries for each project.

ssh-keygen -t rsa

Next, in gitlab, access Settings and SSH Keys and add your new public key. I called mine ArgoCD so I knew which one to manage.

You'll also need to add an entry in ArgoCD under Settings, Repository Certificates and Known Hosts. Since I have several repos on my GitLab server, I simply logged into my bldr0cuomgit1 server, copied the single line for the GitLab server from the known_hosts file, clicked the Add SSH Known Hosts button, and added it to the list. If you don't do this, you'll get a known_hosts error when ArgoCD tries to connect to the repo. You can click the Skip server verification box when creating a connection to bypass this, however it's not secure.

Next in ArgoCD, in the Settings, Repositories section, you’ll be creating a connection to the repository for the project. Enter the following information for my llamas installation.

Name: GitOps Repo
Project: gitops
URL: git@lnmt1cuomgitlab.internal.pri:external-unix/gitops.git
SSH private key data: [ssh private key]

Click Connect and you should get a ‘Successful‘ response for the repo.

TLS Update

Per the troubleshooting section below, update the argocd-cmd-params-cm ConfigMap to add server.insecure: "true" under data. This ensures ArgoCD works with the haproxy-ingress controller.

Installation

Once done, create the argocd namespace file, argocd.yaml then apply it.

apiVersion: v1
kind: Namespace
metadata:
  name: argocd

kubectl apply -f argocd.yaml

Now that the namespace is created, install ArgoCD by applying the install.yaml file.

kubectl create -f install.yaml

It’ll take a few minutes for everything to start but once up, it’s all available.

In order to access the User Interface, you’ll need to create an argocd.dev.internal.pri alias to the HAProxy Load Balancer. In addition, you’ll need to apply the ingress.yaml file so you can access the UI.

kubectl apply -f ingress.yaml

Command Line Interface

Make sure you pull the argocd binary file which gives you CLI access to the argocd server.
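If you don't already have it, one common way to grab the CLI is straight from the upstream releases (Linux path shown; adjust for your platform):

curl -sSL -o /usr/local/bin/argocd https://github.com/argoproj/argo-cd/releases/latest/download/argocd-linux-amd64
chmod +x /usr/local/bin/argocd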

Troubleshooting

After getting the haproxy-ingress controller installed and running, adding an ingress route to ArgoCD was failing. It applied successfully, however I was getting the following error from the argocd.dev.internal.pri website I'd configured:

The page isn’t redirecting properly

A quick search found the TLS Issue mentioned in the bug report (see References) which sent me over to the Multiple Ingress Objects page. At the end of the linked block of information was this paragraph:

The API server should then be run with TLS disabled. Edit the argocd-server deployment to add the --insecure flag to the argocd-server command, or simply set server.insecure: "true" in the argocd-cmd-params-cm ConfigMap

And it referred me to the ConfigMap page and I made the following update on the fly (we’ll need to fix it in the GitOps repo though).

kubectl edit configmap argocd-cmd-params-cm -n argocd

Which brought up a very minimal configmap.

apiVersion: v1
kind: ConfigMap
metadata:
  labels:
    app.kubernetes.io/name: argocd-cmd-params-cm
    app.kubernetes.io/part-of: argocd
  name: argocd-cmd-params-cm
  namespace: argocd

I made the following change and restarted the argocd-server deployment, and now I have access both to the UI and to the argocd CLI. Make sure true is in quotes though or you'll get an error.

apiVersion: v1
data:
  server.insecure: "true"
kind: ConfigMap
metadata:
  labels:
    app.kubernetes.io/name: argocd-cmd-params-cm
    app.kubernetes.io/part-of: argocd
  name: argocd-cmd-params-cm
  namespace: argocd
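To pick up the ConfigMap change, restart the argocd-server deployment; a sketch, assuming the standard deployment name from install.yaml:

kubectl -n argocd rollout restart deployment argocd-server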

External Clusters

I want to be able to use ArgoCD on the main cluster to push updates to remote clusters, basically managing the llamas band website from one location. To do that, I need to connect the clusters together: log in to the main ArgoCD instance with the command line tool, argocd, make sure the working area has access to all the clusters in the .kube/config file, and finally use the argocd CLI to add the clusters.

The main thing I find with many articles is the assumption of information. While I’ve provided links to where I found information, here I provide extra information that may have been left out of the linked article.

Login to ArgoCD

Logging into the main dev argocd environment is pretty easy in general, but I had a few problems and only got logged in with some help. The main thing was figuring out which flags were needed, and understanding what I was actually trying to log in to.

First off, I had to realize that I should be logging into the argocd ingress URL, in my case argocd.dev.internal.pri. I still had a few issues and ultimately hit the following error:

$ argocd login argocd.dev.internal.pri --skip-test-tls --grpc-web
Username: admin
Password:
FATA[0003] rpc error: code = Unknown desc = Post "https://argocd.dev.internal.pri:443/session.SessionService/Create": x509: certificate is valid for ingress.local, not argocd.dev.internal.pri

I posted up a call for help as I was having trouble locating a solution and eventually someone took pity and provided the answer: the --insecure flag. Since I was already using --skip-test-tls, I didn't even think to check whether there was such a flag. And it worked.

$ argocd login argocd.dev.internal.pri --skip-test-tls --grpc-web --insecure
Username: admin
Password:
'admin:login' logged in successfully
Context 'argocd.dev.internal.pri' updated

Merge Kubeconfig

Next, in order for argocd to have sufficient access to the other clusters, you need to merge configuration files to a single config. You might want to create a service account with admin privileges to separate it away from the kubernetes-admin account. Since this is my homelab, for now I’m simply using the kubernetes-admin account.

One problem though: in the .kube/config file, the authinfo has the same name for each cluster, kubernetes-admin. Since it's just a label and the context is kubernetes-admin@bldr, you can change each label to get a unique authinfo entry. Back up all the files before working on them, of course.

$ kubectl config get-contexts
CURRENT   NAME                    CLUSTER   AUTHINFO           NAMESPACE
          kubernetes-admin@bldr   bldr      kubernetes-admin
          kubernetes-admin@cabo   cabo      kubernetes-admin
          kubernetes-admin@lnmt   lnmt      kubernetes-admin
          kubernetes-admin@tato   tato      kubernetes-admin

If you do the merge as shown below without renaming, there'll be just one set of credentials named kubernetes-admin and you won't be able to access the other clusters. What I did was change the label in each cluster's config file, then merge them together. Under contexts, change the user to kubernetes-bldr.

contexts:
- context:
    cluster: bldr
    user: kubernetes-bldr
  name: kubernetes-admin@bldr

And in the users section, also change the name to match.

users:
- name: kubernetes-bldr

With the names changed, you can now merge the files together. I’ve named mine after each of the clusters so I have bldr, cabo, tato, and lnmt. If you have files in a different location, add the path to the files.

export KUBECONFIG=bldr:cabo:tato:lnmt

And then merge them into a single file.

kubectl config view --flatten > all-in-one.yaml

Check the file to make sure it at least looks correct, copy it to .kube/config, and then check the contexts.

$ kubectl config get-contexts
CURRENT   NAME                    CLUSTER   AUTHINFO          NAMESPACE
          kubernetes-admin@bldr   bldr      kubernetes-bldr
          kubernetes-admin@cabo   cabo      kubernetes-cabo
          kubernetes-admin@lnmt   lnmt      kubernetes-lnmt
          kubernetes-admin@tato   tato      kubernetes-tato

See, AUTHINFO is all different now. Change contexts to one of the other clusters and check access. Once it’s all working, you should now be able to add them to ArgoCD.
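A quick spot check before moving on, switching to one of the renamed contexts and listing its nodes:

kubectl config use-context kubernetes-admin@cabo
kubectl get nodes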

Cluster Add

Now to the heart of the task. Since we're logged in and have access to all clusters from a single .kube/config file, we can add the remote clusters to ArgoCD.

$ argocd cluster add kubernetes-admin@cabo --name cabo0cuomvip1.qa.internal.pri
WARNING: This will create a service account argocd-manager on the cluster referenced by context kubernetes-admin@cabo with full cluster level privileges. Do you want to continue [y/N]? y
INFO[0005] ServiceAccount "argocd-manager" created in namespace "kube-system"
INFO[0005] ClusterRole "argocd-manager-role" created
INFO[0005] ClusterRoleBinding "argocd-manager-role-binding" created
INFO[0010] Created bearer token secret for ServiceAccount "argocd-manager"
Cluster 'https://cabo0cuomvip1.qa.internal.pri:6443' added

And it’s added. Check the GUI, Settings, Clusters and you should see it there.

References


Ingress Controller

Overview

There are multiple IP assignments used in Kubernetes. In addition to the internal networking (handled by Calico in this case), you can install an Ingress Controller to manage access to your applications. This article provides some basic Service information as I explore the networking and work towards exposing my application(s) externally using an Ingress Controller.

Networking

You can manage pod traffic either with a Layer 2 or Layer 3 device or with an Overlay network. Because the L2/L3 approach requires maintaining the pod networks in your switch, the easiest method is an Overlay network. This encapsulates network traffic using VXLAN (Virtual Extensible LAN) and tunnels it to the other worker nodes in the cluster.

Services

I'll be describing the three ways to provide access to applications using a Service, along with the pluses and minuses of each.

ClusterIP

When you create a service, the default configuration assigns a ClusterIP from the pool of IPs defined as the Service Network when you created the Kubernetes cluster. The Service Network is internal to the cluster; it's how pods reach other applications through their services. In my configuration, 10.69.0.0/16 is the network I assigned to the Service Network, so when I look at a set of services, every one has a 10.69.0.0/16 IP address.

$ kubectl get svc
NAMESPACE            NAME                                      TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)                        AGE
default              kubernetes                                ClusterIP      10.69.0.1       <none>        443/TCP                        8d
default              my-nginx                                  NodePort       10.69.210.167   <none>        8080:31201/TCP,443:31713/TCP   11m

NodePort

Configuring a service with the type of NodePort is probably the easiest. You’re defining an externally accessible port in the range of 30,000-32,767 that is associated with your application’s port.

apiVersion: v1
kind: Service
metadata:
  name: my-nginx
  namespace: default
  labels:
    run: my-nginx
spec:
  type: NodePort
  ports:
  - name: http
    nodePort: 30100
    port: 8080
    targetPort: 80
    protocol: TCP
  - name: https
    nodePort: 30110
    port: 443
    protocol: TCP
  selector:
    run: my-nginx

When you check the services, you’ll see the ports that were randomly assigned if you didn’t define them.

$ kubectl get svc
NAME               TYPE        CLUSTER-IP      EXTERNAL-IP      PORT(S)                        AGE
my-nginx           NodePort    10.69.91.108    <none>           8080:30100/TCP,443:30110/TCP   11h

Anyway, when using NodePort, you simply access the API Server IP Address and tack on the port. With that you have access to the application.

https://bldr0cuomvip1.dev.internal.pri:30110

The positive aspect here is that regardless of which worker node the container is running on, you always have access. The problem with this method is that your load balancer has to know about the ports and have its configuration updated, plus your firewall has to allow access to either a range of ports or have an entry for each port. Not a killer, but it can complicate things, especially if you're not explicitly assigning the NodePort. Infrastructure as Code does help with managing the Load Balancer and Firewall configurations pretty well.

Side note: you can also access any worker node with the defined port number and Kubernetes will route you to the correct pod. Certainly accessing the API server address with the port number is optimum.

ExternalIPs

The use of externalIPs lets you access an application/container via the IP of the worker node the app is running on. You can then create a DNS entry so the application can be reached by name without tacking on a NodePort or other nonstandard port (8080 being a common one, for example).

You’d update the above service to add the externalIPs line. This would be the IP of the worker node the container is running on. In order to add the line you’ll need to get the list of workers to see which node the container is running on.

$ kubectl get pods -o wide
NAME                               READY   STATUS    RESTARTS      AGE   IP              NODE                               NOMINATED NODE   READINESS GATES
curl                               1/1     Running   1 (17h ago)   18h   10.42.251.135   bldr0cuomknode1.dev.internal.pri   <none>           <none>
curl-deployment-7d9ff6d9d4-jz6gj   1/1     Running   0             12h   10.42.251.137   bldr0cuomknode1.dev.internal.pri   <none>           <none>
echoserver-6f54957b4d-94qm4        1/1     Running   0             45h   10.42.80.7      bldr0cuomknode3.dev.internal.pri   <none>           <none>
my-nginx-66689dbf87-9x6kt          1/1     Running   0             12h   10.42.80.12     bldr0cuomknode3.dev.internal.pri   <none>           <none>

We see the my-nginx pod is running on bldr0cuomknode3.dev.internal.pri. Get the IP for it and update the service (I know all my K8S nodes are 160-162 for control and 163-165 for workers so knode3 is 165).

$ kubectl edit svc my-nginx
# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: "2023-04-06T01:49:28Z"
  labels:
    run: my-nginx
  name: my-nginx
  namespace: default
  resourceVersion: "1735857"
  uid: 439abcae-94d8-4810-aa44-2992d7a30a63
spec:
  clusterIP: 10.69.91.108
  clusterIPs:
  - 10.69.91.108
  externalIPs:
  - 192.168.101.165
  externalTrafficPolicy: Cluster
  internalTrafficPolicy: Cluster
  ipFamilies:
  - IPv4
  ipFamilyPolicy: SingleStack
  ports:
  - name: http
    nodePort: 32107
    port: 8080
    protocol: TCP
    targetPort: 80
  - name: https
    nodePort: 31943
    port: 443
    protocol: TCP
    targetPort: 443
  selector:
    run: my-nginx
  sessionAffinity: None
  type: NodePort
status:
  loadBalancer: {}

Then add the externalIPs: line as noted above. When done, check the services.

$ kubectl get svc
NAME         TYPE        CLUSTER-IP      EXTERNAL-IP       PORT(S)                        AGE
echoserver   NodePort    10.69.249.118   <none>            8080:32356/TCP                 45h
kubernetes   ClusterIP   10.69.0.1       <none>            443/TCP                        8d
my-nginx     NodePort    10.69.91.108    192.168.101.165   8080:32107/TCP,443:31943/TCP   13h

If you check the pod output above, note that the echoserver is also on knode3 and is also using port 8080. The issue here is that two services on the same externalIP can't use the same port; only the first service will respond. Either move the pod or change the port to a unique one.

$ kubectl get svc
NAME         TYPE        CLUSTER-IP      EXTERNAL-IP       PORT(S)                        AGE
echoserver   NodePort    10.69.249.118   192.168.101.165   8080:32356/TCP                 45h
kubernetes   ClusterIP   10.69.0.1       <none>            443/TCP                        8d
my-nginx     NodePort    10.69.91.108    192.168.101.165   8080:32107/TCP,443:31943/TCP   12h

Finally, the problem should be clear. If knode3 goes away, or goes into maintenance mode, or heck is replaced, the IP address is now different. You’ll need to check the pods, update the service to point to the new node, then update DNS to use the new IP address. And depending on the DNS TTL, it could take some time before the new IP address is returned. Also what if you have more than one pod for load balancing or if you’re using Horizontal Pod Autoscaling (HPA)?

Ingress-Controllers

I checked out several ingress controllers and because Openshift is using a HAProxy ingress controller, that’s what I went with. There are several others of course and you’re free to pick the one that suits you.

The benefit of an Ingress Controller is that it combines the positive features of a NodePort and an ExternalIP. Remember, with a NodePort you access your application via the Load Balancer IP or a Worker Node IP, but with a unique port number; it's annoying because you have to manage firewall rules for all the ports. With an ExternalIP, you can assign the IP to a Service and create a DNS entry pointing to it so folks can access the site through a well crafted DNS name. The problem of course is that if the node goes away, you have to update the DNS with the new node IP where the pod now resides.

An Ingress Controller installation deploys the selected ingress pod, which carries an ingress class. You then create an Ingress route referencing that class in a metadata annotation and create a DNS entry that points to the Load Balancer IP. The Ingress route knows about the DNS name and the class, so incoming traffic goes to the Ingress Controller, which then sends it to the appropriate pod or pods regardless of which worker they're on.

Ingress Controller Installation

I've been in positions where I couldn't use helm, so I haven't used it much, but the haproxy-ingress controller is only installable via a helm chart, so this is a first for me. First add the helm binary, then the helm chart repository for the controller.

helm repo add haproxy-ingress https://haproxy-ingress.github.io/charts

Next, create a custom values file; I called it haproxy-ingress-values.yaml.

controller:
  hostNetwork: true

Then install the controller. This creates the ingress-controller namespace.

helm install haproxy-ingress haproxy-ingress/haproxy-ingress\
  --create-namespace --namespace ingress-controller\
  --version 0.14.2\
  -f haproxy-ingress-values.yaml

And that’s all there is to it. Next up is creating the necessary ingress rules for applications.

Ingress Controller Configuration

I’m going to be creating a real basic Ingress entry here to see how things work. I don’t need a lot of options but you should check out the documentation and feel free to adjust as necessary for your situation.

Initially I’ll be using a couple of examples I used when testing this process. In addition I have another document I used when I was managing Openshift which gave me the little hint on what I was doing wrong to this point.

There are two example sites I’m using to test this. One is from the kubernetes site (my-nginx) and one is from the haproxy-ingress site (echoserver) both linked in the References section.

my-nginx Project

The my-nginx project has several configuration files that make up the project. The one thing it doesn’t have is the ingress.yaml file needed for external access to the site. Following are the configurations used to build this site.

The configmap.yaml file provides data for the nginx web server.

apiVersion: v1
kind: ConfigMap
metadata:
  name: nginxconfigmap
data:
  default.conf: |
    server {
            listen 80 default_server;
            listen [::]:80 default_server ipv6only=on;

            listen 443 ssl;

            root /usr/share/nginx/html;
            index index.html;

            server_name localhost;
            ssl_certificate /etc/nginx/ssl/tls.crt;
            ssl_certificate_key /etc/nginx/ssl/tls.key;

            location / {
                    try_files $uri $uri/ =404;
            }
    }

For the nginxsecret.yaml Secret, you'll first need to create a self-signed certificate and key using the openssl command.

openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
  -keyout /var/tmp/nginx.key -out /var/tmp/nginx.crt \
  -subj "/CN=my-nginx/O=my-nginx"

You'll then copy the base64-encoded certificate and key into the nginxsecret.yaml file and add it to the cluster.

apiVersion: "v1"
kind: "Secret"
metadata:
  name: "nginxsecret"
  namespace: "default"
type: kubernetes.io/tls
data:
  tls.crt: "LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0..."
  tls.key: "LS0tLS1CRUdJTiBQUklWQVRFIEtFWS0..."
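On Linux, the base64 values can be produced from the files generated above; -w0 keeps each value on a single line:

base64 -w0 /var/tmp/nginx.crt
base64 -w0 /var/tmp/nginx.key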

After applying the secret, you'll need to apply the service, which is what Kubernetes uses to connect the ports to a label associated with the deployment. Note the selector here is 'run: my-nginx', and the deployment.yaml file carries the same label, so traffic coming to this service will go to any pod with that label.

apiVersion: v1
kind: Service
metadata:
  name: my-nginx
  labels:
    run: my-nginx
spec:
  type: NodePort
  ports:
  - port: 8080
    targetPort: 80
    protocol: TCP
    name: http
  - port: 443
    protocol: TCP
    name: https
  selector:
    run: my-nginx

Then apply the following deployment.yaml which will pull the nginx image from docker.io.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-nginx
spec:
  selector:
    matchLabels:
      run: my-nginx
  replicas: 1
  template:
    metadata:
      labels:
        run: my-nginx
    spec:
      volumes:
      - name: secret-volume
        secret:
          secretName: nginxsecret
      - name: configmap-volume
        configMap:
          name: nginxconfigmap
      containers:
      - name: nginxhttps
        image: bprashanth/nginxhttps:1.0
        ports:
        - containerPort: 443
        - containerPort: 80
        volumeMounts:
        - mountPath: /etc/nginx/ssl
          name: secret-volume
        - mountPath: /etc/nginx/conf.d
          name: configmap-volume

When you check the service, because it’s a NodePort, you’ll see both the service ports (8080 and 443) and the exposed ports (31201 and 31713). The exposed ports can be used to access the application by going to the Load Balancer url and adding the port.

$ kubectl get svc
NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                        AGE
echoserver   NodePort    10.69.249.118   <none>        8080:32356/TCP                 9h
kubernetes   ClusterIP   10.69.0.1       <none>        443/TCP                        9d
my-nginx     NodePort    10.69.210.167   <none>        8080:31201/TCP,443:31713/TCP   27h

However that’s not an optimum process. You have to make sure users know what port is assigned and make sure the port is opened on your Load Balancer. With an Ingress Controller, you create a DNS CNAME that points to the Load Balancer and then apply this ingress.yaml route.

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-nginx
  namespace: default
  annotations:
    kubernetes.io/ingress.class: haproxy
spec:
  rules:
  - host: my-ingress.dev.internal.pri
    http:
      paths:
      - backend:
          service:
            name: my-nginx
            port:
              number: 8080
        path: /
        pathType: Prefix
  tls:
  - hosts:
    - my-ingress.dev.internal.pri

I created a my-ingress.dev.internal.pri DNS CNAME that points to bldr0cuomvip1.dev.internal.pri. When accessing https://my-ingress.dev.internal.pri, the ingress route directs the request to the my-nginx service, which then transmits traffic to the application pod regardless of which worker node it resides on.
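A quick check from outside the cluster; the -k flag is needed here because the controller is still serving its default self-signed certificate rather than one issued for this hostname:

curl -k https://my-ingress.dev.internal.pri/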

Let's break this down a little for clarity, in part because it didn't click for me until I poked around and had a lightbulb moment looking at an old document I created for an Openshift cluster I was working on.

In the ingress.yaml file, the spec.rules.host and spec.tls.hosts lines are the DNS entry you created for the application. The ingress controller watches for Ingress objects like this and transmits incoming traffic for that hostname to the configured service.

The spec.rules.http.backend.service.name is the name of the service this ingress route transmits traffic to. The service.port.number is the port listed in the pod service.

The path line is interesting. You can have multiple directories accessible by different DNS names by changing the path line. In general this is a single website so the / for the path is appropriate for a majority of cases.

The important thing is the annotations line; it has to point to the ingress controller. For the haproxy-ingress controller it's as listed, but you can verify by describing the pod.

kubectl describe pod haproxy-ingress-7bc69b8cc-wq2hc  -n ingress-controller
...
    Args:
      --configmap=ingress-controller/haproxy-ingress
      --ingress-class=haproxy
      --sort-backends
...

In this case we see the passed argument of ingress-class = haproxy. This is the same as the annotations line and tells the ingress route which pod is load balancing traffic within the cluster.

Once applied, you can then go to https://my-ingress.dev.internal.pri and access the nginx startup page.

echoserver Project

This one is a little simpler but still can show us how to use an ingress route to access a pod.

All you need is a service.yaml file to know where to transmit traffic.

apiVersion: v1
kind: Service
metadata:
  labels:
    app: echoserver
  name: echoserver
  namespace: default
spec:
  clusterIP: 10.69.249.118
  clusterIPs:
  - 10.69.249.118
  externalTrafficPolicy: Cluster
  internalTrafficPolicy: Cluster
  ipFamilies:
  - IPv4
  ipFamilyPolicy: SingleStack
  ports:
  - nodePort: 32356
    port: 8080
    protocol: TCP
    targetPort: 8080
  selector:
    app: echoserver
  sessionAffinity: None
  type: NodePort

Then a deployment.yaml file to load the container.

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: echoserver
  name: echoserver
  namespace: default
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: echoserver
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: echoserver
    spec:
      containers:
      - image: k8s.gcr.io/echoserver:1.3
        imagePullPolicy: IfNotPresent
        name: echoserver
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30

For me, the problem was that the example was a single command line to create the ingress route, which wasn't enough information. A lot of the problem with examples is that they assume cloud usage, where you'll have an AWS, GCE, or Azure load balancer. For on-prem it seems to be less obvious in the examples, which is why I'm walking through it this way. It helps me and may help others.

Here is the ingress.yaml file I used to access the application. Remember you have to create a DNS CNAME for the access and you’ll need the port number from the service definition (8080).

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: echoserver
  namespace: default
  annotations:
    kubernetes.io/ingress.class: haproxy
spec:
  rules:
  - host: echoserver.dev.internal.pri
    http:
      paths:
      - backend:
          service:
            name: echoserver
            port:
              number: 8080
        path: /
        pathType: Prefix
  tls:
  - hosts:
    - echoserver.dev.internal.pri

And with this ingress route, you have access to the echoserver pod. As I progress in loading tools and my llamas website, I’ll provide the ingress.yaml file so you can see how it’s done.

References


Persistent Storage

Overview

In this article I’ll configure and verify Persistent Storage for the Kubernetes cluster.

Installation

This is a simple installation. The NFS server has 100 gigs of space which will be used for any Persistent Volume Claims (PVCs) needed by applications.

Apply the following storage-pv.yaml file.

apiVersion: v1
kind: PersistentVolume
metadata:
  name: storage-pv
spec:
  capacity:
    storage: 100Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  nfs:
    path: /srv/nfs4/storage
    server: 192.168.101.170

Verify by checking the PV in the cluster.

$ kubectl get pv
NAME         CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
storage-pv   100Gi      RWX            Retain           Available                                   7s

And that’s it. Storage is now available for any applications.
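As a quick sanity check, a throwaway claim should bind against it. A sketch with a hypothetical claim name; storageClassName is set to an empty string so the claim matches this statically provisioned, classless PV:

kubectl apply -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-claim
  namespace: default
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""
  resources:
    requests:
      storage: 5Gi
EOF

If it reports Bound in kubectl get pvc, the NFS-backed volume is wired up correctly. Keep in mind the PV's Retain policy means a deleted claim leaves the volume in a Released state rather than immediately Available again.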


Kubernetes Networking

Overview

This article provides instructions on installing the networking layer for the Kubernetes cluster.

Calico Networking

You'll need to install Calico, which is the network layer for the cluster. There are two files you'll retrieve from Tigera, who makes Calico: the tigera-operator.yaml and custom-resources.yaml files.

In the custom-resources.yaml file, update the spec.calicoNetwork.ipPools.cidr line to point to the PodNetwork. In my case, 10.42.0.0/16.

In the tigera-operator.yaml file, update the image: line to point to the on-prem insecure registry and any imagePullPolicy lines to Always.

Once done, use kubectl to install the two configurations. First the tigera-operator.yaml file, then the custom-resources.yaml file.

kubectl create -f tigera-operator.yaml
kubectl create -f custom-resources.yaml

When done and all is working, you should also see several calico pods start up.

$ kubectl get pods -A | grep -E "(calico|tigera)"
calico-apiserver   calico-apiserver-6fd86fcb4b-77tld                         1/1     Running   0             32m
calico-apiserver   calico-apiserver-6fd86fcb4b-p6bzc                         1/1     Running   0             32m
calico-system      calico-kube-controllers-dd6c88556-zhg6b                   1/1     Running   0             45m
calico-system      calico-node-66fkb                                         1/1     Running   0             45m
calico-system      calico-node-99qs2                                         1/1     Running   0             45m
calico-system      calico-node-dtzgf                                         1/1     Running   0             45m
calico-system      calico-node-ksjpr                                         1/1     Running   0             45m
calico-system      calico-node-lhhrl                                         1/1     Running   0             45m
calico-system      calico-node-w8nmx                                         1/1     Running   0             45m
calico-system      calico-typha-69f9d4d5b4-vp7mp                             1/1     Running   0             44m
calico-system      calico-typha-69f9d4d5b4-xv5tg                             1/1     Running   0             45m
calico-system      calico-typha-69f9d4d5b4-z65kn                             1/1     Running   0             44m
calico-system      csi-node-driver-5czsp                                     2/2     Running   0             45m
calico-system      csi-node-driver-ch746                                     2/2     Running   0             45m
calico-system      csi-node-driver-gg9f4                                     2/2     Running   0             45m
calico-system      csi-node-driver-kwbwp                                     2/2     Running   0             45m
calico-system      csi-node-driver-nh564                                     2/2     Running   0             45m
calico-system      csi-node-driver-rvfd4                                     2/2     Running   0             45m
tigera-operator    tigera-operator-7d89d9444-4scfq                           1/1     Running   0             45m

It does take a bit so give it some time to get going.

Troubleshooting

I did have a problem with the installation the first time because I hadn't updated the cidr line in the custom-resources.yaml file with my pod network configuration. After rebuilding the cluster, I updated and reapplied it and it worked. One other issue was that crio wasn't enabled or started on the first control node for some reason; once it was enabled and started, everything worked as expected.
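The crio side of that is a one-liner on the affected node:

systemctl enable --now crio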
