Summer Access Road

We live in the mountains, surrounded by pines and aspens and visited by elk, deer, moose, foxes, and bobcats, plus semi-domesticated animals like dogs and cats.

The other thing we’re visited by is fire, whether through acts of nature like lightning strikes or acts of idiots, be they homeless campers or just reckless folks.

The year before we moved here the area had a pretty large fire called the Cold Springs fire. It was started by a couple of young men who failed to put out their campfire properly.

https://wildfirepartners.org/cold-springs-fire/

Back when the subdivision was created in 1984, an egress route was required by Boulder County so folks up on Ridge Road could have an alternate way of escaping a fire like the Cold Springs fire. It was called the Summer Access Road in part because there is no maintenance on the road during the winter months.

At a yearly HOA meeting, we heard the following story.

The property to the east of the Summer Access Road was purchased. The folks who purchased the property cut in a driveway at the halfway point down the Summer Access Road, going uphill to a ridge where they intended to build a house.

Unfortunately they didn’t touch base with the HOA, and when they were confronted, they indicated that not only were they going to build there, but that if the HOA didn’t maintain the road in the winter, they’d block it above their driveway, which would of course prevent anyone else from using the road.

A reminder, of course, that this road was created so the folks up on Ridge Road could escape a fire in an emergency. If it’s blocked, that’s going to be a problem.

As a result, the HOA went to court and was able to block their access to the Summer Access Road.

The property does connect with Ridge Road at the topmost corner, however the fire department said the piece is too steep for fire engines and required that the owners work with the adjacent property owners to create an easement for a driveway a fire engine can navigate.

Unfortunately they had so alienated the neighbors that they were denied. As a result, the owners filed a quiet title claim to the Summer Access Road. A quiet title action is basically intended to claim ownership of the Summer Access Road and quiet all other claims. This is different from a quitclaim, which is used when a co-owner indicates they have no claim to shared property.

So back to court the HOA went. Up to this point, the Summer Access Road was an easement on the three properties that make up that area. The folks who own those three properties allowed the easement. But there wasn’t an official court document that indicated the HOA was fully responsible for the road. So there was some concern about the future of the road.

However the court ruled that the HOA does in fact own the maintenance and management of the road. The property owners were forced to restore the lower driveway and block any usage of it (the location was poorly chosen and runoff would have damaged the Summer Access Road). They were permitted to create a driveway that pointed downhill from the first turn of the Summer Access Road, and they were required to follow the HOA rules with regard to access to the Summer Access Road; that is, the HOA can close the road due to inclement weather or for simple maintenance of the road.

The property owners were also forced to put up a large bond for the upper driveway, $26,000 as I understand it, and to pay a chunk of the legal fees incurred by the HOA, $42,000 as I recall. They also had to get an okay from the county before proceeding. Apparently they had been pretty abusive towards the county when they were visited back at the start of all this.

The bad part in general is that the homeowners’ HOA fees doubled for two years to pay for this. But the good news, as noted above, is that the HOA is now officially responsible for the Summer Access Road. I’ve not seen any further activity on this property. I don’t know if they are researching how to put in the new road, if the money has tapped them out short term, or what. We’ll see what the future holds.


Docker Registry

Overview

I have a requirement to create a local Docker Registry. I’m doing this because I have four Kubernetes clusters that somewhat mirror the work environment. This lets me test out various new bits I want to apply to the work environment without using up work resources or involving multiple groups. In addition, we’re on high-speed WiFi, so we have a pretty small pipe in general. So that I’m not constantly using up bandwidth, hosting images locally is the next best thing.

Currently work is using Artifactory. Artifactory has some cool features for Docker in that I can create a Virtual Repository that consists of multiple Remote Repositories. So I can have a group-specific Virtual Repository used for hosting images, and when I try to pull a new image, such as kube-apiserver v1.18.8, Artifactory automatically pulls it into the Virtual Repository. Very nice.

Unfortunately, the Docker management features of Artifactory are a paid-for product, and looking at the costs, I can’t justify paying that for my own learning purposes. Hence I’m installing the default Docker Registry.

Installation

It’s actually a pretty simple process overall. I created a CentOS 7 server, bldr0cuomrepo1.internal.pri, and installed the docker-distribution RPM, which is part of the extras repository.
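
The install itself is a single package; a minimal sketch, assuming the extras repository is enabled (it is by default on CentOS 7):

# yum install -y docker-distribution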

Check the configuration file located at /etc/docker-distribution/registry/config.yaml for any changes you might want to make. In my case, the default is fine.

version: 0.1
log:
  fields:
    service: registry
storage:
    cache:
        layerinfo: inmemory
    filesystem:
        rootdirectory: /var/lib/registry
http:
    addr: :5000

And finally, enable and start the docker-distribution service.

# systemctl enable docker-distribution
# systemctl start docker-distribution

Insecure

Okay, well, this is an insecure registry as far as Docker and Kubernetes are concerned. As such, I need to make a change to the /etc/docker/daemon.json file.

{
        "insecure-registries" : ["bldr0cuomrepo1.internal.pri:5000"]
}

And of course, restart docker.
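
A quick way to apply and sanity-check the change, using the hostname from above:

# systemctl restart docker
# docker info | grep -A 2 "Insecure Registries"

The second command should list bldr0cuomrepo1.internal.pri:5000 under Insecure Registries.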

Image Management

Now if you want to host your own images for Kubernetes, you can host them locally and have your deployments point to the local registry. In addition, you can pull commonly used images from the internet and host them locally.

# docker pull nginx:alpine

Then you need to tag the image. This involves changing the location from the various internet sites like docker.io, k8s.gcr.io, and quay.io, to your new local repository.

# docker tag nginx:alpine bldr0cuomrepo1.internal.pri:5000/nginx:alpine

Once pulled, you can run docker image ls to see the installed images. Then you can push it up to your repository.

[root@bldr0cuomifdock1 ~]# docker push bldr0cuomrepo1.internal.pri:5000/llamas-image:v1
The push refers to repository [bldr0cuomrepo1.internal.pri:5000/llamas-image]
ca01cce58e28: Pushed
a181cbf898a0: Pushed
570fc47f2558: Pushed
5d17421f1571: Pushed
7bb2a9d37337: Pushed
3e207b409db3: Pushed
v1: digest: sha256:4a0a5e1d545b9ac88041e9bb751d2e2f389d313ac5a59f7f4c3ce174cd527110 size: 1568

And now that it’s hosted locally, you can pull it to any server where docker is installed.

[root@bldr0cuomifdock1 data]# docker pull bldr0cuomrepo1.internal.pri:5000/llamas-image:v1
v1: Pulling from llamas-image
cbdbe7a5bc2a: Pull complete
10c113fb0c77: Pull complete
9ba64393807b: Pull complete
262f9908119d: Pull complete
c4a057508f96: Pull complete
e044fc51fea0: Pull complete
Digest: sha256:4a0a5e1d545b9ac88041e9bb751d2e2f389d313ac5a59f7f4c3ce174cd527110
Status: Downloaded newer image for bldr0cuomrepo1.internal.pri:5000/llamas-image:v1
bldr0cuomrepo1.internal.pri:5000/llamas-image:v1

Kubernetes Pod Schedule Prioritization

Introduction

Currently Kubernetes is not configured to treat any pod as more or less important than any other pod with the exception of critical Kubernetes pods such as the kube-apiserver, kube-scheduler, and kube-controller-manager.

Multiple products with different Service Class requirements are hosted on Kubernetes but there is no configuration that provides any prioritization of these products.

The research goal is to identify a process or configuration which would let the Applications and Operations teams identify and ensure their products have priority when using cluster resources; for example, in the event of an unintentional failure such as a worker node crash, or an intentional one such as removing a worker node from a cluster pool for maintenance.

A secondary goal is to determine if overcommitting the Kubernetes clusters is a viable solution to resource availability.

As always, this is a summation that generally applies to my environment. For full details, links to documents are provided at the end of this document.

Service Class

Service Class is used to define service availability. This is not relevant to individual components of a product but to the overall service itself. This is a list of Service Class definitions.

  • Mission Critical Service (MCS) – 99.999% up-time.
  • Business Critical Service (BCS) – 99.9% up-time.
  • Business Essential Service (BES) – 99% up-time.
  • Business Support Service (BSS) – 98% up-time.
  • Unsupported Business Service (UBS) – No guaranteed service up-time
  • LAB – No guaranteed service up-time.

Note that the PriorityClass design does not ensure the hosted Product satisfies the contracted Service Class. PriorityClass Objects ensure that resources are available to more critical Products should there be resource exhaustion due to overcommitment or worker node failure.

PriorityClass Objects

Kubernetes introduced PriorityClass Objects as of version 1.14. This object lets us assign a scheduling priority to a pod so that it can jump ahead in the scheduling queue.

  • 2,000,001,000 – This is used for critical pods running on Kubernetes nodes (system-node-critical).
  • 2,000,000,000 – This is used for critical pods which manage Kubernetes clusters (system-cluster-critical)
  • 1,000,000,000 – This level and lower is available for any product to use.
  • 0 – This is the default level for all non-critical pods.

Linux:cschelin@lnmt1cuomtool11$  kubectl get priorityclasses -A
NAME                      VALUE        GLOBAL-DEFAULT   AGE
system-cluster-critical   2000000000   false            22d
system-node-critical      2000001000   false            22d

system-node-critical Object

The following pods are assigned to the system-node-critical Object.

  • calico-node
  • kube-proxy

system-cluster-critical Object

The following pods are assigned to the system-cluster-critical Object.

  • calico-kube-controllers
  • coredns
  • etcd
  • kube-apiserver
  • kube-controller-manager
  • kube-scheduler

PriorityClass Definitions

A PriorityClass Object lets us define a set of values which can be used by applications in order to ensure availability based on Service Class. The recommendations below are to be configured for the Kubernetes environments.

  • 7,000,000 – Critical Infrastructure Service
  • 6,000,000 – Mission Critical Service
  • 5,000,000 – Infrastructure Service
  • 4,000,000 – Business Critical Plus Service (a product that requires 99.99% up-time)
  • 3,000,000 – Business Critical Service
  • 2,000,000 – Business Essential Service
  • 1,000,000 – Business Support Service
  • 500,000 – Unsupported Business Service and LAB Services (global default)

Most of the items in the list are well-known Service Class definitions. For the ones that I’ve added, additional details follow.

Critical Infrastructure Service

Any pod that is used by any or all other pods in the cluster, especially if the pod is used by an MCS product.

Infrastructure Service

Standard infrastructure pods such as kube-state-metrics and the metrics-server pods. This includes other services such as Prometheus and Filebeat.

Business Critical Plus Service

Currently there is no 4 9’s Service Class defined; however, some products have been deployed as requiring 4 9’s support. For this reason, a PriorityClass Object was created to satisfy that Service Class request.

Testing

In testing:

  1. MCS pods in a deployment will run as long as resources are available.
  2. If there are not enough resources for the lower PriorityClass deployments, pods will be started until resources are exhausted. Remaining pods will be put in a Pending state.
  3. If additional MCS pods need to start, lower PriorityClass pods will be Terminated to free resources. Replacement pods for the terminated deployments will be created but remain in a Pending state.
  4. Once the additional MCS pods are not needed, they will be deleted and any Pending pods will start.
  5. For multiple MCS deployments, there is no further priority ordering between them. If there are insufficient resources for all MCS pods to start, then any remaining MCS pods will be put in a Pending state.
  6. If a lower PriorityClass pod requires few enough resources to start where a higher PriorityClass pod is unable to, the lower PriorityClass pod will start.

Pod Preemption

There is a PriorityClass option called preemptionPolicy which was made available in Kubernetes 1.15. This option lets you configure a PriorityClass to not evict pods of a lower PriorityClass. The option still moves pods up in the scheduling queue; however, it doesn’t evict pods if cluster resources are running low.
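
As a sketch, a non-preempting variant of one of the classes defined below might look like the following. The name is just an example, and since the field was alpha in 1.15, the NonPreemptingPriority feature gate may need to be enabled on older clusters.

apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: business-essential-nonpreempting
value: 2000000
preemptionPolicy: Never
globalDefault: false
description: "Scheduled ahead of lower classes but never evicts running pods."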

PodDisruptionBudget

This Object lets you specify the number of pods that must remain running. However, in testing this doesn’t appear to apply to PriorityClass evictions. If there are insufficient resources, pods in a lower PriorityClass will be evicted regardless of this setting. It will, however, prevent a voluntary disruption such as draining a worker node if that would leave fewer than the required pods running.
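
For reference, a minimal PodDisruptionBudget sketch; the name, namespace, and label are hypothetical and would match your own Deployment’s labels. On newer clusters (1.21+) the policy/v1 API is used instead.

apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: llamas-pdb
  namespace: llamas
spec:
  minAvailable: 2
  selector:
    matchLabels:
      app: llamas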

Configuration Settings

For Deployments, you’d add the name defined below as priorityClassName in the pod template spec (spec.template.spec.priorityClassName).

The following configurations are recommended for the environment.

Critical Infrastructure Service

apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: critical-infrastructure
value: 7000000
globalDefault: false
description: "This priority class is reserved for infrastructure services that all pods use."

Mission Critical Service

apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: mission-critical
value: 6000000
globalDefault: false
description: "This priority class is reserved for services that require 99.999% uptime."

Infrastructure Service

apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: infrastructure
value: 5000000
globalDefault: false
description: "This priority class is reserved for infrastructure services."

Business Critical Plus Service

apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: business-critical-plus
value: 4000000
globalDefault: false
description: "This priority class is reserved for services that require 99.99% uptime."

Business Critical Service

apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: business-critical
value: 3000000
globalDefault: false
description: "This priority class is reserved for services that require 99.9% uptime."

Business Essential Service

apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: business-essential
value: 2000000
globalDefault: false
description: "This priority class is reserved for services that require 99% uptime."

Business Support Service

apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: business-support
value: 1000000
globalDefault: false
description: "This priority class is reserved for services that require 98% uptime."

Unsupported Business Service

Note the globalDefault setting here; it makes this the class used by any pod that fails to set a PriorityClass in its Deployment.

apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: unsupported-business
value: 500000
globalDefault: true
description: "This priority class is reserved for services that have no uptime requirements."

PriorityClass Object Table

Linux:cschelin@lnmt1cuomtool11$ kubectl get pc -A
NAME                              VALUE        GLOBAL-DEFAULT   AGE
business-critical                 3000000      false            3d9h
business-critical-plus            4000000      false            3d9h
business-essential                2000000      false            3d9h
business-support                  1000000      false            3d9h
critical-infrastructure           7000000      false            3s
infrastructure                    5000000      false            6s
mission-critical                  6000000      false            14s
system-cluster-critical           2000000000   false            25d
system-node-critical              2000001000   false            25d
unsupported-business              500000       true             3d9h

Pod Configuration

In order to assign this to pods, you’ll need to add the PriorityClass to the deployment or pod configuration.

    spec:
      containers:
      - image: bldr0cuomrepo1.dev.internal.pri:5000/llamas:v1.4.1
        imagePullPolicy: Always
        name: llamas
      priorityClassName: business-essential

For the pod configuration.

spec:
  containers:
  - name: llamas
    image: bldr0cuomrepo1.dev.internal.pri:5000/llamas:v1.4.1
    imagePullPolicy: Always
  priorityClassName: business-essential
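
Once the pod is running, you can confirm both the class name and the numeric priority the admission controller assigned; the pod name and namespace here are illustrative.

$ kubectl get pod llamas-6b9c7d4f8-abcde -n llamas -o jsonpath='{.spec.priorityClassName} {.spec.priority}{"\n"}'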

Conclusion

The above recommendations provide a reliable way of ensuring critical products that are deployed to Kubernetes will have the necessary resources to respond appropriately to requests.

In order to prevent service disruption, ensure no deployed product consumes so many resources that the other deployed products can’t get their minimum (Requests) allocations.

This might also permit overcommitting resources in the clusters.

References


Jenkins And Build Agents

Overview

In this article, I’ll provide instructions on how I installed Jenkins and the two Jenkins Build Agents in my environment.

System Requirements

I used one of my standard templates in vCenter to create the three Jenkins nodes. All three servers have 2 CPUs and 4 Gigs of Memory. For the main Jenkins server, 64 Gigs of storage is sufficient. For Build Agents, 200 Gigs of storage is recommended; basically as much as you need for storing deployment jobs. My photos website has about 30 Gigs of pictures, and with deployments going to 3 sites (local for testing, docker for the future, and the remote publicly visible site), just the photos website takes almost 100 gigs. Jenkins requires Java 1.8 to be installed prior to installing Jenkins itself.

Firewall Configuration

As part of Zero Trust Networking, each system has a firewall. You’ll need to configure the firewall for the Jenkins nodes.

firewall-cmd --permanent --new-service=jenkins
firewall-cmd --permanent --service=jenkins --set-short="Jenkins ports"
firewall-cmd --permanent --service=jenkins --set-description="Jenkins port exceptions"
firewall-cmd --permanent --service=jenkins --add-port=8080/tcp
firewall-cmd --permanent --add-service=jenkins
firewall-cmd --zone=public --add-service=http --permanent
firewall-cmd --reload

Installing Jenkins

You’ll need to install the repository configuration and GPG keys, then install Java and finally Jenkins.

wget -O /etc/yum.repos.d/jenkins.repo \
    https://pkg.jenkins.io/redhat-stable/jenkins.repo
rpm --import https://pkg.jenkins.io/redhat-stable/jenkins.io.key
yum upgrade
yum install java-8-openjdk
yum install jenkins
systemctl daemon-reload

Enable and Start Jenkins

Pretty simple process here. You enable and start Jenkins.

systemctl enable jenkins
systemctl start jenkins

Configure Users

During the installation process, a password was created so you can Unlock the Jenkins installation. Copy it from
/var/lib/jenkins/secrets/initialAdminPassword and paste it into the Unlock screen. Once Jenkins is unlocked, you’ll be presented with a Create Administrator page. Fill it in and save it. Once done, you can then access Jenkins and install any plugins you want to use.
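
For example, from a shell on the Jenkins server:

# cat /var/lib/jenkins/secrets/initialAdminPassword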

Build Agents

In order to effectively use Jenkins, the main node that you installed shouldn’t be processing any jobs. If it processes jobs, it can be overwhelmed and other jobs might queue up, delaying deployments. For my homelab it’s not so critical; however, I am trying to emulate a production-like environment, so having Build Agents satisfies that requirement.

Configuring the Build Agents

Jenkins requires a few things before the main system can incorporate a Build Agent. You’ll need to create the jenkins account.

useradd -d /var/lib/jenkins -c "Jenkins Remote Agent" -m jenkins

Of course, set a password, something long and hard to figure out. Then create your public/private key pair. This is used to communicate with the necessary servers.

ssh-keygen -t rsa

This creates the key pair in the Jenkins .ssh directory.
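
If the agent will use that key to log in to the servers it deploys to, copy the public key over to each of them as the jenkins user; the target host here is just a placeholder.

$ ssh-copy-id -i ~/.ssh/id_rsa.pub jenkins@target.example.com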

Next, install Java 1.8. Jenkins needs Java on the agent in order to launch the agent process and run jobs.

yum install -y java-1.8.0-openjdk-headless
Installed:
  java-1.8.0-openjdk-headless.x86_64 1:1.8.0.272.b10-1.el7_9

Dependency Installed:
  copy-jdk-configs.noarch 0:3.3-10.el7_5             javapackages-tools.noarch 0:3.4.1-11.el7              lksctp-tools.x86_64 0:1.0.17-2.el7
  pcsc-lite-libs.x86_64 0:1.8.8-8.el7                python-javapackages.noarch 0:3.4.1-11.el7             python-lxml.x86_64 0:3.2.1-4.el7
  tzdata-java.noarch 0:2020d-2.el7

Complete!

Now that the node is prepared, you’ll need to add it in Jenkins. Click Manage Jenkins, then Manage Nodes and Clouds, then click on New Node. Once you name it, you’ll fill out the form as follows:

  • Name: Remote Build 2 (this is my first Jenkins Build Agent)
  • Description: Access to the remote server for local files
  • Number of Executors: 2 (rule of thumb is 1 per CPU)
  • Remote root directory: /var/lib/jenkins (the jenkins account home dir)
  • Labels: guardian (the label you’ll use in jobs to determine which Build Agent to use)
  • Usage: Select Only build jobs with label expressions matching this node
  • Launch method: Select Launch agents via SSH
  • Host: 192.168.104.82 (I tend to avoid using DNS as it can be unreliable)
  • Credentials: Select Remote schelin.org server
  • Host Key Verification Strategy: Select Known hosts file Verification Strategy
  • Availability: Select Keep this agent online as much as possible

When you’re done, click Save and the node will show up in the list of nodes. Then create the second Build Agent following the above installation instructions and configure it as follows.

  • Name: Local Build 3 (this is my second Jenkins Build Agent)
  • Description: Local Server Builds
  • Number of Executors: 2 (rule of thumb is 1 per CPU)
  • Remote root directory: /var/lib/jenkins (the jenkins account home dir)
  • Labels: local (the label you’ll use in jobs to determine which Build Agent to use)
  • Usage: Select Only build jobs with label expressions matching this node
  • Launch method: Select Launch agents via SSH
  • Host: 192.168.104.81 (I tend to avoid using DNS as it can be unreliable)
  • Credentials: Select Local environment
  • Host Key Verification Strategy: Select Known hosts file Verification Strategy
  • Availability: Select Keep this agent online as much as possible

And when done, click Save and the node will show up in the list of nodes.

References


Kubernetes Resource Management

Overview

There are two objectives to Resource Management: ensuring sufficient resources are available for all deployed products, and determining when additional resources are required.

This document provides information based on the deployed Kubernetes cluster and is filtered for my specific configurations. For full details on Resource Management, be sure to follow the links in the References section at the end of this document.

Description

There are two Objects that are created to manage resources in a namespace. The LimitRange Object and the ResourceQuota Object.

LimitRange Object

The LimitRange Object provides default values for containers in a namespace that don’t have their resources defined. To define this, you need to try to determine the resources used by the containers in the namespace. The command kubectl top pods -n [namespace] will give you the idle values, but you’ll have to determine the maximum values by monitoring usage over time and be ready to adjust as necessary. A tool that monitors resources, such as Grafana, can provide that information.

ResourceQuota Object

The ResourceQuota Object is the total value of the defined resource requirements for the product. The product Architecture Document will define the minimum (Requests) and maximum (Limits) CPU and Memory values plus the minimum and maximum number of pods in a product. You then create the ResourceQuota Object with those values and apply it to the product namespace and restart the pods.

Resource Requests

Requests are the minimum values required for the container to be scheduled. They consist of two settings, CPU and Memory, and are slices of the cluster resources configured as millicpus (m) and megabytes (Mi) or gigabytes (Gi). These values reserve the resources so they are not available to other containers. When cluster Resource Requests reach 80%, additional worker nodes are required.

Resource Limits

Limits are the maximum values the container is expected to consume. Like Requests, they consist of two settings, CPU and Memory. Limits do not reserve cluster resources but should be used to determine cluster capacity. Since they don’t reserve cluster resources, the cluster can be overcommitted. When cluster Resource Limits reach 80%, additional worker nodes are recommended.

Settings

The ResourceQuota Admission Controller needs to be added to the kube-apiserver manifest to enable Resource Management. Log in to each control node and edit the /etc/kubernetes/manifests/kube-apiserver.yaml file:

  - --enable-admission-plugins=ResourceQuota

Other Admission Controllers may already be configured. There is no required order so the new ResourceQuota Admission Controller can be anywhere in the option.
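
For example, if the NodeRestriction plugin was already enabled, the combined option might look like this; your existing list may differ.

  - --enable-admission-plugins=NodeRestriction,ResourceQuota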

The kube-apiserver on each control node will restart automatically after the change has been saved.

Available Resources

In order to understand when additional resources are required for a cluster, you must calculate the total resource availability of a cluster. This consists of the total CPUs and Memory of all worker nodes less 20% for overhead (Operating System, Kubernetes software, docker, and any additional agents). Other factors may increase the overhead on a worker node and will need to be taken into consideration.
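
A quick way to pull each node’s allocatable CPU and Memory for that calculation (sum the worker nodes, then subtract the overhead):

$ kubectl get nodes -o custom-columns=NAME:.metadata.name,CPU:.status.allocatable.cpu,MEMORY:.status.allocatable.memory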

LimitRange Sample Object

As with all Kubernetes definitions, the yaml file is fairly straightforward. You’ll need to determine the Requests and Limits values since they’re not defined in the containers, then apply it to the cluster. In most cases this applies to a third-party container.

apiVersion: v1
kind: LimitRange
metadata:
  name: vendor-limit-range
  namespace: vendor-system
spec:
  limits:
  - default:
      memory: 512Mi
      cpu: 512m
    defaultRequest:
      memory: 256Mi
      cpu: 128m
    type: Container
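
Assuming the manifest above is saved as vendor-limit-range.yaml (the filename is arbitrary), apply and verify it:

$ kubectl apply -f vendor-limit-range.yaml
$ kubectl describe limitrange vendor-limit-range -n vendor-system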

ResourceQuota Sample Object

You’ll need to get the requirements from the Architecture Document and calculate the totals. You’ll note a slight difference in the definition between a LimitRange and a ResourceQuota Object.

apiVersion: v1
kind: ResourceQuota
metadata:
  name: inventory-rq
  namespace: inventory
spec:
  hard:
    limits.cpu: "24"
    limits.memory: 24Gi
    requests.cpu: "18"
    requests.memory: 18Gi

Resource Management

Important note: when a ResourceQuota has been configured for a namespace, pods with containers that don’t have resources defined will not start. A recent example is a two-container deployment that defined resources for the main container but not the init container. The main container failed to start because the init container couldn’t be created.

Conditions:
  Type             Status  Reason
  ----             ------  ------
  Progressing      True    NewReplicaSetAvailable
  Available        False   MinimumReplicasUnavailable
  ReplicaFailure   True    FailedCreate

When cluster resources have been exhausted, a couple of errors can be noted in the pod description.

Warning  FailedScheduling  78m (x11 over 80m)  default-scheduler  0/4 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 3 Insufficient memory.

And

Error from server (Forbidden): error when creating "quota-mem-cpu-demo-2.yaml": pods "quota-mem-cpu-info-2" is forbidden: exceeded quota: mem-cpu-demo, requested: requests.memory=700Mi, used: requests.memory=600Mi, limited: requests.memory=1Gi

The events are informing you of a resource issue. Review the resource usage and adjust appropriately or add more resources to the cluster.

Finally, adding a ResourceQuota to a namespace does not immediately take effect for running pods; you need to restart the containers. If, after a ResourceQuota object is applied, all the values in the status.used section are at zero (0), the pods need to be restarted for the ResourceQuota to take effect.

$ kubectl get resourcequota inventory-rq -n inventory -o yaml
apiVersion: v1
items:
- apiVersion: v1
  kind: ResourceQuota
  metadata:
    annotations:
      kubectl.kubernetes.io/last-applied-configuration: |
        {"apiVersion":"v1","kind":"ResourceQuota","metadata":{"annotations":{},"name":"inventory-rq","namespace":"inventory"},"spec":{"hard":{"limits.cpu":"5","limits.memory":"4Gi","requests.cpu":"3","requests.memory":"3Gi"}}}
    creationTimestamp: "2020-02-11T22:12:30Z"
    name: inventory-rq
    namespace: inventory
    resourceVersion: "17256997"
    selfLink: /api/v1/namespaces/inventory/resourcequotas/inventory-rq
    uid: 1795b194-dc50-41e8-978d-28c7803aec5a
  spec:
    hard:
      limits.cpu: "5"
      limits.memory: 4Gi
      requests.cpu: "3"
      requests.memory: 3Gi
  status:
    hard:
      limits.cpu: "5"
      limits.memory: 4Gi
      requests.cpu: "3"
      requests.memory: 3Gi
    used:
      limits.cpu: "0"
      limits.memory: "0"
      requests.cpu: "0"
      requests.memory: "0"
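
One way to bounce the pods for a Deployment-based product is kubectl rollout restart, available since kubectl 1.15; the deployment name here is just an example.

$ kubectl rollout restart deployment inventory-app -n inventory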

After restarting the pods, the amount of resources used is then calculated.

$ kubectl get resourcequota inventory-rq -n inventory -o yaml
apiVersion: v1
items:
- apiVersion: v1
  kind: ResourceQuota
  metadata:
    annotations:
      kubectl.kubernetes.io/last-applied-configuration: |
        {"apiVersion":"v1","kind":"ResourceQuota","metadata":{"annotations":{},"name":"inventory-rq","namespace":"inventory"},"spec":{"hard":{"limits.cpu":"5","limits.memory":"4Gi","requests.cpu":"3","requests.memory":"3Gi"}}}
    creationTimestamp: "2020-02-11T22:12:30Z"
    name: inventory-rq
    namespace: inventory
    resourceVersion: "17258299"
    selfLink: /api/v1/namespaces/inventory/resourcequotas/inventory-rq
    uid: 1795b194-dc50-41e8-978d-28c7803aec5a
  spec:
    hard:
      limits.cpu: "5"
      limits.memory: 4Gi
      requests.cpu: "3"
      requests.memory: 3Gi
  status:
    hard:
      limits.cpu: "5"
      limits.memory: 4Gi
      requests.cpu: "3"
      requests.memory: 3Gi
    used:
      limits.cpu: 2400m
      limits.memory: 2400Mi
      requests.cpu: 1200m
      requests.memory: 1800Mi

Reference


Homelab

Well, I figure I should list out my gear as I finally picked up my third R710 and got it running and attached. It’s a moderate setup compared to some I’ve seen here 🙂 I’m less a hardware/network guy and more an OS and now Kubernetes guy.

Network:

  • High-Speed WiFi connection to the Internet (the black box on the bottom left).
  • Linksys EA9500 Wifi hub for the house.
  • HP 1910 24 Port Gigabit Ethernet Switch.
  • HP 1910 48 Port Gigabit Ethernet Switch.

Servers:

  • Nikodemus: Dell R710, 2 Intel 5680’s, 288G Ram, 14 TB Raid 5.
  • Slash: Dell R710, 2 Intel 5660’s, 288G Ram, 14 TB Raid 5.
  • Monkey: Dell R710, 2 Intel 5670’s, 288G Ram, 14 TB Raid 5.
  • Willow: Dell R410, 2 Intel 5649’s, 16G Ram, 4 TB RAID 10.

Array:

  • Sun 2540 Fiber Array filled with 24 TB. It’s not on and I’ve not configured it yet other than to make sure it works as I haven’t needed the additional space yet.

UPS:

  • Two APC Back-UPS [XS]1500 split between the three servers for uninterrupted power. Lasts about 20 minutes. Sufficient time to run the Ansible playbooks to shut down all the servers before the power goes out.

Software:

I bought the VMware package from VMUG so I have license keys for a bunch of stuff. vCenter is limited to 6 CPUs so the third R710 finishes that up. I can get the 6.7 software but haven’t pulled the trigger on that yet. My next exploration is Distributed Switches and Ports (classes this past weekend) and then vSAN and VLANs.

All three servers are booting off an internal 16G USB thumb drive.

  • vSphere 6.5
  • vCenter 6.5

Most of what I’m doing fits into two categories. Personal stuff and a duplication of work stuff in order to improve my skills.

I have about 103 virtual machines as of the last time I checked. Most of my VMs are CentOS or Red Hat since I work in a Red Hat shop, and a few Ubuntu, one Solaris, and a couple of Windows workstations. I am going to add a few others like FreeBSD, Slackware, SUSE, and maybe Mint.

Personal:

  • pfSense. Firewall plus other internal stuff like DNS and Load Balancing. I have all servers cabled to the Internet and House Wifi so I can move pfSense to any of the three to restore access.
  • Jump Servers. I have three jump servers I use basically for utility type work. My Ansible playbooks are on these servers.
  • Hobby Software Development. This is kind of a dual purpose thing. I’m basically trying to duplicate how work builds projects by applying the same tools to my home development process. CI/CD: gitlab, jenkins, ansible tower, and artifactory. Development: original server, git server, and Laravel server for a couple of bigger programs
  • Identity Management server. Centralized user management. All servers are configured.
  • Spacewalk. I don’t want every server downloading packages from the ‘net. I’m on a high-speed wifi setup where I live. So I’m downloading the bulk of packages to the Spacewalk server and upgrading using it as the source.
  • Inventory. This is a local installation of the inventory program I wrote for my work servers. This has my local servers though. Basically it’s the ‘eat your own dog food’ sort of thing. 🙂
  • Plex Servers. With almost 8 TB of space in use, I have two servers split between the R710’s. If I activate the 2540, I may combine them into one, but there’s no good reason at this time. I’ve disabled the software for now; it’s very chatty and was overwhelming the log server. The Movie Server has about 3 or so TB of movies I’ve ripped from my collection, and the Television Server about 4 TB of television shows I’ve ripped from my collection.
  • Backups. I have two backup servers. One for local/desktop backups via Samba and one for remote backups of my physical server which is hosted in Florida.
  • Windows XP. I have two pieces of hardware that are perfectly fine but only work on XP so I have an XP workstation so I can keep using the hardware.
  • Windows 7. Just to see if I could really 🙂
  • Grafana. I’ve been graphing out portions of the environment but am still in the learning phase.

Work/Skills:

  • Work development server. The scripts and code I write at work, backed up to this server and also spread out amongst my servers.
  • Nagios Servers. I have 3. One to monitor my Florida server, One to monitor my personal servers (above), and one to monitor my Work type servers. All three monitor the other two servers so if one bails, I’ll get notifications.
  • Docker Server. Basically learning docker.
  • Kubernetes Servers. I have three Kubernetes clusters for various testing scenarios; three masters and three workers.
  • Elastic Stack clusters. This is Kibana, Logstash, and multiple Elasticsearch servers; basically centralized log management. Just like Kubernetes, three clusters for testing.
  • A Hashicorp Vault server for testing to see if it’ll work for what we need at work (secrets management).
  • Salt. One salt master for testing. All servers are connected.
  • Terraform. One for testing.
  • Jira server. Basically trying to get familiar with the software
  • Confluence. Again, trying to get used to it. I normally use a Wiki but work is transferring things over to Confluence.
  • Wiki. This is a duplicate of my work wikis, basically copying all the documentation I’ve written over the past few years.
  • Solaris 2540. This manages my 2540 array.

My wife is a DBA so I have a few database servers up, partly for her and partly for my use.

  • Cassandra
  • Postgresql.
  • Postgresql – This one I stood up for Jira and Confluence
  • Microsoft SQL Server
  • MySQL Master/Master cluster. This is mainly used by my applications but there for my wife to test things on as well.

I will note that I’m 63 and have been mucking about with computers for almost 40 years now. I have several expired and active certifications. 3Com 3Wizard, Sun System and Network Admin, Cisco CCNA and CCNP, Red Hat RHCSA, and most recently a CKA.


State of the Game Room – 2019

A reflection of the past 12 months of gaming. This includes board, card, and role playing.

In reviewing the Game Inventory I keep, I picked up some 259 games and expansions this year, not counting dice. The bulk of these are role playing items; since RPGs tend to have lots of books, that count covers only a handful of actual RPGs. For unique titles, board games exceed everything else.

Role Playing

I’ve enjoyed RPGs since I got exposed to them in 1977 in an Army Recreation Center. I bought the box set of Dungeons and Dragons not long after and started making dungeons. I’ve run or played in numerous RPGs over the years, with my main games being Advanced Dungeons and Dragons and Shadowrun. For AD&D, I snagged quite a few other RPGs in order to mine them for ideas for my main game. It wasn’t until I got into Shadowrun that I really explored other settings, with Paranoia being the third most run game for me. Over the past year, I’ve worked on Shadowrun 6th as a playtester and on other Shadowrun books as a proofreader. I exposed my group to the Conan 2d20 RPG but we really didn’t get far into it. I think mainly the group (and I) just don’t have the time to read and prepare for RPGs any more, especially new ones. The team has played Shadowrun in the past. Maybe a return to Shadowrun is in order, possibly sticking with the 20th Anniversary Edition as I’m most familiar with that one, but maybe going back to 2nd Edition.

I did pick up several RPG books over the past year. I keep up on Dungeons and Dragons, probably more like a collector than someone who’s going to actually run any D&D games, although I am a fan of the Adventures in Middle-Earth series so perhaps there’s hope. Of course I picked up a few of the Conan 2d20 RPG books since the team was interested. And Genesys, especially Shadow of the Beanstalk as it’s a cyberpunk-type setting.

Last Christmas my girlfriend bought me a box of miniatures for Shadowrun so that was quite cool. I also picked up several Shadowrun books and PDFs. The biggest purchase was getting my Star Wars RPGs updated. Turns out my Friendly Local Gaming Store (FLGS) hadn’t been keeping up on the releases. I stumbled on a posting somewhere and checked out Fantasy Flight Games to see what I was missing. And a close friend worked on the new Wrath and Glory Warhammer RPG so I got in on the kickstarter and I have the Collector’s Edition.

The ones in bold are new games that include the core rule book. The rest are expansions or items like miniatures or other non-rulebook accessories.

  • Alien – 2
  • Conan 2d20 – 16
  • Dungeons & Dragons – 15
  • Genesys – 6
  • Shadowrun – 23
  • Star Trek – 1
  • Star Wars – 61
  • Traveller – 2
  • Warhammer (Wrath & Glory) – 8

Card Games

We did play a few card games over the past year. Cards Against Humanity seems to pop up now and then plus others like Clank!, Race for the Galaxy, Love Letter, and Gloom in Space. I also kept up on my Arkham Horror Card Game even though we stopped playing that one last year.

For the Munchkin one below, I’d stumbled upon the Girl Genius online comic again and one of the cartoons referenced a special Girl Genius Munchkin pack. I’m a huge fan of Phil Foglio’s art so I specially ordered it from my FLGS.

  • Arkham Horror – 15
  • Cards Against Humanity – 2
  • Clank! – 1
  • DC Deck Building Game – 1
  • Epic Spell Wars of the Battle Wizards – 1
  • Exploding Kittens – 1
  • Love Letter – 1
  • Munchkin – 1

Board Games

I did pick up quite a few board games this year and even played more than normal. The team seems to have more fun with a quick (or even lengthy) board game than spending time to learn and understand RPG rules.

By far, Zombicide had the most items come in this year. The team was interested in Zombicide and Jeanne and I even played a game that didn’t include the team. Zombicide is a several hours long game that can test your patience. Jeanne did an awesome job on our session saving her entire team when I was ready to abandon them and head on out.

A friend at work received two copies of the kickstarter Shadowrun Sprawl Ops, a Shadowrun specific board game. He gifted me with the second copy plus a copy of The 7th Continent. The Sprawl Ops rules weren’t the best and I had to do some research on the ‘net and Board Game Geek to get some clarity on the rules. Once we had it right, the team had quite a bit more fun with the game.

Wingspan was one of the more interesting purchases. My FLGS owner (Jamie) had saved a copy for me during all the hoopla over the distribution of the game. It had received a lot of attention because the publisher had underestimated demand and had to make several print runs. The biggest issue was that the FLGSs weren’t getting complete orders whereas Amazon.com was. The games were selling for the normal price and immediately being turned around for 3 and 4 times the cost over on eBay. I will say, the game was quite fun and we played it several times over the summer.

I’d picked up Clank! a couple or so years back, but the name and the fact that it was a deck building game were a bit of a turn-off in general. Jeanne and I enjoyed the DC Deck Building game in the past, but we really didn’t much like the Legendary Deck Building game, so we were 50/50 on getting Clank!. I did pick up Clank! expansions and Clank! in Space. Jeanne and I played it and it turned out to be fun enough that Jeanne insisted on a second play (she’d lost and she’s very competitive). “Clank!” is simply the sound you’re making to alert the bad guy (a Dragon) in the dungeon while you’re hunting through the caverns looking for artifacts. Clank! in Space is a similar game except that you’re on a spaceship stealing artifacts. I snagged Clank! Legacy this past week. There are several Legacy-type games where, as you play, you destroy cards, add stickers, and generally modify the board game as you complete missions. The games are still playable after the campaign; however, the 10-game series does make each person’s copy somewhat unique. I look forward to running the team through the game. It might be a bit shorter than Zombicide, although that’s still on the list to be played.

Jeanne and I got married back in June and we had a board game themed wedding reception. I bought copies of Love Letter (of course), Ticket to Ride, and a new one for us, Sagrada. In this case, the box design was a bit of a turn-off. I’d seen it in the FLGS and Jamie recommended it so Jeanne and I snagged a copy so we knew how to play before the wedding. It’s not a bad game, kind of a Sudoku type game. You have a 6 space grid (6 across and 6 down) and roll dice to fill in the grids based on the underlying selected card, rules as defined by a couple of drawn cards, and general rules. It’s certainly a thinking game with less interaction with the rest of the players. We also had a copy of Cards Against Humanity.

Other games we’ve played over the past year: Bunny Kingdom, Gizmos, The Doom That Came To Atlantic City, Concept, Horrified, Isle of Skye, Photosynthesis, and Trains.

I’m not going to list all the board games, just the number of new and expansions.

  • Number of New Board Games: 36
  • Number of Expansions: 62

Pictures!

I need to get the games in order again and probably pick up another Kallax bookshelf.

Looking from the door to the window.


And looking from the window to the door.

Shepherd’s Pie

Preparing the Potatoes

  • 1 1/2 lb. potatoes, peeled
  • Kosher salt
  • 4 tbsp. melted butter
  • 1/4 c. milk
  • 1/4 c. sour cream
  • Freshly ground black pepper

Preparing the beef filling

  • 1 tbsp. extra-virgin olive oil
  • 1 large onion, chopped
  • 2 carrots, peeled and chopped
  • 2 cloves garlic, minced
  • 1 tsp. fresh thyme
  • 1 1/2 lb. ground beef
  • 1 c. frozen peas
  • 1 c. frozen corn
  • 2 tbsp. all-purpose flour
  • 2/3 c. low-sodium chicken broth
  • 1 tbsp. freshly chopped parsley, for garnish

Directions

Preheat oven to 400 degrees.

Make mashed potatoes. In a large pot, cover potatoes with water and add a generous pinch of salt. Bring to a boil and cook until totally soft, about 16 to 18 minutes. Drain and return to the pot.

Use a potato masher to mash potatoes until they’re smooth. Add melted butter, milk, and sour cream. Mash together until fully incorporated, then season with salt and pepper. Set aside.

Make beef filling. In a large, ovenproof skillet over medium heat, heat the olive oil. Add onion, carrots, garlic, and thyme and cook until fragrant and softened, about 5 minutes. Add ground beef and cook until no longer pink, about 5 more minutes. Drain the fat.

Stir in the frozen peas and corn and cook until warmed through, about 3 minutes. Season with salt and pepper.

Sprinkle the meat with flour and stir to evenly distribute. Cook 1 minute more and add the chicken broth. Bring to a simmer and let the mixture thicken slightly, about 5 minutes more.

Top the beef mixture with an even layer of mashed potatoes and bake in the oven until there is little moisture and the mashed potatoes are golden; about 20 minutes will do it. A quick broil at the end will make the potatoes a bit crispier.


My Tech Certifications – A History

I don’t as a rule chase technical certifications. As a technical person who’s been mucking about with computers since around 1981, and as someone who has been on the hiring side of the desk, I see certifications as similar to some college degrees: they might get you in the door, but you still have to pass the practical exam with the technical staff in order to get hired.

Don’t get me wrong, the certification at least gets you past the recruiter/HR rep. Probably. At least where I am, the recruiter has a list of questions plus you have to get past my manager’s review before it even gets into my hands for a yes/no vote.

I have several certifications over the years and some have been challenging. I basically have a goal for going after the certification and generally it’s to validate my existing skills and maybe pick up a couple of extra bits that are outside my current work environment.

Back in the 80’s, I was installing and managing Novell and 3Com Local Area Networks (LANs). At one medium-sized company, I was the first full-time LAN Manager. In order to get access to the inner support network, I took quite a few 3Com classes and eventually went for the final certification. The certification would give me access to CompuServe and the desired support network.

I did pass of course, and being a gamer, I enjoyed the heck out of the certification title.

Certification 1: 3Com 3Wizard

I’ve taken quite a few various training courses over the years. IBM’s OS/2 class down in North Carolina. Novell training (remember the ‘Paper CNE’ 🙂 ), and even MS-DOS 5 classes. About this time (early 90’s), I’d been on Usenet for 4 or 5 years. I’d written a Usenet news reader (Metal) and was very familiar with RFCs and how the Usenet server specifically worked. I had stumbled on Linux when Linus released it but I didn’t actually install a Linux server on an old 386 I had until Slackware came out with a crapload of diskettes. I had an internet connection at home on PSINet.

Basically I was poking at Linux.

In the mid 90’s, I was ready to change jobs. I had been moved from a single department to the overall organization (NASA HQ) and what I was going to be working on was going to be reduced from everything for the department to file and print and user management. I was approached by the Unix team and manager. “Hey, how about you shift to the Unix team?” It honestly took me a week to consider it but I eventually accepted. I was given ownership of the Usenet server 🙂 and the Essential Sysadmin book and over 30 days, I devoured the book and even sent in a correction to the author (credit in the next edition 🙂 ). After 2 years of digging in, documenting, and researching plus attending a couple of Solaris classes, I went for the Sun certification. This was really just so I could validate my skills. I didn’t need the certs for anything as there wasn’t a deeper support network you gained access to when you got it.

Certification 2: Sun Certified Solaris Administrator

Certification 3: Sun Certified Network Administrator

A few years later the subcontractor I was working for lost the Unix position. They were a programming shop (databases) and couldn’t justify the position. I was interested in learning more about networking and wanted to take a couple of classes. The new subcontractor offered me a chance at a boot camp for Cisco. I accepted and for several weeks, I attended the boot camp. I wasn’t working on any Cisco gear so basically concentrated on networking concepts more than anything else. I barely even took any notes 🙂  But I also figured that since the company was paying for the class ($10,000), I should at least get the certifications. The CCNA certification was a single test on all the basics of Cisco devices and networking. The CCNP certification was multiple tests, each one focusing on each category vs an overall test like the CCNA one was. The farther away from the class I got, the harder it was to pass the tests. CCNA was quick and easy. I passed the next couple with one test. The next took a couple of tests. The last took 3 tests. But I did pass and get my certifications.

Certification 4: Cisco Certified Network Associate

Certification 5: Cisco Certified Network Professional

I did actually manage firewalls after I got the certification, but I really am a systems admin and the command line and concepts were outside my wheelhouse. I tried to take the refresher certification but they’d gone to hands on testing vs multiple choice and since I wasn’t actually managing Cisco gear, I failed.

I’d been running Red Hat Linux on my home firewall for a while but I switched to Mandrake for a bit, then Mandriva, then Ubuntu. I also set up a remote server in Florida running OpenBSD, so I was still poking at operating systems and still a system admin sort of person. At my job now, I was hired because of my enterprise knowledge, working with Solaris, AIX, Irix, and various Linux distros. Since Sun was purchased by Oracle and then abandoned, I’ve been moving more into Red Hat management, getting deeper and deeper into it. We’re also using HP-UX and had a few Tru64 servers in addition to a FreeBSD server and Red Hat servers. I’d taken several Red Hat training courses (cluster management, performance tuning, etc.) and eventually decided to go for my certifications. It seems like I’ve been getting a cert or two every 10 years 🙂  3Wizard in the 80’s. Sun in the 90’s. And Cisco in the 00’s. So I signed up for the Red Hat Certified System Administrator test and the Red Hat Certified Engineer test. It took two tries to get the RHCSA certificate. The first part of the test is to break into the test server. Took me 30 minutes the first time to remember how to do that. The RHCE test was a bit different. You had to create servers vs just use them as in the RHCSA test. Shoot, if I need to create a server, I don’t need to memorize how to do it. I document the process after research. Anyway, after two tries at the RHCE test, I dropped trying.

Certification 6: Red Hat Certified System Administrator

With Red Hat 8 out, I’ll give it a year and for the 20’s try for the RHCSA and RHCE again.

Here’s an odd thing though. These are all Operating System certifications. I’m a Systems Admin. I manage servers. I enjoy managing servers. I’ve considered studying for and getting certifications for MySQL, for example, since I do a lot of coding for one of my applications (and several smaller ones) and would like to expand my knowledge of databases. I’m sure I’m doing some things the hard way 🙂  Work actually gave me (for free!) two Dell R710 servers as they were being replaced. The first one I set up to replace my Ubuntu gateway, so it was a full install of Linux and a firewall. Basically a replacement. All my code was on it, my backups, web sites, etc. But the second server showed up and the guys on the team talked me into installing VMware’s vSphere software to turn the server into a VMware host able to run multiple virtual servers. And I stepped up and signed up for the VMware Users Group (VMUG) because I could get a discount on vCenter, which lets me turn the two R710’s into a VMware cluster.

In addition, I took over control of the Kubernetes clusters at work. The Product Engineers had thrown it over the wall at Operations and it had sat idle. After I took it over, I updated the scripts I’d initially created to deploy the original clusters to start building new clusters. I’ve been digging deeper and deeper into Kubernetes in order to be able to support it. On the Product Engineering side, they’re building containers and pods to be deployed into the Kubernetes environments, so they’re familiar with Kubernetes with regards to deployments and some rudimentary management, but they’re not building or supporting the clusters. I am. I’m it. My boss recently asked me, “who’s our support contract with for Kubernetes?” and my answer was, “me, just me.”

So I decided to try to take the Kubernetes exams. This is the first non-operating-system exam and certification I’ve attempted. Note that I considered it for MySQL and others, but never actually moved forward with them. For Kubernetes, since I’m it, I figured I should dig in deeper and get smarter. I took the exam and failed it. But I realized that they were looking for application development knowledge as well, which, as an Operations admin, I’m not involved in. So I took the Application Developer course, took the exam again last week, and passed it. And since I was taking the AppDev course, I figured I’d take the AppDev test. But I failed that as well. The first time. I expect I’ll be able to pass it the second time I try (I have a year for the free retake).

Certification 7: Certified Kubernetes Administrator

Over the past few days, I’ve been touting the CKA cert. I even have a printed copy of the cert at my desk. It’s the first one I’ve taken that’s not Operating System specific.

Certification 8: Certified Kubernetes Application Developer

A Year Later: I started receiving a few emails from the Linux Foundation: your second test opportunity is about to expire. So I sucked it up and spent a month studying for the CKAD. I’d done a lot more in the past year and felt I was better prepared to take the test. I retook the Linux Academy course and even picked up a separate book just for some extra, different guidance. The book did clarify one thing for me that I hadn’t quite grokked: Labels. I mean, I know what a label is, but I wasn’t fully clear on the functionality of it. Since there’s no container or pod identity, there’s no other way to associate things with a task. I got it because I’d been using tags to group products together in order to run Ansible playbooks against them. The containers don’t have a set IP address, they don’t have a DNS name, they just have a label, ‘app: llamas’. So any container with the ‘app: llamas’ label has specific rights. Anyway, I took the test and passed it, so one more certification.

Certification 9: AWS Certified Cloud Practitioner

The AWS CCP exam is a core or entry-level exam. I started taking the Linux Academy course and it was basically a matter of matching up the Amazon terminology with how things work in Operations. Once I had them matched, I was able to take the test less than a week later and pass it. I’ve started studying for the AWS Certified SysOps Associate exam and will follow it up with the AWS Certified DevOps Professional and then the Security track. In the meantime though, I’m taking the OpenShift Administration classes. So who knows what the next certification will be?

Carl – 3Wizard, SCSA/SCNA, CCNA/CCNP, RHCSA, CKA, CKAD, AWS CCP


Beef and Bean Taco Skillet

  • 1 lb beef
  • 1 1.4oz packet taco seasoning
  • 1 16oz can pinto beans
  • 1 10.75oz can condensed tomato soup
  • 1/2 cup salsa
  • 1/4 cup water
  • 1/2 cup shredded cheese
  • 6 6” flour tortillas

Cook beef in a 10-inch skillet over medium-high heat until well browned; break up any clumps of beef. Drain fat. Stir in taco seasoning. Add beans, soup, salsa, and water. Reduce to low heat and simmer for 10 minutes, stirring occasionally. Top with cheese. Serve with flour tortillas.
