Kubernetes Resource Management

Overview

Resource Management has two objectives: ensure sufficient resources are available for all deployed products, and determine when additional resources are required.

This document provides information based on the deployed Kubernetes cluster and is filtered for my specific configurations. For full details on Resource Management, be sure to follow the links in the References section at the end of this document.

Description

Two Objects are used to manage resources in a namespace: the LimitRange Object and the ResourceQuota Object.

LimitRange Object

The LimitRange Object provides default resource values for containers in a namespace that don’t have their resources defined. To set sensible defaults, you need to determine the resources actually used by the containers in the namespace. The command kubectl top pods -n [namespace] will give you the idle values, but you’ll have to determine the maximum values by monitoring usage over time and be ready to adjust as necessary. A resource monitoring tool such as Grafana can provide that information.
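
For example, to check the idle usage of a hypothetical vendor-system namespace (this requires metrics-server to be installed; the pod names and values are illustrative):

$ kubectl top pods -n vendor-system
NAME                          CPU(cores)   MEMORY(bytes)
vendor-app-6d4cf56db6-xk2vp   12m          180Mi
vendor-app-6d4cf56db6-z8r4n   15m          175Mi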

ResourceQuota Object

The ResourceQuota Object is the total of the defined resource requirements for the product. The product Architecture Document will define the minimum (Requests) and maximum (Limits) CPU and Memory values, plus the minimum and maximum number of pods in the product. You then create the ResourceQuota Object with those values, apply it to the product namespace, and restart the pods.

Resource Requests

Requests are the minimum values required for the container to start. They consist of two settings, CPU and Memory, which are slices of the cluster resources configured in millicpus (m) and mebibytes (Mi) or gibibytes (Gi). These values reserve the resources so they are not available to other containers. When cluster Resource Requests reach 80%, additional worker nodes are required.

Resource Limits

Limits are the maximum values the container is expected to consume. Like Requests, they consist of two settings, CPU and Memory. Limits do not reserve cluster resources, but they should be used to determine cluster capacity. Since Limits don’t reserve cluster resources, the cluster can be overcommitted. When cluster Resource Limits reach 80%, additional worker nodes are recommended.
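
For reference, this is what Requests and Limits look like when they are defined in a container spec (a minimal sketch; the names, image, and values are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: inventory-app
  namespace: inventory
spec:
  containers:
  - name: inventory-app
    image: registry.example.com/inventory-app:1.0
    resources:
      requests:
        cpu: 250m
        memory: 256Mi
      limits:
        cpu: 500m
        memory: 512Mi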

Settings

The ResourceQuota Admission Controller needs to be added to the kube-apiserver manifest to enable Resource Management. Log in to each control node and edit the /etc/kubernetes/manifests/kube-apiserver.yaml file:

  - --enable-admission-plugins=ResourceQuota

Other Admission Controllers may already be configured. There is no required order, so ResourceQuota can appear anywhere in the comma-separated list.
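
For example, if the NodeRestriction plugin is already enabled, the combined option might look like this:

  - --enable-admission-plugins=NodeRestriction,ResourceQuota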

The kube-apiserver on each control node will restart automatically after the change has been saved.

Available Resources

To understand when additional resources are required for a cluster, you must calculate the total resource availability of the cluster. This consists of the total CPUs and Memory of all worker nodes, less 20% for overhead (operating system, Kubernetes software, docker, and any additional agents). Other factors may increase the overhead on a worker node and will need to be taken into consideration.
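
As a hypothetical example, four worker nodes with 16 CPUs and 64Gi of Memory each provide 64 CPUs and 256Gi in total. Less 20% for overhead, that leaves roughly 51 CPUs and 204Gi available for workloads, so the 80% thresholds described above would be reached at about 41 CPUs and 164Gi of Requests.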

LimitRange Sample Object

As with all Kubernetes definitions, the yaml file is fairly straightforward. You’ll need to determine the Requests and Limits values since they’re not defined in the containers, then apply the definition to the cluster. In most cases this applies to a third-party container.

apiVersion: v1
kind: LimitRange
metadata:
  name: vendor-limit-range
  namespace: vendor-system
spec:
  limits:
  - default:
      memory: 512Mi
      cpu: 512m
    defaultRequest:
      memory: 256Mi
      cpu: 128m
    type: Container
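
Assuming the definition is saved as vendor-limit-range.yaml, apply it and then verify the defaults:

$ kubectl apply -f vendor-limit-range.yaml
$ kubectl describe limitrange vendor-limit-range -n vendor-system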

ResourceQuota Sample Object

You’ll need to get the requirements from the Architecture Document and calculate the totals. You’ll note a slight difference in the definition between a LimitRange and a ResourceQuota Object.
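
For example, if the Architecture Document called for up to 12 pods, each requesting 1.5 CPU and 1.5Gi of Memory with limits of 2 CPU and 2Gi, the totals would be Requests of 18 CPU and 18Gi and Limits of 24 CPU and 24Gi, which are the values used in the sample below (the per-pod numbers are illustrative).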

apiVersion: v1
kind: ResourceQuota
metadata:
  name: inventory-rq
  namespace: inventory
spec:
  hard:
    limits.cpu: "24"
    limits.memory: 24Gi
    requests.cpu: "18"
    requests.memory: 18Gi
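
Assuming the definition is saved as inventory-rq.yaml, apply it and confirm it’s in place:

$ kubectl apply -f inventory-rq.yaml
$ kubectl get resourcequota -n inventory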

Resource Management

Important note: when a ResourceQuota has been configured for a namespace, pods with containers that don’t have resources defined will not start. A recent example is a two-container deployment that defined resources for the main container but not the init container. The main container failed because the init container didn’t start.

Conditions:
  Type             Status  Reason
  ----             ------  ------
  Progressing      True    NewReplicaSetAvailable
  Available        False   MinimumReplicasUnavailable
  ReplicaFailure   True    FailedCreate
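
The fix in that case was to define resources for the init container as well. A minimal sketch of the relevant fragment (the name, image, command, and values are illustrative):

spec:
  initContainers:
  - name: init-config
    image: busybox:1.35
    command: ["sh", "-c", "until nslookup inventory-db; do sleep 2; done"]
    resources:
      requests:
        cpu: 100m
        memory: 64Mi
      limits:
        cpu: 200m
        memory: 128Mi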

When cluster resources have been exhausted, a couple of errors can be noted in the pod description.

Warning  FailedScheduling  78m (x11 over 80m)  default-scheduler  0/4 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 3 Insufficient memory.

And

Error from server (Forbidden): error when creating "quota-mem-cpu-demo-2.yaml": pods "quota-mem-cpu-info-2" is forbidden: exceeded quota: mem-cpu-demo, requested: requests.memory=700Mi, used: requests.memory=600Mi, limited: requests.memory=1Gi

The events are informing you of a resource issue. Review the resource usage and adjust appropriately or add more resources to the cluster.
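
A quick way to review usage is to compare the quota against what’s been allocated and check the nodes themselves (kubectl top requires metrics-server; the inventory namespace is from the earlier example):

$ kubectl describe resourcequota -n inventory
$ kubectl describe nodes | grep -A 5 "Allocated resources"
$ kubectl top nodes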

Finally, adding a ResourceQuota to a namespace does not immediately take effect; you need to restart the containers. If, after the ResourceQuota Object is applied, all the values in the status.used section are at zero (0), the pods need to be restarted for the ResourceQuota to take effect.

$ kubectl get resourcequota -n inventory -o yaml
apiVersion: v1
items:
- apiVersion: v1
  kind: ResourceQuota
  metadata:
    annotations:
      kubectl.kubernetes.io/last-applied-configuration: |
        {"apiVersion":"v1","kind":"ResourceQuota","metadata":{"annotations":{},"name":"inventory-rq","namespace":"inventory"},"spec":{"hard":{"limits.cpu":"5","limits.memory":"4Gi","requests.cpu":"3","requests.memory":"3Gi"}}}
    creationTimestamp: "2020-02-11T22:12:30Z"
    name: inventory-rq
    namespace: inventory
    resourceVersion: "17256997"
    selfLink: /api/v1/namespaces/inventory/resourcequotas/inventory-rq
    uid: 1795b194-dc50-41e8-978d-28c7803aec5a
  spec:
    hard:
      limits.cpu: "5"
      limits.memory: 4Gi
      requests.cpu: "3"
      requests.memory: 3Gi
  status:
    hard:
      limits.cpu: "5"
      limits.memory: 4Gi
      requests.cpu: "3"
      requests.memory: 3Gi
    used:
      limits.cpu: "0"
      limits.memory: "0"
      requests.cpu: "0"
      requests.memory: "0"
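
A rolling restart of the product’s Deployments will recreate the pods so the quota can track them (assuming the workloads run as Deployments):

$ kubectl rollout restart deployment -n inventory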

After restarting the pods, the amount of resources used is then calculated.

$ kubectl get resourcequota -n inventory -o yaml
apiVersion: v1
items:
- apiVersion: v1
  kind: ResourceQuota
  metadata:
    annotations:
      kubectl.kubernetes.io/last-applied-configuration: |
        {"apiVersion":"v1","kind":"ResourceQuota","metadata":{"annotations":{},"name":"inventory-rq","namespace":"inventory"},"spec":{"hard":{"limits.cpu":"5","limits.memory":"4Gi","requests.cpu":"3","requests.memory":"3Gi"}}}
    creationTimestamp: "2020-02-11T22:12:30Z"
    name: inventory-rq
    namespace: inventory
    resourceVersion: "17258299"
    selfLink: /api/v1/namespaces/inventory/resourcequotas/inventory-rq
    uid: 1795b194-dc50-41e8-978d-28c7803aec5a
  spec:
    hard:
      limits.cpu: "5"
      limits.memory: 4Gi
      requests.cpu: "3"
      requests.memory: 3Gi
  status:
    hard:
      limits.cpu: "5"
      limits.memory: 4Gi
      requests.cpu: "3"
      requests.memory: 3Gi
    used:
      limits.cpu: 2400m
      limits.memory: 2400Mi
      requests.cpu: 1200m
      requests.memory: 1800Mi

References

Kubernetes Documentation, Resource Quotas: https://kubernetes.io/docs/concepts/policy/resource-quotas/
Kubernetes Documentation, Limit Ranges: https://kubernetes.io/docs/concepts/policy/limit-range/
Kubernetes Documentation, Configure Memory and CPU Quotas for a Namespace: https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/quota-memory-cpu-namespace/
