Load Balancing Kubernetes

Overview

This article describes how I set up my two HAProxy servers (yes, two) to provide access to the Kubernetes cluster.

Configuration

To emulate a production-like environment, I’m configuring two HAProxy servers to provide access to the Kubernetes cluster. To ensure access to Kubernetes continues if one server fails, I’m also installing keepalived. In addition, I’m using a tool called monit to restart the haproxy binary if it stops running.

The server configuration isn’t gigantic. I am using my default CentOS 7.9 image, so each server has 2 CPUs, 4 GB of memory, and 100 GB of storage, of which only 32 GB is allocated.

HAProxy

I am making a few changes to the default installation of haproxy. In the global block, the following configuration is in place.

global
        log /dev/log local0
        log /dev/log local1 notice
        chroot /var/lib/haproxy
        stats socket /var/lib/haproxy/admin.sock mode 660 level admin expose-fd listeners
        stats timeout 30s
        user haproxy
        group haproxy
        daemon

        # Default SSL material locations
        ca-base /etc/ssl/certs
        crt-base /etc/ssl/private

        # Default ciphers to use on SSL-enabled listening sockets.
        # For more information, see ciphers(1SSL). This list is from:
        #  https://hynek.me/articles/hardening-your-web-servers-ssl-ciphers/
        # An alternative list with additional directives can be obtained from
        #  https://mozilla.github.io/server-side-tls/ssl-config-generator/?server=haproxy
        ssl-default-bind-ciphers ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:RSA+AESGCM:RSA+AES:!aNULL:!MD5:!DSS
        ssl-default-bind-options no-sslv3

In the defaults block of the haproxy.cfg file, the following configuration is in place.

defaults
        log     global
        mode    http
        option  httplog
        option  dontlognull
        timeout connect 5s
        timeout client  50s
        timeout server  50s

I also added a listener on port 1936 so you can view various statistics in a web browser. Don’t forget to open the port in the firewall so you can access the stats.

listen stats
        bind *:1936
        mode http
        log  global
        maxconn 10
        stats enable
        stats hide-version
        stats refresh 30s
        stats show-node
        stats show-desc Stats for the k8s cluster
        stats uri /
        monitor-uri /healthz/ready
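Since the stats listener binds port 1936, that port needs to be open in the firewall. On CentOS 7 with firewalld, for example, something like the following works (adjust the zone if you aren’t using the default):

```shell
# Open the HAProxy stats port (1936/tcp) permanently, then reload firewalld.
firewall-cmd --permanent --add-port=1936/tcp
firewall-cmd --reload
```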

There are two ports that need to be open for the Kubernetes control plane nodes: port 6443 for the API server and port 22623 for the machine config server. Set up the frontend and backend configurations as follows:

frontend kubernetes-api-server
        bind *:6443
        default_backend kubernetes-api-server
        mode tcp
        option tcplog

backend kubernetes-api-server
        mode tcp
        server bldr0cuomkube1 192.168.101.160:6443 check
        server bldr0cuomkube2 192.168.101.161:6443 check
        server bldr0cuomkube3 192.168.101.162:6443 check


frontend machine-config-server
        bind *:22623
        default_backend machine-config-server
        mode tcp
        option tcplog

backend machine-config-server
        mode tcp
        server bldr0cuomkube1 192.168.101.160:22623 check
        server bldr0cuomkube2 192.168.101.161:22623 check
        server bldr0cuomkube3 192.168.101.162:22623 check

For the worker nodes, the following configuration for ports 80 and 443 is required.

frontend ingress-http
        bind *:80
        default_backend ingress-http
        mode tcp
        option tcplog

backend ingress-http
        balance source
        mode tcp
        server bldr0cuomknode1-http-router0 192.168.101.163:80 check
        server bldr0cuomknode2-http-router1 192.168.101.164:80 check
        server bldr0cuomknode3-http-router2 192.168.101.165:80 check


frontend ingress-https
        bind *:443
        default_backend ingress-https
        mode tcp
        option tcplog

backend ingress-https
        balance source
        mode tcp
        server bldr0cuomknode1-http-router0 192.168.101.163:443 check
        server bldr0cuomknode2-http-router1 192.168.101.164:443 check
        server bldr0cuomknode3-http-router2 192.168.101.165:443 check
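With all four frontends in place, a quick TCP probe from any machine that can reach the VIP confirms each port accepts connections. This is a hypothetical helper, not part of HAProxy; the VIP and port list come from the configurations above, and it relies on bash’s /dev/tcp support:

```shell
#!/usr/bin/env bash
# check_tcp HOST PORT -> returns 0 if the port accepts a TCP connection.
check_tcp() {
    timeout 2 bash -c "exec 3<>/dev/tcp/${1}/${2}" 2>/dev/null
}

# Probe every HAProxy-exposed port on the VIP from the keepalived config.
for port in 6443 22623 80 443; do
    if check_tcp 192.168.101.100 "$port"; then
        echo "port $port: open"
    else
        echo "port $port: closed"
    fi
done
```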

Before starting haproxy, you’ll need to do some configuration work. For logging, create the /var/log/haproxy directory as logs will be stored there.

Since we’re using chroot to isolate haproxy, create the /var/lib/haproxy/dev directory. Then create a socket for the logs:

python3 -c "import socket as s; sock = s.socket(s.AF_UNIX, s.SOCK_DGRAM); sock.bind('/var/lib/haproxy/dev/log')"

To point rsyslog at this new socket, add the following configuration file, 49-haproxy.conf, to /etc/rsyslog.d and restart rsyslog.

# Create an additional socket in haproxy's chroot in order to allow logging via
# /dev/log to chroot'ed HAProxy processes
$AddUnixListenSocket /var/lib/haproxy/dev/log

# Send HAProxy messages to a dedicated logfile
if $programname startswith 'haproxy' then /var/log/haproxy/haproxy.log
&~
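Putting the logging pieces together, the setup steps before starting haproxy look roughly like this (paths match the configuration above):

```shell
# Create the log directory and the chroot dev directory for the syslog socket.
mkdir -p /var/log/haproxy
mkdir -p /var/lib/haproxy/dev

# Pick up the new 49-haproxy.conf configuration.
systemctl restart rsyslog
```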

keepalived

Since there are two servers, I have hap1 as the primary and hap2 as the secondary server. On the primary server, use the following configuration.

vrrp_script chk_haproxy {
    script "killall -0 haproxy"
    interval 2
    weight 2
}

vrrp_instance VI_1 {
    interface ens192
    state MASTER
    priority 200

    virtual_router_id 33
    unicast_src_ip 192.168.101.61
    unicast_peer {
        192.168.101.62
    }

    advert_int 1
    authentication {
        auth_type PASS
        auth_pass [unique password]
    }

    virtual_ipaddress {
        192.168.101.100
    }

    track_script {
        chk_haproxy
    }
}

And on the backup server:

vrrp_script chk_haproxy {
    script "killall -0 haproxy"
    interval 2
    weight 2
}

vrrp_instance VI_1 {
    interface ens192
    state BACKUP
    priority 100

    virtual_router_id 33
    unicast_src_ip 192.168.101.62
    unicast_peer {
        192.168.101.61
    }

    advert_int 1
    authentication {
        auth_type PASS
        auth_pass [Unique password]
    }

    virtual_ipaddress {
        192.168.101.100
    }

    track_script {
        chk_haproxy
    }
}
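After starting keepalived on both servers, you can verify which node currently holds the VIP. The interface name and VIP come from the configurations above; the grep should match on exactly one of the two servers:

```shell
# The VIP should appear on ens192 on the active node only.
ip -4 addr show dev ens192 | grep 192.168.101.100
```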

monit

The monit tool watches running processes and, if a process ceases to exist, restarts it. It can be configured to notify admins as well. The following changes were made to the default monit configuration.

Note that the username and password appear to be hard-coded into monit. The best I could do was ensure access was read-only.

set daemon  120              # check services at 2 minute intervals

set log /var/log/monit.log

set idfile /var/lib/monit/.monit.id

set statefile /var/lib/monit/.monit.state

set eventqueue
    basedir /var/lib/monit/events  # set the base directory where events will be stored
    slots 100                      # optionally limit the queue size

set httpd
    port 2812
    address 192.168.101.62                  # only listen on this server's address
    allow 192.168.101.62/255.255.255.255    # allow this server to connect
    allow 192.168.101.90/255.255.255.255    # allow connections from the tool server
    allow 192.168.0.0/255.255.0.0           # allow connections from the internal servers
    allow admin:monit read-only             # require authentication

include /etc/monit.d/*
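The configuration above only sets up the monit daemon and its web interface; the actual process watch lives in /etc/monit.d. A minimal check for haproxy might look like the following (the pid file path is an assumption — verify it against your haproxy service):

```
check process haproxy with pidfile /var/run/haproxy.pid
    start program = "/usr/bin/systemctl start haproxy"
    stop program  = "/usr/bin/systemctl stop haproxy"
```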

