Overview
There are multiple IP ranges in use in a Kubernetes cluster. In addition to the internal networking (handled by Calico in this case), you can install an Ingress Controller to manage access to your applications. This article provides some basic Service information as I explore the networking and work towards exposing my application(s) externally using an Ingress Controller.
Networking
You can manage pod traffic either with a Layer 2 or Layer 3 device or with an Overlay network. Because the Layer 2/3 approach requires maintaining the pod networks in your switches or routers, the easiest method is an Overlay network, which encapsulates pod traffic using VXLAN (Virtual Extensible LAN) and tunnels it between the worker nodes in the cluster.
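As an illustrative sketch only (the pool name and settings shown are Calico defaults, and the 10.42.0.0/16 CIDR simply matches the pod addresses you'll see later in this article), a VXLAN-enabled Calico IPPool looks roughly like this:
apiVersion: projectcalico.org/v3
kind: IPPool
metadata:
  name: default-ipv4-ippool
spec:
  cidr: 10.42.0.0/16
  vxlanMode: Always   # encapsulate pod-to-pod traffic in VXLAN
  ipipMode: Never
  natOutgoing: true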
Services
I'll cover the three ways to provide access to applications in a Service context, along with the pros and cons of each.
ClusterIP
When you create a service, the default configuration assigns a ClusterIP from the pool of IPs defined as the Service Network when you created the Kubernetes cluster. The Service Network is how applications in the cluster find and talk to each other. In my configuration, 10.69.0.0/16 is the network I assigned to the Service Network, so when I look at a set of services, every one has a 10.69.0.0/16 address.
$ kubectl get svc -A
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default kubernetes ClusterIP 10.69.0.1 <none> 443/TCP 8d
default my-nginx NodePort 10.69.210.167 <none> 8080:31201/TCP,443:31713/TCP 11m
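For reference, here's a minimal sketch of a Service that takes the ClusterIP default; the my-app name and label are hypothetical and not part of this cluster:
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  selector:
    app: my-app
  ports:
  - port: 80          # ClusterIP port other pods connect to
    targetPort: 8080  # container port on the backing pods
With no type specified, Kubernetes defaults to ClusterIP and assigns an address from the Service Network.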
NodePort
Configuring a service with the type NodePort is probably the easiest option. You're defining an externally accessible port, in the range 30000-32767, that is associated with your application's port.
apiVersion: v1
kind: Service
metadata:
  name: my-nginx
  namespace: default
  labels:
    run: my-nginx
spec:
  type: NodePort
  ports:
  - name: http
    nodePort: 30100
    port: 8080
    targetPort: 80
    protocol: TCP
  - name: https
    nodePort: 30110
    port: 443
    protocol: TCP
  selector:
    run: my-nginx
When you check the services, you'll see the nodePorts you defined (or ones randomly assigned from the range if you left nodePort out).
$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
my-nginx NodePort 10.69.91.108 <none> 8080:30100/TCP,443:30110/TCP 11h
When using NodePort, you simply access the load-balanced VIP in front of the cluster (the same address used for the API server) and tack on the port. With that you have access to the application.
https://bldr0cuomvip1.dev.internal.pri:30110
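For example, a quick check from the command line (-k because the certificate is self-signed):
curl -k https://bldr0cuomvip1.dev.internal.pri:30110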
The positive aspect here is that regardless of which worker node the container is running on, you always have access. The problem with this method is that your load balancer has to know about the ports and keep its configuration updated, plus your firewall has to allow either a range of ports or an entry for each port. Not a killer, but it can complicate things, especially if you're letting Kubernetes assign the NodePort. Infrastructure as Code does help manage the Load Balancer and firewall configurations pretty well.
As a side note, you can also hit any worker node directly on the defined port and kube-proxy will route the traffic to the correct node. Still, going through the load-balanced VIP with the port number is the better option.
ExternalIPs
The use of externalIPs lets you access an application/container via the IP of the worker node the app is running on. You can then create a DNS entry so users can reach the application by name, without having to tack on a NodePort.
You'd update the above service to add the externalIPs lines, using the IP of the worker node the container is running on. To find that IP, first get the list of pods to see which node the container is running on.
$ kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
curl 1/1 Running 1 (17h ago) 18h 10.42.251.135 bldr0cuomknode1.dev.internal.pri <none> <none>
curl-deployment-7d9ff6d9d4-jz6gj 1/1 Running 0 12h 10.42.251.137 bldr0cuomknode1.dev.internal.pri <none> <none>
echoserver-6f54957b4d-94qm4 1/1 Running 0 45h 10.42.80.7 bldr0cuomknode3.dev.internal.pri <none> <none>
my-nginx-66689dbf87-9x6kt 1/1 Running 0 12h 10.42.80.12 bldr0cuomknode3.dev.internal.pri <none> <none>
We see the my-nginx pod is running on bldr0cuomknode3.dev.internal.pri. Get the IP for it and update the service (I know all my K8S nodes are 160-162 for control and 163-165 for workers so knode3 is 165).
$ kubectl edit svc my-nginx
# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: "2023-04-06T01:49:28Z"
  labels:
    run: my-nginx
  name: my-nginx
  namespace: default
  resourceVersion: "1735857"
  uid: 439abcae-94d8-4810-aa44-2992d7a30a63
spec:
  clusterIP: 10.69.91.108
  clusterIPs:
  - 10.69.91.108
  externalIPs:
  - 192.168.101.165
  externalTrafficPolicy: Cluster
  internalTrafficPolicy: Cluster
  ipFamilies:
  - IPv4
  ipFamilyPolicy: SingleStack
  ports:
  - name: http
    nodePort: 32107
    port: 8080
    protocol: TCP
    targetPort: 80
  - name: https
    nodePort: 31943
    port: 443
    protocol: TCP
    targetPort: 443
  selector:
    run: my-nginx
  sessionAffinity: None
  type: NodePort
status:
  loadBalancer: {}
Then add the externalIPs: lines as noted above and save the edit.
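If you'd rather not use the interactive edit, a kubectl patch sketch (assuming the same node IP) accomplishes the same change:
$ kubectl patch svc my-nginx -p '{"spec":{"externalIPs":["192.168.101.165"]}}'
When done, check the services: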
$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
echoserver NodePort 10.69.249.118 <none> 8080:32356/TCP 45h
kubernetes ClusterIP 10.69.0.1 <none> 443/TCP 8d
my-nginx NodePort 10.69.91.108 192.168.101.165 8080:32107/TCP,443:31943/TCP 13h
If you check the pod output above, note that the echoserver pod is also on knode3 and its service also uses port 8080. The issue here is that two services can't expose the same port on the same externalIP; only the first service will respond. Either move the pod or change one of the services to a unique port. Here both services end up pointing at knode3:
$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
echoserver NodePort 10.69.249.118 192.168.101.165 8080:32356/TCP 45h
kubernetes ClusterIP 10.69.0.1 <none> 443/TCP 8d
my-nginx NodePort 10.69.91.108 192.168.101.165 8080:32107/TCP,443:31943/TCP 12h
Finally, the problem should be clear. If knode3 goes away, goes into maintenance mode, or is replaced, the IP address changes. You'll need to check the pods, update the service to point to the new node, then update DNS to use the new IP address. And depending on the DNS TTL, it could take some time before the new IP address is returned. Also, what happens if you have more than one pod for load balancing, or if you're using Horizontal Pod Autoscaling (HPA)?
Ingress-Controllers
I checked out several ingress controllers and, because Openshift uses an HAProxy-based ingress controller, that's what I went with. There are several others, of course, and you're free to pick the one that suits you.
The benefit of an Ingress Controller is that it combines the positive features of a NodePort and an ExternalIP. Remember, with a NodePort you access your application using the Load Balancer IP or a Worker Node IP plus a unique port number, which is annoying because you have to manage firewall rules for all the ports. With an ExternalIP you can assign an IP to a Service and create a DNS entry pointing at it, so folks can access the site through a well-crafted DNS name. The problem, of course, is that if the node goes away, you have to update DNS with the IP of the new node where the pod now resides.
Installing an Ingress Controller deploys the selected ingress pod, which is registered under an ingress class. You then create an Ingress route that references that class in a metadata annotation, and create a DNS entry that points to the Load Balancer IP. The Ingress route matches the DNS hostname and the class, so incoming traffic is handed to the Ingress Controller, which then sends it to the appropriate pod or pods regardless of which worker they're on.
Ingress Controller Installation
I've been in positions where I couldn't use helm, so I haven't used it much, but the haproxy-ingress controller is only installable via a helm chart, so this is a first for me. First install the helm binary, then add the helm chart repository for the controller.
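If you don't already have helm, one common way to get the binary is the official installer script; a sketch, assuming the host has internet access:
curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3
chmod 700 get_helm.sh
./get_helm.sh
With helm in place, add the chart repository: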
helm repo add haproxy-ingress https://haproxy-ingress.github.io/charts
Next, create a custom values file; I called it haproxy-ingress-values.yaml.
controller:
  hostNetwork: true
Then install the controller. This creates the ingress-controller namespace.
helm install haproxy-ingress haproxy-ingress/haproxy-ingress \
  --create-namespace --namespace ingress-controller \
  --version 0.14.2 \
  -f haproxy-ingress-values.yaml
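To confirm the controller pod came up (the pod name hash will differ in your cluster):
$ kubectl get pods -n ingress-controller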
And that’s all there is to it. Next up is creating the necessary ingress rules for applications.
Ingress Controller Configuration
I'm going to create a really basic Ingress entry here to see how things work. I don't need a lot of options, but you should check out the documentation and feel free to adjust as necessary for your situation.
Initially I'll be using a couple of examples I worked through when testing this process. In addition, I have another document from when I was managing Openshift, which gave me the hint about what I'd been doing wrong up to this point.
There are two example sites I’m using to test this. One is from the kubernetes site (my-nginx) and one is from the haproxy-ingress site (echoserver) both linked in the References section.
my-nginx Project
The my-nginx project has several configuration files that make up the project. The one thing it doesn’t have is the ingress.yaml file needed for external access to the site. Following are the configurations used to build this site.
The configmap.yaml file provides data for the nginx web server.
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginxconfigmap
data:
  default.conf: |
    server {
      listen 80 default_server;
      listen [::]:80 default_server ipv6only=on;
      listen 443 ssl;
      root /usr/share/nginx/html;
      index index.html;
      server_name localhost;
      ssl_certificate /etc/nginx/ssl/tls.crt;
      ssl_certificate_key /etc/nginx/ssl/tls.key;
      location / {
        try_files $uri $uri/ =404;
      }
    }
For the nginxsecret.yaml Secret, you'll first need to create a self-signed certificate and key using the openssl command.
openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
-keyout /var/tmp/nginx.key -out /var/tmp/nginx.crt \
-subj "/CN=my-nginx/O=my-nginx"
You'll then base64-encode the new certificate and key, copy the resulting strings into the nginxsecret.yaml file, and add it to the cluster.
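A quick way to get those base64 strings on a Linux host (assuming the /var/tmp paths used above; -w0 keeps each string on a single line):
base64 -w0 /var/tmp/nginx.crt
base64 -w0 /var/tmp/nginx.key
The output goes into the tls.crt and tls.key fields below.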
apiVersion: "v1"
kind: "Secret"
metadata:
  name: "nginxsecret"
  namespace: "default"
type: kubernetes.io/tls
data:
  tls.crt: "LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0..."
  tls.key: "LS0tLS1CRUdJTiBQUklWQVRFIEtFWS0..."
After applying the secret, you'll need to apply the service, which Kubernetes uses to connect the ports to a label associated with the deployment. Note the label here is 'run: my-nginx' and the deployment.yaml file carries the same label. Traffic coming to this service goes to any pod carrying that label.
apiVersion: v1
kind: Service
metadata:
  name: my-nginx
  labels:
    run: my-nginx
spec:
  type: NodePort
  ports:
  - port: 8080
    targetPort: 80
    protocol: TCP
    name: http
  - port: 443
    protocol: TCP
    name: https
  selector:
    run: my-nginx
Then apply the following deployment.yaml which will pull the nginx image from docker.io.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-nginx
spec:
  selector:
    matchLabels:
      run: my-nginx
  replicas: 1
  template:
    metadata:
      labels:
        run: my-nginx
    spec:
      volumes:
      - name: secret-volume
        secret:
          secretName: nginxsecret
      - name: configmap-volume
        configMap:
          name: nginxconfigmap
      containers:
      - name: nginxhttps
        image: bprashanth/nginxhttps:1.0
        ports:
        - containerPort: 443
        - containerPort: 80
        volumeMounts:
        - mountPath: /etc/nginx/ssl
          name: secret-volume
        - mountPath: /etc/nginx/conf.d
          name: configmap-volume
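Assuming the file names used in this article (I'm calling the Service manifest service.yaml since it isn't named explicitly), applying everything looks like:
kubectl apply -f configmap.yaml
kubectl apply -f nginxsecret.yaml
kubectl apply -f service.yaml
kubectl apply -f deployment.yaml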
When you check the service, because it’s a NodePort, you’ll see both the service ports (8080 and 443) and the exposed ports (31201 and 31713). The exposed ports can be used to access the application by going to the Load Balancer url and adding the port.
$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
echoserver NodePort 10.69.249.118 <none> 8080:32356/TCP 9h
kubernetes ClusterIP 10.69.0.1 <none> 443/TCP 9d
my-nginx NodePort 10.69.210.167 <none> 8080:31201/TCP,443:31713/TCP 27h
However, that's not an optimal process. You have to make sure users know which port is assigned and make sure the port is open on your Load Balancer. With an Ingress Controller, you instead create a DNS CNAME that points to the Load Balancer and then apply this ingress.yaml route.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-nginx
  namespace: default
  annotations:
    kubernetes.io/ingress.class: haproxy
spec:
  rules:
  - host: my-ingress.dev.internal.pri
    http:
      paths:
      - backend:
          service:
            name: my-nginx
            port:
              number: 8080
        path: /
        pathType: Prefix
  tls:
  - hosts:
    - my-ingress.dev.internal.pri
I created a my-ingress.dev.internal.pri DNS CNAME that points to bldr0cuomvip1.dev.internal.pri. When accessing https://my-ingress.dev.internal.pri, you're directed to the ingress controller, which then passes traffic to the application pods regardless of which worker node they reside on.
Let's break this down a little for clarity, in part because it didn't click for me until I poked around and had a light-bulb moment while looking at an old document I created for an Openshift cluster I was working on.
In the ingress.yaml file, the spec.rules.host and spec.tls.hosts lines are the DNS name you created for the application. The ingress controller matches incoming requests for that hostname and forwards the traffic to the configured service.
The spec.rules.http.paths.backend.service.name is the name of the Service this ingress route sends traffic to, and service.port.number is the port listed in that Service (8080 here).
The path line is interesting. You can serve different URL paths under the same hostname from different backend services by adding more path entries (see the sketch below). In general, though, this is a single website, so / is the appropriate path in the majority of cases.
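For illustration only (app1 and app2 are hypothetical Services, not part of this article), the rules section with multiple paths might look something like this:
rules:
- host: my-ingress.dev.internal.pri
  http:
    paths:
    - path: /app1
      pathType: Prefix
      backend:
        service:
          name: app1
          port:
            number: 8080
    - path: /app2
      pathType: Prefix
      backend:
        service:
          name: app2
          port:
            number: 8080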
The important piece is the annotations line; it has to reference the class of the ingress controller. For the haproxy-ingress controller it's the value listed above, and you can verify it by describing the controller pod.
kubectl describe pod haproxy-ingress-7bc69b8cc-wq2hc -n ingress-controller
...
    Args:
      --configmap=ingress-controller/haproxy-ingress
      --ingress-class=haproxy
      --sort-backends
...
In this case we see the passed argument --ingress-class=haproxy. This is the same value as in the annotations line and tells the controller which Ingress routes it should pick up and load balance within the cluster.
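Depending on the chart version, an IngressClass object may also have been registered in the cluster; you can check for one with:
kubectl get ingressclass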
Once applied, you can then go to https://my-ingress.dev.internal.pri and access the nginx startup page.
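A quick command-line check of the same thing (-k because the certificate is self-signed):
curl -k https://my-ingress.dev.internal.pri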
echoserver Project
This one is a little simpler but still shows how to use an ingress route to access a pod.
All you need is a service.yaml file so the cluster knows where to send traffic.
apiVersion: v1
kind: Service
metadata:
  labels:
    app: echoserver
  name: echoserver
  namespace: default
spec:
  clusterIP: 10.69.249.118
  clusterIPs:
  - 10.69.249.118
  externalTrafficPolicy: Cluster
  internalTrafficPolicy: Cluster
  ipFamilies:
  - IPv4
  ipFamilyPolicy: SingleStack
  ports:
  - nodePort: 32356
    port: 8080
    protocol: TCP
    targetPort: 8080
  selector:
    app: echoserver
  sessionAffinity: None
  type: NodePort
Then a deployment.yaml file to load the container.
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: echoserver
  name: echoserver
  namespace: default
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: echoserver
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: echoserver
    spec:
      containers:
      - image: k8s.gcr.io/echoserver:1.3
        imagePullPolicy: IfNotPresent
        name: echoserver
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
For me, the problem was that the example provided only a single line to create the ingress route, which wasn't enough information for me to build one myself. A lot of the problem with examples is that they assume cloud usage, where you'll have an AWS, GCE, or Azure load balancer. For on-prem clusters the process is less obvious from the examples, which is why I'm documenting it this way. It helps me and may help others.
Here is the ingress.yaml file I used to access the application. Remember, you have to create a DNS CNAME for access, and you'll need the port number from the service definition (8080).
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: echoserver
  namespace: default
  annotations:
    kubernetes.io/ingress.class: haproxy
spec:
  rules:
  - host: echoserver.dev.internal.pri
    http:
      paths:
      - backend:
          service:
            name: echoserver
            port:
              number: 8080
        path: /
        pathType: Prefix
  tls:
  - hosts:
    - echoserver.dev.internal.pri
And with this ingress route, you have access to the echoserver pod. As I progress in loading tools and my llamas website, I’ll provide the ingress.yaml file so you can see how it’s done.
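As a final sanity check from the command line (the echoserver simply echoes the request details back; -k because the certificate is self-signed):
curl -k https://echoserver.dev.internal.pri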