How a Kubernetes bug won't let you expose a service over TCP and UDP on the same port
Introduction
Long story short, I wasted hours of my life because of a Kubernetes bug, open since 2016, that prevented me from exposing a service over both UDP and TCP on the same port. May this article come up in your Google search and save you hours of suffering.
How it is supposed to work
Let’s say you want to expose the following Pod using a Deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-deployment
spec:
  selector:
    matchLabels:
      app: myapp
  replicas: 1
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: someuser/myapp
        ports:
        - containerPort: 8081
          name: web-interface
        - containerPort: 31000
          name: a-service
          protocol: TCP
        - containerPort: 31000
          name: a-service-udp
          protocol: UDP
You have a web interface to manage a service that is available over both TCP and UDP. Now let’s look at the associated Kubernetes Service used to expose those ports with NodePort.
apiVersion: v1
kind: Service
metadata:
  name: myapp-service
spec:
  externalTrafficPolicy: Cluster
  ports:
  - name: web-interface
    port: 8081
    targetPort: 8081
    protocol: TCP
  - name: a-service
    port: 31000
    targetPort: 31000
    protocol: TCP
  - name: a-service-udp
    port: 31000
    targetPort: 31000
    protocol: UDP
  selector:
    app: myapp
  type: NodePort
Question: what happens if you kubectl apply -f those YAML files?
Answer: everything works without issue.
The bug
However, suppose you, poor soul, had the misfortune of first deploying a configuration like the following one, planning to add UDP later:
[...]
    spec:
      containers:
      - name: myapp
        image: someuser/myapp
        ports:
        - containerPort: 8081
          name: web-interface
        - containerPort: 31000
          name: a-service
          protocol: TCP
[...]
[...]
spec:
  externalTrafficPolicy: Cluster
  ports:
  - name: web-interface
    port: 8081
    targetPort: 8081
    protocol: TCP
  - name: a-service
    port: 31000
    targetPort: 31000
    protocol: TCP
[...]
Then you will never be able to add the UDP port by updating your configuration with kubectl apply or kubectl edit.
The root cause
This is due to the way Kubernetes merges an updated configuration into an existing object: it uses “merge keys” to match list entries in the new configuration against entries in the existing one. For a Service’s ports list, the merge key is port, which happens to be identical for the TCP and UDP entries. As a result, only the first “match” from the new configuration file is applied, and the UDP entry is silently dropped.
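The effect can be sketched with a toy model of that merge. This is not the real kubectl code, just an illustration under the assumption stated above (entries matched purely by the port merge key, first match wins); all function and variable names here are made up:

```python
# Toy model of a merge-key-based list merge, as used (loosely) by
# kubectl apply on a Service's "ports" list. NOT the real implementation.
def merge_by_key(existing, updated, merge_key):
    """Merge two lists of dicts, matching entries by merge_key only."""
    merged = {item[merge_key]: dict(item) for item in existing}
    seen = set()
    for item in updated:
        key = item[merge_key]
        if key in seen:
            continue  # duplicate merge key: later entries are silently dropped
        seen.add(key)
        if key in merged:
            merged[key].update(item)  # same key: collapse into one entry
        else:
            merged[key] = dict(item)
    return list(merged.values())

existing_ports = [
    {"port": 8081, "protocol": "TCP", "name": "web-interface"},
    {"port": 31000, "protocol": "TCP", "name": "a-service"},
]
updated_ports = [
    {"port": 8081, "protocol": "TCP", "name": "web-interface"},
    {"port": 31000, "protocol": "TCP", "name": "a-service"},
    {"port": 31000, "protocol": "UDP", "name": "a-service-udp"},  # never lands
]

for port in merge_by_key(existing_ports, updated_ports, "port"):
    print(port)
```

Because both 31000 entries share the same merge key, the merged result still contains only the two original TCP ports: the UDP entry vanishes without any error.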
An issue was opened for this bug in 2016, but it remains open to this day: https://github.com/kubernetes/kubernetes/issues/39188
The solution
The solution is quite simple: kubectl delete the existing Deployment and Service, then recreate them rather than updating them.
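Concretely, with the object names from the manifests above (the file names are placeholders; substitute whatever your manifests are called):

```shell
# Delete the existing objects so no merge is attempted
kubectl delete deployment myapp-deployment
kubectl delete service myapp-service

# Recreate them from the updated manifests, which now include the UDP port
kubectl apply -f deployment.yaml
kubectl apply -f service.yaml
```

Note that this briefly takes the service down between the delete and the apply, so plan the recreation accordingly.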