TL;DR – Make sure you name your ports when you create external endpoints.
In my home environment, I need a reverse proxy that serves all port 80 and 443 requests and can interface easily with Let's Encrypt so that all of those endpoints stay secured. Originally I used Docker and jwilder's nginx-proxy for this. Since it is just nginx underneath, it can also forward traffic to backends that are not running in Docker (the few physical machines I still have). However, I have been transitioning to Kubernetes and need a similar way to have a single endpoint on those ports that all services can use.
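For context, the Docker side of that setup boils down to something like the compose file below. This is an illustrative sketch rather than my actual config (the whoami backend and hostname are made up, and the Let's Encrypt companion container is omitted for brevity):

version: "3"
services:
  # Reverse proxy that watches the Docker socket and generates nginx config
  # for every container that declares a VIRTUAL_HOST environment variable.
  nginx-proxy:
    image: jwilder/nginx-proxy
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro
      - certs:/etc/nginx/certs

  # Example backend: it only needs VIRTUAL_HOST set for the proxy to route to it.
  whoami:
    image: jwilder/whoami
    environment:
      - VIRTUAL_HOST=whoami.example.com

volumes:
  certs: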
Well, the good news is that the internet is awash with articles about this. However, no matter which one I attempted to implement, I consistently got 502 errors – no live upstreams. This was happening on an Ubuntu 20.04 LTS system running microk8s v1.19.5.
My original Endpoints, Service, and Ingress configs were the following:
apiVersion: v1
kind: Endpoints
metadata:
  name: external-service
subsets:
  - addresses:
      - ip: <<IP>>
    ports:
      - port: <<PORT>>
        protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  name: external-service
spec:
  ports:
    - name: https
      protocol: TCP
      port: <<PORT>>
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: external-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
    cert-manager.io/cluster-issuer: letsencrypt-prod
    cert-manager.io/acme-challenge-type: http01
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
spec:
  tls:
    - hosts:
        - external.rebelpeon.com
      secretName: external-prod
  rules:
    - host: external.rebelpeon.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: external-service
                port:
                  number: <<PORT>>
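I applied all three objects into a test namespace with something like the following (the file name is simply whatever you saved the manifests as):

$ kubectl create namespace test
$ kubectl apply -n test -f external-service.yaml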
This YAML deployed successfully but, as mentioned, did not work. With it deployed, this is what describing the Endpoints object showed:
$ kubectl describe endpoints -n test
Name:         external-service
Namespace:    test
Labels:       <none>
Annotations:  <none>
Subsets:
  Addresses:          <<IP>>
  NotReadyAddresses:  <none>
  Ports:
    Name     Port  Protocol
    ----     ----  --------
    <unset>  443   TCP

Events:  <none>
When describing the service:
$ kubectl describe services -n test
Name:              external-service
Namespace:         test
Labels:            <none>
Annotations:       <none>
Selector:          <none>
Type:              ClusterIP
IP Families:       <none>
IP:                10.152.183.182
IPs:               <none>
Port:              https  443/TCP
TargetPort:        443/TCP
Endpoints:
Session Affinity:  None
Events:            <none>
Wait a minute: the service lists Endpoints as blank, not <none> and not properly populated like the others. When I describe the Endpoints of a working, Kubernetes-managed service, the only difference I can see is that the port has a name. That turns out to be the whole problem: when a Service port is named, Kubernetes matches Endpoints ports to Service ports by name, so an unnamed Endpoints port never gets associated with the Service and nginx is left with no live upstream.
$ kubectl describe endpoints -n test
Name:         external-service
Namespace:    test
Labels:       <none>
Annotations:  <none>
Subsets:
  Addresses:          <<IP>>
  NotReadyAddresses:  <none>
  Ports:
    Name   Port  Protocol
    ----   ----  --------
    https  443   TCP
So, I changed my config to the following (a one-line change, adding a name to the Endpoints port):
apiVersion: v1
kind: Endpoints
metadata:
  name: external-service
subsets:
  - addresses:
      - ip: <<IP>>
    ports:
      - port: <<PORT>>
        protocol: TCP
        name: https
---
apiVersion: v1
kind: Service
metadata:
  name: external-service
spec:
  ports:
    - name: https
      protocol: TCP
      port: <<PORT>>
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: external-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
    cert-manager.io/cluster-issuer: letsencrypt-prod
    cert-manager.io/acme-challenge-type: http01
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
spec:
  tls:
    - hosts:
        - external.rebelpeon.com
      secretName: external-prod
  rules:
    - host: external.rebelpeon.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: external-service
                port:
                  number: <<PORT>>
And, ta-da, everything works! I can now access physical hosts outside of Kubernetes via the Kubernetes ingress! Sadly, it took about four hours of head-bashing to realize that…
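If you hit a similar issue, two quick sanity checks after applying the named-port version: the service should now show a populated Endpoints entry, and requests through the ingress should stop returning 502s. Something along these lines does the trick:

$ kubectl describe services external-service -n test | grep Endpoints
$ curl -I https://external.rebelpeon.com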