Can't seem to get the GCE LoadBalancer to work (502) #18
Comments
I think the problem is that your app has to respond to GET / with a 200; that's the default behaviour of GCE's health checking. If you want another health check endpoint (e.g. /health), you can specify a ReadinessProbe for the pods behind the services web-http/web-http-staging. Can you verify which exact services are failing the health check? (I assume it's web-http/web-http-staging.)
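As a sketch, such a ReadinessProbe on the pods could look like this, assuming a hypothetical /health endpoint served over HTTP on container port 80 (both are assumptions; use whatever your app actually exposes):

```yaml
# Pod template fragment for the Deployment behind web-http.
# The GCE ingress controller uses this path for the LB health check
# instead of the default GET /. Path and port are assumed values.
readinessProbe:
  httpGet:
    path: /health           # hypothetical endpoint that returns 200
    port: 80
    scheme: HTTP
  initialDelaySeconds: 10
  periodSeconds: 10
```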
Hi, you are completely right. / returns an HTTP 302 to the HTTPS version, which for some reason doesn't work: the load balancer refuses connections on HTTPS. The certs are renewed by lego correctly, though. I'm going to investigate further and I'll be back.
OK, some of it works now, after swapping two lines in the YAML. But now I'm finding myself in a situation where the HTTP backend is assigned one IP and the HTTPS backend is assigned another. That makes no sense to me.
I don't think the YAML line swap changes anything. It's always a good idea to remove the Ingress resource from your cluster, make sure that all related LB objects in GCE are removed, and then start again...
The swap fixed it, though. Also, the Ingress works now after recreating it.
Can confirm this works for me too! Massive thanks to you @niclashedam. I had so much trouble getting the HTTPS load balancing running, thanks to my index session-auth redirect to a login page. Once I'd changed that, everything worked.
No problem! Glad I could help :-) |
@simonswine this is a very good point: your service ALWAYS needs to respond 200 to the GCE health check at GET /.
Hello, I seem to be having the same problem.

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: django
  annotations:
    kubernetes.io/tls-acme: "true"
    kubernetes.io/ingress.class: "gce"
spec:
  tls:
  - secretName: django-tls
    hosts:
    - api.example.com
    - example.com
    - www.example.com
  rules:
  - host: api.example.com
    http:
      paths:
      - path: /*
        backend:
          serviceName: django
          servicePort: 80
  - host: example.com
    http:
      paths:
      - path: /.well-known/acme-challenge/*
        backend:
          serviceName: django
          servicePort: 80
  - host: www.example.com
    http:
      paths:
      - path: /.well-known/acme-challenge/*
        backend:
          serviceName: django
          servicePort: 80
```

```yaml
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: django
spec:
  replicas: 1
  progressDeadlineSeconds: 600
  minReadySeconds: 15
  revisionHistoryLimit: 5
  template:
    metadata:
      labels:
        app: django
        tier: midend
    spec:
      securityContext:
        runAsUser: 999
        fsGroup: 999
      restartPolicy: Always
      containers:
      - name: django
        image: gcr.io/project-id/django:v1beta3
        imagePullPolicy: "IfNotPresent"
        command: ["gunicorn", "config.wsgi:application", "-b", "0.0.0.0:5000", "-w", "4", "--chdir=/app"]
        livenessProbe:
          httpGet:
            path: /healthz
            port: 5000
            scheme: HTTP
          initialDelaySeconds: 30
          timeoutSeconds: 5
        readinessProbe:
          httpGet:
            path: /healthz
            port: 5000
            scheme: HTTP
          initialDelaySeconds: 30
          periodSeconds: 60
```

On the Google Cloud Console I can see 3 load balancers; 2 of them show healthy 2/2 and 1 of them shows 0/2. I have deleted and recreated the setup as instructed above, but it still isn't ready for further testing. EDIT: Changing both
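One thing worth double-checking in a setup like this: the django Service has to map the Ingress's servicePort 80 onto container port 5000, where gunicorn and the probes listen, and it must be of type NodePort for the GCE ingress controller to use it as a backend. The controller derives the load balancer health check from the pods' readinessProbe, so if the port mapping is off, the check fails. A minimal sketch of such a Service (assumed, since the Service manifest isn't shown above):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: django              # referenced by serviceName in the Ingress above
spec:
  type: NodePort            # required for GCE Ingress backends
  selector:
    app: django             # matches the Deployment's pod labels
  ports:
  - port: 80                # the servicePort the Ingress points at
    targetPort: 5000        # the container port gunicorn binds to
```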
Hi @niclashedam and @jamesthompson, when I try to edit this file back to the original configuration and save, I get an "edit cancelled" message from kubectl.
I have the same problem.
I only see error 502, no matter what I try.
My ingress:
And one of my services (the other looks exactly the same):
And lastly, my pod listens on 80 and 443. If I curl the internal service IP from a node, I get the correct response (200), so the failure must be happening at the load balancer.
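For reference, a NodePort Service exposing both pod ports, of the shape described here, would look roughly like this (the selector and port names are illustrative, not the original manifest):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web-http            # one of the services mentioned in this thread
spec:
  type: NodePort            # GCE Ingress backends must be NodePort Services
  selector:
    app: web                # illustrative label; must match the pod labels
  ports:
  - name: http
    port: 80
    targetPort: 80          # the pod listens on 80 and 443, as described
  - name: https
    port: 443
    targetPort: 443
```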
The .well-known path is present in the GCE load balancer, and there are four backends: half of them have health status 0/4, while the other two have 4/4. I have no idea why they are reported as unhealthy, since none of my Pods have health checks and they return 200 if you connect directly to the service.
Help is greatly appreciated. Thank you.