[bitnami/nginx-ingress-controller] upstream prematurely closed connection while reading response header from upstream #14121
Comments
Hi @sai-ns, I just launched the solution with the default parameters and everything came up without problems:

$ kubectl get all
NAME READY STATUS RESTARTS AGE
pod/jota-ingress-nginx-ingress-controller-74fc5bfbd4-58kcg 1/1 Running 0 8m55s
pod/jota-ingress-nginx-ingress-controller-default-backend-9d66qjw2v 1/1 Running 0 8m55s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/jota-ingress-nginx-ingress-controller LoadBalancer 10.159.30.113 35.245.220.207 80:31748/TCP,443:30203/TCP 8m55s
service/jota-ingress-nginx-ingress-controller-default-backend ClusterIP 10.159.21.41 <none> 80/TCP 8m55s
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/jota-ingress-nginx-ingress-controller 1/1 1 1 8m55s
deployment.apps/jota-ingress-nginx-ingress-controller-default-backend 1/1 1 1 8m55s
NAME DESIRED CURRENT READY AGE
replicaset.apps/jota-ingress-nginx-ingress-controller-74fc5bfbd4 1 1 1 8m55s
replicaset.apps/jota-ingress-nginx-ingress-controller-default-backend-9d66fff7 1 1 1 8m55s

When checking the logs of the controller pod, I didn't see any error:

$ kubectl logs -f pod/jota-ingress-nginx-ingress-controller-74fc5bfbd4-58kcg
-------------------------------------------------------------------------------
NGINX Ingress controller
Release: 1.6.0
Build: 3474c33
Repository: https://github.com/kubernetes/ingress-nginx
nginx version: nginx/1.21.6
-------------------------------------------------------------------------------
W0110 12:14:39.823345 1 client_config.go:617] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work.
I0110 12:14:39.823590 1 main.go:209] "Creating API client" host="https://10.159.16.1:443"
I0110 12:14:39.834926 1 main.go:253] "Running in Kubernetes cluster" major="1" minor="24" git="v1.24.6-gke.1500" state="clean" commit="2a03b79e15ee67cb6ddb3b8c868eb98482b2a254" platform="linux/amd64"
I0110 12:14:39.841239 1 main.go:86] "Valid default backend" service="jotamartos/jota-ingress-nginx-ingress-controller-default-backend"
I0110 12:14:40.088050 1 main.go:104] "SSL fake certificate created" file="/etc/ingress-controller/ssl/default-fake-certificate.pem"
W0110 12:14:40.126962 1 store.go:809] Unexpected error reading configuration configmap: configmaps "jota-ingress-nginx-ingress-controller" not found
I0110 12:14:40.138378 1 nginx.go:260] "Starting NGINX Ingress controller"
I0110 12:14:41.341042 1 nginx.go:303] "Starting NGINX process"
I0110 12:14:41.341686 1 leaderelection.go:248] attempting to acquire leader lease jotamartos/ingress-controller-leader...
I0110 12:14:41.342726 1 controller.go:168] "Configuration changes detected, backend reload required"
I0110 12:14:41.348188 1 status.go:84] "New leader elected" identity="jota-nginx-ingress-nginx-ingress-controller-5c969b6c4b-64gx2"
I0110 12:14:41.435894 1 controller.go:185] "Backend successfully reloaded"
I0110 12:14:41.436258 1 controller.go:196] "Initial sync, sleeping for 1 second"
I0110 12:14:41.436386 1 event.go:285] Event(v1.ObjectReference{Kind:"Pod", Namespace:"jotamartos", Name:"jota-ingress-nginx-ingress-controller-74fc5bfbd4-58kcg", UID:"133a5600-f486-4620-be06-b02db26539c2", APIVersion:"v1", ResourceVersion:"56291818", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
I0110 12:15:24.385018 1 leaderelection.go:258] successfully acquired lease jotamartos/ingress-controller-leader
I0110 12:15:24.385223 1 status.go:84] "New leader elected" identity="jota-ingress-nginx-ingress-controller-74fc5bfbd4-58kcg"
10.30.2.1 - - [10/Jan/2023:12:16:04 +0000] "" 400 0 "-" "-" 0 0.057 [] [] - - - - b809eec97a59e7d087bbe5580377b2ec

I didn't find any error in the backend pod either. Could you please retry the deployment and check if it works with the default configuration? My version is also 1.24.6, so that shouldn't be the problem.
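For reference, a minimal sketch of launching the chart with its default parameters looks roughly like this; the release name and namespace below are placeholders, not the ones used in the output above:

```bash
# Minimal sketch: install the chart with its default values.
# "my-ingress" and "my-namespace" are placeholders, not values from this thread.
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update
helm install my-ingress bitnami/nginx-ingress-controller \
  --namespace my-namespace --create-namespace

# Then verify the controller and default backend come up, as shown above.
kubectl get all -n my-namespace
kubectl logs -n my-namespace deploy/my-ingress-nginx-ingress-controller
```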
Hi @jotamartos, I worked with @sai-ns on this issue. We found the root cause of the NGINX config loading issue: there was an Ingress in our cluster that was using a location block of "/", so NGINX could not load the conf file.
This was caused by the patch that removes root and alias directives, introduced in kubernetes/ingress-nginx#8624. After we deleted the Ingress with the location block of "/", NGINX could load the conf file and the "502 Bad Gateway" errors went away.
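In case it helps anyone else, a rough way to hunt for Ingress objects that declare their own "location /" is sketched below. It assumes the offending block lives somewhere in the Ingress manifest (for example in a snippet annotation), which is a guess rather than something stated in this thread, so adjust the grep pattern as needed:

```bash
# Hypothetical helper: flag ingresses whose manifest mentions "location /".
# The offending directive may live in a server-snippet/configuration-snippet
# annotation; adapt the pattern for your setup.
for entry in $(kubectl get ingress -A \
    -o jsonpath='{range .items[*]}{.metadata.namespace}/{.metadata.name}{"\n"}{end}'); do
  ns=${entry%%/*}
  name=${entry##*/}
  if kubectl get ingress -n "$ns" "$name" -o yaml | grep -q 'location /'; then
    echo "candidate: $entry"
  fi
done
```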
Thanks for the info. I'm glad to hear that everything is working properly now! Have a good week! :)
Name and Version
bitnami/ingress 1.6.0
What steps will reproduce the bug?
All the pods are having the same issue, but the hostnames are not consistent across the pods. The same host that fails to load in one pod is loaded properly in a different controller pod. As a result, we ended up with a different nginx.conf in each pod: requests that reach a pod whose conf file contains the server block work as expected, whereas requests served by a different pod return a 502 error.
I copied the nginx.conf files from all the pods and counted the server blocks, and the count is different for each pod:
![image](https://user-images.githubusercontent.com/59936125/209907290-03a4ec5d-4ebd-414f-97fe-68e747e7f99f.png)
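For anyone trying to reproduce this, something like the following can confirm the drift between pods; the namespace, label selector, and container name are assumptions, not values taken from this report:

```bash
# Rough sketch: compare the number of server blocks rendered in each
# controller pod. Adjust NS, the label selector, and the container name
# to match your release.
NS=ingress-nginx
for pod in $(kubectl get pods -n "$NS" \
    -l app.kubernetes.io/name=nginx-ingress-controller -o name); do
  count=$(kubectl exec -n "$NS" "$pod" -c controller -- \
    grep -c 'server {' /etc/nginx/nginx.conf)
  echo "$pod: $count server blocks"
done
```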
Are you using any custom parameters or values?
No response
What is the expected behavior?
The ingress controller should be able to validate all the ingresses without throwing an error and render a consistent nginx.conf across all the pods. Applications should be reachable without 502 errors.
What do you see instead?
Additional information
We are on Kubernetes v1.24.6 and have Kuma 1.7.1.