
[bitnami/nginx-ingress-controller] upstream prematurely closed connection while reading response header from upstream, #14121

Closed
sai-ns opened this issue Dec 29, 2022 · 3 comments
sai-ns commented Dec 29, 2022

Name and Version

bitnami/ingress 1.6.0

What steps will reproduce the bug?

  1. helm install
  2. with slight changes in the values file, such as pod annotations and resource limits/requests (see the sketch after the error output below)
  3. Pods come up fine, but the logs show NGINX trying to reload and failing
  4. The first error below is from the controller pods during startup; the second appears after startup, for the same hostname
E1229 03:05:49.589743       1 controller.go:180] Unexpected failure reloading the backend:

-------------------------------------------------------------------------------
Error: exit status 1
2022/12/29 03:05:49 [warn] 6611#6611: the "http2_max_field_size" directive is obsolete, use the "large_client_header_buffers" directive instead in /tmp/nginx/nginx-cfg3268068649:144
nginx: [warn] the "http2_max_field_size" directive is obsolete, use the "large_client_header_buffers" directive instead in /tmp/nginx/nginx-cfg3268068649:144
2022/12/29 03:05:49 [warn] 6611#6611: the "http2_max_header_size" directive is obsolete, use the "large_client_header_buffers" directive instead in /tmp/nginx/nginx-cfg3268068649:145
nginx: [warn] the "http2_max_header_size" directive is obsolete, use the "large_client_header_buffers" directive instead in /tmp/nginx/nginx-cfg3268068649:145
2022/12/29 03:05:49 [warn] 6611#6611: the "http2_max_requests" directive is obsolete, use the "keepalive_requests" directive instead in /tmp/nginx/nginx-cfg3268068649:146
nginx: [warn] the "http2_max_requests" directive is obsolete, use the "keepalive_requests" directive instead in /tmp/nginx/nginx-cfg3268068649:146
2022/12/29 03:05:49 [emerg] 6611#6611: duplicate location "/" in /tmp/nginx/nginx-cfg3268068649:21987
nginx: [emerg] duplicate location "/" in /tmp/nginx/nginx-cfg3268068649:21987
nginx: configuration file /tmp/nginx/nginx-cfg3268068649 test failed

-------------------------------------------------------------------------------
E1229 03:05:49.589820       1 queue.go:130] "requeuing" err=<

        -------------------------------------------------------------------------------
        Error: exit status 1
        2022/12/29 03:05:49 [warn] 6611#6611: the "http2_max_field_size" directive is obsolete, use the "large_client_header_buffers" directive instead in /tmp/nginx/nginx-cfg3268068649:144
        nginx: [warn] the "http2_max_field_size" directive is obsolete, use the "large_client_header_buffers" directive instead in /tmp/nginx/nginx-cfg3268068649:144
        2022/12/29 03:05:49 [warn] 6611#6611: the "http2_max_header_size" directive is obsolete, use the "large_client_header_buffers" directive instead in /tmp/nginx/nginx-cfg3268068649:145
        nginx: [warn] the "http2_max_header_size" directive is obsolete, use the "large_client_header_buffers" directive instead in /tmp/nginx/nginx-cfg3268068649:145
        2022/12/29 03:05:49 [warn] 6611#6611: the "http2_max_requests" directive is obsolete, use the "keepalive_requests" directive instead in /tmp/nginx/nginx-cfg3268068649:146
        nginx: [warn] the "http2_max_requests" directive is obsolete, use the "keepalive_requests" directive instead in /tmp/nginx/nginx-cfg3268068649:146
        2022/12/29 03:05:49 [emerg] 6611#6611: duplicate location "/" in /tmp/nginx/nginx-cfg3268068649:21987
        nginx: [emerg] duplicate location "/" in /tmp/nginx/nginx-cfg3268068649:21987
        nginx: configuration file /tmp/nginx/nginx-cfg3268068649 test failed

        -------------------------------------------------------------------------------
 > key="yyyyyyyyyyyy/xxxxxx"
I1229 03:05:49.589911       1 event.go:285] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-77bdb48fb5-2zw9l", UID:"aa8fb70b-c822-4ece-88c8-aa37e4586f09", APIVersion:"v1", ResourceVersion:"65628765", FieldPath:""}): type: 'Warning' reason: 'RELOAD' Error reloading NGINX:
-------------------------------------------------------------------------------
Error: exit status 1
ingress-nginx-controller-77bdb48fb5-m9nfj:controller 2022/12/29 04:08:05 [error] 59#59: *1213219 upstream prematurely closed connection while reading response header from upstream, client: 10.x.x.x, server: _, request: "GET / HTTP/1.1", upstream: "http://192.168.17.5:8080/", host: "hostname.com"
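For reference, the changes in step 2 were limited to the kind of values shown below. This is a hedged sketch, not our actual values file: the annotation key and resource figures are made up, and the parameter names follow the bitnami/nginx-ingress-controller chart's values.yaml (double-check them against your chart version).

# values-override.yaml (hypothetical example)
podAnnotations:
  example.com/owner: platform-team   # hypothetical annotation key/value
resources:
  requests:
    cpu: 100m
    memory: 256Mi
  limits:
    cpu: 500m
    memory: 512Mi

$ helm install ingress-nginx bitnami/nginx-ingress-controller -f values-override.yaml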

All the pods have the same issue, but the failing hostnames are not consistent across pods: a host that fails to load in one controller pod loads correctly in another. As a result, the pods end up with different nginx.conf files; requests that land on a pod whose conf file contains the matching server block work as expected, while requests served by a different pod get a 502 error.
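
Seen from the client side, repeatedly hitting the service for the affected host returns a mix of 200s and 502s depending on which controller pod answers. A rough sketch (the load balancer address is a placeholder; the Host header is the hostname from the error above):

$ for i in $(seq 1 10); do
    curl -s -o /dev/null -w "%{http_code}\n" -H "Host: hostname.com" http://<ingress-lb-address>/
  done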

I copied the nginx.conf files from all the pods and counted the server blocks in each; the count differs from pod to pod.
[screenshot: server block counts per pod]
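
Roughly, the comparison looked like this (a sketch: the label selector is an assumption, the namespace is taken from the event above, and the rendered config path is the controller's default /etc/nginx/nginx.conf):

$ for pod in $(kubectl -n ingress-nginx get pods -l app.kubernetes.io/component=controller -o name); do
    echo -n "$pod: "
    kubectl -n ingress-nginx exec "$pod" -- grep -c "server {" /etc/nginx/nginx.conf
  done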

Are you using any custom parameters or values?

No response

What is the expected behavior?

The ingress controller should validate all the ingresses without throwing an error and keep a consistent nginx.conf across all the pods. Applications should be reachable without 502 errors.

What do you see instead?

[screenshot]

Additional information

We are on Kubernetes v1.24.6 and run Kuma 1.7.1.


jotamartos commented Jan 10, 2023

Hi @sai-ns,

I just launched the solution with the default parameters and everything came up without problems
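For reference, that deployment amounts to something like the following (the release name jota-ingress is inferred from the resource names below; chart from the Bitnami repository):

$ helm repo add bitnami https://charts.bitnami.com/bitnami
$ helm install jota-ingress bitnami/nginx-ingress-controller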

$ kubectl get all
NAME                                                                  READY   STATUS    RESTARTS   AGE
pod/jota-ingress-nginx-ingress-controller-74fc5bfbd4-58kcg            1/1     Running   0          8m55s
pod/jota-ingress-nginx-ingress-controller-default-backend-9d66qjw2v   1/1     Running   0          8m55s

NAME                                                            TYPE           CLUSTER-IP      EXTERNAL-IP      PORT(S)                      AGE
service/jota-ingress-nginx-ingress-controller                   LoadBalancer   10.159.30.113   35.245.220.207   80:31748/TCP,443:30203/TCP   8m55s
service/jota-ingress-nginx-ingress-controller-default-backend   ClusterIP      10.159.21.41    <none>           80/TCP                       8m55s

NAME                                                                    READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/jota-ingress-nginx-ingress-controller                   1/1     1            1           8m55s
deployment.apps/jota-ingress-nginx-ingress-controller-default-backend   1/1     1            1           8m55s

NAME                                                                             DESIRED   CURRENT   READY   AGE
replicaset.apps/jota-ingress-nginx-ingress-controller-74fc5bfbd4                 1         1         1       8m55s
replicaset.apps/jota-ingress-nginx-ingress-controller-default-backend-9d66fff7   1         1         1       8m55s

When checking the logs of the controller pod, I didn't see any error

$ kubectl logs -f pod/jota-ingress-nginx-ingress-controller-74fc5bfbd4-58kcg
-------------------------------------------------------------------------------
NGINX Ingress controller
  Release:       1.6.0
  Build:         3474c33
  Repository:    https://github.com/kubernetes/ingress-nginx
  nginx version: nginx/1.21.6

-------------------------------------------------------------------------------

W0110 12:14:39.823345       1 client_config.go:617] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
I0110 12:14:39.823590       1 main.go:209] "Creating API client" host="https://10.159.16.1:443"
I0110 12:14:39.834926       1 main.go:253] "Running in Kubernetes cluster" major="1" minor="24" git="v1.24.6-gke.1500" state="clean" commit="2a03b79e15ee67cb6ddb3b8c868eb98482b2a254" platform="linux/amd64"
I0110 12:14:39.841239       1 main.go:86] "Valid default backend" service="jotamartos/jota-ingress-nginx-ingress-controller-default-backend"
I0110 12:14:40.088050       1 main.go:104] "SSL fake certificate created" file="/etc/ingress-controller/ssl/default-fake-certificate.pem"
W0110 12:14:40.126962       1 store.go:809] Unexpected error reading configuration configmap: configmaps "jota-ingress-nginx-ingress-controller" not found
I0110 12:14:40.138378       1 nginx.go:260] "Starting NGINX Ingress controller"
I0110 12:14:41.341042       1 nginx.go:303] "Starting NGINX process"
I0110 12:14:41.341686       1 leaderelection.go:248] attempting to acquire leader lease jotamartos/ingress-controller-leader...
I0110 12:14:41.342726       1 controller.go:168] "Configuration changes detected, backend reload required"
I0110 12:14:41.348188       1 status.go:84] "New leader elected" identity="jota-nginx-ingress-nginx-ingress-controller-5c969b6c4b-64gx2"
I0110 12:14:41.435894       1 controller.go:185] "Backend successfully reloaded"
I0110 12:14:41.436258       1 controller.go:196] "Initial sync, sleeping for 1 second"
I0110 12:14:41.436386       1 event.go:285] Event(v1.ObjectReference{Kind:"Pod", Namespace:"jotamartos", Name:"jota-ingress-nginx-ingress-controller-74fc5bfbd4-58kcg", UID:"133a5600-f486-4620-be06-b02db26539c2", APIVersion:"v1", ResourceVersion:"56291818", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
I0110 12:15:24.385018       1 leaderelection.go:258] successfully acquired lease jotamartos/ingress-controller-leader
I0110 12:15:24.385223       1 status.go:84] "New leader elected" identity="jota-ingress-nginx-ingress-controller-74fc5bfbd4-58kcg"
10.30.2.1 - - [10/Jan/2023:12:16:04 +0000] "" 400 0 "-" "-" 0 0.057 [] [] - - - - b809eec97a59e7d087bbe5580377b2ec

I didn't find any error in the backend pod either. Could you please retry the deployment and check if it works with the default configuration?

My version is also 1.24.6, so that shouldn't be the problem:

Server Version: version.Info{Major:"1", Minor:"24", GitVersion:"v1.24.6-gke.1500", GitCommit:"2a03b79e15ee67cb6ddb3b8c868eb98482b2a254", GitTreeState:"clean", BuildDate:"2022-10-13T09:31:09Z", GoVersion:"go1.18.6b7", Compiler:"gc", Platform:"linux/amd64"}


JeffParkes commented Jan 13, 2023

Hi @jotamartos, I worked with @sai-ns on this issue. We found the root cause of the NGINX config loading failure: there was an ingress in our cluster whose server-snippet declared a location block for "/", so NGINX could not load the conf file.

kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/proxy-read-timeout: "3600"
nginx.ingress.kubernetes.io/proxy-send-timeout: "3600"
nginx.ingress.kubernetes.io/server-snippet: |
  location / {
    proxy_set_header Upgrade "websocket";
    proxy_http_version 1.1;
    proxy_set_header X-Forwarded-Host $http_host;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_set_header X-Forwarded-For $remote_addr;
    proxy_set_header Host $host;
    proxy_set_header Connection "upgrade";
    proxy_cache_bypass $http_upgrade;

This was due to a patch that removes root and alias directives, from PR kubernetes/ingress-nginx#8624.

After we deleted the ingress with the location block for "/", NGINX could load the conf file and the "502 Bad Gateway" errors went away.
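
For anyone hitting the same error: the controller already renders a location "/" for the ingress path, so a server-snippet must not declare its own location "/". A hedged sketch of an alternative that keeps the proxy headers without adding a location block, using the configuration-snippet annotation (which is injected inside the generated location block); this is illustrative, not how we fixed it:

metadata:
  annotations:
    nginx.ingress.kubernetes.io/proxy-read-timeout: "3600"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "3600"
    # injected inside the location block the controller generates,
    # so no duplicate location "/" is created
    nginx.ingress.kubernetes.io/configuration-snippet: |
      proxy_set_header Upgrade "websocket";
      proxy_set_header Connection "upgrade";
      proxy_http_version 1.1;
      proxy_cache_bypass $http_upgrade;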

@jotamartos

Thanks for the info. I'm glad to hear that everything is working properly now!

Have a good week! :)
