
Nginx fails when disabling the service #6847

Closed

omerozery opened this issue Feb 4, 2021 · 11 comments
Labels: good first issue, kind/bug, lifecycle/stale

Comments

@omerozery commented Feb 4, 2021

NGINX Ingress controller version: 0.44.0

NGINX Ingress controller Helm Chart version: 3.23.0

Kubernetes version: v1.20.2

Environment:

  • Cloud provider or hardware configuration: AWS
  • OS: CentOS Linux release 7.7.1908 (Core)
  • Kernel: 3.10.0-1160.11.1.el7.x86_64

What happened:
When trying to deploy this chart with the following commands:

helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm install -n ingress-nginx ingress-nginx ingress-nginx/ingress-nginx -f myfile.yaml

and the content of myfile.yaml is:

controller:
    dnsPolicy: ClusterFirstWithHostNet
    hostNetwork: true
    kind: DaemonSet
    metrics:
        enabled: true
        serviceMonitor:
            enabled: true
    nodeSelector:
        node.kubernetes.io/role: edge
    service:
        enabled: false

the controller pod crashes with the following stack trace:

goroutine 1 [running]:
k8s.io/klog/v2.stacks(0xc000138001, 0xc00045c400, 0xa8, 0x1e1)
	k8s.io/klog/v2@v2.4.0/klog.go:1026 +0xb9
k8s.io/klog/v2.(*loggingT).output(0x26915e0, 0xc000000003, 0x0, 0x0, 0xc00016ca10, 0x25e6c33, 0x7, 0x5d, 0x40e200)
	k8s.io/klog/v2@v2.4.0/klog.go:975 +0x19b
k8s.io/klog/v2.(*loggingT).printDepth(0x26915e0, 0xc000000003, 0x0, 0x0, 0x0, 0x0, 0x1, 0xc000156260, 0x1, 0x1)
	k8s.io/klog/v2@v2.4.0/klog.go:732 +0x16f
k8s.io/klog/v2.(*loggingT).print(...)
	k8s.io/klog/v2@v2.4.0/klog.go:714
k8s.io/klog/v2.Fatal(...)
	k8s.io/klog/v2@v2.4.0/klog.go:1482
main.main()
	k8s.io/ingress-nginx/cmd/nginx/main.go:93 +0x170f

goroutine 18 [chan receive]:
k8s.io/klog/v2.(*loggingT).flushDaemon(0x26915e0)
	k8s.io/klog/v2@v2.4.0/klog.go:1169 +0x8b
created by k8s.io/klog/v2.init.0
	k8s.io/klog/v2@v2.4.0/klog.go:417 +0xdf

goroutine 100 [IO wait]:
internal/poll.runtime_pollWait(0x7f19faaead88, 0x72, 0x1c01f20)
	runtime/netpoll.go:222 +0x55
internal/poll.(*pollDesc).wait(0xc000400898, 0x72, 0x1c01f00, 0x2601608, 0x0)
	internal/poll/fd_poll_runtime.go:87 +0x45
internal/poll.(*pollDesc).waitRead(...)
	internal/poll/fd_poll_runtime.go:92
internal/poll.(*FD).Read(0xc000400880, 0xc000184900, 0x8e9, 0x8e9, 0x0, 0x0, 0x0)
	internal/poll/fd_unix.go:159 +0x1a5
net.(*netFD).Read(0xc000400880, 0xc000184900, 0x8e9, 0x8e9, 0x203000, 0x74b2db, 0xc000276f60)
	net/fd_posix.go:55 +0x4f
net.(*conn).Read(0xc00000e048, 0xc000184900, 0x8e9, 0x8e9, 0x0, 0x0, 0x0)
	net/net.go:182 +0x8e
crypto/tls.(*atLeastReader).Read(0xc000204de0, 0xc000184900, 0x8e9, 0x8e9, 0xfa, 0x8bb, 0xc00010d710)
	crypto/tls/conn.go:779 +0x62
bytes.(*Buffer).ReadFrom(0xc000277080, 0x1bfdea0, 0xc000204de0, 0x40b665, 0x181b220, 0x198a180)
	bytes/buffer.go:204 +0xb1
crypto/tls.(*Conn).readFromUntil(0xc000276e00, 0x1c00180, 0xc00000e048, 0x5, 0xc00000e048, 0xe9)
	crypto/tls/conn.go:801 +0xf3
crypto/tls.(*Conn).readRecordOrCCS(0xc000276e00, 0x0, 0x0, 0xc00010dd18)
	crypto/tls/conn.go:608 +0x115
crypto/tls.(*Conn).readRecord(...)
	crypto/tls/conn.go:576
crypto/tls.(*Conn).Read(0xc000276e00, 0xc000376000, 0x1000, 0x1000, 0x0, 0x0, 0x0)
	crypto/tls/conn.go:1252 +0x15f
bufio.(*Reader).Read(0xc00013a660, 0xc0007ac118, 0x9, 0x9, 0xc00010dd18, 0x1abf900, 0x9551ab)
	bufio/bufio.go:227 +0x222
io.ReadAtLeast(0x1bfdd00, 0xc00013a660, 0xc0007ac118, 0x9, 0x9, 0x9, 0xc000116050, 0x0, 0x1bfe080)
	io/io.go:314 +0x87
io.ReadFull(...)
	io/io.go:333
golang.org/x/net/http2.readFrameHeader(0xc0007ac118, 0x9, 0x9, 0x1bfdd00, 0xc00013a660, 0x0, 0x0, 0xc00010ddd0, 0x46cf65)
	golang.org/x/net@v0.0.0-20201110031124-69a78807bb2b/http2/frame.go:237 +0x89
golang.org/x/net/http2.(*Framer).ReadFrame(0xc0007ac0e0, 0xc000284660, 0x0, 0x0, 0x0)
	golang.org/x/net@v0.0.0-20201110031124-69a78807bb2b/http2/frame.go:492 +0xa5
golang.org/x/net/http2.(*clientConnReadLoop).run(0xc00010dfa8, 0x0, 0x0)
	golang.org/x/net@v0.0.0-20201110031124-69a78807bb2b/http2/transport.go:1819 +0xd8
golang.org/x/net/http2.(*ClientConn).readLoop(0xc00044e480)
	golang.org/x/net@v0.0.0-20201110031124-69a78807bb2b/http2/transport.go:1741 +0x6f
created by golang.org/x/net/http2.(*Transport).newClientConn
	golang.org/x/net@v0.0.0-20201110031124-69a78807bb2b/http2/transport.go:705 +0x6c5

Remove

    service:
        enabled: false

and everything works fine, though a useless LoadBalancer Service is created.
/kind bug
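
For context on why this crashes: with controller.service.enabled set to false but controller.publishService.enabled left at its default of true, the chart still passes a --publish-service flag pointing at the now-nonexistent Service, and the controller exits fatally at startup when the lookup fails (the klog.Fatal at cmd/nginx/main.go:93 in the trace above). An illustrative sketch of the relevant fragment of the rendered DaemonSet follows; the resource names are assumptions, not copied from an actual helm template run:

containers:
    - name: controller
      args:
          - /nginx-ingress-controller
          # Assumed names; this points at a Service the chart no longer
          # creates, so the startup lookup fails and the controller exits.
          - --publish-service=ingress-nginx/ingress-nginx-controller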

@omerozery omerozery added the kind/bug Categorizes issue or PR as related to a bug. label Feb 4, 2021
@aledbf aledbf added the good first issue Denotes an issue ready for a new contributor, according to the "help wanted" guidelines. label Feb 4, 2021
@iam-veeramalla commented:

Hi @aledbf, I would like to try my hand at this. Can you assign this issue to me?

@kundan2707 (Contributor) commented:

@omerozery controller.service.enabled is true by default. If it is set to false and controller.kind is set to DaemonSet, then no Service will be created; the behavior seems to be as expected.
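
For reference, a paraphrased sketch of the relevant defaults in the chart's values.yaml (an approximation, not an exact copy from chart version 3.23.0):

controller:
    service:
        enabled: true        # the controller Service is created by default
    publishService:
        enabled: true        # the controller publishes the address of that
                             # Service and expects it to exist at startup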

@nic-6443 (Contributor) commented:

@omerozery You should also set publishService.enabled to false, for example:

controller:
    publishService:
        enabled: false
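
Putting both changes together, a corrected myfile.yaml based on the values from the report would be:

controller:
    dnsPolicy: ClusterFirstWithHostNet
    hostNetwork: true
    kind: DaemonSet
    metrics:
        enabled: true
        serviceMonitor:
            enabled: true
    nodeSelector:
        node.kubernetes.io/role: edge
    service:
        enabled: false
    # disable the --publish-service flag along with the Service itself
    publishService:
        enabled: false

With publishService disabled, the chart no longer passes --publish-service to the controller, so there is no Service lookup to fail at startup.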

@omerozery (Author) commented Feb 28, 2021

Thanks @nic-6443
I didn't have to set this in older versions.

Can we close this issue?

@nic-6443 (Contributor) commented Mar 1, 2021

I think so.

@vibhas77 commented:

/close

@k8s-ci-robot (Contributor) commented:

@vibhas77: You can't close an active issue/PR unless you authored it or you are a collaborator.

In response to this:

/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@fejta-bot commented:

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jun 22, 2021
@kundan2707 (Contributor) commented:

@omerozery can you close this issue, as it's already resolved?

@strongjz (Member) commented:

/close

@k8s-ci-robot (Contributor) commented:

@strongjz: Closing this issue.

In response to this:

/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
