external-attacher crashing in azuredisk-csi-driver and azurefile-csi-driver #284

Closed
emiliodangelo opened this issue Dec 11, 2020 · 4 comments

@emiliodangelo

From kubernetes-sigs/azurefile-csi-driver#495

What happened:
After installing azurefile-csi-driver and azuredisk-csi-driver in a Kubernetes cluster, the csi-attacher container inside the csi-azurefile-controller and csi-azuredisk-controller pods crashes every 1 or 2 minutes with the following message:

csi-attacher log:

...
I1210 14:36:58.833263       1 reflector.go:153] Starting reflector *v1beta1.CSINode (10m0s) from k8s.io/client-go/informers/factory.go:135
I1210 14:36:58.833363       1 reflector.go:188] Listing and watching *v1beta1.CSINode from k8s.io/client-go/informers/factory.go:135
runtime: mlock of signal stack failed: 12
runtime: increase the mlock limit (ulimit -l) or
runtime: update your kernel to 5.3.15+, 5.4.2+, or 5.5+
fatal error: mlock failed

runtime stack:
runtime.throw(0x15dc7fc, 0xc)
    /usr/lib/go-1.14/src/runtime/panic.go:1112 +0x72
runtime.mlockGsignal(0xc000502a80)
    /usr/lib/go-1.14/src/runtime/os_linux_x86.go:72 +0x107
runtime.mpreinit(0xc000500380)
    /usr/lib/go-1.14/src/runtime/os_linux.go:341 +0x78
runtime.mcommoninit(0xc000500380)
    /usr/lib/go-1.14/src/runtime/proc.go:630 +0x108
runtime.allocm(0xc00006b000, 0x167d548, 0x22b8718)
    /usr/lib/go-1.14/src/runtime/proc.go:1390 +0x14e
runtime.newm(0x167d548, 0xc00006b000)
    /usr/lib/go-1.14/src/runtime/proc.go:1704 +0x39
runtime.startm(0x0, 0xc0004ca401)
    /usr/lib/go-1.14/src/runtime/proc.go:1869 +0x12a
runtime.wakep(...)
    /usr/lib/go-1.14/src/runtime/proc.go:1953
runtime.resetspinning()
    /usr/lib/go-1.14/src/runtime/proc.go:2415 +0x93
runtime.schedule()
    /usr/lib/go-1.14/src/runtime/proc.go:2527 +0x2de
runtime.mstart1()
    /usr/lib/go-1.14/src/runtime/proc.go:1104 +0x8e
runtime.mstart()
    /usr/lib/go-1.14/src/runtime/proc.go:1062 +0x6e

goroutine 1 [select]:
k8s.io/client-go/tools/leaderelection.(*LeaderElector).renew.func1.1(0x1380640, 0x0, 0xc000628180)
...
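
The runtime message suggests two things to look at: the node kernel version and the memlock limit of the process. A minimal sketch of how to check both from an affected node, assuming SSH access to the VM and that the sidecar process shows up as csi-attacher (the process name is an assumption from my setup; adjust it if it differs):

# Kernel the Go runtime compares against (it wants 5.3.15+, 5.4.2+, or 5.5+)
uname -r

# Max locked memory allowed for new processes in this shell (KiB)
ulimit -l

# Effective memlock limit of the running attacher process
grep "Max locked memory" /proc/$(pgrep -f csi-attacher | head -n1)/limits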

What you expected to happen:
The container should not fail so frequently.

How to reproduce it:
The failure started right after installing v0.7.0 of azurefile-csi-driver. I upgraded to v0.9.0 (for both azurefile and azuredisk) with the same results. The Kubernetes cluster is composed of 3 master nodes and 3 worker nodes running on Azure VMs (not AKS).

Anything else we need to know?:
Found a couple of issues in the golang/go repository that seem to be related:

Upgrading the Go version used to build these images from 1.14 to 1.15 may solve the problem.
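
To confirm which Go version a sidecar binary was actually built with, the go tool can inspect a binary directly with go version <file>. A minimal sketch, assuming Docker is available and that the attacher binary lives at /csi-attacher inside the image (the image tag and in-image path below are from my environment and may differ):

# Extract the binary from the image and read its embedded Go version
docker create --name attacher-tmp k8s.gcr.io/sig-storage/csi-attacher:v3.0.0
docker cp attacher-tmp:/csi-attacher ./csi-attacher
docker rm attacher-tmp
go version ./csi-attacher   # prints the Go toolchain version the binary was built with

If this reports go1.14.x, the crash matches the runtime's mlock workaround for the kernel signal-stack bug; as far as I understand, binaries built with Go 1.15 no longer use mlock and should not hit it.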

Environment:

  • CSI Driver version: v0.7.0 and v0.9.0
  • Kubernetes version (use kubectl version): v1.19.14
  • OS (e.g. from /etc/os-release): Ubuntu 20.04.1 LTS
  • Kernel (e.g. uname -a): 5.4.0-1032-azure #33-Ubuntu SMP Fri Nov 13 14:23:34 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
  • Install tools: Helm v3.4.2
  • Others:
    • Master node size: Standard D2ds_v4 (2 vcpus, 8 GiB memory)
    • Worker node size: Standard D16ds_v4 (16 vcpus, 64 GiB memory)

Complete log file: csi-attacher.log
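
In the meantime, a workaround in line with the runtime's own suggestion (raising the mlock limit) is to increase the memlock limit that the container runtime passes down to its containers. A minimal sketch for nodes where containerd runs under systemd; the drop-in path is hypothetical and whether this change is acceptable for your nodes is an assumption:

# Hypothetical drop-in: /etc/systemd/system/containerd.service.d/memlock.conf
#   [Service]
#   LimitMEMLOCK=infinity
sudo systemctl daemon-reload
sudo systemctl restart containerd

I believe already-running pods keep their old limit until they are recreated. Upgrading the node kernel to one of the versions listed in the log, or using images built with a newer Go, would be the longer-term fix.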

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale

@k8s-ci-robot added the lifecycle/stale label on Mar 11, 2021
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten

@k8s-ci-robot added the lifecycle/rotten label and removed the lifecycle/stale label on Apr 10, 2021
@fejta-bot

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-contributor-experience at kubernetes/community.
/close

@k8s-ci-robot

@fejta-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-contributor-experience at kubernetes/community.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
