
[k8s.io] Projected should be consumable from pods in volume [Conformance] [Volume] {E2eNode Suite} #42920

Closed
k8s-github-robot opened this issue Mar 10, 2017 · 5 comments
Labels: kind/flake, priority/critical-urgent, release-blocker, sig/node, sig/storage
Milestone: v1.6

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-node-kubelet/2441/
Failed: [k8s.io] Projected should be consumable from pods in volume [Conformance] [Volume] {E2eNode Suite}

/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected.go:40
Expected error:
    <*errors.errorString | 0xc4203a8e40>: {
        s: "failed to get logs from pod-projected-secrets-1239307e-05e1-11e7-91c0-42010a80000b for projected-secret-volume-test: an error on the server (\"unknown\") has prevented the request from succeeding (get pods pod-projected-secrets-1239307e-05e1-11e7-91c0-42010a80000b)",
    }
    failed to get logs from pod-projected-secrets-1239307e-05e1-11e7-91c0-42010a80000b for projected-secret-volume-test: an error on the server ("unknown") has prevented the request from succeeding (get pods pod-projected-secrets-1239307e-05e1-11e7-91c0-42010a80000b)
not to have occurred
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2195
@k8s-github-robot k8s-github-robot added kind/flake Categorizes issue or PR as related to a flaky test. priority/P2 sig/storage Categorizes an issue or PR as relevant to SIG Storage. labels Mar 10, 2017
@yujuhong (Contributor)

Found something in kubelet.log... the kubelet panicked.

I0310 22:29:50.140506   10834 kubelet_pods.go:879] Killing unwanted pod "pod-projected-secrets-1239307e-05e1-11e7-91c0-42010a80000b"
I0310 22:29:50.142556   10834 plugins.go:410] Calling network plugin kubenet to tear down pod "pod-projected-secrets-1239307e-05e1-11e7-91c0-42010a80000b_e2e-tests-projected-84c6m"
I0310 22:29:50.148760   10834 kubenet_linux.go:471] Failed to remove pod IP 10.180.0.131 from shaper: Failed to find cidr: 10.180.0.131/32 on interface: cbr0
I0310 22:29:50.149521   10834 reconciler.go:352] Detached volume "kubernetes.io/projected/12394362-05e1-11e7-a478-42010a80000b-projected-secret-volume" (spec.Name: "projected-secret-volume") devicePath: ""
I0310 22:29:50.149895   10834 kubenet_linux.go:789] Removing e2e-tests-projected-84c6m/pod-projected-secrets-1239307e-05e1-11e7-91c0-42010a80000b from 'kubenet' with CNI 'bridge' plugin and runtime: &{ContainerID:90dac145a58da154fff23b9856d39d4d385f6537b6338f3e1c20f9964d97f85a NetNS:/proc/16056/ns/net IfName:eth0 Args:[]}
I0310 22:29:50.204578   10834 iptables.go:361] running iptables -C [POSTROUTING -t nat -m comment --comment kubenet: SNAT for outbound traffic from cluster -m addrtype ! --dst-type LOCAL ! -d 10.0.0.0/8 -j MASQUERADE]
I0310 22:29:50.214800   10834 kubenet_linux.go:520] TearDownPod took 72.218579ms for e2e-tests-projected-84c6m/pod-projected-secrets-1239307e-05e1-11e7-91c0-42010a80000b
I0310 22:29:50.219654   10834 kubelet_pods.go:1537] Orphaned pod "12394362-05e1-11e7-a478-42010a80000b" found, removing pod cgroups
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x20 pc=0xddd21b]

goroutine 15977 [running]:
panic(0x2d90020, 0xc4200100b0)
	/usr/local/go/src/runtime/panic.go:500 +0x1a1
k8s.io/kubernetes/pkg/kubelet/cm.(*cgroupManagerImpl).Pids.func1(0xc421cf9e40, 0xa9, 0x0, 0x0, 0x4d237c0, 0xc420ddcb70, 0x0, 0x0)
	/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/pkg/kubelet/cm/cgroup_manager_linux.go:452 +0x3b
path/filepath.walk(0xc4215a5e00, 0x93, 0x4d47ae0, 0xc420e3add0, 0xc42057bf00, 0x0, 0x0)
	/usr/local/go/src/path/filepath/path.go:372 +0x22e
path/filepath.walk(0xc420f92a20, 0x52, 0x4d47ae0, 0xc420e3ad00, 0xc42057bf00, 0x0, 0x1)
	/usr/local/go/src/path/filepath/path.go:376 +0x344
path/filepath.Walk(0xc420f92a20, 0x52, 0xc42057bf00, 0x0, 0x0)
	/usr/local/go/src/path/filepath/path.go:398 +0xd5
k8s.io/kubernetes/pkg/kubelet/cm.(*cgroupManagerImpl).Pids(0xc4202e45a0, 0xc420da6580, 0x3c, 0xc421a1ef28, 0xc420ffe3c0, 0xc420ffef60)
	/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/pkg/kubelet/cm/cgroup_manager_linux.go:466 +0x380
k8s.io/kubernetes/pkg/kubelet/cm.(*podContainerManagerImpl).tryKillingCgroupProcesses(0xc4209de0a0, 0xc420da6580, 0x3c, 0x43d515, 0xc421a1eed8)
	/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/pkg/kubelet/cm/pod_container_manager_linux.go:120 +0x7e
k8s.io/kubernetes/pkg/kubelet/cm.(*podContainerManagerImpl).Destroy(0xc4209de0a0, 0xc420da6580, 0x3c, 0x1, 0xc421d73540)
	/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/pkg/kubelet/cm/pod_container_manager_linux.go:157 +0x5a
created by k8s.io/kubernetes/pkg/kubelet.(*Kubelet).cleanupOrphanedPodCgroups
	/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/pkg/kubelet/kubelet_pods.go:1542 +0x381

/cc @derekwaynecarr @dchen1107 @kubernetes/sig-node-bugs
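
For reference, the trace bottoms out in the WalkFunc passed to filepath.Walk (Pids.func1, cgroup_manager_linux.go:452). filepath.Walk invokes its callback with info == nil and a non-nil err whenever it cannot lstat a path, so a callback that dereferences info before checking err will nil-pointer panic if the pod cgroup directory is removed out from under the walk. A minimal standalone reproduction of that pitfall (hypothetical path, not the actual kubelet source):

package main

import (
	"os"
	"path/filepath"
)

func main() {
	// filepath.Walk calls the WalkFunc with info == nil and err describing
	// the failure when it cannot lstat a path (e.g. the directory was
	// deleted concurrently). Dereferencing info without checking err first
	// panics with the same "invalid memory address or nil pointer
	// dereference" seen above.
	_ = filepath.Walk("/sys/fs/cgroup/cpu/kubepods/pod-gone", // hypothetical, nonexistent path
		func(path string, info os.FileInfo, err error) error {
			if !info.IsDir() { // BUG: info is nil here because err != nil
				return nil
			}
			return nil
		})
}

Run as-is on a machine without that cgroup path, this panics immediately; adding an "if err != nil" guard at the top of the callback makes it exit cleanly.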

@yujuhong yujuhong removed their assignment Mar 10, 2017
@yujuhong yujuhong added the priority/critical-urgent Highest priority. Must be actively worked on as someone's top priority right now. label Mar 10, 2017
@yujuhong yujuhong added this to the v1.6 milestone Mar 10, 2017
@yujuhong yujuhong added the sig/node Categorizes an issue or PR as relevant to SIG Node. label Mar 10, 2017
@yujuhong (Contributor)

Happened in https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-node-kubelet/2431 too

Mar 10 07:58:25 localhost kubelet[2233]: I0310 07:58:25.438662    2233 iptables.go:361] running iptables -C [POSTROUTING -t nat -m comment --comment kubenet: SNAT for outbound traffic from cluster -m addrtype ! --dst-type LOCAL ! -d 10.0.0.0/8 -j MASQUERADE]
Mar 10 07:58:25 localhost kubelet[2233]: I0310 07:58:25.450507    2233 kubenet_linux.go:520] TearDownPod took 145.247277ms for e2e-tests-downward-api-hxsbr/downward-api-55e5d34b-0567-11e7-b233-42010a800012
Mar 10 07:58:25 localhost kubelet[2233]: panic: runtime error: invalid memory address or nil pointer dereference
Mar 10 07:58:25 localhost kubelet[2233]: [signal SIGSEGV: segmentation violation code=0x1 addr=0x20 pc=0xddd10b]
Mar 10 07:58:25 localhost kubelet[2233]: goroutine 24560 [running]:
Mar 10 07:58:25 localhost kubelet[2233]: panic(0x2d8e240, 0xc42000e040)
Mar 10 07:58:25 localhost kubelet[2233]:         /usr/local/go/src/runtime/panic.go:500 +0x1a1
Mar 10 07:58:25 localhost kubelet[2233]: k8s.io/kubernetes/pkg/kubelet/cm.(*cgroupManagerImpl).Pids.func1(0xc421de0d10, 0xa9, 0x0, 0x0, 0x4d1f7a0, 0xc4210f3620, 0x0, 0x0)
Mar 10 07:58:25 localhost kubelet[2233]:         /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/pkg/kubelet/cm/cgroup_manager_linux.go:452 +0x3b
Mar 10 07:58:25 localhost kubelet[2233]: path/filepath.walk(0xc4216890e0, 0x93, 0x4d439e0, 0xc422058410, 0xc4209eaca0, 0x0, 0x0)
Mar 10 07:58:25 localhost kubelet[2233]:         /usr/local/go/src/path/filepath/path.go:372 +0x22e
Mar 10 07:58:25 localhost kubelet[2233]: path/filepath.walk(0xc421ded020, 0x52, 0x4d439e0, 0xc422058340, 0xc4209eaca0, 0x0, 0x1)
Mar 10 07:58:25 localhost kubelet[2233]:         /usr/local/go/src/path/filepath/path.go:376 +0x344
Mar 10 07:58:25 localhost kubelet[2233]: path/filepath.Walk(0xc421ded020, 0x52, 0xc4209eaca0, 0x0, 0x0)
Mar 10 07:58:25 localhost kubelet[2233]:         /usr/local/go/src/path/filepath/path.go:398 +0xd5
Mar 10 07:58:25 localhost kubelet[2233]: k8s.io/kubernetes/pkg/kubelet/cm.(*cgroupManagerImpl).Pids(0xc42042af40, 0xc421540b80, 0x3c, 0xc4208ba750, 0xc422098ff0, 0xc42093d560)
Mar 10 07:58:25 localhost kubelet[2233]:         /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/pkg/kubelet/cm/cgroup_manager_linux.go:466 +0x380
Mar 10 07:58:25 localhost kubelet[2233]: k8s.io/kubernetes/pkg/kubelet/cm.(*podContainerManagerImpl).tryKillingCgroupProcesses(0xc4209aabe0, 0xc421540b80, 0x3c, 0x58c25ca1, 0xc4182e4e65)
Mar 10 07:58:25 localhost kubelet[2233]:         /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/pkg/kubelet/cm/pod_container_manager_linux.go:120 +0x7e
Mar 10 07:58:25 localhost kubelet[2233]: k8s.io/kubernetes/pkg/kubelet/cm.(*podContainerManagerImpl).Destroy(0xc4209aabe0, 0xc421540b80, 0x3c, 0x1, 0xc421e61540)
Mar 10 07:58:25 localhost kubelet[2233]:         /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/pkg/kubelet/cm/pod_container_manager_linux.go:157 +0x5a
Mar 10 07:58:25 localhost kubelet[2233]: created by k8s.io/kubernetes/pkg/kubelet.(*Kubelet).cleanupOrphanedPodCgroups
Mar 10 07:58:25 localhost kubelet[2233]:         /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/pkg/kubelet/kubelet_pods.go:1542 +0x381
Mar 10 07:58:25 localhost systemd[1]: kubelet-2090179018.service: Main process exited, code=exited, status=2/INVALIDARGUMENT

@Random-Liu (Member) commented Mar 10, 2017

@dchen1107 (Member)

@derekwaynecarr and @vishh I am assigning this one to you two since it is related to pod cgroup cleanup logic.

@Random-Liu (Member) commented Mar 10, 2017

Sent a PR to fix this: #42927. Hope it makes sense.
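
For anyone landing here later, the shape of the guard looks like this (a simplified sketch of the pattern with hypothetical names, not the exact diff in #42927):

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// walkCgroupDirs is a simplified, hypothetical stand-in for the visitor in
// cgroupManagerImpl.Pids: same filepath.Walk structure, but the WalkFunc's
// err is handled before info is dereferenced, since pod cgroup directories
// can be removed while the walk is in flight.
func walkCgroupDirs(root string) error {
	return filepath.Walk(root, func(path string, info os.FileInfo, err error) error {
		if err != nil {
			// The path vanished or became unreadable mid-walk (pods are
			// torn down concurrently); skip instead of panicking on a
			// nil info.
			fmt.Fprintf(os.Stderr, "skipping %s: %v\n", path, err)
			return filepath.SkipDir
		}
		if info.IsDir() {
			fmt.Println("scanning cgroup dir:", path)
		}
		return nil
	})
}

func main() {
	if err := walkCgroupDirs("/sys/fs/cgroup/cpu/kubepods"); err != nil { // illustrative root only
		fmt.Fprintln(os.Stderr, "walk finished with:", err)
	}
}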
