
[k8s.io] Downward API should provide default limits.cpu/memory from node allocatable [Conformance] {E2eNode Suite} #42875

Closed
k8s-github-robot opened this issue Mar 10, 2017 · 2 comments

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-node-kubelet/2431/
Failed: [k8s.io] Downward API should provide default limits.cpu/memory from node allocatable [Conformance] {E2eNode Suite}

/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:175
Expected error:
    <*errors.errorString | 0xc420beb600>: {
        s: "failed to get logs from downward-api-55e5d34b-0567-11e7-b233-42010a800012 for dapi-container: the server could not find the requested resource (get pods downward-api-55e5d34b-0567-11e7-b233-42010a800012)",
    }
    failed to get logs from downward-api-55e5d34b-0567-11e7-b233-42010a800012 for dapi-container: the server could not find the requested resource (get pods downward-api-55e5d34b-0567-11e7-b233-42010a800012)
not to have occurred
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2195
@k8s-github-robot k8s-github-robot added kind/flake Categorizes issue or PR as related to a flaky test. priority/P2 sig/node Categorizes an issue or PR as relevant to SIG Node. labels Mar 10, 2017
@derekwaynecarr (Member)

This looks like the same issue fixed by #42927.

@derekwaynecarr (Member)

For reference, this showed the following panic:

Mar 10 07:58:25 localhost kubelet[2233]: I0310 07:58:25.450507    2233 kubenet_linux.go:520] TearDownPod took 145.247277ms for e2e-tests-downward-api-hxsbr/downward-api-55e5d34b-0567-11e7-b233-42010a800012
Mar 10 07:58:25 localhost kubelet[2233]: panic: runtime error: invalid memory address or nil pointer dereference
Mar 10 07:58:25 localhost kubelet[2233]: [signal SIGSEGV: segmentation violation code=0x1 addr=0x20 pc=0xddd10b]
Mar 10 07:58:25 localhost kubelet[2233]: goroutine 24560 [running]:
Mar 10 07:58:25 localhost kubelet[2233]: panic(0x2d8e240, 0xc42000e040)
Mar 10 07:58:25 localhost kubelet[2233]:         /usr/local/go/src/runtime/panic.go:500 +0x1a1
Mar 10 07:58:25 localhost kubelet[2233]: k8s.io/kubernetes/pkg/kubelet/cm.(*cgroupManagerImpl).Pids.func1(0xc421de0d10, 0xa9, 0x0, 0x0, 0x4d1f7a0, 0xc4210f3620, 0x0, 0x0)
Mar 10 07:58:25 localhost kubelet[2233]:         /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/pkg/kubelet/cm/cgroup_manager_linux.go:452 +0x3b
Mar 10 07:58:25 localhost kubelet[2233]: path/filepath.walk(0xc4216890e0, 0x93, 0x4d439e0, 0xc422058410, 0xc4209eaca0, 0x0, 0x0)
Mar 10 07:58:25 localhost kubelet[2233]:         /usr/local/go/src/path/filepath/path.go:372 +0x22e
Mar 10 07:58:25 localhost kubelet[2233]: path/filepath.walk(0xc421ded020, 0x52, 0x4d439e0, 0xc422058340, 0xc4209eaca0, 0x0, 0x1)
Mar 10 07:58:25 localhost kubelet[2233]:         /usr/local/go/src/path/filepath/path.go:376 +0x344
Mar 10 07:58:25 localhost kubelet[2233]: path/filepath.Walk(0xc421ded020, 0x52, 0xc4209eaca0, 0x0, 0x0)
Mar 10 07:58:25 localhost kubelet[2233]:         /usr/local/go/src/path/filepath/path.go:398 +0xd5
Mar 10 07:58:25 localhost kubelet[2233]: k8s.io/kubernetes/pkg/kubelet/cm.(*cgroupManagerImpl).Pids(0xc42042af40, 0xc421540b80, 0x3c, 0xc4208ba750, 0xc422098ff0, 0xc42093d560)
Mar 10 07:58:25 localhost kubelet[2233]:         /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/pkg/kubelet/cm/cgroup_manager_linux.go:466 +0x380
Mar 10 07:58:25 localhost kubelet[2233]: k8s.io/kubernetes/pkg/kubelet/cm.(*podContainerManagerImpl).tryKillingCgroupProcesses(0xc4209aabe0, 0xc421540b80, 0x3c, 0x58c25ca1, 0xc4182e4e65)
Mar 10 07:58:25 localhost kubelet[2233]:         /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/pkg/kubelet/cm/pod_container_manager_linux.go:120 +0x7e
Mar 10 07:58:25 localhost kubelet[2233]: k8s.io/kubernetes/pkg/kubelet/cm.(*podContainerManagerImpl).Destroy(0xc4209aabe0, 0xc421540b80, 0x3c, 0x1, 0xc421e61540)
Mar 10 07:58:25 localhost kubelet[2233]:         /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/pkg/kubelet/cm/pod_container_manager_linux.go:157 +0x5a
Mar 10 07:58:25 localhost kubelet[2233]: created by k8s.io/kubernetes/pkg/kubelet.(*Kubelet).cleanupOrphanedPodCgroups
Mar 10 07:58:25 localhost kubelet[2233]:         /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/pkg/kubelet/kubelet_pods.go:1542 +0x381

which is fixed in the referenced PR.
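
The trace points at the `WalkFunc` closure inside `(*cgroupManagerImpl).Pids` (`cgroup_manager_linux.go:452`). Go's `filepath.Walk` invokes its callback with a nil `os.FileInfo` whenever it fails to stat an entry, which can happen here when a pod cgroup directory is removed concurrently during teardown. A callback that touches `info` before checking `err` then dereferences a nil interface. A minimal sketch of that failure mode (the root path and print are illustrative, not the kubelet code):

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	// Illustrative root; any tree whose entries can vanish mid-walk
	// (such as pod cgroup directories during teardown) hits this race.
	root := "/sys/fs/cgroup/memory/kubepods"

	_ = filepath.Walk(root, func(path string, info os.FileInfo, err error) error {
		// Buggy pattern: when Walk fails to lstat an entry it calls this
		// callback with err != nil and info == nil, so the dereference
		// below panics with "invalid memory address or nil pointer
		// dereference", matching the SIGSEGV in the kubelet log above.
		if info.IsDir() {
			fmt.Println("dir:", path)
		}
		return nil
	})
}
```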

@ethernetdan ethernetdan added this to the v1.6 milestone Mar 13, 2017
liggitt pushed a commit to liggitt/kubernetes that referenced this issue Mar 14, 2017
Automatic merge from submit-queue (batch tested with PRs 42802, 42927, 42669, 42988, 43012)

Fix kubelet panic in cgroup manager.

Fixes kubernetes#42920
Fixes kubernetes#42875
Fixes kubernetes#42927
Fixes kubernetes#43059

Check the error in the walk function so that we don't use `info` when there is an error.
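
As a rough sketch of that guard (not the exact diff from the referenced PR), the callback handles `err` before using `info`; this drops into the sketch above in place of the buggy callback:

```go
_ = filepath.Walk(root, func(path string, info os.FileInfo, err error) error {
	if err != nil {
		// The entry disappeared or could not be statted mid-walk; skip
		// it and keep walking instead of dereferencing a nil info below.
		return nil
	}
	if info.IsDir() {
		fmt.Println("dir:", path)
	}
	return nil
})
```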

@yujuhong @dchen1107 @derekwaynecarr @vishh /cc @kubernetes/sig-node-bugs