
Extended.[k8s.io] ConfigMap should be consumable from pods in volume [Conformance] [Volume] #14876

Closed
bparees opened this issue Jun 25, 2017 · 7 comments
Labels: component/kubernetes, kind/test-flake, lifecycle/rotten, priority/P1

@bparees (Contributor) commented Jun 25, 2017

/openshifttmp/openshift/build-rpm-release/tito/rpmbuild-originRdjKmf/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/configmap.go:39
Expected error:
    <*errors.errorString | 0xc4217422a0>: {
        s: "expected pod \"pod-configmaps-6a15e904-598c-11e7-8c20-0ea2fa5297b4\" success: <nil>",
    }
    expected pod "pod-configmaps-6a15e904-598c-11e7-8c20-0ea2fa5297b4" success: <nil>
not to have occurred
/openshifttmp/openshift/build-rpm-release/tito/rpmbuild-originRdjKmf/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/util.go:2183

https://ci.openshift.redhat.com/jenkins/job/test_pull_request_origin_extended_conformance_gce/3599
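
For context, the failing case ("ConfigMap should be consumable from pods in volume") boils down to mounting a ConfigMap into a pod as a volume and reading the file back from a mounttest container. A minimal sketch of that shape, using the core/v1 Go types rather than the actual e2e helpers (the pod name, mount path, and mounttest args here are illustrative assumptions), looks roughly like:

```go
// Sketch only: approximates the pod this conformance test creates.
// Names, mount path, and mounttest args are illustrative assumptions.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func configMapVolumePod(configMapName string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "configmap-volume",
				VolumeSource: corev1.VolumeSource{
					ConfigMap: &corev1.ConfigMapVolumeSource{
						LocalObjectReference: corev1.LocalObjectReference{Name: configMapName},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:  "configmap-volume-test",
				Image: "gcr.io/google_containers/mounttest:0.8",
				Args:  []string{"--file_content=/etc/configmap-volume/data-1"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "configmap-volume",
					MountPath: "/etc/configmap-volume",
				}},
			}},
		},
	}
}

func main() {
	pod := configMapVolumePod("test-configmap")
	fmt.Printf("would create pod %q consuming ConfigMap %q\n",
		pod.Name, pod.Spec.Volumes[0].ConfigMap.Name)
}
```

The e2e framework then waits for that pod to succeed, which is the step that produced the `expected pod ... success: <nil>` error above.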

@derekwaynecarr (Member) commented
There was an upstream issue:
kubernetes/kubernetes#42782

which was traced back to a panic in kubelet:
kubernetes/kubernetes#42927

which was picked up in openshift bump here:
#13653

I can't access the logs anymore to check, but I suspect this flaked for a different reason than the more serious panic that was the source of the upstream flake, for which we already have the fix in OpenShift.

@0xmichalis (Contributor) commented

@smarterclayton (Contributor) commented

       s: "expected pod \"pod-projected-configmaps-e8a3e259-8b66-11e7-86e3-0ee7704fd4f6\" success: pod \"pod-projected-configmaps-e8a3e259-8b66-11e7-86e3-0ee7704fd4f6\" failed with status: {Phase:Failed Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-08-27 16:32:56 -0400 EDT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-08-27 16:32:56 -0400 EDT Reason:ContainersNotReady Message:containers with unready status: [projected-configmap-volume-test]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-08-27 16:32:56 -0400 EDT Reason: Message:}] Message: Reason: HostIP:10.128.0.3 PodIP:172.16.2.76 StartTime:2017-08-27 16:32:56 -0400 EDT InitContainerStatuses:[] ContainerStatuses:[{Name:projected-configmap-volume-test State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:128,Signal:0,Reason:ContainerCannotRun,Message:invalid header field value \"oci runtime error: container_linux.go:247: starting container process caused \\\"process_linux.go:258: applying cgroup configuration for process caused \\\\\\\"open /sys/fs/cgroup/pids/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode8a8b664_8b66_11e7_92c8_42010a800005.slice/docker-e76f1ade3e27961339a1c690a22206caef83fd163de02ca009e16ceed0861e1f.scope/cgroup.procs: no such file or directory\\\\\\\"\\\"\\n\",StartedAt:2017-08-27 16:32:57 -0400 EDT,FinishedAt:2017-08-27 16:32:57 -0400 EDT,ContainerID:docker://e76f1ade3e27961339a1c690a22206caef83fd163de02ca009e16ceed0861e1f,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:gcr.io/google_containers/mounttest:0.8 ImageID:docker-pullable://gcr.io/google_containers/mounttest@sha256:bec3122ddcf8bd999e44e46e096659f31241d09f5236bc3dc212ea584ca06856 ContainerID:docker://e76f1ade3e27961339a1c690a22206caef83fd163de02ca009e16ceed0861e1f}] QOSClass:BestEffort}",

Looks like a cgroup race, probably new (I think the old one is fixed). Repurposing and bumping priority.

@sjenning (you should add a sig-node group so we can start mentioning that).
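
For anyone triaging further occurrences: the distinguishing signature here is a container terminated with Reason ContainerCannotRun and a message pointing at a missing cgroup.procs path, rather than the kubelet panic behind the earlier upstream flake. A hypothetical helper for spotting that pattern in a pod status (not part of the e2e suite; the function name and string heuristic are assumptions) could look like:

```go
// Hypothetical triage helper, not code from the e2e suite.
package main

import (
	"fmt"
	"strings"

	corev1 "k8s.io/api/core/v1"
)

// looksLikeCgroupRace reports whether any container in the pod status
// terminated with ContainerCannotRun and a message mentioning a cgroup
// path, i.e. the failure mode quoted above rather than a kubelet panic.
func looksLikeCgroupRace(status corev1.PodStatus) bool {
	for _, cs := range status.ContainerStatuses {
		term := cs.State.Terminated
		if term == nil {
			continue
		}
		if term.Reason == "ContainerCannotRun" &&
			strings.Contains(term.Message, "/sys/fs/cgroup/") {
			return true
		}
	}
	return false
}

func main() {
	// Example status shaped like the dump above.
	status := corev1.PodStatus{
		ContainerStatuses: []corev1.ContainerStatus{{
			Name: "projected-configmap-volume-test",
			State: corev1.ContainerState{
				Terminated: &corev1.ContainerStateTerminated{
					ExitCode: 128,
					Reason:   "ContainerCannotRun",
					Message:  "open /sys/fs/cgroup/pids/kubepods.slice/.../cgroup.procs: no such file or directory",
				},
			},
		}},
	}
	fmt.Println("cgroup race signature:", looksLikeCgroupRace(status))
}
```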

@openshift-bot (Contributor) commented

Issues go stale after 90d of inactivity.

Mark the issue as fresh by commenting /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
Exclude this issue from closing by commenting /lifecycle frozen.

If this issue is safe to close now please do so with /close.

/lifecycle stale

@openshift-ci-robot added the lifecycle/stale label on Feb 18, 2018
@openshift-bot (Contributor) commented

Stale issues rot after 30d of inactivity.

Mark the issue as fresh by commenting /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
Exclude this issue from closing by commenting /lifecycle frozen.

If this issue is safe to close now please do so with /close.

/lifecycle rotten
/remove-lifecycle stale

@openshift-ci-robot added the lifecycle/rotten label and removed the lifecycle/stale label on Mar 20, 2018
@openshift-bot (Contributor) commented

Rotten issues close after 30d of inactivity.

Reopen the issue by commenting /reopen.
Mark the issue as fresh by commenting /remove-lifecycle rotten.
Exclude this issue from closing again by commenting /lifecycle frozen.

/close
