
Fix RunAsGroup. #65926

Merged
merged 1 commit into kubernetes:master from Random-Liu:fix-run-as-group on Jul 7, 2018

Conversation

Random-Liu (Member) commented Jul 6, 2018

For kubernetes/enhancements#213
See containerd/cri#836

In containerd/cri#836, people thought this was a containerd issue. However, it turns out that the feature doesn't work at all; the problem is on the Kubernetes side. @krmayankk
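
For reference, here is a minimal sketch (illustrative only, neither the fix itself nor the real e2e test code in test/e2e/node/security_context.go) of the kind of pod the RunAsGroup e2e test creates: a container that runs `id` with both RunAsUser and RunAsGroup set on its security context. The pod name, image, and numeric IDs are assumptions.

```go
// Minimal sketch (illustrative only; not the fix or the real e2e test) of a
// pod with both RunAsUser and RunAsGroup set on the container's security
// context. The pod name, image, and numeric IDs are assumptions.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func runAsGroupTestPod() *corev1.Pod {
	uid := int64(1002)
	gid := int64(2002)
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "security-context-test"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "busybox",
				// The container just prints its uid/gid; the test checks this output.
				Command: []string{"sh", "-c", "id"},
				SecurityContext: &corev1.SecurityContext{
					RunAsUser:  &uid, // expect "uid=1002" in the id output
					RunAsGroup: &gid, // expect "gid=2002" once RunAsGroup is honored
				},
			}},
		},
	}
}

func main() {
	fmt.Println(runAsGroupTestPod().Spec.Containers[0].SecurityContext)
}
```

When the kubelet honors RunAsGroup, the `id` output should contain `gid=2002`; the failing run below shows that before this change it did not.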

Without the fix:

• Failure [10.125 seconds]
[k8s.io] [sig-node] Security Context [Feature:SecurityContext]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:679
  should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [Feature:RunAsGroup] [It]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:112

  Expected error:
      <*errors.errorString | 0xc42185bcd0>: {
          s: "expected \"gid=2002\" in container output: Expected\n    <string>: uid=1002 gid=1002\n    \nto contain substring\n    <string>: gid=2002",
      }
      expected "gid=2002" in container output: Expected
          <string>: uid=1002 gid=1002
          
      to contain substring
          <string>: gid=2002
  not to have occurred

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2325

With the fix:

SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Security Context [Feature:SecurityContext] 
  should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [Feature:RunAsGroup]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:112
[BeforeEach] [k8s.io] [sig-node] Security Context [Feature:SecurityContext]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul  6 15:38:43.994: INFO: >>> kubeConfig: /var/run/kubernetes/admin.kubeconfig
STEP: Building a namespace api object, basename security-context
Jul  6 15:38:44.024: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [Feature:RunAsGroup]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:112
STEP: Creating a pod to test pod.Spec.SecurityContext.RunAsUser
Jul  6 15:38:44.027: INFO: Waiting up to 5m0s for pod "security-context-56aac70e-816d-11e8-91cd-8cdcd43ac064" in namespace "e2e-tests-security-context-hwm7l" to be "success or failure"
Jul  6 15:38:44.029: INFO: Pod "security-context-56aac70e-816d-11e8-91cd-8cdcd43ac064": Phase="Pending", Reason="", readiness=false. Elapsed: 1.17106ms
Jul  6 15:38:46.031: INFO: Pod "security-context-56aac70e-816d-11e8-91cd-8cdcd43ac064": Phase="Pending", Reason="", readiness=false. Elapsed: 2.003308423s
Jul  6 15:38:48.033: INFO: Pod "security-context-56aac70e-816d-11e8-91cd-8cdcd43ac064": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.005287901s
STEP: Saw pod success
Jul  6 15:38:48.033: INFO: Pod "security-context-56aac70e-816d-11e8-91cd-8cdcd43ac064" satisfied condition "success or failure"
Jul  6 15:38:48.034: INFO: Trying to get logs from node 127.0.0.1 pod security-context-56aac70e-816d-11e8-91cd-8cdcd43ac064 container test-container: <nil>
STEP: delete the pod
Jul  6 15:38:48.047: INFO: Waiting for pod security-context-56aac70e-816d-11e8-91cd-8cdcd43ac064 to disappear
Jul  6 15:38:48.049: INFO: Pod security-context-56aac70e-816d-11e8-91cd-8cdcd43ac064 no longer exists
[AfterEach] [k8s.io] [sig-node] Security Context [Feature:SecurityContext]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul  6 15:38:48.049: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-security-context-hwm7l" for this suite.
Jul  6 15:38:54.057: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  6 15:38:54.084: INFO: namespace: e2e-tests-security-context-hwm7l, resource: bindings, ignored listing per whitelist
Jul  6 15:38:54.107: INFO: namespace e2e-tests-security-context-hwm7l deletion completed in 6.056285097s

• [SLOW TEST:10.113 seconds]
[k8s.io] [sig-node] Security Context [Feature:SecurityContext]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:679
  should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [Feature:RunAsGroup]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:112
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Security Context [Feature:SecurityContext] 
  should support pod.Spec.SecurityContext.RunAsUser And pod.Spec.SecurityContext.RunAsGroup [Feature:RunAsGroup]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:84
[BeforeEach] [k8s.io] [sig-node] Security Context [Feature:SecurityContext]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jul  6 15:38:54.108: INFO: >>> kubeConfig: /var/run/kubernetes/admin.kubeconfig
STEP: Building a namespace api object, basename security-context
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support pod.Spec.SecurityContext.RunAsUser And pod.Spec.SecurityContext.RunAsGroup [Feature:RunAsGroup]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:84
STEP: Creating a pod to test pod.Spec.SecurityContext.RunAsUser
Jul  6 15:38:54.137: INFO: Waiting up to 5m0s for pod "security-context-5cb16d23-816d-11e8-91cd-8cdcd43ac064" in namespace "e2e-tests-security-context-hs2vr" to be "success or failure"
Jul  6 15:38:54.138: INFO: Pod "security-context-5cb16d23-816d-11e8-91cd-8cdcd43ac064": Phase="Pending", Reason="", readiness=false. Elapsed: 1.374422ms
Jul  6 15:38:56.140: INFO: Pod "security-context-5cb16d23-816d-11e8-91cd-8cdcd43ac064": Phase="Pending", Reason="", readiness=false. Elapsed: 2.003249942s
Jul  6 15:38:58.142: INFO: Pod "security-context-5cb16d23-816d-11e8-91cd-8cdcd43ac064": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.005110799s
STEP: Saw pod success
Jul  6 15:38:58.142: INFO: Pod "security-context-5cb16d23-816d-11e8-91cd-8cdcd43ac064" satisfied condition "success or failure"
Jul  6 15:38:58.143: INFO: Trying to get logs from node 127.0.0.1 pod security-context-5cb16d23-816d-11e8-91cd-8cdcd43ac064 container test-container: <nil>
STEP: delete the pod
Jul  6 15:38:58.149: INFO: Waiting for pod security-context-5cb16d23-816d-11e8-91cd-8cdcd43ac064 to disappear
Jul  6 15:38:58.152: INFO: Pod security-context-5cb16d23-816d-11e8-91cd-8cdcd43ac064 no longer exists
[AfterEach] [k8s.io] [sig-node] Security Context [Feature:SecurityContext]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jul  6 15:38:58.152: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-security-context-hs2vr" for this suite.
Jul  6 15:39:04.157: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  6 15:39:04.175: INFO: namespace: e2e-tests-security-context-hs2vr, resource: bindings, ignored listing per whitelist
Jul  6 15:39:04.193: INFO: namespace e2e-tests-security-context-hs2vr deletion completed in 6.039953722s

• [SLOW TEST:10.085 seconds]
[k8s.io] [sig-node] Security Context [Feature:SecurityContext]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:679
  should support pod.Spec.SecurityContext.RunAsUser And pod.Spec.SecurityContext.RunAsGroup [Feature:RunAsGroup]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:84
------------------------------
SSSSJul  6 15:39:04.193: INFO: Running AfterSuite actions on all node
Jul  6 15:39:04.193: INFO: Running AfterSuite actions on node 1

Ran 2 of 1007 Specs in 50.246 seconds
SUCCESS! -- 2 Passed | 0 Failed | 0 Pending | 1005 Skipped PASS

Ginkgo ran 1 suite in 50.482926642s
Test Suite Passed
2018/07/06 15:39:04 process.go:155: Step './hack/ginkgo-e2e.sh -host=https://localhost:6443 --ginkgo.focus=RunAsGroup' finished in 50.523613088s
2018/07/06 15:39:04 e2e.go:83: Done

We should cherry-pick this to 1.10 and 1.11. /cc @kubernetes/sig-node-bugs

Fix `RunAsGroup`, which has not worked since 1.10.
k8s-github-robot (Contributor) commented Jul 6, 2018

[MILESTONENOTIFIER] Milestone Pull Request Needs Approval

@Random-Liu @yujuhong @kubernetes/sig-node-misc

Action required: This pull request must have the status/approved-for-milestone label applied by a SIG maintainer.

Pull Request Labels
  • sig/node: Pull Request will be escalated to these SIGs if needed.
  • priority/critical-urgent: Never automatically move pull request out of a release milestone; continually escalate to contributor and SIG through all available channels.
  • kind/bug: Fixes a bug discovered during the current release.
yujuhong (Member) commented Jul 6, 2018

Do the tests actually run in any test job? How come this was not caught by any test at all?...

yujuhong (Member) commented Jul 6, 2018

> Do the tests actually run in any test job? How come this was not caught by any test at all?...

@Random-Liu pointed out that the test is in cluster (i.e., not node) e2e suites, and by default [Feature:] tests are not run.

First, I see no reason for this test to be in the cluster e2e (e2e/node). It should either be in e2e/common or e2e_node, so that it runs as part of the node e2e. Second, the test should run somewhere. It's an alpha feature, so it should be included in the alpha test job: https://k8s-testgrid.appspot.com/google-gce#gci-gce-alpha-features

For the cherrypick, I think including the test in the alpha test job will be easier.
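
As a rough illustration of how that gating works (a sketch under assumptions, not the real wiring in test/e2e/node/security_context.go): the `[Feature:...]` markers live in the spec text itself, and Ginkgo selects specs by matching `--ginkgo.focus` / `--ginkgo.skip` regexes against that text. Default e2e jobs typically skip `\[Feature:.+\]`, so only a run that explicitly focuses the tag, like the `--ginkgo.focus=RunAsGroup` invocation in the log above or the alpha-features job, will ever execute it.

```go
// Sketch of a feature-tagged e2e spec (assumed shape, not the actual test).
// A CI job that passes --ginkgo.skip='\[Feature:.+\]' never runs this spec;
// one that passes --ginkgo.focus='\[Feature:RunAsGroup\]' runs only it.
package e2e

import . "github.com/onsi/ginkgo"

var _ = Describe("[sig-node] Security Context [Feature:SecurityContext]", func() {
	It("should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [Feature:RunAsGroup]", func() {
		// Create a pod with RunAsUser/RunAsGroup set and assert on its `id` output.
	})
})
```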

yujuhong (Member) commented Jul 6, 2018

/lgtm

k8s-ci-robot added the lgtm label Jul 6, 2018

k8s-ci-robot (Contributor) commented Jul 6, 2018

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: Random-Liu, yujuhong

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

fejta-bot commented Jul 7, 2018

/retest
This bot automatically retries jobs that failed/flaked on approved PRs (send feedback to fejta).

Review the full test history for this PR.

Silence the bot with an /lgtm cancel comment for consistent failures.

k8s-github-robot (Contributor) commented Jul 7, 2018

/test all [submit-queue is verifying that this PR is safe to merge]

k8s-ci-robot (Contributor) commented Jul 7, 2018

@Random-Liu: The following test failed, say /retest to rerun them all:

Test name | Commit | Details | Rerun command
pull-kubernetes-local-e2e-containerized | 3193a4a | link | /test pull-kubernetes-local-e2e-containerized

Full PR test history. Your PR dashboard. Please help us cut down on flakes by linking to an open issue when you hit one in your PR.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.

k8s-github-robot (Contributor) commented Jul 7, 2018

Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions here.

k8s-github-robot merged commit f634f7d into kubernetes:master Jul 7, 2018

15 of 17 checks passed

pull-kubernetes-local-e2e-containerized: Job failed.
Submit Queue: Required Github CI test is not green: pull-kubernetes-kubemark-e2e-gce-big
cla/linuxfoundation: Random-Liu authorized
pull-kubernetes-bazel-build: Job succeeded.
pull-kubernetes-bazel-test: Job succeeded.
pull-kubernetes-cross: Skipped
pull-kubernetes-e2e-gce: Job succeeded.
pull-kubernetes-e2e-gce-100-performance: Job succeeded.
pull-kubernetes-e2e-gce-device-plugin-gpu: Job succeeded.
pull-kubernetes-e2e-gke: Skipped
pull-kubernetes-e2e-kops-aws: Job succeeded.
pull-kubernetes-integration: Job succeeded.
pull-kubernetes-kubemark-e2e-gce-big: Job succeeded.
pull-kubernetes-local-e2e: Skipped
pull-kubernetes-node-e2e: Job succeeded.
pull-kubernetes-typecheck: Job succeeded.
pull-kubernetes-verify: Job succeeded.

Random-Liu deleted the Random-Liu:fix-run-as-group branch Jul 7, 2018

k8s-github-robot pushed a commit that referenced this pull request Jul 12, 2018

Kubernetes Submit Queue
Merge pull request #65934 from Random-Liu/automated-cherry-pick-of-#65926-upstream-release-1.10

Automatic merge from submit-queue.

Automated cherry pick of #65926: Fix RunAsGroup.

Cherry pick of #65926 on release-1.10.

#65926: Fix RunAsGroup.

k8s-github-robot pushed a commit that referenced this pull request Jul 12, 2018

Kubernetes Submit Queue
Merge pull request #65935 from Random-Liu/automated-cherry-pick-of-#65926-upstream-release-1.11

Automatic merge from submit-queue.

Automated cherry pick of #65926: Fix RunAsGroup.

Cherry pick of #65926 on release-1.11.

#65926: Fix RunAsGroup.