
Remove ContainerUser from busybox #35

Closed
wants to merge 1 commit into from

Conversation

benmoss
Contributor

@benmoss benmoss commented Mar 5, 2019

Just use the default, which is ContainerAdministrator


Just use default, which is ContainerAdministrator

Signed-off-by: Ben Moss <bmoss@pivotal.io>
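The diff itself isn't rendered on this page; going by the commit message, the change presumably amounts to dropping a `USER` directive from the image's Dockerfile. A hypothetical sketch (base image and line placement assumed, not taken from the repo):

```dockerfile
# Hypothetical sketch of the change -- the actual diff is not visible here.
FROM mcr.microsoft.com/windows/servercore:1809

# Before this PR the image switched to the unprivileged built-in account:
#   USER ContainerUser
# The PR removes that line, so the container runs as the image default,
# which on Windows Server Core is ContainerAdministrator.
```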
@k8s-ci-robot k8s-ci-robot added cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. size/XS Denotes a PR that changes 0-9 lines, ignoring generated files. labels Mar 5, 2019
@k8s-ci-robot
Contributor

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: benmoss
To fully approve this pull request, please assign additional approvers.
We suggest the following additional approver: patricklang

If they are not already assigned, you can assign the PR to them by writing /assign @patricklang in a comment when ready.

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@pjh
Contributor

pjh commented Mar 5, 2019

cc @jingxu97 @yujuhong

@benmoss
Contributor Author

benmoss commented Mar 6, 2019

My understanding is that container users are only interesting if you have a "multi-user" container, where you set up some files as one user but then create another user that doesn't have permissions over them. I don't know the use case for this, but I don't think there's any reason we need to do this in our test images. I saw that a bunch of the other images in this repo do the same thing, but this one is especially annoying, since it means we need to grant BUILTIN\Users write permissions on the host's C:\var\lib\kubelet directory in order for the "Container Runtime blackbox test when starting a container that exits should run with the expected status" test to pass.

@claudiubelu
Contributor

/lgtm

Just an FYI: there are tests that test "multi-user" containers, as you say, but we can't run them at the moment, because Kubernetes does that by specifying RunAsUid in the pod spec, which is not how Windows works. But in v1.15 we'll probably have RunAsUsername as an alternative, so those tests will become relevant then. Those tests primarily use the mounttest image anyway.
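For reference, the username-based alternative mentioned here eventually landed in the pod API as `securityContext.windowsOptions.runAsUserName`. A sketch of a pod spec using it (pod name, image, and command are hypothetical examples, not from this PR):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: run-as-username-demo          # hypothetical example name
spec:
  securityContext:
    windowsOptions:
      runAsUserName: "ContainerUser"  # a Windows username, not a numeric UID
  containers:
  - name: test-container
    image: mcr.microsoft.com/windows/nanoserver:1809   # assumed image
    command: ["cmd", "/c", "whoami && ping -t localhost"]
```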

@k8s-ci-robot k8s-ci-robot added the lgtm "Looks good to me", indicates that a PR is ready to be merged. label Mar 6, 2019
@claudiubelu
Contributor

But it's still a weird situation, and something doesn't feel right. You're basically saying that if a container is run as non-root (non-Administrator in our case), it's not able to write to an emptyDir volume? IMO, that totally defeats the purpose of the emptyDir volume in the first place. That sounds like an issue to me, and we should investigate it further.

Can you replicate the same issue in a Linux Pod? Does the same thing happen there as well?

@benmoss
Contributor Author

benmoss commented Mar 6, 2019

On Linux the emptyDir directories are created with drwxrwxrwx, meaning that anyone can modify them. This means that no matter which user the container runs as, it can always read, write, and execute files in an emptyDir. No special permissions have to be given over /var/lib/kubelet or anything.

```
# ls -alh /var/lib/kubelet/pods/22c63ccc-402d-11e9-a7ff-005056ba5a6a/volumes/kubernetes.io~empty-dir/
total 12K
drwxr-xr-x 3 root root 4.0K Mar  6 16:30 .
drwxr-x--- 4 root root 4.0K Mar  6 16:30 ..
drwxrwxrwx 2 root root 4.0K Mar  6 16:30 cache-volume
```
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: test-pd
spec:
  securityContext:
    runAsUser: 1004
  containers:
  - image: busybox:latest
    name: test-container
    command: ["/bin/sh"]
    args: ["-c", "ls -alh /cache; touch /cache/test; ls -alh /cache; while true; do sleep 86400; done"]
    volumeMounts:
    - mountPath: /cache
      name: cache-volume
  volumes:
  - name: cache-volume
    emptyDir: {}
```

I created a user with useradd foo and it has the UID 1004 (id foo). I set that as the runAsUser param.

I'm not sure if this means there's a bug in Windows/Kubernetes/Docker where it should be putting the same drwxrwxrwx permissions on the emptyDir, or what.
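As a side note, the drwxrwxrwx string above is just the rendered form of mode 0o777 on a directory. Python's stdlib can show the mapping:

```python
import stat

# The mode bits Linux gives the emptyDir directory: the directory type bit
# (S_IFDIR) plus read/write/execute for owner, group, and others (0o777).
mode = stat.S_IFDIR | 0o777
print(stat.filemode(mode))                    # drwxrwxrwx

# For comparison, the more restrictive pod directory shown above:
print(stat.filemode(stat.S_IFDIR | 0o750))    # drwxr-x---
```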

@claudiubelu
Contributor

Thanks for trying it out. It seems that, if no Medium is specified, the emptyDir defaults to the scratch disk. It is interesting that on Linux the permissions are 777, but it sounds right.

IMO, it is a bug, since this is an unexpected behaviour, but I'm not sure who causes this bug (Windows / Kubelet / Docker). Either way, this is something we'll have to look into for v1.15, especially for the RunAsUsername feature.

@PatrickLang, what are your thoughts on this?

@benmoss
Contributor Author

benmoss commented Mar 6, 2019

Relevant to this: kubernetes/kubernetes@5394aa9

Another thing I noticed while testing this: the ContainerUser user can't seem to bind to a port on Windows in my tests. I used this deployment to test: https://gist.github.com/benmoss/a56ebed1653081bff0cb531bc0e201f6

@yujuhong
Contributor

yujuhong commented Mar 6, 2019

My understanding is that container users are only interesting if you have a "multi-user" container, where you setup some files as one user but then create another user that doesn't have permissions over them

I just read https://docs.microsoft.com/en-us/virtualization/windowscontainers/manage-containers/container-storage

The default users are different on "server core" and "nano server" images. Probably a known fact, but pointing it out in case anyone's wondering.

"ContainerAdministrator" on Windows Server Core and "ContainerUser" on Nano Server containers, by default
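In Dockerfile terms, that difference means a Nano Server based image that needs admin rights has to opt in explicitly, since ContainerUser is the default there. A hypothetical sketch (not from this repo):

```dockerfile
# Hypothetical sketch: Nano Server defaults to ContainerUser, so an image
# that needs admin rights must request them explicitly.
FROM mcr.microsoft.com/windows/nanoserver:1809
USER ContainerAdministrator

# Server Core already defaults to ContainerAdministrator, so no USER line
# would be needed on that base image.
```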

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jun 24, 2019
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Jul 24, 2019
@fejta-bot

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

@k8s-ci-robot
Contributor

@fejta-bot: Closed this PR.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@jingxu97

jingxu97 commented Oct 7, 2019

@benmoss

Should we fix this issue or it is the expected spec?

@jingxu97

jingxu97 commented Oct 7, 2019

/remove-lifecycle rotten

@k8s-ci-robot k8s-ci-robot removed the lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. label Oct 7, 2019
@jingxu97

jingxu97 commented Oct 7, 2019

/reopen

@k8s-ci-robot
Contributor

@jingxu97: Failed to re-open PR: state cannot be changed. The repository that submitted this pull request has been deleted.

In response to this:

/reopen

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
