
The vulnerability(CVE-2019-11244) is not completely eliminated #80581

Open
RainbowMango opened this issue Jul 25, 2019 · 9 comments
Assignees
Labels
area/security kind/bug Categorizes issue or PR as related to a bug. lifecycle/frozen Indicates that an issue or PR should not be auto-closed due to staleness. sig/api-machinery Categorizes an issue or PR as relevant to SIG API Machinery.

Comments

@RainbowMango
Member

RainbowMango commented Jul 25, 2019

What happened:
PR #77874 seems to have fixed CVE-2019-11244.

But I suspect the fix is not complete.

According to the description of CVE-2019-11244:

In Kubernetes v1.8.x-v1.14.x, schema info is cached by kubectl in the location specified by --cache-dir (defaulting to $HOME/.kube/http-cache), written with world-writeable permissions (rw-rw-rw-). If --cache-dir is specified and pointed at a different location accessible to other users/groups, the written files may be modified by other users/groups and disrupt the kubectl invocation.

The cache file created by kubectl should not be modified by other users/groups, or, to be more accurate, the file can only be modified by the user who runs the process.

But even after the fix, users in the same group can still write to the cache file.
The fix made two changes:

  • The directory permission was changed from 0755 to 0750. (I think this is fine: a user in the same group can enter the directory but cannot modify it, i.e. can neither create nor delete files in it. All other users are denied.)
  • The cache file permission was changed from 0755 to 0660. (This still allows a user in the same group to modify the cache file.)

So I think the file permission should be 0640: a user in the same group can then only read the file.

What you expected to happen:
The cache file can only be modified by the user who runs the process; users in the same group have read permission only.

How to reproduce it (as minimally and precisely as possible):

  1. As root, create a directory with permission 0750:
    # mkdir -m 0750 myPath0750

  2. As root, create a file in it with permission 0660:
    # touch myPath0750/myFile0660
    # chmod 0660 myPath0750/myFile0660

  3. Create a new user in the root group:
    # useradd -G root -d /home/horen -m horen
    # passwd horen

  4. Switch to the new user and try to modify the file myPath0750/myFile0660.
    You will see that the new user can modify the file.

Anything else we need to know?:
I don't know whether there is a scenario where users in the same group need to modify the cache file. Please let me know if there is.

Environment:

  • Kubernetes version (use kubectl version): N/A
  • Cloud provider or hardware configuration: N/A
  • OS (e.g: cat /etc/os-release): CentOS 7.0
  • Kernel (e.g. uname -a):
  • Install tools:
  • Network plugin and version (if this is a network-related bug):
  • Others:
@RainbowMango RainbowMango added the kind/bug Categorizes issue or PR as related to a bug. label Jul 25, 2019
@k8s-ci-robot k8s-ci-robot added the needs-sig Indicates an issue or PR lacks a `sig/foo` label and requires one. label Jul 25, 2019
@RainbowMango
Member Author

cc @liggitt

@RainbowMango
Member Author

/assign

@RainbowMango
Member Author

/sig api-machinery
/area security

@k8s-ci-robot k8s-ci-robot added sig/api-machinery Categorizes an issue or PR as relevant to SIG API Machinery. area/security and removed needs-sig Indicates an issue or PR lacks a `sig/foo` label and requires one. labels Jul 25, 2019
@liggitt
Member

liggitt commented Jul 31, 2019

In some cases, it is expected that new files will be group-writeable.

Using os.Create() (or detecting default permissions at cache instantiation time) would honor the user's umask setting.

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Oct 29, 2019
@RainbowMango
Member Author

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Nov 18, 2019
@liggitt liggitt added the lifecycle/frozen Indicates that an issue or PR should not be auto-closed due to staleness. label Jan 23, 2020
@jiggyjigsj

@RainbowMango This is still an active flag for us, and we were wondering if this is going to be looked at anytime soon? We have been flagged since Oct 2019.

@RainbowMango
Member Author

@jiggyjigsj I remember @liggitt filed a PR (#80813) for this, but I don't know why it stalled.

@jiggyjigsj

@liggitt Did you get around to rebasing that PR, or did you open a new one?
