
use securitycontext with the *-check pods #400

Closed · davidkarlsen opened this issue Apr 1, 2020 · 10 comments
Labels: feature request (A request for a specific feature to be added to Kuberhealthy)

@davidkarlsen (Contributor)

Describe the feature you would like and why you want it
The checker pods fail our pod security policy because they run as a privileged user.

Additional context

  Normal   Pulled     13s                kubelet, alt-eos-g-kwo01  Successfully pulled image "fsdepot.evry.com:8085/kuberhealthy/dns-resolution-check:v1.1.0"
  Warning  Failed     13s (x2 over 13s)  kubelet, alt-eos-g-kwo01  Error: container has runAsNonRoot and image will run as root
davidkarlsen added the feature request label on Apr 1, 2020
@integrii (Collaborator) commented Apr 1, 2020

Hey @davidkarlsen! Thanks for reporting in. We finally got Kuberhealthy listed as a private repository and people can use helm install again. Glad to see you still around the project.

@jdowni000 is working on adding security scans for Dockerfiles in issue #401 (already in progress but no issue existed). I also created issue #402 to explicitly track the migration of all images to non-root users.

integrii self-assigned this on Apr 1, 2020
integrii added this to the 2.2.0 milestone on Apr 1, 2020
@davidkarlsen (Contributor, Author) commented Apr 1, 2020

Please note that securityContext != securityScan, and Kubernetes does not really care about the USER in the Dockerfile.

The securityContext controls the UID of the process and would fix the issue I reported. I can also recommend running the container with securityContext.readOnlyRootFilesystem: https://kubernetes.io/blog/2016/08/security-best-practices-kubernetes-deployment/
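A minimal sketch of the kind of container-level securityContext being suggested here (the UID is illustrative, not a value mandated by the project):

securityContext:
  runAsNonRoot: true
  runAsUser: 1000              # any non-zero UID satisfies runAsNonRoot
  readOnlyRootFilesystem: true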

@NissesSenap (Contributor)

As a happy OpenShift (OCP) user I thought I would add some extra input; I was just about to create an issue about this myself.
Currently the daemonset-check hardcodes runAsUser := int64(1000).
This creates issues in OCP:

Error creating: pods "ds-check-daemonset-1586161692-1586161705-" is forbidden: unable to validate against any security context constraint: [spec.containers[0].securityContext.securityContext.runAsUser: Invalid value: 1000: must be in the ranges: [1000670000, 1000679999]]

I agree that you should set a securityContext, but we should either set it as
runAsUser := int64(1000670001)

or make it configurable through an env var or something like that.
Since it was hardcoded, I have built my own version of daemonset-check where I set this high UID; you can try it at:
quay.io/nissessenap/daemonset-check:v2.1.1

@davidkarlsen (Contributor, Author)

As long as it's in a section in the values.yaml, the end user can easily set it to the configuration they want.
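A hypothetical values.yaml section along those lines (the key names are illustrative, not the actual chart schema):

check:
  securityContext:
    runAsUser: 1000670001   # or leave this block empty ({}) to let OpenShift assign a UID from the project range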

@integrii (Collaborator) commented Apr 6, 2020

Oh, I see. So fixing #401 and #402 was not particularly helpful for this issue. It does appear that Kubernetes will always run pods as root, even if the Dockerfile that created them specifies a different USER.

So then, it seems like the best fix here is to add the following to all of our khcheck yamls (helm chart and per-check examples):

Under the main pod spec (pod-level securityContext):

spec:
  securityContext:
    runAsUser: 1000
    fsGroup: 1000

In the check container spec (container-level securityContext within the pod spec):

    securityContext:
      allowPrivilegeEscalation: false
      readOnlyRootFilesystem: true

(I also threw in the allowPrivilegeEscalation disablement here.)
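For context, a sketch of how those two snippets might land in a full khcheck manifest (the KuberhealthyCheck spec layout is assumed from the project's example manifests; the check name and image are illustrative):

apiVersion: comcast.github.io/v1
kind: KuberhealthyCheck
metadata:
  name: dns-status-internal
spec:
  runInterval: 2m
  timeout: 15m
  podSpec:
    securityContext:              # pod-level
      runAsUser: 1000
      fsGroup: 1000
    containers:
      - name: main
        image: kuberhealthy/dns-resolution-check:v1.1.0
        securityContext:          # container-level
          allowPrivilegeEscalation: false
          readOnlyRootFilesystem: true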

@2infinitee could you check this one out?

@NissesSenap (Contributor)

Things moved a bit too quickly here.
We should still move away from running containers as root, at least from an OpenShift point of view. To get Kuberhealthy to work I currently disable a security policy on a specific service account, and I would love not to be forced to do this even if we set a securityContext.

Even if you change the securityContext and make it harder to break out of the container itself, you still shouldn't run the container as root, since it still gives potential attackers way too much room to play around inside the container.
As always when it comes to security, go with the least amount of access possible.

So please reopen #401 and #402

@NissesSenap (Contributor)

I hadn't thought about this earlier, but I hit the issue when I reinstalled one of my clusters.
In OpenShift (OCP) the range that securityContext.runAsUser must fall within is unique per project (namespace), so hardcoding it just creates a bunch of issues; if you don't specify it, OCP sets it for you.
So now that I have hardcoded runAsUser to 1000670001, it doesn't work in my new namespace due to the error: Invalid value: 1000670001: must be in the ranges: [1000550000, 1000559999]]

https://docs.openshift.com/container-platform/4.3/authentication/managing-security-context-constraints.html#security-context-constraints-example_configuring-internal-oauth (scroll down to "Without explicit runAsUser setting" and you will see an explanation, plus this):

"The admission plug-in will look for the openshift.io/sa.scc.uid-range annotation on the current project to populate range fields, as it does not provide this range. In the end, a container will have runAsUser equal to the first value of the range that is hard to predict because every project has different ranges."

More or less, if we force securityContext.runAsUser on all pods it will be rather hard to use, from my point of view.
So I definitely see the need for this, but I also see a need to be able to turn it off and, in my case, let OCP do its magic.
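For reference, the per-project range OCP draws from lives in an annotation on the namespace itself (the values here are illustrative); the first number of the start/size pair becomes the default runAsUser:

apiVersion: v1
kind: Namespace
metadata:
  name: kuberhealthy
  annotations:
    openshift.io/sa.scc.uid-range: 1000550000/10000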

Ideally we should come up with a solution where we can send config to the final pod being created, not just the deployment/job/ds etc., and not use environment variables, since that won't scale for long.

I currently don't have a good idea of how to achieve that.

@NissesSenap (Contributor)

I may have come up with an idea for how to easily send data to the check pods we start, without having to rewrite big parts of Kuberhealthy, and while staying backwards compatible.

I think the solution is to add a feature to all the checks that takes a configmap as input containing a good old pod config. We add a field to the CRD that takes the name of a configmap as input.
That way we can easily define values in the configmap that satisfy specific needs without having to make code changes.

For example, let's say we want more memory in the check pod:

apiVersion: v1
kind: Pod
metadata:
  name: deployment-deployment-something
spec:
  containers:
    - image: nginx:1.17.9
      name: deployment-container
      resources:
        limits:
          cpu: 75m
          memory: 75Mi
        requests:
          cpu: 15m
          memory: 50Mi

We can parse the config file with help of the k8s library and check it against the defaults we have written in the code. If they differ, we just change them.
We then add it to the deployment template and run the check as normal.

We are still backwards compatible, since we don't have to provide any configmap, and the ENV config that we can send in can still overwrite the configmap.

We can also overwrite any default values that we want. For example:
If I want to let OpenShift handle the security constraints for me, all I need to do is

securityContext: {}

and let that overwrite them, while you can still have your sane defaults (and they are sane).
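A hypothetical shape for that configmap override, just to illustrate the idea (this feature does not exist yet; the configmap name and data key are made up, and the khcheck would reference the configmap by name):

apiVersion: v1
kind: ConfigMap
metadata:
  name: daemonset-check-overrides
data:
  podOverrides: |
    spec:
      securityContext: {}   # empty override: let OpenShift inject its own values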

What do you think?

@2infinitee (Collaborator)

@NissesSenap thanks for the suggestions. I'll take this back to the team and see what input they have as well.

@2infinitee (Collaborator)

Closing issue. Changes have been merged.
