
Pods cannot be created on current (OpenShift 4.13, Kubernetes 1.26.5) environments #491

Closed
gmodzelewski opened this issue Jul 7, 2023 · 8 comments


@gmodzelewski
Contributor

Issue:
Reloader does not start on a current OpenShift cluster due to its default security policies:

Used Environment:
OpenShift Server Version: 4.13.4
Kubernetes Version: v1.26.5+7d22122

Error message (whitespace added for readability):
FailedCreate replicaset/reloader-1688730517-reloader-c795c7bd5 Error creating: pods "reloader-1688730517-reloader-c795c7bd5-" is forbidden: unable to validate against any security context constraint: [
provider "anyuid": Forbidden: not usable by user or serviceaccount,
provider "pipelines-scc": Forbidden: not usable by user or serviceaccount,
spec.containers[0].securityContext.runAsUser: Invalid value: 65534: must be in the ranges: [1000750000, 1000759999],
provider "restricted": Forbidden: not usable by user or serviceaccount,
provider "container-build": Forbidden: not usable by user or serviceaccount,
provider "nonroot-v2": Forbidden: not usable by user or serviceaccount,
provider "nonroot": Forbidden: not usable by user or serviceaccount,
provider "hostmount-anyuid": Forbidden: not usable by user or serviceaccount,
provider "machine-api-termination-handler": Forbidden: not usable by user or serviceaccount,
provider "hostnetwork-v2": Forbidden: not usable by user or serviceaccount,
provider "hostnetwork": Forbidden: not usable by user or serviceaccount,
provider "hostaccess": Forbidden: not usable by user or serviceaccount,
provider "node-exporter": Forbidden: not usable by user or serviceaccount,
provider "privileged": Forbidden: not usable by user or serviceaccount]

Remediation ideas
This could probably be solved through adjusted default security context values, or by documenting how to configure them.

@smuda
Contributor

smuda commented Jul 7, 2023

(Not a maintainer, only user)

I suppose you've installed Reloader via the Helm chart. The README.md documents a parameter isOpenshift which, when set to true, handles a number of settings appropriately for OpenShift/OKD.

My setup works nicely on OKD 4.13 with pod-security.kubernetes.io/enforce=restricted. I use two levels of value files: a basic one and an OpenShift/OKD-specific one. My basic values look like this:

  reloader:
    reloadStrategy: annotations
    readOnlyRootFileSystem: true
    deployment:
      resources:
        requests:
          cpu: "10m"
          memory: "128Mi"
        limits:
          memory: "512Mi"

and the OpenShift/OKD-specific one:

  isOpenshift: true
  reloader:
    serviceMonitor:
      enabled: true
    deployment:
      securityContext: false
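
A sketch of the effective configuration these two value files produce when layered (for example via two -f flags, with the OpenShift/OKD file passed last; the merge below is my reconstruction from the snippets above, not verified against the chart):

```yaml
# Merged result of the basic and OpenShift/OKD value files above
# (keys copied from the comment; illustrative only).
isOpenshift: true
reloader:
  reloadStrategy: annotations
  readOnlyRootFileSystem: true
  serviceMonitor:
    enabled: true
  deployment:
    securityContext: false
    resources:
      requests:
        cpu: "10m"
        memory: "128Mi"
      limits:
        memory: "512Mi"
```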

@MuneebAijaz
Contributor

Hi @gmodzelewski, we ourselves have been using Reloader on OCP 4.9–4.12 with the isOpenshift: true option; however, we haven't tested it on OCP 4.13 yet.
Can you please tell us more about the values file you used to deploy Reloader?

@gmodzelewski
Contributor Author

I just set one variable:
isOpenShift: true

That does not work on my OpenShift 4.13.
If I follow @smuda's approach and set
reloader.deployment.securityContext: false
it works.

I don't know whether that's a bug or something that should be documented somewhere.

@MuneebAijaz
Contributor

This seems like a documentation issue; I will try to update the docs as soon as possible. You are also welcome to open a PR with suggested changes to the documentation.

@gmodzelewski
Contributor Author

Strange. If I run
helm install reloader stakater/reloader --namespace reloader --create-namespace --set reloader.isOpenShift=true --set reloader.deployment.securityContext=false
it doesn't work. I get a warning:
coalesce.go:220: warning: cannot overwrite table with non table for reloader.reloader.deployment.securityContext (map[runAsNonRoot:true runAsUser:65534])
and errors in the logs because of missing permissions to list resources.

If I set securityContext to false in the values (OpenShift Helm YAML view), it works. Probably some issue with how the setting is passed?

BTW: wouldn't it be nicer to have an enabled flag in the securityContext, something like reloader.deployment.securityContext.enabled=false?

@smuda
Contributor

smuda commented Jul 18, 2023

@gmodzelewski
Yes, this is kind of a problem. On the one hand, Helm sucks at removing default values defined in the upstream values.yaml; on the other hand, it's reasonable for the upstream values.yaml to be as secure as possible.

In this case the reloader team has decided to set the following default values for deployment.securityContext:

reloader:
  deployment:
    securityContext:
      runAsNonRoot: true
      runAsUser: 65534

These values are then used in deployment.yaml.

{{- with .Values.reloader.deployment.containerSecurityContext }}
 securityContext: {{ toYaml . | nindent 10 }}
{{- end }}

Outside of an OpenShift/OKD context, these are reasonable default values. As you know, however, OpenShift/OKD has its own model for handling runAsUser: to make sure there are no conflicting users (and thereby increase security), the specified user must fall within a uid range that is dynamically assigned to the namespace. If runAsUser is not specified, OpenShift/OKD assigns a uid automatically from that range, which means we need to get rid of runAsUser in the securityContext.
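
The namespace-scoped uid range mentioned here is stored as an annotation on the namespace object; a hedged illustration (the concrete range is taken from the error message earlier in this issue, not a universal default):

```yaml
# OpenShift records the per-namespace uid range in an annotation,
# formatted as <start>/<size>. The values below are illustrative.
apiVersion: v1
kind: Namespace
metadata:
  name: reloader
  annotations:
    openshift.io/sa.scc.uid-range: 1000750000/10000
```

Any runAsUser outside this range is rejected by the restricted SCC, which is exactly the "must be in the ranges: [1000750000, 1000759999]" error at the top of this issue.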

When there are default values defined in values.yaml, making them disappear in Helm is messy, even though it has been getting easier in recent versions.

When defining the values from another values.yaml file (I tend to wrap external Helm charts in internal Helm charts), one trick is to set maps to a value that makes no sense; the one that seems to work best is the boolean false.

However, when running with --set you are allowed to specify null, and setting securityContext to null seems to work best. (I tested with false as well; it worked nicely for me even though it didn't work for you. Perhaps I'm on a newer Helm version? I'm using v3.12.2.)

helm template stakater/reloader --version v1.0.5 --show-only templates/deployment.yaml --set reloader.isOpenshift=true --set reloader.deployment.securityContext=null

You can even null out only runAsUser. That way you will keep runAsNonRoot: true. Setting runAsUser to false will not work.

helm template stakater/reloader --version v1.0.5 --show-only templates/deployment.yaml --set reloader.isOpenshift=true --set reloader.deployment.securityContext.runAsUser=null
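
The same runAsUser-only override can also live in a values file rather than a --set flag; a sketch (note that null handling may differ between --set and values files across Helm versions, as discussed above):

```yaml
# Hypothetical values file; the null lets OpenShift/OKD assign a uid
# from the namespace range, while runAsNonRoot: true is kept from the
# chart defaults.
reloader:
  isOpenshift: true
  deployment:
    securityContext:
      runAsUser: null
```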

Personally, I'd remove the default securityContext values from the upstream chart, but I'm also aware of the security implications and would set the required values manually (and OpenShift/OKD saves me some hassle). I totally get, however, that the Reloader team wants reasonably secure default settings for the general public.

But perhaps your suggestion reloader.deployment.securityContext.enabled=false is a good middle ground?
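
For what it's worth, the suggested enabled flag could look roughly like this in the chart template (a sketch, not the chart's actual code; omit is a Sprig function available in Helm templates):

```yaml
# Sketch of a securityContext.enabled toggle in templates/deployment.yaml.
{{- if .Values.reloader.deployment.securityContext.enabled }}
securityContext:
  {{- omit .Values.reloader.deployment.securityContext "enabled" | toYaml | nindent 2 }}
{{- end }}
```

That would keep secure defaults while giving OpenShift/OKD users a single switch.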

@gmodzelewski
Contributor Author

Hey, that worked. I can install Reloader with a one-liner on OpenShift 4.13 without any errors:
helm install reloader stakater/reloader --set reloader.isOpenshift=true --set reloader.deployment.securityContext.runAsUser=null

So I suppose that should be documented somewhere. I'll check tomorrow and create a PR (if no one else has by then).

bnallapeta added a commit that referenced this issue Jul 19, 2023
#491 Readme: Add OpenShift 4.13 runAsUser unset part
@bnallapeta
Contributor

Closed by #499
