Run kaniko as non-root user within the container #105
Right now kaniko requires root for a few reasons: it needs permissions to unpack the base image filesystem with the correct ownership, and commands run during the build itself may also require root.
We should be able to solve the first case using an init container. This container will copy the executor binary into a shared volume. Then the "build" container image is actually the base image in the FROM line, and the entrypoint gets set to the binary we just copied into the shared volume. This also takes advantage of layer caching by letting the underlying runtime unpack the base layers. We can't really fix the second case until something like user namespaces is supported in Kubernetes, but until then we can allow builds to run with only the permissions they need.
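A rough sketch of that init-container idea as a pod spec (the image names, volume name, and entrypoint path here are illustrative assumptions, not kaniko's actual layout):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: kaniko-init-sketch
spec:
  initContainers:
    # Step 1: copy the executor binary into a volume shared with the build container.
    - name: copy-executor
      image: gcr.io/kaniko-project/executor:latest  # assumed executor image
      command: ["cp", "/kaniko/executor", "/shared/executor"]
      volumeMounts:
        - name: shared
          mountPath: /shared
  containers:
    # Step 2: the "build" container is the base image from the Dockerfile's FROM
    # line, so the container runtime unpacks (and caches) the base layers for us.
    - name: build
      image: ubuntu:22.04  # stand-in for whatever the FROM line names
      command: ["/shared/executor"]
      volumeMounts:
        - name: shared
          mountPath: /shared
  volumes:
    - name: shared
      emptyDir: {}
```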
Using the tools we've been developing as part of the rootless-containers project (https://github.com/rootless-containers), it is possible to do both of those things completely unprivileged (though it does require
Yeah, the ptrace trick is something we've looked at with fakeroot-ng (although ptrace is blocked by the default docker capability set). Do you have links to the runc bugs you're working through?
@cyphar do you have any links on running runc inside a container without something like the "--privileged" flag?
There are two options: 1. Modify container runtime implementations to support
I would have to double-check this, but when you set up a rootless container you have the full capability set (though we drop capabilities just as in the privileged case, so you'd need to add
At the moment the main issue is the
Why do you need a nested runtime to begin with? Runtime security configuration can always be applied in the "parent" build container, seccomp profiles applied, etc. You would do this in Kubernetes by setting pod security policies (of course, not all the features are fully implemented today). Kaniko is meant to be run from inside a container only, never as a standalone "runtime". It almost seems like the wrong separation of concerns to have a set of security defaults applied to the "parent" container, and then have the nested container apply something else (albeit only something more strict). I can't imagine nested containers would be easier to manage with Kubernetes non-container security policies like RBAC, since you would really like to scope whoever is running the build to the nested non-root container policies, which is the minimal scope.
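The parent-level policy this comment alludes to could be expressed with a PodSecurityPolicy of roughly this shape (a sketch only; PSP was the mechanism available at the time of this thread and has since been deprecated, and runAsUser is deliberately left open because kaniko itself still needs root in-container):

```yaml
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: build-restricted
spec:
  privileged: false                    # no --privileged build pods
  allowPrivilegeEscalation: false
  requiredDropCapabilities: [NET_RAW]  # example: tighten the default cap set
  runAsUser:
    rule: RunAsAny                     # kaniko currently still requires in-container root
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  volumes: ["emptyDir", "secret", "configMap"]
```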
The nested containers suggestion is so that the build doesn't require root (which it currently does). Docker (or
Ah, I guess I meant that once docker or cri-o supports rootless containers, we won't need the nested container? The only purpose of the nested container is that the parent runtime doesn't support rootless while the nested runtime does.
Sure, though with Docker and Really it depends how much you want to push the rootless thing. Personally (for the stuff I'm working on) I think that if you have any code-path that requires (Note that Docker will likely never support rootless directly, purely due to the amount of work required -- though I'd be happy to be proven wrong.)
This is a very disappointing thread to read. I was given to understand from the publicity that kaniko did not require any privileges: this is a bit of a holy grail for our org, since the build environment is very locked-down.
@ianmiell You can do unprivileged builds with Several organisations already use
@cyphar thanks, but I was hoping for something more ready-packaged. I've a house full of yaks already :)
I agree we need to clarify the documentation and messaging a bit more here. Kaniko was designed to run in a container in any Kubernetes cluster, today. umoci and a few other tools are really cool too; Kubernetes and Docker just don't support the necessary settings to let them run without the --privileged flag. @ianmiell, can you describe exactly how your build environment is locked down today? This issue is about letting kaniko run without root at all, provided your build doesn't require root to run. User namespaces will be required to let builds that require root run without root on the host, but that's not possible today in Kubernetes without the --privileged flag or something similar.
If anyone's interested, I've got a reproducible build of a PoC using the tools described by @cyphar above (and with his help) here: https://github.com/ianmiell/shutit-orca-build, on CentOS. It does require a little sysctl work to enable (in the script).
Regarding yak-shaving, yeah, we're working on it. @AkihiroSuda has
Just stumbled upon kaniko once more, and I'm asking myself: with the Docker gVisor support, is it now possible to build images without root or docker-in-docker within a Docker or Kubernetes host?
Yup, if you have gVisor configured you should be good to go: https://github.com/GoogleContainerTools/kaniko/blob/master/README.md#running-kaniko-in-gvisor
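On Kubernetes, this is typically wired up through a RuntimeClass. A sketch, assuming gVisor's runsc handler is already installed on the nodes, and using placeholder context/destination values:

```yaml
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: gvisor
handler: runsc   # the gVisor OCI runtime
---
apiVersion: v1
kind: Pod
metadata:
  name: kaniko-gvisor
spec:
  runtimeClassName: gvisor   # run the whole pod inside gVisor's user-space kernel
  containers:
    - name: kaniko
      image: gcr.io/kaniko-project/executor:latest
      args:
        - --context=git://github.com/example/repo.git    # placeholder build context
        - --destination=registry.example.com/app:latest  # placeholder destination
```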
For anyone searching for a solution on how to run kaniko on a PodSecurityPolicy-secured k8s cluster, I found this article useful (best security you can get with kaniko on k8s).
@kcatro could you share a link to the article you mentioned?
Oh boi, failed on ctrl+v -.- Here's the article: https://kurtmadel.com/posts/native-kubernetes-continuous-delivery/building-container-images-with-kubernetes/.
100% agreed with this remark!!
Any news on this issue? It would be really cool to run kaniko as a non-root user inside the container.
I'm trying to run kaniko within a Kubernetes context in a secure fashion:

```yaml
securityContext:
  readOnlyRootFilesystem: true
  allowPrivilegeEscalation: false
  runAsNonRoot: true
  capabilities:
    drop:
      - ALL
```

However, setting that makes kaniko fail.
Thanks @06kellyjac - I've started looking into that myself... forked the repo and created a branch with a test case... you can see it here
The following minimal permissions are working for me @srfrnk; all other caps are not needed, and as stated beforehand, kaniko currently has to run as root:

```yaml
securityContext:
  capabilities:
    drop: [ALL]
    add: [CHOWN, FOWNER, SETUID, SETGID, DAC_OVERRIDE]
  privileged: false
  allowPrivilegeEscalation: false
```
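For context, that securityContext might sit in a complete pod spec roughly like this (a sketch; the image tag, context, destination, and the /workspace mount are placeholder assumptions):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: kaniko-minimal-caps
spec:
  containers:
    - name: kaniko
      image: gcr.io/kaniko-project/executor:latest
      args:
        - --context=dir:///workspace                     # placeholder build context
        - --destination=registry.example.com/app:latest  # placeholder destination
      securityContext:
        privileged: false
        allowPrivilegeEscalation: false
        capabilities:
          drop: [ALL]
          add: [CHOWN, FOWNER, SETUID, SETGID, DAC_OVERRIDE]
      volumeMounts:
        - name: workspace
          mountPath: /workspace
  volumes:
    - name: workspace
      emptyDir: {}
```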
Thanks for the workaround @cmdjulian. Snyk still reports these findings, though:

```json
{
  "severity": "medium",
  "description": "",
  "resolve": "Set `securityContext.runAsNonRoot` to `true`",
  "id": "SNYK-CC-K8S-10",
  "impact": "Container could be running with full administrative privileges",
  "msg": "input.spec.template.spec.containers[kaniko-runner].securityContext.runAsNonRoot",
  "subType": "Deployment",
  "issue": "Container is running without root user control",
  "publicId": "SNYK-CC-K8S-10",
  "title": "Container is running without root user control",
  "references": [
    "CIS Docker Benchmark 1.2.0 - 5.5 Ensure sensitive host system directories are not mounted on containers",
    "https://kubernetes.io/docs/concepts/policy/pod-security-policy/#users-and-groups",
    "https://kubernetes.io/blog/2016/08/security-best-practices-kubernetes-deployment/"
  ],
  "isIgnored": false,
  "iacDescription": {
    "issue": "Container is running without root user control",
    "impact": "Container could be running with full administrative privileges",
    "resolve": "Set `securityContext.runAsNonRoot` to `true`"
  },
  "lineNumber": 18,
  "documentation": "https://snyk.io/security-rules/SNYK-CC-K8S-10",
  "isGeneratedByCustomRule": false,
  "path": [
    "[DocId: 0]",
    "input",
    "spec",
    "template",
    "spec",
    "containers[kaniko-runner]",
    "securityContext",
    "runAsNonRoot"
  ]
},
{
  "severity": "low",
  "description": "",
  "resolve": "Set `securityContext.readOnlyRootFilesystem` to `true`",
  "id": "SNYK-CC-K8S-8",
  "impact": "Compromised process could abuse writable root filesystem to elevate privileges",
  "msg": "input.spec.template.spec.containers[kaniko-runner].securityContext.readOnlyRootFilesystem",
  "subType": "Deployment",
  "issue": "`readOnlyRootFilesystem` attribute is not set to `true`",
  "publicId": "SNYK-CC-K8S-8",
  "title": "Container is running with writable root filesystem",
  "references": [
    "CIS Docker Benchmark 1.2.0 - Ensure that the container's root filesystem is mounted as read only",
    "https://kubernetes.io/docs/concepts/policy/pod-security-policy/#volumes-and-file-systems",
    "https://kubernetes.io/blog/2016/08/security-best-practices-kubernetes-deployment/"
  ],
  "isIgnored": false,
  "iacDescription": {
    "issue": "`readOnlyRootFilesystem` attribute is not set to `true`",
    "impact": "Compromised process could abuse writable root filesystem to elevate privileges",
    "resolve": "Set `securityContext.readOnlyRootFilesystem` to `true`"
  },
  "lineNumber": 20,
  "documentation": "https://snyk.io/security-rules/SNYK-CC-K8S-8",
  "isGeneratedByCustomRule": false,
  "path": [
    "[DocId: 0]",
    "input",
    "spec",
    "template",
    "spec",
    "containers[kaniko-runner]",
    "securityContext",
    "readOnlyRootFilesystem"
  ]
}
```

Maybe just trying to add
You can try that, but depending on how kaniko builds the container and outputs results, it probably needs some writable mount, so you might need to mount an emptyDir and set the working directory or change the build location via flags. Also, it wouldn't fix the medium-severity finding.
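The emptyDir idea could look something like this (a sketch only; using /workspace as the writable build location is an assumption, not a verified kaniko requirement):

```yaml
spec:
  containers:
    - name: kaniko
      image: gcr.io/kaniko-project/executor:latest
      securityContext:
        readOnlyRootFilesystem: true   # the root fs stays read-only...
      volumeMounts:
        - name: workspace
          mountPath: /workspace        # ...while the build gets one writable dir
  volumes:
    - name: workspace
      emptyDir: {}
```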
@srfrnk readOnlyRootFilesystem is not very relevant for kaniko, as kaniko is not a long-running service. It only becomes relevant if an attacker infects your container and persists some data in the container's root fs, so that the malicious files are still present on each restart. As kaniko terminates after a very short amount of time, this is not a real risk in my opinion. If somebody has a different opinion, let me know.
Thanks for the explanation @cmdjulian.
Can you show me which exact configuration made the readOnlyRootFs work? If I understood you correctly you made that work. |
See this commit: srfrnk@fff4d70
/bin is now mounted into an emptyDir, which means it should have been populated with the filesystem from the base image (in this case alpine...), right?
Hey @srfrnk, are you able to run kaniko as a non-root user? If so, can you please provide the workaround?
It seems that user namespaces were added in Kubernetes 1.25 in alpha state. It might be worth investigating how they can be used with kaniko, as mentioned in the thread.
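For reference, opting a pod into a user namespace on 1.25 is a single field (alpha, behind the UserNamespacesStatelessPodsSupport feature gate at the time). Whether kaniko actually works under it is exactly what would need investigating:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: kaniko-userns
spec:
  hostUsers: false   # 1.25 alpha: root inside the pod maps to an unprivileged host UID
  containers:
    - name: kaniko
      image: gcr.io/kaniko-project/executor:latest
```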