Allow exposing status.containerStatuses[*].imageID through Downward API #80346
Comments
/sig node
@kubernetes/sig-node-feature-requests
Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/remove-lifecycle stale
/lifecycle frozen
Thanks. So by "stepping up to do the work" you mean making a KEP or implementing this? Is the idea to just go ahead and make a PR with code, or to start with a KEP? I was hoping that somebody would officially approve this (it seems a pretty simple change) and then the community could provide a PR.
Implementing. Though getting it approved might require a KEP. I don't really work in SIG Node, but that's what I'd expect to have to do to get a feature through myself, unfortunately.
Thanks. I added it to the meeting agenda for 9/22/2020.
Overall question for this one: could you enumerate why this info may be useful? What kind of workloads are you running or wanting to run?
I am not against this. It can be easily done today by injecting the digest as an environment variable yourself.
I would be against pulling the value from the status. I understand that the status will have the image digest whereas the spec might pull by tag, but if you want the digest, you should specify the image pull path by digest.
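The "specify the image pull path by digest" suggestion above looks roughly like this in a pod spec; this is a minimal sketch with a placeholder registry path and digest, not anything from the thread:

```yaml
# Hypothetical sketch: pinning by digest in the pod spec, as suggested
# above. The image name and digest are placeholders.
apiVersion: v1
kind: Pod
metadata:
  name: pinned-by-digest
spec:
  containers:
  - name: app
    # When pulled by digest, spec.image and status.imageID refer to the
    # same immutable content, so there is nothing left to introspect.
    image: registry.example.com/app@sha256:<digest>
```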
Thank you for comments and questions.
For me it is really about container introspection. Container being able to know some information about itself. This is what downward API provides anyway. And I just seek to expand it a bit to provide some other extra information which is otherwise hard to obtain: what image exactly is it being run. I see/have few use cases:
Not really. This proposal would be exposing the hash/digest version of the image ID, so confusion over which tags for that image ID are available on the host does not apply.
The same holds for all of the downward API, no? I could also copy the resource limits into the ENV section the same way. But that approach means you have to use some additional templating language around your pod files, and it is error prone: without a templating language, somebody can update the value in one place but forget the other. Moreover, it is not possible in all cases, see below.
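For context, the resource-limits comparison refers to something the downward API already supports via `resourceFieldRef`, with no external templating needed (names here are illustrative):

```yaml
# Existing downward API feature this comment compares against: resource
# limits exposed to the container as environment variables.
apiVersion: v1
kind: Pod
metadata:
  name: downward-limits
spec:
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "echo cpu limit: $CPU_LIMIT; sleep 3600"]
    resources:
      limits:
        cpu: "500m"
        memory: 128Mi
    env:
    - name: CPU_LIMIT
      valueFrom:
        resourceFieldRef:
          containerName: app
          resource: limits.cpu
```

The issue asks for `fieldRef` to support `status.containerStatuses[*]` fields in the same spirit.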
This is exactly the use case I want: to know the digest while keeping the non-digest image tag, where the non-digest tag is something mutable (like latest). This is also the reason why the env variable workaround does not work: it works only if you specify the digest in advance. But in a decoupled system, where somebody else is building the image, the digest is not known ahead of time.
Exactly. This is why this race should be addressed and made impossible to hit, e.g. by making sure Kubernetes first resolves the image for the given tag and then runs exactly that resolved image.
That prevents the use case where the user (job submitter) does not care about the particular image version (wants the latest), but for debugging/logging/reproducibility purposes does care what was picked up in the end. So the pod spec contains the non-digest version of the image, with a mutable tag.
@timhowes, @mpoqq, @Robpol86, @surenraju, @alexeyzimarev, @alonrbar, @lukinko, @Skonx, @jtackaberry, @yhpark, @CharlyF, @adwin5, @seamusabshere, @hicolour, @daniel-pp, @haizaar, @lopf, @shiwano, you all upvoted this issue. I am working on a KEP for it. Would you share the use cases which made you upvote the issue? I could then try to include them in the KEP.
I started working on the KEP here: kubernetes/enhancements#2013
We are using Spinnaker for deploying containers, and we inject the image tag into env by using a Spinnaker Pipeline Expression.
That way we get the resolved image tag at deploy time. I think it would be great if Kubernetes supported this, because we would not need to depend on Spinnaker's difficult feature.
I think it should be supported in the downward API, but if you're using Kustomize, this can work with a kustomization.yml and mydeployment.yml.
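The commenter's actual files were not preserved in this thread. As a rough sketch of the kind of Kustomize approach described (all names hypothetical), the `images` transformer can rewrite an image reference to a digest at render time:

```yaml
# kustomization.yml -- hypothetical sketch, NOT the commenter's original
# files. Kustomize's `images` transformer pins the image to a digest when
# the manifests are rendered. Note the digest must already be known at
# render time; it is not the digest the kubelet later resolves.
resources:
- mydeployment.yml
images:
- name: myapp                       # image name as written in mydeployment.yml
  newName: registry.example.com/myapp
  digest: sha256:<digest>           # placeholder
```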
That does not get the digest of the image as it was actually pulled by Kubernetes.
My use case: a job launcher replicates itself with different roles and needs to know the image digest to ensure the same image version is used everywhere. A tag would not work because of potentially stale images on the nodes. Workarounds exist, but as already mentioned above they fail in a typical production environment, because the consumer and the builder of the image are often separate teams in separate systems, subject to complicated multi-tenant clusters, etc.
Similar use case: an operator that needs to run its own image as a job for longer-running maintenance tasks.
Just having the image ID and container ID is essential for logging and tracing. This should be specified at the runtime level, such that any OCI-conformant environment exposes this information in a standard fashion, irrespective of the container management environment in which it is executing.
/triage accepted
Has any other way to access the container image/imageID from within a container come up, or is this still an open issue?
Not to the best of my knowledge. I have been looking at a possible workaround: using 'events' to determine the process ID of the container and then correlating that with the metadata available in /proc. But that enables an external process to introspect a running container, not a process within a running container to obtain its own image or container IDs.
My hackaround:
Not nice, but it works... 😒
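The hackaround itself was not preserved in this thread. A sketch of the general shape such workarounds take (hypothetical names; it assumes the pod's service account has RBAC permission to `get` its own Pod object): the container asks the API server for its own pod and reads `status.containerStatuses` back out.

```yaml
# Hypothetical sketch of an API-server workaround, not the commenter's
# original. The container fetches its own Pod object and extracts the
# resolved imageID from the status. Requires RBAC allowing the pod's
# service account to "get" pods in its namespace.
apiVersion: v1
kind: Pod
metadata:
  name: imageid-workaround
spec:
  containers:
  - name: main
    image: curlimages/curl
    command: ["sh", "-c"]
    args:
    - |
      TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
      curl -sS --cacert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt \
        -H "Authorization: Bearer $TOKEN" \
        "https://kubernetes.default.svc/api/v1/namespaces/$POD_NAMESPACE/pods/$POD_NAME" \
        | grep '"imageID"'
      sleep 3600
    env:
    - name: POD_NAME
      valueFrom: {fieldRef: {fieldPath: metadata.name}}
    - name: POD_NAMESPACE
      valueFrom: {fieldRef: {fieldPath: metadata.namespace}}
```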
I'm actually trying to do something in a multi-container pod, so just knowing the pod name doesn't seem like enough to ascertain which container I'm in (without knowing something about what the images contain).
IMO this needs to be solved not only for k8s but for any OCI-compliant runtime; not sure why it's such a challenge and has languished, as per: https://github.com/opencontainers/runtime-spec/issues/1105
While it's great that there is movement on this, I personally do not think that a Kubernetes-only solution is satisfactory. Other container orchestration engines are responsible for the container lifecycle, and those engines should also expose this information.
I'd like this expanded to provide everything in the OpenTelemetry container spec. Each OTel vendor does data correlation differently, so it is best to have all the data to be certain it works.
+1
This issue has not been updated in over 1 year, and should be re-triaged.
For more details on the triage process, see https://www.kubernetes.dev/docs/guide/issue-triage/
/remove-triage accepted
What would you like to be added:
It would be great if status.containerStatuses[0].imageID and status.containerStatuses[0].image could be exposed through the downward API and passed as environment variables. That would allow the container to obtain exactly which digest of the image it is running, which can then be useful for any logging from that container.
Why is this needed:
It is useful for the container to know exactly which version of the image it is running. Allowing the image and digest/ID to be passed to the container makes this achievable.
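A sketch of what the requested feature might look like in a pod spec. Note that this fieldPath is not supported by the downward API as of this issue; the snippet shows the proposal, not working syntax:

```yaml
# PROPOSED syntax per this issue -- not currently accepted by Kubernetes.
# Today, fieldRef only supports fields like metadata.name and status.podIP.
apiVersion: v1
kind: Pod
metadata:
  name: wanted-syntax
spec:
  containers:
  - name: app
    image: registry.example.com/app:latest   # mutable tag in the spec
    env:
    - name: IMAGE_ID
      valueFrom:
        fieldRef:
          fieldPath: status.containerStatuses[0].imageID   # proposed field
```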