VolumeSource: OCI Artifact and/or Image #4639
I've had informal discussions about this - there's enough interest IMO to open a KEP, and I will present this issue at the upcoming sig-node and sig-storage meetings with a KEP draft.
/sig node
Can you use the Volume Populator? It allows you to create a PVC from an external data source. https://github.com/kubernetes/enhancements/tree/master/keps/sig-storage/1495-volume-populators
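For context, a populator-backed PVC request looks roughly like the sketch below. The `dataSourceRef` names (API group, kind, and object name) are illustrative assumptions, not a real populator:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: oci-image-pvc
spec:
  accessModes:
  - ReadOnlyMany
  resources:
    requests:
      storage: 10Gi
  # dataSourceRef points at a custom resource that a volume-populator
  # controller watches; OciImage/populator.example.com is hypothetical.
  dataSourceRef:
    apiGroup: populator.example.com
    kind: OciImage
    name: my-model-image
```

A populator controller would then provision and fill the volume before the PVC becomes bound, which is how it can serve as a workaround for mounting OCI content.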
Happy to support here from a SIG node perspective. cc @kubernetes/sig-node-proposals
+1, happy to help from the SIG-Node/CRI and OCI image/distribution spec perspectives.
/label lead-opted-in
As discussed at the SIG Node meeting this week, we will try and see if this can make it to 1.31.
/stage alpha
For reference, KServe currently implements a workaround for directly accessing files within an OCI image via a sidecar approach ("modelcar") that leverages root filesystem access. So KServe would be more than happy to leverage such a volume type, and we are happy to support any efforts in this direction.
/stage alpha
Hello @sallyom 👋, v1.31 Enhancements team here. Just checking in as we approach enhancements freeze on 02:00 UTC Friday 14th June 2024 / 19:00 PDT Thursday 13th June 2024. This enhancement is targeting stage alpha. Here's where this enhancement currently stands:
For this KEP, most of the above items are taken care of in #4642. We'd need to do the following:
The status of this enhancement is marked as at risk for enhancements freeze. If you anticipate missing enhancements freeze, you can file an exception request in advance. Let me know if you have any questions! Thank you!
@sallyom Pinging once again as a slight reminder that we're approaching the enhancements freeze deadline on 14th June, this Friday!
Hi @sallyom @SergeyKanzhelev 👋, 1.31 Enhancements team here. Just a quick friendly reminder as we approach the enhancements freeze in a few hours, at 02:00 UTC Friday 14th June 2024 / 19:00 PDT Thursday 13th June 2024. The current status of this enhancement is marked as at risk for enhancements freeze. If you anticipate missing enhancements freeze, you can file an exception request in advance. Thank you!
Hello @sallyom @SergeyKanzhelev 👋, 1.31 Enhancements team here. Unfortunately, this enhancement did not meet requirements for enhancements freeze. If you still wish to progress this enhancement in v1.31, please file an exception request as soon as possible, within three days. If you have any questions, you can reach out in the #release-enhancements channel on Slack and we'll be happy to help. Thanks!
/milestone clear |
/assign |
Pushing container images to the container registry with compression off (for example, --disable-compression) is a client tooling feature that may or may not be available: containers/buildah#3904. +1 to sasha's comment: sync pod context means that, before the pod is run, the kubelet requests the images to be pulled locally; normally this is done with unpack,... containerd's 1.7/2.0 config.toml entry for the default timeout is image_pull_progress_timeout = '5m0s'.
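As a sketch, the containerd setting mentioned above lives in the CRI plugin section of config.toml; the section path below matches containerd 1.7's version 2 config format, so verify it against your containerd version:

```toml
# /etc/containerd/config.toml (containerd 1.7.x, config version 2)
version = 2

[plugins."io.containerd.grpc.v1.cri"]
  # Abort an image pull if no progress is made for this long.
  # '5m0s' is the shipped default mentioned in the comment above.
  image_pull_progress_timeout = "5m0s"
```

Raising this value can help on slow networks pulling very large (multi-GB) model images, at the cost of holding partially pulled layers on disk for longer.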
Thanks for sharing valuable insight! We were using Kubernetes 1.30.4 + containerd 1.6.26, pulling images from Azure ACR. The image is built using Docker with LLM model weights from Hugging Face: https://huggingface.co/microsoft/Phi-3.5-vision-instruct/tree/main. When a pod pulls the container image (20G+) and the pull times out or fails for various reasons, the dangling image layers don't get garbage collected right away, leaving some orphaned tar files. Retries worsen the scenario, as there is then less ephemeral disk to use. The size of the ephemeral disk is just enough to do a "happy day" download and untar, so it's very fragile to any failure. @saschagrunert I don't have a stable reproducer yet, as this issue seems to happen more frequently in environments with weak networks or small ephemeral disks. @sudo-bmitch @mikebrow Thanks for sharing the option to disable compression; I will try disabling image compression and see if that helps. One additional concern is quota management for the ephemeral disk. As the volume image is pulled using a similar mechanism as container images, it will use some ephemeral disk, but it doesn't have a clear isolation boundary like a PVC; if a volume image pull encounters an error, the resulting ephemeral disk pressure might impact all pods on that node.
Hello @sallyom @saschagrunert 👋, The v1.33 Enhancements team here again! With all the implementation (code-related) PRs merged as per the issue description:
I noticed that kubernetes/kubernetes#130681 is currently marked as optional. Could you confirm whether it's required for this release, or if we can consider the enhancement complete? If not, we'll go ahead and mark it as Tracked for Code Freeze for v1.33. Additionally, are there any other PRs in k/k that we should track? Thanks! 🚀
@fykaa code for this enhancement is complete for now; I don't think that the optional e2e test will land because we have no runtime support yet.
Hi, currently in the containerStatuses field of the pod status, the imageID field reports the image's resolved digest.
This is quite useful for detecting when an image has changed, especially for those using the latest tag. Is there any plan to include similar information for the imageVolume in the pod's status?
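For reference, this is the kind of status information being asked about; the image names below are hypothetical, but a digest-resolved imageID is standard containerStatuses behavior:

```yaml
status:
  containerStatuses:
  - name: app
    image: registry.example.com/app:latest
    # imageID pins the tag to a content digest, so a digest change
    # reveals that :latest now points at a different image.
    imageID: registry.example.com/app@sha256:0123456789abcdef...
```

The question above is whether image volumes will surface an equivalent resolved reference in the pod status, since the volume spec alone only records the (possibly mutable) tag.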
@saschagrunert Thanks for the update and for confirming that all required changes are merged (here). We can mark this as tracked for code freeze. Also, please let us know if anything changes before the freeze or if there are any other PRs in k/k we should track for this KEP to keep the status accurate. This enhancement is now marked as tracked for code freeze for v1.33.
Hi @saschagrunert 👋, 1.34 Enhancements Lead here. I am closing the v1.33 milestone now. If you'd like to work on this enhancement in v1.34, please have the SIG lead opt in by adding the lead-opted-in label.
Thanks!
/remove-label lead-opted-in
@jenshu thanks for reaching out! There is no graduation planned for v1.34. 👍
Hey @saschagrunert!
Thanks for the heads-up. In addition, regarding the comment above:
Do you think this is something we can plan for GA? Thanks!
Where would be the appropriate venue to request that this feature support adding image volumes without restarting a Pod? The CloudNativePG project is looking at using it to deploy Postgres extensions via OCI images, and most don't require a cluster restart, just the insertion of the files. Would be nice not to have to restart the database. |
Enhancement Description

Alpha:

- KEP (k/enhancements) update PR(s):
- Code (k/k) update PR(s):
  - ImageVolumeSource API kubernetes#125660
  - ImageVolumeSource implementation kubernetes#125663
  - fsGroupChangePolicy has no effect kubernetes#126281
  - CanSupport method kubernetes#126323
  - ImageVolumeSource node e2e tests kubernetes#126220
  - crictl [create|run] --with-pull kubernetes-sigs/cri-tools#1464
  - pull-kubernetes-node-crio-cgrpv2-imagevolume-e2e test job test-infra#33071
- Docs (k/website) update PR(s):

Beta:

- KEP (k/enhancements) update PR(s):
- Code (k/k) update PR(s):
  - ImageVolume tests for pull-kubernetes-node-e2e-containerd-alpha-features test-infra#34398
- Docs (k/website) update(s): bdcf72 test-infra#34547

Please keep this description up to date. This will help the Enhancement Team to track the evolution of the enhancement efficiently.
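For readers landing here, the API this issue tracks mounts an OCI image or artifact as a read-only pod volume. A minimal sketch, with a placeholder image reference:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: image-volume-demo
spec:
  containers:
  - name: shell
    image: debian
    command: ["sleep", "infinity"]
    volumeMounts:
    # Image volumes are always mounted read-only.
    - name: artifact
      mountPath: /volume
  volumes:
  - name: artifact
    image:
      # Placeholder reference; any OCI image/artifact the runtime can pull.
      reference: quay.io/example/artifact:v1
      pullPolicy: IfNotPresent
```

Note that this requires the ImageVolume feature gate to be enabled and a container runtime that supports image volume mounts; per the thread above, runtime support was still limited at the time of writing.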