Cache Azure Container Registry Repositories #689
Here is one more use case where the limit of 50 rules won't work.

problem: it may be hard to safely clean up a container registry, and in some cases registries fill up quickly with tens of terabytes of images.

idea: imagine the following setup:

as a result:

why: while experimenting with Azure Container Registry and connecting it, it took only 2 days to fill 50 GB of data.

when: the limit of 50 rules seems to be the real blocker here; as a workaround we could probably use many registries, but that is error-prone.

alternative: of course, it would be much better to have idempotent builds and an image tagging strategy that overrides labels, but that's not always suitable.
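For context, registry cleanup today is typically done with the `acr purge` ACR Task, which is why safe cleanup is tricky at scale. A minimal sketch, assuming a registry named `myregistry` (a placeholder) and a 30-day retention window; the filter regex is illustrative and should be adjusted to your repository layout:

```shell
# Sketch: preview deletion of manifests older than 30 days in repos matching "docker.io/.*".
# "myregistry" is a placeholder; --dry-run lists what would be deleted without deleting it.
PURGE_CMD="acr purge --filter 'docker.io/.*:.*' --ago 30d --untagged --dry-run"
az acr run \
  --registry myregistry \
  --cmd "$PURGE_CMD" /dev/null
```

Dropping `--dry-run` performs the actual deletion, which is the step that is hard to do safely when tens of terabytes of images are involved.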
It is annoying that you need to define a cache rule for each and every repo. Having a wildcard like docker.io/* map to a prefix of docker.io/ within ACR would scale a lot better.
This issue is stale because it has been open 60 days with no activity. Remove stale label or comment or this will be closed in 30 days.
@JXavierMSFT do we have any updates on prefix matching?
We have released Wildcard cache rules. You can check the docs at aka.ms/acr/cache. However, we haven't released Azure Container Registry as an upstream yet. I will update this thread as soon as we release. |
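For reference, a wildcard cache rule can be created with the Azure CLI roughly as follows. This is a sketch: `myregistry` and the rule name are placeholders, and the exact flags should be checked against the docs at aka.ms/acr/cache:

```shell
# Sketch: one wildcard rule caching all of Docker Hub's "library" namespace,
# instead of one rule per repository. "myregistry" is a placeholder name.
az acr cache create \
  --registry myregistry \
  --name docker-hub-library \
  --source-repo "docker.io/library/*" \
  --target-repo "docker.io/library/*"
```

For authenticated upstreams, a credential set can be attached to the rule with `--cred-set`; anonymous Docker Hub pulls work without one.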
@JXavierMSFT anything new for caching ACR? |
Caching for ACR will soon allow users to cache repositories from other Azure Container Registries. This functionality is tentatively scheduled for release in late December 2023.