
Automatize container hashes update #125

Closed
qinqon opened this issue Jul 24, 2019 · 6 comments
Labels
lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed.

Comments

@qinqon
Contributor

qinqon commented Jul 24, 2019

The process of updating hashes when something changes in the containers is very error-prone. For example, changing something in a base image means:
/test push-base
fix hashes
/test push-centos
fix hashes
/test providers
fix hashes

A possible solution would be to point all the providers at a "stable" tag and add a "post-merge" job that updates stable.

@qinqon
Contributor Author

qinqon commented Jul 24, 2019

@slintes, there it is.

@slintes
Contributor

slintes commented Jul 24, 2019

Hey @qinqon, thanks for that proposal. We definitely need more automation for this!
About your suggestion: a moving stable tag is not an ideal solution, because it can result in old cached images being used. For that reason we would like to stick with shasums.
My suggestion would be to track all shasums in one parsable text file and read them from there wherever they are needed.
A postsubmit job that rebuilds and pushes all needed images sounds great; it then only needs to update that shasum file as well.
Let's use this issue to elaborate on this further.
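To make the "one parsable shasum file" idea concrete, here is a minimal sketch of what a postsubmit job could do after pushing rebuilt images. The file name (`image-shasums.txt`), the `name=sha256:...` line format, and the helper names are illustrative assumptions, not the project's actual layout:

```python
# Hypothetical sketch: track image digests in one plain key=value text file
# that a postsubmit job rewrites after pushing rebuilt images.
# File format (name=sha256:digest per line) is an assumption for illustration.

def parse_shasums(text):
    """Parse lines like 'kubevirt/base=sha256:abc...' into a dict."""
    shasums = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blank lines and comments
        name, _, digest = line.partition("=")
        shasums[name.strip()] = digest.strip()
    return shasums

def update_shasum(text, image, new_digest):
    """Return the file contents with one image's digest replaced (or appended)."""
    shasums = parse_shasums(text)
    shasums[image] = new_digest
    # Sorted output keeps the diff in the postsubmit commit deterministic.
    return "\n".join(f"{name}={digest}" for name, digest in sorted(shasums.items())) + "\n"

if __name__ == "__main__":
    original = "kubevirt/base=sha256:aaa\nkubevirt/centos=sha256:bbb\n"
    print(update_shasum(original, "kubevirt/base", "sha256:ccc"))
```

Consumers (provider scripts, Dockerfiles generated from templates, etc.) would then read digests from this single file instead of hard-coding them, so only one place needs fixing per rebuild.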

@kubevirt-bot
Contributor

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

/lifecycle stale

@kubevirt-bot kubevirt-bot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Oct 22, 2019
@kubevirt-bot
Contributor

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

/lifecycle rotten

@kubevirt-bot kubevirt-bot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Nov 21, 2019
@kubevirt-bot
Contributor

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

/close

@kubevirt-bot
Contributor

@kubevirt-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
