
container image versioning #77

Closed
cgwalters opened this issue Jun 5, 2018 · 5 comments
Labels
help wanted Denotes an issue that needs help from a contributor. Must meet "help wanted" guidelines.

Comments

@cgwalters
Member

Today we're building the ostree commit twice (once from the Dockerfile triggered by prow, and once in Jenkins), and pushing two different containers to two different registries.

The Dockerfile path entirely tosses the ostree commit history concept and loses our "did something change" handling. We are also not using this container content to build virt images.

We really need to get to one ostree commit stream.

Further, we need to handle versioning for this in a saner way; just pushing `:latest` isn't going to cut it. As a strawman, we could have `buildmaster`, `latest`, `beta`, and `stable`: the `buildmaster` concept here means the build hasn't undergone any tests yet, and `latest` is what I've used "smoketested" for in the past. Then `beta` and `stable` would be manual promotions?
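Written down, the strawman tag scheme above might look like the following. This is just the proposal from this comment restated as a policy table, not an agreed-on scheme:

```
buildmaster  newest compose, no tests run yet
latest       passed smoke tests
beta         manual promotion from latest
stable       manual promotion from beta
```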

@ashcrow
Member

ashcrow commented Jun 5, 2018

The outcome of this discussion is perfect for one or more implementation cards.

I generally think of `latest` as the most recently built image, which may or may not be "released". But I'm fine with making `latest` the latest version that has passed at least a minimal set of tests. The rest of the tags get a 👍 from me.

@cgwalters cgwalters added the help wanted Denotes an issue that needs help from a contributor. Must meet "help wanted" guidelines. label Jun 5, 2018
@cgwalters
Member Author

Some other random thoughts on this. We could probably convert to a "repo stored in container" model if we switched the Anaconda components to download the container and unpack it in CI.

(Actually, I'd still like to switch to rojig-in-container, so in the short term we'd do container -> rojig -> bare, but it really shouldn't be too hard to teach Anaconda about rojig.)

@ashcrow
Member

ashcrow commented Jun 19, 2018

@cgwalters I'm +1 to switch to rojig ... it may be worth creating a card set to bite the bullet and do that before doing work like this.

@cgwalters
Member Author

cgwalters commented Aug 2, 2018

It'd be great to have someone look at automatically tagging :alpha after it passes tests.

Also, to make things more complex: I think we should defer uploading AMIs until after basic qemu sanity checking. Right now the AMI upload is 20 minutes; we could have run tons of basic qemu tests in that time.

So maybe the flow is e.g.:

1. treecompose -> `:latest`
2. basic sanity tests -> `:alpha-sanity` -> upload to EC2
3. once it's in EC2 we do a lot more extensive testing, maybe of a cluster -> `:alpha-cluster-sanity`

Promotion from :alpha-cluster-sanity to e.g. :beta would be manual?
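As a sketch, each gate in the flow above could be implemented by retagging the already-pushed image rather than rebuilding it. The registry, image name, and the use of `skopeo copy` for retagging here are my assumptions, not decisions from this issue; the script is a dry run that prints the commands instead of executing them:

```shell
# Hypothetical promotion sketch; registry/image names are placeholders.
REGISTRY=registry.example.com/os
IMAGE=coreos

# promote SRC_TAG DST_TAG: retag the existing image instead of rebuilding.
# Dry run: print the skopeo command rather than running it.
promote() {
    echo "skopeo copy docker://$REGISTRY/$IMAGE:$1 docker://$REGISTRY/$IMAGE:$2"
}

# treecompose already pushed :latest; then, as each gate passes:
promote latest alpha-sanity                  # basic qemu sanity tests passed
promote alpha-sanity alpha-cluster-sanity    # EC2/cluster tests passed
promote alpha-cluster-sanity beta            # manual promotion step
```

The key design point is that promotion only moves tags; the image bits built by treecompose are never rebuilt, so what reaches `:beta` is byte-identical to what was tested.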

Yet another tricky thing in all of this is that we will almost certainly need to (in practice) decouple OS-basics testing from kube-and-above testing.

@cgwalters
Member Author

The original issue here was fixed; moving the tag discussion to #201.
