
Build and release `kindest/node:VERSION` on kubernetes release #197

chuckha opened this issue Jan 2, 2019 · 6 comments

@chuckha
Contributor

commented Jan 2, 2019

Right now the node image that kind uses is published by hand by @BenTheElder or @munnerz. It would be good to get this into the release pipeline somehow, so that when a new version of Kubernetes is published we also get a new node image for kind to use. The tooling exists; it's just a matter of getting the image pushed to the correct place. See the tooling at https://github.com/kubernetes-sigs/kind/blob/master/hack/build/push-node.sh
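
For reference, the manual flow amounts to roughly the following (a minimal sketch only; the version tag is illustrative, and push-node.sh remains the authoritative script):

```bash
# A hedged sketch of the manual publish flow; hack/build/push-node.sh is the
# real entry point, and the tag shown here is illustrative only.
kind build node-image --image kindest/node:v1.13.2   # build from local k8s source
docker push kindest/node:v1.13.2                     # publish to Docker Hub
```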

For complete context, please see this Slack thread.

There are a few approaches discussed in the thread:

  1. Shove this into anago, similar to how the conformance image is published. This is easy, but anago is already getting too big, we shouldn't keep adding more to it, and doing so risks a slippery slope.

  2. Periodically check whether the release tags have changed; if they have, do a build, otherwise do nothing (a sketch of this follows the list).

  3. @BenTheElder has been looking at extending Prow to trigger off GCS/GCR Pub/Sub, so we can kick off a build after a normal release finishes.
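
A minimal sketch of what option 2 could look like, assuming a periodic job and a local state file (both hypothetical; only hack/build/push-node.sh above is existing kind tooling):

```bash
#!/usr/bin/env bash
# Hypothetical periodic job for option 2: rebuild only when the newest
# upstream release tag changes. The state-file path is made up for this
# sketch; hack/build/push-node.sh is the existing kind script.
set -eu

# Newest upstream tag (requires git >= 2.18 for --sort on ls-remote;
# pre-release tags are not filtered here, a real job would be pickier).
latest="$(git ls-remote --tags --sort=-v:refname \
    https://github.com/kubernetes/kubernetes 'v*' \
  | grep -v '\^{}' | head -n1 | sed 's|.*refs/tags/||')"

state=/var/lib/kind-publisher/last-built-tag   # hypothetical state file
last="$(cat "${state}" 2>/dev/null || true)"

if [ "${latest}" != "${last}" ]; then
  hack/build/push-node.sh                      # existing build+push tooling
  echo "${latest}" > "${state}"
fi
```

Run on a timer (e.g. as a Prow periodic), this would publish a node image shortly after each new upstream tag appears.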

We would like to move this off Docker Hub before 1.0, but there isn't really a great place to put it right now. Ideally we could have a CNCF-sponsored gcr.io bucket so that Google isn't responsible for the storage.

@BenTheElder BenTheElder self-assigned this Jan 3, 2019

@BenTheElder BenTheElder added this to To do in 1.0 via automation Jan 3, 2019

@BenTheElder BenTheElder modified the milestones: 1.0, 2018 Goals Jan 3, 2019

@neolit123

Contributor

commented Jan 3, 2019

I guess I'm only -1 on the anago approach.

> We would like to move this off Docker Hub before 1.0, but there isn't really a great place to put it right now. Ideally we could have a CNCF-sponsored gcr.io bucket so that Google isn't responsible for the storage.

This overlaps with the work by the k8s-infra team, so if they manage to do the above soon, the kind image can be on CNCF ground; otherwise Docker Hub seems like a viable option for 1.0.

@BenTheElder

Member

commented Jan 3, 2019

Ditto @neolit123 on anago, per our previous discussion: I don't think Kubernetes the core project should be in the business of releasing SIG subprojects just yet; we can ensure we publish images for each release by other means.

For now, anyone who really wants an image today can check out a k8s release tag and build an "unofficial" image with the same tools we use, roughly as sketched below.
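
Roughly (a sketch only; the tag is illustrative, and the path assumes kind's default expectation of a Kubernetes checkout under GOPATH):

```bash
# A hedged sketch of an unofficial build from a release tag; the tag shown is
# illustrative and the source path assumes kind's default Kubernetes checkout.
git clone https://github.com/kubernetes/kubernetes \
    "$(go env GOPATH)/src/k8s.io/kubernetes"
cd "$(go env GOPATH)/src/k8s.io/kubernetes"
git checkout v1.13.2                       # any released tag
kind build node-image --image kindest/node:v1.13.2-unofficial
```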

I'm also ambivalent on Docker Hub: there haven't been major downsides so far, but I'd be happy to move to whatever hosting the k8s-infra team settles on going forward. The current Docker Hub setup was indeed a stopgap to have joint access with @munnerz, but it has worked fine.

@chuckha

Contributor Author

commented Jan 3, 2019

Cross-posting a few communication links:

Email to the steering committee:
https://groups.google.com/a/kubernetes.io/forum/#!topic/steering/ASZGGcvJQts

CNCF Slack discussion:
https://cloud-native.slack.com/archives/C08PSKWPN/p1546470742035300

k8s-infra Slack message:
https://kubernetes.slack.com/archives/CCK68P2Q2/p1546528406005400

Tracking issue:
kubernetes/k8s.io#158

@BenTheElder

Member

commented Jan 16, 2019

@munnerz and I have been looking into this more. We think we can set up jobs based on kubernetes/kubernetes git tags to publish new images.

@fejta-bot


commented Apr 28, 2019

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@BenTheElder

Member

commented Apr 30, 2019

/remove-lifecycle stale
