Kind build broken #509
Yes, this is caused by the removal of vendor; I'm working on documentation updates. kind requires Go modules now: Go 1.12+ should be used and modules enabled. There is now a Makefile that will build reproducibly with a new-enough Go in Docker, without requiring a particular Go version on the host.
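A minimal sketch of the modules-based build described above, assuming Go 1.12+; the fetch itself is commented out so the snippet runs offline:

```shell
# Enable Go modules (required by kind after the vendor removal).
export GO111MODULE=on
# go get sigs.k8s.io/kind   # uncomment to actually fetch and build kind
echo "GO111MODULE=${GO111MODULE}"
```

Alternatively, the Makefile mentioned above builds inside Docker so the host Go version does not matter.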
With modules enabled, kind still fails: go: finding sigs.k8s.io/kind v0.2.1
Can you add back vendor until people get a chance to transition? Breaking the world is really bad practice.
#510 fixed modules.
Master is not stable; we can spell this out in the docs. We have release tags for stable versions. kind is alpha.
Can you clone from a stable commit at install time, or use a release binary?
Developing in master and cutting stable releases is standard practice for projects in the Kubernetes org, including the main repo.
But there are no instructions on how to install these cut stable releases. The only instruction in the README installs master.
That is true; installation can just be downloading the binary and putting it somewhere in your $PATH, but we should document this. I thought we had already, but it turns out we haven't! Binaries are available at e.g. https://github.com/kubernetes-sigs/kind/releases/download/0.2.1/kind-linux-amd64; alternatively it is
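The release-binary install can be sketched as a short script. The version and URL come from the link in the comment above; the install destination is an assumption, and the download itself is commented out so the snippet runs offline:

```shell
# Install a pinned kind release binary rather than building from master.
# Adjust OS/arch (linux-amd64 here) for your host.
KIND_VERSION="0.2.1"
KIND_URL="https://github.com/kubernetes-sigs/kind/releases/download/${KIND_VERSION}/kind-linux-amd64"
# curl -Lo kind "${KIND_URL}" && chmod +x kind && sudo mv kind /usr/local/bin/
echo "${KIND_URL}"
```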
instructions in the README and docs were updated and tested in clean environments, they should work. |
Thanks for updating the README, but I don't think 'go mod vendor' is such a huge burden, and it would allow more people to use or test master while keeping 'go get' working. 0.2.1 is from Mar 27, and kind seems to be evolving quite rapidly. We have a similar problem in Istio with master not being stable, so I can't complain much, but at least
+1 to make |
That is true, the burden isn't the concern so much as the git bloat.
Yes, but we have intentionally not tagged a release while we finish making sure the changes are ironed out. Releases should "just work". The next one will be between now and KubeCon (end of next week) if we can finish ensuring the changes are good to go. We aim for ~monthly releases currently. We're slightly overdue due to trying to finish ironing out the breaking image improvements. It should work much faster,
I will again point out that master did and always has compiled, and in fact passed conformance against Kubernetes 1.12, 1.13, 1.14, and master (and until recently 1.11, but Kubernetes stopped supporting it). We have pretty thorough CI; all PRs must pass it. Master is "unstable" in that we don't have the resources to fully guarantee it. Releases get a little extra QA: e.g. we do not have access to Windows CI, but we test there before releasing. Most of this is just me :-) We did, however, break the node image format between releases, and we changed some implementation details of the node. We make no guarantees about this yet; kind is alpha. We're pushing to figure out what changes to make on the way to beta and GA/stable.
The second report was from a 'go get' with modules enabled. We are also switching to go mod, but some form of vendor is still useful. Disk is not that expensive. There are few other options, like having a second git repo with just a main and vendor. Fetching the deps from a single git repo (packed/compressed/etc.) is faster than fetching from a large number of git repos,
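The vendoring workflow being discussed uses standard Go module commands (Go 1.12+). A sketch; the commands that need a go.mod and network access are commented out so the snippet runs anywhere:

```shell
# Vendoring with Go modules: keep a ./vendor directory while using go.mod.
export GO111MODULE=on
# go mod vendor                # copy all module dependencies into ./vendor
# go build -mod=vendor ./...   # build strictly from the vendored copies
echo "vendor-mode build uses -mod=vendor"
```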
See the updated README. I'm also looking at making the tools a second module.
Getting a second repo is wildly painful in this project and unlikely to be approved for that purpose. As for speed, https://proxy.golang.org/ pretty much solved that issue.
@ all: we make the same mistake in the kubernetes CI jobs, but at least we know when and how we are going to break it.
Might be a good idea to not depend on the lint package? EDIT: I guess the follow-up on that is #514.
Agreed @neolit123, see: #514
It's just to pin for CI. #514 moves all dev-time tools to their own module. That way we also don't require all the code-generator code when just building kind.
See the updated readme; the @master part is briefly required until we put 0.3 out, otherwise it tries to get 0.2.1, which has the github.com/golang/lint import instead of golang.org/x/lint.
Might be a good idea to not depend on the lint package? Unless the verify tools demand air-gapped support, I don't see that much overhead in pulling the package and rebuilding it on demand outside the scope of /go.mod.
I think ideally our 'daily' build should use a daily build of kind, our weekly builds should use weekly builds of kind, and the same for releases. In the short term we have the problem that master is far better than the last release (speed, fixes, etc.). Regarding build tools: as I mentioned in a different bug, it would be ideal if kind publishes Docker
Please consider using a specific commit / release for all of your builds. We can't know what will and won't break your CI, and we need to iterate to build out the features everyone is demanding and improve kind.
That sounds nice ... but
FWIW, the kind node image has all of the dependent images cached in it hermetically. Caching that image as appropriate for your CI setup will make it work without the internet.
As I said above, cluster creation itself should be about as reproducible as possible; the Kubernetes setup is hermetic. Sticking this hermetic setup into another one doesn't really make it any more hermetic; just pin which kind binary and image you use (pin to a commit, pin to a node image sha256).
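The pinning advice above can be sketched as a CI fragment. The release tag is from this thread; the node image tag and digest below are placeholders for illustration, not real values, and the cluster creation itself is commented out:

```shell
# Pin both the kind binary (release tag) and the node image (by sha256 digest).
KIND_VERSION="0.2.1"
NODE_IMAGE="kindest/node:v1.14.1@sha256:<digest>"   # substitute your real digest
# kind create cluster --image "${NODE_IMAGE}"
echo "pinned: kind ${KIND_VERSION}, image ${NODE_IMAGE}"
```

Pinning by digest rather than tag means a re-pushed tag cannot silently change what your CI runs.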
I think having at least one pipeline using master is important; it doesn't have to be pre-submit or blocking, but it is good to catch problems earlier and provide feedback. Re CIs to tackle: you don't have to take all of them, but it would be great if you can get GCB working. Re the image cached in the node: that's what we want to do for Istio as well, have 'air-gapped' images. On publishing: if you just add the kind binary to the node image you already publish, it would be enough. Users can then either extract it from that node image or use it with a different entrypoint. I noticed that different versions of the kind binary don't work with different node images; it would be good to
This also hit us and affects all our CI and documentation 😞 The initial fix for one project meant doing this: openfaas/ofc-bootstrap@50410e0. I'd really appreciate seeing #208 improved upon, as I hear the latest binary is from March.
Again, we have extensive presubmit and CI testing on prow. We know it works and passes conformance for 1.12..master
@munnerz spun up a project for this; it's a work in progress, started with CircleCI in the absence of a particular request.
We won't be adding the kind binary to the node image. The node image also has not contained Docker for a while now; the node image is a snapshot of Kubernetes and what we need to boot it. I will consider another image in the future; in the meantime
So until some unreleased changes in HEAD, all images worked with all releases. We're also going to resume pinning the default image to a sha256, and recommend consumers do the same. Avoiding breakage is a key tenet, but we are in alpha. I spent multiple weeks trying to get the improvements in without any breakage, but we needed to move forward. It is still only v0.2 after all; we haven't been around all that long! For 0.4 I'm also aiming to make it easier / cheaper to build node images by supporting building from tarballs; there's an existing PR to fix up. This will make it relatively cheap and easy to pick a Kubernetes version and publish an image for it to pin for your project(s) until kind is fully stable.
Apologies for that, but please do not consume kind at HEAD.
Yeah, the cadence currently is about 1.5 months instead of 1 month. This is still twice as fast as Kubernetes and faster than most deployers. I'd like to bring it down to a month, but this release is biting the bullet on some major breaking changes we'd been discussing for a while. As a result, the image size and boot times will both drop, and booting should be more reliable.
See https://github.com/kubernetes-sigs/kind/milestone/5 for 0.3, slated by next weekend.
Even ignoring changes like go modules, using kind at HEAD for important CI is a risky bet. Even an innocuous change on our end could somehow break your setup, and kind at HEAD has no SLA. I don't think kops, minikube, etc. do either. Even Kubernetes testing has or will be moving to pin a particular kind version.
My point is that the recommended instructions changed overnight as a breaking change. From:
To:
They should really not change that drastically without some proper warning. It reminds me of Logrus/logrus. If the maintainers don't want people to use HEAD (read typing in:
That is true, though really we should update those to reflect that they are not recommended for CI. It's easy to try out kind that way (also note: with no options / config / flags), but none of the projects I work with use *any* tools from HEAD like this in CI; all of them pin a version to avoid this.
Apologies; FWIW, we did pin a warning in the #kind Slack channel, and the intention was for release notes to improve this. Again, I didn't expect CI to actually be installing from HEAD, which was clearly an oversight. The documentation does cover release binaries, and I expected that CI would of course use those for stability on such a young project. Clearly this was an oversight, but it is still recommended.
Yes. This is one of the behaviors using modules improves. [1]: https://github.com/golang/go/wiki/Modules#how-to-upgrade-and-downgrade-dependencies
v0.3.0 is out with binaries now. go get will default to that version, and we've updated all the installation docs to prefer this. Closing this and moving forward with modules now. Also noting that minikube went from go dep to modules and removed vendor in one PR in the past week as well. kubernetes/minikube#4241 |
Probably caused by removal of vendor.
What happened:
What you expected to happen:
How to reproduce it (as minimally and precisely as possible):
Anything else we need to know?:
Environment:
- kind version: (use `kind version`):
- Kubernetes version: (use `kubectl version`):
- Docker version: (use `docker info`):
- OS (e.g. from `/etc/os-release`):