Improving or replacing godep for dependency vendoring #44873
/area build-release
I looked at moving to Glide about a year ago. It fell over for our repo. A bunch of bugs were filed, and as far as I can tell, glide is EOL'ed in favor of `go dep`, which isn't ready yet.
/sig contributor-experience
I don't think it's EOL'd, but perhaps @technosophos (who is also involved with kubernetes/helm) can help us out.
I'm not a fan of moving to glide if we're planning on moving to dep one or two years later, once it's stable.
@spxtr I guess it depends on the effort involved. If it's a low bar to move to glide, and some of the bugs Tim saw last year are fixed, then it may significantly improve the experience for new contributors, even if it's just for the 1-3 years until the first-party dep tool is released (which may not even end up working for us initially, because of the way kubernetes does things like staging/). Right now godep, as it stands, presents some huge barriers. Maybe those can be fixed with a layer of bash-script duct tape, but perhaps glide is easier/better.
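For readers who haven't felt the pain point, here is a rough sketch of the godep cycle contributors had to run through at the time. The commands are godep's real ones; the exact sequence and the dependency chosen are illustrative:

```sh
# Install (or refresh) the tool itself; it was not vendored at this point.
go get -u github.com/tools/godep

cd "$GOPATH/src/k8s.io/kubernetes"

# Check out every pinned revision from Godeps/Godeps.json into the GOPATH.
# On a tree the size of k8s this step is slow and memory-hungry.
godep restore

# Bump one dependency by hand...
(cd "$GOPATH/src/github.com/spf13/pflag" && git checkout v1.0.0)

# ...then rewrite Godeps/Godeps.json and the vendor/ tree to match.
godep save ./...
```

Note that every step mutates the developer's own GOPATH, which is a large part of the friction described below.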
Things that use godep in a nontrivial way:
I probably missed some; this was just a quick pass.
If we can get rid of staging/ and develop in those repos directly, items 3, 4, and 5 above will go away, though that probably requires more effort than switching to glide.
I tried to use …
We routinely have issues with the godep instructions failing, not producing the correct change, taking forever, using too much memory, etc. Internally, CoreOS moved away from godep to glide about a year ago because the tool routinely ate into developers' time, and I've seen Kubernetes contributors have similar issues with the tool (e.g. #42669 (comment)). I think moving off godep is something worth pursuing, even if we just want to wait for `dep`. Having been fighting with godep all day, I'd be happy to help with any work in this direction.
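For comparison, the glide flow CoreOS-style projects switched to looks roughly like this. This is a sketch; flags and defaults shifted between glide releases:

```sh
# Generate glide.yaml by scanning the project's imports.
glide init

# Resolve versions, write glide.lock, and populate vendor/.
glide update

# On a fresh checkout, reproduce vendor/ exactly from glide.lock.
glide install
```

The key difference from godep is that resolution is driven by committed config files rather than by the current state of the developer's GOPATH.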
I had a look at using glide a couple of days ago, but my experiment didn't go well due to the complexity of k/k. I ran into two different bugs, as well as issues with the staging dir. The other issue with glide is the way it interacts with bazel: the way it handles subpackages doesn't play well with bazel's build files. I have a feeling it could be worked around, but it would be nontrivial for sure. Very timely, there was also some movement on dep (golang/dep#170 (comment)). It looks like @ericchiang is taking a poke at it. There's still more looking into to be done.
We dep folks are trying to actively pay attention to y'all (and I've not forgotten my discussions with @thockin), though there are a lot of plates spinning right now 😄. If someone could continue poking at it regularly, that'd be the best way to ensure we keep ironing out the bugs that only projects the size of k8s tend to run into. For my purposes, anyway, I think just giving it a poke once a week should be plenty.
On the bazel interaction: while dep's architecture is quite different from glide's, I've no idea what the underlying issue here might be, as I don't know bazel or how k/k uses it. This is exactly the sort of thing I'd love to get information about, and the sooner the better, so that we can validate our architectural decisions. Because...
We're actually trying to move towards, at least, a stable manifest and lock (golang/dep#276), so that folks can feel safe about committing those files and using dep day to day.
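The committed manifest-and-lock workflow dep was converging on looks like the sketch below. The Gopkg.toml/Gopkg.lock names are the ones dep eventually stabilized on, not necessarily what existed at the time of this comment:

```sh
# Infer constraints from existing imports and write Gopkg.toml + Gopkg.lock.
dep init

# Commit the manifest, the lock, and the vendored tree together.
git add Gopkg.toml Gopkg.lock vendor/

# On any later checkout: sync vendor/ to match the lock (no re-solving).
dep ensure

# Deliberately re-solve and advance the lock when updating dependencies.
dep ensure -update
```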
Commenting to link related issue: kubernetes/client-go#78
To update on this:
This would get us to the point where users wouldn't need to change their local environment/GOPATH to be able to work on k8s or to run the hack scripts. Please let me know if anyone has any thoughts. I'll operate on lazy consensus, and if there are no objections, I'll continue work on this 😃.
That's really unfortunate... godep already dominates the verify CI job: ~30 minutes of the 1-hour run time, from what I've seen debugging timeouts on https://k8s-gubernator.appspot.com/builds/kubernetes-jenkins/pr-logs/pull/49642/pull-kubernetes-verify/
It's because we're restoring the godeps potentially up to 4 times, depending on a couple of different factors. However, once we transition to restoring the godeps in a container and passing that data container between steps, this should drop the time by a good chunk.
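A hypothetical sketch of that restore-once idea, using the classic docker data-container pattern. The image name `verify-image` is invented for illustration:

```sh
# Data-only container owning the /go volume that holds the restored GOPATH.
docker create -v /go --name godep-data golang:1.8 /bin/true

# Restore the godeps exactly once into the shared volume.
docker run --rm --volumes-from godep-data -w /go/src/k8s.io/kubernetes \
  verify-image ./hack/godep-restore.sh

# Every later verify step reuses the already-restored GOPATH instead of
# paying the restore cost again.
docker run --rm --volumes-from godep-data -w /go/src/k8s.io/kubernetes \
  verify-image make verify
```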
We should not knowingly merge changes that drastically lengthen the run time of the godep scripts. The verify CI job is already by far the longest-running job on the merge queue (it's now at 1h9m when godeps are verified; the next longest job is at 45m).
I'm especially concerned about this step taking longer... it currently takes ~18 minutes.
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions here: https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md

Godep scripts cleanup

Another try at sanitizing the godep scripts. Instead of messing with containers, or assuming anything about GOPATHs, let's just work with whatever GOPATH we are given. If you want sane godep restore behavior you can use `./hack/run-in-gopath.sh ./hack/godep-restore.sh`; this will restore into your _output dir. You can update deps yourself, and then run `./hack/run-in-gopath.sh ./hack/godep-save.sh`. xref #44873

This also checks out godep into your working GOPATH. Without this, we have to `go get` it every time. We should just vendor it.
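To make the approach concrete, here is a minimal sketch of what a run-in-gopath.sh-style wrapper does. This is not the real script, just the idea: build a disposable GOPATH under _output and symlink the checkout into it, so nothing outside the repo is mutated:

```sh
#!/usr/bin/env bash
# Sketch only: build a throwaway GOPATH under _output and run a command in it.
set -o errexit -o nounset -o pipefail

REPO_ROOT="$(cd "$(dirname "${BASH_SOURCE[0]}")/.." && pwd)"
FAKE_GOPATH="${REPO_ROOT}/_output/go"

# Make the repo appear at its canonical import path inside the fake GOPATH.
mkdir -p "${FAKE_GOPATH}/src/k8s.io"
ln -snf "${REPO_ROOT}" "${FAKE_GOPATH}/src/k8s.io/kubernetes"

export GOPATH="${FAKE_GOPATH}"
cd "${FAKE_GOPATH}/src/k8s.io/kubernetes"
exec "$@"
```

Invoked as `./hack/run-in-gopath.sh ./hack/godep-restore.sh`, the restore then lands under _output instead of touching the developer's own GOPATH.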
I want to throw my hat into the ring here and propose `trash`, the vendoring tool we use at Rancher Labs.
So it takes 13 seconds to process k8s (on my i7-7600U laptop, with an SSD). That is assuming all the git repos are already cached (trash maintains a cache of repos in ~/.trash-cache); if you have to pull the repos fresh, it just depends on bandwidth. The output of trash is not a perfect match of what godep produces, though it's not far off. I could get an engineer at Rancher Labs to modify trash specifically to make it work for k8s, but I'd rather not do that if it won't get accepted. I agree that long term `dep` is the right answer.
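For those who haven't seen trash, the model is a flat file of pinned revisions plus a single command. The file below is an illustrative fabrication, not k8s's real dependency set, and the exact config format varied across trash releases:

```sh
# trash reads a vendor.conf of "import-path version" pins.
cat > vendor.conf <<'EOF'
# package
k8s.io/kubernetes

github.com/spf13/cobra  v0.0.1
github.com/spf13/pflag  v1.0.0
EOF

# Clone/refresh repos into ~/.trash-cache, then copy the pinned trees
# into vendor/. With a warm cache this is where the ~13s figure comes from.
trash
```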
As written on Twitter, I have my concerns about switching to some intermediate tool (before dep). Main reason: there are more artifacts and consumers than in a "normal" Go project, and we have to keep all of them working.
There are reasons that godep is as slow as it is. Compare my old tools/godep#538. If speed is our only issue, we can also fork godep. In general, changing the vendoring tool can easily disrupt the workflow of k/k devs, possibly the publishing tool chain, and a number of downstream consumers. As much as I hate godep and would love to see us move to something better, we have to be careful to understand all the consequences. At the very least, such a switch must be prototyped with the new tool before we can realistically discuss whether it is worth it. The devil is in the details. If trash happens to be a drop-in replacement hidden in our hack/ scripts, 30x faster, but with the same artifacts (Godeps.json for k/k and all the staging repos), this prototype could be very convincing, though.
@sttts Yep, I totally understand the concerns with the switch, which is why I'm asking if anybody would want this. Basically what I'm saying is that I'm willing to bend trash to the requirements of k8s. It's our stupid little tool and it works well for our needs. We work more and more with the k/k code base, and frankly godep is like pulling teeth for us, given that in our day-to-day work we use a much faster tool. So I'm more than willing to support Godeps.json and make trash produce the same output. It basically comes down to this: do you want the "correct" solution, or one that "works"? I think everybody agrees that long term dep should be the correct answer. For that to happen, dep needs to mature, and k8s needs to change to match how dep works. Going with trash would be something that just works, but it won't exactly be a pretty solution; it would mostly keep the status quo and just make things faster and easier to deal with. So it's a hack. If you aren't caching the git clones in CI, then it's hard to make things very fast. I just tried running trash with an empty cache on the k8s repo, and it takes about 10 minutes just to clone the dependencies. That was on a 2GB DigitalOcean VM (so not a super fast machine).
An update on this:
As @sttts mentioned, actually moving away from godep is a big change that will be disruptive both to developers working on kubernetes itself and to those who vendor kubernetes (and the derivative staging repos). With how close dep is to working for us, I don't think moving to another vendoring tool makes sense.
Adding notes about how we got this to work for kops (mostly); originally posted in #59332, but reposting here. With the latest version of dep and a few tweaks, it now appears to be possible to get dep working with kubernetes. Here are some learnings from getting dep not to crash for kops (cf. kubernetes/kops#4382):
It's not ideal; we're basically using dep as a glorified wrapper around git pull, but at least it finally can be made to work. One intriguing option: this could solve our problem of coordinating versions for a release in a multi-repo world, because we could publish the list of dep pins for each release.
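A hedged illustration of the "glorified wrapper around git pull" shape: every k8s.io repo gets a dep `[[override]]` pinned to one release tag, so the solver has nothing left to decide. The repo list and tag below are examples only, not the full set kops actually pinned:

```sh
cat > Gopkg.toml <<'EOF'
# Overrides win over all transitive constraints, effectively hard pins.
[[override]]
  name = "k8s.io/apimachinery"
  version = "kubernetes-1.9.3"

[[override]]
  name = "k8s.io/client-go"
  version = "kubernetes-1.9.3"
EOF

# Fetch exactly the pinned revisions into vendor/.
dep ensure -v
```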
What's the status here? I'm very interested in this issue (in fact, I was looking for one because I'd have filed one if this weren't here yet; I've been quite frustrated with godep lately). I saw that kubernetes/test-infra#4868 (comment) from January says "We are using dep now", but #44873 (comment) from February describes a plan to move from godep to dep... so I imagine this is still in progress? One part that's been particularly frustrating to me with godep (because there's an easy fix for it) is how it uses unstable … Thanks!
@filbranden The discrepancy you note there is that test-infra has moved to dep, but kubernetes/kubernetes has not yet. The goal is still to move towards dep, but some of the work stalled out due to other priorities (specifically near the end of the 1.10 release).
Oh, thanks a lot for the clarification @cblecker! So it looks like things are on track for … Cheers!
Looks like …
I filed #63607 about figuring out our vgo strategy, though that is from the perspective of what we publish, and this seems to be about how we consume.
An update on this: yes, we're looking at moving towards vgo now that it is experimental in go1.11. There's a POC here: #65683. Both the publish and consumption use cases need to be taken into account.
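On the consumption side, the go1.11 experimental flow looks roughly like this (a sketch; module support was still opt-in behind GO111MODULE at the time):

```sh
# Force module mode even inside a GOPATH checkout.
export GO111MODULE=on

# Synthesize go.mod; go1.11 seeds it from Godeps/Godeps.json when present.
go mod init k8s.io/kubernetes

# Add missing and drop unused requirements.
go mod tidy

# Re-materialize vendor/ for tools that still expect a vendor tree.
go mod vendor
```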
Is there an update on this now that we have reached Go 1.12 and go.mod has been supported for a full version cycle? |
@liggitt: Closing this issue.
There are a number of issues with godep for dependency management, including but not limited to:
Overall, this creates a barrier for contributors who haven't used godep. Long term, we may want to look at moving to dep, but it's still in alpha and won't be released for a while.
I wanted to open up a discussion about possibly moving to another vendoring solution, such as Glide. It may help deal with things like vendor caching, pinning specific versions without destroying your GOPATH, and the like. What may be complex, however, is the staging directory and getting it to vendor correctly with such a tool.
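To illustrate the pinning point, this is roughly what it looks like in glide: versions live in glide.yaml, and resolution happens in glide's own cache rather than by mutating your GOPATH. The package names and versions below are placeholders:

```sh
cat > glide.yaml <<'EOF'
package: k8s.io/kubernetes
import:
- package: github.com/spf13/pflag
  version: v1.0.0    # pin to a tag without touching GOPATH checkouts
- package: github.com/golang/glog
EOF

# Solve and write glide.lock; fetches go to glide's cache, not your GOPATH.
glide update
```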
Does anyone have any thoughts or opinions?
Related issues:
cc: @kubernetes/sig-contributor-experience-misc @sttts @thockin @caesarxuchao