cmd/go: get of module already present in proxy can be slow #50158
Comments
Yes, in theory we could return as soon as we find the package at the most precise path. That short-circuit isn't optimized because the shorter paths are supposed to return quickly — so the question is, why did the proxy take 2+ seconds to tell you about a version that trivially doesn't exist?
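As a rough sketch of the short-circuit being discussed — the candidate paths, the simulated latencies, and the lookup helper below are made up for illustration, not cmd/go's actual implementation:

```go
package main

import (
	"context"
	"errors"
	"fmt"
	"time"
)

// lookup stands in for a proxy query for a candidate module path.
// Latency and results are simulated.
func lookup(ctx context.Context, path string) error {
	select {
	case <-time.After(500 * time.Millisecond):
		if path == "example.com/a/b/c" {
			return nil // pretend only the most precise path resolves
		}
		return errors.New("no matching module at " + path)
	case <-ctx.Done():
		return ctx.Err()
	}
}

func main() {
	// Candidate module paths, most precise first.
	candidates := []string{"example.com/a/b/c", "example.com/a/b", "example.com/a"}

	ctx, cancel := context.WithCancel(context.Background())
	defer cancel()

	results := make(chan string, len(candidates))

	// Most precise candidate: if it resolves, the shorter paths are irrelevant,
	// so cancel them instead of waiting for their responses.
	go func() {
		if err := lookup(ctx, candidates[0]); err == nil {
			cancel()
			results <- candidates[0] + ": found (cancelling shorter paths)"
		} else {
			results <- candidates[0] + ": " + err.Error()
		}
	}()
	for _, p := range candidates[1:] {
		p := p
		go func() {
			if err := lookup(ctx, p); err != nil {
				results <- p + ": " + err.Error()
			} else {
				results <- p + ": found"
			}
		}()
	}

	for range candidates {
		fmt.Println(<-results)
	}
}
```

The point of the sketch is only that cancelling the remaining lookups once the longest path resolves would avoid waiting on slower, less precise queries; how cmd/go would actually wire that up is a separate question.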
Indeed. Plus the question above about the 15-20 secs delay we can't account for in the -x output.
Checking if a version exists or not takes some measurable amount of time, and the proxy's implementation creates some overhead. I'm not sure what "trivially doesn't exist" means, but if we have to run a go command and it has to talk to an origin server, 2 seconds does not seem unreasonable to me.
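For reference, a minimal sketch of what a version-existence check looks like under the documented module proxy protocol (https://go.dev/ref/mod#goproxy-protocol); the module path and version are arbitrary examples, and this is not how cmd/go issues the request internally:

```go
package main

import (
	"fmt"
	"net/http"
)

// versionExists asks a module proxy whether it knows a given version.
// Per the proxy protocol, a 404 or 410 response means the version is not known.
// Module paths with uppercase letters need case-encoding; omitted here for brevity.
func versionExists(proxy, modulePath, version string) (bool, error) {
	url := fmt.Sprintf("%s/%s/@v/%s.info", proxy, modulePath, version)
	resp, err := http.Get(url)
	if err != nil {
		return false, err
	}
	defer resp.Body.Close()
	switch resp.StatusCode {
	case http.StatusOK:
		return true, nil
	case http.StatusNotFound, http.StatusGone:
		return false, nil
	default:
		return false, fmt.Errorf("unexpected status %s", resp.Status)
	}
}

func main() {
	ok, err := versionExists("https://proxy.golang.org", "golang.org/x/text", "v0.3.7")
	fmt.Println(ok, err)
}
```

Even this trivial check costs a round trip to the proxy, and possibly another from the proxy to the origin server, which is where a second or two can go.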
Why would a
FWIW, the
So that seems like 2+ seconds for container setup? Which I guess seems ok, but given the previous trace I would have expected the negative result to already be present in the cache.
Agreed. If that happens again, could you kill the go command with SIGQUIT and share the resulting goroutine dump?
Yes.
I don't believe this was done with an empty module cache. For me it takes anywhere from 300ms to 1.5s. But yes, there is quite a bit of overhead relative to the required work for something this small. The behavior of the proxy is well within my expectations, so I'm going to drop out of the discussion now.
Thanks, Heschi. Very much appreciate the clarifications.
@bcmills I think this points towards there being some merit in the short-circuit described earlier?
https://gist.github.com/myitcv/b7939fcc511a007af00297ceb39847b3
I triggered this about 7 seconds after the last line of output shown (didn't want to wait too much longer in case I missed the opportunity). What's interesting is that in this time the precise path hasn't returned. What's even more interesting is that these calls shouldn't be necessary because I have this module in my module cache.
Hmm. There isn't much actually running in that goroutine dump — one in a
Nothing seems to have been blocked for a long time, so it isn't obvious to me whether we're bottlenecked on reading the module graph one time (or reading checksums one time), or reloading the graph (or reloading the sumdb tiles) repeatedly for some reason. A trace file from
@bcmills - an update from my side. Turns out some of the issues I have been seeing are related to local disk errors (thankfully not actual disk errors, just problems with an NVM disk and the Linux kernel I am using). So I'm going to attribute the unexplained long delays (15-20 secs) to those disk errors. This is reasonable and consistent with other behaviour I was able to reliably reproduce. That leaves two issues to my mind (hence I've retitled the issue):
What version of Go are you using (go version)?

Does this issue reproduce with the latest release?
Yes

What operating system and processor architecture are you using (go env)?
go env Output

What did you do?
The command stalled at the line
**********************
for a good 15-20 secs.

What did you expect to see?
The -x logs to indicate where the delay occurred; the 15-20 secs delay I experienced is not accounted for above.

What did you see instead?
As above.
Noting a subsequent call had output as follows:

Here the go get command ultimately took as long as it did (~2.5 secs) because of the long response time to:

Given go get already by this point got a successful response from:

could cmd/go not optimise to simply "drop" the other requests, given the most specific one returned successfully?

cc @bcmills for cmd/go and @heschi in case there are any proxy-related issues here