It's partly a consequence of having to download the internet: once you hit a proxy with all of the requests for git objects, it chokes up. One option would be to add retries, as I suggested above; another could be a flag that lets the user set a pause interval between requests.
Does using a GOPROXY (e.g. GOPROXY=https://proxy.golang.org) improve things? It should be less demanding of an HTTP proxy than Git is. That still doesn't answer the question of whether the go command should retry, though.
That said, I'm not sure there is anything we should do on the go command side if the problem is that the git commands overwhelm the proxy. For HTTP requests initiated by the go command itself we rely on whatever connection pooling and limiting the net/http package provides by default, but I assume that individual git commands don't share proxy connections.
Ideally, your HTTP proxy should be robust enough to simply stop reading from the incoming socket when it doesn't have capacity to spare, rather than returning spurious 407s for some of the requests.
That said, I've been thinking about moving the parallelism limits from the various par.Work call sites down to the leaf packages where we actually need to limit things. If we do that, perhaps we could use a more restrictive semaphore for git invocations than the one we use for package fetches in general.