proposal: add package version support to Go toolchain #24301

Open
rsc opened this Issue Mar 7, 2018 · 111 comments

@rsc
Contributor

rsc commented Mar 7, 2018

proposal: add package version support to Go toolchain

It is long past time to add versions to the working vocabulary of both Go developers and our tools.
The linked proposal describes a way to do that. See especially the Rationale section for a discussion of alternatives.

This GitHub issue is for discussion about the substance of the proposal.

Other references:

@gopherbot gopherbot added this to the Proposal milestone Mar 7, 2018

@gopherbot gopherbot added the Proposal label Mar 7, 2018

@rsc
Contributor

rsc commented Mar 7, 2018

Frequently Asked Questions

This issue comment answers the most frequently asked questions, whether from the discussion below or from other discussions. Other questions from the discussion are in the next issue comment.

Why is the proposal not “use Dep”?

At the start of the journey that led to this proposal, almost two years ago, we all believed the answer would be to follow the package versioning approach exemplified by Ruby's Bundler and then Rust's Cargo: tagged semantic versions, a hand-edited dependency constraint file known as a manifest, a separate machine-generated transitive dependency description known as a lock file, a version solver to compute a lock file satisfying the manifest, and repositories as the unit of versioning. Dep follows this rough plan almost exactly and was originally intended to serve as the model for go command integration. However, the more I understood the details of the Bundler/Cargo/Dep approach and what they would mean for Go, especially built into the go command, and the more I discussed those details with others on the Go team, a few of the details seemed less and less a good fit for Go. The proposal adjusts those details in the hope of shipping a system that is easier for developers to understand and to use. See the proposal's rationale section for more about the specific details we wanted to change, and also the blog post announcing the proposal.

Why must major version numbers appear in import paths?

To follow the import compatibility rule, which dramatically simplifies the rest of the system. See also the blog post announcing the proposal, which talks more about the motivation and justification for the import compatibility rule.

Why are major versions v0, v1 omitted from import paths?

v1 is omitted from import paths for two reasons. First, many developers will create packages that never make a breaking change once they reach v1, which is something we've encouraged from the start. We don't believe all those developers should be forced to have an explicit v1 when they may have no intention of ever releasing v2. The v1 becomes just noise. If those developers do eventually create a v2, the extra precision kicks in then, to distinguish from the default, v1. There are good arguments about visible stability for putting the v1 everywhere, and if we were designing a system from scratch, maybe that would make it a close call. But the weight of existing code tips the balance strongly in favor of omitting v1.

v0 is omitted from import paths because - according to semver - there are no compatibility guarantees at all for those versions. Requiring an explicit v0 element would do little to ensure compatibility; you'd have to say v0.1.2 to be completely precise, updating all import paths on every update of the library. That seems like overkill. Instead we hope that developers will simply look at the list of modules they depend on and be appropriately wary of any v0.x.y versions they find.

This has the effect of not distinguishing v0 from v1 in import paths, but usually v0 is a sequence of breaking changes leading to v1, so it makes sense to treat v1 as the final step in that breaking sequence, not something that needs distinguishing from v0. As @Merovius put it (#24301 (comment)):

By using v0.x, you are accepting that v0.(x+1) might force you to fix your code. Why is it a problem if v0.(x+1) is called v1.0 instead?

Finally, omitting the major versions v0 and v1 is mandatory - not optional - so that there is a single canonical import path for each package.
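For concreteness, here is a sketch of what semantic import versioning looks like in source code. The module paths are invented for illustration; the point is only that v0/v1 use the bare path while v2 and later carry the major version in the path, so two major versions can coexist in one build.

```go
package main

import (
	// Hypothetical module paths, shown only to illustrate the rule.
	yaml "example.com/yaml"      // major version v0 or v1: no version suffix
	yamlv2 "example.com/yaml/v2" // major version v2 and later: /v2 suffix required
)

func main() {
	// Both major versions have distinct import paths, so they can be
	// linked into the same program without conflict.
	_ = yaml.Marshal
	_ = yamlv2.Marshal
}
```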

Why must I create a new branch for v2 instead of continuing to work on master?

You don't have to create a new branch. The vgo modules post unfortunately gives that impression in its discussion of the "major branch" repository layout. But vgo doesn't care about branches. It only looks up tags and resolves which specific commits they point at. If you develop v1 on master, you decide you are completely done with v1, and you want to start making v2 commits on master, that's fine: start tagging master with v2.x.y tags. But note that some of your users will keep using v1, and you may occasionally want to issue a minor v1 bug fix. You might at least want to fork a new v1 branch for that work at the point where you start using master for v2.
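A minimal sketch of that workflow with git (version numbers invented): keep developing v2 on master and tag releases there; keep a v1 branch only if you still need to ship occasional v1 fixes.

```sh
# v2 development continues on master; tags, not branches, define releases.
git tag v2.0.0
git push origin master v2.0.0

# Optional: branch from the last v1 release for future v1 bug fixes.
git checkout -b v1 v1.5.0
# ...fix, commit...
git tag v1.5.1
git push origin v1 v1.5.1
```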

Won't minimal version selection keep developers from getting important updates?

This is a common fear, but I really think if anything the opposite will happen. Quoting the "Upgrade Speed" section of https://research.swtch.com/vgo-mvs:

Given that minimal version selection takes the minimum allowed version of each dependency, it's easy to think that this would lead to use of very old copies of packages, which in turn might lead to unnecessary bugs or security problems. In practice, however, I think the opposite will happen, because the minimum allowed version is the maximum of all the constraints, so the one lever of control made available to all modules in a build is the ability to force the use of a newer version of a dependency than would otherwise be used. I expect that users of minimal version selection will end up with programs that are almost as up-to-date as their friends using more aggressive systems like Cargo.

For example, suppose you are writing a program that depends on a handful of other modules, all of which depend on some very common module, like gopkg.in/yaml.v2. Your program's build will use the newest YAML version among the ones requested by your module and that handful of dependencies. Even just one conscientious dependency can force your build to update many other dependencies. This is the opposite of the Kubernetes Go client problem I mentioned earlier.

If anything, minimal version selection would instead suffer the opposite problem, that this “max of the minimums” answer serves as a ratchet that forces dependencies forward too quickly. But I think in practice dependencies will move forward at just the right speed, which ends up being just the right amount slower than Cargo and friends.

By "right amount slower" I was referring to the key property that upgrades happen only when you ask for them, not when you haven't. That means that code only changes (in potentially unexpected and breaking ways) when you are expecting that to happen and ready to test it, debug it, and so on.

See also the response #24301 (comment) by @Merovius.
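As a concrete illustration of that "max of the minimums" behavior, consider a hypothetical program whose go.mod (syntax roughly as in the vgo prototype; it may differ in the final tooling) and whose dependencies all declare minimum versions of gopkg.in/yaml.v2:

```
module "example.com/app"

require "example.com/a" v1.0.0      // a's go.mod requires gopkg.in/yaml.v2 v2.2.1
require "example.com/b" v1.0.0      // b's go.mod requires gopkg.in/yaml.v2 v2.0.0
require "gopkg.in/yaml.v2" v2.1.0   // our own declared minimum

// Minimal version selection builds with gopkg.in/yaml.v2 v2.2.1: the
// maximum of the declared minimums (v2.1.0, v2.2.1, v2.0.0) and nothing
// newer, even if later yaml.v2 releases exist.
```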

If $GOPATH is deprecated, where does downloaded code live?

Code you check out and work on and modify can be stored anywhere in your file system, just like with essentially every other developer tool.

Vgo does need some space to hold downloaded source code and install binaries, and for that it does still use $GOPATH, which as of Go 1.9 defaults to $HOME/go. So developers will never need to set $GOPATH unless they want these files to be in a different directory. To change just the binary install location, they can set $GOBIN (as always).
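A short sketch of those defaults (nothing new here, just the locations described above):

```sh
# With no environment variables set, downloaded source and binaries live under $HOME/go.
go env GOPATH               # prints the effective GOPATH, e.g. /home/you/go
ls "$(go env GOPATH)/bin"   # installed commands end up here

# To change only where binaries are installed:
export GOBIN=$HOME/bin
```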

Why are you introducing the // import comment?

We're not. That was a pre-existing convention. The point of that example in the tour was to show how go.mod can deduce the right module paths from import comments, if they exist. Once all projects use go.mod files, import comments will be completely redundant and probably deprecated.
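For reference, the pre-existing convention looks like this (import path invented); when a repository has no go.mod yet, vgo can read the comment to deduce the module path:

```go
// Hypothetical example of an import comment.
package yaml // import "example.com/yaml"
```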


@rsc
Contributor

rsc commented Mar 7, 2018

Discussion Summary (last updated 2018-03-29)

This issue comment holds a summary of the discussion below.

How can we handle migration?

[#24301 (comment) by @ChrisHines.]

Response #24301 (comment) by @rsc. The original proposal assumes the migration is handled by authors moving to subdirectories when compatibility is important to them, but of course that motivation is wrong. Compatibility is most important to users, who have little influence on authors moving. And it doesn't help older versions. The linked comment proposes a minimal change to old "go build" to be able to consume and build module-aware code.

How can we deal with singleton registrations?

[#24301 (comment) by @jimmyfrasche.]

Response #24301 (comment) by @rsc. Singleton registration collisions (such as http.Handle of the same path) between completely different modules are unaffected by the proposal. For collisions between different major versions of a single module, authors can write the different major versions to coordinate with each other, usually by making v1 call into v2, and then use a requirement cycle to make sure v2 is not used with older copies of v1 that don't know about the coordination.
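A rough sketch of that coordination pattern, with invented package names: the v1 package forwards registrations to v2 so that only one underlying registry exists, and v2's go.mod then requires a v1 new enough to do this forwarding (the requirement cycle mentioned above).

```go
// Hypothetical v1 package example.com/registry, forwarding to v2.
package registry

import v2 "example.com/registry/v2"

// Handler is shared with v2 via a type alias, so values registered
// through v1 are usable by v2 consumers and vice versa.
type Handler = v2.Handler

// Register delegates to v2, so both major versions see one registry.
func Register(name string, h Handler) {
	v2.Register(name, h)
}
```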

How should we install a versioned command?

[#24301 (comment) by @leonklingele.]

Response #24301 (comment) by @rsc. In short, use go get. We still use $GOPATH/bin for the install location. Remember that $GOPATH now defaults to $HOME/go, so commands will end up in $HOME/go/bin, and $GOBIN can override that.

Why are v0, v1 omitted in the import paths? Why must the others appear? Why must v0, v1 never appear?

[#24301 (comment) by @justinian.]
[#24301 (comment) by @jayschwa.]
[#24301 (comment) by @mrkanister.]
[#24301 (comment) by @mrkanister.]
[#24301 (comment) by @kaikuehne.]
[#24301 (comment) by @kaikuehne.]
[#24301 (comment) by @Merovius.]
[#24301 (comment) by @kaikuehne.]

Added to FAQ above.

Why are zip files mentioned in the proposal?

[#24301 (comment) by @nightlyone.]

The ecosystem will benefit from defining a concrete interchange format. That will enable proxies and other tooling. At the same time, we're abandoning direct use of version control (see the rationale at the top of this post). Both of these motivate describing the specific format. Most developers will not need to think about zip files at all, and no developers will need to look inside them unless they're building something like godoc.org.

See also #24057 about zip vs tar.

Doesn't putting major versions in import paths violate DRY?

[#24301 (comment) by @jayschwa.]

No, because an import's semantics should be understandable without reference to the go.mod file. The go.mod file is only specifying finer detail. See the second half of the semantic import versions section of the proposal, starting at the block quote.

Also, if you DRY too much you end up with fragile systems. Redundancy can be a good thing. So "violat[ing] DRY" - that is to say, a limited amount of repeating yourself - is not always bad. For example, we put the package clause in every .go file in the directory, not just one. That caught honest mistakes early on and later turned into an easy way to distinguish external test packages (package x vs package x_test). There's a balance to be struck.
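For example, the redundancy mentioned here is the familiar one sketched below: every file in a directory repeats the package clause, and the _test suffix marks an external test package (two separate files shown in one block).

```go
// x.go
package x

// x_test.go (same directory) - an external test package,
// limited to x's exported API.
package x_test
```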

Which timezone is used for the timestamp in pseudo-versions?

[#24301 (comment) by @tpng.]

UTC. Note also that you never have to type a pseudo-version yourself. You can type a git commit hash (or hash prefix) and vgo will compute and substitute the appropriate pseudo-version.
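For example, after you type a commit hash, the go.mod requirement ends up recording a pseudo-version like the following (module path and hash invented; the timestamp is the commit time in UTC):

```
require "example.com/lib" v0.0.0-20180321153012-abcdef123456
```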

Will vgo address non-Go dependencies, like C or protocol buffers? Generated code?

[#24301 (comment) by @AlexRouSg.]
[#24301 (comment) by @stevvooe.]
[#24301 (comment) by @nim-nim.]

Non-Go development continues to be a non-goal of the go command, so there won't be support for managing C libraries and such, nor will there be explicit support for protocol buffers.

That said, we certainly do understand that using protocol buffers with Go is too difficult, and we'd like to see that addressed separately.

As for generated code more generally, a real cross-language build system is the answer, specifically because we don't want every user to need to have the right generators installed. Better for the author to run the generators and check in the result.

Won't minimal version selection keep developers from getting important updates?

[#24301 (comment) by @TocarIP.]
[#24301 (comment) by @nim-nim.]
[#24301 (comment) by @Merovius.]

Added to FAQ.

Can I use master to develop v1 and then reuse it to develop v2?

[#24301 (comment) by @mrkanister.]
[#24301 (comment) by @aarondl.]

Yes. Added to FAQ.

What is the timeline for this?

[#24301 (comment) by @flibustenet.]

Response in #24301 (comment) by @rsc. In short, the goal is to land a "technology preview" in Go 1.11; work may continue a few weeks into the freeze but not further. Probably don't send PRs adding go.mod to every library you can find until the proposal is marked accepted and the development copy of cmd/go has been updated.

How can I make a backwards-incompatible security change?

[#24301 (comment) by @buro9.]

Response in #24301 (comment) by @rsc. In short, the Go 1 compatibility guidelines do allow breaking changes for security reasons to avoid bumping the major version, but it's always best to do so in a way that keeps existing code working as much as possible. For example, don't remove a function. Instead, make the function panic or log.Fatal only if called improperly.
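A hedged sketch of that advice with an invented API: keep the function so existing callers still compile, but make the now-insecure usage fail loudly instead of silently doing the wrong thing.

```go
package securelib

import "log"

var hashName = "sha256"

// SetHash is kept for compatibility. Secure choices still work;
// insecure ones now fail loudly instead of being accepted.
func SetHash(name string) {
	switch name {
	case "sha256", "sha512":
		hashName = name
	case "md5", "sha1":
		log.Fatalf("securelib: hash %q is no longer supported for security reasons", name)
	default:
		panic("securelib: unknown hash " + name)
	}
}
```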

If one repo holds different modules in subdirectories (say, v2, v3, v4), can vgo mix and match from different commits?

[#24301 (comment) by @jimmyfrasche.]
[#24301 (comment) by @AlexRouSg.]

Yes. It treats each version tag as corresponding only to one subtree of the overall repository, and it can use a different tag (and therefore different commit) for each decision.

What if projects misuse semver? Should we allow minor versions in import paths?

[#24301 (comment) by @pbx0.]
[#24301 (comment) by @powerman.]
[#24301 (comment) by @pbx0.]
[#24301 (comment) by @powerman.]

As @powerman notes, we definitely need to provide an API consistency checker so that projects at least can be told when they are about to release an obviously breaking change.

Can you determine if you have more than one package in a build?

[#24301 (comment) by @pbx0.]

The easiest thing to do would be to use goversion -m on the resulting binary. We should add a go command option that shows the same thing without building the binary.

Concerns about vgo reliance on proxy vs vendor, especially open source vs enterprise.

[#24301 (comment) by @joeshaw.]
[#24301 (comment) by @kardianos.]
[#24301 (comment) by @Merovius.]
[#24301 (comment) by @joeshaw.]
[#24301 (comment) by @jamiethermo.]
[#24301 (comment) by @Merovius.]

Response: [#24301 (comment) by @rsc.] Proxy and vendor will both be supported. Proxy is very important to enterprise, and vendor is very important to open source. We also want to build a reliable mirror network, but only once vgo becomes go.


@rsc
Contributor

rsc commented Mar 7, 2018

.


@rsc
Contributor

rsc commented Mar 7, 2018

.


@gopherbot

gopherbot commented Mar 20, 2018

Change https://golang.org/cl/101678 mentions this issue: design: add 24301-versioned-go


gopherbot pushed a commit to golang/proposal that referenced this issue Mar 20, 2018

design: add 24301-versioned-go
As a reminder, it's fine to make comments about grammar, wording,
and the like on Gerrit, but comments about the substance of the
proposal should be made on GitHub: golang.org/issue/24301.

For golang/go#24301.

Change-Id: I5dcf204da3ebe947eecad7d020002dd52f64faa2
Reviewed-on: https://go-review.googlesource.com/101678
Run-TryBot: Russ Cox <rsc@golang.org>
Reviewed-by: Russ Cox <rsc@golang.org>
@ChrisHines
Contributor

ChrisHines commented Mar 20, 2018

This proposal is impressive and I like most everything about it. However, I posted the following concern on the mailing list, but never received any replies. In the meantime I've seen this issue raised by others in the Gophers slack channel for vgo and haven't seen a satisfactory answer there either.

From: https://groups.google.com/d/msg/golang-dev/Plc42fslQEk/rlfeNlazAgAJ

I am most worried about the migration path between a pre-vgo world and a vgo world going badly. I think we risk inflicting major pain on the Go community if there isn't a smooth migration path. Clearly the migration cannot be atomic across the whole community, but if I've understood all that you've written about vgo so far, there may be some situations where existing widely used packages will not be usable by both pre-vgo tools and post-vgo tools.

Specifically, I believe that existing packages that already have tagged releases with major versions >= 2 will not work with vgo until they have a go.mod file and also are imported with a /vN augmented import path. However, once those changes are made to the repository it will break pre-vgo uses of the package.

This seems to create a different kind of diamond import problem in which the two sibling packages in the middle of the diamond import a common v2+ package. I'm concerned that the sibling packages must adopt vgo import paths atomically to prevent the package at the top of the diamond from being in an unbuildable state whether it's using vgo or pre-vgo tools.

I haven't seen anything yet that explains the migration path in this scenario.

The proposal states:

Module-aware builds can import non-module-aware packages (those outside a tree with a go.mod file) provided they are tagged with a v0 or v1 semantic version. They can also refer to any specific commit using a “pseudo-version” of the form v0.0.0-yyyymmddhhmmss-commit. The pseudo-version form allows referring to untagged commits as well as commits that are tagged with semantic versions at v2 or above but that do not follow the semantic import versioning convention.

But I don't see a way for non-module-aware packages to import module-aware packages with transitive dependencies >= v2. That seems to cause ecosystem fragmentation in a way not yet addressed. Once you have a module-aware dependency that has a package >= v2 somewhere in its transitive dependencies that seems to force all its dependents to also adopt vgo to keep the build working.

Update: see also #24454


@Merovius

Merovius commented Mar 20, 2018

The Go project has encouraged this convention from the start of the project, but this proposal gives it more teeth: upgrades by package users will succeed or fail only to the extent that package authors follow the import compatibility rule.

It is unclear to me what this means and how it changes from the current situation. It would seem to me, that this describes the current situation as well: If I break this rule, upgrades and go-get will fail. AIUI nothing really changes and I'd suggest removing at least the mention of "more teeth". Unless, of course, this paragraph is meant to imply that there are additional mechanisms in place to penalize/prevent breakages?


@jimmyfrasche
Contributor

jimmyfrasche commented Mar 20, 2018

This would also affect things like database drivers and image formats that register themselves with another package during init, since multiple major versions of the same package can end up doing this. It's unclear to me what all the repercussions of that would be.
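For example (driver module paths invented; the panic is real database/sql behavior): if v1 and v2 of a driver are both linked into one binary and both register the same name in init, the program dies before main even runs.

```go
package main

import (
	"database/sql"

	_ "example.com/pgdriver"    // hypothetical v1: init() calls sql.Register("postgres", ...)
	_ "example.com/pgdriver/v2" // hypothetical v2: also registers "postgres" in init()
)

func main() {
	// Never reached: database/sql panics during package init with
	// "sql: Register called twice for driver postgres".
	db, _ := sql.Open("postgres", "connection-string")
	_ = db
}
```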


@justinian

justinian commented Mar 21, 2018

If the major version is v0 or v1, then the version number element must be omitted; otherwise it must be included.

Why is this? In the linked post, I only see the rationale that this is what developers currently do to create alternate paths when they make breaking changes - but that is a workaround for tooling that doesn't handle versions for them. If we're switching to a new practice, why not allow and encourage (or even mandate) that new vgo-enabled packages include v0 or v1? It seems like paths lacking versions are just opportunities for confusion. (Is this a vgo-style package? Where is the module boundary? etc.)


@jayschwa
Contributor

jayschwa commented Mar 21, 2018

I generally like the proposal, but am hung up on requiring major versions in import paths:

  1. It violates the DRY principle when the major version can already be known from the go.mod. What will happen if there's a mismatch between the two is also hard to intuit.
  2. The irregularity of allowing v0 and v1 to be absent is also unintuitive.
  3. Changing all the import paths when upgrading a dependency seems potentially tedious.

I understand that scenarios like the moauth example need to be workable, but hopefully not at the expense of keeping things simple for more common scenarios.


@nightlyone
Contributor

nightlyone commented Mar 21, 2018

First of all: Impressive work!

One thing that is totally unclear to me and seems a bit underspecified:

Why are there zip files in this proposal?

The layout, the constraints, and use cases such as when an archive is created, how its life cycle is managed, which tools need to support it, and how tools like linters should interact with it are all unclear, because they are not covered in the proposal.

So I would suggest either referring to a later, still unwritten, proposal here and removing the word zip, or removing that whole part from the proposal text if you do not plan to discuss it within the scope of this proposal.

Discussing this later would also enable different audiences to contribute better here.


@tpng

tpng commented Mar 21, 2018

Which timezone is used for the timestamp in the pseudo-version (v0.0.0-yyyymmddhhmmss-commit)?

Edit:
It is in UTC as stated in https://research.swtch.com/vgo-module.


@AlexRouSg

AlexRouSg commented Mar 21, 2018

@rsc Will you be addressing C dependencies?


@TocarIP
Contributor

TocarIP commented Mar 21, 2018

Looks like minimal version selection makes propagation of non-breaking changes very slow. Suppose we have a popular library Foo, which is used by projects A, B, and C. Someone improves Foo's performance without changing its API. Currently, receiving updates is an opt-out process. If project A vendored Foo but B and C didn't, the author only needs to send a PR updating the vendored dependency to A. So under this proposal, non-API-breaking contributions won't have as much effect on the community and are somewhat discouraged compared to the current situation. This is even more problematic for security updates. If some abandoned/small/not very active project (not a library) declares a direct dependency on an old version of, e.g., x/crypto, all users of that project will be vulnerable to the flaw in x/crypto until the project is updated, potentially forever. Currently, users of such projects receive the latest fixed version, so this makes the security situation worse. IIRC there were some suggestions on how to fix this in the mailing list discussion, but as far as I can tell this proposal doesn't mention them.


@jba

jba commented Mar 21, 2018

IIRC there were some suggestions how to fix [getting security patches] in maillist discussion, but, as far as I can tell this proposal doesn't mention it.

See the mention of go get -p.


@TocarIP
Contributor

TocarIP commented Mar 21, 2018

See the mention of go get -p.

I've seen it, but this is still an opt-in mechanism.
I was thinking of a way for a library to mark all previous releases as unsafe, to force users to run go get -p or explicitly opt in to the insecure library.


@leonklingele
Contributor

leonklingele commented Mar 21, 2018

If support for go get as we know it today will be deprecated and eventually removed, what's the recommended way to fetch & install (untagged) Go binaries then? Does it require git clone'ing the project first, followed by a manual go install to install the binary?
If $GOPATH is deprecated, where will these binaries be installed to?


@dolanor

dolanor commented Mar 21, 2018

@leonklingele: from my understanding, go get will not be deprecated, on the contrary.
It will be enhanced with automatic and transparent versioning capabilities. If a project depends on an untagged project, it will just take master and "vendor" it at that exact version.
Again, this is my own understanding from reading just a little bit about vgo. I'm still in the process of understanding it completely.


@mrkanister

mrkanister commented Mar 22, 2018

I wonder how this will affect the flow of working with a Git repository in general, also building on this sentence from the proposal:

If the major version is v0 or v1, then the version number element must be omitted; otherwise it must be included.

At the moment, it seems common to work on master (for me this includes short-lived feature branches) and to tag a commit with a new version every now and then. I feel this workflow is made more confusing with Go modules as soon as I release v2 of my library, because now I have a master and a v2 branch. I would expect master to be the current branch and v2 to be a maintenance branch, but it is exactly the other way around.

I know that the default branch can be changed from master to v2, but this still leaves me with the task to update that every time I release a new major version. Personally, I would rather have a master and a v1 branch, but I am not sure how exactly this would fit the proposal.


@stapelberg
Contributor

stapelberg commented Mar 22, 2018

New major releases cause churn. If you have to change one setting in your Git repository (the default branch) whenever you make a new release, that’s a very minor cost compared to your library’s users switching to the new version.

I think this aspect of the proposal sets the right incentive: it encourages upstream authors to think about how they can do changes in a backwards-compatible way, reducing overall ecosystem churn.


@jba

jba commented Mar 22, 2018

now I have a master and a v2 branch

You can instead create a v2/ subdirectory in master.
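That is the "major subdirectory" layout described in the vgo-module post; a sketch with an invented module path:

```
example.com/m          (repository root, master branch)
├── go.mod             module "example.com/m"
├── m.go               package m, v1 API
└── v2/
    ├── go.mod         module "example.com/m/v2"
    └── m.go           package m, v2 API
```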


@AlexRouSg

AlexRouSg commented Mar 22, 2018

@mrkanister

I would rather have a master and a v1 branch, but I am not sure how exactly this would fit the proposal.

According to my understanding of https://research.swtch.com/vgo-module vgo uses tags not branches to identify the versions. So you can keep development on master and branch off v1 as long as the tags point to the correct branch and commit.


@justinian

justinian commented Mar 22, 2018

New major releases cause churn. If you have to change one setting in your Git repository (the default branch) whenever you make a new release, that’s a very minor cost compared to your library’s users switching to the new version.

This is a problematic style of thinking that I think has bitten Go hard in the past. For one person on one project, switching what branch is default is simple in the moment, yes. But going against workflow conventions will mean people forget, especially when they work in several languages. And it will be one more quirky example of how Go does things totally differently that newcomers have to learn. Going against common programmer workflow conventions is not at all a minor cost.


@cznic
Contributor

cznic commented Mar 22, 2018

Going against common programmer workflow conventions is not at all a minor cost.

Not following the conventional path is sometimes the necessary condition for innovation.


@the42 referenced this issue in dominikh/go-mode.el Mar 22, 2018:
Prepare for Modules (vgo support) #237 (Open)

@marwan-at-work

marwan-at-work commented Mar 22, 2018

If I understood parts of the proposal correctly, you never have to create a subdirectory or a new branch. You can potentially have only a master branch and git tag your repo from v0.0, to v1.0, to v2.0 and so on, as long as you make sure to update your go.mod to the correct import path for your library.
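Concretely, with an invented module path (go.mod syntax as in the vgo prototype, which may still change): the module line is what carries the major version once you start tagging v2.

```
// go.mod while tagging v0.x.y and v1.x.y releases:
module "github.com/you/yourlib"

// go.mod once you start tagging v2.x.y releases on the same branch:
module "github.com/you/yourlib/v2"
```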


@flibustenet

flibustenet commented Mar 22, 2018

@mrkanister I think, for development, you clone your master (or any dev branch) and use the "replace" directive (see vgo-tour) to point to it (if I understand what you mean correctly; not sure).
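A rough go.mod sketch of that workflow (paths invented; the exact replace syntax in the vgo prototype may differ from the final tooling):

```
module "example.com/app"

require "example.com/lib" v1.2.0

// While developing, point the build at a local clone instead of the tagged release.
replace "example.com/lib" v1.2.0 => "../lib"
```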


@flibustenet

flibustenet commented Mar 22, 2018

@rsc I'd like to ask you to be more precise about the roadmap and what we should do now.
Will it follow the Go release policy and freeze vgo features in 3 months (2 now)?
Should we set out now, pilgrim's staff in hand, asking every library maintainer to add a go.mod file, or should we wait for the proposal to be officially accepted (to be sure that the name and syntax will not change)?


@AlexRouSg

AlexRouSg commented Mar 22, 2018

@flibustenet Tools are not covered by the 1.0 policy so anything can change.

https://golang.org/doc/go1compat

Finally, the Go toolchain (compilers, linkers, build tools, and so on) is under active development and may change behavior. This means, for instance, that scripts that depend on the location and properties of the tools may be broken by a point release.

Also from the proposal

The plan, subject to proposal approval, is to release module support in Go 1.11 as an optional feature that may still change. The Go 1.11 release will give users a chance to use modules “for real” and provide critical feedback. Even though the details may change, future releases will be able to consume Go 1.11-compatible source trees. For example, Go 1.12 will understand how to consume the Go 1.11 go.mod file syntax, even if by then the file syntax or even the file name has changed. In a later release (say, Go 1.12), we will declare the module support completed. In a later release (say, Go 1.13), we will end support for go get of non-modules. Support for working in GOPATH will continue indefinitely.

AlexRouSg commented Mar 22, 2018

@flibustenet Tools are not covered by the 1.0 policy so anything can change.

https://golang.org/doc/go1compat

Finally, the Go toolchain (compilers, linkers, build tools, and so on) is under active development and may change behavior. This means, for instance, that scripts that depend on the location and properties of the tools may be broken by a point release.

Also from the proposal

The plan, subject to proposal approval, is to release module support in Go 1.11 as an optional feature that may still change. The Go 1.11 release will give users a chance to use modules “for real” and provide critical feedback. Even though the details may change, future releases will be able to consume Go 1.11-compatible source trees. For example, Go 1.12 will understand how to consume the Go 1.11 go.mod file syntax, even if by then the file syntax or even the file name has changed. In a later release (say, Go 1.12), we will declare the module support completed. In a later release (say, Go 1.13), we will end support for go get of non-modules. Support for working in GOPATH will continue indefinitely.

@mrkanister

mrkanister commented Mar 22, 2018

Thanks for the feedback.

@AlexRouSg

According to my understanding of https://research.swtch.com/vgo-module vgo uses tags not branches to identify the versions. So you can keep development on master and branch off v1 as long as the tags point to the correct branch and commit.

You are correct, this will continue to work as before (just double checked to be sure), good catch!

With that out of the way, the thing that I (and apparently others) don't understand is the reasoning behind disallowing a v1 package to exist. I tried to import one using /v1 at the end of the import and also adding that to the go.mod of the package being imported, but vgo will look for a folder named v1 instead.

@tpng

tpng commented Mar 23, 2018

@mrkanister
I think the main reason for not allowing v1 or v0 in the import path is to ensure that there is only one import path for each compatible version of a package.
Using the plain import path instead of /v1 is to ease the transition, so you don't have to update all your import paths to add /v1 at the end.

@nim-nim

nim-nim commented Mar 25, 2018

Hi,

While a lot of the points in the proposal are more than welcome and will help tame the large Go codebases that have emerged over time, the "use minimal version" rule is quite harmful:

  • you want your code ecosystem to progress. That means you want people testing and using new versions and detecting problems early, before they accumulate.
  • you want new module releases that fix security problems to be applied as soon as possible
  • you want to be able to apply new module releases that fix security problems as soon as possible. They are not always tagged as security fixes. If you avoid new releases you also avoid those fixes
  • even when a new release does not contain security fixes, applying its changes early means there will be fewer changes to vet when the next release that does contain security fixes is published (and the last thing you want, when such a release is published and you need to be quick, is to be bogged down in intermediary changes you didn't look at before).
  • applying intermediary releases is only harmful if they break compat, and they shouldn't break compat, and if they do break compat it is better to detect it and tell the module authors before they make it a habit for the next releases you'll eventually absolutely need.
  • you do not want old bits of code to drag you down because they still specify an ancient dependency version and no one finds the time to update their manifest. Using the latest version of a major release serves this social need in other code ecosystems: force devs to test the latest version and not postpone till it's too late because “there are more important” (i.e. more fun) things to do.
  • while in theory you can ship a limitless number of module versions so every piece of code can use the one it wants, in practice as soon as you compose two modules that use the same dep you have to choose a version, so the more complex your software is, the less you'll tolerate multiple versions. So you soon hit the old problem of what to do with stragglers that slow down the whole convoy. I never met a human culture that managed this problem by telling stragglers "you're right, go as slow as you want, everyone will wait for you". It might be nice and altruistic but it's not productive.

Fighting human inertia is hard and painful, and we're fighting it because it is required to progress, not because it is pleasant. Making pleasant tools that avoid the problem and incite humans to procrastinate some more is not helpful at all; it will only accelerate project sedimentation and technical debt accumulation. There are already dozens of Go projects on GitHub with most of their readme devoted to the author begging users to upgrade because he made important fixes; defaulting to the oldest release will generalize the problem.

A good rule would be "use the latest release that matches the major release, not every intermediary commit". That would be a compromise between moving forward and stability. It puts the original project in command, which knows the codebase best and can decide sanely when to switch its users to a new code state.

@aarondl

aarondl commented Mar 25, 2018

My unanswered question copied from the mailing list:

We expect that most developers will prefer to follow the usual “major branch” convention, in which different major versions live in different branches. In this case, the root directory in a v2 branch would have a go.mod indicating v2, like this:

It seems like there are subdirectories and this major branch convention that are both supported by vgo. In my anecdotal experience no repositories follow this convention in Go or other languages (I can't actually think of a single one other than the ones forced to by gopkg.in, which seems relatively unused these days). The master branch is whatever latest is and has v2.3.4 tags in its history. Tags exist to separate everything (not just minor versions). If it's necessary to patch an old version, a branch is temporarily created off the last v1 tag, commits pushed, a new tag pushed, and the branch summarily deleted. There is no branch for versions, it's just current master/dev/feature branches + version tags. I know that "everything is a ref" in Git, but for other VCSes the distinction may not be as fuzzy.

Having said that, I've tested the above described workflow with vgo (just having tags that say v2.0.0, v2.0.1 and no branches) and it does seem to work. So my question is: although this works now, is it intended? It doesn't seem as thoroughly described as the other two workflows in the blog, and I want to ensure that working without a v2/v3... branch is not accidental functionality that will disappear, since as I explained above I've never seen either of the workflows described in the post massively adopted by anyone (especially outside the Go community).

Of course my argument comes down to preference and anecdotes, so I'd be willing to do some repo-scraping to prove this across all languages if needed. So far I've really liked the proposal posts and am generally on board with the changes; I will continue to follow along and play with vgo.

Thanks for all your efforts.
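
A sketch of the tags-only workflow described above, with placeholder versions and branch names, in case it helps make the question concrete:

# normal development: master plus version tags, no long-lived version branches
git tag v2.3.4

# patching an old major version
git checkout -b v1-fix v1.5.2   # temporary branch off the last v1 tag
# ...commit the fix...
git tag v1.5.3
git push origin v1.5.3
git branch -D v1-fix            # the branch goes away; only the tag remains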

@Merovius

Merovius commented Mar 25, 2018

Can someone maybe clarify how the proposed alternative model to MVS would work to improve upgrade cadence? Because it isn't clear to me. My understanding of the alternative (widely used) model is

  • Developer creates a handcrafted manifest, listing version constraints for all used dependencies
  • Developer runs $solver, which creates a lockfile, listing some chosen subset of transitive dependency versions that satisfy the specified constraints
  • This lockfile gets committed and is used at build and install time to guarantee reproducible builds
  • When a new version of a dependency is released and to be used, the developer potentially updates the manifest, reruns the solver and recommits the new lockfile

The proposed MVS model as I understand it is

  • Developer autogenerates go.mod, based on the set of import paths in the module, selecting the currently newest version of any transitive dependency
  • go.mod gets committed and is used to get lower bounds on versions at build and install time. MVS guarantees reproducible builds
  • When a new version of a dependency is released and to be used, the developer runs vgo get -u, which fetches the newest versions of transitive dependencies and overwrites go.mod with the new lower bounds. That then gets submitted.

It seems I must be grossly overlooking something, and it would be helpful if someone could point out what. Because this understanding seems to imply that, due to lockfiles specifying exact versions and those being used in the actual build, it is MVS that is better at increasing upgrade cadence - as it doesn't allow holding back versions, in general.

Clearly I'm missing something (and will feel stupid in about 5m), what is that?
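
To make the MVS half of that concrete, a minimal sketch with placeholder module paths: under MVS the go.mod lists lower bounds only, and the build selects, for each module, the maximum of the minimum versions required anywhere in the build graph.

module "github.com/you/hello"

require (
    "github.com/some/dep" v1.2.0     // lower bound, not an exact pin
    "github.com/other/dep" v1.4.1
)

Running vgo get -u raises these lower bounds to the newest available versions and rewrites go.mod.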

@mrkanister

mrkanister commented Mar 26, 2018

@tpng

Using the plain import path instead of /v1 is to ease the transition, so you don't have to update all your import paths to add /v1 at the end.

This should actually not be necessary. Let me give an example:

A user is currently using e.g. v1.0.0 of a library, pinned by a dependency manager and the tag in the upstream repository. Now upstream decides to create a go.mod and also calls the module /v1. This should result in a new commit and a new tag (e.g. v1.0.1). Since vgo will never attempt to update dependencies on its own, this should not break anything for the user, but he/she can update consciously by also changing the import path (or vgo can do that for him/her).

I think the main reason for not allowing v1 or v0 in the import path is to ensure that there is only one import path for each compatible version of a package.

Yes, I guess I can indeed see that point, to not confuse new users of a library.

@ChrisHines

ChrisHines commented Mar 30, 2018

Contributor

@rsc I've been trying to figure out how we could make the transition to vgo work as well. I've come to the same conclusions that you laid out in your response, and your suggestion matches the best approach I've come up with on my own. I like your proposed change.

@powerman

powerman commented Mar 30, 2018

#24301 (comment), @rsc:

Then we'll create v1.6.0 that is implemented with this forwarding. v1.6.0 does not call http.Handle; it delegates that to v2.0.0. Now expvar v1.6.0 and expvar/v2 can co-exist, because we planned it that way.

This sounds easier than it is. In reality, in most cases, this means v1.6.0 has to be a complete rewrite of v1 in the form of a v2 wrapper (a forwarded call to http.Handle will result in registering another handler - the one from v2 - which in turn means all related code also has to come from v2 to correctly interact with the registered handler).

This very likely will change subtle details of v1 behaviour, especially over time, as v2 evolves. Even if we're able to compensate for these subtle detail changes and emulate v1 well enough in v1.6.x - still, it's a lot of extra work and very likely makes future support of the v1 branch (I mean successors of v1.5.0 here) meaningless.
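
For readers following along, a minimal sketch of the forwarding pattern under discussion, using a hypothetical package and v2 module path; the contentious point is that anything touching the shared registration has to be routed through v2:

// v1.6.0 of a hypothetical "metrics" package: the v1 API is kept,
// but the shared state (the single http.Handle registration) now
// lives only in v2, so v1 forwards everything to it.
package metrics

import metricsv2 "example.com/metrics/v2" // hypothetical v2 module path

// Publish delegates to v2 so that v1 and v2 callers share one
// registry and one registered HTTP handler.
func Publish(name, value string) {
	metricsv2.Publish(name, value)
}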

@rsc

rsc commented Mar 30, 2018

Contributor

@powerman, I'm absolutely not saying this is trivial. And you only need to coordinate to the extent that v1 and v2 fight over some shared resource like an http registration. But developers who participate in this packaging ecosystem absolutely need to understand that v1 and v2 of their packages will need to coexist in large programs. Many packages won't need any work - yaml and blackfriday, for example, are both on v2 that are completely different from v1 but there's no shared state to fight over, so there's no need for explicit coordination - but others will.

@AlexRouSg

AlexRouSg commented Mar 30, 2018

@powerman @rsc
I'm developing a GUI package, which means I cannot even have 2+ instances due to the use of the "main" thread. So coming from the worst case singleton scenario, this is what I've decided to do (see the sketch below):

  • Only have a v0/v1 release so it is impossible to import 2+ versions

  • Have public code in its own API version folder, e.g. v1/v2 assuming vgo allows that, or maybe api1/api2.

  • Those public API packages will then depend on an internal package, so instead of having to rewrite on a v2, it is a rolling rewrite as the package grows and is much easier to handle.
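
A possible layout for that approach, with placeholder names (whether vgo would accept a v1 directory as an API folder was still an open question, hence api1/api2 here):

github.com/you/gui/        module root, released as v0/v1 only
    api1/                  first public API surface
    api2/                  second public API surface, added later
    internal/gui/          shared implementation; only the api* packages import it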

@zeebo

zeebo commented Mar 30, 2018

Contributor

In #24301 (comment) the proposed change defines "new" code as code with a go.mod file in the same directory or a parent directory. Does this include "synthesized" go.mod files created from reading in dependencies from a Gopkg.toml for example?

@rsc

rsc commented Mar 30, 2018

Contributor

@zeebo, yes. If you have a go.mod file in your tree then the assumption is that your code actually builds with vgo. If not, then rm go.mod (or at least don't check it into your repo where others might find it).

@rsc

rsc commented Mar 30, 2018

Contributor

@AlexRouSg, your plan for your GUI package makes sense to me.

@zeebo

zeebo commented Mar 30, 2018

Contributor

@rsc hmm.. I'm not sure I understand and sorry if I was unclear. Does a package with only a Gopkg.toml in the file tree count as "new" for the definition?

@stevvooe

stevvooe commented Mar 30, 2018

@rsc

As for generated code more generally, a real cross-language build system is the answer, specifically because we don't want every user to need to have the right generators installed. Better for the author to run the generators and check in the result.

We managed to solve this by mapping protobuf into the GOPATH. Yes, we have it such that casual users don't need the tools to update, but for those modifying and regenerating protobufs, the solution in protobuild works extremely well.

The answer here is pretty disappointing. Finding a new build system that doesn't exist is just a non-answer. The reality here is that we won't rebuild these build systems and we'll continue using what works, avoiding adoption of the new vgo system.

Does vgo just declare bankruptcy for those that liked and adopted GOPATH and worked around its issues?

@rsc

rsc commented Mar 30, 2018

Contributor

@stevvooe:

We managed to solve this by mapping protobuf into the GOPATH. ...
Does vgo just declare bankruptcy for those that liked and adopted GOPATH and worked around its issues?

I haven't looked at your protobuild, but in general, yes, we are moving to a non-GOPATH model, and some of the tricks that GOPATH might have enabled will be left behind. For example GOPATH enabled the original godep to simulate vendoring without having vendoring support. That won't be possible anymore. On a quick glance, it looks like protobuild is based on the assumption that it can drop files (pb.go) into other packages that you don't own. That kind of global operation is not going to be supported anymore, no. I'm completely serious and sincere about wanting to make sure that protobufs are well supported, separate from vgo. @neild would probably be interested to hear suggestions but maybe not on this issue.

@myitcv

myitcv commented Mar 30, 2018

Member

@stevvooe given @rsc's comments in #24301 (comment) I've cross referenced golang/protobuf#526 in case that issue ends up covering the vgo angle. If things end up being dealt with elsewhere I'm sure @dsnet et al will signpost us.

@kybin

kybin commented Mar 31, 2018

Contributor

Note: I didn't read the previous comments closely; it seems the problem has been solved with a different approach. Below was my idea.

Just an idea.

How about making vgo get aware of a specific tag like vgo-v1-lock?
When a repository has that tag, vgo could ignore other version tags and pin to it.

So, when a repository has tagged v2.1.3 as its latest version,
but the owner also pushes a vgo-v1-lock tag to the same commit that is tagged v2.1.3,
one could write in go.mod

require (
    "github.com/owner/repo" vgo-v1-lock
)

It should not get updated even by vgo get -u, until the repository owner changes or removes the tag.
It could make it easier for big repositories to prepare their move.

When a library author is prepared, the author could announce to users
that they can manually update by putting "/v2" at the end of its import path.

@chirino

chirino commented Apr 3, 2018

How do we handle the case where we need to patch a deep dependency (for example to apply a CVE fix that the original author has not yet released in a tag)? It seems the vendor strategy could handle this, since you could apply a patch to the original author's release. I don't see how vgo can handle this.

@kardianos

kardianos commented Apr 3, 2018

Contributor

@chirino you can use the replace directive in the go.mod file to point to the patched package.
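
A rough sketch of what that could look like, with a hypothetical dependency and patched fork (the exact vgo-era replace syntax may differ slightly):

module "github.com/you/app"

require "github.com/some/dep" v1.4.0

// use the patched fork until upstream tags the CVE fix
replace "github.com/some/dep" v1.4.0 => "github.com/you/dep-patched" v1.4.1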

@stevvooe

stevvooe commented Apr 3, 2018

@rsc

On a quick glance, it looks like protobuild is based on the assumption that it can drop files (pb.go) into other packages that you don't own.

This is not, at all, what the project does. It builds up an import path from the GOPATH and the vendor dir. Any protobuf files in your project will then get generated with that import path. It also does things like map imports to specific Go packages.

The benefit of this is that it allows one to generate protobufs in a leaf project that are dependent on other protobufs defined in dependencies without regenerating everything. The GOPATH effectively becomes the import paths for the protobuf files.

The big problem with this proposal is that we completely lose the ability to resolve files in projects relative to Go packages on the filesystem. Most packaging systems have the ability to do this, albeit they make it hard. GOPATH is unique in that it is very easy to do this.

@rsc

rsc commented Apr 3, 2018

Contributor

@stevvooe I'm sorry but I guess I'm still confused about what protobuild does. Can you file a new issue "x/vgo: not compatible with protobuild" and give a simple worked example of a file tree that exists today, what protobuild adds to the tree, and why that doesn't work with vgo? Thanks.

@NDari NDari referenced this issue in gorgonia/gorgonia Apr 4, 2018

Open

Adopt "dep" as the official installation mechanism #116

3 of 4 tasks complete
@jimmyfrasche

jimmyfrasche commented Apr 5, 2018

Contributor

What if the module name has to change (lost domain, change of ownership, trademark dispute, etc.)?

@AlexRouSg

AlexRouSg commented Apr 5, 2018

@jimmyfrasche

As the user:
Then as a temp fix you can edit the go.mod file to replace the old module with a new one while keeping the same import paths. https://research.swtch.com/vgo-tour

But in the long term, you would want to change all the import paths and edit the go.mod file to use the new module. Basically the same thing you'd have to do with or without vgo.

As the package maintainer:
Just update the go.mod file to change the module import path and tell your users of the change.

@rsc

rsc commented Apr 6, 2018

Contributor

@jimmyfrasche,

What if the module name has to change (lost domain, change of ownership, trademark dispute, etc.)?

These are real, pre-existing problems that the vgo proposal does not attempt to address directly, but clearly we should address them eventually. The answer to code disappearing is to have caching proxies (mirrors) along with a reason to trust them; that's future work. The answer to code moving is to add an explicit concept of module or package redirects, much like type aliases are type redirects; that's also future work.

@mvrhov

mvrhov commented Apr 9, 2018

The answer to code disappearing is to have caching proxies (mirrors)

IMO that really is for the enterprises. Most small companies and others would be perfectly fine with vendoring and committing all the dependencies into the same repo.

@rsc

rsc commented Apr 18, 2018

Contributor

Filed #24916 for the compatibility I mentioned in the comment above.
Also filed #24915 proposing to go back to using git etc directly instead of insisting on HTTPS access. It seems clear that code hosting setups are not ready for API-only yet.

@sdwarwick

sdwarwick commented Apr 19, 2018

Minor proposal to create consistency between mod files and the planned vgo get command.

In the "vgo-tour" document, the vgo get command is shown as:

vgo get rsc.io/sampler@v1.3.1

How about mirroring this format in the mod file? For example:

module "github.com/you/hello"
require (
    "golang.org/x/text" v0.0.0-20180208041248-4e4a3210bb54
    "rsc.io/quote" v1.5.2
)

could be simply:

module "github.com/you/hello"
require (
    "golang.org/x/text@v0.0.0-20180208041248-4e4a3210bb54"
    "rsc.io/quote@v1.5.2"
)

  • improves consistency with the command line
  • a single identifier defines an item completely
  • better structure for supporting operations defined in the mod file that require multiple versioned package identifiers

@sdwarwick

sdwarwick commented Apr 19, 2018

Seeking more clarity on how this proposal deals with "binary only" package distribution.

Binary library versioning / distribution doesn't seem to show up in any of the description documents around vgo. Is there a need to look at this more carefully?

@korya

korya commented Apr 20, 2018

The way it works today, as long as I can use the plain git tool, go get will work just fine. It does not matter if it is a private GitHub repository or my own Git server. I really love it.

From what I understand, it is going to be impossible to keep working that way. Is that true? If yes, is it possible to keep the option of using a locally installed git binary in order to check out the code (even if it requires an explicit CLI flag)?

@kardianos

kardianos commented Apr 20, 2018

Contributor

@korya Please see the recently filed issue #24915
