
Publish a tagged v12 alpha version for client-go HEAD #76304

Closed
fejta opened this issue Apr 9, 2019 · 23 comments
Assignees
Labels
kind/feature Categorizes issue or PR as related to a new feature. lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. sig/api-machinery Categorizes an issue or PR as relevant to SIG API Machinery.

Comments

@fejta
Contributor

fejta commented Apr 9, 2019

Attempting a minor upgrade of test-infra's k8s imports results in go choosing

  • the latest tagged release for client-go (v11.0.0+incompatible)
  • the latest commit for everything else (k8s.io/api v0.0.0-20190408172450-b1350b9e3bc2, for example).

This causes problems because client-go v11 is only guaranteed to behave correctly with kubernetes 1.14. Indeed, apimachinery HEAD has backwards-incompatible changes to things such as the watch.NewStreamWatcher call signature:

```
execroot/io_k8s_test_infra/vendor/k8s.io/client-go/rest/request.go:598:31: not enough arguments in call to watch.NewStreamWatcher
	have (*versioned.Decoder)
	want (watch.Decoder, watch.Reporter)
```

This can be worked around with a replace k8s.io/client-go => k8s.io/client-go master stanza in go.mod.
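Sketched as a go.mod (a hypothetical illustration, not test-infra's actual file; the pseudo-version and +incompatible version are the ones quoted above), the workaround looks like this. Note that go rewrites the `master` branch reference into a pinned pseudo-version the next time a go command processes the file:

```
module k8s.io/test-infra

go 1.12

require (
	k8s.io/api v0.0.0-20190408172450-b1350b9e3bc2
	k8s.io/client-go v11.0.0+incompatible
)

// Work around the v11.0.0+incompatible tag by forcing client-go to track master.
replace k8s.io/client-go => k8s.io/client-go master
```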

However, we need to get the module system to behave correctly. Ideas include:

@kubernetes/sig-api-machinery-bugs @liggitt

ref kubernetes/test-infra#12107

@fejta fejta added the kind/bug Categorizes issue or PR as related to a bug. label Apr 9, 2019
@k8s-ci-robot k8s-ci-robot added the sig/api-machinery Categorizes an issue or PR as relevant to SIG API Machinery. label Apr 9, 2019
@liggitt
Member

liggitt commented Apr 9, 2019

We don't intend to switch to semantic versioning at this time, since that requires all consumers to change their go import paths and we haven't determined if we want to be that disruptive yet.

edit: hmm, just found a quote in a golang/go issue related to this: "First, it appears you think you can opt out of semantic import versioning. You cannot. If you're using modules, you must use semantic import versioning." This doesn't align with what I saw experimentally (go get github.com/liggitt/modulea@v2.0.0 worked, despite not using semantic import versioning), so I'll need to do more research on this.

Until 1.15 is released, you need to add require directives to indicate the coordinating releases of k8s.io/apimachinery and k8s.io/api.

If you want to auto-upgrade, you'll also need to pin the k8s.io deps using replace directives.
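Putting both pieces of advice together, a consumer's go.mod might look like this (a sketch; the module path example.com/consumer is hypothetical, and go resolves the kubernetes-1.14.1 tag queries to pinned pseudo-versions on the next go command):

```
module example.com/consumer

go 1.12

require (
	k8s.io/api kubernetes-1.14.1
	k8s.io/apimachinery kubernetes-1.14.1
	k8s.io/client-go kubernetes-1.14.1
)

// Pin the k8s.io deps so automated upgrades (go get -u) leave them alone.
replace (
	k8s.io/api => k8s.io/api kubernetes-1.14.1
	k8s.io/apimachinery => k8s.io/apimachinery kubernetes-1.14.1
	k8s.io/client-go => k8s.io/client-go kubernetes-1.14.1
)
```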

In the 1.15 timeframe, we are considering tagging published repos with major versions just like client-go

/remove-kind bug
/kind feature
/assign

@k8s-ci-robot k8s-ci-robot added kind/feature Categorizes issue or PR as related to a new feature. and removed kind/bug Categorizes issue or PR as related to a bug. labels Apr 9, 2019
@k8s-ci-robot
Contributor

@liggitt: Those labels are not set on the issue: kind/bug

In response to this:

We don't intend to switch to semantic versioning at this time, since that requires all consumers to change their go import paths and we haven't determined if we want to be that disruptive yet.

Until 1.15 is released, you need to add require directives to indicate the coordinating releases of k8s.io/apimachinery and k8s.io/api.

If you want to auto-upgrade, you'll also need to pin the k8s.io deps using replace directives.

In the 1.15 timeframe, we are considering tagging published repos with major versions just like client-go

/remove-kind bug
/kind feature
/assign

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@liggitt
Member

liggitt commented Apr 9, 2019

cc @sttts for major version tagging of all staging repos

@liggitt
Member

liggitt commented Apr 9, 2019

I don't think publishing a v12.0.0 client-go would help your scenario. Automated go patch upgrades would not upgrade client-go, but would still bump to HEAD of the other repos


@sttts
Contributor

sttts commented Apr 9, 2019

So what we suggest here is

  1. tag master as early as possible with an alpha.0 after a previous release has been forked
  2. tag all staging repos uniformly.

I have not heard any strong argument why we shouldn't do (2).

(1) means we encourage consumption of the master branch. Do we want that in general, or is it just something for now, where 1.15 is not released and master is the only way to enjoy go.mod?

@fejta
Contributor Author

fejta commented Apr 9, 2019

Do we want that in general or is it just something for now where 1.15 is not released and master is the only way to enjoy go.mods?

My goal for the moment is more to get to where I can write automation that will automatically perform minor upgrades to a repo's modules (starting with test-infra), conditionally on the upgrade still passing unit tests.

Tagging all repos with v15 or whatever would meet this need.

The problem with only tagging client-go is that this causes the module system to use HEAD for everything else but v12 for client-go, which won't build.

@nikhita
Member

nikhita commented Apr 9, 2019

xref #72638

@liggitt
Member

liggitt commented Apr 9, 2019

Tagging all repos with v15 or whatever would meet this need.

It actually wouldn't, if I understand correctly what I'm reading about go modules that do not use semantic import versioning...

Experimentally: take a module that has v2.0.0 and v3.0.0 tags (resolving to pseudo-versions v0.0.0-20190320161252-34f5313c42d3 and v0.0.0-20190320162433-6d6a5425e95c respectively) but does not use semantic import versioning. If you depend on the v2.0.0 SHA, go get -u all (and go get -u=patch all) will bump the module from that pseudo-version to v1.0.0

@ryandawsonuk

Would anyone be able to provide a go.mod snippet that should work right now?

@liggitt
Member

liggitt commented Apr 12, 2019

Would anyone be able to provide a go.mod snippet that should work right now?

if you want the dev stream of client-go:

```
require k8s.io/client-go master
```

if you want the latest 1.14.x release:

```
require k8s.io/client-go kubernetes-1.14.1
require k8s.io/api kubernetes-1.14.1
require k8s.io/apimachinery kubernetes-1.14.1
```

@ryandawsonuk

Thanks!

@liggitt
Member

liggitt commented Apr 15, 2019

@fejta @sttts once a go.mod file is present in a given tag, go expects (and requires, in some cases, for version auto-selection) the module name to reflect the major version. So for go version upgrades to work completely the way you would expect, a k8s.io/client-go v12.x.x would need to rename the module to k8s.io/client-go/v12, and all import paths would have to change
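Concretely, opting into semantic import versioning would mean a change along these lines (a sketch; client-go has not actually done this):

```
// go.mod in client-go itself would declare the major-version suffix:
module k8s.io/client-go/v12

// and every consumer import path changes to match, e.g.:
//   import "k8s.io/client-go/kubernetes"
// becomes
//   import "k8s.io/client-go/v12/kubernetes"
```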

@fejta
Contributor Author

fejta commented May 16, 2019

  • k8s.io/client-go has a valid v11 tag but has not opted into modules.
    • Go treats this as equivalent to a v1 API (v11.0.0+incompatible).
    • Go will not pick a commit after the latest tag.
  • k8s.io/apimachinery and k8s.io/api have no valid tags.
    • Go treats these as v0 pseudo-versions.
    • Go will pick the latest commit in these repos.
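The mismatch shows up directly in the resolved go.mod. Using the versions reported elsewhere in this thread (an illustration, not an exact file), a minor upgrade lands on something like:

```
require (
	k8s.io/api v0.0.0-20190408172450-b1350b9e3bc2           // HEAD pseudo-version
	k8s.io/apimachinery v0.0.0-20190404173353-6a84e37a896d  // HEAD pseudo-version
	k8s.io/client-go v11.0.0+incompatible                   // latest tag, targets kubernetes 1.14
)
```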

IMO even without solving the rest of the module issues, it would be useful to make all the repos around here behave in the same way.

In other words, if I upgrade all my k8s.io/* imports at the same time, I would expect to use the same kubernetes release for all of them. Instead, I often wind up with the latest production kubernetes release for k8s.io/client-go but the alpha release for everything else.

It would be nice to wind up at the same alpha release for everything.

@fejta
Contributor Author

fejta commented May 16, 2019

Specifically, the issue is that, as far as I can tell, these instructions:

If you want the dev stream of client-go:

require k8s.io/client-go master

do not result in the dev stream.

It gets converted into k8s.io/client-go v2.0.0-alpha.0.0.20190112054256-b831b8de7155+incompatible, which is v11.0.0 (HEAD is 7b18d6600f6b0022e31c46b46875beffd85cc71a at the time of writing)

@liggitt
Member

liggitt commented May 16, 2019

Specifically, the issue is that, as far as I can tell, these instructions:

If you want the dev stream of client-go:

require k8s.io/client-go master

do not result in the dev stream.

It gets converted into k8s.io/client-go v2.0.0-alpha.0.0.20190112054256-b831b8de7155+incompatible, which is v11.0.0 (HEAD is 7b18d6600f6b0022e31c46b46875beffd85cc71a at the time of writing)

That is not what I see.

require k8s.io/client-go master gets auto-converted to require k8s.io/client-go v0.0.0-20190515063710-7b18d6600f6b, which is master.

@liggitt
Member

liggitt commented May 16, 2019

```
$ go mod init example.com/foo
$ go get k8s.io/client-go@master
$ more go.mod
module example.com/foo

go 1.12

require k8s.io/client-go v0.0.0-20190515063710-7b18d6600f6b // indirect
```

@FedeBev

FedeBev commented Jul 19, 2019

if you want the latest 1.14.x release:

require k8s.io/client-go kubernetes-1.14.1
require k8s.io/api kubernetes-1.14.1
require k8s.io/apimachinery kubernetes-1.14.1

This is not working for me; it falls back to

```
k8s.io/api v0.0.0-20190409021203-6e4e0e4f393b
k8s.io/apimachinery v0.0.0-20190404173353-6a84e37a896d
k8s.io/client-go v11.0.1-0.20190409021438-1a26190bd76a+incompatible
k8s.io/utils v0.0.0-20190712204705-3dccf664f023 // indirect
```

I'm stuck on this problem, so I'm back to dep for now.

Does anyone have a solution? I think it's a shame that a project like this doesn't use well-supported versioning

@nikhita
Member

nikhita commented Jul 19, 2019

This is not working for me; it falls back to

@FedeBev could you elaborate on the issue/error that you are facing? Running go mod tidy on a go.mod file with directives like require k8s.io/api kubernetes-1.14.1 will fall back to using the corresponding pseudo-versions (this is expected).
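In other words, the pseudo-versions above are the expected outcome, not a failure. A sketch of the rewrite for one of the directives (the resolved line is the one quoted in the previous comment):

```
// Written by hand:
require k8s.io/client-go kubernetes-1.14.1

// After go mod tidy resolves the tag query:
require k8s.io/client-go v11.0.1-0.20190409021438-1a26190bd76a+incompatible
```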

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Oct 17, 2019
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Nov 16, 2019
@fejta-bot

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

@k8s-ci-robot
Contributor

@fejta-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
