"Input contains a loop" on vendoring new dependencies that depend on staging projects #104827
Comments
@dcantah: This issue is currently awaiting triage. If a SIG or subproject determines this is a relevant issue, they will accept it by applying the `triage/accepted` label. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/area code-organization
Components that have a transitive dependency on anything in k8s.io/kubernetes, including the staging components, cannot be added as a dependency. The detection and prevention of this in the dependency management scripts is intentional. The reason is that it introduces a gridlock/chicken-and-egg scenario that couples the staging components to their dependencies in a way that makes it impossible to make updates until the dependencies update (which they can't do until the changes are made and released).
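The chicken-and-egg scenario described above can be sketched with two hypothetical go.mod fragments (the module path `example.com/somedep` and the version numbers are made up for illustration): any release of the third-party dependency must pin a *published* staging tag, but that tag effectively can't move forward until k8s.io/kubernetes has already vendored and released the change.

```
// go.mod of a hypothetical third-party dependency
module example.com/somedep

require k8s.io/apimachinery v0.22.0 // needs an already-published staging tag

// --- meanwhile, simplified go.mod of k8s.io/kubernetes ---
module k8s.io/kubernetes

require example.com/somedep v1.0.0 // wants to vendor that same dependency
```

Neither side can update first, which is exactly the coupling the dependency management scripts refuse to allow.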
That is indeed the solution
@liggitt Ok, I appreciate the prompt response on this! Looks like we'll need to start trimming at the hcsshim/containerd layers. |
Due to some unfortunate deps in the root go.mod in containerd plus some k8s staging directory behavior, containerd 1.5 errors out when it gets vendored into kubernetes/kubernetes. Because we depend on containerd 1.5, if hcsshim gets bumped it will ask that the containerd dep also be bumped to 1.5, and things will fail there as well. This reverts our containerd dep on the release/0.8 branch to 1.4 so we're able to circumvent this, at least temporarily. The issue tracking this problem, which contains k8s's proposed solution and statements on it, is kubernetes/kubernetes#104827: "Components that have a transitive dependency on anything in k8s.io/kubernetes, including the staging components, cannot be added as a dependency. The detection and prevention of this in the dependency management scripts is intentional." While this work is temporary, just to get k8s a new hcsshim release, longer term we will need to find a way to trim our containerd dep either here, or containerd will need to get rid of its k8s.io/* dependencies in its root go.mod to remove this circular import issue.

The tests run to make sure nothing broke with this:
- All of the tests that currently get run on a k8s pull request, which includes the AKS Engine e2e containerd test suite
- containerd integration tests
- Our cri-containerd tests with a shim built from this commit

Signed-off-by: Daniel Canter <dcanter@microsoft.com>
What happened:
If you try to vendor a dependency into kubernetes/kubernetes that depends on a select few of the projects that get published out of /staging (api or apimachinery, which also happen to depend on each other), the update-vendor.sh script whines that the input contains a loop:
The error is that a certain package doesn't exist in the tag I was using, but it does exist locally. This is on tag 0.20.6 for k8s.io/api. With tag 0.22.0 that error goes away, but the "input contains a loop" error remains.
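For intuition, "input contains a loop" is the kind of failure any topological ordering of a module graph produces when the graph is cyclic. A minimal sketch of cycle detection via depth-first search (this is an illustration, not the actual update-vendor.sh logic; the graph below just models the mutual api/apimachinery dependency described above):

```go
package main

import "fmt"

// detectCycle walks the dependency graph depth-first and returns a module
// that sits on a cycle, or "" if the graph is acyclic.
func detectCycle(graph map[string][]string) string {
	const (
		unvisited = iota
		onPath // currently on the DFS stack
		done
	)
	state := make(map[string]int)
	var visit func(string) string
	visit = func(n string) string {
		state[n] = onPath
		for _, dep := range graph[n] {
			if state[dep] == onPath {
				return dep // back edge: dep is already on the path, so there is a loop
			}
			if state[dep] == unvisited {
				if c := visit(dep); c != "" {
					return c
				}
			}
		}
		state[n] = done
		return ""
	}
	for n := range graph {
		if state[n] == unvisited {
			if c := visit(n); c != "" {
				return c
			}
		}
	}
	return ""
}

func main() {
	// Modeling the situation from the issue: the new dependency pulls in
	// k8s.io/api, and api/apimachinery end up depending on each other.
	graph := map[string][]string{
		"github.com/dcantah/deps2": {"k8s.io/api"},
		"k8s.io/api":               {"k8s.io/apimachinery"},
		"k8s.io/apimachinery":      {"k8s.io/api"},
	}
	if m := detectCycle(graph); m != "" {
		fmt.Println("input contains a loop involving", m)
	}
}
```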
As the staging projects depend on each other at version v0.0.0 and use replace directives to actually depend on the code locally, vendoring in a project that uses a real tag updates all of these pins, which is definitely not what is desired regardless.
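The v0.0.0-plus-replace arrangement looks roughly like this in a staging component's go.mod (the go directive and exact layout here are illustrative, not copied from the repo):

```
module k8s.io/api

go 1.16

require k8s.io/apimachinery v0.0.0

replace k8s.io/apimachinery => ../apimachinery
```

When a newly vendored module requires k8s.io/apimachinery at a real published tag, module resolution pulls those tags into the build list and disturbs these local pins.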
What you expected to happen:
Truthfully, I'm not sure. The Go modules behavior isn't a shocker here, but I would've hoped this would be possible. I wouldn't imagine this scenario to be too outlandish, but maybe it's just a downside of the staging setup here.
How to reproduce it (as minimally and precisely as possible):
Have a project that depends on k8s.io/api (for example, https://github.com/dcantah/deps or https://github.com/dcantah/deps2) and run:

`hack/pin-dependency.sh github.com/dcantah/deps2 90b0ada1dfdb243bd29f9718f72615517a716c21`

followed by

`hack/update-vendor.sh`
Anything else we need to know?:
This is mainly for tracking and trying to figure out whether this has been a pain point for anyone else. The answer may be to just trim the deps of the project being vendored in so it does not include api/apimachinery, etc.
Where we ran into this was in trying to vendor in a new tag of https://github.com/microsoft/hcsshim that has a https://github.com/containerd/containerd v1.5.x dependency. As containerd depends on a lot of the staging projects, the vendoring scripts will try to update all of these tags and then run into the stated error.
For more info on this: microsoft/hcsshim#1148
cc @dims @mikebrow @kevpar @jsturtevant @marosset
Environment:
- Kubernetes version (use `kubectl version`):
- OS (e.g., `cat /etc/os-release`):
- Kernel (e.g., `uname -a`):