e2e test failure in CAPZ w/ CAPI v1.7.0-beta.1 #10332
Comments
This issue is currently awaiting triage. CAPI contributors will take a look as soon as possible and apply the appropriate triage label. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
STEP: Setting up the bootstrap cluster @ 03/27/24 14:18:31.327
INFO: Loading image: "localhost:5000/ci-e2e/cluster-api-azure-controller-arm64:20240327201550"
INFO: Image localhost:5000/ci-e2e/cluster-api-azure-controller-arm64:20240327201550 is present in local container image cache
INFO: Loading image: "registry.k8s.io/cluster-api/cluster-api-controller:v1.7.0-beta.1"
INFO: Image registry.k8s.io/cluster-api/cluster-api-controller:v1.7.0-beta.1 is present in local container image cache
INFO: Loading image: "registry.k8s.io/cluster-api/kubeadm-bootstrap-controller:v1.7.0-beta.1"
INFO: Image registry.k8s.io/cluster-api/kubeadm-bootstrap-controller:v1.7.0-beta.1 is present in local container image cache
INFO: Loading image: "registry.k8s.io/cluster-api/kubeadm-control-plane-controller:v1.7.0-beta.1"
INFO: Image registry.k8s.io/cluster-api/kubeadm-control-plane-controller:v1.7.0-beta.1 is present in local container image cache
INFO: Loading image: "registry.k8s.io/cluster-api-helm/cluster-api-helm-controller:v0.1.1-alpha.1"
INFO: Image registry.k8s.io/cluster-api-helm/cluster-api-helm-controller:v0.1.1-alpha.1 is present in local container image cache
STEP: Initializing the bootstrap cluster @ 03/27/24 14:18:39.008
INFO: clusterctl init --config /Users/matt/projects/cluster-api-provider-azure/_artifacts/repository/clusterctl-config.yaml --kubeconfig /Users/matt/.kube/config --wait-providers --core cluster-api --bootstrap kubeadm --control-plane kubeadm --infrastructure azure --addon helm
[FAILED] in [SynchronizedBeforeSuite] - /Users/matt/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.7.0-beta.1/framework/clusterctl/client.go:90 @ 03/27/24 14:18:39.052
<< Timeline
[FAILED] failed to run clusterctl init
Unexpected error:
<*errors.withStack | 0x140007f6648>:
failed to get provider components for the "cluster-api" provider: failed to get repository client for the CoreProvider with name cluster-api: error creating the local filesystem repository client: failed to get latest version: failed to find releases tagged with a valid semantic version number
{
error: <*errors.withMessage | 0x140000b1860>{
cause: <*errors.withStack | 0x140007f6618>{
error: <*errors.withMessage | 0x140000b1840>{
cause: <*errors.withStack | 0x140007f65b8>{
error: <*errors.withMessage | 0x140000b1760>{
cause: <*errors.withStack | 0x140007f6588>{
error: <*errors.withMessage | 0x140000b1740>{
cause: <*errors.fundamental | 0x140007f6558>{
msg: "failed to find releases tagged with a valid semantic version number",
stack: [..., ..., ..., ..., ..., ..., ..., ..., ..., ..., ..., ..., ..., ..., ..., ..., ..., ..., ..., ..., ...],
},
msg: "failed to get latest version",
},
stack: [0x1026b5d64, 0x1026ac9b4, 0x1026ac5d8, 0x1026ac428, 0x102a3afb8, 0x102a3be50, 0x102a40340, 0x102a3fd44, 0x102a3f550, 0x102b57f24, 0x102b5c078, 0x103493754, 0x103492150, 0x100f88ef4, 0x100f88374, 0x10132aba8, 0x101338ef8, 0x10133bc58, 0x100f14574],
},
msg: "error creating the local filesystem repository client",
},
stack: [0x1026ac9d0, 0x1026ac5d8, 0x1026ac428, 0x102a3afb8, 0x102a3be50, 0x102a40340, 0x102a3fd44, 0x102a3f550, 0x102b57f24, 0x102b5c078, 0x103493754, 0x103492150, 0x100f88ef4, 0x100f88374, 0x10132aba8, 0x101338ef8, 0x10133bc58, 0x100f14574],
},
msg: "failed to get repository client for the CoreProvider with name cluster-api",
},
stack: [0x1026ac6b8, 0x1026ac428, 0x102a3afb8, 0x102a3be50, 0x102a40340, 0x102a3fd44, 0x102a3f550, 0x102b57f24, 0x102b5c078, 0x103493754, 0x103492150, 0x100f88ef4, 0x100f88374, 0x10132aba8, 0x101338ef8, 0x10133bc58, 0x100f14574],
},
msg: "failed to get provider components for the \"cluster-api\" provider",
},
stack: [0x102a405e8, 0x102a3fd44, 0x102a3f550, 0x102b57f24, 0x102b5c078, 0x103493754, 0x103492150, 0x100f88ef4, 0x100f88374, 0x10132aba8, 0x101338ef8, 0x10133bc58, 0x100f14574],
}
occurred
In [SynchronizedBeforeSuite] at: /Users/matt/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.7.0-beta.1/framework/clusterctl/client.go:90 @ 03/27/24 14:18:39.052
Full Stack Trace
sigs.k8s.io/cluster-api/test/framework/clusterctl.Init({0x104537b68, 0x106317fc0}, {{0x14000c6aaf0, 0x4d}, {0x1400006a660, 0x5c}, {0x140004d7dd0, 0x18}, {0x1034aca57, 0xb}, ...})
/Users/matt/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.7.0-beta.1/framework/clusterctl/client.go:90 +0x2d0
sigs.k8s.io/cluster-api/test/framework/clusterctl.InitManagementClusterAndWatchControllerLogs({0x104537b68?, _}, {{0x10454b370, 0x14000c26e40}, {0x1400006a660, 0x5c}, {0x1034aca57, 0xb}, {0x14000c26fa0, 0x1, ...}, ...}, ...)
/Users/matt/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.7.0-beta.1/framework/clusterctl/clusterctl_helpers.go:98 +0x508
sigs.k8s.io/cluster-api-provider-azure/test/e2e.initBootstrapCluster({0x10454b370, 0x14000c26e40}, 0x140006ba4b0, {0x1400006a660, 0x5c}, {0x16ef656f2, 0x3a})
/Users/matt/projects/cluster-api-provider-azure/test/e2e/e2e_suite_test.go:189 +0x484
sigs.k8s.io/cluster-api-provider-azure/test/e2e.glob..func4()
/Users/matt/projects/cluster-api-provider-azure/test/e2e/e2e_suite_test.go:79 +0x3d0
reflect.Value.call({0x103c3a5a0?, 0x1044e4320?, 0x13?}, {0x10349e1cf, 0x4}, {0x1400003ff08, 0x0, 0x0?})
/usr/local/go/src/reflect/value.go:596 +0x994
reflect.Value.Call({0x103c3a5a0?, 0x1044e4320?, 0x0?}, {0x140006f9f08?, 0x0?, 0x0?})
/usr/local/go/src/reflect/value.go:380 +0x94
------------------------------
[SynchronizedBeforeSuite] [FAILED] [30.970 seconds]
[SynchronizedBeforeSuite]
/Users/matt/projects/cluster-api-provider-azure/test/e2e/e2e_suite_test.go:63
[FAILED] SynchronizedBeforeSuite failed on Ginkgo parallel process #1
The first SynchronizedBeforeSuite function running on Ginkgo parallel process
#1 failed. This suite will now abort.
In [SynchronizedBeforeSuite] at: /Users/matt/projects/cluster-api-provider-azure/test/e2e/e2e_suite_test.go:63 @ 03/27/24 14:18:39.073
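For context, the root error above ("failed to find releases tagged with a valid semantic version number") is raised while clusterctl's local repository client scans the provider's version directories for valid semver tags. The following is only a rough sketch of that kind of selection logic, not the actual CAPI implementation — the function names and the simplified semver regex are assumptions made for illustration:

```go
package main

import (
	"fmt"
	"regexp"
	"sort"
	"strconv"
)

// semverRe is a simplified matcher for tags like v1.7.0 or v1.7.0-beta.1.
var semverRe = regexp.MustCompile(`^v(\d+)\.(\d+)\.(\d+)(?:-(.+))?$`)

// latestVersion mimics (loosely) how a local repository client might pick
// the newest release from a list of version directories: ignore anything
// that is not a valid semantic version, then take the highest remainder.
func latestVersion(dirs []string) (string, error) {
	var valid []string
	for _, d := range dirs {
		if semverRe.MatchString(d) {
			valid = append(valid, d)
		}
	}
	if len(valid) == 0 {
		return "", fmt.Errorf("failed to find releases tagged with a valid semantic version number")
	}
	sort.Slice(valid, func(i, j int) bool { return less(valid[i], valid[j]) })
	return valid[len(valid)-1], nil
}

// less compares two already-validated version strings numerically by
// major, minor, patch; a pre-release sorts before the plain release.
// (Pre-release vs pre-release ordering is omitted for brevity.)
func less(a, b string) bool {
	ma, mb := semverRe.FindStringSubmatch(a), semverRe.FindStringSubmatch(b)
	for k := 1; k <= 3; k++ {
		na, _ := strconv.Atoi(ma[k])
		nb, _ := strconv.Atoi(mb[k])
		if na != nb {
			return na < nb
		}
	}
	return ma[4] != "" && mb[4] == ""
}

func main() {
	// A directory literally named "latest" is not a semver tag, so it
	// contributes nothing to version resolution.
	v, err := latestVersion([]string{"v1.6.0", "v1.7.0-beta.1"})
	fmt.Println(v, err)
	_, err = latestVersion([]string{"latest"})
	fmt.Println(err)
}
```

Under this reading, a repository directory containing only non-semver entries (such as `latest`) would produce exactly the error seen in the log.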
I think I've figured it out; I'd forgotten to add a […]. I'll close this issue after the tests pass.
/close

Indeed, this problem boils down to user error as described above.
@mboersma: Closing this issue. In response to this:
Which jobs are failing?
CAPZ (not CAPI):
Which tests are failing?
All e2e tests that require provisioning a cluster by running `clusterctl init`.

Since when has it been failing?
Since trying to integrate CAPI v1.7.0-beta.0 in this PR: kubernetes-sigs/cluster-api-provider-azure#4646
Testgrid link
No response
Reason for failure (if possible)
We are reusing CAPI's e2e framework, but provisioning is unable to run `clusterctl init` to set up CAPI. AFAICT, at runtime the framework is trying to locate CAPI at the path `_artifacts/repository/cluster-api/latest`, but that doesn't exist. Instead, the resources are at `_artifacts/repository/cluster-api/v1.7.0-beta.1`.

Anything else we need to know?
I wouldn't be surprised if CAPZ is doing something wrong here, but this code has always worked before, and I followed the previous changes we did when I integrated CAPI v1.6.0... Maybe I missed something?
Label(s) to be applied
/kind failing-test
One or more /area label. See https://github.com/kubernetes-sigs/cluster-api/labels?q=area for the list of labels.