Preserve path for cluster server endpoints (fixes virtctl with Rancher) #7361

Merged
merged 1 commit into kubevirt:main on Mar 18, 2022

Conversation

SeanKnight
Contributor

What this PR does / why we need it:

This PR fixes virtctl so that it can be used with clusters accessed via Rancher's authentication proxy.

Here is an example kubeconfig to illustrate the problem:

```yaml
apiVersion: v1
kind: Config
clusters:
- name: "rancher-managed-cluster"
  cluster:
    server: "https://rancher.example.com/k8s/clusters/c-137"  # this is a cluster managed by Rancher
- name: "rancher-server"
  cluster:
    server: "https://rancher.example.com"  # the cluster running Rancher Server, similar URL but no path
...
```

As you can see, the URL for rancher-managed-cluster contains a path component. This is valid but uncommon, so anyone who has not encountered such a setup before might assume server URLs never carry a path.
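
For illustration, here is a small standalone Go sketch (not KubeVirt code; the URL is the example from the kubeconfig above) showing that net/url parses out exactly the path component a client must preserve:

```go
package main

import (
	"fmt"
	"net/url"
)

func main() {
	// Parse the Rancher-managed cluster's server URL from the kubeconfig above.
	u, err := url.Parse("https://rancher.example.com/k8s/clusters/c-137")
	if err != nil {
		panic(err)
	}
	fmt.Println(u.Host) // rancher.example.com
	fmt.Println(u.Path) // /k8s/clusters/c-137 -- the component that must survive
}
```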

I ran `virtctl console myvmi` under a debugger until I found where the path component was being discarded:

```go
u.Path = fmt.Sprintf("/apis/subresources.kubevirt.io/%s/namespaces/%s/%s/%s/%s", v1.ApiStorageVersion, namespace, resource, name, subresource)
```

Before the above line executed, the value of `u.Path` was:

```
/k8s/clusters/c-137
```

After it executed, the observed value was:

```
/apis/subresources.kubevirt.io/v1alpha3/namespaces/default/virtualmachineinstances/myvmi/console
```

The correct, expected value is:

```
/k8s/clusters/c-137/apis/subresources.kubevirt.io/v1alpha3/namespaces/default/virtualmachineinstances/myvmi/console
```

The result is that the API request ends up going to the rancher-server cluster instead of rancher-managed-cluster, which produces very confusing error messages, since it is not obvious that the wrong cluster is responding.

The proposed code change appends to the existing `u.Path` value instead of replacing it, as sketched below.
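
Here is a minimal standalone sketch of the difference (placeholder example values; illustrative only, not the actual virtctl code):

```go
package main

import (
	"fmt"
	"net/url"
	"path"
)

func main() {
	u, _ := url.Parse("https://rancher.example.com/k8s/clusters/c-137")
	sub := "/apis/subresources.kubevirt.io/v1alpha3/namespaces/default/virtualmachineinstances/myvmi/console"

	// Buggy behavior: plain assignment replaces the existing path,
	// silently dropping /k8s/clusters/c-137.
	overwritten := *u
	overwritten.Path = sub
	fmt.Println(overwritten.String())
	// https://rancher.example.com/apis/subresources.kubevirt.io/...

	// Proposed fix: path.Join appends the subresource path to whatever
	// path the server URL already carries.
	joined := *u
	joined.Path = path.Join(joined.Path, sub)
	fmt.Println(joined.String())
	// https://rancher.example.com/k8s/clusters/c-137/apis/subresources.kubevirt.io/...
}
```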

Which issue(s) this PR fixes:

Fixes #3760, Fixes #2608

Special notes for your reviewer:

Tested with my cluster accessed through Rancher, and the tests in vmi_test.go passed, but no other testing was performed.

Release note:

Fixed a bug that prevented virtctl from working with clusters accessed via the Rancher authentication proxy, or any other cluster whose server URL contains a path component. (#3760)

@kubevirt-bot kubevirt-bot added release-note Denotes a PR that will be considered when it comes time to generate release notes. dco-signoff: yes Indicates the PR's author has DCO signed all their commits. labels Mar 12, 2022
@kubevirt-bot
Contributor

Hi @SeanKnight. Thanks for your PR.

I'm waiting for a kubevirt member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test on its own line. Until that is done, I will not automatically test new commits in this PR, but the usual testing commands by org members will still work. Regular contributors should join the org to skip this step.

Once the patch is verified, the new status will be reflected by the ok-to-test label.

I understand the commands that are listed here.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@kubevirt-bot kubevirt-bot added the needs-ok-to-test Indicates a PR that requires an org member to verify it is safe to test. label Mar 12, 2022
```go
u.Path = path.Join(
	u.Path,
	fmt.Sprintf("/apis/subresources.kubevirt.io/%s/namespaces/%s/%s/%s/%s", v1.ApiStorageVersion, namespace, resource, name, subresource),
)
u.Path = fmt.Sprintf("/apis/subresources.kubevirt.io/%s/namespaces/%s/%s/%s/%s", v1.ApiStorageVersion, namespace, resource, name, subresource)
```
Contributor

This line would need to be removed, otherwise it will overwrite `u.Path`.

Contributor Author

Thank you very much; it slipped back in while I was typing up the PR and capturing the variable value. I have fixed it.

@vasiliy-ul
Contributor

/ok-to-test

@kubevirt-bot kubevirt-bot added ok-to-test Indicates a non-member PR verified by an org member that is safe to test. and removed needs-ok-to-test Indicates a PR that requires an org member to verify it is safe to test. labels Mar 14, 2022
@vasiliy-ul
Contributor

/retest

1 similar comment
@SeanKnight
Contributor Author

/retest

@kubevirt-bot kubevirt-bot added dco-signoff: no Indicates the PR's author has not DCO signed all their commits. and removed dco-signoff: yes Indicates the PR's author has DCO signed all their commits. labels Mar 17, 2022
@kubevirt-bot kubevirt-bot added dco-signoff: yes Indicates the PR's author has DCO signed all their commits. and removed dco-signoff: no Indicates the PR's author has not DCO signed all their commits. labels Mar 17, 2022
@SeanKnight
Contributor Author

/retest

@SeanKnight
Contributor Author

I undid my changes (as a new commit) and force pushed, and the same test failed (and now an additional one), so I do not believe my commit is what is causing these tests to fail.

@SeanKnight
Contributor Author

@dankenigsberg is it possible that the pull-kubevirt-e2e-k8s-1.23-sig-compute-cgroupsv2 failure could be caused by ca2686a#diff-005747350099de38fcd22401693fe634b2c4607b4deec581b298efe440b93d2fR154? That is the most recent change I see to the test that is failing.

Signed-off-by: Sean Knight <git@seanknight.com>
@SeanKnight
Contributor Author

Rebased from origin/main in case recent commits fixed something.

@dankenigsberg
Member

> @dankenigsberg is it possible that the pull-kubevirt-e2e-k8s-1.23-sig-compute-cgroupsv2 failure could be caused by ca2686a#diff-005747350099de38fcd22401693fe634b2c4607b4deec581b298efe440b93d2fR154? That is the most recent change I see to the test that is failing.

The failure in https://prow.ci.kubevirt.io/view/gs/kubevirt-prow/pr-logs/pull/kubevirt_kubevirt/7361/pull-kubevirt-e2e-k8s-1.23-sig-compute-cgroupsv2/1503304370333159424#1:build-log.txt%3A3038 seems to be about the kubevirtci cluster not responding fast enough; it seems unrelated to my recent changes.

Unfortunately, it is not uncommon for a single test of ours to fail due to a fluke timeout. My PR #7251 is a small step towards simplifying and normalizing our tests. I hope more people join this cleanup effort, which should eventually make our tests more reliable and easier to debug.

@vasiliy-ul
Contributor

> I undid my changes (as a new commit) and force pushed, and the same test failed (and now an additional one), so I do not believe my commit is what is causing these tests to fail.

The CI failure does not seem related to the changes here. Probably just some flaky tests. From my perspective the PR looks fine. @rmohr , maybe you could take a look as well?

/lgtm
/retest

@kubevirt-bot kubevirt-bot added the lgtm Indicates that a PR is ready to be merged. label Mar 17, 2022
@rmohr
Member

rmohr commented Mar 17, 2022

> > I undid my changes (as a new commit) and force pushed, and the same test failed (and now an additional one), so I do not believe my commit is what is causing these tests to fail.
>
> The CI failure does not seem related to the changes here. Probably just some flaky tests. From my perspective the PR looks fine. @rmohr , maybe you could take a look as well?
>
> /lgtm
> /retest

Yes, everything looks great. We had temporary issues due to a problem on GitHub's side and a public golang proxy going down. The job backlog is putting a little more pressure on CI than it should.

@SeanKnight I love that PR! The fix is so simple. This issue was bugging us for a really long time.

/approve

@kubevirt-bot
Contributor

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: rmohr

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@kubevirt-bot kubevirt-bot added the approved Indicates a PR has been approved by an approver from all required OWNERS files. label Mar 17, 2022
@SeanKnight
Contributor Author

@dankenigsberg thank you for taking a look, I could not for the life of me figure out what was going on.

@vasiliy-ul @rmohr thank you too for taking a look at this.

/retest

🤞

@kubevirt-commenter-bot

/retest
This bot automatically retries jobs that failed/flaked on approved PRs.
Silence the bot with an /lgtm cancel or /hold comment for consistent failures.

1 similar comment
@kubevirt-commenter-bot

/retest
This bot automatically retries jobs that failed/flaked on approved PRs.
Silence the bot with an /lgtm cancel or /hold comment for consistent failures.

@SeanKnight
Contributor Author

/retest

@rmohr
Member

rmohr commented Mar 18, 2022

@SeanKnight I know our merge experience for new contributors is not optimal, but you can just leave it as is. Our bot will take care of the retests (but you are welcome to write /retest yourself if you like).

@SeanKnight
Contributor Author

@rmohr thank you, I will let the bot take care of it.

@kubevirt-commenter-bot

/retest
This bot automatically retries jobs that failed/flaked on approved PRs.
Silence the bot with an /lgtm cancel or /hold comment for consistent failures.

@kubevirt-bot kubevirt-bot merged commit fd67d84 into kubevirt:main Mar 18, 2022