Take dataSource topology into account when scheduling a pod using unbound WFFC storage #107479
Comments
/sig storage
Not sure if storage or scheduling is the right SIG, but this is mainly a storage issue, so setting it to sig-storage for now. I am more than happy to supply a PR to fix this; I just want some confirmation that I am looking at the right thing for this particular issue. Basically, the nodes need to be filtered down based on the dataSource of an unbound WFFC PVC, instead of just assuming the volume can go anywhere.
/triage accepted
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
After 90d of inactivity, lifecycle/stale is applied
After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
Mark this issue or PR as fresh with /remove-lifecycle stale
Mark this issue or PR as rotten with /lifecycle rotten
Close this issue or PR with /close
Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
After 90d of inactivity, lifecycle/stale is applied
After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
Mark this issue or PR as fresh with /remove-lifecycle stale
Mark this issue or PR as rotten with /lifecycle rotten
Close this issue or PR with /close
Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
Please do not send anything to me, it's too much. -- Sent from Sina Mail client
/remove-lifecycle stale
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
After 90d of inactivity, lifecycle/stale is applied
After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
Mark this issue or PR as fresh with /remove-lifecycle stale
Mark this issue or PR as rotten with /lifecycle rotten
Close this issue or PR with /close
Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
This issue has not been updated in over 1 year, and should be re-triaged.
You can:
Confirm that this issue is still relevant with /triage accepted (org members only)
Close this issue with /close
For more details on the triage process, see https://www.kubernetes.dev/docs/guide/issue-triage/
/remove-triage accepted
Going to close this issue as I found an acceptable workaround. It is technically still a problem, though.
What happened?
I have some dynamically provisioned storage that uses WFFC (WaitForFirstConsumer) volume binding, like the included csi-hostpath driver. I have a multi-node cluster. When I attempt to do a CSI clone using that storage, the scheduler does not take the topology of the dataSource into account when scheduling the pod. Example:
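A minimal sketch of this kind of setup (the storage class, provisioner, and object names below are illustrative, not the exact manifests from the report):

```yaml
# Illustrative manifests only: all names here are hypothetical.
# A WFFC storage class backed by a topology-constrained CSI driver (e.g. csi-hostpath).
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-hostpath-sc
provisioner: hostpath.csi.k8s.io
volumeBindingMode: WaitForFirstConsumer
---
# Source PVC, already bound to a volume that exists on one particular node.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: source-pvc
spec:
  accessModes: [ReadWriteOnce]
  resources:
    requests:
      storage: 1Gi
  storageClassName: csi-hostpath-sc
---
# Clone PVC: unbound and WFFC, so the scheduler ignores the topology of its dataSource.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: clone-pvc
spec:
  accessModes: [ReadWriteOnce]
  resources:
    requests:
      storage: 1Gi
  storageClassName: csi-hostpath-sc
  dataSource:
    kind: PersistentVolumeClaim
    name: source-pvc
---
# Pod using the clone: it can land on a node where source-pvc's volume does not
# exist, after which provisioning of clone-pvc cannot succeed.
apiVersion: v1
kind: Pod
metadata:
  name: clone-consumer
spec:
  containers:
    - name: app
      image: busybox
      command: ["sleep", "3600"]
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: clone-pvc
```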
What did you expect to happen?
I am expecting the VolumeZone plugin to take the topology of the dataSource of a PVC into account when filtering nodes. A snapshot restore or CSI clone cannot succeed if the source volume doesn't exist in the same topology as the node. I am fairly certain the offending piece of code is here, where it just skips checking unbound WFFC PVCs.
How can we reproduce it (as minimally and precisely as possible)?
Similar to the above setup, but adding an intermediate step of taking a snapshot of the first PVC and setting the dataSource to that snapshot will yield the same result, as sketched below. You will need a version of the CSI snapshot sidecar/controller that includes kubernetes-csi/external-snapshotter#585 so the snapshots are properly created on the right node; this will also label the volume snapshot content with the node name.
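A sketch of that snapshot variant, again with hypothetical names, reusing the source-pvc and csi-hostpath-sc objects from the clone example above:

```yaml
# Illustrative only: the snapshot class and snapshot names are hypothetical.
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: csi-hostpath-snapclass
driver: hostpath.csi.k8s.io
deletionPolicy: Delete
---
# Snapshot of the source PVC; with kubernetes-csi/external-snapshotter#585 the
# resulting VolumeSnapshotContent carries the node information of the source volume.
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: source-snapshot
spec:
  volumeSnapshotClassName: csi-hostpath-snapclass
  source:
    persistentVolumeClaimName: source-pvc
---
# Restore PVC: unbound WFFC with a snapshot dataSource, so the same scheduling
# problem occurs as in the clone case.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: restore-pvc
spec:
  accessModes: [ReadWriteOnce]
  resources:
    requests:
      storage: 1Gi
  storageClassName: csi-hostpath-sc
  dataSource:
    apiGroup: snapshot.storage.k8s.io
    kind: VolumeSnapshot
    name: source-snapshot
```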
Anything else we need to know?
No response
Kubernetes version
Cloud provider
OS version
Install tools
Container runtime (CRI) and version (if applicable)
Related plugins (CNI, CSI, ...) and versions (if applicable)