OSDOCS#12034: HCP Kubevirt topology spread constraint #93263
xenolinux merged 1 commit into openshift:main
Conversation
🤖 Thu May 15 16:08:20 - Prow CI generated the docs preview:
Force-pushed 23c34a2 to 8825f04
- SoftTopologyAndDuplicates // <2>
- EvictPodsWithPVC // <3>
- EvictPodsWithLocalStorage // <4>
- LongLifecycle // <5>
I'd suggest:
apiVersion: operator.openshift.io/v1
kind: KubeDescheduler
metadata:
  name: cluster
  namespace: openshift-kube-descheduler-operator
spec:
  managementState: Managed
  deschedulingIntervalSeconds: 30
  mode: "Automatic"
  profiles:
  - DevKubeVirtRelieveAndMigrate
  - SoftTopologyAndDuplicates
  profileCustomizations:
    devEnableSoftTainter: true
    devDeviationThresholds: AsymmetricLow
    devActualUtilizationProfile: PrometheusCPUCombined
DevKubeVirtRelieveAndMigrate is an enhanced variant of LongLifecycle for the KubeVirt use case. With that profile, EvictPodsWithPVC and EvictPodsWithLocalStorage are implicitly enabled.
<3> By default, the {descheduler-operator} prevents the eviction of pods with persistent volume claims (PVCs). Use this profile to allow eviction of pods with PVCs.
<4> By default, pods with local storage are not eligible for eviction. Use this profile to allow eviction of your VMs that use local storage.
With DevKubeVirtRelieveAndMigrate we can avoid those two.
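For illustration, a minimal before/after sketch of the profile lists (names taken from the diff and the suggestion above):

# Before: PVC and local-storage eviction enabled explicitly
profiles:
- SoftTopologyAndDuplicates
- EvictPodsWithPVC
- EvictPodsWithLocalStorage
- LongLifecycle

# After: DevKubeVirtRelieveAndMigrate implies both eviction behaviors
profiles:
- DevKubeVirtRelieveAndMigrate
- SoftTopologyAndDuplicates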
<2> This profile evicts pods that follow the soft topology constraint: `whenUnsatisfiable: ScheduleAnyway` (see the sketch after this list).
<3> By default, the {descheduler-operator} prevents the eviction of pods with persistent volume claims (PVCs). Use this profile to allow eviction of pods with PVCs.
<4> By default, pods with local storage are not eligible for eviction. Use this profile to allow eviction of your VMs that use local storage.
<5> This profile balances resource usage between nodes and enables strategies such as `RemovePodsHavingTooManyRestarts` and `LowNodeUtilization`.
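For callout <2>, a minimal sketch of what the soft constraint means in a pod spec (field values are illustrative, not taken from the docs under review):

topologySpreadConstraints:
- maxSkew: 1
  topologyKey: kubernetes.io/hostname
  # Soft: the scheduler may place the pod anyway when the constraint
  # cannot be met, and the descheduler can evict it later to restore
  # the spread. The hard alternative, DoNotSchedule, is not what this
  # profile targets.
  whenUnsatisfiable: ScheduleAnyway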
The same consideration is valid for DevKubeVirtRelieveAndMigrate.
<3> By default, the {descheduler-operator} prevents the eviction of pods with persistent volume claims (PVCs). Use this profile to allow eviction of pods with PVCs.
<4> By default, pods with local storage are not eligible for eviction. Use this profile to allow eviction of your VMs that use local storage.
<5> This profile balances resource usage between nodes and enables strategies such as `RemovePodsHavingTooManyRestarts` and `LowNodeUtilization`.
<6> You must use this setting when performing a live migration so that the descheduler runs in the background during the migration process.
This is not needed with DevKubeVirtRelieveAndMigrate.
Force-pushed 7a76bf4 to ebfb886
/lgtm

/remove-label peer-review-needed
devActualUtilizationProfile: PrometheusCPUCombined
# ...
----
<1> Sets the number of seconds between the descheduler running cycles.
Did you mean to also comment out the callout lines? The preview looks good though.
Fixed. Added # for the callouts.
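For reference, a sketch of how the commented callouts look in the AsciiDoc YAML source (marker numbering here is illustrative; the leading # keeps the YAML valid):

deschedulingIntervalSeconds: 30 # <1>
profileCustomizations:
  devActualUtilizationProfile: PrometheusCPUCombined # <2>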
By default, KubeVirt virtual machines (VMs) created by a node pool are scheduled on any available nodes that have the capacity to run the VMs. The `topologySpreadConstraint` constraint is set by default to schedule VMs on multiple nodes.
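For illustration, a constraint of the kind described here might look like the following on the VM pods (a sketch only; the label selector is hypothetical, not necessarily what HyperShift generates):

topologySpreadConstraints:
- maxSkew: 1
  topologyKey: kubernetes.io/hostname
  whenUnsatisfiable: ScheduleAnyway
  labelSelector:
    matchLabels:
      hypershift.openshift.io/nodePool: example-nodepool # hypothetical label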
In some scenarios, node pool VMs might run on the same node, which can cause availability issues. To avoid distribution of VMs on a single node, use the descheduler to continuously honour the `topologySpreadConstraint` constraint to spread VMs on multiple nodes.
Suggested change:
- In some scenarios, node pool VMs might run on the same node, which can cause availability issues. To avoid distribution of VMs on a single node, use the descheduler to continuously honour the `topologySpreadConstraint` constraint to spread VMs on multiple nodes.
+ In some scenarios, node pool VMs might run on the same node, which can cause availability issues. To avoid distribution of VMs on a single node, use the descheduler to continuously honor the `topologySpreadConstraint` constraint to spread VMs on multiple nodes.
/remove-label peer-review-in-progress
Force-pushed ebfb886 to 4fca8a7
New changes are detected. LGTM label has been removed.
Force-pushed 4fca8a7 to a7ad529
@xenolinux: all tests passed!
/cherrypick enterprise-4.19
/cherrypick enterprise-4.18
@xenolinux: new pull request created: #93449
@xenolinux: new pull request created: #93450
Version(s): 4.18+
Issue: https://issues.redhat.com/browse/OSDOCS-12034
Link to docs preview: https://93263--ocpdocs-pr.netlify.app/openshift-enterprise/latest/hosted_control_planes/hcp-manage/hcp-manage-virt.html#hcp-topology-spread-constraint_hcp-manage-virt
QE review:
SME review:
Additional information:
This content is QE/SME approved and peer-reviewed. It was previously reverted per QE's suggestion; this PR adds the same content back to the docs. The newly added content has been reviewed by SMEs.