-
I have updated my placement settings for my mons to prefer my more powerful nodes, but allow scheduling on my arm nodes if absolutely necessary to maintain uptime. My placement spec in
I can see that the affinity applies to the pods, because they show the following while running:
But when I delete the mon pods to force them to reschedule, they never schedule where the affinity should place them; they just schedule back where they were. When I drain the node one of the mons is on, the pod remains pending. How do I get this node selector to go away, or get the operator to schedule a completely new mon so the affinity applies?
-
With the cluster.yaml example, once scheduled, the mons gain an affinity to a specific node because they use a host path to store their data. If you want the mons to be schedulable to a different node, you will need to specify a volumeClaimTemplate for the mon as seen in the cluster-on-pvc example. Alternatively, you may want to taint the nodes and then fail over the mons.
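A minimal sketch of what that looks like in the CephCluster spec, modeled on the cluster-on-pvc example (the storage class name and size here are placeholders you would replace with your own):

```yaml
apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
  name: rook-ceph
  namespace: rook-ceph
spec:
  mon:
    count: 3
    allowMultiplePerNode: false
    # With a volumeClaimTemplate, mon data lives on a PVC instead of a
    # node-local host path, so the operator no longer pins each mon to the
    # node where it first scheduled.
    volumeClaimTemplate:
      spec:
        storageClassName: my-storage-class  # assumption: substitute your storage class
        resources:
          requests:
            storage: 10Gi
```

With PVC-backed mons, your placement affinity can take effect on reschedule because the data follows the claim rather than the node.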
Correct, you can failover a mon to get a new mon created on a different node.
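One way to trigger that failover manually, assuming a default `rook-ceph` namespace and a mon named `a` (adjust both for your cluster); the operator replaces a mon it considers down after its health-check timeout:

```shell
# Scale the target mon deployment to zero so the operator sees it as down.
kubectl -n rook-ceph scale deployment rook-ceph-mon-a --replicas=0

# After the mon failover timeout, the operator creates a replacement mon
# (e.g. mon "d") honoring the current placement settings. Watch for it:
kubectl -n rook-ceph get pods -l app=rook-ceph-mon -w
```

Once the new mon is up and in quorum, the old mon deployment can be cleaned up by the operator.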