It would be nice to be able to select a priorityClass so I can prevent the pods from being evicted by another, higher-priority pod.
Node tolerations & affinities would also let me provision the pods on special nodes meant to run the load tests.
@mhaddon Thank you for opening the issue 🙂 k6-operator does have support for affinity and nodeSelector as options, though not for priorities. Out of curiosity, in your use case, can you rely on the globalDefault option to avoid deletion of the pods?
At the moment, this seems relatively straightforward to implement by passing priorityClassName to the operator's pods. An open question is whether the same field is sufficient for all of the operator's pods: runner, starter, and initializer.
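For reference, attaching a priority to a pod is a single field in the pod spec. A minimal sketch of what the operator might set on a runner pod (the names here are illustrative, not the operator's actual generated spec):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: k6-runner-example   # hypothetical name
spec:
  priorityClassName: k6-load-test   # hypothetical PriorityClass, must exist in the cluster
  containers:
    - name: k6
      image: grafana/k6:latest
      args: ["run", "/scripts/test.js"]
```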
My globalDefault is normally set to a lower priority, which allows a pod to be evicted if something more important is running. The issue is that if k6 is load testing another application in Kubernetes, that application might scale up, and if it has a higher priority, Kubernetes will evict k6 to make room for it.
It would be nice to give k6 a priority that means it can never be evicted.
The issue is that if a k6 pod is re-created mid-test, it pretty much breaks the test.
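One way to express "never evict my load-test pods" is a dedicated high-value PriorityClass. A sketch, assuming a cluster where no workload uses a higher value (the name and value are illustrative; preemptionPolicy: Never is optional and keeps k6 itself from preempting the application under test):

```yaml
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: k6-load-test        # hypothetical name
value: 1000000              # higher than the workloads being tested
globalDefault: false        # only pods that reference this class get it
preemptionPolicy: Never     # schedule without evicting other pods
description: "High priority for k6 runner pods so they are not evicted mid-test."
```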