Affinity assistant only takes first task's resource requirements into account when choosing a node #3049
Comments
The intention with the Affinity Assistant is exactly to prevent this: if two `TaskRun`s share a workspace backed by a `PersistentVolumeClaim` but get scheduled to different nodes, the volume (typically ReadWriteOnce) cannot be attached to both, and the later `TaskRun` cannot run.
That is the intention of the Affinity Assistant. However, it can be disabled.
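For reference, disabling it is done through the `feature-flags` ConfigMap; a minimal sketch, assuming a default Tekton Pipelines installation in the `tekton-pipelines` namespace (check the flag name against your installed version):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: feature-flags
  namespace: tekton-pipelines
data:
  # When "true", the controller no longer creates affinity assistant
  # StatefulSets, so TaskRuns sharing a workspace are scheduled independently.
  disable-affinity-assistant: "true"
```

In practice you would patch the existing ConfigMap rather than replace it, e.g. `kubectl patch configmap feature-flags -n tekton-pipelines --type merge -p '{"data":{"disable-affinity-assistant":"true"}}'`.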
One solution to this may be to use dedicated Nodes for the Tekton workload (one possible way is sketched below).
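A sketch of one way to do that, assuming the dedicated nodes carry a hypothetical `workload: tekton` label, using the `nodeSelector` field of the `PipelineRun` pod template:

```yaml
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  name: run-on-dedicated-nodes
spec:
  pipelineRef:
    name: my-pipeline          # hypothetical Pipeline name
  podTemplate:
    # All pods created for this PipelineRun, including the affinity
    # assistant pod, are restricted to nodes carrying this (hypothetical) label.
    nodeSelector:
      workload: tekton
```

Taints on the dedicated nodes plus a matching `tolerations` entry in the same pod template would additionally keep other workloads off those nodes.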
@tekton-robot: Closing this issue after it went stale and then rotten with no further activity.
Expected Behavior

The affinity assistant should either take the resource requirements of all `Task`s in the `PipelineRun` into account when choosing the first node, or it should be flexible enough to allow later `TaskRun`s to end up on different nodes if the initial node can't satisfy their requirements.

Actual Behavior
The affinity assistant ends up on the node used by the first `TaskRun`, and subsequent `TaskRun`s are also tied to that node. This can cause problems when the initial node has sufficient resources for the first `TaskRun` but not the second: the second `TaskRun`'s pod will end up just sitting in Pending until either the initial node has sufficient resources or the `PipelineRun` gets deleted.

Steps to Reproduce the Problem
1. Create a `PipelineRun` with an initial `Task` with no resource requirements, and a second `Task` with more resource requirements than a given node in the cluster will have available (due to other pods running on that node, etc.). A sketch of such a `PipelineRun` follows this list.
2. Run the `PipelineRun` until the first `TaskRun`'s pod ends up on the constrained node.
3. Watch the second `TaskRun`'s pod sit in Pending forever.
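A minimal sketch of such a `PipelineRun` (all names, images, and resource numbers are hypothetical; the shared PVC-backed workspace is what makes the affinity assistant pin both `TaskRun`s to one node):

```yaml
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  name: affinity-assistant-repro
spec:
  # The shared PVC-backed workspace triggers the affinity assistant.
  workspaces:
    - name: shared
      volumeClaimTemplate:
        spec:
          accessModes: ["ReadWriteOnce"]
          resources:
            requests:
              storage: 1Gi
  pipelineSpec:
    workspaces:
      - name: shared
    tasks:
      - name: small-task              # no resource requirements
        workspaces:
          - name: shared
            workspace: shared
        taskSpec:
          workspaces:
            - name: shared
          steps:
            - name: noop
              image: busybox
              script: echo "small task"
      - name: big-task                # requests more than the constrained node has free
        runAfter: ["small-task"]
        workspaces:
          - name: shared
            workspace: shared
        taskSpec:
          workspaces:
            - name: shared
          steps:
            - name: heavy
              image: busybox
              script: echo "big task"
              resources:
                requests:
                  cpu: "6"            # illustrative; pick values the node cannot satisfy
                  memory: 16Gi
```

Once small-task's pod (and with it the affinity assistant) lands on a node that cannot fit big-task's requests, big-task's pod stays Pending, matching the behavior described above.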

Additional Info
Kubernetes version:

Output of `kubectl version`:

Tekton Pipeline version:

Output of `tkn version` or `kubectl get pods -n tekton-pipelines -l app=tekton-pipelines-controller -o=jsonpath='{.items[0].metadata.labels.version}'`: