This repository has been archived by the owner on May 25, 2023. It is now read-only.

node idle resources not considered in preemption #911

Closed

mateuszlitwin opened this issue Nov 25, 2019 · 4 comments
Labels
kind/bug: Categorizes issue or PR as related to a bug.
lifecycle/rotten: Denotes an issue or PR that has aged beyond stale and will be auto-closed.

Comments

@mateuszlitwin
Contributor

mateuszlitwin commented Nov 25, 2019

Is this a BUG REPORT or FEATURE REQUEST?:

Uncomment only one, leave it on its own line:

/kind bug

What happened:

Pod A (memory: 1GB) is running on a node (memory: 2GB). I then scheduled pod B (memory: 2GB). Pod A was not preempted, because the sum of the victims' resources (1GB) is lower than pod B's resource request (2GB).

What you expected to happen:

Pod A should be preempted, because the node's idle resources (1GB) plus the victims' resources (1GB) are enough to fit pod B (2GB).

How to reproduce it (as minimally and precisely as possible):

Modify TestPreempt in pkg/scheduler/actions/preempt/preempt_test.go.
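
For example, a case along these lines could be added. This is only a minimal sketch: it models just the memory arithmetic, whereas the real table-driven cases in preempt_test.go build full pod and node fixtures, so the names here are illustrative rather than the actual test helpers:

```go
package preempt

import "testing"

// Sketch of the scenario a new TestPreempt case should cover. It models
// only the memory arithmetic; the real cases in preempt_test.go build
// full pod and node fixtures.
func TestPreemptConsidersNodeIdle(t *testing.T) {
	const gb = int64(1 << 30)

	nodeAllocatable := 2 * gb  // one node with 2GB of memory
	victimRequests := 1 * gb   // pod A, running on the node
	preemptorRequest := 2 * gb // pod B, pending

	idle := nodeAllocatable - victimRequests // 1GB left idle

	// Today's check compares victims alone against the preemptor,
	// which rejects preemption in this scenario:
	if victimRequests >= preemptorRequest {
		t.Fatal("victims alone should not cover pod B here")
	}

	// Expected check: victims plus the node's idle resources cover pod B.
	if victimRequests+idle < preemptorRequest {
		t.Fatal("victims plus idle resources should cover pod B")
	}
}
```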

Anything else we need to know?:

I do not know if this is intentional, but node idle resources are ignored when making preemption decisions. I would expect the check to be whether the resources freed by preempting the victims, plus the node's idle resources, are enough to fit the preemptor.
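
A minimal sketch of the condition I have in mind, using an illustrative Resource type (these names are not the scheduler's actual API):

```go
package main

import "fmt"

// Resource is a simplified stand-in for the scheduler's resource type;
// only memory is modeled here.
type Resource struct {
	Memory int64
}

// Add returns the element-wise sum of two Resources.
func (r Resource) Add(o Resource) Resource { return Resource{Memory: r.Memory + o.Memory} }

// Covers reports whether r is enough to satisfy the request req.
func (r Resource) Covers(req Resource) bool { return r.Memory >= req.Memory }

func main() {
	const gb = int64(1 << 30)
	idle := Resource{Memory: 1 * gb}      // node allocatable (2GB) minus pod A (1GB)
	victims := Resource{Memory: 1 * gb}   // pod A, the only candidate victim
	preemptor := Resource{Memory: 2 * gb} // pod B

	// Behavior I observed: only victim resources are counted.
	fmt.Println("victims cover preemptor:", victims.Covers(preemptor)) // false -> no preemption

	// Behavior I expected: victims plus the node's idle resources.
	fmt.Println("victims+idle cover preemptor:", victims.Add(idle).Covers(preemptor)) // true
}
```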

@k8s-ci-robot added the kind/bug label Nov 25, 2019
@mateuszlitwin changed the title Node idle resources not included during → node idle resources not considred in preemption Nov 25, 2019
@mateuszlitwin changed the title node idle resources not considred in preemption → node idle resources not considered in preemption Nov 25, 2019
@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot added the lifecycle/stale label Feb 23, 2020
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@k8s-ci-robot added the lifecycle/rotten label and removed the lifecycle/stale label Mar 25, 2020
@fejta-bot

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

@k8s-ci-robot
Contributor

@fejta-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
