
Evenly distribute the fractions when allocating from multiple GPUs #85

Closed
achimnol opened this issue Dec 27, 2019 · 3 comments
Labels: comp:agent (Related to Agent component), type:feature (Add new features)

achimnol (Member) commented Dec 27, 2019

For instance, if we have available GPU shares [0.3, 0.3] on two GPUs and try to allocate 0.4 vGPU, the current FractionAllocMap in the agent allocates [0.3, 0.1].
This may cause problems for GPU applications that assume all (multiple) GPUs they have access to have identical resources. For such applications, we need to allocate [0.2, 0.2] instead.

More test cases (a sketch of an allocator satisfying them follows the list):

[0.3, 0.2, 0.1] allocate 0.1 => [0, 0, 0.1]   # should favor smaller fit to reduce fragmentation
[0.3, 0.2, 0.1] allocate 0.15 => [0, 0.15, 0] # should favor smaller fit but use the largest chunk possible
[0.3, 0.2, 0.1] allocate 0.2 => [0, 0.2, 0]   # should favor smaller fit but use the largest chunk possible
[0.3, 0.2, 0.1] allocate 0.3 => [0.3, 0, 0]
[0.3, 0.2, 0.1] allocate 0.4 => [0.2, 0.2, 0]
[0.3, 0.2, 0.1] allocate 0.5 => [0.3, 0.2, 0] or [0.2, 0.2, 0.1]  # if both are possible, I'd prefer the lesser number of GPUs with bigger chunks
[0.3, 0.2, 0.1] allocate 0.6 => [0.3, 0.2, 0.1]
[0.3, 0.2, 0.1] allocate 0.7 => insufficient
[0.3, 0.3] allocate 0.3 => [0.3, 0]
[0.3, 0.3] allocate 0.4 => [0.2, 0.2]
[0.3, 0.3] allocate 0.5 => [0.25, 0.25]
[0.3, 0.3] allocate 0.6 => [0.3, 0.3]
[0.3, 0.3] allocate 0.7 => insufficient
[0.2, 0.2, 0.2] allocate 0.2 => [0.2, 0, 0]
[0.2, 0.2, 0.2] allocate 0.3 => [0.15, 0.15, 0]
[0.2, 0.2, 0.2] allocate 0.4 => [0.2, 0.2, 0]
[0.2, 0.2, 0.2] allocate 0.5 => [0.17, 0.17, 0.16]
[0.2, 0.2, 0.2] allocate 0.6 => [0.2, 0.2, 0.2]
[0.2, 0.2, 0.2] allocate 0.7 => insufficient
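
The following is a minimal sketch of one policy that reproduces all of the expected results above: find the fewest devices whose combined capacity covers the request, prefer an exact even split over the smallest devices that can each hold an equal share, and fall back to water-filling the largest devices when no even split exists. All names (evenly_allocate, InsufficientResource, QUANTUM) are hypothetical illustrations, not the actual FractionAllocMap API, and requests are assumed to lie on the quantum grid.

from decimal import Decimal, ROUND_DOWN

QUANTUM = Decimal("0.01")  # assumed quantization step for fractional shares

class InsufficientResource(Exception):
    pass

def evenly_allocate(capacities: list[Decimal], request: Decimal) -> list[Decimal]:
    n = len(capacities)
    by_cap_desc = sorted(range(n), key=lambda i: capacities[i], reverse=True)
    # 1) Find the minimal device count k whose combined capacity covers the request.
    total, k = Decimal(0), 0
    for i in by_cap_desc:
        total += capacities[i]
        k += 1
        if total >= request:
            break
    else:
        raise InsufficientResource(f"cannot allocate {request}")
    alloc = [Decimal(0)] * n
    per = request / k
    # 2) Prefer an exact even split over the k *smallest* devices that can each
    #    hold request/k (best fit, to reduce fragmentation).
    fitting = sorted((i for i in range(n) if capacities[i] >= per),
                     key=lambda i: capacities[i])
    if len(fitting) >= k:
        chosen = fitting[:k]
        base = per.quantize(QUANTUM, rounding=ROUND_DOWN)
        for i in chosen:
            alloc[i] = base
        remainder = request - base * k
        for i in chosen:  # spread the rounding remainder in quantum steps
            if remainder <= 0:
                break
            if alloc[i] + QUANTUM <= capacities[i]:
                alloc[i] += QUANTUM
                remainder -= QUANTUM
        return alloc
    # 3) Otherwise water-fill the k largest devices, smallest first, keeping the
    #    result as even as the capacities allow (fewer GPUs, bigger chunks).
    chosen = sorted(by_cap_desc[:k], key=lambda i: capacities[i])
    remaining = request
    for pos, i in enumerate(chosen):
        # exact division here; production code would also snap this to the grid
        share = min(capacities[i], remaining / (len(chosen) - pos))
        alloc[i] = share
        remaining -= share
    return alloc

With capacities [0.3, 0.2, 0.1] this yields [0, 0, 0.1] for a request of 0.1, [0.2, 0.2, 0] for 0.4, [0.3, 0.2, 0] for 0.5, and [0.17, 0.17, 0.16] for 0.5 on [0.2, 0.2, 0.2], matching the table above.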

We also need to limit the allocation of fractions that are too small, i.e., those resulting in less than ~500 MiB of GPU memory (this threshold should be configurable), since some deep learning frameworks (e.g., PyTorch and TensorFlow) would not be able to execute anything with such allocations due to their default GPU memory footprint.
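
As a rough illustration of that guard (all names here are hypothetical, and the threshold is assumed to come from configuration):

from decimal import Decimal

MIN_ALLOC_BYTES = 512 * 2**20  # assumed ~500 MiB floor; should be configurable

def check_min_fraction(fraction: Decimal, device_mem_bytes: int) -> None:
    # Reject non-zero fractions whose implied GPU memory is below the floor.
    if fraction > 0 and int(fraction * device_mem_bytes) < MIN_ALLOC_BYTES:
        raise ValueError(
            f"fraction {fraction} of a {device_mem_bytes}-byte device is below "
            f"the {MIN_ALLOC_BYTES}-byte minimum"
        )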

Internal ticket: OP#706

@achimnol achimnol added the type:feature Add new features label Dec 27, 2019
@achimnol achimnol added this to the 19.12 milestone Dec 27, 2019
@achimnol achimnol added the comp:agent Related to Agent component label Jan 20, 2020
@achimnol achimnol modified the milestones: 19.12, Bank Jan 31, 2020
@achimnol achimnol modified the milestones: Bank, 20.09 Aug 11, 2020
achimnol (Member, Author) commented

We have to apply the new allocator by adding allocator options on the manager side.

achimnol (Member, Author) commented Sep 29, 2020

We also need to expand this allocator to the manager side for distributed multi-container sessions (lablup/backend.ai-manager#217)

achimnol (Member, Author) commented
This is now the default in Backend.AI v20.09, with the addition of a quantum_size configuration to limit fragmentation.
We will revisit the allocator to reduce fragmentation further.
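
For reference, snapping a requested amount to a quantum grid could look like the following sketch; quantum_size is the knob named above, the function name is illustrative, and whether to round up or down is a policy choice (this version rounds up so leftovers stay on the grid):

from decimal import Decimal, ROUND_UP

def quantize_request(amount: Decimal, quantum_size: Decimal) -> Decimal:
    # Round up to the nearest multiple of quantum_size.
    units = (amount / quantum_size).quantize(Decimal(1), rounding=ROUND_UP)
    return units * quantum_size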
