
Customize cpu/Memory resourceRequirements/Limit on mover pods #707

Closed
kaovilai opened this issue Apr 4, 2023 · 5 comments · Fixed by #1072
Labels
enhancement New feature or request
Milestone
Version 0.9

Comments

@kaovilai
Contributor

kaovilai commented Apr 4, 2023

Describe the feature you'd like to have.

Allow customizing the mover pod cpu/memory resourceRequirements/resourceLimits from their defaults.
What is the value to the end user? (why is it a priority?)

Ability to scale pod resource requirements to the pod's workload, i.e., very large or very small.
How will we know we have a good solution? (acceptance criteria)

cpu/memory resourceRequirements/resourceLimits can be set
Additional context
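As a rough sketch of what this could look like (the field name below is hypothetical, not a proposed final API), each mover spec could simply expose the standard ResourceRequirements struct from kube:

```go
package volsync

import corev1 "k8s.io/api/core/v1"

// Hypothetical sketch only: a mover spec (restic, rsync, etc.) could embed the
// standard corev1.ResourceRequirements so users can override the defaults on
// the mover pod. The actual field name and placement are up to the implementation.
type MoverSpecSketch struct {
	// MoverResources would set requests/limits on the mover pod's container.
	// +optional
	MoverResources *corev1.ResourceRequirements `json:"moverResources,omitempty"`
}
```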

@kaovilai kaovilai added the enhancement New feature or request label Apr 4, 2023
@josephassiga

Hello team, is there any way to advance this issue? It is highly rated by our customer.

@tesshuflower
Contributor

I'll try to see if we can prioritize this in the v0.9 timeframe

@JohnStrunk
Member

Since this seems to be getting some attention, I'll add some thoughts...

As we look at setting requests and limits, there are significant implications of doing so. In the case of requests, I think it's a net benefit since it gives the user a way to effectively reserve both CPU and memory for the mover pods so that they can be scheduled to a node where they will run efficiently and not risk being OOM-killed. The downside is that it becomes harder to schedule the mover pods, since they may already be restricted by storage topology; a resource request may make them unschedulable.

In the case of limits, while it makes sense for us to support them by bringing in the appropriate go struct from kube, I don't think I would recommend that end users set them. Limiting CPU will cause replication to take longer with basically no upside, assuming resource requests for important workloads are properly set. Setting a limit on memory will cause the replication to fail if the limit is hit. Unfortunately, we have no way to quantify the expected memory consumption of a given mover (looking at you, Restic), so even using this as a safety valve can be problematic.
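For concreteness, a requests-only configuration using that struct might look roughly like this (the values are purely illustrative, not recommendations):

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

func main() {
	// Requests reserve CPU/memory so the mover pod lands on a node with enough
	// headroom; Limits are intentionally left unset to avoid CPU throttling or
	// an OOM-kill partway through a transfer.
	req := corev1.ResourceRequirements{
		Requests: corev1.ResourceList{
			corev1.ResourceCPU:    resource.MustParse("100m"),
			corev1.ResourceMemory: resource.MustParse("256Mi"),
		},
	}
	fmt.Printf("%+v\n", req)
}
```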

@tesshuflower tesshuflower added this to the Version 0.9 milestone Jan 23, 2024
tesshuflower added a commit to tesshuflower/volsync that referenced this issue Jan 23, 2024
Resolves: backube#707

Signed-off-by: Tesshu Flower <tflower@redhat.com>
tesshuflower added a commit to tesshuflower/volsync that referenced this issue Jan 24, 2024
Resolves: backube#707

Signed-off-by: Tesshu Flower <tflower@redhat.com>
@igor-nikiforov

igor-nikiforov commented Mar 20, 2024

We can't use volsync without this due to memory LimitRanges being forced on the namespace.
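For illustration, the kind of namespace LimitRange we are hitting looks roughly like the following (values are just examples); any container without explicit resources gets these defaults applied, including the mover pods:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Example namespace LimitRange: containers that don't declare their own
	// memory limit/request get these defaults injected at admission time.
	lr := corev1.LimitRange{
		ObjectMeta: metav1.ObjectMeta{Name: "mem-limit-range"},
		Spec: corev1.LimitRangeSpec{
			Limits: []corev1.LimitRangeItem{{
				Type:           corev1.LimitTypeContainer,
				Default:        corev1.ResourceList{corev1.ResourceMemory: resource.MustParse("512Mi")},
				DefaultRequest: corev1.ResourceList{corev1.ResourceMemory: resource.MustParse("256Mi")},
			}},
		},
	}
	fmt.Printf("%+v\n", lr)
}
```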

@tesshuflower I see that #1072 was merged, but the changes are still not available in the latest helm chart release. Could you please advise on an ETA for when it will be available in the helm chart? Thanks.

@tesshuflower
Contributor

@igor-nikiforov good timing, as we are actually trying to get our v0.9.0 release (which will contain this feature) done. I'm hoping this will happen sometime today, possibly tomorrow.
