YUNIKORN-2196 Optimize aggregated resources tracking feature to not add overhead to scheduling #739
base: master
Conversation
Hi @wilfred-s @pbacsko, this is a draft PR to move resource tracking out of the scheduling cycle. I would like your opinions on whether the event publish cycle is the right place to do this. A few more questions:
@zhuqi-lucas I think we should hold off with this for a while. We need to benchmark and see how expensive resource tracking is. In this case, I'm not sure that benchmarking is actually needed. As I can see, aggregation is called from three places. When does this happen? Allocation removal or placeholder replacement. It's not a very frequent operation. The overhead of tracking is very low, I think almost unmeasurable. My performance unit test does not trigger it because it does not send pod completed events, but I'd be surprised if the call stack for this were visible. I'd skip this one completely.
@@ -68,6 +77,43 @@ func (sp *EventPublisher) Stop() {
	sp.stop.Store(true)
}
func (sp *EventPublisher) AggregateAppTrackedResourceFromEvents(messages []*si.EventRecord) {
This is very complicated for something which isn't even a problem in my opinion.
I'm a strong -1 for optimizing tracking this way.
I'm -1 on this.
We've talked about this recently. After checking and understanding the actual costs of the tracking, I would not touch the existing code at all. The new, event-based aggregation is VERY complicated.
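For context, here is a minimal sketch of the kind of event-driven aggregation being debated. All type and field names below are simplified stand-ins, not the actual si.EventRecord API: the idea is simply to fold removed-allocation events into per-application, per-instance-type resource totals.

```go
package main

import "fmt"

// EventRecord is an illustrative stand-in for the scheduler-interface
// event type; the real si.EventRecord carries different fields.
type EventRecord struct {
	AppID        string
	InstanceType string           // node instance type the allocation ran on
	Resource     map[string]int64 // e.g. {"vcore": 1000, "memory": 2048}
	Removed      bool             // true for allocation-removal events
}

// AggregateAppTrackedResource sums the resources of removed allocations
// per application and instance type, mimicking the aggregation this PR
// proposes to move out of the scheduling cycle.
func AggregateAppTrackedResource(events []*EventRecord) map[string]map[string]map[string]int64 {
	agg := make(map[string]map[string]map[string]int64)
	for _, e := range events {
		if !e.Removed {
			continue // only completed/removed allocations count
		}
		if agg[e.AppID] == nil {
			agg[e.AppID] = make(map[string]map[string]int64)
		}
		if agg[e.AppID][e.InstanceType] == nil {
			agg[e.AppID][e.InstanceType] = make(map[string]int64)
		}
		for name, v := range e.Resource {
			agg[e.AppID][e.InstanceType][name] += v
		}
	}
	return agg
}

func main() {
	events := []*EventRecord{
		{AppID: "app-1", InstanceType: "m5.large", Resource: map[string]int64{"vcore": 1000}, Removed: true},
		{AppID: "app-1", InstanceType: "m5.large", Resource: map[string]int64{"vcore": 2000}, Removed: true},
		{AppID: "app-1", InstanceType: "m5.large", Resource: map[string]int64{"vcore": 500}, Removed: false},
	}
	agg := AggregateAppTrackedResource(events)
	fmt.Println(agg["app-1"]["m5.large"]["vcore"]) // 3000
}
```

Even in this stripped-down form the nested-map bookkeeping hints at why reviewers call the event-based path complicated compared with updating a counter at the (infrequent) allocation-removal call sites.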
Hi @pbacsko, thank you for the review. I agree that application tracking currently does not have much impact on scheduling performance.
I think we can simply use the existing events and push the whole aggregation out of YuniKorn. The only thing that seems to be missing is the node instance type. If that is available, just listening to the app and node events would allow creating the summary outside of the scheduler. So instead of adding this inside the event system, we should push it out. Keep the scheduler for scheduling. Make all this an add-on. That is where my remarks came from when I looked at the aggregation for users and groups.
That is the best option. A scheduler should schedule. It is not for providing statistics.
No, because whatever we come up with will not fit all use cases. The only thing we can think of, which has been discussed before, is splitting the streams by type, i.e. only application events. Not even sure that it will help much.
Thank you @wilfred-s. I am thinking about the event flow: if we can extend the events for use by the trackingService, we can just do the following (cc @pbacsko):
Then we would not need storage for the trackingService, and we would not need to push events to the shim. But what if we want storage so the REST API can query aggregated results? That may also require a design; we could expose historical aggregation results in the UI and define a retention duration for the history data.
Anyway, I created a Jira to extend the EventRecord first, so we can depend on that:
What is this PR for?
Currently, all tracking calculations are done in the scheduling cycle, which adds overhead to scheduling. We need to rethink and optimize this.
We may use the events that are generated to allow calculating this outside of the scheduler.
What type of PR is it?
Todos
What is the Jira issue?
[YUNIKORN-2196] Optimize aggregated resources tracking feature to not add overhead to scheduling
How should this be tested?
Screenshots (if appropriate)
Questions: