Worker API should provide safe defaults that increase performance in most cases #10137

Open
runningcode opened this issue Jul 31, 2019 · 5 comments
Labels
a:feature (A new functionality), in:workers

Comments

runningcode (Member) commented Jul 31, 2019

When enabling the Worker API in third-party plugins, such as kapt with kapt.use.worker.api=true, we have noticed slower builds because the workers consume extra memory. Our builds see an increase in time spent on garbage collection due to the extra workers and end up being slower.

It would be nice if Gradle shipped safe defaults for the number of workers that improve performance in most scenarios. Many teams do not have dedicated build engineers or the resources to tune this themselves, so a default that improves performance for most projects would go a long way. Teams with additional resources would then be free to optimize these values further.

Let me know what additional information I can/should provide here.

My understanding is that the current default is # of CPUs. Perhaps a safer default would be # of CPUs / 2? Just tossing ideas out there.
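For reference, the worker cap can already be lowered by hand today: Gradle reads org.gradle.workers.max from gradle.properties, and the same value can be passed on the command line with --max-workers. A minimal sketch, with purely illustrative values rather than recommendations:

```properties
# gradle.properties -- illustrative values only; tune per project and CI machine.
# Cap concurrent workers (the default is the number of CPU cores).
org.gradle.workers.max=4
# Give the daemon enough heap so extra workers don't push GC time up.
org.gradle.jvmargs=-Xmx4g
# The kapt flag mentioned above that routes annotation processing through workers.
kapt.use.worker.api=true
```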

runningcode changed the title from "Worker API should provide safe defaults." to "Worker API should provide safe defaults that increase performance in most cases" on Jul 31, 2019
gavra0 (Contributor) commented Jul 31, 2019

This could be handled automatically by Gradle, or plugin authors could provide CPU/memory hints for the worker actions. Using those resource-usage hints, scheduling could be made more efficient.
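For context, a limited form of memory hinting already exists: with process isolation, a task author can bound the heap of each worker process via fork options. Below is a minimal build.gradle.kts sketch; the task and action names (HashFilesTask, HashAction) are hypothetical, and 256m is just an illustrative value:

```kotlin
// build.gradle.kts -- hypothetical sketch; HashFilesTask/HashAction are made-up names.
import javax.inject.Inject
import org.gradle.workers.WorkAction
import org.gradle.workers.WorkParameters
import org.gradle.workers.WorkerExecutor

// Parameters passed to each unit of work.
interface HashParameters : WorkParameters {
    val input: RegularFileProperty
}

// The work action executed in an isolated worker process.
abstract class HashAction : WorkAction<HashParameters> {
    override fun execute() {
        // Placeholder for real CPU/memory-heavy work.
        println("processing ${parameters.input.get().asFile.name}")
    }
}

abstract class HashFilesTask @Inject constructor(
    private val workerExecutor: WorkerExecutor
) : DefaultTask() {
    @get:InputFiles
    abstract val sources: ConfigurableFileCollection

    @TaskAction
    fun hash() {
        // processIsolation + forkOptions is the closest thing to a per-action
        // memory hint today: each worker process gets a bounded heap.
        val queue = workerExecutor.processIsolation {
            forkOptions { maxHeapSize = "256m" }
        }
        sources.forEach { file ->
            queue.submit(HashAction::class.java) {
                input.set(file)
            }
        }
    }
}

// Illustrative registration of the hypothetical task.
tasks.register<HashFilesTask>("hashFiles") {
    sources.from(fileTree("src"))
}
```

As far as I can tell, there is no analogous CPU-usage hint today, which is part of the gap this issue describes.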

stale bot commented Oct 6, 2020

This issue has been automatically marked as stale because it has not had recent activity. Given the limited bandwidth of the team, it will be automatically closed if no further activity occurs. If you're interested in how we try to keep the backlog in a healthy state, please read our blog post on how we refine our backlog. If you feel this is something you could contribute, please have a look at our Contributor Guide. Thank you for your contribution.

stale bot added the stale label on Oct 6, 2020
gildor (Contributor) commented Oct 7, 2020

This is still an important issue; right now it requires multiple manual measurements and is still not dynamic enough.

stale bot commented Apr 11, 2022

This issue has been automatically marked as stale because it has not had recent activity. Given the limited bandwidth of the team, it will be automatically closed if no further activity occurs. If you're interested in how we try to keep the backlog in a healthy state, please read our blog post on how we refine our backlog. If you feel this is something you could contribute, please have a look at our Contributor Guide. Thank you for your contribution.

stale bot added the stale label on Apr 11, 2022
gildor (Contributor) commented Apr 12, 2022

This still looks like an important feature. It is currently hard to measure these things and to detect issues with memory and parallelism.

stale bot removed the stale label on Apr 12, 2022