Accumulate resources in parallel #3172
Conversation
Welcome @flo-02-mu!
Hi @flo-02-mu. Thanks for your PR. I'm waiting for a kubernetes-sigs member to verify that this patch is reasonable to test. If it is, they should reply with the appropriate command; once the patch is verified, the new status will be reflected by the label. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Thanks for your pull request. Before we can look at your pull request, you'll need to sign a Contributor License Agreement (CLA). 📝 Please follow instructions at https://git.k8s.io/community/CLA.md#the-contributor-license-agreement to sign the CLA. It may take a couple minutes for the CLA signature to be fully registered; after that, please reply here with a new comment and we'll verify. Thanks.
[APPROVALNOTIFIER] This PR is NOT APPROVED. This pull request has been approved by: flo-02-mu. The full list of commands accepted by this bot can be found here.
Needs approval from an approver in each of these files.
Approvers can indicate their approval by writing the approval command in a comment.
Force-pushed from aa855c7 to d6e473a.
Force-pushed from 868cad9 to 105debc.
@monopole What do you think about this?
@flo-02-mu Regardless of whether this proposal will be accepted, please add some tests to cover these new behaviors.
In https://github.com/kubernetes-sigs/kustomize/pull/3172/files#diff-840240dda964a03a3ad4437883ccbfbc0b84099f695a083bc7642b7c14df46c6R263 I reused the existing tests but with 16 parallel accumulators. Do you have any specific tests in mind?
Sorry. My fault. But I still want to have more test coverage. Also waiting for input from @monopole |
@Shell32-Natsu Some existing tests are now duplicated with parallel execution and a dedicated one is added, too. |
Before adding this complexity, we all need to see some benchmark tests |
Replace testing.T with the testing.TB interface in the test harness.
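This refactor works in Go because both *testing.T and *testing.B implement the standard testing.TB interface, so one fixture builder can serve tests and benchmarks alike. A minimal sketch of the idea (setupTarget is an illustrative name, not kustomize's actual harness):

```go
package main

import (
	"fmt"
	"testing"
)

// setupTarget is a hypothetical shared fixture builder. Because it
// accepts testing.TB (implemented by both *testing.T and *testing.B),
// the same helper can be called from unit tests and from benchmarks.
func setupTarget(tb testing.TB, name string) string {
	if tb != nil {
		tb.Helper() // attribute failures to the caller, not this helper
	}
	return "target:" + name
}

func main() {
	// Outside a real test we pass nil; tests pass t, benchmarks pass b.
	fmt.Println(setupTarget(nil, "base")) // prints "target:base"
}
```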
I gave it a try and added a benchmark test: https://github.com/kubernetes-sigs/kustomize/pull/3172/files#diff-840240dda964a03a3ad4437883ccbfbc0b84099f695a083bc7642b7c14df46c6R407
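For readers unfamiliar with Go benchmarks, the general shape can be sketched as follows; the loop body is a stand-in workload, not the PR's actual accumulation code, and testing.Benchmark merely lets the sketch run outside `go test`:

```go
package main

import (
	"fmt"
	"testing"
)

func main() {
	// testing.Benchmark runs the function with an increasing b.N until
	// the timing stabilizes; inside `go test` this would instead be a
	// normal BenchmarkXxx(b *testing.B) function.
	result := testing.Benchmark(func(b *testing.B) {
		for i := 0; i < b.N; i++ {
			_ = fmt.Sprintf("resource-%d", i) // stand-in workload
		}
	})
	fmt.Println(result.N > 0) // prints "true"
}
```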
Please rebase. It would be good if you could create a separate PR with a benchmark script so that we have a baseline to measure the performance benefits. It can be a simple script.
# Conflicts:
#   api/internal/target/kusttarget_test.go
#   api/internal/target/maker_test.go
@Shell32-Natsu I added a script, but there is not much added value since it just executes go test. An additional test run now does the baseline benchmark without the parallel option set. |
Signed-off-by: Florian Mueller <f.l.o.mueller@web.de>
# Conflicts:
#   api/krusty/options.go
Issues go stale after 90d of inactivity. If this issue is safe to close now, please do so. Send feedback to sig-contributor-experience at kubernetes/community.
@flo-02-mu: PR needs rebase.
Stale issues rot after 30d of inactivity. If this issue is safe to close now, please do so. Send feedback to sig-contributor-experience at kubernetes/community.
Rotten issues close after 30d of inactivity. Send feedback to sig-contributor-experience at kubernetes/community. |
@fejta-bot: Closed this PR.
As suggested in #2950, it would be nice to load base resources in parallel. This PR introduces an additional parameter:
kustomize build --max_parallel_accumulate n
which accumulates the resources in n parallel goroutines. The default value is 1, so the existing behaviour is unchanged.
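The bounded-concurrency idea behind such a flag can be sketched with a buffered-channel semaphore. This illustrates the general pattern under assumed names (accumulateAll, load); it is not the PR's actual implementation:

```go
package main

import (
	"fmt"
	"sync"
)

// accumulateAll loads every path in its own goroutine, but the buffered
// channel caps the number of loads in flight at maxParallel. With
// maxParallel == 1 the behaviour is effectively sequential, matching
// the flag's default.
func accumulateAll(paths []string, maxParallel int, load func(string) string) []string {
	sem := make(chan struct{}, maxParallel)
	out := make([]string, len(paths)) // indexed writes keep results ordered
	var wg sync.WaitGroup
	for i, p := range paths {
		wg.Add(1)
		go func(i int, p string) {
			defer wg.Done()
			sem <- struct{}{}        // acquire a concurrency slot
			defer func() { <-sem }() // release it when done
			out[i] = load(p)
		}(i, p)
	}
	wg.Wait()
	return out
}

func main() {
	res := accumulateAll([]string{"a", "b", "c"}, 2, func(p string) string {
		return "loaded:" + p
	})
	fmt.Println(res) // prints "[loaded:a loaded:b loaded:c]"
}
```

Writing each result to its own slice index avoids both a mutex and nondeterministic ordering, which matters here because the order of accumulated resources is significant in kustomize output.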