[BEAM-8944] Change to use single thread in py sdk bundle progress report #10387

Merged: 2 commits into apache:master from the BEAM-8944 branch on Dec 20, 2019

Conversation

@y1chi (Contributor) commented Dec 16, 2019

Using UnboundedThreadWorkerExecutor for bundle progress reports introduces performance problems by creating too many threads. Experiments show that a typical streaming wordcount pipeline that can normally handle 360 messages/(second, worker) is only able to process around 200 messages/(second, worker) with the same resources. This PR mitigates the performance and cost regression for existing pipelines by switching bundle progress reporting from UnboundedThreadWorkerExecutor to a single-thread executor.
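
For illustration, a minimal Python sketch of the before/after idea (class, function, and field names here are simplified stand-ins, not the actual Beam worker code):

```python
import concurrent.futures

# Simplified stand-in for the change: progress-report requests from the runner
# get their own dedicated executor instead of sharing an unbounded thread pool.

def handle_progress_request(request):
    # Placeholder: collect metrics for the bundle named in the request and
    # return them so they can be sent back over the control channel.
    return {'instruction_id': request['instruction_id'], 'metrics': {}}

# Before (roughly): an unbounded pool, so a burst of progress requests can fan
# out into many threads that compete with bundle processing for CPU.
# progress_executor = UnboundedThreadPoolExecutor()

# After: a single dedicated thread is enough for progress reports and keeps
# them off the bundle-processing critical path.
progress_executor = concurrent.futures.ThreadPoolExecutor(max_workers=1)

def on_progress_request(request):
    # Each incoming progress request is queued onto the single progress thread.
    return progress_executor.submit(handle_progress_request, request)
```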


Thank you for your contribution! Follow this checklist to help us incorporate your contribution quickly and easily:

  • Choose reviewer(s) and mention them in a comment (R: @username).
  • Format the pull request title like [BEAM-XXX] Fixes bug in ApproximateQuantiles, where you replace BEAM-XXX with the appropriate JIRA issue, if applicable. This will automatically link the pull request to the issue.
  • If this contribution is large, please file an Apache Individual Contributor License Agreement.

See the Contributor Guide for more tips on how to make the review process smoother.

Post-Commit Tests Status (on master branch): build-status badge matrix for the Go, Java, Python, and XLang SDKs across the Apex, Dataflow, Flink, Gearpump, Samza, and Spark runners.

Pre-Commit Tests Status (on master branch): build-status badges for Java, Python, Go, and Website (non-portable and portable).

See .test-infra/jenkins/README for the trigger phrases, status, and links of all Jenkins jobs.

@y1chi force-pushed the BEAM-8944 branch 2 times, most recently from f2e8b98 to 684e380, on December 16, 2019 at 19:27
@y1chi changed the title from "[BEAM-8944] Change to use unbounded worker threads in python sdk only…" to "[BEAM-8944] Change to use single thread in py sdk bundle progress report" on Dec 16, 2019
@y1chi (Contributor, Author) commented Dec 16, 2019

R: @angoenka @lukecwik

@lukecwik (Member) commented

Would it be better to have the runner request progress less frequently?

@y1chi (Contributor, Author) commented Dec 17, 2019

> Would it be better to have the runner request progress less frequently?

I think that helps too. I believe right now JRH requests every 0.1 sec. Not exactly sure how the frequency is picked.

@y1chi (Contributor, Author) commented Dec 17, 2019

Run Python PreCommit

@lukecwik (Member) commented

> I think that helps too. I believe right now JRH requests every 0.1 sec. Not exactly sure how the frequency is picked.

0.1 secs is a lot and doesn't seem right.

@y1chi (Contributor, Author) commented Dec 18, 2019

> 0.1 secs is a lot and doesn't seem right.

I think this is where it is set; we can try to tune that down:
https://github.com/apache/beam/blob/master/runners/google-cloud-dataflow-java/worker/src/main/java/org/apache/beam/runners/dataflow/worker/fn/control/BeamFnMapTaskExecutor.java#L296

A single thread should be good enough for progress reporting in the Python SDK harness and shouldn't cause stuckness issues, and it could also help limit its impact on the bundle-processing critical path.
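
To make the knob concrete, here is a rough Python sketch of the shape of such a polling loop (the real code is the Java BeamFnMapTaskExecutor linked above; the names and the callable here are illustrative stand-ins):

```python
import threading
import time

def poll_progress(request_progress, interval_secs=0.1, stop_event=None):
    """Periodically asks the SDK harness for bundle progress.

    request_progress: callable standing in for one progress request over the
        control channel.
    interval_secs: polling period; the question here is whether 0.1s is too
        aggressive and how large it can safely be made.
    """
    stop_event = stop_event or threading.Event()

    def loop():
        while not stop_event.is_set():
            request_progress()
            time.sleep(interval_secs)

    threading.Thread(target=loop, daemon=True).start()
    return stop_event
```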

@lukecwik (Member) commented

> I think this is where it is set; we can try to tune that down:
> https://github.com/apache/beam/blob/master/runners/google-cloud-dataflow-java/worker/src/main/java/org/apache/beam/runners/dataflow/worker/fn/control/BeamFnMapTaskExecutor.java#L296
>
> A single thread should be good enough for progress reporting in the Python SDK harness and shouldn't cause stuckness issues, and it could also help limit its impact on the bundle-processing critical path.

Yeah, that's the wrong constant to use, since it is meant for a tight read loop using a lock, which is inappropriate for API calls. I believe the Dataflow service has used 30 seconds for progress updates between the server and the worker, so we could do anywhere between 5 and 30 seconds and still be fine.

@y1chi (Contributor, Author) commented Dec 18, 2019

> Yeah, that's the wrong constant to use, since it is meant for a tight read loop using a lock, which is inappropriate for API calls. I believe the Dataflow service has used 30 seconds for progress updates between the server and the worker, so we could do anywhere between 5 and 30 seconds and still be fine.

It seems that increasing the period slows down the bundle processing scheduling and the pipeline can become very slow. I'm guessing it affects when a bundle process instruction can be considered finished.

@lukecwik (Member) commented

> It seems that increasing the period slows down the bundle processing scheduling and the pipeline can become very slow. I'm guessing it affects when a bundle process instruction can be considered finished.

Progress updates are independent of bundle processing and its completion, since those are separate messages over the control channel, so changing the progress update interval will only affect how "fresh" the progress data is. See this call flow for processing a bundle and this call flow for progress updates.
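
Roughly, the separation looks like this (message and field names below are illustrative stand-ins, not the real Fn API protos):

```python
import queue

# Illustrative control channel carrying independent instructions.
control_channel = queue.Queue()

def send(instruction):
    control_channel.put(instruction)

# One instruction starts the bundle...
send({'id': 'instr-1', 'type': 'process_bundle', 'descriptor': 'bd-1'})

# ...while progress requests are separate instructions that only sample the
# current state of that bundle; they do not gate its completion.
send({'id': 'instr-2', 'type': 'process_bundle_progress', 'bundle': 'instr-1'})
send({'id': 'instr-3', 'type': 'process_bundle_progress', 'bundle': 'instr-1'})

# The bundle is "finished" when the SDK responds to 'instr-1', regardless of
# how often (or how rarely) progress was requested in the meantime.
```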

@y1chi (Contributor, Author) commented Dec 19, 2019

Then I guess the delayed update on … might have a throttling effect?

I've tried 2, 3, and 5 seconds, and the performance looks pretty poor for all of these periods.

@lukecwik (Member) commented

> Then I guess the delayed update on … might have a throttling effect? I've tried 2, 3, and 5 seconds, and the performance looks pretty poor for all of these periods.

I see that there is throttling going on there, but it is unnecessary now. The SDK is responsible for performing push back, and the server needs to be configured to not use too much buffer space. At this point we may just want to leave this as is and wait for the UW to be released, since it doesn't rely on this 0.1-second polling interval, or go with your original plan of using a single thread with a TODO to remove it.

WDYT?

@y1chi (Contributor, Author) commented Dec 19, 2019

> I see that there is throttling going on there, but it is unnecessary now. The SDK is responsible for performing push back, and the server needs to be configured to not use too much buffer space. At this point we may just want to leave this as is and wait for the UW to be released, since it doesn't rely on this 0.1-second polling interval, or go with your original plan of using a single thread with a TODO to remove it.
>
> WDYT?

I fully agree that we should try to remove the throttling and lower the frequency in the JRH; I opened https://issues.apache.org/jira/browse/BEAM-8998. Meanwhile, I think it's still better to use a single thread as an intermediate solution, since users may take some time to adopt the UW. Also, removing the throttling and lowering the frequency in the JRH seems like a bigger change, so it needs some time to test.
I've also added the TODO.

@angoenka (Contributor) commented

Thanks!
LGTM too.

@angoenka angoenka merged commit 794e58d into apache:master Dec 20, 2019
y1chi added a commit to y1chi/beam that referenced this pull request Dec 20, 2019
…ort (apache#10387)

* [BEAM-8944] Change to single thread executor in python sdk bundle progress report

(cherry picked from commit 794e58d)
y1chi added a commit to y1chi/beam that referenced this pull request Dec 20, 2019
…ort (apache#10387)

* [BEAM-8944] Change to single thread executor in python sdk bundle progress report

(cherry picked from commit 794e58d)
y1chi added a commit to y1chi/beam that referenced this pull request Dec 20, 2019
…ort (apache#10387)

* [BEAM-8944] Change to single thread executor in python sdk bundle progress report

(cherry picked from commit 794e58d)
@y1chi deleted the BEAM-8944 branch on December 20, 2019 at 18:31
dpcollins-google pushed a commit to dpcollins-google/beam that referenced this pull request Dec 20, 2019
…ort (apache#10387)

* [BEAM-8944] Change to single thread executor in python sdk bundle progress report
mxm added a commit to lyft/beam that referenced this pull request Mar 31, 2020
vmarquez pushed a commit to vmarquez/beam that referenced this pull request Apr 1, 2020
…ort (apache#10387)

* [BEAM-8944] Change to single thread executor in python sdk bundle progress report
mxm added a commit to lyft/beam that referenced this pull request Apr 3, 2020
Revert "[BEAM-8944] Change to use single thread in py sdk bundle progress report (apache#10387)"

This reverts commit 1edcc10

Revert "[BEAM-8882] Fully populate log messages. (apache#10292)"

This reverts commit 6126a59

Revert "[BEAM-8733]  Handle the registration request synchronously in the Python SDK harness."

This reverts commit 26596c8

Revert "[BEAM-8151] Further cleanup of SDK Workers. (apache#10134)"

This reverts commit ae5b653

Revert "Setting all logging on the root logger as before."

This reverts commit b870d97

Revert "[BEAM-8661] Moving runners to have per-module logger (apache#10097)"

This reverts commit 49d6efd.

Revert "[BEAM-8151] Swap to create SdkWorkers on demand when processing jobs"

This reverts commit 33cc30e

Revert "[BEAM-8151, BEAM-7848] Swap to using a thread pool which is unbounded and shrinks when threads are idle."

This reverts commit 1b62310