[SPARK-33841][CORE][3.0] Fix issue with jobs disappearing intermittently from the SHS under high load #30842
Conversation
cc @HeartSaVioR
ok to test
cc. @tgravescs
@vladhlinsky Thanks for the patch! We tend to describe what the PR is doing in the PR title, unlike the JIRA issue title. Could you please update the title of the PR? Thanks!
+1
Let's wait a couple of days to give others a chance to review. Probably until early next week.
PR title has been updated.
Overall, the changes make sense.
@@ -460,9 +460,17 @@ private[history] class FsHistoryProvider(conf: SparkConf, clock: Clock)
     val newLastScanTime = clock.getTimeMillis()
     logDebug(s"Scanning $logDir with lastScanTime==$lastScanTime")

     val notStale = mutable.HashSet[String]()
It would be nice to add a comment here describing `notStale` and why we need it.
@tgravescs thank you for the review!
The PR has been updated with a comment.
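For readers skimming the diff above, here is a minimal, self-contained sketch of the idea behind `notStale`. Everything apart from the `notStale` set itself (`LogEntry`, `scan`, the clock parameter) is an illustrative stand-in, not the actual `FsHistoryProvider` code:

```scala
import scala.collection.mutable

object NotStaleSketch {
  // Simplified stand-in for an event log entry tracked by the provider.
  final case class LogEntry(path: String, processing: Boolean, var lastProcessed: Long)

  // Returns the paths a scan would consider safe to delete as stale.
  def scan(entries: Seq[LogEntry], clock: () => Long): Seq[String] = {
    val newLastScanTime = clock()
    // Entries that are mid-processing at the start of this scan are
    // recorded here so the stale sweep below will skip them.
    val notStale = mutable.HashSet[String]()

    val toUpdate = entries.filter { e =>
      if (e.processing) { notStale += e.path; false } else true
    }
    // Entries that were actually scanned get a fresh lastProcessed timestamp.
    toUpdate.foreach(_.lastProcessed = newLastScanTime)

    // Stale sweep: an old lastProcessed normally means "gone", but entries
    // marked notStale are spared even though their timestamp is old.
    entries.collect {
      case e if e.lastProcessed < newLastScanTime && !notStale.contains(e.path) => e.path
    }
  }
}
```

The point is that skipped entries are remembered rather than silently dropped, so a single scan can never classify them as stale.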
Kubernetes integration test starting
Kubernetes integration test status failure
Test build #133019 has finished for PR 30842 at commit
Kubernetes integration test starting
Kubernetes integration test status success
Test build #133027 has finished for PR 30842 at commit
Hi, @vladhlinsky.
Please make a PR to the `master` branch. To prevent any accidental regression, we merge the patch from `master` -> `branch-3.1` -> `branch-3.0` if you want to deliver this to 3.0.2. This PR should not land on `branch-3.0` until we have this at `master` for Apache Spark 3.2 and `branch-3.1` for Apache Spark 3.1.
BTW, cc @HyukjinKwon since he is the release manager for Apache Spark 3.1.0 and this seems to target `branch-3.1`, too.
Hi @dongjoon-hyun,
Thank you, @vladhlinsky!
+1, LGTM. Thanks, @vladhlinsky and all.
Merged to branch-3.0 for Apache Spark 3.0.2.
Closes #30842 from vladhlinsky/SPARK-33841-branch-3.0.
Authored-by: Vlad Glinsky &lt;vladhlinsky@gmail.com&gt;
Signed-off-by: Dongjoon Hyun &lt;dongjoon@apache.org&gt;
Thanks @dongjoon-hyun for cc'ing me, and @vladhlinsky for the fix :-)
What changes were proposed in this pull request?
Mark SHS event log entries that were `processing` at the beginning of the `checkForLogs` run as not stale, and check for this mark before deleting an event log. This fixes the issue where a particular job was displayed in the SHS, disappeared after some time, and then showed up again several minutes later.
Why are the changes needed?
The issue is caused by [SPARK-29043](https://issues.apache.org/jira/browse/SPARK-29043), which was designed to improve the concurrent performance of the History Server. The [change](https://github.com/apache/spark/pull/25797/files#) breaks the ["app deletion" logic](https://github.com/apache/spark/pull/25797/files#diff-128a6af0d78f4a6180774faedb335d6168dfc4defff58f5aa3021fc1bd767bc0R563) because proper synchronization for `processing` event log entries is missing. Since the SHS now [filters out](https://github.com/apache/spark/pull/25797/files#diff-128a6af0d78f4a6180774faedb335d6168dfc4defff58f5aa3021fc1bd767bc0R462) all `processing` event log entries, such entries never get [updated with the new `lastProcessed`](https://github.com/apache/spark/pull/25797/files#diff-128a6af0d78f4a6180774faedb335d6168dfc4defff58f5aa3021fc1bd767bc0R472) time during that scan. As a result, any entry that completes processing right after the [filtering](https://github.com/apache/spark/pull/25797/files#diff-128a6af0d78f4a6180774faedb335d6168dfc4defff58f5aa3021fc1bd767bc0R462) and before [the check for stale entries](https://github.com/apache/spark/pull/25797/files#diff-128a6af0d78f4a6180774faedb335d6168dfc4defff58f5aa3021fc1bd767bc0R560) is identified as stale and is deleted from the UI until the next `checkForLogs` run. This is because the [updated `lastProcessed` time is used as the criterion](https://github.com/apache/spark/pull/25797/files#diff-128a6af0d78f4a6180774faedb335d6168dfc4defff58f5aa3021fc1bd767bc0R557), and event log entries that were never updated with a new time match that criterion. A minimal sketch of this window follows the screenshots below.
The issue can be reproduced by generating a large number of event logs and uploading them to the SHS event log directory on S3. Around 800 (82.6 MB) copies of an event log file were created using the [shs-monitor](https://github.com/vladhlinsky/shs-monitor) script. Strange behavior was observed in how the SHS counts the total number of applications: at first, the number increased as expected, but on the next page refresh, the total number of applications decreased. No errors were logged by the SHS.
241 entities are displayed at `20:50:42`:
![1-241-entities-at-20-50](https://user-images.githubusercontent.com/61428392/102611539-c2138d00-4137-11eb-9bbd-d77b22041f3b.png)
203 entities are displayed at `20:52:17`:
![2-203-entities-at-20-52](https://user-images.githubusercontent.com/61428392/102611561-cdff4f00-4137-11eb-91ed-7405fe58a695.png)
The number of loaded applications over time:
![4-loaded-applications](https://user-images.githubusercontent.com/61428392/102611586-d8b9e400-4137-11eb-8747-4007fc5469de.png)
Does this PR introduce _any_ user-facing change?
Yes. SHS users will no longer see the number of displayed applications decrease periodically.
How was this patch tested?
Tested using the [shs-monitor](https://github.com/vladhlinsky/shs-monitor) scripts (a rough Scala equivalent of the monitor loop is sketched after this list):
* Build SHS with the proposed change
* Download Hadoop AWS and the AWS Java SDK
* Prepare an S3 bucket and a user for programmatic access, grant the required roles to the user, and get the access key and secret key
* Configure SHS to read event logs from S3
* Start the [monitor](https://github.com/vladhlinsky/shs-monitor/blob/main/monitor.sh) script to query the SHS API
* Run 8 [producers](https://github.com/vladhlinsky/shs-monitor/blob/main/producer.sh) for ~10 mins, creating 805 (83.1 MB) event log copies
* Wait for SHS to load all the applications
* Verify that the number of loaded applications increases continuously over time

![5-loaded-applications-fixed](https://user-images.githubusercontent.com/61428392/102617363-bf1d9a00-4141-11eb-9bae-f982d02fd30f.png)
For more details, please refer to the [shs-monitor](https://github.com/vladhlinsky/shs-monitor) repository.
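The actual monitor is a shell script in the repository linked above; as a rough idea of what it does, here is a hedged Scala equivalent that polls the SHS REST API. The URL, the polling interval, and the crude JSON counting are assumptions for illustration, not the script's real behavior:

```scala
import scala.io.Source

object ShsMonitorSketch {
  def main(args: Array[String]): Unit = {
    // Assumed SHS location; /api/v1/applications is the standard REST endpoint.
    val url = "http://localhost:18080/api/v1/applications"
    while (true) {
      val body = Source.fromURL(url).mkString
      // Crude count: every application object in the returned JSON array
      // carries an "id" field; a real script should use a JSON parser.
      val count = "\"id\"".r.findAllIn(body).size
      println(s"${java.time.Instant.now()} loaded applications: $count")
      Thread.sleep(30000) // poll every 30 seconds
    }
  }
}
```

With the fix in place, the printed count should be non-decreasing while the producers are running; with the bug, it periodically dips, as in the screenshots above.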