JDK-8255661: TestHeapDumpOnOutOfMemoryError fails with EOFException #3628
Conversation
👋 Welcome back rschmelter! A progress list of the required criteria for merging this PR has been added to the pull request body.
@schmelter-sap The following label will be automatically applied to this pull request:
When this pull request is ready to be reviewed, an "RFR" email will be sent to the corresponding mailing list. If you would like to change these labels, use the /label pull request command.
Webrevs
/label serviceability
@schmelter-sap
Dear Ralf (@schmelter-sap), BRs,
Hi Ralf,
your change looks good to me.
Thanks for fixing,
Richard.
- void CompressionBackend::thread_loop(bool single_run) {
-   // Register if this is a worker thread.
-   if (!single_run) {
+ void CompressionBackend::thread_loop() {
You could simplify CompressionBackend::thread_loop() further:
void CompressionBackend::thread_loop() {
  {
    MonitorLocker ml(_lock, Mutex::_no_safepoint_check_flag);
    _nr_of_threads++;
  }

  WriteWork* work = get_work();
  while (work != NULL) {
    do_compress(work);
    finish_work(work);
    work = get_work();
  }

  MonitorLocker ml(_lock, Mutex::_no_safepoint_check_flag);
  _nr_of_threads--;
  assert(_nr_of_threads >= 0, "Too many threads finished");
  ml.notify_all();
}
BTW: why is ml.notify_all() in line 275 needed at all?
Hi,
thanks for the review, Lin and Richard.
The notify_all() is indeed not needed anymore. It was originally needed when the worker threads were newly created threads and we had to wait for them to finish at the end of the dump operation. But since we now use the GC work gang, this can be removed.
I will update the PR with your suggestions.
Best regards,
Ralf
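For reference, a sketch of what the simplified loop could look like once the notify_all() is dropped. It is simply the suggestion above minus the notification; the final commit may differ in details.

void CompressionBackend::thread_loop() {
  {
    MonitorLocker ml(_lock, Mutex::_no_safepoint_check_flag);
    _nr_of_threads++;
  }

  WriteWork* work = get_work();
  while (work != NULL) {
    do_compress(work);
    finish_work(work);
    work = get_work();
  }

  MonitorLocker ml(_lock, Mutex::_no_safepoint_check_flag);
  _nr_of_threads--;
  assert(_nr_of_threads >= 0, "Too many threads finished");
  // No notify_all() here: with the GC work gang doing the compression,
  // nothing waits for the worker threads at the end of the dump anymore.
}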
@schmelter-sap This change now passes all automated pre-integration checks. ℹ️ This project also has non-automated pre-integration requirements. Please see the file CONTRIBUTING.md for details. After integration, the commit message for the final commit will be:
You can use pull request commands such as /summary, /contributor and /issue to adjust it as needed. At the time when this comment was updated there had been 10 new commits pushed to the
Please see this link for an up-to-date comparison between the source branch of this pull request and the ➡️ To integrate this PR with the above commit message, use the /integrate command.
LGTM 👍
Changes look good.
/integrate
@schmelter-sap Since your change was applied there have been 22 commits pushed to the
Your commit was automatically rebased without conflicts. Pushed as commit a29612e. 💡 You may see a message that your pull request was closed with unmerged commits. This can be safely ignored.
This fixes a race condition in the CompressionBackend class of the heap dump code.
The race happens when the thread iterating the heap wants to write the data it has collected. If the compression backend has worker threads, the buffer to write would just be added to a queue and the worker threads would then compress (if needed) and write the buffer. But if no worker threads are present, the thread doing the iteration must do this itself.
The iterating thread checks the _nr_of_threads member under lock protection and, if it is 0, assumes it has to do the work itself. It then releases the lock and enters the worker-thread loop for one round. But after the lock has been released, a worker thread could register and handle the buffer itself. The iterating thread would then wait for another buffer to become available, which will never happen.
The fix is to have the iterating thread take the buffer to write out of the queue while still holding the lock, and only then release it.
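To make the ordering concrete, here is a minimal sketch of the fixed write path. It is illustrative only, not the actual heapDumper.cpp change: dequeue_work_locked() is a hypothetical helper standing in for however the real code removes the next item from the queue while _lock is held; the other names follow the snippets quoted in this thread.

// Sketch only: illustrates the locking order of the fix, not the real code.
void CompressionBackend::write_buffer_sketch() {
  WriteWork* work = NULL;
  {
    MonitorLocker ml(_lock, Mutex::_no_safepoint_check_flag);
    // ... enqueue the buffer just filled by the heap iteration ...
    if (_nr_of_threads == 0) {
      // Take the work item before releasing the lock, so a worker thread
      // that registers in the meantime cannot grab it and leave this
      // thread waiting for a buffer that never arrives.
      work = dequeue_work_locked();  // hypothetical helper, called with _lock held
    }
  }
  // Lock released. If no worker thread was present, do the work inline.
  if (work != NULL) {
    do_compress(work);
    finish_work(work);
  }
}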
Progress
Issue
Reviewers
Reviewing
Using
git
Checkout this PR locally:
$ git fetch https://git.openjdk.java.net/jdk pull/3628/head:pull/3628
$ git checkout pull/3628
Update a local copy of the PR:
$ git checkout pull/3628
$ git pull https://git.openjdk.java.net/jdk pull/3628/head
Using Skara CLI tools
Checkout this PR locally:
$ git pr checkout 3628
View PR using the GUI difftool:
$ git pr show -t 3628
Using diff file
Download this PR as a diff file:
https://git.openjdk.java.net/jdk/pull/3628.diff