JDK-8255661: TestHeapDumpOnOutOfMemoryError fails with EOFException #3628

Closed · wants to merge 6 commits
2 changes: 1 addition & 1 deletion — src/hotspot/share/services/heapDumper.cpp

@@ -444,7 +444,7 @@ class DumpWriter : public StackObj {
   void finish_dump_segment();

   // Called by threads used for parallel writing.
-  void writer_loop() { _backend.thread_loop(false); }
+  void writer_loop() { _backend.thread_loop(); }
   // Called when finished to release the threads.
   void deactivate() { flush(); _backend.deactivate(); }
 };
34 changes: 15 additions & 19 deletions — src/hotspot/share/services/heapDumperCompression.cpp

@@ -250,24 +250,17 @@ void CompressionBackend::deactivate() {
     ml.notify_all();
   }

-  // Wait for the threads to drain the compression work list.
+  // Wait for the threads to drain the compression work list and do some work yourself.
   while (!_to_compress.is_empty()) {
-    // If we have no threads, compress the current one itself.
-    if (_nr_of_threads == 0) {
-      MutexUnlocker mu(_lock, Mutex::_no_safepoint_check_flag);
-      thread_loop(true);
-    } else {
-      ml.wait();
-    }
+    do_foreground_work();
   }

   _active = false;
   ml.notify_all();
 }

-void CompressionBackend::thread_loop(bool single_run) {
-  // Register if this is a worker thread.
-  if (!single_run) {
+void CompressionBackend::thread_loop() {
Member commented:
You could simplify CompressionBackend::thread_loop() further:

void CompressionBackend::thread_loop() {
  {
    MonitorLocker ml(_lock, Mutex::_no_safepoint_check_flag);
    _nr_of_threads++;
  }

  WriteWork* work = get_work();
  while (work != NULL) {
    do_compress(work);
    finish_work(work);
    work = get_work();
  }

  MonitorLocker ml(_lock, Mutex::_no_safepoint_check_flag);
  _nr_of_threads--;
  assert(_nr_of_threads >= 0, "Too many threads finished");
  ml.notify_all();
}
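The restructured loop — register as a worker, drain the queue until `get_work()` gives up, deregister, notify waiters — can be modeled outside HotSpot with standard C++ primitives. The sketch below is a hedged illustration, not the JDK code: the names `Queue`, `add`, and `deactivate` are invented, and doubling an integer stands in for `do_compress` plus `finish_work`.

```cpp
// Minimal model of the reviewer's suggested worker loop shape:
// register -> drain until get_work() fails -> deregister -> notify.
#include <condition_variable>
#include <deque>
#include <mutex>
#include <thread>
#include <vector>

class Queue {
  std::mutex lock_;
  std::condition_variable cv_;
  std::deque<int> work_;
  bool deactivated_ = false;
  int nr_of_threads_ = 0;   // workers currently inside thread_loop()

public:
  std::vector<int> done;    // results; guarded by lock_ until workers join

  void add(int w) {
    std::lock_guard<std::mutex> g(lock_);
    work_.push_back(w);
    cv_.notify_all();
  }

  void deactivate() {
    std::lock_guard<std::mutex> g(lock_);
    deactivated_ = true;
    cv_.notify_all();
  }

  // Blocks until work arrives or the queue is deactivated and empty;
  // returns false when the worker should exit.
  bool get_work(int* out) {
    std::unique_lock<std::mutex> l(lock_);
    cv_.wait(l, [&] { return deactivated_ || !work_.empty(); });
    if (work_.empty()) return false;
    *out = work_.front();
    work_.pop_front();
    return true;
  }

  // Same shape as the suggested CompressionBackend::thread_loop().
  void thread_loop() {
    { std::lock_guard<std::mutex> g(lock_); nr_of_threads_++; }
    int w;
    while (get_work(&w)) {
      std::lock_guard<std::mutex> g(lock_);
      done.push_back(w * 2);  // stand-in for do_compress + finish_work
    }
    std::lock_guard<std::mutex> g(lock_);
    nr_of_threads_--;
    cv_.notify_all();         // wake anyone waiting for workers to drain out
  }
};
```

Because `get_work()` only returns false once the queue is both deactivated and empty, a worker always finishes queued work before exiting — the same property the HotSpot loop relies on.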

Member commented:
BTW: why is ml.notify_all() in line 275 needed at all?

Contributor (author) replied:
Hi,

thanks for the review Lin and Richard.

The notify_all() is indeed not needed anymore. It was originally needed when the worker threads were newly created threads and we had to wait for them to finish at the end of the dump operation. But since we now use the GC work gang, this can be removed.

I will update the PR with your suggestions.

Best regards,
Ralf

+  {
+    MonitorLocker ml(_lock, Mutex::_no_safepoint_check_flag);
+    _nr_of_threads++;
+  }
@@ -276,7 +269,6 @@ void CompressionBackend::thread_loop(bool single_run) {
     WriteWork* work = get_work();

     if (work == NULL) {
-      assert(!single_run, "Should never happen for single thread");
       MonitorLocker ml(_lock, Mutex::_no_safepoint_check_flag);
       _nr_of_threads--;
       assert(_nr_of_threads >= 0, "Too many threads finished");
@@ -287,10 +279,6 @@ void CompressionBackend::thread_loop(bool single_run) {
       do_compress(work);
       finish_work(work);
     }
-
-    if (single_run) {
-      return;
-    }
   }
 }

@@ -363,6 +351,16 @@ void CompressionBackend::free_work_list(WorkList* list) {
   }
 }

+void CompressionBackend::do_foreground_work() {
+  assert(!_to_compress.is_empty(), "Must have work to do");
+  assert(_lock->owned_by_self(), "Must have the lock");
+
+  WriteWork* work = _to_compress.remove_first();
+  MutexUnlocker mu(_lock, Mutex::_no_safepoint_check_flag);
+  do_compress(work);
+  finish_work(work);
+}
+
 WriteWork* CompressionBackend::get_work() {
   MonitorLocker ml(_lock, Mutex::_no_safepoint_check_flag);
@@ -405,9 +403,7 @@ void CompressionBackend::get_new_buffer(char** buffer, size_t* used, size_t* max
       _unused.add_first(work);
     }
   } else if (!_to_compress.is_empty() && (_nr_of_threads == 0)) {
-    // If we have no threads, compress the current one itself.
-    MutexUnlocker mu(_lock, Mutex::_no_safepoint_check_flag);
-    thread_loop(true);
+    do_foreground_work();
   } else {
     ml.wait();
   }
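The point of `do_foreground_work()` is that the thread requesting a buffer can make progress itself when no worker threads are attached, instead of waiting on a condition that nobody will signal. A minimal single-threaded model of that fallback follows; `Backend`, `submit`, and `results` are invented names for illustration, and the `(compressed)` suffix stands in for real compression.

```cpp
// Model of the foreground-work fallback: if no workers are registered,
// the submitting thread drains the queue itself, dropping the lock while
// it does the expensive part (like HotSpot's MutexUnlocker).
#include <deque>
#include <mutex>
#include <string>
#include <utility>
#include <vector>

struct Work { std::string payload; };

class Backend {
  std::mutex lock_;
  std::deque<Work*> to_compress_;
  int nr_of_threads_ = 0;          // always 0 in this demo: no workers attached
  std::vector<std::string> out_;   // stands in for the output file

  void do_compress(Work* w) { w->payload += " (compressed)"; }
  void finish_work(Work* w) { out_.push_back(w->payload); delete w; }

  // Caller must hold lock_ and the queue must be non-empty.
  void do_foreground_work() {
    Work* w = to_compress_.front();
    to_compress_.pop_front();
    lock_.unlock();                // work outside the lock, then re-take it
    do_compress(w);
    finish_work(w);                // touching out_ unlocked is safe only
    lock_.lock();                  // because this demo has no worker threads
  }

public:
  void submit(std::string s) {
    std::lock_guard<std::mutex> g(lock_);
    to_compress_.push_back(new Work{std::move(s)});
    // With zero workers there is nobody to wait for: drain in the foreground.
    while (nr_of_threads_ == 0 && !to_compress_.empty()) {
      do_foreground_work();
    }
  }

  std::vector<std::string> results() {
    std::lock_guard<std::mutex> g(lock_);
    return out_;
  }
};
```

This mirrors the control flow of the patched `get_new_buffer()`: the `_nr_of_threads == 0` branch now calls a helper that assumes the lock is held, rather than re-entering the whole worker loop with a `single_run` flag.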
5 changes: 3 additions & 2 deletions — src/hotspot/share/services/heapDumperCompression.hpp

@@ -199,6 +199,7 @@ class CompressionBackend : StackObj {
   void free_work(WriteWork* work);
   void free_work_list(WorkList* list);

+  void do_foreground_work();
   WriteWork* get_work();
   void do_compress(WriteWork* work);
   void finish_work(WriteWork* work);
@@ -221,8 +222,8 @@ class CompressionBackend : StackObj {
   // Commits the old buffer (using the value in *used) and sets up a new one.
   void get_new_buffer(char** buffer, size_t* used, size_t* max);

-  // The entry point for a worker thread. If single_run is true, we only handle one entry.
-  void thread_loop(bool single_run);
+  // The entry point for a worker thread.
+  void thread_loop();

   // Shuts down the backend, releasing all threads.
   void deactivate();