
Fix possible OOME in ChannelInputStream #430

Open · wants to merge 1 commit into base: master
@@ -126,6 +126,24 @@ public void receive(byte[] data, int offset, int len)
    buf.putRawBytes(data, offset, len);
    buf.notifyAll();
}

// For slow readers, wait until the buffer has been completely read; this ensures that the buffer will be cleared
// in #read and the window position will be reset to 0. Otherwise, if the buffer is read more slowly than incoming data
// arrives, the buffer might continue growing endlessly, finally resulting in an OOME.
// Note that the buffer may still double its size once (provided that the maximum received chunk size is less
// than chan.getLocalMaxPacketSize).
for (; ; ) {
    synchronized (buf) {
        if (buf.wpos() >= chan.getLocalMaxPacketSize() && buf.available() > 0) {
            buf.notifyAll();
            Thread.yield();
Owner:
Why would the yield be appropriate here? As the documentation specifies, this is not something you typically call. Furthermore, this is a busy-wait construct, which is not pretty... Can't we solve this with a lock or a latch?
Without a load of tests I also won't gladly accept this PR.

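For illustration, a minimal sketch of the lock-based direction hinted at above: replace the spin with a bounded `wait` on the buffer's monitor. This assumes the reading side (`ChannelInputStream#read`, which already synchronizes on `buf`) would also be changed to call `buf.notifyAll()` after draining the buffer; the 10 ms timeout is only an arbitrary safety net, not part of SSHJ's API.

```java
// Sketch only: block on buf's monitor instead of spinning with Thread.yield().
// Assumes the reader notifies on buf after draining/compacting it; the 10 ms
// timeout is an arbitrary safety net so the loop cannot hang forever.
synchronized (buf) {
    while (buf.wpos() >= chan.getLocalMaxPacketSize() && buf.available() > 0) {
        buf.notifyAll(); // wake a reader that is waiting for data
        try {
            buf.wait(10); // releases the monitor so read() can drain the buffer
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            break;
        }
    }
}
```
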
Author:
We could replace the yield with a very short sleep, as a 32K buffer becomes quickly filled at high bandwidth -- maybe 1 ms? To me, this seemed the simplest solution, but I'm quite new to SSHJ.

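A sketch of that suggestion, with the sleep moved outside the `synchronized` block so the reading thread can actually acquire the monitor and drain the buffer during the back-off; the 1 ms value is the one floated above, not a tuned number.

```java
// Sketch of the 1 ms back-off idea. The sleep happens outside the lock so
// that read() can enter its synchronized (buf) section and drain the buffer.
for (;;) {
    synchronized (buf) {
        if (buf.wpos() < chan.getLocalMaxPacketSize() || buf.available() == 0) {
            break; // buffer has been drained (or never exceeded the threshold)
        }
        buf.notifyAll(); // wake a blocked reader
    }
    try {
        Thread.sleep(1);
    } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
        break;
    }
}
```
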
Owner:
That is exactly the reason why I think that limiting it to the current starting buffer size is a bad idea. We should limit it at some configurable amount, or else you get a very staggered pattern...

Did you experience something in production, or is this something hypothetical...?

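To make the "configurable amount" idea concrete, a hypothetical sketch of a cap that is decoupled from the buffer's starting size; neither the constant nor the helper exists in SSHJ, and the factor of 4 is purely illustrative.

```java
// Hypothetical: throttle on a configurable multiple of the local max packet
// size instead of the buffer's initial size. Names and the factor are made up.
private static final int DEFAULT_BACKLOG_LIMIT_FACTOR = 4;

private int backlogLimit() {
    return chan.getLocalMaxPacketSize() * DEFAULT_BACKLOG_LIMIT_FACTOR;
}

// ...and in the loop above:
// if (buf.wpos() >= backlogLimit() && buf.available() > 0) { ... }
```
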
Author:
> That is exactly the reason why I think that limiting it to the current starting buffer size is a bad idea. We should limit it at some configurable amount, or else you get a very staggered pattern...

That's true; I'm actually seeing this kind of pattern.

> Did you experience something in production, or is this something hypothetical...?

We are getting such OOMEs reported quite frequently from our users (a Git client), and I can reliably reproduce it by cloning a Git repository from my local VM.

        }
        else {
            break;
        }
    }
}

// Potential fix for #203 (window consumed below 0).
// This seems to be a race condition: we receive more data while we're already sending an SSH_MSG_CHANNEL_WINDOW_ADJUST
// and the window has not expanded yet.