Behavior of ConnectionStream.waitForData is unclear #1326

Closed
mfrischknecht opened this Issue Nov 7, 2015 · 16 comments

mfrischknecht (Contributor) commented Nov 7, 2015

I have been looking at ConnectionStream.waitForData() on TCPConnection objects and found that the API documentation is unclear on some corner cases. After checking with a simple test, it seems that the behavior differs between backends[1].

Naively looking at the method, I would expect roughly the following behavior:

  1. If the method is passed a timeout of 0L (the default value), it waits indefinitely (since InputStream.dataAvailableForRead already provides that information without waiting)
  2. If the connection is "empty" and gets closed, it returns false immediately (before the timeout occurs, without blocking the fiber)
  3. If the connection contains unread data (whether it is still open or not), it returns true immediately

An implementation like this would basically allow for the most convenient loop for waiting on data:

while(connection.waitForData(timeout))
{
    ubyte[] buffer = new ubyte[](connection.leastSize);
    connection.read(buffer);
    //Do stuff...
}
//Connection was closed or timed out
//All available data has been read

On a similar note, the maximum timeout of the method is not specified either. Libasync seems to use Duration.max as the default value, whereas Libevent throws exceptions stating that int.max.seconds is the maximum.
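Until the semantics are pinned down, passing an explicit, finite timeout sidesteps both the unspecified 0L default and the backend-specific upper bound. A minimal sketch (the endpoint is hypothetical, and this assumes the vibe.d API as described above):

```d
import vibe.core.net : connectTCP;
import core.time : seconds;

void readUntilClosedOrIdle()
{
    // Hypothetical endpoint, for illustration only
    auto conn = connectTCP("127.0.0.1", 8080);

    // An explicit, finite timeout avoids relying on the unclear meaning
    // of the 0L default and stays well below the int.max.seconds limit
    // that the Libevent backend enforces.
    while (conn.waitForData(5.seconds))
    {
        auto buffer = new ubyte[](conn.leastSize);
        conn.read(buffer);
        // process buffer...
    }

    // false here means either a timeout or a closed, drained connection
    conn.close();
}
```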

[1] On my Linux box, I was only able to build the Libevent and Libasync versions; Libev fails with an error about a missing deimos module. The behavior on Windows would also be interesting to know.

Here's the behavior I observed:

Libevent
  1. False (waitForData(0L) returns immediately)
  2. True
  3. True
Libasync
  1. True
  2. True
  3. The test fails for a reason I can't explain: the socket won't transfer its payload even though I call the flush() method explicitly (is this worth another issue?).

Edit:

win32
  1. True
  2. True
  3. True

s-ludwig added a commit that referenced this issue Nov 11, 2015

Partially fix waitForData semantics. See #1326.
Documents the expected behavior and fixes the behavior of the libevent and win32 backends. libev and libasync not yet tested at this point.

s-ludwig added a commit that referenced this issue Nov 11, 2015

Fixed LibevTCPConnection.waitForData and implemented rudimentary TCP client support. See #1326.

TCP client support is just enough to run some automated tests involving TCP connections (numerical IPv4 address resolution and blocking active TCP connection procedure).
s-ludwig (Member) commented Nov 11, 2015

Fixed now on all platforms. There is a test and documented behavior now, too. Still waiting for Travis to pass.

s-ludwig (Member) commented Nov 11, 2015

Looking good!

s-ludwig closed this Nov 11, 2015

s-ludwig (Member) commented Nov 11, 2015

Just noticed that I'm not able to reproduce the libasync issue (#1330). Can you have a look at the test?

s-ludwig reopened this Nov 11, 2015

etcimon (Contributor) commented Nov 11, 2015

I could reproduce it on Windows; does it pass on Linux without the fix?

s-ludwig (Member) commented Nov 11, 2015

At least the test that I added passes for me on both Windows and Linux. Did you do anything else to reproduce it?

etcimon (Contributor) commented Nov 11, 2015

Strange; waitForData should return false after the 10s timeout, because the buffer is destroyed when the connection is closed. So the test passing is actually a failure of some sort.

s-ludwig (Member) commented Nov 11, 2015

Oh, now I see. I had overlooked the test from the beginning. I'll add a "half-closed with pending data" test to the test suite, too.

etcimon (Contributor) commented Nov 11, 2015

Could you enable logging in libasync.types and put it in a gist?

Edit: never mind, just saw your last comment.

s-ludwig (Member) commented Nov 11, 2015

I've added the test now, and libasync now fails for me as expected. I also understand the fix in #1330 now, but the destruction of the buffer is still in the wrong place: finalize is an OutputStream operation, while the read buffer belongs to the InputStream part. IMO, the destruction should happen in close(), and possibly also in read when it detects that the last bytes have been read.
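The suggested change, freeing the read buffer in close() rather than during finalize, could look roughly like this. This is a sketch against a simplified stand-in type, not the actual libasync driver code:

```d
// Simplified stand-in for the connection type; the real
// LibasyncTCPConnection differs in many details.
class TCPConnectionSketch
{
    private ubyte[] m_readBuffer;

    void finalize()
    {
        // OutputStream operation: only flushes pending writes.
        // The read buffer is InputStream state, so it is NOT
        // touched here.
    }

    void close()
    {
        finalize();
        // Both the client and the server end up calling close()
        // when they are really finished, so this is the safe
        // place to drop the read buffer.
        m_readBuffer = null;
    }
}
```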

etcimon (Contributor) commented Nov 11, 2015

> IMO, the destruction should happen in close() and possibly also in read when it detects that the last bytes have been read.

I think close() would be the right place, because that's what is actually called by both the client and the server when they are really finished.

s-ludwig closed this in #1330 Nov 11, 2015

s-ludwig added a commit that referenced this issue Nov 11, 2015

Merge pull request #1330 from etcimon/patch-29
(Libasync) Destroy the readBuffer on finalize - fixes #1326
s-ludwig (Member) commented Nov 11, 2015

Okay, it works for me now on Linux, but results in an InvalidMemoryOperationError on Windows during shutdown. Removing the calls to close() makes the errors go away, but I currently have no clue what causes them. Maybe something within libasync.

etcimon (Contributor) commented Nov 11, 2015

It closes the sockets in the destructor if they weren't closed already. When a socket closes, it tries to resume tasks that have yielded and makes them throw an exception. I think the onClose in LibasyncTCPConnection.~this might be preventive and unnecessary; removing that would fix the issue.

s-ludwig (Member) commented Nov 11, 2015

Hm, okay, it's actually the receive buffer of the TCPConnection instances on the listening/server side. I've fixed it by replacing the call to if (inbound) onClose() in onConnect with a call to close() (which destroys the buffer). Do you see potential issues with that solution?

etcimon (Contributor) commented Nov 11, 2015

I thought the server called close in the vibe.http.server handler, but if it doesn't, this is the best place to do it.

s-ludwig (Member) commented Nov 11, 2015

BTW, maybe instead of using FixedRingBuffer.freeOnDestruct it would be a better idea to add a .deletePayload() method and remove the ~this() member altogether. I unfortunately missed this issue back when that was added.
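The idea of replacing destructor-time freeing with an explicit disposal method can be sketched as follows. The struct is a simplified stand-in, not the actual FixedRingBuffer, and the method name is illustrative:

```d
import core.memory : GC;

// Simplified stand-in for FixedRingBuffer: the payload is released by
// an explicit call instead of a ~this() destructor, so no GC operation
// runs during a collection cycle (running GC.free inside a destructor
// invoked by the collector is what raises InvalidMemoryOperationError).
struct RingBufferSketch(T)
{
    private T[] m_payload;

    this(size_t capacity)
    {
        m_payload = new T[](capacity);
    }

    // Deterministic teardown, called by the owner when it is done
    // with the buffer (e.g. from the connection's close()).
    void dispose()
    {
        if (m_payload.ptr !is null)
            GC.free(m_payload.ptr);
        m_payload = null;
    }
}
```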

etcimon (Contributor) commented Nov 11, 2015

Yes; I can't remember what prevented me from manually allocating the buffer with vibe.utils.memory. I replaced it with memutils.circularbuffer in my fork, which is manually allocated.

s-ludwig added a commit that referenced this issue Nov 11, 2015

Replace FixedRingBuffer.freeOnDestruct functionality by .dispose.
This fixes an issue with LibasyncTCPConnection where FixedRingBuffer would cause an InvalidMemoryOperationError. See #1326.