Incomplete transfers when using TLS with large data. #273

Closed
Zapotek opened this Issue Oct 21, 2011 · 9 comments

Zapotek commented Oct 21, 2011

This issue has been discussed in detail in this thread: http://groups.google.com/group/eventmachine/browse_thread/thread/6035227e4173e312/b3ce863874249575

Unfortunately, Aman hasn't been able to reproduce it, and as things stand I can't move forward with my development.
This code causes it: https://gist.github.com/1303058

I'm on: Linux zonster 3.0.0-12-generic #20-Ubuntu SMP Fri Oct 7 14:56:25 UTC 2011 x86_64 x86_64 x86_64 GNU/Linux
Ubuntu 11.10

I've been able to reproduce it in a clean 11.04 Ubuntu installation as well.
The size of the data is a factor, so try increasing the range of 'obj'.

If I remove start_tls the code works as expected; as it stands, the client will never print 'SUCCESS' because the buffer in the ObjectProtocol module never gets filled.

I'm posting this issue hoping that someone will be able to reproduce, investigate and hopefully fix it.

Regards,
Tasos L.

tmm1 commented Oct 21, 2011

Not sure why, but I'm unable to repro this.

Does it also happen for you with older EM releases (beta.4 and beta.3)?

tmm1 commented Oct 21, 2011

Looking back at b237c03 (the fix for #233), it might have inadvertently introduced this issue. Can you revert that and see if it still happens?

Zapotek commented Oct 21, 2011

Damn...why didn't any of us think of this before?
It works now.

tmm1 added a commit that referenced this issue Oct 21, 2011

Zapotek commented Dec 5, 2011

Any progress on this and #233, or an estimate for the v0.1 milestone?
Or any ideas on how to bypass this?

Would using the pure Ruby version get around the problem?

tmm1 commented Dec 5, 2011

The commit that was causing this issue was reverted on master, so this should no longer be an issue.

The work-around for #233 is to avoid sending a big chunk of data over SSL in a single send_data call, and instead send smaller chunks over time using next_tick or a tick_loop.

AFAIK, the pure ruby reactor does not support ssl.
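
For illustration only, a minimal sketch of that approach (untested; CHUNK_SIZE and send_in_chunks are made-up names, not EM API, and this assumes an EM::Connection subclass):

def send_in_chunks( data )
    chunk = data.slice!( 0, CHUNK_SIZE )
    send_data( chunk )

    # Schedule the next chunk on a later reactor tick instead of stuffing
    # the whole payload into the outbound buffer at once.
    EM.next_tick { send_in_chunks( data ) } unless data.empty?
end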

Zapotek commented Dec 5, 2011

Shouldn't that functionality be added to send_data along with a reasonable max chunk size?

Zapotek commented Dec 5, 2011

When using next_tick the first chunk (which contains the size) gets lost and receive_object never gets called.
Could this have been the issue all along?

I'm just calling send_data in a loop now and it seems to work.
Do I need to assign an index to the chunks I send or will they always be delivered in order?

This is the code:

def send_object( obj )
    data = serializer.dump( obj )
    # Prefix the serialized object with its size as a 4-byte network-order
    # unsigned integer, the same framing ObjectProtocol expects.
    packed = [data.bytesize, data].pack( 'Na*' )

    # Push the payload out in chunks of at most MAX_CHUNK_SIZE bytes,
    # each with its own send_data call.
    while( packed )
        if packed.bytesize > MAX_CHUNK_SIZE
            send_data( packed.slice!( 0, MAX_CHUNK_SIZE ) )
        else
            send_data( packed )
            break
        end
    end
end

tmm1 commented Dec 5, 2011

Your code is correct.

When using next_tick, however, you need to keep track of the chunks and maintain a queue in send_object so that chunks from different objects don't interleave.
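
Roughly something like this (only a sketch, untested; @send_queue, @flushing, flush_send_queue and MAX_CHUNK_SIZE are made-up names, not EM API):

def send_object( obj )
    data = serializer.dump( obj )
    ( @send_queue ||= [] ) << [data.bytesize, data].pack( 'Na*' )
    flush_send_queue unless @flushing
end

def flush_send_queue
    if @send_queue.empty?
        @flushing = false
        return
    end

    @flushing = true
    packed = @send_queue.first
    send_data( packed.slice!( 0, MAX_CHUNK_SIZE ) )
    @send_queue.shift if packed.empty?

    # Keep draining the queue, one chunk per reactor tick, so the chunks
    # of one object all go out before the next object starts.
    EM.next_tick { flush_send_queue }
end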

Zapotek commented Dec 5, 2011

Fair enough, closing the issue then; thanks for all the help Aman.

Ah, by the way, someone really needs to put that sort of information in the docs.

Zapotek closed this Dec 5, 2011
