
Config. variable max_xmpp_buffer_size not respected in current HEAD #23

Closed
dhruvbird opened this issue Apr 2, 2012 · 13 comments

@dhruvbird
Collaborator

Probably _on_data() and _on_stanza() should update a counter and keep the buffer under max_xmpp_buffer_size.
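
Something along these lines, perhaps - just a sketch, assuming a connection object with _on_data()/_on_stanza() handlers and a config.max_xmpp_buffer_size value (the counter name and the terminate() helper are made up for illustration):

```js
// Minimal sketch -- not the actual project code.
Connection.prototype._on_data = function (chunk) {
    // Count raw bytes received towards the current (unfinished) stanza.
    this._current_stanza_size += chunk.length;
    if (this._current_stanza_size > this.config.max_xmpp_buffer_size) {
        // Hypothetical helper: tear the stream down on a policy violation.
        this.terminate('policy-violation');
        return;
    }
    this._parser.parse(chunk);
};

Connection.prototype._on_stanza = function (stanza) {
    // A complete stanza came out of the parser; reset the running count.
    this._current_stanza_size = 0;
    this.pending_stanzas.push(stanza);
};
```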

@ghost assigned satyamshekhar Apr 2, 2012
@satyamshekhar
Collaborator

Done. This actually limits the XMPP stanza size. We should also put in a limit for the buffer size, i.e. the size of pending_stanzas.

@dhruvbird
Collaborator Author

Thanks :)


@dhruvbird
Collaborator Author

Just wondering - suppose the limit on the packet size is 100 bytes and
the server sends 200 bytes in one go, won't that trigger this check?
Post-parsing though, each stanza might actually be <= 100 bytes in size?


@satyamshekhar
Collaborator

Yup, this check will not terminate the stream in that case.

@dhruvbird
Collaborator Author

I am guessing that the check you have implemented is for the buffer size limit.


@satyamshekhar
Collaborator

No, this only checks the stanza size. _current_stanza_size is reset in _on_stanza. To implement a buffer size limit we will have to limit the size of pending_stanzas, right?
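
For the pending_stanzas side, the bound could be as simple as the sketch below (max_pending_stanzas and terminate() are invented names, only to illustrate the idea):

```js
// Hypothetical sketch: cap how many parsed-but-unprocessed stanzas we queue.
Connection.prototype._on_stanza = function (stanza) {
    this._current_stanza_size = 0; // stanza fully parsed, reset the byte counter
    if (this.pending_stanzas.length >= this.config.max_pending_stanzas) {
        // The peer is producing stanzas faster than we can drain them.
        this.terminate('resource-constraint');
        return;
    }
    this.pending_stanzas.push(stanza);
};
```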

@dhruvbird
Collaborator Author

What I meant to say is that the current check does more than just limit the stanza size - it also rejects valid stanzas smaller than the limit if the server decides to push multiple stanzas in one go, right?

So in essence, it is limiting the amount of buffer space we allocate for incoming packets on a given stream.


@satyamshekhar
Collaborator

Oh, now I get what you are saying. Actually, not necessarily - this depends on the network/node.js; it might emit the data in different chunks. The size of the chunks is not decided by the remote server, is it?

@dhruvbird
Collaborator Author


Exactly! So node itself might actually accumulate multiple packets and push them out, right?

Actually, your last observation makes me think that it is not entirely possible to reliably limit the stanza size (or the buffer size) unless the stanza has been completely parsed or only its last chunk remains. Consider when the stanza limit is 100 bytes and we have already received 90 bytes. The server (or node) then sends 100 bytes, 5 of which end the current stanza and the rest start a new one. Our check fails here.

However, we can have a config variable which limits the amount of data we are willing to receive, hold, and process on behalf of the server (this is essentially the buffer size). Again, this would be best effort.
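
A best-effort version of that could account for every byte received but not yet turned into a completed stanza - roughly like this (the _bytes_held field and the use of stanza.toString() for the serialized size are assumptions for illustration):

```js
// Hypothetical best-effort cap on data held on behalf of the peer.
Connection.prototype._on_data = function (chunk) {
    this._bytes_held += chunk.length;
    if (this._bytes_held > this.config.max_xmpp_buffer_size) {
        // Holding more unprocessed data for this stream than we agreed to.
        this.terminate('resource-constraint');
        return;
    }
    this._parser.parse(chunk);
};

Connection.prototype._on_stanza = function (stanza) {
    // Credit back (approximately) the bytes of the stanza we just completed.
    this._bytes_held -= Buffer.byteLength(stanza.toString());
    this.pending_stanzas.push(stanza);
};
```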



@satyamshekhar
Collaborator

> Exactly! So node itself might actually accumulate multiple packets and
> push them out, right?

It can accumulate packets in case the current tick takes time. What we can do is move the check to after we write the incoming data to the parser; that will limit the stanza size, right?

We can have another check that limits the chunk size, like you already suggested.
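
In code, that reordering might look roughly like this (same made-up names as before; max_recv_chunk_size is an invented config key for the chunk-size check):

```js
// Hypothetical sketch: feed the parser first, then check the counter.
// _on_stanza resets _current_stanza_size, so after parse() only the bytes of a
// still-incomplete stanza remain counted. An over-sized stanza that completes
// within the chunk can slip through; a stanza within the limit never fails.
Connection.prototype._on_data = function (chunk) {
    if (chunk.length > this.config.max_recv_chunk_size) {
        // Separate, cruder limit on how much we accept in one 'data' event.
        this.terminate('policy-violation');
        return;
    }
    this._current_stanza_size += chunk.length;
    this._parser.parse(chunk); // may fire _on_stanza and reset the counter
    if (this._current_stanza_size > this.config.max_xmpp_buffer_size) {
        this.terminate('policy-violation');
    }
};
```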

@sonnyp
Contributor

sonnyp commented Apr 5, 2012

Sorry, wrong issue number :-/

@dhruvbird
Collaborator Author

@satyamshekhar @astro has pushed a feature to node-expat that allows one to query the number of bytes read in the stream till now. I've tried to fix the stanza limit restriction using it. Could you please review it for correctness? Caveat: it is possible that stanzas larger than the limit are accepted at times, but stanzas within the limit will never be rejected. The logic is somewhat like a Bloom filter.
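
For reference, a rough sketch of what the byte-index approach could look like, assuming the parser exposes the new byte counter as getCurrentByteIndex() (the _last_stanza_end field and the terminate() helper are illustrative, not the actual patch):

```js
// Hypothetical sketch of the byte-index based stanza size check.
Connection.prototype._on_data = function (chunk) {
    this._parser.parse(chunk);
    // Bytes the parser has consumed since the last complete stanza ended,
    // i.e. the (partial) size of the stanza currently being assembled.
    var unfinished = this._parser.getCurrentByteIndex() - this._last_stanza_end;
    if (unfinished > this.config.max_xmpp_buffer_size) {
        this.terminate('policy-violation');
    }
};

Connection.prototype._on_stanza = function (stanza) {
    // Remember where in the byte stream the last complete stanza ended.
    this._last_stanza_end = this._parser.getCurrentByteIndex();
    this.pending_stanzas.push(stanza);
};
```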

@satyamshekhar
Collaborator

looks good. (y)
