ZNC having issues with too long of a message #38
Thanks for reporting the bug! I will test on a large Slack team and let you know. Unfortunately I will be on a business trip for the next two weeks, so apologies in advance for the high latency in responding here.
Oh no, please keep reporting all the bugs you find!
I know this is a somewhat fundamental change to the application, but do you need any help with testing a fix for this? This is literally the last hurdle I have to deal with so that I can stop using the web interface.
Hey @Syloq, I don't think this is going to be hard to fix. I came back from my business trip two days ago, so I can have a look at it this week.
@Syloq, do you have an example of how chunking should happen?
As per RFC 2812, IRC messages must not exceed 512 characters in length, including the trailing CR-LF.
@insomniacslk RFC 2812 would be the standard way of handling these. ZNC and most other IRCds handle it with different limits. Most clients, from my understanding, will handle the multiple messages individually as they come in. So you could send the same numeric each time with different data, and the clients should be able to handle that just fine.
That's what I assumed at first, but I imagine there are commands that cannot safely be broken into multiple lines.
That's possibly very true. However, I'd imagine those numerics only come up when connecting to the server itself; I don't have specifics on that behavior. What I do believe is that the few numerics that are typically very long, like the 353 I mentioned initially, are structured in such a way that clients can handle the large number of messages.
I am thinking of something like "TOPIC", where someone sets a long topic whose numeric reply (332) exceeds 512 bytes. If we blindly split, you would get two 332 lines, the client would believe the topic has been changed more than once, and would keep only the last chunk. Am I missing something?
If we can come up with a whitelist of numerics that can be split, I'd rather do that as a stop-gap. I guess we can start with the ones you faced, and add more as needed.
That's a good point. It sounds like you'd have to enumerate which numerics are eligible for chunking. Not a great way of handling that, for sure. And in that light, you'd also have to truncate some replies that are longer than 510 characters.
I'll start with 353 for now; let me know if you find more candidates for chunking.
I think 353 and 352 will be the best candidates. The first is a /names reply, the other a /who reply.
Thanks. I also assume that the safe way to split is on spaces. I see a few edge cases.
Yes, that would be a standard way of dealing with it.
I'm not sure what limits Slack has for this, but truncating silently might be a good idea, because these messages are really only meant to be sent when joining or leaving a channel, IIRC.
If you could split on a "\w" boundary that would be best, but I know performance hits can occur when using regexes.
Today I was trying to connect to a Slack team where a channel I was forced to belong to had 450+ users in it. The problem is that my ZNC bouncer was disconnecting because a command sent to the client exceeded its limits. Here is an example of the messages being sent:
Sending numeric reply: :irc-slack-name-here 353 myUsername = #general :(list of users that is 450+ characters)
According to this bug report, the server shouldn't send more than 512 bytes per message in total (per the RFC). I believe that ZNC (further on in that bug) allows up to 1024 total bytes per message.
I believe the solution would be to chunk the data sent to the client (into 512-byte chunks) so that clients don't freak out in any manner. The method in question is here.
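As a rough back-of-the-envelope check of how many chunks this implies (the ~10-byte average nick length is an assumption, not from the report):

```go
package main

import "fmt"

func main() {
	// Fixed head of the example 353 reply from the report above.
	head := len(":irc-slack-name-here 353 myUsername = #general :")
	payload := 450 * 10          // ~450 nicks at ~10 bytes each (assumed)
	budget := 510 - head         // 512 minus CRLF, minus the fixed head
	chunks := (payload + budget - 1) / budget // ceiling division
	fmt.Println(head, budget, chunks)
}
```

So a single /names reply for that channel would need on the order of ten 353 lines rather than one oversized one.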
Sorry for the multiple bugs today.