Some data frame opcodes not being handled at all #137

arqunis opened this issue Jul 24, 2017 · 13 comments

@arqunis

arqunis commented Jul 24, 2017

To be specific, serenity (the crate I currently maintain) sometimes produces, a while after booting up, a lot of spam about unknown/invalid data frame opcodes.
But the other API wrappers for Discord work just fine, without anything interrupting them with data frame opcode errors.

So my assumption is that some opcodes are not being handled by this crate at all.

@arqunis changed the title from "Data frame opcodes not being completely handled right" to "Some data frame opcodes not being handled at all" on Jul 24, 2017
@illegalprime
Collaborator

Which crates did you say are working? And can you give examples of the WebSocket traffic, or how to capture it? I'm fairly sure we've handled all the opcodes. Thanks!

@arqunis
Author

arqunis commented Jul 30, 2017

1. I didn't specifically mean other crates, but libraries in other languages and whatever WebSocket implementations they use. They don't seem to suffer from this, which is why I think the problem is in the WebSocket layer only.
2. Regarding repro steps, there aren't any; it just happens sometimes, and occasionally a bit too frequently even after restarting many times.
3. And last but not least, the traffic.

Although, could you by chance at least include the unknown/invalid opcodes in the errors themselves? It would help a bit with identifying this, since it affects my users as well.

@illegalprime
Collaborator

@acdenisSK yeah, that sounds super reasonable, I'll get cracking, thanks! After I push this update, can you paste the new logs so we can see which opcodes are not being handled?

@illegalprime
Collaborator

A tcpdump capture or some Wireshark data about the issue would also be cool, so I can replay it on my machine.

@arqunis
Author

arqunis commented Aug 11, 2017

The Wireshark data (it may contain some unnecessary fuzz, but should contain the data you're looking for).

Also something to note: this doesn't seem to be an issue when I use Ethernet instead of Wi-Fi. But I'm still not sure about the culprit.

@Roughsketch

So I've looked into this a bit, since my bots that use serenity sometimes encounter this stream of errors. I added some logging to this library, and I've found that when it breaks, it seems to be parsing arbitrary data as a WebSocket frame. By that I mean that a standard WebSocket frame, as described here, is just filled with text data from a previous packet, but is still parsed as a frame.

For example, debug printing a frame I get this:

WS Data: DataFrame { 
    finished: false, 
    reserved: [false, true, false], 
    opcode: Binary, 
    data: [101, 114, 109, 105, 115, 115, ...] 
}

When parsing data as ASCII, I get the following string:

ermissions":70634560,"name":"Amigo","mentionable":true,"managed":false,"id":"323539490628960258","hoist":false,"

When you take into account the way a WebSocket packet is framed, you can prepend a " (the fin + reserved + opcode byte is 34, which is ASCII ") and a p (the payload length is 112, which is ASCII p). In other words, the two bytes consumed as the frame header were actually the next two characters of the JSON text. This gives us the whole text data, which is:

"permissions":70634560,"name":"Amigo","mentionable":true,"managed":false,"id":"323539490628960258","hoist":false,"

So it looks like somewhere along the line the length of a packet is being read incorrectly. At least, that would be my guess.
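
To make the misinterpretation concrete, here is a small standalone sketch (illustrative only, not code from this crate) that decodes the two stray JSON characters " (0x22) and p (0x70) as if they were the first two bytes of a WebSocket frame header; the resulting fields match the debug print above.

```rust
// Illustration only: interpret the JSON characters '"' and 'p' as if they
// were the first two bytes of a WebSocket frame header (RFC 6455, section 5.2).
fn main() {
    let stray = [b'"', b'p']; // 0x22, 0x70 -- really leftover JSON text

    let b0 = stray[0];
    let fin = b0 & 0x80 != 0; // false
    let reserved = [
        b0 & 0x40 != 0, // rsv1 = false
        b0 & 0x20 != 0, // rsv2 = true
        b0 & 0x10 != 0, // rsv3 = false
    ];
    let opcode = b0 & 0x0F; // 0x2 = Binary

    let b1 = stray[1];
    let masked = b1 & 0x80 != 0; // false
    let len = b1 & 0x7F; // 112

    println!(
        "fin={} reserved={:?} opcode={:#x} masked={} len={}",
        fin, reserved, opcode, masked, len
    );
    // Prints: fin=false reserved=[false, true, false] opcode=0x2 masked=false len=112
}
```

That is exactly the frame shown above: a "Binary" frame with rsv2 set, claiming a 112-byte payload, whose contents are just the rest of the earlier JSON text.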

@Roughsketch

I can now reproduce this consistently in a virtual machine by limiting the network speed. It consistently fails on large payloads with a "Resource temporarily unavailable" error. What I take from this is that it tries to grab a big payload all at once, but it hasn't actually received the entire payload yet, so the read would block and the frame gets skipped. The problem is that, before it tried to read the data, it had already finished reading the header. This means that on the next call it tries to read the beginning of the payload as another header, and failures pile up from there.

The bug is on this line: https://github.com/cyderize/rust-websocket/blob/master/src/dataframe.rs#L89

I don't know which way would be best to handle this sort of error, since I'm not familiar with your codebase or standards. Some thoughts: use lazy_static to store a header that was read but not yet consumed, or use a reader that can peek ahead.

It should also be noted that although it most commonly fails catastrophically on large payloads, the same bug can occur in the read_header method as well. I haven't looked at other places in the project, but I think it's safe to say that any section that can return an error after it has already read bytes can have this bug, since there is no way to recover those lost bytes.
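
For what it's worth, here is a rough sketch of the first idea (hypothetical types and names, not this crate's actual API): keep the already-parsed header, plus any payload bytes read so far, inside the reader, so a WouldBlock error doesn't throw them away and the next call can resume instead of parsing payload bytes as a new header.

```rust
use std::io::{self, Read};

// Hypothetical stand-ins for the crate's real types.
struct Header {
    opcode: u8,
    len: usize,
}

struct DataFrame {
    header: Header,
    payload: Vec<u8>,
}

// Sketch of a reader that does not lose state when a payload read would
// block: the parsed header and any partial payload are kept, and the next
// call resumes where the previous one stopped.
struct FrameReader {
    pending: Option<(Header, Vec<u8>)>,
}

impl FrameReader {
    fn new() -> Self {
        FrameReader { pending: None }
    }

    fn read_frame<R: Read>(&mut self, stream: &mut R) -> io::Result<DataFrame> {
        // Resume a frame whose header we already parsed, or start a new one.
        let (header, mut payload) = match self.pending.take() {
            Some(state) => state,
            None => (read_header(stream)?, Vec::new()),
        };

        let mut chunk = vec![0u8; header.len - payload.len()];
        match stream.read(&mut chunk) {
            Ok(n) => payload.extend_from_slice(&chunk[..n]),
            Err(ref e) if e.kind() == io::ErrorKind::WouldBlock => {
                // Not enough data yet: remember the header instead of dropping it.
                self.pending = Some((header, payload));
                return Err(io::ErrorKind::WouldBlock.into());
            }
            Err(e) => return Err(e),
        }

        if payload.len() < header.len {
            // Short read: stash progress and let the caller retry later.
            self.pending = Some((header, payload));
            return Err(io::ErrorKind::WouldBlock.into());
        }

        Ok(DataFrame { header, payload })
    }
}

// Simplified header parse (2-byte form, 7-bit length, no mask or extensions).
fn read_header<R: Read>(stream: &mut R) -> io::Result<Header> {
    let mut buf = [0u8; 2];
    stream.read_exact(&mut buf)?;
    Ok(Header {
        opcode: buf[0] & 0x0F,
        len: (buf[1] & 0x7F) as usize,
    })
}
```

A peeking or buffering reader would achieve the same thing one level lower, by never consuming bytes from the stream until a whole frame is available; and as noted above, read_header itself would need the same treatment, since read_exact can also fail partway through.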

@illegalprime
Collaborator

Thank you for the research, @Roughsketch. I'll take a look and try to reproduce it.
Are you using the sync or async version?

@ghost

ghost commented Dec 4, 2017

We work on the same project; Roughsketch was testing on the sync version.

@kpp
Contributor

kpp commented May 2, 2019

@zeyla @acdenisSK @Roughsketch did you fix it in your fork https://github.com/serenity-rs/rust-websocket ?

@Roughsketch

I believe it was fixed, at least for our use case. From what I remember, I just made it buffer data instead of throwing out parts of packets.
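
If it helps anyone hitting this on older versions, the idea as I understand it looks roughly like this (illustrative sketch only, not the fork's actual code): append whatever bytes arrive to a per-connection buffer and only split a frame off once it is completely present, so nothing that has been read is ever thrown away.

```rust
// Illustrative sketch of "buffer instead of discard": bytes are accumulated
// and a frame is only parsed out once it is fully present in the buffer.
// (Simplified to the 2-byte header form with a 7-bit length and no mask.)
struct FrameBuffer {
    buf: Vec<u8>,
}

impl FrameBuffer {
    fn new() -> Self {
        FrameBuffer { buf: Vec::new() }
    }

    // Append whatever bytes the socket produced, however few.
    fn feed(&mut self, bytes: &[u8]) {
        self.buf.extend_from_slice(bytes);
    }

    // Return the next complete frame's payload, or None if more bytes are needed.
    fn next_frame(&mut self) -> Option<Vec<u8>> {
        if self.buf.len() < 2 {
            return None;
        }
        let len = (self.buf[1] & 0x7F) as usize;
        let total = 2 + len;
        if self.buf.len() < total {
            return None; // frame incomplete; keep what we have for next time
        }
        let frame: Vec<u8> = self.buf.drain(..total).collect();
        Some(frame[2..].to_vec())
    }
}
```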

@kpp
Contributor

kpp commented May 2, 2019

What about uuid?

@Roughsketch

What about it? It's been a long time, so I don't remember all the details, but from what I can see in the commits, the UUID is used to keep track of the ReaderState of each connection.
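
Purely as a guess at what that looks like (I haven't re-read the fork, so treat the names here as hypothetical): each connection gets a Uuid, and a map from Uuid to ReaderState keeps one connection's partially-read data from ever being confused with another's.

```rust
use std::collections::HashMap;
use uuid::Uuid; // from the `uuid` crate

// Hypothetical per-connection state; in the fork this presumably holds the
// buffered frame data described in the comments above.
#[derive(Default)]
struct ReaderState {
    buffered: Vec<u8>,
}

// Connections are keyed by Uuid so each one keeps its own ReaderState.
struct Connections {
    states: HashMap<Uuid, ReaderState>,
}

impl Connections {
    fn new() -> Self {
        Connections { states: HashMap::new() }
    }

    // Create a new connection entry and hand back its identifier.
    fn register(&mut self) -> Uuid {
        let id = Uuid::new_v4();
        self.states.insert(id, ReaderState::default());
        id
    }

    // Look up the reader state for a given connection.
    fn state_mut(&mut self, id: &Uuid) -> Option<&mut ReaderState> {
        self.states.get_mut(id)
    }
}
```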
