Simple example how to receive big WebSocket (wss) messages. #1255
You seem to be in the grip of some misunderstanding. None of the examples, nor lws itself, does that. Lws passes up whatever arrived, as it arrived, to the user code. You can understand if what just arrived was the last part of a ws message with lws_is_final_fragment(). There is no upper limit to the size of ws messages, and the ws protocol does not have to declare the message size upfront (the ws fragment size doesn't have to be related to the ws message size). So lws leaves it up to your user code to reassemble the pieces if you need to, since a ws message larger than the physical memory of a given system is completely possible. You will get ALL the pieces in the correct sequence, and an indication of which was the last piece.
The problem is not at the receiver side. The problem is that, for some reason, the server cannot send more than 4 KB of data in one lws_write (I'm still trying to figure out why). The possible solution is to split the data into <4 KB chunks and do the transmission over several LWS_CALLBACK_SERVER_WRITEABLE events, one piece at a time, calling lws_callback_on_writable after each lws_write until all chunks are sent. You will also probably need to mark the end of the message somehow (since you're transmitting JSON, you can use a null terminator, for example) and take care of the correct chunk order (in case you send more than one message at a time).
Those are two separate issues really...
... for receive, the ws-level fragmentation is removed by lws; chunks of rx payload just appear at the callback in sequence, and you can use lws_is_final_fragment() to see which was the last one.
Lws gets told by the kernel with POLLOUT that the socket could accept some more write payload, but there is no way to find out how much until you try the write. As for what it sends at one time: on a real system, not all on localhost, if you send a lot, the tcp window will quickly fill. Each time lws tries to write a buffer to the socket, the entire length is copied into kernel memory by the syscall... if you keep writing 100MB and the kernel is only accepting 2KB based on the tcp window or its own dynamic memory situation, that is a huge inefficiency. So lws restricts the amount it will try to send at once... the default is 4KiB, but you can control it from the protocol struct.
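A minimal sketch of controlling that cap from the protocol struct, assuming the `tx_packet_size` member of `struct lws_protocols` (the protocol name and callback below are hypothetical placeholders):

```c
/* Sketch: raising the per-write cap via the protocols array passed
 * to lws at context creation. tx_packet_size controls how much lws
 * will try to send in one go; 0 keeps the default behaviour. */
static const struct lws_protocols protocols[] = {
    {
        .name                  = "my-protocol",   /* hypothetical name */
        .callback              = my_callback,     /* hypothetical callback */
        .per_session_data_size = 0,
        .rx_buffer_size        = 65536,
        .tx_packet_size        = 65536,           /* instead of the 4 KiB default */
    },
    { NULL, NULL, 0, 0 }  /* list terminator */
};
```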
I have already read that before, but for some reason it is not the case in my experience. The client just gets stuck waiting for the next data piece. Maybe I messed up some settings (not only me, as you can see from this issue), I don't know. This is not really a problem once you learn about it (handling a chunk queue per wsi is very easy), but it was frustrating at first.
I added a -b switch to the example. If you can get a clue what it depends on to make trouble, or find a way to reproduce it in a minimal example, it'd be welcome.
... just a thought, what kind of event loop are you using?
Currently I'm using this one, for every worker thread. tss is just some service structure with the context (struct lws_context*) and some thread info.
But my first issue encounter was with a single-threaded lws_service() loop based on some old examples.
Hello. Can you provide a simple example of how to receive big WebSocket (wss) messages?
I have little experience in C, and after doing research it is not clear to me how to receive a whole message, and not just a part of it and then the next part of another message.
I use the "minimal-ws-client-rx" example. I set pt_serv_buf_size to 65536 in lws_context_creation_info, and I set the 65536 value in the protocol definition. Maybe I just need to make some changes in the logger to show the full message?
Example of what I need: a {"key1": "value1", "key2": "value2"} message, but bigger than 4096. I want to receive the whole {"key1": "value1", "key2": "value2"} message, not just {"key1": "value1", "k.
I think your explanation will be helpful for MANY people. Thank you.