Simple example how to receive big WebSocket (wss) messages. #1255

Closed
agiUnderground opened this issue Apr 25, 2018 · 7 comments

Comments

@agiUnderground

Hello. Can you provide a simple example of how to receive big WebSocket (wss) messages?

I have little experience in C, and after doing some research it is still not clear to me how to receive a whole WebSocket message, and not just a part of it and then the next part of another message.

I use the "minimal-ws-client-rx" example.
I set pt_serv_buf_size to 65536 in lws_context_creation_info, and I set 65536 in the protocol definition as well.
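
Roughly, my setup looks like this (the protocol name and the callback here are simplified placeholders, not the real code from the example):

#include <string.h>
#include <libwebsockets.h>

/* placeholder callback and protocol name, for illustration only */
static int my_callback(struct lws *wsi, enum lws_callback_reasons reason,
                       void *user, void *in, size_t len)
{
    return lws_callback_http_dummy(wsi, reason, user, in, len);
}

static const struct lws_protocols protocols[] = {
    { "my-protocol", my_callback, 0, 65536 },   /* rx_buffer_size = 65536 */
    { NULL, NULL, 0, 0 }                        /* terminator */
};

static struct lws_context *create_context(void)
{
    struct lws_context_creation_info info;

    memset(&info, 0, sizeof(info));
    info.port = CONTEXT_PORT_NO_LISTEN;   /* client only, no listening socket */
    info.protocols = protocols;
    info.pt_serv_buf_size = 65536;        /* raised from the 4096 default */

    return lws_create_context(&info);
}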

Maybe I just need to make some changes in the logger to show the full message?

Example of what I need:

  • The server sends a message like {"key1": "value1", "key2": "value2"}, but bigger than 4096 bytes.
  • We need to receive the WHOLE {"key1": "value1", "key2": "value2"} message, not just {"key1": "value1", "k

I think your explanation will be helpful for MANY people. Thank you.

@lws-team
Member

and not just a part of it and then the next part of another message.

You seem to be in the grip of some misunderstanding. None of the examples, nor lws itself, does that.

Lws passes up to the user code whatever arrived, as it arrived. You can tell whether what just arrived was the last part of a ws message with lws_is_final_fragment(wsi).

There is no upper limit to the size of ws messages, and the ws protocol does not have to declare the message size upfront (the ws fragment size doesn't have to be related to the ws message size). So we leave it up to your user code whether you need to reassemble the pieces or not, since a ws message larger than the physical memory of a given system is entirely possible. You will get ALL the pieces in the correct sequence and an indication of which was the last piece.
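
If you do want the whole message in a single buffer, reassembly in user code is just a few lines; here is a minimal sketch (not code from the examples; the per-session struct and its fields are made up for illustration):

#include <stdlib.h>
#include <string.h>
#include <libwebsockets.h>

/* sketch only: the per-session buffer and its fields are illustrative */
struct per_session_data {
    char   *msg;       /* reassembly buffer */
    size_t  msg_len;   /* bytes buffered so far */
};

static int callback_rx(struct lws *wsi, enum lws_callback_reasons reason,
                       void *user, void *in, size_t len)
{
    struct per_session_data *pss = (struct per_session_data *)user;

    switch (reason) {
    case LWS_CALLBACK_CLIENT_RECEIVE: {
        /* append this chunk to whatever has arrived so far */
        char *p = realloc(pss->msg, pss->msg_len + len + 1);
        if (!p)
            return -1;                     /* close the connection on OOM */
        pss->msg = p;
        memcpy(pss->msg + pss->msg_len, in, len);
        pss->msg_len += len;
        pss->msg[pss->msg_len] = '\0';

        /* only act once the final fragment of the ws message has arrived */
        if (lws_is_final_fragment(wsi)) {
            lwsl_user("complete message, %lu bytes\n",
                      (unsigned long)pss->msg_len);
            free(pss->msg);
            pss->msg = NULL;
            pss->msg_len = 0;
        }
        break;
    }
    default:
        break;
    }

    return 0;
}

For this to work, per_session_data_size in the protocol definition should be sizeof(struct per_session_data) so lws allocates and zeroes it for each connection.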

@LabunskyA
Contributor

The problem is not on the receiver side. The problem is that for some reason the server cannot send more than 4 KiB of data in one lws_write() (I'm still trying to figure out why).

A possible solution is to split the data into chunks of less than 4 KiB and do the transmission over several LWS_CALLBACK_SERVER_WRITEABLE events, one piece at a time, calling lws_callback_on_writable() after each lws_write() until all chunks are sent. A sketch of this follows below.

You will also probably need to mark the end of the message somehow (since you're transmitting JSON, you can use a null terminator, for example) and take care of correct chunk ordering (in case you send more than one message at a time).
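
A rough sketch of that approach (the per-session queue fields and the chunk size are illustrative, not anything from lws; this variant uses ws continuation frames via LWS_WRITE_NO_FIN / LWS_WRITE_CONTINUATION, so the receiver can rely on lws_is_final_fragment() instead of an in-band terminator):

#include <string.h>
#include <libwebsockets.h>

#define CHUNK 4000                        /* stay under the 4 KiB limit seen */

/* illustrative per-session send queue */
struct per_session_data {
    unsigned char buf[LWS_PRE + 65536];   /* payload starts at buf + LWS_PRE */
    size_t        len;                    /* total payload length */
    size_t        sent;                   /* how much has been written so far */
};

/* called from LWS_CALLBACK_SERVER_WRITEABLE in the protocol callback */
static int send_next_chunk(struct lws *wsi, struct per_session_data *pss)
{
    size_t remaining = pss->len - pss->sent;
    size_t n = remaining > CHUNK ? CHUNK : remaining;
    int flags;

    /* first chunk opens a TEXT message, the rest are CONTINUATIONs;
     * every chunk except the last also carries LWS_WRITE_NO_FIN */
    flags = pss->sent ? LWS_WRITE_CONTINUATION : LWS_WRITE_TEXT;
    if (n < remaining)
        flags |= LWS_WRITE_NO_FIN;

    if (lws_write(wsi, pss->buf + LWS_PRE + pss->sent, n,
                  (enum lws_write_protocol)flags) < (int)n)
        return -1;

    pss->sent += n;
    if (pss->sent < pss->len)
        /* ask for another WRITEABLE callback to send the next piece */
        lws_callback_on_writable(wsi);

    return 0;
}

The initial WRITEABLE callback would be requested with lws_callback_on_writable() after filling buf and len.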

@lws-team
Member

Those are two separate issues really...

  1. These two minimal examples show how to use ws fragmentation easily...

https://github.com/warmcat/libwebsockets/tree/master/minimal-examples/ws-client/minimal-ws-client-pmd-bulk

https://github.com/warmcat/libwebsockets/tree/master/minimal-examples/ws-server/minimal-ws-server-pmd-bulk

... for receive, the ws-level fragmentation is removed by lws; chunks of rx payload just appear at the callback in sequence, and you can use lws_is_final_fragment(wsi) to find out whether the current chunk ends the ws message.

  2. For send chunking, it is actually possible to do one big lws_write(). Lws will try to write it on the socket, and if the kernel did not accept it all, it will malloc a buffer and send the remainder itself in the background, suppressing any further user WRITABLE callbacks until it is done.

Lws is told by the kernel via POLLOUT that the socket could accept some more write payload, but there is no way to find out how much until you try the write.

As for how much it sends at one time: on a real system (not everything on localhost), if you send a lot, the tcp window will quickly fill. Each time lws tries to write a buffer to the socket, the entire length is copied into kernel memory by the syscall... if you keep writing 100MB and the kernel is only accepting 2KB based on the tcp window or its own dynamic memory situation, that is a huge inefficiency.

So lws restricts the amount it will try to send at once... the default is 4KiB, but you can control it from the protocol struct:

size_t tx_packet_size;
/**< 0 indicates restrict send() size to .rx_buffer_size for backwards-
 * compatibility.
 * If greater than zero, a single send() is restricted to this amount
 * and any remainder is buffered by lws and sent afterwards also in
 * these size chunks.  Since that is expensive, it's preferable
 * to restrict one fragment you are trying to send to match this
 * size. */
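
For example, it could be set like this in the protocols array (the protocol name, callback and per-session struct here are placeholders):

#include <libwebsockets.h>

/* placeholders for illustration only */
struct per_session_data { char *msg; size_t msg_len; };

static int my_callback(struct lws *wsi, enum lws_callback_reasons reason,
                       void *user, void *in, size_t len)
{
    return lws_callback_http_dummy(wsi, reason, user, in, len);
}

static const struct lws_protocols protocols[] = {
    {
        .name                  = "my-protocol",
        .callback              = my_callback,
        .per_session_data_size = sizeof(struct per_session_data),
        .rx_buffer_size        = 65536,
        .tx_packet_size        = 65536,  /* a single send() is restricted to
                                          * this amount; 0 falls back to
                                          * .rx_buffer_size */
    },
    { NULL, NULL, 0, 0 }                 /* list terminator */
};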

@LabunskyA
Contributor

and send the remainder itself in the background, suppressing any further user WRITABLE callbacks until it is done

I have already read that before, but for some reason that is not what I see in practice. The client just gets stuck waiting for the next piece of data. Maybe I messed up some settings (and not only me, as you can see from this issue), I don't know. It's not really a problem once you learn about it (handling a chunk queue per wsi is very easy), but it was frustrating at first.

@lws-team
Member

I added a -b switch to the minimal-ws-server-pmd-bulk example that causes it to send the whole test data in one big lws_write(); it doesn't have any problem, although I tested it on localhost.

If you can get a clue about what it depends on to make trouble, or find a way to reproduce it in a minimal example, that would be welcome.

@lws-team
Member

... just a thought, what kind of event loop are you using?

@LabunskyA
Contributor

LabunskyA commented Apr 26, 2018

Currently I'm using this one for every worker thread. tss is just a service structure with the context (struct lws_context *) and some thread info.

while (!tss->interrupt) {
    int exit_code = lws_service_tsi(tss->context, 50, (int) tss->thread_id);

    if (exit_code > 0)
        break;
}

But I first encountered the issue with a single-threaded lws_service() loop based on some old examples.
