dmq_usrloc: sync with multi contacts per message #1054
Force-pushed from 2aab10c to 75eee5f
I think I should add a check for the maximum packet length, to make sure we never try to send a datagram larger than 60K. 150 contacts may be too many in some cases, because each contact has a max length of 1024; we should deal with this automatically.
I am reviewing the patch currently, but as for limiting to 60KB, I'm not sure it is necessary for this application. It will be difficult to be 100% accurate anyway, since you only have control over the body - therefore, the only option is to assume a sensible size for the rest of the message and set that aside in your calculation. I think Kamailio handles fragmentation just fine, and since this is really just a few packets on startup/initial sync, I don't think we need to be concerned about it (although it may be worth a mention in the readme). Others may have a different opinion, of course.
This is now deployed on a cluster running on AWS; fragmentation is taking place and everything is performing much better - no more transaction storms. OK to update the readme, changing the example to 50 contacts and adding a comment about allowing 1024 bytes per contact in order to stay below the 65,536-byte UDP limit.
@jchavanton: the documentation has to be edited in the docbook XML files located in …
Force-pushed from 1226c0f to a474232
OK, corrected the documentation; I squashed to avoid too many commits.
The overhead per contact is 188 chars, which is quite a lot; we may select an alternate format later. I think we could validate the size of the packet while building it and automatically send once it exceeds 60,000 bytes, for example.
Force-pushed from a474232 to f51f4df
Are there other changes you'd like to make to this PR?
Hi Charles, I am implementing the size check; I think it would be nightmarish not to know if the limit is ever exceeded. I am verifying that the calculation matches, and I will test and commit later today. Thanks for your review so far.
I added …
Force-pushed from 8ed3190 to 7cb707d
Hi @charlesrchance, I am not planning any other modifications to this PR; the performance of sync is now acceptable for our needs at this point.
One thing that could help would be to increase the retransmission timer for the sync traffic only, but I did not see any obvious way to do this. I guess this is off-topic, and I was planning to discuss it on the mailing list.
@charlesrchance I guess this is ready to be merged, if you are fine with everything so far.
Returning after an extended Easter break - therefore, I have not tested the recent changes, but they look OK from the source. If they have been tested already and everyone else is happy, then this can be merged. Otherwise, I will do it tomorrow.
I did test the latest version in my lab, though not in production yet.
The lab test I did was load testing with 4 nodes; before going to prod we do integration testing, and in this case I see no reason why it would fail.
This was tested with 100K contacts and 4 servers (all running on the same host). When restarting a server, we get a full sync from the other 3 nodes in a few seconds: