dispatcher algorithm 11, no more than 25 hosts in group for proper distribution #2698
Comments
See the docs of the module: note the remark about the exact percentage and the redistribution of the rest. If you want to discuss it more, write to the sr-users@lists.kamailio.org mailing list. If it proves to be a bug in the code, then an issue can be opened here.
Reopened as requested on the mailing list.
I remember Daniel already answered similar questions. From what I recall, the algorithm redistributes the weights as ratios stored in 100 slots and fills the remaining slots with the first destination, so there may be a slight offset. What you are reporting in this issue is that the distribution is not done well when there are more than 25 hosts? I find this statement a bit confusing: compare to: If this is the case, it seems valuable to support more than 25 hosts, or at least to issue a warning.
Good explanation from Daniel on the expected limitations:
We may be able to minimize the limitation of 100 slots by at least distributing the remainders.
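To make the limitation concrete, below is a minimal toy model in C of the 100-slot table described above. This is a sketch written for this discussion, not the actual dispatcher module code, and the names in it (`fill_slots`, `DS_SLOTS`) are made up. It assumes each destination gets floor(rweight_i / total * 100) slots and every leftover slot falls back to the first destination, as Daniel describes:

```c
#include <stdio.h>

#define DS_SLOTS 100  /* fixed-size slot table, per the docs cited above */

/* Toy model of the rweight slot filling discussed in this thread:
 * each destination gets floor(rweight[i] / total * 100) slots and
 * any leftover slots all fall back to destination 0. */
static void fill_slots(int *slots, const int *rweight, int n)
{
    int total = 0, pos = 0;
    for (int i = 0; i < n; i++)
        total += rweight[i];

    for (int i = 0; i < n; i++) {
        int share = rweight[i] * DS_SLOTS / total; /* integer division = floor */
        for (int j = 0; j < share && pos < DS_SLOTS; j++)
            slots[pos++] = i;
    }
    while (pos < DS_SLOTS)  /* the remainder: all of it goes to host 0 */
        slots[pos++] = 0;
}

int main(void)
{
    int slots[DS_SLOTS], rweight[26], hits = 0;

    for (int i = 0; i < 26; i++)
        rweight[i] = 50;  /* 26 equal-weight hosts, as in this report */

    fill_slots(slots, rweight, 26);
    for (int i = 0; i < DS_SLOTS; i++)
        if (slots[i] == 0)
            hits++;
    /* floor(100/26) = 3 slots per host, 26 * 3 = 78 slots filled,
     * so the 22 leftover slots all map to host 0. */
    printf("host 0 owns %d of %d slots\n", hits, DS_SLOTS);
    return 0;
}
```

Run with 26 equal weights it prints 25 of 100 slots for host 0 (3 of its own plus the 22 leftover ones), which lines up with the roughly one quarter of calls described in the report below. With fewer hosts the leftover is smaller (e.g. only 4 slots for 8 equal-weight hosts), which is why the skew is barely noticeable there.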
Not sure, but it seems that without much refactoring we may be able to fix the loss of precision by setting the unused positions to -1/disabled.
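A minimal sketch of that idea, continuing the same toy model (again, an illustration rather than the module code; `fill_slots_spread` is a made-up name): instead of parking every leftover slot on destination 0, spread them round-robin, or write -1 and have the selection skip such slots:

```c
#include <stdio.h>

#define DS_SLOTS 100

/* Toy model of the proposed fix: fill the table as before, then
 * spread the leftover slots round-robin over all destinations.
 * A variant could store -1 (disabled) and skip those slots. */
static void fill_slots_spread(int *slots, const int *rweight, int n)
{
    int total = 0, pos = 0;
    for (int i = 0; i < n; i++)
        total += rweight[i];
    for (int i = 0; i < n; i++) {
        int share = rweight[i] * DS_SLOTS / total;
        for (int j = 0; j < share && pos < DS_SLOTS; j++)
            slots[pos++] = i;
    }
    for (int k = 0; pos < DS_SLOTS; k = (k + 1) % n)
        slots[pos++] = k;  /* leftover: one extra slot per host, in turn */
}

int main(void)
{
    int slots[DS_SLOTS], rweight[26], count[26] = {0};

    for (int i = 0; i < 26; i++)
        rweight[i] = 50;

    fill_slots_spread(slots, rweight, 26);
    for (int i = 0; i < DS_SLOTS; i++)
        count[slots[i]]++;
    /* With 26 equal weights: hosts 0..21 get 4 slots, 22..25 get 3. */
    printf("host 0: %d slots, host 25: %d slots\n", count[0], count[25]);
    return 0;
}
```

With 26 equal weights this gives every host 3 or 4 slots (at most one extra call per host) instead of 22 extra calls on the first, so the loss of precision stays within one slot per destination.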
@jchavanton: let's not use the issue tracker for discussing the documented behaviour of the stable branches. I pointed to the docs in my earlier comment closing this issue and provided more details on the sr-users mailing list. If the devs use the issue tracker as a discussion forum, then we cannot ask other people not to do it. If you want to discuss improvements or changes in behaviour, use the sr-dev mailing list or open a feature request with appropriate details. If @henningw reopened it because he believes it is an issue not related to what is explicitly documented, disregarding my comment, then he can probably provide more details here.
@miconda - I just reopened it because @jchavanton mentioned that it is indeed an issue and should be tracked on the issue tracker.
Hello. I'd like to introduce the user's point of view: while the feature description is accurate, the fractional part is hard to account for once you have around 25 or more hosts. Say there are 100 calls. With 8 hosts and equal rweight values, the distribution will be more or less even, simply because the difference between 100 and 96 is no big deal (the percentage is 12.5, so floor(12.5) * 8 = 96 slots are spread evenly). But it is a completely different case with, say, 26 hosts: the percentage is about 3.8, which gives floor(3.8) * 26 = 78 calls distributed evenly and the remaining 22 all going to a single host.
@henningw: here I already gave a resolution explaining why this is the behaviour, and closed the issue. On the mailing list there was a belief that it might be a problem, in response to a message sent there because of my comment here. If you hurried to act on that mailing list response, you should have replied there that an issue had already been opened, commented on and closed, giving the link to it, instead of reopening it without minimal consideration of what was commented here. Do not reopen closed issues if you have no idea what they are about. If you have technical reasons to believe that a closed issue should be reopened, then do it, adding the appropriate details. Now the discussion is split across two places, it is hard to track, and here we are not discussing a bug in the C code.
@E1isIvan: you have to send your remarks about the currently documented behaviour to the sr-users mailing list, so the discussion can be followed in a single thread. The current behaviour is the expected one: the algorithm is intended for specific use cases and matches those needs. One can eventually complain that this algorithm does not give an even distribution for more than 100 destinations, which is obviously not its design. New algorithms can be contributed if someone needs a new type of routing.
Description
We tried to load-share 50+ media servers using dispatcher algorithm 11 and ran into unexpected behavior: most calls go to the very first host in the group.
Troubleshooting
I tried to start Kamailio like this:
sudo kamailio -f /etc/kamailio/kamailio.cfg -E -d 5 -u 995
but the errors were the same whether the dispatcher group had more than 25 or fewer than 25 hosts.
Different combinations of the dispatcher list host group configuration were tried (the columns being set id, destination URI, and optionally flags, priority and attributes):
100 sip:10.60.27.123:7000 10 rweight=50
100 sip:10.60.27.123:7000 10 weight=50;rweight=50
100 sip:10.60.27.123:7000 10 rweight=50,maxload=80
100 sip:10.60.27.123:7000 0 10 rweight=50
100 sip:10.60.27.123:7000 0 10 weight=50;rweight=50
100 sip:10.60.27.123:7000 0 10 rweight=50,maxload=80
When making 100 calls, about a quarter of them go to the first host in the group.
Reproduction
modparam("dispatcher", "list_file", "/etc/kamailio/dispatcher.list")
modparam("dispatcher", "flags", 2)
modparam("dispatcher", "ds_ping_method", "OPTIONS")
modparam("dispatcher", "ds_probing_threshold", 3)
modparam("dispatcher", "ds_inactive_threshold", 10)
modparam("dispatcher", "ds_probing_mode", 3)
modparam("dispatcher", "ds_ping_interval", 10)
modparam("dispatcher", "ds_ping_reply_codes", "501,403,404,400,200")
modparam("dispatcher", "ds_ping_from",DS_PING_FROM_PARAM)
modparam("dispatcher", "use_default", 0)
if (ds_is_from_list("101")) {
    sl_send_reply("100", "My calls");
    ds_select_dst("100", "11");
    return;
}
The dispatcher group must have more than 25 hosts to reproduce the problem; with 25 or fewer hosts the distribution is OK:
100 sip:10.60.27.123:7000 0 10 rweight=50
100 sip:10.60.27.123:7001 0 10 rweight=50
100 sip:10.60.27.123:7002 0 10 rweight=50
100 sip:10.60.27.123:7003 0 10 rweight=50
100 sip:10.60.27.123:7004 0 10 rweight=50
100 sip:10.60.27.123:7005 0 10 rweight=50
100 sip:10.60.27.123:7006 0 10 rweight=50
100 sip:10.60.27.123:7007 0 10 rweight=50
100 sip:10.60.27.123:7008 0 10 rweight=50
100 sip:10.60.27.123:7009 0 10 rweight=50
100 sip:10.60.27.123:7010 0 10 rweight=50
100 sip:10.60.27.123:7011 0 10 rweight=50
100 sip:10.60.27.123:7012 0 10 rweight=50
100 sip:10.60.27.123:7013 0 10 rweight=50
100 sip:10.60.27.123:7014 0 10 rweight=50
100 sip:10.60.27.123:7015 0 10 rweight=50
100 sip:10.60.27.123:7016 0 10 rweight=50
100 sip:10.60.27.123:7017 0 10 rweight=50
100 sip:10.60.27.123:7018 0 10 rweight=50
100 sip:10.60.27.123:7019 0 10 rweight=50
100 sip:10.60.27.123:7020 0 10 rweight=50
100 sip:10.60.27.123:7021 0 10 rweight=50
100 sip:10.60.27.123:7022 0 10 rweight=50
100 sip:10.60.27.123:7023 0 10 rweight=50
100 sip:10.60.27.123:7024 0 10 rweight=50
To emulate call load and multiple media servers, SIPp scenarios were used.
Debugging Data
Log Messages
SIP Traffic
No specific traffic; regular INVITE messages with international numbers.
Possible Solutions
Additional Information
kamailio -v