Slowdown because of server reflexive address allocation #51
Comments
@masterada You made the original change, so I am just tagging you here to make sure you are aware of the fix proposal, in case you have ideas or opinions. I would love to take in as much information as possible.
Thanks. Currently I think we collect reflexive candidates sequentially, meaning if 10 interfaces time out, that's 30 seconds total. Instead we should do that in parallel so we can bring the time down to 3 seconds. I think we should wait for all candidates though, because waiting for only the first might break some setups. What's worse, it might result in some setups occasionally not working (if one interface works as server reflexive, but its candidate isn't always returned). If we need to do better than 3 seconds, that's what trickle ICE is for (not waiting for server reflexive candidates at all, but sending them over signaling whenever they are ready). I will have some time tomorrow (had a busy 2 weeks :( ), so I can do this (parallel gathering) if you would like.
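A minimal sketch of that parallel approach, assuming a hypothetical `queryStun` helper in place of the real gathering code:

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

// queryStun is a hypothetical stand-in for the real STUN binding request
// used during server reflexive gathering.
func queryStun(iface string, timeout time.Duration) (string, error) {
	// ... send a binding request out of iface, wait up to timeout ...
	return "", fmt.Errorf("timed out on %s after %v", iface, timeout)
}

// gatherParallel queries every interface concurrently, so the total
// wall-clock cost is one timeout instead of one timeout per interface.
func gatherParallel(ifaces []string, timeout time.Duration) []string {
	var (
		mu    sync.Mutex
		wg    sync.WaitGroup
		addrs []string
	)
	for _, iface := range ifaces {
		wg.Add(1)
		go func(iface string) {
			defer wg.Done()
			if addr, err := queryStun(iface, timeout); err == nil {
				mu.Lock()
				addrs = append(addrs, addr)
				mu.Unlock()
			}
		}(iface)
	}
	wg.Wait() // still waits for every candidate, as argued above
	return addrs
}

func main() {
	fmt.Println(gatherParallel([]string{"eth0", "eth1"}, 3*time.Second))
}
```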
@masterada I would prefer if we maybe work towards trickle?! I am doing "get the default and continue, collect the rest in a goroutine" right now. Maybe we could merge that and then hook up some kind of callback so trickle can notify OnIceCandidate?! Sort of a midway solution towards properly working trickle?
Without trickle, gathering the rest of the server reflexive candidates is useless, as server reflexive candidates are ignored as local candidates when creating the checklist. They are only used by the other peer as its remote candidates, and only if it gets them (aka trickle). Once trickle is implemented there will be no point in handling the default interface specially. So if you want a quick fix, I think you should simply use the default interface only; there is no need for the background gathering, as its results can't be used.
With trickle, is the peerConnection responsible for notifying the remote peer (and initiating re-negotiations and so on), or do we only have to make sure that OnIceCandidate is invoked every time there is a new candidate, and with nil when the gathering is complete? If it's the former, I can implement that. It should be fairly simple: I would just add callbacks to the Agent and hook the peer connection up to those callbacks.
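For discussion, a rough sketch of that callback hookup (all names here are illustrative, not the actual pion/ice API):

```go
package ice

// Candidate stands in for the agent's real candidate type.
type Candidate struct{ Address string }

// Agent holds an optional callback that fires for each new candidate
// and once more with nil when gathering is complete.
type Agent struct {
	onCandidate func(*Candidate)
}

// OnCandidate registers the callback the peer connection hooks into.
func (a *Agent) OnCandidate(f func(*Candidate)) { a.onCandidate = f }

// emit delivers a candidate; a nil candidate signals end of gathering.
func (a *Agent) emit(c *Candidate) {
	if a.onCandidate != nil {
		a.onCandidate(c)
	}
}
```

The peer connection would then register via OnCandidate and forward each candidate (or the final nil) to its own OnIceCandidate handler.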
At least that's what I remember, but I will have to read some more to be sure.
Sending the candidates between the peers must be solved by the user (via signaling).
Okay. Thank you for your help. I will mess around with this tomorrow and try to get all three parts working. |
@masterada one thought after sleeping on this: is it worth sending a request if we know it isn't routable? I think in the 'worst case' we are using a bunch of interfaces that don't have a gateway (so we're just blocking for nothing). Trickle ICE is awesome though! I assume it is going to take us some time; I can't do anything but TURN for the next few days (a user has a June 1st deadline), but thank you for contributing, and just work on what you enjoy/find interesting. Don't want to burn anyone out!
@Sean-Der would you be okay with getting trickle in with a flag that defaults to false? That way, for the foreseeable future, it will solve my issue because I will set the trickle flag to on, but it will not influence any other users. I will make the false flag cause the operation to work synchronously, as it does now. Will just add the custom flag inside of the
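Something like this, as a sketch (the `Trickle` field name is an assumption, not settled API):

```go
// AgentConfig sketch: Trickle defaults to false, preserving today's
// synchronous behaviour; true would make GatherCandidates return
// immediately and deliver candidates through the callback instead.
type AgentConfig struct {
	// ... existing fields ...

	// Trickle is a hypothetical name; the final field may differ.
	Trickle bool
}
```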
The whole point of ICE is not to make such assumptions. For example, an AWS EC2 instance may have tens of network interfaces, and even though the primary interface is usually the one facing the internet, there is no guarantee of that. You can, for example, attach an Elastic IP address to an interface that is not the default one, or you may want your pion server to work inside a LAN on a non-default interface (with the STUN server also running inside the LAN), or you might be running multiple pieces of software on it and not want the default interface to be reachable from the outside world, or any number of other reasons. So in the long run we should definitely iterate over all interfaces (or all host candidates to be exact, because if a server reflexive candidate's base address is the same as a host address, it can be eliminated during ICE pairing, which saves time when selecting the working candidate pair; see: https://tools.ietf.org/html/rfc8445#section-6.1.2.4). That said, using only the primary interface will probably cover 99% of use cases, so until we have trickle I think we can just go back to using the primary interface as you suggested.
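A simplified illustration of the elimination step referenced above (RFC 8445 §6.1.2.4), with candidates reduced to plain address strings:

```go
package ice

// pair is a simplified candidate pair keyed by transport addresses.
type pair struct {
	local, remote string
}

// prunePairs follows the spirit of RFC 8445 section 6.1.2.4: a pair whose
// local candidate is server reflexive is rewritten to use that candidate's
// base (a host address), and pairs that become duplicates are dropped.
func prunePairs(pairs []pair, baseOf map[string]string) []pair {
	seen := make(map[pair]struct{}, len(pairs))
	kept := pairs[:0]
	for _, p := range pairs {
		if base, ok := baseOf[p.local]; ok {
			p.local = base // srflx local candidate -> its base
		}
		if _, dup := seen[p]; dup {
			continue // redundant after the substitution
		}
		seen[p] = struct{}{}
		kept = append(kept, p)
	}
	return kept
}
```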
@masterada I am almost done with trickle, with completely asynchronous gathering. I am just fixing up the existing unit tests to "wait" until gathering is complete so connection establishment passes. Let me make a PR in a few and we can go from there.
Update GatherCandidates to not accept any arguments, it pulls everything from AgentConfig instead Relates to #51
When OnCandidate handler isn't defined and trickle is enabled return an error Relates to #51
Resolves pion/ice#51 Co-authored-by: Konstantin Itskov <konstantin.itskov@kovits.com>
What happened?
Between v0.2.7 and v0.2.8 there was a change in the way server reflexive addresses are allocated. Rather than collecting only one (default) address, the gathering process now goes through all of the local interfaces, which adds significant load time.
Here is the part that causes this:
https://github.com/pion/ice/blob/v0.2.8/gather.go#L131
Proposal
Create an asynchronous process that dispatches goroutines to collect the server reflexive addresses but waits only for the default one to come back before continuing with execution.
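A hedged sketch of that flow, reusing the hypothetical `queryStun` helper (and `time` import) from the sketch in the comments above:

```go
// gatherDefaultFirst starts gathering on every interface but blocks only
// until the default interface's reflexive address (or its timeout) returns.
func gatherDefaultFirst(defaultIface string, others []string, timeout time.Duration) string {
	defaultCh := make(chan string, 1)

	go func() {
		addr, err := queryStun(defaultIface, timeout)
		if err != nil {
			addr = "" // timed out; continue without it
		}
		defaultCh <- addr
	}()

	for _, iface := range others {
		go func(iface string) {
			// Results from the remaining interfaces would have to be
			// surfaced later (e.g. via a trickle-style callback).
			_, _ = queryStun(iface, timeout)
		}(iface)
	}

	return <-defaultCh // resume execution once the default answers
}
```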