
feat: immediately queue STUN request after forming pair #476

Conversation

thomaseizinger
Collaborator

In my testing, I've found this to improve connection setup latency by about 0.7 seconds. There might be a better way of solving this but I wanted to open a PR to start the discussion.

The reason this makes things faster is because we would otherwise only queue the STUN requests upon the next handle_timeout to the IceAgent.
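The behavior change can be sketched with a toy model (names like `Agent` and `form_pair` are illustrative, not str0m's actual API): before, a newly formed pair produced no outgoing traffic until the next `handle_timeout` tick; after, a STUN binding request is queued at pair-formation time.

```rust
use std::collections::VecDeque;

struct Agent {
    pairs: Vec<&'static str>,
    transmit_queue: VecDeque<String>,
}

impl Agent {
    fn new() -> Self {
        Agent { pairs: Vec::new(), transmit_queue: VecDeque::new() }
    }

    // Before: the new pair sits idle until the next handle_timeout tick.
    fn form_pair_old(&mut self, pair: &'static str) {
        self.pairs.push(pair);
    }

    // After: a STUN binding request is queued as soon as the pair forms,
    // saving up to one full timeout interval of latency.
    fn form_pair_new(&mut self, pair: &'static str) {
        self.pairs.push(pair);
        self.transmit_queue
            .push_back(format!("STUN binding request for {pair}"));
    }
}

fn main() {
    let mut old = Agent::new();
    old.form_pair_old("pair-1");
    // Nothing to send until handle_timeout runs.
    assert!(old.transmit_queue.is_empty());

    let mut new = Agent::new();
    new.form_pair_new("pair-1");
    // The request is ready to transmit immediately.
    assert_eq!(new.transmit_queue.len(), 1);
}
```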

@thomaseizinger
Collaborator Author

> The reason this makes things faster is because we would otherwise only queue the STUN requests upon the next handle_timeout to the IceAgent.

I could also solve this in my code by calling handle_timeout. This way though, all users of str0m benefit from this improvement.

@thomaseizinger
Collaborator Author

> The reason this makes things faster is because we would otherwise only queue the STUN requests upon the next handle_timeout to the IceAgent.

> I could also solve this in my code by calling handle_timeout. This way though, all users of str0m benefit from this improvement.

Actually, I don't think I can, because handle_timeout has a guard to only work in increments of 50ms and I might be adding candidates in a smaller window than that.
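The guard being described can be modelled like this (a sketch; the real str0m pacing logic differs in detail): `handle_timeout` is effectively a no-op when called again within the 50ms window, so calling it manually right after adding a candidate may accomplish nothing.

```rust
use std::time::{Duration, Instant};

struct Agent {
    last: Option<Instant>,
}

impl Agent {
    // Returns true if the call actually progressed timers (and could
    // have queued STUN requests), false if the pacing guard swallowed it.
    fn handle_timeout(&mut self, now: Instant) -> bool {
        if let Some(last) = self.last {
            if now < last + Duration::from_millis(50) {
                return false; // too soon: guard rejects the call
            }
        }
        self.last = Some(now);
        true
    }
}

fn main() {
    let t0 = Instant::now();
    let mut agent = Agent { last: None };
    assert!(agent.handle_timeout(t0));
    // A second call 10ms later is swallowed by the 50ms guard.
    assert!(!agent.handle_timeout(t0 + Duration::from_millis(10)));
    // After the window has passed, the call works again.
    assert!(agent.handle_timeout(t0 + Duration::from_millis(60)));
}
```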

@algesten
Owner

algesten commented Mar 6, 2024

Interesting!

In the rest of str0m the order of actions is always:

  1. Make changes to session.
  2. poll_timeout
  3. handle_timeout

Depending on the change in 1, the timeout in 2 might be 0 to schedule immediate sending of things in 3. Note that the library user is expected not to delay steps 2-3 if they made changes to the session.

Looking at agent.rs, I wonder if we can follow the same pattern? poll_timeout gives us next_binding_attempt for every pair, and it seems like we should be able to make this have no timeout for new pairs.
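The suggested pattern can be sketched with a toy agent (`ToyAgent` and its fields are hypothetical, not str0m's real types): a pair that has never had a binding attempt makes `poll_timeout` return "now", so the driving loop calls `handle_timeout` immediately instead of waiting out the pacing interval.

```rust
use std::time::{Duration, Instant};

struct ToyAgent {
    // Pairs that have never been sent a binding request.
    fresh_pairs: usize,
    last_tick: Instant,
}

impl ToyAgent {
    // Step 1: make a change to the session.
    fn add_pair(&mut self) {
        self.fresh_pairs += 1;
    }

    // Step 2: a fresh pair means "wake me now"; otherwise wait out the
    // regular 50ms pacing interval.
    fn poll_timeout(&self, now: Instant) -> Instant {
        if self.fresh_pairs > 0 {
            now
        } else {
            self.last_tick + Duration::from_millis(50)
        }
    }

    // Step 3: on wakeup, fresh pairs get their binding attempts queued.
    fn handle_timeout(&mut self, now: Instant) {
        self.fresh_pairs = 0;
        self.last_tick = now;
    }
}

fn main() {
    let now = Instant::now();
    let mut agent = ToyAgent { fresh_pairs: 0, last_tick: now };

    agent.add_pair();
    // A fresh pair makes the next timeout fire immediately.
    assert_eq!(agent.poll_timeout(now), now);

    agent.handle_timeout(now);
    // With no fresh pairs, we fall back to the pacing interval.
    assert!(agent.poll_timeout(now) > now);
}
```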

@thomaseizinger
Collaborator Author

> In the rest of str0m the order of actions is always:
>
> 1. Make changes to session.
> 2. `poll_timeout`
> 3. `handle_timeout`
>
> Depending on the change in 1, the timeout in 2 might be 0 to schedule immediate sending of things in 3. Notice that it's expected the library user doesn't delay step 2-3 if they made changes to the session.

I guess that is what we are currently doing "wrong". We create a sleep_until future upon poll_timeout and don't call handle_timeout earlier than that.

I guess that could be changed to reset this future any time state changes within our node. That is probably the more correct approach to handling these timeouts.

I'll experiment with that.
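The fix being considered can be sketched as follows (std-only model; in the real event loop this would be a pinned tokio `Sleep` future whose deadline gets `reset`): instead of committing to the deadline computed at the last `poll_timeout`, re-query the agent after every state change and adopt the earlier wake-up time.

```rust
use std::time::{Duration, Instant};

fn main() {
    let now = Instant::now();
    // Deadline from the last poll_timeout: the usual 50ms pacing tick.
    let mut deadline = now + Duration::from_millis(50);

    // A state change happens (e.g. a new candidate pair is added) ...
    let state_changed = true;

    if state_changed {
        // ... so re-query the agent instead of waiting out the old
        // deadline. Hypothetical result: "wake now".
        let fresh = now;
        if fresh < deadline {
            deadline = fresh; // reset the sleep to the earlier time
        }
    }

    // handle_timeout now runs immediately rather than up to 50ms later.
    assert_eq!(deadline, now);
}
```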

@algesten
Owner

algesten commented Mar 6, 2024

Yeah, and the final tweak would be to lower that 50ms to 0 for this case.

@thomaseizinger
Collaborator Author

I guess this is generally a downside of sans-IO :(

There are plenty of opportunities to use the API the "wrong" way.

@algesten
Owner

algesten commented Mar 6, 2024

Definitely!

I'm speculating that there might be a pattern or something that could be "discovered" to unlock that. If only 10% of the effort spent on perfecting async in Rust was spent on it, we'd probably have solved it already :)

@thomaseizinger
Collaborator Author

thomaseizinger commented Mar 6, 2024

> Definitely!
>
> I'm speculating that there might be a pattern or something that could be "discovered" to unlock that. If only 10% of the effort spent on perfecting async in Rust was spent on it, we'd probably have solved it already :)

Instead of a separate `poll_timeout`, it might be more intuitive to return it as part of `poll_event`, like `Event::WakeIn`.
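The suggested API shape could look something like this (`Event` and its variants are hypothetical, not str0m's actual types): the timeout is just another event the driving loop consumes, with `Duration::ZERO` meaning "call `handle_timeout` immediately".

```rust
use std::time::Duration;

enum Event {
    Transmit(Vec<u8>), // datagram to put on the wire
    WakeIn(Duration),  // call handle_timeout after this long
}

// Drain the event queue; return when the caller should wake up next.
fn drive(events: Vec<Event>) -> Option<Duration> {
    let mut wake = None;
    for event in events {
        match event {
            Event::Transmit(bytes) => {
                // ... send bytes on the socket here ...
                let _ = bytes;
            }
            Event::WakeIn(d) => wake = Some(d),
        }
    }
    wake
}

fn main() {
    let wake = drive(vec![
        Event::Transmit(vec![0x01]),
        Event::WakeIn(Duration::ZERO),
    ]);
    // A zero duration means "call handle_timeout right now".
    assert_eq!(wake, Some(Duration::ZERO));
}
```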

@thomaseizinger
Collaborator Author

I've found a combination of firezone/firezone#4022 and #477 to achieve the same effect which seems like a cleaner solution.
