
Issue 460 #467

Merged: 1 commit merged on Mar 13, 2019 (branch: ipv6-timeout)
Conversation

@trivigy (Member) commented Mar 3, 2019

This is a temporary solution; we should not have it lying around normally. Once v2.0 is out and fixes the underlying IPv6 issues with server reflexive candidates, we should remove this support.

Resolves #460

configuration.go (review comment, outdated, resolved)
@trivigy force-pushed the ipv6-timeout branch 2 times, most recently from a9b04a8 to fac3173 on March 3, 2019 02:15
@Sean-Der (Member) commented Mar 3, 2019

Reading this, it looks like the only real solution is to implement trickle ICE, because we could still have an issue if we had busted IPv4 :( It doesn't look like we are that far away either.

We could do these short term; I don't think they add tech debt (and they could still provide value after trickle is done) @backkem @maxhawkins @trivigy

  • Add a configurable timeout to srflx candidates and put it here

  • Add an option to select ICECandidateTypes so users can pass an array of UDP4, UDP6, TCP4, or TCP6. I would prefer that to a boolean (it would give people more flexibility); a rough sketch of that option follows below
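
A rough sketch of how that second option could look from the user side. This is hedged: the method and constant names here are modeled on the SetNetworkTypes config that the final commit message below says was added, and the v2 import path is assumed.

package main

import "github.com/pion/webrtc/v2"

func main() {
	// Hypothetical usage: restrict candidate gathering to IPv4
	// network types instead of toggling a single IPv6 boolean.
	s := webrtc.SettingEngine{}
	s.SetNetworkTypes([]webrtc.NetworkType{
		webrtc.NetworkTypeUDP4,
		webrtc.NetworkTypeTCP4,
	})

	// PeerConnections produced by this API would only gather
	// candidates on the selected network types.
	api := webrtc.NewAPI(webrtc.WithSettingEngine(s))
	_ = api
}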

@trivigy (Member, Author) commented Mar 3, 2019

@Sean-Der @maxhawkins Is there an example that shows how to pass a SettingEngine from the user side? All I know is setting things via the Configuration. If I can see how to do it, I will make the changes quickly. This bug is a priority for me because my software does a high churn of new connections; 5 seconds per connection brings it to a complete halt.

@Sean-Der (Member) commented Mar 3, 2019

Hey @trivigy

We use the MediaEngine in examples/save-to-disk, and this patch shows how to use the SettingEngine; it is where we are putting all our non-standard behavior so far.

diff --git a/examples/save-to-disk/main.go b/examples/save-to-disk/main.go
index 24fac9e..56d1e46 100644
--- a/examples/save-to-disk/main.go
+++ b/examples/save-to-disk/main.go
@@ -41,8 +41,11 @@ func main() {
        m.RegisterCodec(webrtc.NewRTPOpusCodec(webrtc.DefaultPayloadTypeOpus, 48000))
        m.RegisterCodec(webrtc.NewRTPVP8Codec(webrtc.DefaultPayloadTypeVP8, 90000))

+       s := webrtc.SettingEngine{}
+       s.SetEphemeralUDPPortRange(1000, 1500)
+
        // Create the API object with the MediaEngine
-       api := webrtc.NewAPI(webrtc.WithMediaEngine(m))
+       api := webrtc.NewAPI(webrtc.WithMediaEngine(m), webrtc.WithSettingEngine(s))

        // Everything below is the pion-WebRTC API! Thanks for using it ❤.

@trivigy (Member, Author) commented Mar 4, 2019

@Sean-Der correct me if I am wrong: the API is basically a webrtc builder class meant for customization? So I put the non-standard stuff into the SettingEngine{}, initialize the API with it, and it exposes a NewPeerConnection func that acts as a PeerConnection producer with those customizations? If I got this right, it's a fairly easy change.

@Sean-Der (Member) commented Mar 4, 2019

yep that is exactly right @trivigy!

It is just a container for any non-standard behavior we want to add.
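
Putting the two comments together, a minimal sketch of that builder flow, mirroring the diff above. The empty Configuration is a placeholder, and the v2 import path is assumed.

package main

import "github.com/pion/webrtc/v2"

func main() {
	// The engines hold the customizations.
	m := webrtc.MediaEngine{}
	m.RegisterCodec(webrtc.NewRTPOpusCodec(webrtc.DefaultPayloadTypeOpus, 48000))

	s := webrtc.SettingEngine{}
	s.SetEphemeralUDPPortRange(1000, 1500)

	// The API bundles the engines and acts as a PeerConnection
	// producer that carries those customizations.
	api := webrtc.NewAPI(webrtc.WithMediaEngine(m), webrtc.WithSettingEngine(s))

	pc, err := api.NewPeerConnection(webrtc.Configuration{})
	if err != nil {
		panic(err)
	}
	defer pc.Close()
}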

@coveralls commented Mar 4, 2019

Pull Request Test Coverage Report for Build 2518

  • 61 of 69 (88.41%) changed or added relevant lines in 6 files are covered.
  • 12 unchanged lines in 5 files lost coverage.
  • Overall coverage decreased (-0.03%) to 80.54%

Changes missing coverage:

File                         Covered lines  Changed/added lines  %
internal/ice/networktype.go  12             14                   85.71%
internal/ice/util.go         11             14                   78.57%
settingengine.go             0              3                    0.0%

Files with coverage reduction:

File                         New missed lines  %
internal/ice/transport.go    2                 60.0%
internal/ice/util.go         2                 56.45%
internal/ice/agent.go        2                 85.77%
lossy_stream.go              2                 81.13%
dtlstransport.go             4                 78.57%

Totals:

Change from base (Build 2477): -0.03%
Covered lines: 4143
Relevant lines: 5144

💛 - Coveralls

icegatherer.go (review comment, resolved)
// The conditions of invalidation written below are defined in
// https://tools.ietf.org/html/rfc8445#section-5.1.1.1
if !IPv4Supported && !IPv6Supported {
@trivigy (Member, Author) commented on this diff:

I worked out the logic below with a binary conditional table. It still maintains the logic you wanted, with the last line being ips = append(ips, ip); all the other statements work by eliminating the IPs that do not fit the list of supported network types passed in.
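
A self-contained sketch of that elimination logic, under the assumption that it boils down to filtering IPs by address family. The function name and signature here are hypothetical; the real code lives in internal/ice per the coverage report above.

package main

import (
	"fmt"
	"net"
)

// filterIPs keeps only IPs whose address family is in the supported set.
// Every branch except the final append eliminates candidates, mirroring
// the binary conditional table described above.
func filterIPs(candidates []net.IP, ipv4Supported, ipv6Supported bool) []net.IP {
	var ips []net.IP
	for _, ip := range candidates {
		isIPv4 := ip.To4() != nil
		if isIPv4 && !ipv4Supported {
			continue // eliminate: IPv4 not supported
		}
		if !isIPv4 && !ipv6Supported {
			continue // eliminate: IPv6 not supported
		}
		ips = append(ips, ip)
	}
	return ips
}

func main() {
	candidates := []net.IP{net.ParseIP("192.0.2.1"), net.ParseIP("2001:db8::1")}
	fmt.Println(filterIPs(candidates, true, false)) // prints [192.0.2.1]
}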

networktype.go (review comment, resolved)
peerconnection.go (review comment, outdated, resolved)
@trivigy force-pushed the ipv6-timeout branch 2 times, most recently from 5f16ef8 to f2d51ac on March 8, 2019 21:13
@trivigy changed the title from “Fix an issue with ipv6 srflx candidates gathering” to “issue-460” on Mar 8, 2019
@trivigy changed the title from “issue-460” to “Issue 460” on Mar 8, 2019
@Sean-Der force-pushed the ipv6-timeout branch 2 times, most recently from a8495ae to 022eed4 on March 13, 2019 00:13
@trivigy force-pushed the ipv6-timeout branch 2 times, most recently from 6ce6fe8 to 468b1b2 on March 13, 2019 01:25
- Fix an issue with ipv6 srflx candidates gathering.
- Add SetNetworkTypes config to SettingEngine to control what network
  types are allowed to be connected.

Resolves #460
@backkem backkem removed the review label Mar 13, 2019
@trivigy trivigy deleted the ipv6-timeout branch March 13, 2019 01:37
Merging this pull request may close: Slow connection times (#460)