Regression: connection failed #78
Oof, sorry about that @bogdan-d. Will debug+fix after work, thanks!
Hey @bogdan-d I pushed that to master, just to be safe. Can you test again and see if that fixes it for you? I haven't had a chance to confirm that it does.
@Sean-Der I can't build master because ice was moved to internals.

```go
peerConnection.OnICEConnectionStateChange = func(connectionState ice.ConnectionState) {
	fmt.Printf("Connection State has changed %s \n", connectionState.String())
	switch connectionState {
	case ice.ConnectionStateCompleted:
		streamer.SetVideoOut(h264track.Samples)
		streamer.SetAudioOut(opusTrack.Samples)
	case ice.ConnectionStateFailed:
		streamer.SetVideoOut(nil)
		streamer.SetAudioOut(nil)
	}
}
```
Tried commit 12e7e87, still the same.
@bogdan-d I reverted moving ICE to internals. Are both the client+server using STUN servers? Before, pion-WebRTC would just accept any ICE traffic (because candidate parsing wasn't done), but now it filters. So make sure the IPs that each are communicating via are in the SDP. It worked for me when I tried, but I will try a couple of different scenarios.
Hey @bogdan-d I updated the examples, that should be more useful. I just tested and the following scenario works using master. Would like to fix up your scenario; I am hoping that the ICE security checks (and lack of STUN on one side) are what is breaking you.
Hey @Sean-Der

```go
// Create a new RTCPeerConnection
peerConnection, err := webrtc.New(webrtc.RTCConfiguration{
	ICEServers: []webrtc.RTCICEServer{
		{
			URLs: []string{"stun:stun.l.google.com:19302"},
		},
	},
})
```

```javascript
let pc = new RTCPeerConnection({iceServers: [{urls: "stun:stun.l.google.com:19302"}]});
```
Here are the SDPs of both request and response.
Hey @bogdan-d the request doesn't have any candidates, so pion-WebRTC is not going to allow any inbound traffic. This worked before because we just allowed all traffic (ice-lite just isn't that smart). Can you update your requester to only send the SDP after ICE gathering has finished?
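The missing-candidates diagnosis above can be checked mechanically. The helper below is a hypothetical sketch (not part of pion-WebRTC) that counts `a=candidate:` lines in an SDP blob; a request with zero candidates is exactly the case that gets all inbound traffic dropped:

```go
package main

import (
	"fmt"
	"strings"
)

// candidateCount reports how many ICE candidate lines an SDP blob contains.
// Zero candidates means the remote side has no addresses it is allowed to
// accept traffic from once candidate filtering is enforced.
func candidateCount(sdp string) int {
	n := 0
	for _, line := range strings.Split(sdp, "\n") {
		// TrimSpace also drops the trailing "\r" left by CRLF line endings.
		if strings.HasPrefix(strings.TrimSpace(line), "a=candidate:") {
			n++
		}
	}
	return n
}

func main() {
	// Minimal made-up SDP fragment, for illustration only.
	offer := "v=0\r\n" +
		"a=candidate:1 1 udp 2130706431 192.168.1.10 54400 typ host\r\n" +
		"a=candidate:2 1 udp 1694498815 203.0.113.7 54401 typ srflx\r\n"
	fmt.Printf("offer contains %d candidate(s)\n", candidateCount(offer)) // prints "offer contains 2 candidate(s)"
}
```

Running this against the posted request/response SDPs would show the request side at zero, matching the symptom described.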
Thanks, that fixed the issue. One problem is that collecting ICE candidates takes a long time (about 30 seconds), but that's not related to the issue.
Wohoo! Glad that fixed it. Is it slow in pion-WebRTC, or your browser? Can you maybe drop slow ICE candidates (pion-WebRTC doesn't support TURN, so you can drop those) or maybe choose better STUN servers? Trickle ICE is on our roadmap, but not in the near future :( Thanks again for using pion-WebRTC and dealing with all the breakage! I know it is frustrating, but hopefully it will get better quickly.
Thanks a lot for your help! It's slow in the browser, and it doesn't depend on the STUN server: discovering all ICE candidates always takes 39 seconds (using Chrome). You can check it here https://webrtc.github.io/samples/src/content/peerconnection/trickle-ice/
Here is a trick I've found to solve that:

```javascript
let timeout;
pc.onicecandidate = e => {
    console.log("New candidate", e.candidate);
    if (timeout === undefined) {
        // wait 1 second for additional ice candidates
        timeout = setTimeout(() => {
            let xhr = new XMLHttpRequest();
            xhr.open("POST", window.location.href + "stream", true);
            xhr.setRequestHeader('Content-Type', 'text/plain');
            xhr.onload = function () {
                if (this.status === 200) {
                    pc.setRemoteDescription(new RTCSessionDescription({
                        type: 'answer',
                        sdp: atob(this.responseText)
                    }));
                } else {
                    alert("Failed to create new session");
                }
            };
            xhr.onerror = function () {
                document.getElementById("message").innerHTML = "Failed to load " + window.location.href + "stream";
            };
            xhr.send(btoa(pc.localDescription.sdp));
        }, 1000);
    }
};
```
One more problem related to the issue: now I don't see
It looks like a deadlock; after the connected state there are no other state changes, even
Hey @bogdan-d I pushed a small fix to master (unrelated to this issue though, I believe). I am not able to reproduce the deadlock; I just ran the  There was a deadlock in ICEAgent, but it was fixed. I don't have any idea except
Also @bogdan-d
Hey @bogdan-d just checking in, anything I can do here to help? |
Here it looks like a deadlock: semacquire, 6 minutes.
Hey @bogdan-d sorry for the delay, just been hit with so many different issues. Do you have an easy way to reproduce? I don't understand how the lock in the portsManager could be the issue. @kc5nra @trivigy @backkem any ideas? We could make that an atomic really easily, but I would rather fully understand the issue.
Hey @Sean-Der, I haven't looked into it in depth yet, I was on vacation. But I also have no idea why it's blocking. Maybe it was related to go 1.11beta3; I will try building master with the new stable release.
PortsManager + IceAgent are both protected by a mutex, and they both have the ability to call into and lock the other. In a reproducible deadlock the IceAgent would send a message via the portsManager, and at the same time the portsManager was attempting to query the IceAgent for valid candidates. This is fixed by moving the portsManager to a ReadLock. The other places where we lock don't have this interaction. Resolves #78, Resolves #108
The latest working commit is c87c3ca.
Current master doesn't establish a connection at all; I think the issue is related to the latest ICE changes.
Tested with the Chrome browser.
Network map:
client -> NAT -> server with real ip