Continued TCP-ICE Improvements #245

Open
4 of 6 tasks
Sean-Der opened this issue Jul 15, 2020 · 8 comments
Labels
enhancement New feature or request

Comments

@Sean-Der
Member

Sean-Der commented Jul 15, 2020

This is a list of all the things we can do to continue improving the TCP-ICE implementation that landed.

  • Move to internal (a small root package that isn't littered with protocol details is great)
  • Take a net.Listener instead of having global state
  • Expose a net.PacketConn based API on the Agent
  • Implement Active
  • Implement simultaneous-open
  • Explore if we can remove some concurrency from the API/reduce LoC
  • Fix ICE TCP Priority. Right now passive TCP host candidates will have higher priorities than prflx/srflx so TCP will be used even when UDP connectivity is possible
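The priority bullet can be made concrete. Below is a minimal sketch of the RFC 8445 candidate priority formula with illustrative type-preference values (126 for host, 100 for srflx; the lowered TCP value of 99 is hypothetical, not pion's), showing why a passive TCP host candidate currently outranks a UDP srflx candidate:

```go
package main

import "fmt"

// icePriority computes the RFC 8445 candidate priority:
//   priority = (2^24)*typePref + (2^8)*localPref + (2^0)*(256 - componentID)
func icePriority(typePref, localPref, componentID uint32) uint32 {
	return (1<<24)*typePref + (1<<8)*localPref + (256 - componentID)
}

func main() {
	// If host candidates get the usual type preference (126) regardless of
	// transport, a passive TCP host candidate outranks a UDP srflx
	// candidate (type preference 100) -- the bug described above.
	tcpHost := icePriority(126, 65535, 1)
	udpSrflx := icePriority(100, 65535, 1)
	fmt.Println(tcpHost > udpSrflx) // true

	// One possible fix, in the spirit of RFC 6544: give TCP candidates a
	// lower type preference (99 here is illustrative) so UDP wins when
	// connectivity over UDP is possible.
	tcpHostLowered := icePriority(99, 65535, 1)
	fmt.Println(tcpHostLowered < udpSrflx) // true
}
```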
@jeremija
Member

jeremija commented Jul 15, 2020

I think the spec is also pretty specific about situations in which inbound connections need to be dropped. Currently we'll accept any TCP connection with the right ufrag, but when receiving active (inbound) TCP connections, we should only accept those whose source IP matches a remote candidate we already have (the TCP port is irrelevant in this case because browsers will just set it to 9).

We shouldn't drop the TCP connection during ICE restarts, so that's another thing to think about.

Also, I think the way priorities are calculated might need to be changed for TCP candidates.

jeremija added a commit that referenced this issue Jul 21, 2020
This addresses a few points of issue #245:

 - Take a net.Listener instead of having global state
 - Expose a net.TCPMux based API

Also, the unused closeChannel was removed from tcp_mux.go

Closes #253.
jeremija added a commit that referenced this issue Aug 1, 2020
This addresses a few points of issue #245:

 - Take a net.Listener instead of having global state
 - Expose a net.TCPMux based API

Also, the unused closeChannel was removed from tcp_mux.go

Closes #253.
@jeremija
Member

jeremija commented Aug 9, 2020

We should also provide a way to figure out which network type is currently in use by an ICE Agent. For example, there's no need for a jitter buffer when TCP is in use.

Perhaps there could be a function:

func (a *Agent) SelectedNetworkType() (t NetworkType, ok bool) {
  pair := a.getSelectedPair()
  if pair == nil {
    return t, false
  }

  return pair.local.NetworkType(), true
}

And then we could expose this information in

func (pc *PeerConnection) SelectedNetworkType() (t NetworkType, ok bool) {
  nt, ok := pc.iceTransport.gatherer.getAgent().SelectedNetworkType()
  if !ok {
    return t, ok
  }

  t, err := NewNetworkType(nt)
  return t, err == nil
}
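To illustrate how the proposed accessor might be consumed, here is a self-contained sketch with stand-in types; `NetworkType`, its values, and `wantJitterBuffer` are illustrative stubs for this example, not pion's definitions:

```go
package main

import "fmt"

// NetworkType is a stand-in for pion's network type enum.
type NetworkType int

const (
	NetworkTypeUDP4 NetworkType = iota
	NetworkTypeTCP4
)

func (t NetworkType) IsTCP() bool { return t == NetworkTypeTCP4 }

// wantJitterBuffer decides whether to allocate a jitter buffer given the
// result of the proposed SelectedNetworkType call. When no pair is selected
// yet (ok == false), keep the buffer to be safe.
func wantJitterBuffer(t NetworkType, ok bool) bool {
	if !ok {
		return true
	}
	return !t.IsTCP()
}

func main() {
	fmt.Println(wantJitterBuffer(NetworkTypeTCP4, true)) // false
	fmt.Println(wantJitterBuffer(NetworkTypeUDP4, true)) // true
}
```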

@jech
Contributor

jech commented Aug 13, 2020

func (pc *PeerConnection) SelectedNetworkType() (t NetworkType, ok bool) {

How would the client know when this information becomes invalid and must be recomputed? Can it only change when ICE transitions to the failed state, or can it change at any time?

(I agree that this kind of information is useful, for example I'd like to ignore NACK requests on TCP connections since the data will get resent by the transport anyway. OTOH, I'd argue that you still need a jitter buffer with TCP. Due to retransmissions and head-of-line blocking, the amount of jitter is higher with TCP than with UDP. And while TCP will not reorder packets, there's still the possibility of an intermediary that does reordering. Case in point, an SFU might be proxying from UDP to TCP, so the traffic carried on the TCP connection is already reordered due to the UDP leg. Sorry for the digression.)

@jeremija
Member

Those are all very good points!

How would the client know when this information becomes invalid and must be recomputed? Can it only change when ICE transitions to the failed state, or can it change at any time?

This is what I'm not sure about. If it only changes on ICE state changes, perhaps it's better to convey this information via the OnICEConnectionStateChange?

@jech
Contributor

jech commented Oct 2, 2020

It's easier for the client to query the sender/receiver/transceiver than to keep track of the value last passed to the callback. This is analogous to the ICE connection state, which is passed to the callback but also available directly from a PeerConnection method.

@dgunay

dgunay commented Oct 7, 2021

@Sean-Der I am unblocked on my issue (was able to get Wowza to enable UDP ICE for me), but seeing as it's October and I try and do Hacktoberfest every year, this seems like a great place to contribute. I'd like to try and help implement active TCP-ICE.

I've only been grappling with low level WebRTC stuff for a few weeks now so I'm not sure where to get started. If you have any guidance on parts of the code I should familiarize myself with before breaking ground, it would be much appreciated.

And thanks for spearheading this project by the way, WebRTC looks scary and complicated from my perspective as an outsider to the technology but your efforts and attitude are super inspiring.

@Sean-Der
Member Author

Sean-Der commented Oct 8, 2021

@dgunay that is great news!

I think the best place to start would be Active TCP Candidates. Simultaneous Open is something we could add later.

I would start with just opening a TCP socket in addRemoteCandidate that connects to the other side. The full version will always generate a local TCP candidate, but only give it a socket and dial when a remote exists.

TestTCPMux_Recv is a test that shows the listening side. You can copy this and then implement the connecting side.

@Sean-Der
Member Author

Sean-Der commented Oct 8, 2021

Fire any/all questions at me in Slack. Would love to help you get started :)

@stv0g stv0g added the enhancement New feature or request label Apr 12, 2023