webrtc: run onDone callback immediately on close #2729

Merged · 3 commits into master · Mar 25, 2024

Conversation

sukunrt (Member) commented Mar 8, 2024

No description provided.

sukunrt force-pushed the webrtc-cleanup-quickly branch 2 times, most recently from 138f0a2 to da66248, on March 9, 2024 08:37
MarcoPolo (Collaborator) left a comment:

A couple of small things. Can you remind me why we want to run the onDone callback immediately on close? What was the old behavior?

@@ -98,6 +100,50 @@ func getDetachedDataChannels(t *testing.T) (detachedChan, detachedChan) {
return <-answerChan, <-offerRWCChan
}

// checkDataChannelClosed checks if the datachannel has been closed.
MarcoPolo (Collaborator):

This comment is wrong; can you please update it to describe checkDataChannelOpen?

// It sends empty messages on the data channel to check if the channel is still open.
// The control message reader goroutine depends on exclusive access to datachannel.Read
// so we have to depend on Write to determine whether the channel has been closed.
func checkDataChannelOpen(t *testing.T, dc *datachannel.DataChannel) {
MarcoPolo (Collaborator):

nit: rename these to assertDataChannel{Open,Closed}. They don't return a value; they assert some state and fail the test otherwise.
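
For illustration, a minimal sketch of what the renamed open-check helper might look like; the probing logic below is an assumption based on the comment above ("sends empty messages ... depend on Write"), not the PR's actual code:

// assertDataChannelOpen fails the test if the channel no longer accepts writes.
// Hypothetical sketch: the real helper may retry or use timeouts.
func assertDataChannelOpen(t *testing.T, dc *datachannel.DataChannel) {
	t.Helper()
	// The control message reader goroutine has exclusive access to
	// datachannel.Read, so liveness is probed with an empty Write instead.
	if _, err := dc.Write([]byte{}); err != nil {
		t.Fatalf("expected data channel to be open, but write failed: %v", err)
	}
}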

sukunrt (Member, Author) commented Mar 15, 2024

We modified the specs to do a synchronous datachannel close after finding out that Chrome's implementation of datachannels drops enqueued messages if the datachannel is closed. See libp2p/specs#575 (comment).

When implementing this spec change, I implemented it such that the stream resource manager scope is closed only after reading the FIN_ACK. This required a rather ugly hack in swarm_stream, introducing an asyncCloser interface. See this comment: #2615 (comment).
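
(For context, that interface had roughly the following shape; this is an illustrative reconstruction, not the exact definition that lived in swarm_stream:)

// Illustrative reconstruction of the asyncCloser hack; the real
// definition may have differed.
type asyncCloser interface {
	// AsyncClose closes the stream and calls onDone once the close
	// has fully completed (e.g. after the FIN_ACK was read).
	AsyncClose(onDone func()) error
}

swarm_stream would presumably type-assert its underlying stream against this interface so it could defer releasing the stream scope until the callback fired.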

After implementing this, I realized that the datachannel close procedure itself is a synchronous close: the SCTP layer doesn't drop references to the stream till it gets a datachannel close message from the other end. So calling onDone in a separate goroutine immediately after datachannel close didn't help anything, since the reference to the datachannel would be kept for 1 RTT anyway. I removed the AsyncClose bit for the same reason and decided to close the stream scope early. I should have done this when I changed #2615. See this comment: #2615 (comment).
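
So after this change the close path looks roughly like the sketch below (simplified, with illustrative names and assuming the usual io and sync imports; not the transport's exact code):

// Simplified sketch of running onDone synchronously on Close.
type stream struct {
	closeOnce   sync.Once
	dataChannel io.Closer // the pion datachannel
	onDone      func()    // releases the stream's resource manager scope
}

func (s *stream) Close() error {
	s.closeOnce.Do(func() {
		// Close returns quickly on our side; the remote keeps its
		// reference to the channel for ~1 RTT, until it processes the
		// datachannel close message.
		s.dataChannel.Close()
		// Run onDone immediately rather than in a goroutine that waits
		// for the FIN_ACK: waiting frees nothing earlier, since pion
		// holds the channel for that 1 RTT regardless.
		s.onDone()
	})
	return nil
}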

MarcoPolo (Collaborator):
Does this mean that the resource manager is off from the true resource usage? For example, if a client closes a stream and onDone is called immediately, the resource manager frees that stream, but that memory is actually still reserved by pion until we finish the close. Is that correct?

sukunrt (Member, Author) commented Mar 19, 2024

Yes, it'll be freed only on datachannel close, which happens 1 RTT after we call datachannel.Close. If we want to keep the resource manager usage in sync, we have to do it after datachannel close, and not after we receive the FIN_ACK from the peer.

MarcoPolo (Collaborator):
Is it significantly harder to call the onDone callback after we close the datachannel (and thus free the resource token in the resource manager)? That seems to me the most correct solution, because it protects against a hypothetical memory-exhaustion attack where a server can have a client think it has freed the resources for a stream while that memory is actually still reserved. Unless I'm misunderstanding something and we get this protection some other way?
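
For illustration, that alternative might look like the sketch below, assuming the transport keeps a handle to the pion *webrtc.DataChannel and that its OnClose callback fires once pion has actually released the channel (whether that hook fires at the right point for detached datachannels is an assumption here):

// Sketch: free the resource token only once pion reports the channel
// closed. dc and onDone are illustrative names.
func closeAndReleaseLate(dc *webrtc.DataChannel, onDone func()) error {
	dc.OnClose(func() {
		// Pion has dropped its reference by now, so resource manager
		// accounting matches true memory usage.
		onDone()
	})
	return dc.Close()
}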

sukunrt requested a review from MarcoPolo on March 22, 2024 06:48
sukunrt merged commit 6bb53b2 into master on Mar 25, 2024
11 checks passed