Optimize reads with buffering #480

Merged: 1 commit merged from read-buffer into master on Mar 13, 2019
Conversation

@kixelated (Contributor)

Increases the Read performance to acceptable levels. Packet loss went
from roughly 8% to 0.5% with this change.
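
For readers outside the thread, a minimal sketch of the general idea (illustrative names only, not the actual pion code): rather than the network goroutine and the consumer meeting on a channel for every packet, incoming packets are appended to a shared buffer that Read drains at its own pace, so the writer never stalls waiting on a slow or absent reader.

```go
package buffer

import "sync"

// buffer is an illustrative sketch of the buffered-read approach: the
// demuxing goroutine appends packets without waiting for the consumer,
// and Read drains whatever has accumulated. Not the pion implementation.
type buffer struct {
	mu      sync.Mutex
	cond    *sync.Cond
	packets [][]byte
}

func newBuffer() *buffer {
	b := &buffer{}
	b.cond = sync.NewCond(&b.mu)
	return b
}

// write is called from the network side; it never blocks on the reader.
func (b *buffer) write(pkt []byte) {
	b.mu.Lock()
	b.packets = append(b.packets, append([]byte(nil), pkt...))
	b.cond.Signal()
	b.mu.Unlock()
}

// Read blocks until at least one packet is available, then copies it into p.
func (b *buffer) Read(p []byte) (int, error) {
	b.mu.Lock()
	defer b.mu.Unlock()
	for len(b.packets) == 0 {
		b.cond.Wait()
	}
	pkt := b.packets[0]
	b.packets = b.packets[1:]
	return copy(p, pkt), nil
}
```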

@coveralls commented Mar 5, 2019

Pull Request Test Coverage Report for Build 2523

  • 55 of 73 (75.34%) changed or added relevant lines in 9 files are covered.
  • 14 unchanged lines in 6 files lost coverage.
  • Overall coverage decreased (-0.1%) to 80.523%

| Changes Missing Coverage | Covered Lines | Changed/Added Lines | % |
| --- | --- | --- | --- |
| internal/ice/candidate.go | 5 | 6 | 83.33% |
| internal/ice/agent.go | 8 | 10 | 80.0% |
| internal/mux/endpoint.go | 6 | 8 | 75.0% |
| peerconnection.go | 0 | 2 | 0.0% |
| internal/mux/mux.go | 16 | 21 | 76.19% |
| track.go | 14 | 20 | 70.0% |

| Files with Coverage Reduction | New Missed Lines | % |
| --- | --- | --- |
| internal/ice/candidate.go | 1 | 92.18% |
| internal/mux/mux.go | 1 | 90.48% |
| internal/ice/transport.go | 2 | 55.56% |
| track.go | 2 | 76.07% |
| internal/ice/agent.go | 4 | 85.26% |
| dtlstransport.go | 4 | 78.57% |

| Totals | Coverage Status |
| --- | --- |
| Change from base Build 2519 | -0.1% |
| Covered Lines | 4093 |
| Relevant Lines | 5083 |

💛 - Coveralls

@Sean-Der (Member) left a comment

This LGTM! You just have some CI issues :(

Does this maintain the lossyStream behavior? If someone isn't pulling, it won't block the entire pipeline, will it?

@kixelated (Contributor, Author)

It won't block the pipeline; instead it will buffer all data, which would effectively be a memory leak. I can set a max buffer size; it just kind of sucks because there are 3 buffers in the read pipeline (should all of them have a max size?).

@Sean-Der (Member) commented Mar 5, 2019

What do you think of only buffering after we have had a single read?

We have applications where people get Tracks, but never read them. I think this is fine behavior.

@kixelated (Contributor, Author)

That's a bit ad hoc: you can't drop the packets, because you don't know whether they're ever actually going to read. And if you revert to blocking behavior, you get a strange mode switch triggered by a single read; what if they stop reading from the track later?

I would rather just set up a small buffer, like 1MB or something, and start dropping packets once we go over it. This class is really meant to avoid having to coordinate two goroutines just to pass messages. I might still be able to find a way to do it without channels.
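
A hedged sketch of the drop-on-overflow write described here, extending the illustrative buffer sketched above; the 1 MB limit is only the figure floated in this comment, and `writeBounded` is an invented name.

```go
package buffer

// The 1 MB limit is only the figure floated in this comment, not a value
// taken from the actual change.
const maxBufferSize = 1 << 20 // ~1 MB

// writeBounded drops packets once the queued bytes exceed the limit, so a
// Track that nobody reads cannot grow without bound or block the pipeline.
func (b *buffer) writeBounded(pkt []byte) {
	b.mu.Lock()
	defer b.mu.Unlock()

	// Bytes currently queued; a real implementation would track this
	// incrementally instead of recomputing it on every write.
	queued := 0
	for _, p := range b.packets {
		queued += len(p)
	}
	if queued+len(pkt) > maxBufferSize {
		return // reader isn't keeping up (or isn't reading at all): drop
	}

	b.packets = append(b.packets, append([]byte(nil), pkt...))
	b.cond.Signal()
}
```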

@Sean-Der (Member) commented Mar 5, 2019

That sounds like a perfect compromise to me @kixelated!

The only thing I care about is enabling people to ignore Tracks if they don't care about them. We can also expose something in *api if someone is really memory hungry and this causes issues.

@orai-arturo commented Mar 5, 2019

> That sounds like a perfect compromise to me @kixelated!
>
> The only thing I care about is enabling people to ignore Tracks if they don't care about them. We can also expose something in *api if someone is really memory hungry and this causes issues.

Can we make the buffering configurable/settable through the API? If someone knows they're not going to read the data, they can set it to 0; otherwise they can tune it for their app.
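
One hypothetical shape for such a knob, continuing the illustrative sketches above; this is not pion's actual API, and `newBoundedBuffer` is an invented name.

```go
package buffer

// Hypothetical API shape, not pion's actual surface: let the application pick
// the limit per buffer (0 = drop everything, for Tracks it never reads)
// instead of hard-coding the constant above. writeBounded would consult
// b.limit rather than maxBufferSize.
type boundedBuffer struct {
	*buffer
	limit int
}

func newBoundedBuffer(limit int) *boundedBuffer {
	return &boundedBuffer{buffer: newBuffer(), limit: limit}
}
```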

@kixelated kixelated force-pushed the read-buffer branch 2 times, most recently from 1bb5aea to db4f1cb on March 7, 2019 21:00
@kixelated kixelated requested a review from Sean-Der March 7, 2019 21:00
@kixelated kixelated changed the title Buffer all reads instead of message passing Optimize reads with buffering Mar 8, 2019
@Sean-Der (Member) commented Mar 8, 2019

@kixelated

Can you add some tests that assert that the pipeline properly handles overflow for SRTP? We shouldn't back up the pipeline if we have two tracks but ignore all RTP/RTCP for one.
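
A sketch of the kind of overflow test being requested, written against the illustrative buffer from the earlier sketches rather than the real SRTP pipeline:

```go
package buffer

import "testing"

// Writing far more than the limit with no reader must neither block the
// writer nor let the queue grow past its cap.
func TestWriteBoundedDropsInsteadOfBlocking(t *testing.T) {
	b := newBuffer()
	pkt := make([]byte, 1200) // roughly packet-sized writes

	for written := 0; written < 4*maxBufferSize; written += len(pkt) {
		b.writeBounded(pkt) // must return promptly even with no Read in flight
	}

	b.mu.Lock()
	defer b.mu.Unlock()
	queued := 0
	for _, p := range b.packets {
		queued += len(p)
	}
	if queued > maxBufferSize {
		t.Fatalf("buffered %d bytes, want at most %d", queued, maxBufferSize)
	}
}
```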

@kixelated kixelated merged commit 08a93d8 into master Mar 13, 2019
@backkem backkem removed the review label Mar 13, 2019
@Sean-Der Sean-Der deleted the read-buffer branch March 23, 2019 22:55