
Salsify: 95th percentile number of states stored? #80

Closed · BGR360 opened this issue Feb 6, 2019 · 2 comments


BGR360 commented Feb 6, 2019

Hi guys! I'm not sure whether this issue belongs in the salsify-results repo instead; let me know.

I am reviewing your Salsify paper in my Advanced Computer Networking class at UofM. In section 5.2 of the paper, you write:

In our AT&T LTE trace experiment, Salsify-2c sender kept 6 states on average, each 4 MiB in size [...] while the receiver kept 3 states at a time on average.

I would really like to know more about the worst case here. What is, for instance, the 95th percentile number of states stored? This seems relevant for evaluating Salsify's memory overhead (at 4 MiB per state, the 6-state average already works out to roughly 24 MiB on the sender side), especially when the network is unreliable and outages may be long and/or frequent. Could a growing number of stored states create a memory usage problem?

I looked through the salsify-results repo, including the benchmarks/ATT-LTE-driving/salsify-2-ss/ benchmark (and elsewhere), and could not find any data that would let me derive further statistics about the number of stored states.

After looking at the source code for salsify-sender, it seems it would be very easy to collect this info by adding a cout inside the frame-fetching polling job (just printing out the size of the encoders deque, roughly as sketched below), but this would require me to get the software up and running on my computer (which I am willing to try; it does seem very well documented).
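Roughly what I have in mind (a minimal sketch only -- apart from the encoders deque, the names and structure here are my guesses, not the actual salsify-sender code):

```cpp
/* Hypothetical sketch, not actual salsify-sender code: log the number of
   stored encoder states on each iteration of the frame-fetching poller.
   Only the idea of an "encoders" deque comes from my reading of the source;
   everything else is made up for illustration. */
#include <deque>
#include <iostream>

struct Encoder { /* ~4 MiB of codec state in the real sender */ };

std::deque<Encoder> encoders; /* saved states the sender can predict from */

void fetch_frame_poll()
{
  /* ... existing fetch-and-encode logic ... */

  /* one record per poll; mean, max, and p95 computed offline from the log */
  std::cerr << "stored_states=" << encoders.size() << "\n";
}
```

Sorting the logged counts and taking the entry at index ⌈0.95·n⌉ − 1 would then give the 95th percentile.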

If, however, you guys already have this information available, I would love to hear it. Perhaps my intuition is wrong and the memory overhead of keeping states is negligible. If so, let me know!

P.S. Thank you so much for open-sourcing all of this data and code! This is how research should be done!


keithw commented Feb 6, 2019

Hi Ben -- great question! Unfortunately, I don't think we have an answer for you without running the code ourselves. That said, if you were on a system with memory pressure, you could probably arrange to cap the number of states that the sender and receiver have to keep around -- by moving Salsify in the direction of how existing systems do loss recovery in real-time video transmission.

For example, instead of requiring the receiver to keep every state around until the sender gives permission to evict it, the sender would only be allowed to designate a limited number of outgoing states as "retained" at a time. So with a cap of 4, once the sender has sent 4 "retained" states, it can't mark another one until it tells the receiver to discard one of the 4 (whether that state was received or not). The receiver would keep only the "retained" states and the most recent state, and the sender would only be allowed to base its predictions on the most recent state or one of the "retained" states.
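In rough C++, the sender-side bookkeeping could look something like the sketch below -- the class, the names, and the cap of 4 are all made up for illustration, not anything in Salsify today:

```cpp
/* Illustrative only: cap the number of states the sender may designate as
   "retained" at any one time. Nothing here is actual Salsify code. */
#include <cstddef>
#include <cstdint>
#include <set>
#include <stdexcept>

class RetainedStateTracker
{
  static constexpr size_t cap_ = 4;  /* max simultaneously retained states */
  std::set<uint32_t> retained_ {};   /* ids currently designated "retained" */

public:
  /* mark an outgoing state as retained; refuses once the cap is reached */
  void retain( const uint32_t state_id )
  {
    if ( retained_.size() >= cap_ ) {
      throw std::runtime_error( "retain cap reached; discard a state first" );
    }
    retained_.insert( state_id );
  }

  /* tell the receiver to discard a state (received or not),
     freeing a slot for a future retained state */
  void discard( const uint32_t state_id ) { retained_.erase( state_id ); }

  /* the sender may only predict from the most recent state or a retained one */
  bool valid_prediction_base( const uint32_t state_id, const uint32_t most_recent ) const
  {
    return state_id == most_recent or retained_.count( state_id ) > 0;
  }
};
```

With a cap like that (and ~4 MiB per state), the retained-state memory on each side is bounded at roughly cap × 4 MiB plus the most recent state, no matter how long an outage lasts.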

This is sort of similar to what existing schemes do with altref frames, etc.

The truth is that Salsify is not particularly great on networks with random packet loss anyway (as opposed to long stretches of "burst" loss, where Salsify is pretty good -- you've probably seen the video), since for us a state is either received perfectly or not at all. So there's already a lot of room for improvement in how we handle packet loss, again by incorporating existing techniques like FEC and slice repair.
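To put rough (made-up) numbers on that: if a state spans k packets and each packet is independently lost with probability p, the whole state arrives with probability (1 − p)^k. With, say, k = 30 packets and p = 1% random loss, that's 0.99^30 ≈ 0.74 -- about a quarter of states unusable even at 1% loss -- which is why FEC-style repair would help a lot here.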


BGR360 commented Feb 6, 2019

Thank you for your swift reply!

@sadjad closed this Feb 20, 2019
