Is your feature request related to a problem? Please describe.
This issue occurs whenever multiple segments are pushed to B via the /live endpoint (i.e. HTTP push ingest) in parallel or in quick succession with minimal time delay. This can occur for a livestream if the client pushing to B (i.e. MistProcLivepeer in Catalyst) pushes very short segments very quickly, or pushes in parallel by retrying segment N while simultaneously pushing segment N + 1. It can also occur for a VOD file if the client is configured to push multiple segments to B in parallel (i.e. catalyst-api in Catalyst).
The issue can be repro'd by checking out the following branch and running a parallel segment push e2e test:
git checkout yf/fix-sender-nonce
go test -count=1 -v -run TestHTTPPushBroadcaster ./test/e2e
The test should fail with output that looks like this:
Notice the error logs before the test error trace with the invalid ticket senderNonce error.
The invalid ticket senderNonce error appears under these circumstances because O currently expects the senderNonce field of the tickets it receives for a unique recipientRandHash value to be monotonically increasing. O enforces this by tracking the highest senderNonce value it has seen so far; if the senderNonce of the current ticket does not exceed this value, the ticket is rejected. This restriction works fine for a typical livestream where there is a time delay before each segment appears and can be submitted to B, because B will create tickets with ascending senderNonce values and O will receive these tickets in order along with the segments. The restriction causes issues for a) livestreams where segments are retried, or where segments are very short with very short delays between them, and b) VOD files where all segments are available up front and may be pushed in parallel. In these cases, even if B sets the senderNonce values in ascending order for all segments, O can receive the tickets in a different order.
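The current behavior can be sketched as follows. This is a hypothetical illustration of the strictly-increasing check described above, not the actual pm.Recipient code; the names (monotonicTracker, validate) are mine:

```go
package main

import "fmt"

// monotonicTracker sketches the current per-recipientRandHash check:
// a ticket is accepted only if its senderNonce exceeds the highest
// value seen so far.
type monotonicTracker struct {
	maxSeen uint32
	started bool
}

// validate reports whether a ticket with the given senderNonce would
// be accepted, and records the nonce if so.
func (t *monotonicTracker) validate(nonce uint32) bool {
	if t.started && nonce <= t.maxSeen {
		// Not strictly increasing -> invalid ticket senderNonce.
		return false
	}
	t.started = true
	t.maxSeen = nonce
	return true
}

func main() {
	t := &monotonicTracker{}
	fmt.Println(t.validate(1)) // true
	fmt.Println(t.validate(2)) // true
	// Out-of-order arrival, e.g. the ticket for a retried segment:
	fmt.Println(t.validate(1)) // false: rejected even though never seen
}
```

The last call shows the failure mode: the nonce-1 ticket is legitimate and unused, but it is rejected purely because the nonce-2 ticket happened to arrive first.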
Describe the solution you'd like
Instead of tracking a single highest seen senderNonce in pm.Recipient, we could track a fixed-size slice of length MAX_SENDER_NONCE [1]. If index N in the slice is set to 1, senderNonce N has been seen before; if it is set to 0, it has not. So even if O receives a ticket with senderNonce N + 1 before the ticket with senderNonce N, the second ticket will not be rejected.
We cap the size of the slice at MAX_SENDER_NONCE to control the memory consumed. Each set of ticket params advertised by O with a unique recipientRandHash will expire and trigger a cleanup of any memory used to track senderNonce values. O also returns a fresh set of ticket params, with a new unique recipientRandHash value, in the response for each segment, so in practice B should only generate senderNonce values for a given recipientRandHash value until it receives the response for the segment it sent to O, which should limit the # of senderNonce updates to the slice/map on O. One way to think about MAX_SENDER_NONCE is that it describes the max # of tickets that B can send O up front before B receives a response for a segment.
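A sketch of the proposed scheme (again hypothetical names, and a placeholder value for MAX_SENDER_NONCE):

```go
package main

import "fmt"

// maxSenderNonce caps memory used per recipientRandHash; the value
// here is just a placeholder for whatever MAX_SENDER_NONCE is chosen.
const maxSenderNonce = 32

// seenTracker records, per recipientRandHash, whether each senderNonce
// has been seen. Arrival order no longer matters; only replays and
// out-of-range nonces are rejected.
type seenTracker struct {
	seen [maxSenderNonce]bool
}

func (t *seenTracker) validate(nonce uint32) bool {
	if nonce >= maxSenderNonce {
		// Exceeds the up-front ticket budget for these params.
		return false
	}
	if t.seen[nonce] {
		// Replayed senderNonce.
		return false
	}
	t.seen[nonce] = true
	return true
}

func main() {
	t := &seenTracker{}
	fmt.Println(t.validate(1)) // true: nonce 1 arriving before 0 is fine
	fmt.Println(t.validate(0)) // true
	fmt.Println(t.validate(1)) // false: replay
}
```

A fixed-size bool array keeps the memory per recipientRandHash constant (MAX_SENDER_NONCE bytes), and the whole tracker is freed when the ticket params expire.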
What would a reasonable value be for MAX_SENDER_NONCE? Maybe something like 20-50.
[1] An alternative could be to use a map, with an int counter to track the # of elements in the map?
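The footnoted map alternative might look like this (hypothetical sketch; in Go, len on a map is O(1), so a separate int counter isn't strictly needed to enforce the cap):

```go
package main

import "fmt"

const maxSenderNonce = 32

// mapTracker is the map-based variant: it caps the number of tracked
// nonces rather than their range, so sparse senderNonce values larger
// than maxSenderNonce are also accepted as long as the total count of
// distinct nonces stays within the cap.
type mapTracker struct {
	seen map[uint32]bool
}

func (t *mapTracker) validate(nonce uint32) bool {
	if t.seen == nil {
		t.seen = make(map[uint32]bool)
	}
	if t.seen[nonce] {
		return false // replayed senderNonce
	}
	if len(t.seen) >= maxSenderNonce {
		return false // cap on tracked nonces reached
	}
	t.seen[nonce] = true
	return true
}

func main() {
	t := &mapTracker{}
	fmt.Println(t.validate(100)) // true: sparse nonces are fine here
	fmt.Println(t.validate(100)) // false: replay
}
```

The trade-off versus the slice is slightly higher per-entry overhead and allocation churn in exchange for not constraining the nonce values themselves.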
The spec should be updated if this solution is implemented.
Describe alternatives you've considered
Additional context