Question - behaviour on h264 / h265 packet loss #174
Hello, and thank you for your interest in uvgRTP. What you are describing is, to my knowledge, the default way uvgRTP works: every frame that did not lose any fragmentation units is given to the user, and fragments belonging to frames that were never completed are deleted by garbage collection. There are plans for discarding frames whose reference frames are missing, but those have not been implemented yet and will be enabled via additional flags.

As for latency, there is no way to get the reception time of the first fragment, but once all fragments have been received, the hook installed on the media_stream is called immediately, so it can be used to measure last-fragment timing.

I added a few lines of documentation about packet loss. Hope this clarifies things.
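To illustrate the last-fragment timing point, here is a minimal sketch of installing a receive hook and timestamping inside it. It assumes the uvgRTP 2.x API (context / session / media_stream, install_receive_hook); the address, ports, and the RCE_RECEIVE_ONLY flag are example values for a receiver-only setup, not taken from this thread.

```cpp
#include <uvgrtp/lib.hh>
#include <chrono>
#include <cstdio>

// uvgRTP calls this as soon as the last fragment of a frame has arrived
// and the frame is fully reassembled, so a timestamp taken here
// approximates "last fragment read from the UDP port".
static void rtp_receive_hook(void *arg, uvgrtp::frame::rtp_frame *frame)
{
    (void)arg;
    auto completed = std::chrono::steady_clock::now();
    (void)completed; // compare against your pipeline's own clock here
    std::printf("frame complete, payload size %zu\n", frame->payload_len);
    uvgrtp::frame::dealloc_frame(frame); // the hook owns the frame
}

int main()
{
    uvgrtp::context ctx;
    uvgrtp::session *sess = ctx.create_session("127.0.0.1");
    // Ports and flags are example values; adjust for your link.
    uvgrtp::media_stream *strm =
        sess->create_stream(8888, 8890, RTP_FORMAT_H264, RCE_RECEIVE_ONLY);
    if (strm)
        strm->install_receive_hook(nullptr, rtp_receive_hook);
    // ... run the application ...
    sess->destroy_stream(strm);
    ctx.destroy_session(sess);
    return 0;
}
```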
Thanks, this sounds really good.
Is RTP H264 NAL type 24 (aggregated NALUs) supported? I.e., multiple NALUs (e.g. SPS and PPS) in one RTP packet.
Hm, I am having a weird issue: for some reason uvgRTP seems to always drop the PPS data in our case. To analyze, I have some logs. With our current custom (crap) RTP decoder (which is what I want to replace with uvgRTP) decoding works, and I am getting: ... i.e. an SPS (size 39), then a PPS (size 9), and ... are key / non-key frames. But with uvgRTP I am getting: ...

I figured out uvgRTP does not always prefix with 0001; that is why the SPS shows up as size 39 with the previous decoder and size 38 with uvgRTP. But for some reason I am never getting the PPS from uvgRTP (or rather, any frame of size 9, or 9-1=8).

Any ideas what's going on there?
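A side note on the size arithmetic: 39 - 4 = 35 and 38 - 3 = 35, so the one-byte difference is exactly a four-byte versus three-byte Annex-B start code in front of the same SPS. A small hypothetical helper like the one below (plain C++, not part of the uvgRTP API) can detect which prefix a received buffer carries before it is handed to a decoder:

```cpp
#include <cstddef>
#include <cstdint>

// Hypothetical helper (not part of uvgRTP): returns the length of the
// Annex-B start code at the beginning of `data`: 4 for 00 00 00 01,
// 3 for 00 00 01, 0 if there is no start code at all.
static size_t start_code_len(const uint8_t *data, size_t len)
{
    if (len >= 4 && data[0] == 0x00 && data[1] == 0x00 &&
        data[2] == 0x00 && data[3] == 0x01)
        return 4;
    if (len >= 3 && data[0] == 0x00 && data[1] == 0x00 && data[2] == 0x01)
        return 3;
    return 0;
}
```

A decoder-facing shim can use this to normalize every frame to one prefix convention, regardless of which depacketizer produced it.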
Debugging a bit more, I found out that I get the SPS and PPS when I use a gstreamer tx pipeline AND set. So aggregation seems to be supported in uvgRTP (regarding my earlier question), but for some reason non-aggregated SPS and PPS don't work right now.
Yeah, so for some reason I am not getting two successive RTP H264 type [1..23] (single NALU) "frames" (if you count SPS / PPS as frames) from uvgRTP on the rx side. If I embed the SPS & PPS into an aggregation unit (e.g. using a gstreamer tx pipeline) I don't have this issue.
Unfortunately, no: neither STAP nor MTAP aggregate NALUs have been implemented for H264 yet. There is a chance that STAP reception works by coincidence, due to its similarity with H265. I added an issue (#176) for STAP support. The H264 format is in general the least tested inside our group, since we focus more on the H265 and H266 formats.
Firstly, thank you for bringing this to our attention. H264 uses a three-byte start code (00 00 01) whereas H265/H266 use a four-byte one (00 00 00 01), at least that is how I understood it, and that might explain the one-byte difference.
It is possible that aggregate packet reception works, since the packet structure is essentially the same in H264 STAP as it is in H265/H266 aggregate packets (code here).
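For reference, the STAP-A payload layout being compared here comes from RFC 6184: after the RTP header there is a one-byte NAL header with type 24, then a sequence of aggregated NALUs, each preceded by a 16-bit big-endian size field. A minimal, uvgRTP-independent sketch of walking such a payload (the function and callback names are made up for illustration):

```cpp
#include <cstddef>
#include <cstdint>

// Walk an H264 STAP-A payload (RFC 6184, NAL type 24) and invoke `cb`
// once per aggregated NALU, e.g. first for the SPS, then for the PPS.
static void for_each_stap_a_nalu(const uint8_t *payload, size_t len,
                                 void (*cb)(const uint8_t *nalu, size_t nalu_len))
{
    if (len < 1 || (payload[0] & 0x1F) != 24) // not a STAP-A packet
        return;
    size_t off = 1; // skip the STAP-A NAL header byte
    while (off + 2 <= len) {
        size_t nalu_len = (size_t(payload[off]) << 8) | payload[off + 1];
        off += 2;
        if (nalu_len == 0 || off + nalu_len > len)
            break; // malformed or truncated packet
        cb(payload + off, nalu_len);
        off += nalu_len;
    }
}
```

H265/H266 aggregation packets use the same size-prefixed layout, only with a two-byte NAL header, which fits the suspicion above that reception of one might accidentally handle the other.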
I opened #177 for this bug; we can continue the discussion there.
Hello,
What's the default behaviour when RTP streaming via UDP on packet loss, and can it be controlled?
E.g. in our application, we stream via a lossy unidirectional wifibroadcast link that is protected by FEC but still allows for RTP packet loss.
If we lose parts of an H264 frame (e.g. one or more RTP fragmentation units), we drop it.
But if the next frame is complete, we feed it to the decoder anyway, which almost always results in artifacts; this behaviour is actually preferred in our case.
Can we do that using uvgRTP?
Also, is there a way to measure the following delays:
"first fragment of a fragmented RTP frame is read from the UDP port" -> rtp_receive_hook() is called, and/or
"last fragment of a fragmented RTP frame is read from the UDP port" -> rtp_receive_hook() is called?
Because we'd like to keep exact track of the latency as frames travel through our display application.
Hope this isn't documented somewhere and I just missed it.
Constantin @ OpenHD