
Question - behaviour on h264 / h265 packet loss #174

Closed
Consti10 opened this issue Nov 16, 2022 · 7 comments
Labels
question Further information is requested

Comments


Consti10 commented Nov 16, 2022

Hello,

What's the default behaviour on packet loss when streaming RTP over UDP, and can it be controlled?

E.g. in our application, we stream via a lossy unidirectional wifibroadcast link that is protected by FEC but still allows for RTP packet loss.

If we lose parts of an h264 frame (e.g. one or more RTP fragmentation units of a frame), we drop it.
But if the next frame is complete, we feed it to the decoder anyway, which almost always results in artifacts - this behaviour is actually preferred in our case.

Can we do that using uvgRTP?

Also, is there a way to measure the following delays:
"first fragment of a fragmented RTP frame is read from the UDP port" -> rtp_receive_hook() is called, and/or
"last fragment of a fragmented RTP frame is read from the UDP port" -> rtp_receive_hook() is called?

Because we'd like to keep exact track of the latency as frames travel through our display application.

Hope this isn't documented somewhere and I just missed it,
Constantin @ OpenHD

jrsnen added the question label Nov 16, 2022

jrsnen commented Nov 16, 2022

Hello, and thank you for your interest in uvgRTP.

What you are describing is, to my knowledge, the default way uvgRTP works: every frame that did not lose any fragmentation units is given to the user. Fragments belonging to frames that were never completed are deleted by garbage collection. There are plans for discarding frames whose reference frames are missing, but that has not been implemented yet and will be controlled via additional flags.

As for the latency, there is no way to get information about the reception of the first fragment. When all fragments have been received, however, the hook installed in the media_stream is called immediately, so the hook invocation can be used to measure last-fragment timing.
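
For illustration, a minimal sketch of installing a receive hook and timestamping inside it. The interface address, ports, and RCE flag are placeholders, and exact names may differ slightly between uvgRTP versions:

```cpp
#include <uvgrtp/lib.hh>

#include <chrono>
#include <iostream>
#include <thread>

// Called by uvgRTP as soon as the last fragment of a frame has been
// reassembled, so the timestamp taken here approximates
// "last fragment read from the UDP port" -> frame delivered to the app.
static void receive_hook(void *arg, uvgrtp::frame::rtp_frame *frame)
{
    (void)arg;
    const auto received = std::chrono::steady_clock::now();
    std::cout << "Got frame, payload size " << frame->payload_len
              << " at t=" << received.time_since_epoch().count() << std::endl;

    // The application owns the frame and must release it.
    (void)uvgrtp::frame::dealloc_frame(frame);
}

int main()
{
    uvgrtp::context ctx;
    uvgrtp::session *sess = ctx.create_session("127.0.0.1");

    // Placeholder src/dst ports and flags for this sketch.
    uvgrtp::media_stream *recv =
        sess->create_stream(8888, 8890, RTP_FORMAT_H264, RCE_NO_FLAGS);
    recv->install_receive_hook(nullptr, receive_hook);

    // Receive for a while before tearing everything down.
    std::this_thread::sleep_for(std::chrono::seconds(10));

    sess->destroy_stream(recv);
    ctx.destroy_session(sess);
    return 0;
}
```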

I added a few lines of documentation for packet loss.

Hope this clarifies things,
Joni

Consti10 (Author) commented:

Thanks, this sounds really good.
I'll test it out and report back.


Consti10 commented Nov 16, 2022

Are RTP H264 type 24 packets (aggregated NALUs) supported? I.e. multiple NALUs (e.g. SPS and PPS) in one RTP packet.


Consti10 commented Nov 16, 2022

Hm, I am having a weird issue: for some reason uvgRTP seems to always drop the PPS data in our case.
TX: Simple streamer based on libcamera + uvgRTP
RX: QOpenHD application with custom decode

To analyze, I have some logs. With our current custom / crappy RTP decoder (which is what I want to replace with uvgRTP), decoding works, and I am getting:

...
Got RTP H264 type [1..23] (single) payload size: 35
Got NALU 39
SPS found
Got RTP H264 type [1..23] (single) payload size: 5
Got NALU 9
PPS found
...

I.e. an SPS (size 39), then a PPS (size 9); the "..." lines are key / non-key frames.

But with uvgRTP I am getting:
...
Got NALU 38
SPS found
...

I figured out uvgRTP does not always prefix NALUs with 0001 - this is why the SPS shows up as size 39 with the previous decoder and size 38 with uvgRTP.
(And that is not an issue.)

But for some reason I am never getting the PPS from uvgRTP (or rather, any frame sized 9 / 9-1=8).

Any ideas what's going on there?

Consti10 (Author) commented:

Debugging a bit more, I found out that I get the SPS and PPS when I use a gstreamer TX pipeline AND set
aggregate-mode=1

I.e. aggregation seems to be supported in uvgRTP (regarding my earlier question), but for some reason non-aggregated SPS and PPS right now don't work.


Consti10 commented Nov 16, 2022

Yeah, so for some reason I am not getting two successive RTP H264 type [1..23] (single) "frames" (if you count SPS / PPS as frames) from uvgRTP as an RX. If I embed the SPS & PPS into an aggregation unit (e.g. using a gstreamer TX pipeline), I don't have this issue.


jrsnen commented Nov 17, 2022

> Are RTP H264 type 24 packets (aggregated NALUs) supported? I.e. multiple NALUs (e.g. SPS and PPS) in one RTP packet.

Unfortunately, no: neither STAP nor MTAP H264 aggregate NALUs have been implemented yet. There is a chance that STAP reception works by coincidence due to its similarity with H265. I added an issue (#176) for STAP support.

The H264 format in general is the least tested inside our group, since we focus more on H265 and H266 formats.

> Hm, I am having a weird issue: for some reason uvgRTP seems to always drop the PPS data in our case.
> TX: Simple streamer based on libcamera + uvgRTP
> RX: QOpenHD application with custom decode
>
> To analyze, I have some logs. With our current custom / crappy RTP decoder (which is what I want to replace with uvgRTP), decoding works, and I am getting:
>
> ...
> Got RTP H264 type [1..23] (single) payload size: 35
> Got NALU 39
> SPS found
> Got RTP H264 type [1..23] (single) payload size: 5
> Got NALU 9
> PPS found
> ...
>
> I.e. an SPS (size 39), then a PPS (size 9); the "..." lines are key / non-key frames.
>
> But with uvgRTP I am getting:
>
> ...
> Got NALU 38
> SPS found
> ...
>
> I figured out uvgRTP does not always prefix NALUs with 0001 - this is why the SPS shows up as size 39 with the previous decoder and size 38 with uvgRTP. (And that is not an issue.)
>
> But for some reason I am never getting the PPS from uvgRTP (or rather, any frame sized 9 / 9-1=8).
>
> Any ideas what's going on there?

Firstly, thank you for bringing this to our attention.

H264 uses a three-byte start code (001) whereas H265/H266 use a four-byte one (0001), at least that is how I understood it, and that might explain the one-byte difference.
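
For what it's worth, a receiver that feeds Annex-B data to its decoder can normalize this on its own side. A minimal sketch (not uvgRTP code; whether a three- or four-byte start code is prepended is up to the decoder's expectations):

```cpp
#include <cstdint>
#include <cstring>
#include <vector>

// Prepend an Annex-B start code if the received NALU does not already
// begin with one, so the decoder always sees a consistent prefix.
static std::vector<uint8_t> with_start_code(const uint8_t *nalu, size_t len)
{
    static const uint8_t sc4[4] = {0x00, 0x00, 0x00, 0x01};
    static const uint8_t sc3[3] = {0x00, 0x00, 0x01};

    const bool has4 = len >= 4 && std::memcmp(nalu, sc4, 4) == 0;
    const bool has3 = len >= 3 && std::memcmp(nalu, sc3, 3) == 0;

    std::vector<uint8_t> out;
    if (!has3 && !has4)
        out.insert(out.end(), sc4, sc4 + 4); // four-byte start code chosen arbitrarily here
    out.insert(out.end(), nalu, nalu + len);
    return out;
}
```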

> Debugging a bit more, I found out that I get the SPS and PPS when I use a gstreamer TX pipeline AND set aggregate-mode=1.
>
> I.e. aggregation seems to be supported in uvgRTP (regarding my earlier question), but for some reason non-aggregated SPS and PPS right now don't work.

It is possible that aggregate packet reception works, since the packet structure of an H264 STAP is essentially the same as an H265/H266 aggregation packet (code here).
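
For reference, per RFC 6184 a STAP-A payload is a one-byte NAL header (type 24) followed by repeated (16-bit big-endian size, NALU) pairs, and the H265/H266 aggregation packets differ mainly in having a two-byte payload header. A minimal parsing sketch (not uvgRTP's internal code):

```cpp
#include <cstddef>
#include <cstdint>
#include <utility>
#include <vector>

// Split an H264 STAP-A payload (RFC 6184) into its aggregated NALUs.
// Layout: [1-byte STAP-A NAL header][2-byte size][NALU][2-byte size][NALU]...
static std::vector<std::pair<const uint8_t *, size_t>>
split_stap_a(const uint8_t *payload, size_t len)
{
    std::vector<std::pair<const uint8_t *, size_t>> nalus;
    size_t off = 1; // skip the STAP-A NAL header byte

    while (off + 2 <= len) {
        const size_t nalu_size = (size_t(payload[off]) << 8) | payload[off + 1];
        off += 2;
        if (nalu_size == 0 || off + nalu_size > len)
            break; // malformed packet, stop parsing
        nalus.emplace_back(payload + off, nalu_size);
        off += nalu_size;
    }
    return nalus;
}
```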

> Yeah, so for some reason I am not getting two successive RTP H264 type [1..23] (single) "frames" (if you count SPS / PPS as frames) from uvgRTP as an RX. If I embed the SPS & PPS into an aggregation unit (e.g. using a gstreamer TX pipeline), I don't have this issue.

I opened #177 for this bug, we can continue this discussion there.

jrsnen closed this as completed Nov 17, 2022