[BUG] [ffmpeg] Data packets dropped when ffmpeg is used as a sender #1223

Closed
Arno500 opened this issue Mar 31, 2020 · 15 comments
Assignees: maxsharabayko
Labels: [third-party] Area: Issues with SRT in third-party projects · Type: Bug Indicates an unexpected problem or unintended behavior

Comments


Arno500 commented Mar 31, 2020

Describe the bug
When using ffmpeg directly to send an SRT stream over a bad connection (I'm simulating one with clumsy, adding only outbound packet loss), packets don't seem to get retransmitted or re-received, or something else goes wrong. I get these kinds of errors on the receiving end:

[mpegts @ 0x55ded175c740] Continuity check failed for pid 256 expected 4 got 10
[mpegts @ 0x55ded175c740] Continuity check failed for pid 0 expected 15 got 0
[mpegts @ 0x55ded175c740] Continuity check failed for pid 4096 expected 15 got 0
[mpegts @ 0x55ded175c740] Packet corrupt (stream = 0, dts = 3315590).
[mpegts @ 0x55ded175c740] Continuity check failed for pid 257 expected 5 got 10
[mpegts @ 0x55ded175c740] Continuity check failed for pid 256 expected 14 got 5
[mpegts @ 0x55ded175c740] Packet corrupt (stream = 0, dts = 3323090).
[mpegts @ 0x55ded175c740] Continuity check failed for pid 256 expected 7 got 8
[mpegts @ 0x55ded175c740] Continuity check failed for pid 257 expected 8 got 13
[mpegts @ 0x55ded175c740] Packet corrupt (stream = 0, dts = 3324590).
[mpegts @ 0x55ded175c740] Continuity check failed for pid 17 expected 0 got 1

And the H.264 stream is completely garbled, at only 1% packet loss.
Here is my flow:
ffmpeg ----------SRT with latency set to 3000----------> ffmpeg server

When using the srt-live-transmit app as a proxy
(i.e.: ffmpeg ----SRT----> srt-live-transmit ----SRT with latency set to 3000----> ffmpeg server), I can go up to 90% packet loss without ANY problem (with a high bandwidth of course, but it does recover the lost data).

To Reproduce the problem
Steps to reproduce the behavior:

  1. Run ffmpeg and output to an SRT listener server, specifying a high latency to allow for better packet recovery.
  2. Using a network simulator (I'm on Windows, so I use clumsy), drop packets and gradually increase the loss rate.
  3. Check the resulting stream and the server console for dropped packets. My server is ffmpeg, so I get the kind of errors you can see above.

Expected behavior
Lost packets should be recovered by the SRT protocol (provided there is enough bandwidth; I'm on FTTH and my server has an unmetered 300 Mbps fiber connection with low latency).
They are recovered with srt-live-transmit and ffmpeg as a server, but not with ffmpeg as a sender. Either the latency setting is being ignored by ffmpeg (I get a similar result when not specifying latency in srt-live-transmit, where it defaults to something like 120 ms; I thought it would take the server's value), or something else is going on that I'm not aware of.

Desktop (please provide the following information):

  • OS: Windows 10
  • SRT Version / commit ID: 1.4.1

Additional context

  • The problem does not appear with srt-live-transmit, either on the latest master or on 1.4.1.
  • The ffmpeg build I use (zeranoe) has 1.4.1 included in it.
  • I can also provide a server that proxies SRT input (as "listener") to my throwaway Twitch channel; I can share the link if you want to test (from its output you can easily see whether packets are dropping).
Arno500 added the Type: Bug label Mar 31, 2020
J-Rogmann (Contributor) commented:

Ahoi Arno,

Can you please share the ffmpeg commands that you are using, on both the client and server side? This is not expected, and we need to take a look at it.

Also the commands for the srt-live-transmit and ffmpeg-server combination might be interesting to compare. Feel free to anonymize IP addresses or passwords if needed.

best regards,
Justus


Arno500 commented Mar 31, 2020

Here are the commands used:

Direct ffmpeg to ffmpeg (dropped frames when adding packet drop):

  • client: bin\ffmpeg.exe -re -y -i "afile.mp4" -vcodec libx264 -b:v 6000k -r 60 -s 1920x1080 -acodec aac -ab 256k -ac 2 -ar 44100 -f mpegts "srt://myserver:4444?latency=3000"
  • server: ffmpeg -v debug -nostats -i "srt://0.0.0.0:4444?mode=listener&latency=3000&transtype=live&linger=10&ffs=128000&rcvbuf=100058624" -c:v copy -c:a copy -f flv rtmp://live-cdg.twitch.tv/app/twitchkey

ffmpeg to srt-live-transmit to ffmpeg (perfect transmission over bad network):

  • client: bin\ffmpeg.exe -re -y -i "afile.mp4" -vcodec libx264 -b:v 6000k -r 60 -s 1920x1080 -acodec aac -ab 256k -ac 2 -ar 44100 -f mpegts "srt://127.0.0.1:5555"
  • srt-live-transmit: srt-live-transmit.exe srt://:5555 srt://myserver:4444?latency=3000
  • server (same as above): ffmpeg -v debug -nostats -i "srt://0.0.0.0:4444?mode=listener&latency=3000&transtype=live&linger=10&ffs=128000&rcvbuf=100058624" -c:v copy -c:a copy -f flv rtmp://live-cdg.twitch.tv/app/twitchkey


J-Rogmann commented Apr 1, 2020

Ahoi Arno,

you are using a file as input, and ffmpeg will try to use all 1500 bytes of the MTU. However, SRT has some overhead, so the MTU cannot be stuffed to its limit with data. Our sample application srt-live-transmit knows that, and therefore streaming with it to your server works.

In ffmpeg you have to specify pkt_size=1316. This is not an SRT option, but a parameter of ffmpeg.

Can you try the following on your client side:
bin\ffmpeg.exe -re -y -i "afile.mp4" -vcodec libx264 -b:v 6000k -r 60 -s 1920x1080 -acodec aac -ab 256k -ac 2 -ar 44100 -f mpegts "srt://myserver:4444?latency=3000&pkt_size=1316"

If you wonder why 1316: a TS packet is 188 bytes. An MTU of 1500, minus the IP, UDP and SRT headers, leaves room for only 7 whole TS packets.
1316 = 188 × 7, meaning 7 full TS packets.
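
For reference, the byte budget behind that number (a back-of-the-envelope sketch, assuming IPv4 and SRT's 16-byte data packet header):

1500 (MTU) - 20 (IPv4 header) - 8 (UDP header) - 16 (SRT header) = 1456 bytes of payload
floor(1456 / 188) = 7 whole TS packets -> 7 × 188 = 1316 bytes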

Please also see the following links for more information on ffmpeg and SRT:
https://github.com/Haivision/srt/blob/master/docs/live-streaming.md#transmitting-mpeg-ts-binary-protocol-over-srt

https://srtlab.github.io/srt-cookbook/apps/ffmpeg/

Please let me know if this solves your problem.
best regards,
Justus

mbakholdina added this to To Do in GitHub Issues Apr 1, 2020
mbakholdina moved this from To Do to In progress in GitHub Issues Apr 1, 2020

pkviet commented Apr 1, 2020

@J-Rogmann 1316 is already the default pkt_size in ffmpeg with SRT in live mode (which is the default).


Arno500 commented Apr 1, 2020

I tested forcing pkt_size to 1316, to no avail. It still doesn't recover packets.
My final goal is to use this with OBS over bad connections and take full advantage of SRT's reliability.
So at first I used the OBS output directly, which doesn't feed a file to ffmpeg. (Note that I used a special OBS build provided by @pkviet with libsrt 1.4.1, to rule out errors from old versions.)


pkviet commented Apr 1, 2020 via email

J-Rogmann (Contributor) commented:

This is indeed very strange. @Arno500, would it be possible to create a packet capture of your stream, e.g. with tcpdump or Wireshark? Please start the capture first, then initiate the stream, leave it running for 30-60 seconds, and stop the capture. This would give us some more insight into what is happening here.
I would like to find out whether SRT is really losing packets or whether there is an error further down the chain.

@pkviet I was privately using OBS 25.0.1 last Saturday to send some live streams and connect more than 50 friends across the globe for a virtual meetup and party. We streamed mostly in Germany but also had viewers and contributors from Tokyo and Buenos Aires. It worked very well and was stable with SRT. Big Up.


Arno500 commented Apr 1, 2020

I uploaded the dump here: https://cloud.arnodubo.is/s/BzJXsAxMKoLbGTD
I started simulating 10% packet drop around 90 s (24.30 s relative) and stopped around 116 s (49.80 s relative). (From a quick look it seems packets are retransmitted, but I still get heavy data corruption on the other end.)


boxerab commented Apr 3, 2020

On Linux, packet loss can be simulated with the traffic control (tc) command:

https://wiki.linuxfoundation.org/networking/netem

For example, the following gives 10% packet loss on interface eth0:

tc qdisc change dev eth0 root netem loss 10%

assuming you already have some rules in place for eth0
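
If no rule is in place yet, a netem qdisc would first be added, and can later be removed again, along these lines:

tc qdisc add dev eth0 root netem loss 10%
tc qdisc del dev eth0 root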

maxsharabayko (Collaborator) commented:

> I uploaded the dump here: https://cloud.arnodubo.is/s/BzJXsAxMKoLbGTD
> I started simulating 10% packet drop around 90 s (24.30 s relative) and stopped around 116 s (49.80 s relative). (From a quick look it seems packets are retransmitted, but I still get heavy data corruption on the other end.)

According to the dump, the negotiated latency value is 3 ms.
The RTT is roughly 3 ms as well.
So you don't give SRT much time to retransmit packets. A latency of at least 2×RTT should be configured; the recommended value is 4×RTT.
See SRT TSBPD latency.
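
Plugging in the numbers from the capture: 2 × 3 ms = 6 ms minimum and 4 × 3 ms = 12 ms recommended, so the intended 3000 ms would have been ample, while the 3 ms actually negotiated leaves room for barely a single retransmission round trip.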

I can assume ffmpeg expects latency in microseconds, so the following command sets a 3 ms latency instead of 3 s:

ffmpeg -v debug -nostats -i "srt://0.0.0.0:4444?mode=listener&latency=3000&transtype=live&linger=10&ffs=128000&rcvbuf=100058624" -c:v copy -c:a copy -f flv rtmp://live-cdg.twitch.tv/app/twitchkey

srt-live-transmit expects latency to be specified in milliseconds; therefore, the following case works (the highest latency of the two peers is negotiated):

srt-live-transmit: srt-live-transmit.exe srt://:5555 srt://myserver:4444?latency=3000
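
Assuming that interpretation is right, the intended 3-second latency would have to be written as latency=3000000 on the ffmpeg side; for example (a sketch, all other options unchanged):

ffmpeg -v debug -nostats -i "srt://0.0.0.0:4444?mode=listener&latency=3000000&transtype=live&linger=10&ffs=128000&rcvbuf=100058624" -c:v copy -c:a copy -f flv rtmp://live-cdg.twitch.tv/app/twitchkey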


pkviet commented Apr 3, 2020

Right, the latency in ffmpeg is defined in microseconds:
see https://github.com/FFmpeg/FFmpeg/blob/master/libavformat/libsrt.c#L121
and https://github.com/FFmpeg/FFmpeg/blob/master/libavformat/libsrt.c#L305 (divided by a factor of 1000 to get milliseconds),
and here is the translation to SRTO_LATENCY:
https://github.com/FFmpeg/FFmpeg/blob/master/libavformat/libsrt.c#L325
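
That matches the capture: latency=3000 is parsed as 3000 µs, and 3000 µs / 1000 = 3 ms is what lands in SRTO_LATENCY, which is exactly the negotiated 3 ms value seen in the dump.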


Arno500 commented Apr 3, 2020

OH
It's perfect now. I think this needs to be specified in the ffmpeg docs here (if anyone here is an ffmpeg contributor; otherwise I can open an issue on their side or, better, do it myself): https://www.ffmpeg.org/ffmpeg-protocols.html#srt
It's also pretty strange to have to specify it in microseconds instead of milliseconds; in network environments I can't see a case where such high precision would matter, especially since it's in ms in srt-live-transmit.

But thanks all for your help and your time!

maxsharabayko added the [third-party] Area: Issues with SRT in third-party projects label Apr 3, 2020
maxsharabayko (Collaborator) commented:

FFmpeg will keep latency in microseconds.

Citing Nicolas George:

> ... [latency] is a duration, and as such it should be AV_OPT_TYPE_DURATION and in microseconds.

Updated SRT Cookbook on ffmpeg latency unit: link.
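
(Side note, as an assumption about ffmpeg's option parsing rather than something confirmed in this thread: once exposed as AV_OPT_TYPE_DURATION, the option should also accept unit-suffixed values such as latency=3s or latency=3000ms, whereas the plain integer option only takes a microsecond count like latency=3000000.)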


Arno500 commented Apr 8, 2020

Okay, that's fair.
I don't think it would be a bad idea to have a link to the cookbook in the main README of this repo. I think more people are interested in "how to use it" than in "how to build it" (even if the latter is crucial too).

maxsharabayko (Collaborator) commented:

This issue can be closed then.

  • ffmpeg expects latency in microseconds; they will not change it to ms.
  • A note was added to the SRT cookbook regarding the latency unit in ffmpeg.

Steps to consider:

GitHub Issues automation moved this from In progress to Done Apr 27, 2020
maxsharabayko self-assigned this Apr 27, 2020