
BUG: Player glitches when using "satipc" #992

Open
lars18th opened this issue Sep 16, 2022 · 23 comments

Comments

@lars18th
Contributor

Hi,

I've made a little more progress with the problem I described a long time ago: when multiple users connect at the same time, in some cases the user who is already playing sees a glitch in playback when the second client starts.

Here is my configuration:

  • Two DVB-S2 Enigma boxes with minisatip running on them as SAT>IP servers.
  • One PC running minisatip as a central SAT>IP server, serving the clients (it uses "satipc" with RTSP-over-TCP).
  • Different clients using the HTTP protocol (mainly VLC).

Here is how to reproduce the glitch:

  • Start minisatip on the PC.
  • After 30 s (when all tuners have closed), connect the first client (VLC) to one tuner (any FTA channel will do).
  • After 15 s, connect a second VLC instance to an FTA channel on a different transponder (the second tuner is then started).
  • For about 1 s, the first client shows a glitch in playback.
  • Only small CC errors appear in the stream; all streams are more or less clean, but the first client receives the stream with a gap of roughly 1000 ms of missing data.
  • Note: If the second tuner is already open (in sleeping mode), then the glitch doesn't appear at all.

I see in the logs (with -v satipc) that the thread managing the data of the first satipc connection is stalled for some time while the thread of the second tuner makes the call to the other SAT>IP server. So, more or less, you see this (1 is the first satipc thread, 2 is the second one):

1111111111111111111111111111111122222222222222222222222222221212121212121212121212121212121212

But what you're expecting is something like:

1111111111111111111111111111111122121212121212121212121212121212121212121212121212121212121212

Therefore, I suspect that in the implementation of the "satipc" module something is blocking the reading socket, or the writing on this socket; I don't know. Any idea, @catalinii?

@catalinii
Owner

Hi @lars18th,

I did not see anything that could cause this. To get more info, please add -l lock so you can see the messages related to locking; maybe something comes up.
Feel free to upload the logs as well.
Thanks

@lars18th
Contributor Author

Hi @catalinii ,

I'm doing a deep trace of the logs. However, something is strange: for every NEW stream, I see that after the first PLAY command (the remote SAT>IP server is minisatip too), the "satipc" module sends THREE consecutive DESCRIBE commands:
"DESCRIBE rtsp://192.168.1.90:554/stream=1 RTSP/1.0"
and this is true in all cases. So you see: PLAY, response, DESCRIBE, response, DESCRIBE, response, DESCRIBE, response, OPTIONS, response... and from this point on the regular behaviour of one OPTIONS every 30 seconds.

Does this make sense?

@catalinii
Owner

If you can reproduce the DESCRIBE issue, feel free to upload a fresh log as a separate issue.

@catalinii
Owner

Feel free to provide the log adding -l lock to your previous command line. Also please indicate when this happens (timestamp).

@lars18th
Contributor Author

lars18th commented Sep 27, 2022

Feel free to provide the log adding -l lock to your previous command line. Also please indicate when this happens (timestamp).

Hi @catalinii ,

Obviously something is going wrong:

	Line 149664: [27/09 18:55:02.729 AD1]: read_dmx send=0, force_send=0, cnt 0 called for adapter 1 -> 9400 out of 376000 bytes read, 6 ms ago (68101 68095)
	Line 149674: [27/09 18:55:02.730 AD1]: read_dmx send=0, force_send=0, cnt 0 called for adapter 1 -> 10716 out of 376000 bytes read, 7 ms ago (68102 68095)
	Line 149682: [27/09 18:55:02.730 AD1]: read_dmx send=0, force_send=0, cnt 0 called for adapter 1 -> 10904 out of 376000 bytes read, 7 ms ago (68102 68095)
	Line 149692: [27/09 18:55:02.731 AD1]: read_dmx send=0, force_send=0, cnt 0 called for adapter 1 -> 12220 out of 376000 bytes read, 8 ms ago (68103 68095)
	Line 149700: [27/09 18:55:02.732 AD1]: read_dmx send=0, force_send=0, cnt 0 called for adapter 1 -> 13536 out of 376000 bytes read, 8 ms ago (68103 68095)
	Line 149710: [27/09 18:55:02.733 AD1]: read_dmx send=0, force_send=0, cnt 0 called for adapter 1 -> 14852 out of 376000 bytes read, 9 ms ago (68104 68095)
	Line 149718: [27/09 18:55:02.733 AD1]: read_dmx send=0, force_send=0, cnt 0 called for adapter 1 -> 16168 out of 376000 bytes read, 10 ms ago (68105 68095)
	Line 149737: [27/09 18:55:02.833 AD1]: read_dmx send=1, force_send=1, cnt 0 called for adapter 1 -> 16168 out of 376000 bytes read, 110 ms ago (68205 68095)
	Line 150046: [27/09 18:55:04.143 AD1]: read_dmx send=1, force_send=1, cnt 0 called for adapter 1 -> 188 out of 376000 bytes read, 1411 ms ago (69515 68104)
	Line 150060: [27/09 18:55:04.144 AD1]: read_dmx send=0, force_send=0, cnt 0 called for adapter 1 -> 1316 out of 376000 bytes read, 1 ms ago (69516 69515)
	Line 150068: [27/09 18:55:04.145 AD1]: read_dmx send=0, force_send=0, cnt 0 called for adapter 1 -> 2632 out of 376000 bytes read, 1 ms ago (69516 69515)
	Line 150076: [27/09 18:55:04.146 AD1]: read_dmx send=0, force_send=0, cnt 0 called for adapter 1 -> 3948 out of 376000 bytes read, 2 ms ago (69517 69515)
	Line 150084: [27/09 18:55:04.146 AD1]: read_dmx send=0, force_send=0, cnt 0 called for adapter 1 -> 4136 out of 376000 bytes read, 2 ms ago (69517 69515)
	Line 150092: [27/09 18:55:04.147 AD1]: read_dmx send=0, force_send=0, cnt 0 called for adapter 1 -> 5452 out of 376000 bytes read, 3 ms ago (69518 69515)
	Line 150100: [27/09 18:55:04.147 AD1]: read_dmx send=0, force_send=0, cnt 0 called for adapter 1 -> 5640 out of 376000 bytes read, 4 ms ago (69519 69515)
	Line 150108: [27/09 18:55:04.147 AD1]: read_dmx send=0, force_send=0, cnt 0 called for adapter 1 -> 6956 out of 376000 bytes read, 4 ms ago (69519 69515)

I'm checking what happens between the first read glitch with 110 ms of delay and the second with 1411 ms of delay. All of this is read with RTSP-over-TCP from a remote minisatip SAT>IP server (an Enigma STB box).

Any idea?

@lars18th
Contributor Author

Hi @catalinii ,

Full debug enabled (all modules with "-v", except dmx, which is only at "-l"):

[28/09 14:56:45.500 AD1]: event on socket index 10 handle 8 type 5 spos 0 wpos 0 (poll fd: 8, events: 3, revents: 1)
[28/09 14:56:45.500 AD1]: socketworks.c:476 Locking mutex 0x7f3a01ab4ba8
[28/09 14:56:45.500 AD1]: start satipc_tcp_read tcp_pos 0 tcp_len 0 tcp_size 1138860, buffer len 368104
[28/09 14:56:45.500 AD1]: read 1452 (from 1138860) from rtsp socket 8 (id 10) [24 00, 05 30]
[28/09 14:56:45.500 AD1]: found at pos 0, rtsp_len 1328, len 1452
[28/09 14:56:45.500 AD1]: ad 1 processed 1328, socket 8
[28/09 14:56:45.500 AD1]: found at pos 1332, rtsp_len 1328, len 1452
[28/09 14:56:45.500 AD1]: The curent chunk is not completely read @ pos 1316, tcp_pos 1332, required 1316 len 368104 tcp_len 1452, tcp_size 1138860, left to read 1212
[28/09 14:56:45.500 AD1]: satipc_tcp_read: returning 1316 bytes
[28/09 14:56:45.500 AD1]: Read OK 1316 (rlen:9212/total:376000) bytes from 8 [s: 10 m: 10] -> 0x7f3a00a2aee8 (buf: 0x7f3a00a29010) - iteration 8603 action 0x7f3a00ab4ecc
[28/09 14:56:45.500 AD1]: read_dmx send=0, force_send=0, cnt 0 called for adapter 1 -> 9212 out of 376000 bytes read, 5 ms ago (45980 45975)
[28/09 14:56:45.500 AD1]: socketworks.c:509 Unlocking mutex 0x7f3a01ab4ba8
[28/09 14:56:45.501 AD1]: event on socket index 10 handle 8 type 5 spos 0 wpos 0 (poll fd: 8, events: 3, revents: 1)
[28/09 14:56:45.501 AD1]: socketworks.c:476 Locking mutex 0x7f3a01ab4ba8
[28/09 14:56:45.501 AD1]: start satipc_tcp_read tcp_pos 1332 tcp_len 1452 tcp_size 1138860, buffer len 366788
[28/09 14:56:45.501 AD1]: read 1416 (from 1137408) from rtsp socket 8 (id 10) [57 17, 95 D0]
[28/09 14:56:45.501 AD1]: found at pos 1332, rtsp_len 1328, len 2868
[28/09 14:56:45.501 AD1]: ad 1 processed 1328, socket 8
[28/09 14:56:45.501 AD1]: found at pos 2664, rtsp_len 200, len 2868
[28/09 14:56:45.501 AD1]: ad 1 processed 200, socket 8
[28/09 14:56:45.501 AD1]: satipc_tcp_read: returning 1504 bytes
[28/09 14:56:45.501 AD1]: Read OK 1504 (rlen:10716/total:376000) bytes from 8 [s: 10 m: 10] -> 0x7f3a00a2b40c (buf: 0x7f3a00a29010) - iteration 8604 action 0x7f3a00ab4ecc
[28/09 14:56:45.501 AD1]: read_dmx send=0, force_send=0, cnt 0 called for adapter 1 -> 10716 out of 376000 bytes read, 5 ms ago (45980 45975)
[28/09 14:56:45.501 AD1]: socketworks.c:509 Unlocking mutex 0x7f3a01ab4ba8
[28/09 14:56:45.585 main]: socketworks.c:476 Locking mutex 0x7f3a01aa5818
[28/09 14:56:45.585 main]: socketworks.c:509 Unlocking mutex 0x7f3a01aa5818
[28/09 14:56:45.585 main]: socketworks.c:476 Locking mutex 0x7f3a01ab4d28
[28/09 14:56:45.585 main]: socketworks.c:509 Unlocking mutex 0x7f3a01ab4d28
[28/09 14:56:45.600 signal]: socketworks.c:476 Locking mutex 0x7f3a01a9afb8
[28/09 14:56:45.600 signal]: socketworks.c:509 Unlocking mutex 0x7f3a01a9afb8
[28/09 14:56:45.600 signal]: socketworks.c:476 Locking mutex 0x7f3a01aa5698
[28/09 14:56:45.600 signal]: stream.c:1110 Locking mutex 0x7f3a00cfeb00
[28/09 14:56:45.600 signal]: BW 1677 KB/s, DMX 1679 KB/s, Buffered 0 MB, Total BW: 10 MB, ns/read 184, r: 83, w: 85 fw: 0, tt: 15 ms
[28/09 14:56:45.600 signal]: stream.c:1123 Unlocking mutex 0x7f3a00cfeb00
[28/09 14:56:45.600 signal]: utils.c:932 Locking mutex 0x7f3a00cff780
[28/09 14:56:45.600 signal]: utils.c:941 Unlocking mutex 0x7f3a00cff780
[28/09 14:56:45.600 signal]: socketworks.c:509 Unlocking mutex 0x7f3a01aa5698
[28/09 14:56:45.601 AD1]: event on socket index 10 handle 8 type 5 spos 0 wpos 0 (poll fd: 8, events: 3, revents: 1)
[28/09 14:56:45.601 AD1]: socketworks.c:476 Locking mutex 0x7f3a01ab4ba8
[28/09 14:56:45.601 AD1]: start satipc_tcp_read tcp_pos 0 tcp_len 0 tcp_size 1138860, buffer len 365284
[28/09 14:56:45.601 AD1]: read 196 (from 1138860) from rtsp socket 8 (id 10) [24 01, 00 C0]
[28/09 14:56:45.601 AD1]: found at pos 0, rtsp_len 192, len 196
[28/09 14:56:45.601 AD1]: ad 1 processed 192, socket 8
[28/09 14:56:45.601 AD1]: satipc_tcp_read: returning 0 bytes
[28/09 14:56:45.601 AD1]: Read OK 0 (rlen:10716/total:376000) bytes from 8 [s: 10 m: 10] -> 0x7f3a00a2b9ec (buf: 0x7f3a00a29010) - iteration 8605 action 0x7f3a00ab4ecc
[28/09 14:56:45.601 AD1]: read_dmx send=1, force_send=1, cnt 0 called for adapter 1 -> 10716 out of 376000 bytes read, 105 ms ago (46080 45975)
[28/09 14:56:45.601 AD1]: stream.c:1193 Locking mutex 0x7f3a01ab6c48
[28/09 14:56:45.601 AD1]: stream.c:1086 Locking mutex 0x7f3a01a9b918
[28/09 14:56:45.601 AD1]: process_dmx start called for adapter 1 -> 10716 out of 376000 bytes read, 5 ms ago
[28/09 14:56:45.601 AD1]: Processing done sid 0 at pos 1, rtp header 0, rtp_pos 0, total_len 10716, rlen 10716
[28/09 14:56:45.601 AD1]: start writev handle 7, iiov 1, len 10716
[28/09 14:56:45.601 AD1]: writev returned 10716 handle 7, iiov 1, len 10716 (took 0 ms)
[28/09 14:56:45.601 AD1]: flush_stream: sent 10716 bytes for sid 0, handle 7, sock_id 7, seq 0 => 192.168.210.17:50778
[28/09 14:56:45.601 AD1]: stream.c:1088 Unlocking mutex 0x7f3a01a9b918
[28/09 14:56:45.601 AD1]: stream.c:1207 Unlocking mutex 0x7f3a01ab6c48
[28/09 14:56:45.601 AD1]: socketworks.c:509 Unlocking mutex 0x7f3a01ab4ba8
[28/09 14:56:45.601 AD1]: socketworks.c:476 Locking mutex 0x7f3a01ab4778
[28/09 14:56:45.601 AD1]: socketworks.c:478 Locking mutex 0x7f3a01ab6c48
[28/09 14:56:45.601 AD1]: socketworks.c:503 Unlocking mutex 0x7f3a01ab6c48
[28/09 14:56:45.601 AD1]: socketworks.c:509 Unlocking mutex 0x7f3a01ab4778

This is the complete log trace for the block when the first glitch appears (~100ms instead of the regular 3-5ms). And my first concern is this line:

[28/09 14:56:45.601 AD1]: Read OK 0 (rlen:10716/total:376000) bytes from 8 [s: 10 m: 10] -> 0x7f3a00a2b9ec (buf: 0x7f3a00a29010) - iteration 8605 action 0x7f3a00ab4ecc

Does it really make sense to read zero bytes from the RTSP socket?
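
For reference, the hex pairs in the reads above ([24 00, 05 30], [24 01, 00 C0]) look like the RFC 2326 "interleaved binary data" framing used for RTSP-over-TCP: a '$' (0x24), a channel byte, and a 16-bit big-endian payload length (0x0530 = 1328, 0x00C0 = 192, matching the rtsp_len values in the log). One possible, non-authoritative reading of the "Read OK 0" line is that the consumed frame was on channel 1 (typically the RTCP/signal-status channel), so it contributes no transport-stream bytes. A minimal parsing sketch, not the actual satipc_tcp_read() implementation:

/* Minimal sketch of RFC 2326 interleaved framing, as a reading aid only. */
#include <stdint.h>
#include <stdio.h>

/* Returns the payload length of one interleaved frame, or -1 if buf does not
 * start with the '$' marker; *channel receives the interleave channel. */
static int parse_interleaved_header(const uint8_t *buf, int len, int *channel) {
    if (len < 4 || buf[0] != 0x24)      /* 0x24 == '$' */
        return -1;
    *channel = buf[1];
    return (buf[2] << 8) | buf[3];      /* 16-bit big-endian payload length */
}

int main(void) {
    const uint8_t data_hdr[4] = {0x24, 0x00, 0x05, 0x30}; /* "[24 00, 05 30]" */
    const uint8_t rtcp_hdr[4] = {0x24, 0x01, 0x00, 0xC0}; /* "[24 01, 00 C0]" */
    int ch = -1, len;
    len = parse_interleaved_header(data_hdr, 4, &ch);
    printf("channel %d, payload %d bytes\n", ch, len);    /* 0, 1328 */
    len = parse_interleaved_header(rtcp_hdr, 4, &ch);
    printf("channel %d, payload %d bytes\n", ch, len);    /* 1, 192 */
    return 0;
}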

@lars18th
Contributor Author

Hi @catalinii ,

After more testing, the problem IS NOT related to the use of RTSP-over-TCP: when using the regular RTSP/RTP, the same problem appears. Furthermore, see this part of the log:

[30/09 14:00:45.569 AD1]: Read OK 1316 (rlen:1316/total:376000) bytes from 9 [s: 11 m: 11] -> 0x7f8e419f5010 (buf: 0x7f8e419f5010) - iteration 12489 action 0x7f8e41a80ecc
[30/09 14:00:45.569 AD1]: read_dmx send=0, force_send=0, cnt 0 called for adapter 1 -> 1316 out of 376000 bytes read, 1 ms ago (177412 177411)
[30/09 14:00:45.569 AD1]: socketworks.c:509 Unlocking mutex 0x7f8e42824d28
[30/09 14:00:45.572 signal]: socketworks.c:476 Locking mutex 0x7f8e4280afb8
[30/09 14:00:45.572 signal]: socketworks.c:509 Unlocking mutex 0x7f8e4280afb8
[30/09 14:00:45.653 main]: socketworks.c:476 Locking mutex 0x7f8e42815818
[30/09 14:00:45.653 main]: socketworks.c:509 Unlocking mutex 0x7f8e42815818
[30/09 14:00:45.653 main]: socketworks.c:476 Locking mutex 0x7f8e42824ea8
[30/09 14:00:45.653 main]: socketworks.c:509 Unlocking mutex 0x7f8e42824ea8
[30/09 14:00:45.669 AD1]: socketworks.c:476 Locking mutex 0x7f8e42824778
[30/09 14:00:45.669 AD1]: socketworks.c:478 Locking mutex 0x7f8e42826f48
[30/09 14:00:45.669 AD1]: socketworks.c:503 Unlocking mutex 0x7f8e42826f48
[30/09 14:00:45.670 AD1]: socketworks.c:509 Unlocking mutex 0x7f8e42824778
[30/09 14:00:45.670 AD1]: event on socket index 10 handle 10 type 1 spos 0 wpos 0 (poll fd: 10, events: 3, revents: 1)
[30/09 14:00:45.670 AD1]: socketworks.c:476 Locking mutex 0x7f8e42824ba8
[30/09 14:00:45.670 AD1]: Read OK 192 (rlen:192/total:10239) bytes from 10 [s: 10 m: 10] -> 0x7f8e38ff6300 (buf: 0x7f8e38ff6300) - iteration 27 action 0x7f8e41aa360b
[30/09 14:00:45.670 AD1]: satipc: satipc_rtcp_reply SES1 match at 60
[30/09 14:00:45.670 AD1]: satipc: Received signal status from the server for adapter 1, stength=197 status=1 quality=3
[30/09 14:00:45.670 AD1]: socketworks.c:509 Unlocking mutex 0x7f8e42824ba8
[30/09 14:00:45.750 AD3]: socketworks.c:476 Locking mutex 0x7f8e428252d8
[30/09 14:00:45.750 AD3]: socketworks.c:478 Locking mutex 0x7f8e42829c18
[30/09 14:00:45.750 AD3]: socketworks.c:503 Unlocking mutex 0x7f8e42829c18
[30/09 14:00:45.750 AD3]: socketworks.c:509 Unlocking mutex 0x7f8e428252d8
[30/09 14:00:45.753 main]: socketworks.c:476 Locking mutex 0x7f8e42824ea8
[30/09 14:00:45.753 main]: socketworks.c:509 Unlocking mutex 0x7f8e42824ea8
[30/09 14:00:45.853 main]: socketworks.c:476 Locking mutex 0x7f8e42824ea8
[30/09 14:00:45.853 main]: socketworks.c:509 Unlocking mutex 0x7f8e42824ea8
[30/09 14:00:45.870 AD1]: event on socket index 10 handle 10 type 1 spos 0 wpos 0 (poll fd: 10, events: 3, revents: 1)
[30/09 14:00:45.870 AD1]: socketworks.c:476 Locking mutex 0x7f8e42824ba8
[30/09 14:00:45.870 AD1]: Read OK 188 (rlen:188/total:10239) bytes from 10 [s: 10 m: 10] -> 0x7f8e38ff6300 (buf: 0x7f8e38ff6300) - iteration 28 action 0x7f8e41aa360b
[30/09 14:00:45.870 AD1]: satipc: satipc_rtcp_reply SES1 match at 60
[30/09 14:00:45.870 AD1]: satipc: Received signal status from the server for adapter 1, stength=0 status=0 quality=0
[30/09 14:00:45.870 AD1]: socketworks.c:509 Unlocking mutex 0x7f8e42824ba8

As you can see, the problem is that the reading socket of the "satipc" module is not receiving events, and it then stalls for exactly 200 ms (45.670 AD1: event on socket index 10 --> 45.870 AD1: event on socket index 10), which is the timeout value of the socket. This is true in ALL conducted tests. Additionally, I see that the "iteration number" of the "Read OK" debug line has a gap in this log: it jumps from 12489 to 27. I observed this only with the regular RTP transport; this new log uses RTP, not TCP as in the previous one.

And finally, I can confirm that if the RTSP socket is already open, the problem doesn't appear. So, if you open the second client and then close it, and then retune to any other channel (on a different transponder) before the "adapter" is closed, everything stays clean. Please remember that the closing of the RTSP socket in the "satipc" module has a 30-second grace time. Therefore, when the socket is already open, nothing goes wrong.

That said, could it then be that the creation of the client TCP socket to connect to the RTSP port causes a problem in the global socket structure?
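
For context, this is one generic way a fresh client TCP connection can stall an event loop, and the usual way to avoid it; it is a plain POSIX sketch under my own assumptions, not a description of what minisatip actually does. On a blocking socket, connect() can hold the calling thread for a full handshake or timeout, whereas with O_NONBLOCK it returns immediately (EINPROGRESS) and completion shows up later as POLLOUT, so a shared poll loop keeps servicing the other sockets. The address below is a placeholder:

/* Generic non-blocking connect sketch; nothing here is minisatip code. */
#include <arpa/inet.h>
#include <errno.h>
#include <fcntl.h>
#include <poll.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void) {
    struct sockaddr_in sa;
    memset(&sa, 0, sizeof(sa));
    sa.sin_family = AF_INET;
    sa.sin_port = htons(554);                      /* RTSP port */
    inet_pton(AF_INET, "192.0.2.1", &sa.sin_addr); /* placeholder address */

    int fd = socket(AF_INET, SOCK_STREAM, 0);
    fcntl(fd, F_SETFL, fcntl(fd, F_GETFL, 0) | O_NONBLOCK);

    if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0 && errno != EINPROGRESS) {
        perror("connect");                         /* immediate failure */
    } else {
        /* an event loop would watch fd for POLLOUT instead of blocking here */
        struct pollfd pf = {.fd = fd, .events = POLLOUT};
        int rv = poll(&pf, 1, 100);                /* wait at most 100 ms, then move on */
        printf("connect pending, poll rv=%d revents=0x%x\n", rv, pf.revents);
    }
    close(fd);
    return 0;
}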

@lars18th
Contributor Author

Hi @catalinii ,

More info: compiling and running inside WSL (Ubuntu 21.10), the same behaviour appears. The same happens when running without threads (with the "-T" parameter). So the problem is there in all environments.

Reproducing it is quite easy: run a local minisatip connected to TWO different SAT>IP servers through the "satipc" module. First wait until all the tuners have been closed (around 30-40 seconds). Then open one client (HTTP or RTSP, it doesn't matter) on any FTA channel and, while the program is playing, open another client on a different transponder... EVERY TIME you'll see the first client interrupted for around 1-2 seconds.

I hope we can fix this serious bug.

@lars18th
Contributor Author

Hi @catalinii ,

Any reason for this redundant line?

minisatip/src/satipc.c

Lines 1056 to 1057 in ec9c4c8

set_socket_thread(ad->fe_sock, ad->thread);
set_socket_thread(ad->fe_sock, ad->thread);

@lars18th
Contributor Author

Hi @catalinii ,

Another bug, perhaps related (or not): if you start minisatip and do not wait until all the tuners have been closed, then after you open one channel the other tuners are never closed as expected. I'm speaking about "remote adapters" configured with the "satipc" module.

@lars18th
Contributor Author

Hi @catalinii ,

If I uncomment the LOG calls on lines 729 & 735 of "socketworks.c", then:

minisatip/src/socketworks.c

Lines 729 to 735 in ec9c4c8

// LOG("start select");
if ((rv = poll(pf, max_sock, select_timeout)) < 0) {
LOG("select_and_execute: select() error %d: %s", errno,
strerror(errno));
continue;
}
// LOG("select returned %d",rv);

[30/09 20:14:26.485]: read_dmx send=0, force_send=0, cnt 0 called for adapter 1 -> 3948 out of 376000 bytes read, 8 ms ago (47260 47252)
[30/09 20:14:26.485]: start select
[30/09 20:14:26.586]: select returned 1
[30/09 20:14:26.586]: select_and_execute: return poll 1, es 14, max_sock 14, select_timeout 100
[30/09 20:14:26.586]: event on socket index 9 handle 8 type 5 spos 0 wpos 0 (poll fd: 8, events: 3, revents: 1)
[30/09 20:14:26.586]: start satipc_tcp_read tcp_pos 0 tcp_len 0 tcp_size 1138860, buffer len 372052
[30/09 20:14:26.586]: read 172 (from 1138860) from rtsp socket 8 (id 9) [24 01, 00 A8]
[30/09 20:14:26.586]: found at pos 0, rtsp_len 168, len 172
[30/09 20:14:26.586]: ad 1 processed 168, socket 8
[30/09 20:14:26.586]: satipc_tcp_read: returning 0 bytes
[30/09 20:14:26.586]: Read OK 0 (rlen:3948/total:376000) bytes from 8 [s: 9 m: 9] -> 0x7fe6e5fd6f7c (buf: 0x7fe6e5fd6010) - iteration 2899 action 0x55ce884e5f99
[30/09 20:14:26.586]: read_dmx send=1, force_send=1, cnt 0 called for adapter 1 -> 3948 out of 376000 bytes read, 109 ms ago (47361 47252)
[30/09 20:14:26.586]: process_dmx start called for adapter 1 -> 3948 out of 376000 bytes read, 8 ms ago
[30/09 20:14:26.586]: Processing done sid 0 at pos 1, rtp header 0, rtp_pos 0, total_len 3948, rlen 3948
[30/09 20:14:26.586]: start writev handle 7, iiov 1, len 3948
[30/09 20:14:26.586]: writev returned 3948 handle 7, iiov 1, len 3948 (took 0 ms)
[30/09 20:14:26.586]: flush_stream: sent 3948 bytes for sid 0, handle 7, sock_id 6, seq 0 => 172.31.128.1:59504
[30/09 20:14:26.586]: start select
[30/09 20:14:26.686]: select returned 0
[30/09 20:14:26.686]: start select
[30/09 20:14:26.786]: select returned 1
[30/09 20:14:26.786]: select_and_execute: return poll 1, es 14, max_sock 14, select_timeout 100
[30/09 20:14:26.786]: event on socket index 9 handle 8 type 5 spos 0 wpos 0 (poll fd: 8, events: 3, revents: 1)
[30/09 20:14:26.786]: start satipc_tcp_read tcp_pos 0 tcp_len 0 tcp_size 1138860, buffer len 376000
[30/09 20:14:26.786]: read 172 (from 1138860) from rtsp socket 8 (id 9) [24 01, 00 A8]
[30/09 20:14:26.786]: found at pos 0, rtsp_len 168, len 172
[30/09 20:14:26.786]: ad 1 processed 168, socket 8
[30/09 20:14:26.786]: satipc_tcp_read: returning 0 bytes
[30/09 20:14:26.786]: Read OK 0 (rlen:0/total:376000) bytes from 8 [s: 9 m: 9] -> 0x7fe6e5fd6010 (buf: 0x7fe6e5fd6010) - iteration 2900 action 0x55ce884e5f99
[30/09 20:14:26.786]: start select
[30/09 20:14:26.887]: select returned 0
[30/09 20:14:26.887]: start select
[30/09 20:14:26.986]: select returned 1
[30/09 20:14:26.987]: select_and_execute: return poll 1, es 14, max_sock 14, select_timeout 100

Any idea why something is blocking the polling until the timeout?

@catalinii
Owner

Hi,

Poll timeout means none of the handles submitted to the poll call had anything to read. I would suggest doing a tcpdump and seeing whether any traffic is coming to minisatip at that time.
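
As a toy illustration of that statement (not minisatip code; the fd used here is just stdin as a placeholder): poll() returns 0 only when none of the submitted fds became readable within the timeout, which matches the "select returned 0" lines and the 100 ms select_timeout in the log above.

#include <poll.h>
#include <stdio.h>

int main(void) {
    struct pollfd pf = {.fd = 0, .events = POLLIN}; /* stdin, placeholder fd */
    int rv = poll(&pf, 1, 100);                     /* 100 ms, like select_timeout */
    if (rv == 0)
        printf("timeout: no fd had data within 100 ms\n");
    else if (rv > 0)
        printf("rv=%d: at least one fd is readable (revents=0x%x)\n", rv, pf.revents);
    else
        perror("poll");
    return 0;
}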

@lars18th
Contributor Author

lars18th commented Oct 2, 2022

Hi @catalinii ,

For sure the problem is not the server; the data is coming from it.

I discovered today that the problem could be related to the CA management. When using transponders without PID 1 (CAT), the problem doesn't appear.

I'll provide more information.
Regards.

@lars18th
Contributor Author

lars18th commented Oct 3, 2022

Hi @catalinii ,

After more testing, the problem doesn't seem to be related to the CA management. However, it does appear to be related to the TCP WINDOW SIZE. Even though the problem also appears when using RTP transport... with the "tcpdump" tool running on the SAT>IP server (the E2 box), I can see that the window size of the TCP packets changes when the second client connects to the central minisatip server. So perhaps the problem is something related to this:

  • The stream from the SAT>IP server to the "satipc" module changes when a new "satipc" session is established to another SAT>IP server.

However, I can't understand why. Any idea?

@catalinii
Owner

If you can reproduce the issues on both tcp and udp then probably the issue is not related to the tcp window.

The window is just a way of determining how much data can be sent without acknowledging the received data and is meant to be dynamic. On a system with no packet loss the window should just increase up to a max value (and should not drop to a low value).

What is the SAT>IP server running on the E2 box?

@lars18th
Contributor Author

lars18th commented Oct 4, 2022

Hi @catalinii ,

I discovered the root cause of the glitch! 🎖️
And even though I've fixed some other minor issues (I'll publish a patch soon), the problem occurs when using more than one minisatip device connected to the same satellite wire. Let me explain:

  • With TWO (or more) independent minisatip devices connected to the same LNB and using the DiSEqC parameters ("-0", "-d" and "-q"), the DiSEqC commands are sent every time a tuner adapter is opened. And what happens when another minisatip SAT>IP server is already using the wire? Yes, the sat-switch or the LNB can be disturbed for a second. This is true even if both devices are managed by the same remote SAT>IP "client" that knows both tuners require the same voltage/tone values. So the problem is not caused by inconsistent band values, but by the fact that sending "new" DiSEqC commands disturbs the signal, and that remains true even when the "new" values are the same as the "running" values.

Therefore, I have this suggestion to circumvent the problem: one "direct" solution is not to use the DiSEqC parameter "-0" and the rest at all. But if your infrastructure really needs them, what we can do is send an additional parameter to the "slave" device so that it opens the tuner and tunes to the frequency without sending any DiSEqC command. For example, with this syntax:

  • /?src=...&diseqc=no

So, when this "optional" parameter is received in the tune request, the idea is to "block" all DiSEqC commands. It is then quite easy for the central minisatip SAT>IP server (the one using the "satipc" module) to send this parameter when it's managing a slave tuner.

Or perhaps you prefer to use a more generic parameter like /?src=...&slave=on.
What do you think?
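
Purely as an illustration of the receiver side of this proposal (the "diseqc=no" query parameter does not exist in minisatip today, and parse_tune()/skip_diseqc are hypothetical names, not real minisatip functions), the change would roughly be:

#include <stdio.h>
#include <string.h>

struct tune_req {
    int skip_diseqc; /* 1 => tune without sending any DiSEqC/voltage/tone sequence */
};

/* query is everything after '?', e.g. "src=1&freq=10744&...&diseqc=no" */
static void parse_tune(const char *query, struct tune_req *tr) {
    tr->skip_diseqc = (strstr(query, "diseqc=no") != NULL);
}

int main(void) {
    struct tune_req tr;
    parse_tune("src=1&freq=10744&pol=h&msys=dvbs&diseqc=no", &tr);
    if (tr.skip_diseqc)
        printf("slave request: set the frontend only, skip DiSEqC commands\n");
    else
        printf("master request: send DiSEqC as usual before tuning\n");
    return 0;
}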

@Jalle19
Collaborator

Jalle19 commented Oct 4, 2022

Please, no more slave tuner related hacks 😄

@lars18th
Contributor Author

lars18th commented Oct 5, 2022

Hi @Jalle19 ,

Please, no more slave tuner related hacks

My proposed solution would be completely transparent to all users. Think of this scenario:

                           +-----> minisatip A --(COAX)--+
central minisatip --(IP)---|                             |----- LNB
                           +-----> minisatip B --(COAX)--+

What I'm saying is that the "central minisatip" could add the parameter x_diseqc=off to the request. Including it would be purely optional, and only used if you want to instruct the "receiver" SAT>IP server not to send any DiSEqC command. So the only changes are:

  • When receiving a request (SETUP or PLAY), if it includes the parameter x_diseqc=, use it to decide whether or not to send the DiSEqC command.
  • In the "satipc" module, send the parameter x_diseqc= if enabled by the user: on for master tuners and off for slave tuners.

So it's fully transparent. For the receiver it's only a new optional parameter to parse, and for the server it's only a new feature that the user can enable optionally. What's the problem?
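
And, equally hypothetically, the satipc (sender) side of the proposal would amount to appending the parameter when building the query; build_setup_query() below is an invented helper, not existing minisatip code:

#include <stdio.h>

/* Append the proposed x_diseqc= parameter depending on the master/slave role. */
static void build_setup_query(char *out, size_t n, int freq, const char *pol,
                              int is_slave) {
    snprintf(out, n, "src=1&freq=%d&pol=%s&msys=dvbs&x_diseqc=%s",
             freq, pol, is_slave ? "off" : "on");
}

int main(void) {
    char q[256];
    build_setup_query(q, sizeof(q), 10744, "h", 1);
    printf("SETUP rtsp://<server>:554/?%s RTSP/1.0\n", q); /* illustrative only */
    return 0;
}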

@catalinii
Owner

This solution is again very non-standard and will become clutter as more and more people move to Unicable, which does not have this issue.

@lars18th
Contributor Author

Hi @catalinii ,

I'm convinced that the only way to solve this issue is to allow sharing information between different minisatip instances. Therefore, in order to provide a method that is clean and safe, I suggest this strategy:

  • Incorporate a new header inside the RTSP and HTTP requests to pass information to the minisatip instance.

This method would be fully transparent, because additional header lines will not cause trouble for any server. Best of all, this could be used not only to resolve this issue, but to support any future enhancement we need. Example of this header:

  • XCmd-minisatip: diseqc=off

And one example inside an RTSP request sent by the satipc module:

SETUP rtsp://192.168.178.21:554/?src=1&freq=10744&pol=h&ro=0.35&msys=dvbs&mtype=qpsk&plts=off&sr=22000&fec=56&pids=0,400,401,402 RTSP/1.0
CSeq:1
Transport: RTP/AVP;unicast;client_port=11992-11993
XCmd-minisatip: diseqc=off
Connection:close

So the idea is quite simple: the master can add different eXtended CoMmanDs to its requests, and the slave only needs to process them. The best part is that you can pass multiple commands for future uses: XCmd-minisatip: diseqc=off;hack1=on;var=A

What do you think about this generic data sharing between different minisatip instances?
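
An illustrative sketch of how a slave could split such a header value (the XCmd-minisatip header is only a proposal; handle_xcmd() and its key handling are hypothetical, not existing minisatip code):

#include <stdio.h>
#include <string.h>

/* Split "key=value;key=value;..." pairs from the proposed header value. */
static void handle_xcmd(const char *value) {
    char buf[256];
    snprintf(buf, sizeof(buf), "%s", value);
    for (char *tok = strtok(buf, ";"); tok; tok = strtok(NULL, ";")) {
        char *eq = strchr(tok, '=');
        if (!eq)
            continue;
        *eq = 0;
        printf("xcmd key='%s' value='%s'\n", tok, eq + 1);
        /* e.g. key "diseqc" with value "off" would suppress DiSEqC on tune */
    }
}

int main(void) {
    handle_xcmd("diseqc=off;hack1=on;var=A");
    return 0;
}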

@Jalle19
Collaborator

Jalle19 commented Jan 22, 2023

#992 (comment) my stance on this remains unchanged

@lars18th
Contributor Author

#992 (comment) my stance on this remains unchanged

Hi @Jalle19, I appreciate your comment. However, in some cases (like my environment) the master-slave mode is necessary. Remember: using a central minisatip instance that receives streams from several SAT>IP servers is very common for multi-frequency tuner support. And if those SAT>IP servers are cheap STBs running minisatip (with one or two tuners), then you're in the group that needs master-slave support. Not everyone has FBC tuners driven by a single minisatip instance.

However, the above is not directly related to my last proposal: provide a method to send specific commands from one minisatip instance to another when using the satipc module. I think this is a great enhancement, and it can be used for other functionalities as well. Perhaps you don't use the satipc module, but I use only that, and for me it's the reason to use the minisatip server. I understand that this is not the case for all users, and I don't want to impose anything in support of it that could cause trouble for other users. But I feel the same is true the other way around: if something doesn't create side effects for users with single instances, then it should not be a problem to incorporate it when it helps users running multiple minisatip instances. Don't you think so?
