
API: Duplicated on_stop callback events by HTTP Callback. #3330

Closed
lbli opened this issue Dec 22, 2022 · 1 comment · Fixed by #3349

lbli commented Dec 22, 2022

Description
When an HLS pull stream is stopped, an on_stop callback event is generated; however, there is always one extra on_stop event. The client_id in the first on_stop request is empty, while the subsequent ones are not. For example, pulling one HLS stream and then stopping it generates two on_stop events; pulling five HLS streams and then stopping them generates six.

1. SRS Version:

6.0.6

2. SRS Log:

[2022-12-22 18:27:03.425][INFO][4019][78712a64] Hybrid cpu=2.00%,876MB, cid=2,1, timer=62,0,0, clock=0,46,2,0,0,0,0,0,0, free=1, objs=(pkt:0,raw:0,fua:0,msg:203,oth:0,buf:0)
[2022-12-22 18:27:03.428][INFO][4019][] http: on_stop ok, client_id=, url=http://172.20.0.221:11985/api/rest/v1/live/onstop, request={"server_id":"vid-vp3mp16","action":"on_stop","client_id":"","ip":"172.24.0.74","vhost":"__defaultVhost__","app":"live","tcUrl":"http://172.24.0.75/live","stream":"livestream.m3u8","param":"","stream_url":"/live/livestream","stream_id":"vid-36ozq71"}, response={
  "code": 0
}
[2022-12-22 18:27:05.215][INFO][4019][g374f1g6] <- CPB time=135011551, okbps=0,0,0, ikbps=0,1563,0, mr=0/350, p1stpt=20000, pnt=5000
[2022-12-22 18:27:05.398][INFO][4019][54h6a99l] Process: cpu=2.00%,876MB, threads=2
[2022-12-22 18:27:06.851][INFO][4019][980l7266] HTTP #0 172.17.0.3:37882 GET http://172.24.0.75:9972/metrics, content-length=-1
[2022-12-22 18:27:06.851][INFO][4019][980l7266] TCP: before dispose resource(HttpConn)(0x600e001cee20), conns=2, zombies=0, ign=0, inz=0, ind=0
[2022-12-22 18:27:06.851][WARN][4019][980l7266][104] client disconnect peer. ret=1007
[2022-12-22 18:27:06.851][INFO][4019][p1037qm1] TCP: clear zombies=1 resources, conns=2, removing=0, unsubs=0
[2022-12-22 18:27:06.851][INFO][4019][980l7266] TCP: disposing #0 resource(HttpConn)(0x600e001cee20), conns=2, disposing=1, zombies=0
[2022-12-22 18:27:08.428][INFO][4019][78712a64] Hybrid cpu=2.00%,876MB, cid=2,1, timer=62,0,0, clock=0,46,2,0,0,0,0,0,0, free=1, objs=(pkt:0,raw:0,fua:0,msg:203,oth:0,buf:0)
[2022-12-22 18:27:08.431][INFO][4019][sr6043m0] http: on_stop ok, client_id=sr6043m0, url=http://172.20.0.221:11985/api/rest/v1/live/onstop, request={"server_id":"vid-vp3mp16","action":"on_stop","client_id":"sr6043m0","ip":"172.24.0.74","vhost":"__defaultVhost__","app":"live","tcUrl":"http://172.24.0.75/live","stream":"livestream.m3u8","param":"?hls_ctx=sr6043m0","stream_url":"/live/livestream","stream_id":"vid-36ozq71"}, response={
  "code": 0
}
[2022-12-22 18:27:10.403][INFO][4019][54h6a99l] Process: cpu=2.00%,876MB, threads=2
[2022-12-22 18:27:11.850][INFO][4019][j6a4ndc5] HTTP #0 172.17.0.3:37888 GET http://172.24.0.75:9972/metrics, content-length=-1
[2022-12-22 18:27:11.851][INFO][4019][j6a4ndc5] TCP: before dispose resource(HttpConn)(0x600e000acd20), conns=2, zombies=0, ign=0, inz=0, ind=0
[2022-12-22 18:27:11.851][WARN][4019][j6a4ndc5][104] client disconnect peer. ret=1007
[2022-12-22 18:27:11.851][INFO][4019][p1037qm1] TCP: clear zombies=1 resources, conns=2, removing=0, unsubs=0
[2022-12-22 18:27:11.851][INFO][4019][j6a4ndc5] TCP: disposing #0 resource(HttpConn)(0x600e000acd20), conns=2, disposing=1, zombies=0

3. SRS Config:

http_server {
    enabled         on;
    listen          8080;
    dir             ./objs/nginx/html;
}
rtc_server {
    enabled on;
    listen 8000; # UDP port
    # @see https://ossrs.net/lts/zh-cn/docs/v4/doc/webrtc#config-candidate
    candidate $CANDIDATE;
}

exporter {
    enabled on;
    listen 9972;
    label cn-beijing;
    tag cn-edge;
}

vhost __defaultVhost__ {
    hls {
        enabled         on;

        #hls_path ./objs/nginx/html/raw;
        #hls_fragment 5;
        #hls_window 10;
        #hls_dispose 10;
        #hls_wait_keyframe on;
        #hls_m3u8_file [app]/[stream].m3u8;
        #hls_ts_file [app]/[stream]-[seq].ts;
        #hls_cleanup off;
    }

    http_hooks {
        enabled         on;
        on_publish      http://172.20.0.221:11985/api/rest/v1/live/onpublish;
        on_unpublish    http://172.20.0.221:11985/api/rest/v1/live/onunpublish;
        on_play         http://172.20.0.221:11985/api/rest/v1/live/onplay;
        on_stop         http://172.20.0.221:11985/api/rest/v1/live/onstop;
        #on_hls          http://172.20.0.221:11985/api/rest/v1/live/onhls;
    }

    http_remux {
        enabled     on;
        mount       [vhost]/[app]/[stream].flv;
    }
    rtc {
        enabled     on;
        # @see https://ossrs.net/lts/zh-cn/docs/v4/doc/webrtc#rtmp-to-rtc
        rtmp_to_rtc off;
        # @see https://ossrs.net/lts/zh-cn/docs/v4/doc/webrtc#rtc-to-rtmp
        rtc_to_rtmp off;
    }

    play {
        gop_cache_max_frames 2500;
    }
}

> Please describe how to reproduce the bug.

1. Configure http_hooks in srs.conf and restart SRS.
2. Start pushing the stream: ./ffmpeg -threads 2 -fflags +genpts -stream_loop -1 -re -i /root/srs_dev/720_filter1.mp4 -c:v h264 -c:a aac -f flv rtmp://172.24.0.75:1935/live/livestream
3. Start pulling the stream: ./objs/sb_hls_load -c 1 -r http://172.24.0.75:8080/live/livestream.m3u8
4. Check the logs for the on_stop callbacks; a minimal hook-receiver sketch for observing them follows these steps.
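
For step 4, one way to observe the callbacks directly is to run a minimal hook receiver. This is an illustrative sketch, not part of the original report: it listens on port 11985 as in the http_hooks config above, accepts any POST path rather than routing per hook, and replies with a JSON body of {"code": 0}, which matches the successful responses shown in the logs.

import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class HookHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read and decode the JSON event that SRS posts to the hook URL.
        length = int(self.headers.get("Content-Length", 0))
        event = json.loads(self.rfile.read(length) or b"{}")
        # The duplicated on_stop shows up as an event with an empty client_id.
        print(event.get("action"), "client_id=%r" % event.get("client_id", ""))
        # Per the logs above, SRS accepts {"code": 0} as a successful response.
        resp = b'{"code": 0}'
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(resp)))
        self.end_headers()
        self.wfile.write(resp)

HTTPServer(("0.0.0.0", 11985), HookHandler).serve_forever()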

Expected Behavior
The number of on_stop callbacks should match the number of pull clients.

Additionally, testing showed that for HLS pulls the on_stop callback arrives only after a delay: with 20 pull streams the last on_stop arrived after about 4 minutes, and with 50 streams after about 6 minutes. This is likely inherent to HLS, which runs over stateless HTTP, so the server can only infer that a client has stopped. Is there any way to reduce the time between stopping the pull and the on_stop callback? Thank you.
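
Not from the original thread: until the duplicate is fixed in SRS itself (see the linked pull requests below), a possible receiver-side workaround is to drop on_stop events whose client_id is empty. This assumes the pattern in the logs above generalizes, i.e. that only the spurious extra on_stop carries an empty client_id while real session events carry an hls_ctx id.

def is_spurious_on_stop(event):
    # Assumption (based on the logs above, not on documented SRS behavior):
    # the one extra on_stop per stream is the only event whose client_id
    # is empty; session-bound on_stop events carry the hls_ctx id.
    return event.get("action") == "on_stop" and not event.get("client_id")

A receiver that counts only the events this predicate rejects as genuine should then see one on_stop per pull client, matching the expectation above.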

TRANS_BY_GPT3

winlinvip added the API (HTTP-API, HTTP-Callback, etc.) and Bug (It might be a bug.) labels Dec 22, 2022
winlinvip added this to the 5.0 milestone Dec 22, 2022

winlinvip commented Dec 22, 2022

It seems to be an unavoidable problem.

TRANS_BY_GPT3

winlinvip changed the title from "After an HLS pull stream stops, one extra on_stop callback event is generated each time; e.g., pulling two HLS streams and stopping produces three on_stop events" to "API: Duplicated on_stop callback events by HTTP Callback." Dec 25, 2022
duiniuluantanqin linked a pull request Dec 30, 2022 that will close this issue
winlinvip linked a pull request Jan 1, 2023 that will close this issue
winlinvip added the TransByAI (Translated by AI/GPT.) label Jul 29, 2023