
new twitch module #28

Closed
k1ck3r opened this issue Jan 24, 2018 · 12 comments

Comments


k1ck3r commented Jan 24, 2018

Hey man,

so far so good, but again there are issues with the Twitch module. There are two cases:

With the "Require Preview before going live" option switched off:
ffmpeg processes keep spawning, as if it doesn't check whether a process is already running. Stopping the stream doesn't kill those ffmpeg processes.

With the "Require Preview before going live" option switched on:
it spawns only one ffmpeg process, but again, once the stream is stopped the process keeps running.

Tested on CentOS 7.4 and Ubuntu 16.04, with both approaches - with the installer and by hand.
Hope you'll have time to look into this.

PS: could you point me to where exactly the stream key is created? I want to add the username to the stream key, because I'm currently testing and developing an RTMP stat module.


jprjr commented Jan 24, 2018

Regarding stream key: it's created here: https://github.com/jprjr/multistreamer/blob/master/lib/multistreamer/models/stream.lua#L169
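If you just want to prefix the key with the username, the change would look roughly like this (just a sketch, not the actual code at that line - the `uuid()` helper and `user.username` field are stand-in names):

```lua
-- rough sketch only, not the code at stream.lua#L169;
-- uuid() and user.username are stand-in names
local function generate_stream_key(user, uuid)
  -- prefix the random key with the username so external tools
  -- (like an rtmp stat module) can tell streams apart
  return user.username .. '-' .. uuid()
end
```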

That's really weird about spawning multiple ffmpeg processes. When Multistreamer spawns ffmpeg, it keeps a hold of that process - it doesn't let it fork or anything like that.

One question: how many nginx worker processes do you have? I usually just run 1, I haven't tried it with 2+ in a long time, could that be causing the problem?


k1ck3r commented Jan 25, 2018

Using only one worker. The problem is only with Twitch (again) - using only YT (or YT+FB) as the upstream doesn't cause any problems at all, and the issue started once you updated the module for Twitch OAuth.

Also, I didn't explain the case with "Require Preview before going live" switched on very well: it starts the stream, but soon enough (3-5 minutes) it kills the ffmpeg process and starts it again.

I have a VM, installed about an hour ago, ready in case you're able to debug it on a fresh installation.


jprjr commented Jan 25, 2018

That was a pretty embarrassing bug.

While streaming, the nginx module makes an internal call to an "on-update" API endpoint every 30 seconds or so; each module uses that time to check that things are going OK, make any external API calls needed, etc.

I managed to completely break that functionality in the twitch module, so after 30 seconds of streaming it would spit out a 500 error, which would cause the nginx module to end the stream, kill ffmpeg, etc. I'm usually on top of keeping my production instance on the bleeding-edge, but I just logged in and noticed I was still on 11.0.2, which still used all the older twitch code.
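Roughly, the shape of that flow is something like this (names here are illustrative, not multistreamer's actual internals):

```lua
-- hypothetical sketch of the "on-update" flow; module/field names are made up
local function on_update(modules, account, stream)
  for _, m in ipairs(modules) do
    -- each network module verifies its external state (tokens, channel info, etc.)
    local ok, err = m.check(account, stream)
    if not ok then
      ngx.log(ngx.ERR, "on-update check failed: ", err)
      -- a 500 here tells the rtmp side to tear the stream down and kill ffmpeg
      return ngx.exit(ngx.HTTP_INTERNAL_SERVER_ERROR)
    end
  end
  return ngx.exit(ngx.HTTP_OK)
end
```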

I've pushed a new tag, 11.0.4, that should fix the bug.


k1ck3r commented Jan 26, 2018

Hey man,
awesome job, thanks for the update! I really appreciate your time, but can you give me a heads up on why the ffmpeg process is still up after the stream has ended? All of my tests are done on a fresh install each time, using your official installer, and the issue with the undying ffmpeg process remains.

thanks again 🥇


k1ck3r commented Feb 3, 2018

Hey again J,

unfortunately, bad news: Twitch streams die after exactly 30 minutes of online streaming, on top of the undying ffmpeg process. Even if the stream is stopped from the dashboard, ffmpeg is still up.


jprjr commented Feb 4, 2018

Hey hey, just giving you a notification to let you know I've seen this - I'm moving this week, so I'll likely get to take a look at it next week.


k1ck3r commented Feb 4, 2018

No worries man, I've found what causes it, and it's strongly correlated with the Redis version and the new kernel anti-Meltdown updates. Maybe my problem is that I keep working with the latest packages available.


jprjr commented Feb 4, 2018 via email


k1ck3r commented Feb 5, 2018

If it's doable, I think it would be good to have everything in one place - I've spent too much time reading up on Redis and it's still kind of hard even to explain what you can and can't do with it.

My personal take on your project leans a bit more toward the general audience - e.g. instead of ffmpeg params via the advanced menu, there could be "simple" PiP templates for "friends" (your description is more specific, meaning 'grant access to metadata info'). Anyhow, I can't really get into developing with Lua, so I can't help you with improvements - for example, adding the username in front of the stream key was like a pig in the mud for me: it's fun, it's dirty, but in the end nothing worked :D

PS: I will support you by any means, but if you could split things out (for example the auth modules for upstreams) into something more... third party... PHP or even Java, it would be easier to maintain and to try adding different functionality.


jprjr commented Feb 5, 2018

The main function of redis is to share updates between different parts of the app.

You've basically got two kinds of connections going on - short-lived and long-lived connections. Like while streaming, you've got a long-lived connection from RTMP, you've got long-lived connections while viewing the chat (websockets), and long-lived connections via IRC.

All the various commands (like clicking "start stream", "stop stream") are short-lived connections, so those generate events via redis pubsub. The long-lived connections are bringing in updates via redis so they can respond appropriately. So if you stop streaming, the chat interface can be notified that the stream has ended. It's also used for multiprocess scenarios - a broadcast will go out saying "hey the user clicked 'stop'", and whichever worker actually "owns" the ffmpeg process will end it.
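As a rough illustration of that pattern with lua-resty-redis (the channel name and message shape here are made up, not what multistreamer actually uses):

```lua
local redis = require "resty.redis"
local cjson = require "cjson"

-- short-lived side: an HTTP handler publishes "the user clicked stop"
local function publish_stop(stream_id)
  local red = redis:new()
  red:set_timeout(1000)
  assert(red:connect("127.0.0.1", 6379))
  red:publish("stream:events", cjson.encode({ event = "stop", stream_id = stream_id }))
  red:set_keepalive(10000, 10)
end

-- long-lived side: the worker that "owns" the ffmpeg process listens and reacts
local function watch_events(on_stop)
  local red = redis:new()
  red:set_timeout(30000)
  assert(red:connect("127.0.0.1", 6379))
  red:subscribe("stream:events")
  while true do
    local res, err = red:read_reply()
    if res and res[1] == "message" then
      local msg = cjson.decode(res[3])
      if msg.event == "stop" then on_stop(msg.stream_id) end
    elseif err and err ~= "timeout" then
      return nil, err -- connection lost; caller should reconnect
    end
  end
end
```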

That's all I'm using redis for, though - pubsub notifications. The ngx_lua_ipc module should be pretty easy to use as a replacement. The only part I'm unsure of is the IRC code - I have a thread running that gets updates via redis pubsub and keeps the state of everything in memory, so that when somebody connects via IRC, I can just make a copy of that current state. Everything else has some other long-lived connection to deal with (the websocket for chat, the tcp socket for irc), but that "keep the IRC updated" thread only has the redis connection.

I might have to change the IRC module to build the state when a user connects.


k1ck3r commented Feb 6, 2018

Thank you so much for this 'short' lesson; I actually wasn't sure how those connections get their up/down checks, but now it's clear. From what I'm reading, I'd guess you're using quite a powerful tool for such small tasks. The thing I can suggest is running a completely native IRC server (if you don't have any experience with that, I could do some stretching back to the good old days).

Whether to move the IRC module outside of Redis's scope is up to you, but having a native server underneath some app/hook/on-demand requests and queries is quite simple. Don't forget that you'd be able to make channels temporary/permanent and create users/groups/roles, which would again give some pretty good flexibility for the purpose. Just sharing thoughts, because (again) the idea behind multistreamer is amazing.

What's your vision on the network modules - could they be moved into some more flexible 'jail', where you can easily bring 'staging' ones up and down, something like a Public Test Env that can be adapted to work with the platform? The way I see it, if you can 'shard' or even 'orchestrate' them, that will bring some flexibility.

Two things I really can't get out of my mind:

  1. Get rid of ffmpeg - plausible? Compiling a custom 'thin' one? It's eating way too many resources for its purpose; it's just transporting one flow to another. By getting rid of it I don't mean now or tomorrow, but if this transport could be handled by something else... something more monitor-y and Java-y would extend the UX overall. Behind the curtains ffmpeg is fine, but I'm doing some nasty stress tests, spawning like 20-40 re-streamers to upstreams, and it's heavy.

  2. More caching. The YouTube upstream module is the real reporter here. At the DB level, temporary fields could be attached to a user session, where the module can collect and store some info. Since it's session based, the upstreamer won't update some fields in a way that breaks this apart. Overall, keeping some 'cached' user actions will prevent some crappy browser-dependent UX.

I highly appreciate your time and effort - I've said that many times.


jprjr commented Feb 25, 2018

Closing this issue since the original problem (twitch module crashing) has been fixed. I'll open a new issue for moving off redis.
