Use ffmpeg scene detection to improve chunked encoding #619
Comments
Wrong label...
This is very similar to what I've been pondering these days. I was thinking about composing a StaxRip-embedded PowerShell script that generates an I-frame index list. I found an obvious downside, overly long processing time, in using […]. But using the […]
What is difficult, though, is how we can put it to work in StaxRip using the generated […]. That said, another big hurdle is in place: currently StaxRip uses frame-number info (the total frame count divided evenly) and puts it directly into each encoder's parameters for chunk encoding. But in order to adopt this new tool, an overhaul of the code is inevitable, since every chunk encode would have to be done via […]. Last but not least, there's a critical problem with this […]
I don't know if PySceneDetect is free of this kind of issue, but if not, then it's not reliable to use for general purposes. That's a big hurdle. 🤔
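The I-frame index generation mentioned above can be sketched quickly with ffprobe: `-skip_frame nokey` makes the decoder skip everything except keyframes, which is far cheaper than a full scene-detection pass. This is a sketch, not StaxRip's actual script; the function names are made up, and the exact ffprobe flags may need adjusting for a given build.

```python
import subprocess

def keyframe_times(path):
    """Return presentation timestamps (seconds) of video keyframes.

    -skip_frame nokey tells the decoder to process only keyframes,
    so this runs much faster than scanning every frame.
    (Hypothetical helper; command shape is an assumption.)
    """
    out = subprocess.run(
        ["ffprobe", "-v", "error", "-select_streams", "v:0",
         "-skip_frame", "nokey",
         "-show_entries", "frame=pts_time,pict_type",
         "-of", "csv=p=0", path],
        capture_output=True, text=True, check=True).stdout
    return parse_keyframe_csv(out)

def parse_keyframe_csv(text):
    """Parse ffprobe CSV lines of the form '0.000000,I' into float seconds."""
    times = []
    for line in text.splitlines():
        parts = line.strip().split(",")
        if len(parts) >= 2 and parts[1] == "I":
            times.append(float(parts[0]))
    return times
```

The parsing step is separate from the ffprobe call so the index list could also be cached to disk and reused.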
I wonder if the index file created by ffms2 and L-Smash-Works contains info about I-frames (I guess so) and whether the format of the index file is easy to understand. It could be useful not only for chunk encoding, but also for cutting without re-encoding.
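On the cutting-without-re-encoding point: a stream-copy cut can only start cleanly on a keyframe, so a requested cut time has to be snapped to the nearest preceding keyframe. A minimal sketch, assuming the keyframe timestamp list has already been obtained (the helper name is hypothetical):

```python
import bisect

def snap_cut_to_keyframe(keyframes, t):
    """Return the largest keyframe timestamp <= t.

    Stream-copy cuts must begin on a keyframe; `keyframes` is a sorted
    list of keyframe times in seconds (hypothetical helper, a sketch).
    """
    i = bisect.bisect_right(keyframes, t) - 1
    return keyframes[max(i, 0)]
```
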
@stax76, that’s right. I’m wondering if the authors are willing to change the format. Hmm...
Probably not. VapourSynth is modern and powerful and generally has rich metadata support, so a source filter could provide this info so that it can be accessed through the VapourSynth API. Maybe it's already supported, or it could be requested from ffms2, L-SMASH and DGDecNV. But reading it from the index file would be significantly faster, since it would not require requesting all frames; maybe the index format isn't so complex.
From my experience... Don't try to split and merge open-GOP HEVC streams; it will produce bad results.
Yeah, especially in stream copy. Since chunk encoding also involves stream-copy cutting (either by the encoder itself at frame indexes, or via […]). So at this point another issue comes up: can we extract only the IDR frames, which have good […]?
On second thought, frame-index cutting by the encoder may not be a problem. OTOH, cutting by […]. Therefore, it seems that timecode-based cutting for chunk encoding raises another issue in this regard. Hmm...
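For constant-frame-rate streams, mapping a frame index to an exact timecode is trivial, which is why frame-index cutting is unproblematic there; the trouble with timecode-based cutting arises with VFR material, where per-frame timestamps from the index file are needed instead. A sketch of the CFR case, using exact fractions to avoid float drift (hypothetical helper):

```python
from fractions import Fraction

def frame_to_timecode(frame, fps=Fraction(24000, 1001)):
    """Exact start time (seconds, as a Fraction) of `frame` in a CFR stream.

    Hypothetical helper: only valid for constant frame rate; VFR streams
    must read per-frame timestamps from the source index instead.
    """
    return frame / fps
```

Using `Fraction` for NTSC-style rates like 24000/1001 keeps boundaries exact, whereas repeated float arithmetic can drift off by a frame over long durations.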
Any updates on this? Presently I use a roundabout way of chunking at scene changes.
I do wonder if this could be processed automatically?
By splitting the frames evenly between chunks, they will start/end in the middle of a scene, lowering compression efficiency and/or quality. I propose adding functionality to detect scene changes and split the chunks based on that. See here or here on how to do this.
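The proposal can be sketched as follows: keep the even split as a starting point, then snap each chunk boundary to the nearest detected scene change. This is an illustrative sketch with made-up helper names, assuming scene-change frame numbers have already been detected (e.g. via ffmpeg's scene detection):

```python
def chunk_boundaries(scene_frames, total_frames, n_chunks):
    """Start frames for n_chunks chunks, with each evenly spaced boundary
    snapped to the nearest detected scene-change frame.

    Hypothetical helper (a sketch of the proposed behaviour); boundaries
    may coincide if scene changes are sparse.
    """
    bounds = [0]
    for k in range(1, n_chunks):
        target = k * total_frames // n_chunks  # even-split boundary
        # Snap to the closest scene change; fall back to the even split
        # if no scene changes were detected at all.
        snapped = min(scene_frames, key=lambda f: abs(f - target)) if scene_frames else target
        bounds.append(snapped)
    return bounds
```
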
For aomenc, the first-pass stats file could be parsed to get the keyframes with 100% accuracy, in theory improving quality and parallelism at the same time (by not using multithreading options and encoding in chunks instead). Av1an does this.
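Once keyframe placement is known, however it was recovered (the binary layout of aomenc's first-pass stats file is not shown here), turning it into keyframe-aligned chunk ranges is straightforward. A sketch with assumed names:

```python
def keyframe_chunks(keyframes, total_frames):
    """Half-open (start, end) frame ranges, one chunk per keyframe interval.

    `keyframes` is a list of frame numbers where keyframes land
    (hypothetical helper; how the keyframe list is obtained is out of scope).
    """
    starts = sorted(set(keyframes)) or [0]
    ends = starts[1:] + [total_frames]
    return list(zip(starts, ends))
```

Because every chunk starts exactly on a keyframe, the per-chunk encodes can later be concatenated without re-encoding across boundaries.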