[Request] Job queue for timelapse creation to not have new job erase timelapse creation of old one #318
That would not solve the issue at hand. What if you managed to finish the second print while the processing of the first one was still running, and then started a third print? Simply creating a temporary folder per timelapse or something like that sadly isn't the silver bullet here either, as the goal would also be to minimize concurrent jobs... I have to think about that. Also: could you try to make the bug report titles a bit more descriptive? Something like "Timelapse still rendering erased by new print" would have been great here :)
You create the name of the mpg with filename + datetime stamp, right? Maybe rename the temp files that way. That should solve the concurrency problem: then you could start a new print of the same file and it would not interfere with the old one. Or don't rename them, but move them to a directory named filename + datetime stamp. PS: title adapted.
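The rename/move idea above can be sketched roughly like this: a per-print capture directory named after the sliced file plus a datetime stamp, so a new print of the same file lands in a fresh directory instead of clobbering snapshots that are still waiting to be rendered. The function name and directory layout here are illustrative assumptions, not OctoPrint's actual implementation.

```python
import os
import time


def unique_capture_dir(base_dir, gcode_name):
    """Create a per-print snapshot directory named after the gcode
    file plus a datetime stamp, so a new print of the same file
    never overwrites snapshots still waiting to be rendered.
    (Sketch only; back-to-back prints within one second would need
    a finer-grained stamp or a counter.)"""
    stamp = time.strftime("%Y%m%d%H%M%S")
    stem = os.path.splitext(os.path.basename(gcode_name))[0]
    path = os.path.join(base_dir, "{}-{}".format(stem, stamp))
    os.makedirs(path)  # raises if the directory already exists
    return path
```

The same naming scheme could equally be applied to the rendered movie file itself.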
That still wouldn't solve the problem that, depending on how long a timelapse takes to render and how many print jobs, each with its own timelapse, you can fit into that timeframe, there might still be way more than one rendering process running at a time, which -- at least on a Pi -- can really send performance to hell.
Some queueing system could solve the encoder concurrency problem. It's probably too much work to solve a problem that will only happen once in who knows how many prints, but the rename, or the move to a directory named filename + timestamp, should already solve the erasure problem.
I see that nothing new has been posted about this issue for a while, and as a new user I ran into the problem myself. I would like to see it resolved. What I had in mind is that OctoPrint could assign a job ID (just a simple number that is incremented each time a print is started) and create a subdirectory under the temp directory using that ID. The encoder would process all the files in one directory and, when done, delete the files and move on to the next directory. Would something like that be possible?
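The job-ID proposal above might look something like the following sketch: an incrementing ID per print, one snapshot subdirectory per ID, and a helper the encoder can use to walk pending directories in order. Class and directory names are hypothetical, chosen only to illustrate the idea.

```python
import os


class CaptureDirs:
    """Hypothetical sketch of the job-ID idea: each print gets an
    incrementing ID and its own snapshot subdirectory; the encoder
    later processes the directories in ID order."""

    def __init__(self, root):
        self.root = root
        self._next_id = 0

    def new_job_dir(self):
        """Allocate the next job ID and create its capture directory."""
        self._next_id += 1
        path = os.path.join(self.root, "job-{:06d}".format(self._next_id))
        os.makedirs(path)
        return path

    def pending_jobs(self):
        """Directories sorted by name, which (with zero-padded IDs)
        is the same as sorting by job ID."""
        return sorted(
            os.path.join(self.root, d)
            for d in os.listdir(self.root)
            if d.startswith("job-")
        )
```

After rendering a directory's frames, the encoder would delete that directory and pick the next entry from `pending_jobs()`.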
@eboston What about providing a pull request? That would be the fastest possible way to resolve this issue ;)
@hungerpirat Because I wasn't planning on implementing the change myself. I saw it had been discussed but never resolved. I wanted to see whether a final decision had been made not to implement it, or whether it was still in the "determining feasibility" state.
We need a queue similar to how gcode analysis is handled today (paused/aborted while printing) and need to turn timelapse rendering into jobs, maybe with a warning that some file is still being rendered.
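A render-job queue of that shape could be sketched as below: a single worker thread drains jobs one at a time (solving the concurrency problem) and blocks on an event while a print is running. This is loosely modeled on the described gcode-analysis behavior; all names here are assumptions for illustration, not OctoPrint's actual API.

```python
import queue
import threading


class RenderQueue:
    """Sketch of a timelapse render queue that runs at most one
    render at a time and can be paused for the duration of a print."""

    def __init__(self):
        self._jobs = queue.Queue()
        self._resume = threading.Event()
        self._resume.set()  # no print running -> rendering allowed
        worker = threading.Thread(target=self._run, daemon=True)
        worker.start()

    def enqueue(self, render_fn):
        """Add a zero-argument callable that renders one timelapse."""
        self._jobs.put(render_fn)

    def pause(self):
        """Call when a print starts: finish nothing new until resume()."""
        self._resume.clear()

    def resume(self):
        """Call when the print finishes."""
        self._resume.set()

    def join(self):
        """Block until all queued renders have completed."""
        self._jobs.join()

    def _run(self):
        while True:
            job = self._jobs.get()  # one render at a time, FIFO
            self._resume.wait()     # block while a print is active
            job()
            self._jobs.task_done()
```

A "still rendering" warning in the UI would then just report whether the queue is non-empty.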
* persistent notification on an ongoing timelapse render job (#485)
* non-colliding timelapse snapshot name generation, so that existing snapshots are not deleted when a new print starts while the previous timelapse has not yet been rendered; snapshots are only deleted once the timelapse has rendered (#318)
* list of unrendered timelapses, with the option to delete the files or to render the timelapse
We still need a queue for proper handling of timelapse render jobs, especially to prevent them from running concurrently with print jobs, but at least new prints no longer kill the timelapse data from former ones. And even if a timelapse render fails, the files are not deleted until after a week, and a new list in the UI allows you to re-render or delete them. Example: That change will be included in version 1.2.9 and is currently available on the
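The one-week retention rule described above could be sketched like this: snapshot sets whose timelapse failed to render are kept for a week before cleanup, leaving time to re-render them from the UI. This is illustrative only, not OctoPrint's actual code; the directory layout and function name are assumptions.

```python
import os
import time

WEEK_SECONDS = 7 * 24 * 60 * 60


def cleanup_unrendered(capture_root, now=None):
    """Delete snapshot directories whose last modification is more
    than a week old, returning the names of the removed sets.
    Newer sets are left alone so they can still be re-rendered."""
    now = time.time() if now is None else now
    removed = []
    for name in os.listdir(capture_root):
        path = os.path.join(capture_root, name)
        if os.path.isdir(path) and now - os.path.getmtime(path) > WEEK_SECONDS:
            for f in os.listdir(path):
                os.remove(os.path.join(path, f))
            os.rmdir(path)
            removed.append(name)
    return removed
```

Running this periodically (or on startup) keeps the capture area from growing without bound while still honoring the grace period.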
Rename images from temp* to processing* before calling ffmpeg.
I just had all the images erased because I started a new print and did not see that the encoding was still busy.
OctoPrint is running on a Raspberry Pi.
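The rename suggested in this report could be sketched as follows: promote the temp* frames to processing* names in one pass before the encoder is invoked, so a new print writing fresh temp* files cannot erase the set that is still being encoded. The file patterns and function name are illustrative assumptions.

```python
import os


def promote_frames(capture_dir):
    """Rename temp* snapshot files to processing* so a newly started
    print, which writes fresh temp* files, cannot clobber the frames
    the encoder is still working on. Returns the new file names."""
    renamed = []
    for name in sorted(os.listdir(capture_dir)):
        if name.startswith("temp"):
            dst_name = "processing" + name[len("temp"):]
            os.rename(
                os.path.join(capture_dir, name),
                os.path.join(capture_dir, dst_name),
            )
            renamed.append(dst_name)
    return renamed
```

ffmpeg would then be pointed at the processing* pattern instead of temp*.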