
[Request] Job queue for timelapse creation to not have new job erase timelapse creation of old one #318

Open
wdl1908 opened this issue Nov 25, 2013 · 9 comments

@wdl1908

commented Nov 25, 2013

Rename images from temp* to processing* before calling ffmpeg.

I just had all the images erased because I started a new print and did not see that the encoding was still busy.

OctoPrint is running on a Raspberry Pi.

@foosel

Owner

commented Nov 25, 2013

That would not solve the issue at hand. What if you managed to finish the second print while the processing of the first one was still running, and then started a third print? Simply creating a temporary folder per timelapse sadly isn't a silver bullet here either, since the goal is also to minimize concurrent render jobs... I have to think about that.

Also: Could you try to make the bug report titles a bit more descriptive? Something like "Timelapse still rendering erased by new print" would have been great here :)

@wdl1908

Author

commented Nov 25, 2013

You create the name of the mpg from filename + datetime stamp, right? Maybe rename the temp files that way. That should solve the concurrency problem: you could then start a new print of the same file and it would not interfere with the old one. Or don't rename them, but move them to a directory named filename + datetime stamp.

PS: title adapted.
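The rename idea above can be sketched in a few lines. This is a minimal illustration only, not OctoPrint's actual implementation; `snapshot_prefix` is a hypothetical helper name.

```python
import datetime


def snapshot_prefix(gcode_name, now=None):
    """Build a per-print snapshot prefix from the print's file name plus a
    datetime stamp, so snapshots of a new print never collide with those of
    an earlier, still-unrendered timelapse. Hypothetical helper for
    illustration; `now` is injectable for testing."""
    now = now or datetime.datetime.now()
    stamp = now.strftime("%Y%m%d%H%M%S")
    return "{}-{}".format(gcode_name, stamp)
```

Two prints of the same file then produce distinct prefixes (e.g. `benchy-20131125120000` vs. `benchy-20131125143000`), so neither run's frames overwrite the other's.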

@foosel

Owner

commented Nov 25, 2013

That still wouldn't solve the problem that, depending on how long a timelapse takes to render and how many print jobs (each with its own timelapse) you can fit into that timeframe, there still might be far more than one rendering process running at a time, which -- at least on a Pi -- can really send performance to hell.

@wdl1908

Author

commented Nov 25, 2013

Some queueing system could solve the concurrent-encoder problem. Probably too much work, though, for a problem that will only happen once in so many prints?

But renaming the files, or moving them to a directory named filename + timestamp, should already solve the erasure problem.

@eboston


commented Feb 1, 2015

I see that nothing new has been posted about this issue for a while, and as a new user I ran into the problem myself. I would like to see it resolved. What I had in mind: OctoPrint could assign a job ID (a simple number that is incremented each time a print is started) and create a subdirectory under the temp directory using that ID. The encoder would process all the files in that directory and, when done, delete the files and move on to the next directory. Would something like that be possible?
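The JobID-plus-subdirectory scheme proposed here can be sketched as follows. This is a hypothetical illustration, not OctoPrint code; `new_job_dir` and the `job_NNNNNN` naming are assumptions.

```python
import itertools
import os

# Monotonically increasing job id, incremented each time a print starts.
_job_ids = itertools.count(1)


def new_job_dir(base):
    """Allocate the next job id and create a dedicated snapshot directory
    for it under `base`. Each print's frames then live in their own folder,
    and the encoder can process (and afterwards delete) one directory at a
    time without touching the frames of other prints."""
    job_id = next(_job_ids)
    path = os.path.join(base, "job_{:06d}".format(job_id))
    os.makedirs(path)
    return job_id, path
```

Note this only isolates the snapshots per print; it does not by itself limit how many encoder processes run concurrently, which was foosel's remaining concern.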

@hungerpirat

Contributor

commented Feb 1, 2015

@eboston what about providing a pull request? That would be the fastest possible way to solve this issue ;)

@eboston


commented Feb 1, 2015

@hungerpirat because I wasn't planning on implementing the change myself. I saw it had been discussed but no resolution was ever reached. I wanted to see whether there had been a final decision not to implement it, or whether it was still at the feasibility-assessment stage.

@foosel

Owner

commented Aug 8, 2015

We need a queue similar to how gcode analysis is handled today (paused/aborted while printing) and need to turn timelapse rendering into jobs, maybe with a warning that some file is still being rendered.
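The queue idea above can be sketched with a single worker thread, which guarantees at most one render job runs at a time (the performance concern on a Pi). A minimal sketch, assuming renders are submitted as plain callables; `RenderQueue` is a hypothetical name, not OctoPrint's actual class.

```python
import queue
import threading


class RenderQueue:
    """Serialize timelapse render jobs: a single worker thread drains the
    queue, so at most one render (e.g. one ffmpeg invocation) runs at a
    time and later jobs simply wait their turn."""

    def __init__(self):
        self._jobs = queue.Queue()
        self._worker = threading.Thread(target=self._run, daemon=True)
        self._worker.start()

    def submit(self, render_fn):
        """Enqueue a zero-argument callable that performs one render."""
        self._jobs.put(render_fn)

    def _run(self):
        while True:
            job = self._jobs.get()
            try:
                job()  # e.g. a closure that invokes ffmpeg for one timelapse
            finally:
                self._jobs.task_done()

    def join(self):
        """Block until all submitted render jobs have finished."""
        self._jobs.join()
```

Because there is only one worker, jobs complete strictly in submission order; a real implementation would add error reporting and the "still rendering" warning mentioned above.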

@foosel foosel changed the title devel: Timelapse still rendering erased by new print [Request] Job queue for timelapse creation to not have new job erase timelapse creation of old one Aug 8, 2015

foosel added a commit that referenced this issue Feb 2, 2016

Big overhaul of timelapse handling
  * persistent notification on ongoing timelapse render job (#485)
  * non-colliding timelapse snapshot name generation to not delete
    existing snapshots when a new print starts and the timelapse has not
    yet been rendered, also only delete snapshots if timelapse rendered (#318)
  * list of unrendered timelapses, with option to delete files
    or to render timelapse
@foosel

Owner

commented Feb 2, 2016

We still need a queue for proper handling of timelapse render jobs, especially to prevent them from running concurrently with print jobs, but at least new prints no longer kill the timelapse data of former ones. And even if a timelapse render fails, the files are not deleted until a week later, and a new list in the UI allows you to re-render or delete them.

Example:

[screenshot: the new list of unrendered timelapses in the UI]

That change will be included in version 1.2.9 and is currently available on the maintenance and devel branches.
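The delete-after-a-week behavior described above can be sketched as a simple age-based sweep over the snapshot folder. This is a hypothetical illustration of the policy, not OctoPrint's actual cleanup code; `prune_unrendered` is an assumed name.

```python
import os
import time

ONE_WEEK = 7 * 24 * 60 * 60  # seconds


def prune_unrendered(folder, now=None, max_age=ONE_WEEK):
    """Delete leftover snapshot files whose timelapse was never rendered,
    but only once they are older than `max_age` (a week by default), so a
    failed render can still be retried from the UI in the meantime.
    Hypothetical helper; `now` is injectable for testing."""
    now = time.time() if now is None else now
    removed = []
    for name in sorted(os.listdir(folder)):
        path = os.path.join(folder, name)
        if os.path.isfile(path) and now - os.path.getmtime(path) > max_age:
            os.remove(path)
            removed.append(name)
    return removed
```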
