[Bug] Error in octoprint.log from "delete_old_unrendered_timelapses" in timelapse.py #1326
What were you doing?
Rendering a timelapse. Repeatable (happens with every timelapse render). Render completes normally, timelapse is created successfully, and all snapshot files are deleted from the timelapse\tmp folder.
However, at the end of every timelapse render, errors are thrown (one for each snapshot file) into octoprint.log. The error message implicates the "delete_old_unrendered_timelapses" routine.
Example error below. The same sequence is logged for every snapshot file (i.e., if there are 500 snapshot files, there will be 500 of these errors logged).
What did you expect to happen?
I expected the render to complete without any errors from the delete_old_unrendered_timelapses routine. Please note that there are no "old" timelapse files: the timelapse\tmp folder was empty prior to the print, snapshots were created during the print, and the snapshots were rendered and deleted successfully at the end of the print.
What happened instead?
For each snapshot file in the current render, a "file not found" error was logged in octoprint.log.
Branch & Commit or Version of OctoPrint
OctoPrint 1.2.11 (master branch)
Printer model & used firmware incl. version
Not printer or firmware specific. (Lulzbot Mini, Marlin 1.0.2)
Browser and Version of Browser, Operating System running Browser
Not a browser issue. (Chrome 50.0.2661.94 on Win7x64)
Link to octoprint.log
Link to contents of terminal tab or serial.log
Screenshot(s) showing the problem:
Not applicable. See logs.
I have read the FAQ.
So... I haven't been able to reproduce this at all myself, but looking closely at the described symptoms I got an idea of what might be causing this and fixed that.
I think the issue was that the frontend got signaled that rendering was done before the images of the just-rendered timelapse were deleted. That signal prompted a new fetch of the timelapse list, including unrendered ones, which in turn triggered the cleanup task. Depending on the timing, the cleanup task could then still be collecting information about the unrendered (but actually just-rendered) timelapse of the current print job while, in another thread, that same timelapse was being deleted, leading to attempts to look at files that had since vanished. At least that's behaviour I can definitely confirm is possible from looking at the code.
I now signal the frontend AFTER cleaning up the timelapse, have added a lock to synchronize the cleanup task and the actual deletion code (so that one can't run while the other is already active), and on top of that have made errors during deletion/cleanup only show up in the log in debug mode.
That should hopefully have solved this.