Possible metadata corruption running encode job with multiple queued background AtomicParsley jobs #11

Open
renegaudet opened this issue Apr 5, 2023 · 2 comments

@renegaudet

When an encode job runs and an AtomicParsley step is queued to run at the end of it, the encode job can apparently pick up the wrong metadata if multiple background AtomicParsley jobs are queued and running at the same time. Background AtomicParsley jobs run concurrently with jobs in the main job queue, and I have observed several instances where the encode job tagged its output with metadata belonging to one of those background jobs.

To reproduce: start an encode job and, near the end of encoding but before its AtomicParsley step runs, queue a number of background AtomicParsley jobs. It is not 100% reproducible, but it looks like part of the code is not multi-thread safe. The workaround is to avoid running background AtomicParsley jobs while encode jobs are running.
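To illustrate the kind of non-thread-safe pattern I suspect (this is a hypothetical sketch, not taken from the kmttg source; the class and field names are made up), the symptom would match any place where the "current" metadata is held in shared mutable state that a background AtomicParsley job can overwrite just before the encode job's own tagging step reads it:

```java
// Hypothetical illustration only -- NOT kmttg code.
public class MetadataRace {
    // Shared field: any job that prepares metadata writes its path here.
    private static volatile String currentMetadataFile;

    static void prepareMetadata(String jobName) {
        // Encode job records the metadata it intends to tag with...
        currentMetadataFile = jobName + ".metadata.txt";
        // ...but a background AtomicParsley job calling prepareMetadata()
        // for a different recording at this point would replace it.
    }

    static String metadataForTagging() {
        // May now point at the other recording's metadata.
        return currentMetadataFile;
    }
}
```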

lart2150 self-assigned this Aug 20, 2023
@lart2150
Owner

I looked over the code, and reading the metadata from the file should be thread-safe, since each metadata file is prefixed with the name of the file it belongs to. As far as I can tell AtomicParsley is also thread-safe even with --overWrite enabled, because it builds its temporary output name from the input filename with a random number inserted between the base name and the extension.
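Roughly, the naming conventions I'm describing look like this (the exact suffix and temp-name formats here are assumptions for illustration, not copied from the kmttg or AtomicParsley source):

```java
import java.util.concurrent.ThreadLocalRandom;

public class NamingSketch {
    // Metadata file keyed by the output file name, so two jobs working on
    // different recordings never share a metadata file.
    static String metadataFileFor(String outputFile) {
        return outputFile + ".metadata.txt";   // assumed suffix
    }

    // AtomicParsley-style temp name: a random number between the base name
    // and the extension, so concurrent --overWrite runs don't collide.
    static String tempNameFor(String inputFile) {
        int dot = inputFile.lastIndexOf('.');
        String base = dot < 0 ? inputFile : inputFile.substring(0, dot);
        String ext  = dot < 0 ? ""        : inputFile.substring(dot);
        return base + "-temp-" + ThreadLocalRandom.current().nextInt(1_000_000) + ext;
    }
}
```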

In the job manager I do see some code that limits AtomicParsley jobs started from the Files tab to one at a time: https://github.com/lart2150/kmttg/blob/master/src/com/tivo/kmttg/main/jobMonitor.java#L207

I can also build a debug copy of kmttg with some extra logging if you are up for trying to reproduce it. The simpler fix would be to always limit AtomicParsley jobs to one at a time, regardless of where they were started.
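A sketch of what that check might look like (not the actual jobMonitor.java code; the Job class and the "atomic" type string are placeholders): before launching a queued AtomicParsley job, check whether one is already running and, if so, leave it in the queue for the next monitor pass.

```java
import java.util.List;

public class AtomicJobGate {
    // Placeholder stand-in for kmttg's job record.
    static class Job { String type; }

    // Return true only if no AtomicParsley job is currently running.
    static boolean okToLaunchAtomic(List<Job> runningJobs) {
        for (Job j : runningJobs) {
            if ("atomic".equals(j.type)) {
                return false;   // one AtomicParsley job at a time
            }
        }
        return true;
    }
}
```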

@renegaudet
Author

I've been running with the job limit set to one and have not seen the issue occur again. The workaround is sufficient, and I don't want you to spend more time trying to debug it. Having said that, I'm willing to try to recreate the issue with additional debug code if you wish.
