
Out of memory error in background job processing not handled #647

Closed
cposton opened this issue Mar 13, 2017 · 4 comments

Comments

@cposton
Contributor

cposton commented Mar 13, 2017

In BackgroundJobQueue.run_pending_job, out-of-memory errors are not caught and handled (since java.lang.OutOfMemoryError is not caught by a rescue of StandardError in Ruby), so the watchdog thread continues reporting that the job is running, but it will never complete because the actual job has been destroyed in memory. Restarting the backend will pick the job back up and likely result in the same error. If the job is cancelled, the service still needs to be restarted because jobs are no longer being processed.
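The failure mode can be sketched in plain Ruby. The real code runs under JRuby, where the escaping class is java.lang.OutOfMemoryError; here Ruby's own NoMemoryError stands in, since it likewise sits outside the StandardError hierarchy that a bare `rescue` catches. The method name is hypothetical, not the actual ArchivesSpace code:

```ruby
# Plain-Ruby analogue of the bug: a bare `rescue` is equivalent to
# `rescue StandardError`, so anything outside that hierarchy escapes.
def run_pending_job_sketch(&job)
  job.call
  :completed
rescue => e   # only StandardError and subclasses land here
  :failed     # ordinary job errors are caught and reported
end

# An ordinary error is caught and the job is marked failed:
run_pending_job_sketch { raise ArgumentError, "bad input" }  # => :failed

# NoMemoryError is NOT a StandardError, so it escapes the handler,
# killing the worker while the watchdog still reports the job running:
begin
  run_pending_job_sketch { raise NoMemoryError }
rescue NoMemoryError
  # reached: the job's own rescue never saw the error
end
```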

In my testing, I was able to add handling for java.lang.OutOfMemoryError specifically (actually I rescued java.lang.VirtualMachineError instead, but that was as far as I was comfortable going at the time), which allowed us to properly fail the job and continue processing.
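A minimal sketch of that approach, again using plain Ruby's NoMemoryError as a stand-in (under JRuby the rescue clause would name java.lang.VirtualMachineError or java.lang.OutOfMemoryError; this method name is hypothetical):

```ruby
# Sketch of the proposed handling: rescue the non-StandardError class
# explicitly so the job can be failed and the queue keeps processing.
def run_pending_job_with_handling(&job)
  job.call
  :completed
rescue StandardError, NoMemoryError => e
  # The broader rescue lets us mark the job failed instead of leaving
  # the watchdog reporting a job that will never finish.
  :failed
end

run_pending_job_with_handling { raise NoMemoryError }  # => :failed
```

Whether swallowing a VM-level error like this is safe is exactly the open question below: after an OutOfMemoryError the process may be in an inconsistent state, so some would argue it should crash and restart instead.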

That being said, there is a lot of discussion concerning whether out-of-memory errors should be "handled" at all, so I didn't want to submit a PR without further discussion.

@lmcglohon
Contributor

Agreed - not sure that we should be "handling" out-of-memory errors programmatically. Going to leave this open for further discussion.

@lmcglohon
Contributor

@archivesspace/archivesspace-core-committers Let's discuss this on Monday, Aug. 14th.

@lmcglohon
Contributor

@cposton I would like to close this issue. Our philosophy is that out-of-memory errors should not be handled by the ArchivesSpace application but should instead be managed by the institution installing ArchivesSpace. That being said, are you still having issues with this?

@cposton
Contributor Author

cposton commented Jan 3, 2018

I understand and agree with the stated philosophy. We have since modified memory allocations and have not had to deal with the issue since, as far as I am aware.

@cposton cposton closed this as completed Jan 3, 2018