Use a queue for offline jobs #1240
Not spawning a new Rails app each time a mail comes in could improve resource usage on smaller hosts.
Rails 4.2 (#2968) has a built-in framework for integrating background jobs.
Rails 6.1 (#6011) has the ability to destroy associated records asynchronously, which would help prevent timeouts when managing WDTK.
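For reference, the Rails 6.1 feature mentioned above is the `dependent: :destroy_async` association option. A hedged sketch of how it might look here (the association shown is illustrative, not a proposal for which Alaveteli associations should use it):

```ruby
class InfoRequest < ApplicationRecord
  # With :destroy_async, Rails enqueues an
  # ActiveRecord::DestroyAssociationAsyncJob instead of deleting the
  # associated records inline, so the web request returns quickly and
  # deletion happens in the background.
  has_many :info_request_events, dependent: :destroy_async
end
```

Note this requires a working ActiveJob backend, which is exactly what this ticket is about.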
To help third-party installs, @garethrees wanted to look at using a Postgres-backed queue rather than Sidekiq/Redis. This seems like a good option: https://github.com/bensheldon/good_job
https://github.com/que-rb/que is another option I'd seen that seems to be well maintained.
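Either gem plugs into ActiveJob as a queue adapter, so switching later is cheap. A minimal sketch of the wiring, assuming good_job is chosen (the exact setup should follow the gem's README; `Alaveteli` as the application module name is an assumption):

```ruby
# config/application.rb — sketch only, not the actual Alaveteli config.
module Alaveteli
  class Application < Rails::Application
    # good_job registers a :good_job ActiveJob adapter;
    # que similarly registers :que. Jobs written against the
    # ActiveJob API don't need to change if the adapter does.
    config.active_job.queue_adapter = :good_job
  end
end
```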
Our initial stab at this should be quite basic, mainly with the intention of getting the job infrastructure in place and running, with an initial concrete implementation of an ActiveJob so that we actually gain some benefit from the setup work. I think we have four main tasks.
2 and 3 don't need too much discussion at this point, so I'll focus on 1 and 4. To reiterate, the main focus is getting the ActiveJob infrastructure in place and running so that over time we can more easily add jobs in future; we're not going to get everything migrated to a job in one shot.

1. Decide on a QueueAdapter

As previously mentioned, I'm keen to avoid additional infrastructure costs for re-users who don't have in-house sysadmins. Sidekiq requires Redis, which adds another thing to configure, secure, run and maintain. It's also worth noting that Sidekiq does not process the queue serially. Is this a problem? IDK. Here's a situation:
Would it be bad if the order of indexing ended up being?
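To make the ordering concern concrete, here's a plain-Ruby illustration (no job library; this is not how good_job or que actually schedule work): a single worker thread popping from a FIFO queue processes jobs strictly in enqueue order, whereas several concurrent workers (as in Sidekiq's default setup) give no completion-order guarantee.

```ruby
# One worker thread + one FIFO queue = serial processing:
# "index incoming" always finishes before "index outgoing".
queue = Thread::Queue.new
results = []

worker = Thread.new do
  while (job = queue.pop)   # a nil sentinel ends the loop
    results << job.call
  end
end

# The job names are hypothetical stand-ins for indexing work.
queue << -> { "index incoming message" }
queue << -> { "index outgoing message" }
queue << nil
worker.join

results # => ["index incoming message", "index outgoing message"]
```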
Do either of the postgres-based backends ( … ) process the queue serially?

4. Add a concrete implementation of ActiveJob

Association Destruction

An easy-to-implement option would be to add … .

Xapian Indexing

The ideal improvement would be to run … . We already have … . I think the difficulty will be in how the job actually gets reindexed. All this logic is currently wrapped up in … .

Expiring Events

This is all sounding quite complicated, so perhaps instead we move … :

```ruby
class InfoRequest
  def reindex_request_events
    InfoRequest::ReindexRequestEventsJob.perform_later(self)
  end
end

# The job must inherit from ApplicationJob (or ActiveJob::Base)
# for perform_later to work.
class InfoRequest::ReindexRequestEventsJob < ApplicationJob
  def perform(info_request)
    info_request.info_request_events.find_each(&:xapian_mark_needs_index)
  end
end
```
Fixed by #7575. Moving specific functionality into background jobs can be ticketed independently.
Possible job types to include:
Possible advantages: