keep tasks in queue when a dependent service is down #25
Comments
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
Shouldn't be that hard with Celery's autoretry; marking as good-first-issue.
Actually, this could even be extended to letting the user know that some infrastructure is slow or borked, and that's the reason packit is trying to build the RPM for 5 hours.
I'd even extend this to not rely on fedora-messaging at all: once we submit a build, we have a new celery task, which would check the build periodically so that we wouldn't need to count on f-m. |
If possible, we shouldn't spam the PR with identical comments - we should only comment once. Franta says that we should post only a single comment after we try X times and fail to proceed. |
Let's say COPR is down and all the API calls are failing: if that happens, we should keep the tasks in a queue and process them once they can succeed.
Dominika says Celery has an autoretry feature which could solve this: https://www.distributedpython.com/2018/09/04/error-handling-retry/