
Fix: Error is raised when retrying failed tasks for OPENSPP #92

Conversation

@renceInbox (Collaborator) commented May 24, 2023

  • Remove the exception argument on retry. When a ProtocolError was raised and passed along via
    self.retry(exc=e, countdown=30)
    Celery raised a secondary error saying the exception should be initialized with parameters.
    • The first exception raised will be shown in the logs instead.
  • Added restart: always to the prod config for celery.
  • Refactored the celery task to update the status of the Batch record to error_merging when the task finally fails, instead of on the first retry (a sketch of the resulting pattern follows below).
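For context, a minimal sketch of the pattern this PR describes, assuming a hypothetical Celery app plus hypothetical do_merge and update_batch_status helpers (OPENSPPCeleryTask and max_retries = 3 come from the diff below):

    import logging

    from celery import Celery, Task

    logger = logging.getLogger(__name__)
    app = Celery("openspp")  # hypothetical app name


    class OPENSPPCeleryTask(Task):
        max_retries = 3

        def on_failure(self, exc, task_id, args, kwargs, einfo):
            # Celery calls this only once retries are exhausted, so the Batch
            # record flips to error_merging on final failure, not on the
            # first retry.
            batch_id = kwargs.get("batch_id", args[0] if args else None)
            update_batch_status(batch_id, "error_merging")  # hypothetical helper
            super().on_failure(exc, task_id, args, kwargs, einfo)


    @app.task(bind=True, base=OPENSPPCeleryTask)
    def merge_cards(self, batch_id: int) -> None:
        try:
            do_merge(batch_id)  # hypothetical work
        except Exception as e:
            logger.info(f"Error raised on client. {str(e)}")
            # Retry without exc=e: exceptions like xmlrpc ProtocolError
            # cannot be re-instantiated without their original constructor
            # arguments, so passing them to retry() triggers the secondary
            # "should be initialized with parameters" error; the original
            # exception is already in the logs above.
            raise self.retry(countdown=30)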

@renceInbox requested a review from @jeremi May 24, 2023 00:48
@renceInbox self-assigned this May 24, 2023
Signed-off-by: Rommel Terrence Juanillo <terrence@newlogic.com>
@renceInbox force-pushed the fix/205888-error-when-retrying-task branch from f183055 to 8b0dd82 May 24, 2023 00:56
@@ -12,6 +13,38 @@
 logger = logging.getLogger(__name__)


+class OPENSPPCeleryTask(Task):
+    max_retries = 3
Contributor commented:
@renceInbox this variable seems to be unused. If it is actually used, this setting should be provided through an environment variable.
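For example (CELERY_TASK_MAX_RETRIES is an assumed variable name, not something from this PR):

    import os

    from celery import Task


    class OPENSPPCeleryTask(Task):
        # Assumed env var name; falls back to the current hard-coded value.
        max_retries = int(os.environ.get("CELERY_TASK_MAX_RETRIES", "3"))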

@@ -109,14 +142,14 @@ def merge_cards(self, batch_id: int) -> None:
         )
     except Exception as e:  # noqa Let's catch all errors for debugging and retry
         logger.info(f"Error raised on client. {str(e)}")
-        self.retry(exc=e, countdown=30)
-        return
+        raise self.retry(exc=e, countdown=30)
Contributor commented:

@renceInbox please make the retry interval (countdown) a setting provided through an environment variable.
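A possible shape for that, with CELERY_RETRY_COUNTDOWN as an assumed setting name:

    import os

    # Assumed env var name; 30 s keeps the current behaviour as the default.
    RETRY_COUNTDOWN = int(os.environ.get("CELERY_RETRY_COUNTDOWN", "30"))

    # ...then inside the task's except block:
    #     raise self.retry(exc=e, countdown=RETRY_COUNTDOWN)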

             client.update_queue_batch_record(batch_id=batch_id, data=data)
-            self.retry(exc=e, countdown=30)
-            return
+            raise self.retry(countdown=30)
Contributor commented:
@renceInbox same comment as above – this should be an env setting.

@kneckinator merged commit 46da0e4 into fix/specify-ssl-context-for-xmlrpc May 24, 2023
4 checks passed
@kneckinator deleted the fix/205888-error-when-retrying-task branch May 24, 2023 01:54