Is your idea related to a problem? Please describe.
In its current state, the crawler step catches the failure when the crawler is already running and still completes with a success flag. In many customer scenarios that involve a crawler, marking the step successful when the crawler was already running (i.e., when the Step Functions task actually failed) does not guarantee that the downstream process reads the most recent data from the Data Catalog. And although we have alarms on the state machine, they will rarely fire for the crawler task because we succeed it by design.
Describe the solution you'd like
We can add retries with backoff to the crawler task in the state machine so that it attempts to start the crawler multiple times before failing. We should also let the task fail if the crawler still throws an exception after all retries, since that allows CloudWatch to capture the right metrics for the alarm. A sketch of this idea follows.
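As a rough sketch of what this could look like (assuming the crawler is started through the Step Functions AWS SDK service integration; the construct IDs and the crawler name below are placeholders, not this project's actual code), a CDK definition might be:

```typescript
import { Duration } from "aws-cdk-lib";
import * as tasks from "aws-cdk-lib/aws-stepfunctions-tasks";
import { Construct } from "constructs";

// Sketch only: starts the crawler via the Step Functions AWS SDK
// integration. "my-data-catalog-crawler" is a hypothetical name.
export function startCrawlerTask(scope: Construct): tasks.CallAwsService {
  const startCrawler = new tasks.CallAwsService(scope, "StartCrawler", {
    service: "glue",
    action: "startCrawler",
    parameters: { Name: "my-data-catalog-crawler" },
    iamResources: ["*"],
  });

  // Retry with exponential backoff while the crawler is already running,
  // instead of catching the error and reporting success.
  startCrawler.addRetry({
    errors: ["Glue.CrawlerRunningException"],
    interval: Duration.seconds(30),
    maxAttempts: 5,
    backoffRate: 2,
  });

  // Deliberately no Catch for CrawlerRunningException: once retries are
  // exhausted the task fails the execution, so ExecutionsFailed increments
  // and the existing CloudWatch alarm on the state machine can fire.
  return startCrawler;
}
```

This compiles down to a standard `Retry` block in the state's Amazon States Language definition, so the same shape could be applied directly in the ASL JSON if the state machine is not defined via CDK.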