How do we manage long-running background workflows that need to process repository data stored on S3?
Each workflow may consist of multiple tasks. There is no guarantee that every task will run on the same host or share the same temporary working directory for intermediate files.
Dynamic watermarks (e.g. with user details + IP + timestamp) added to documents upon download; this actually consists of several separate tasks run in a workflow pipeline.
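The pattern implied above, where no two tasks can rely on a shared local temp directory, can be sketched by having every task read its input from the object store and write its output back, so the next task may run on any host. This is a minimal illustration only: the `ObjectStore` class is a hypothetical in-memory stand-in for an S3 bucket (a real implementation would wrap boto3 `get_object`/`put_object`), and the task chain is simulated sequentially rather than dispatched to a real task queue.

```python
import os
import tempfile


class ObjectStore:
    """In-memory stand-in for an S3 bucket (hypothetical; a real
    implementation would wrap boto3 get_object / put_object)."""

    def __init__(self):
        self._blobs = {}

    def put(self, key, data: bytes):
        self._blobs[key] = data

    def get(self, key) -> bytes:
        return self._blobs[key]


def run_task(store, in_key, out_key, transform):
    """One workflow task: fetch input from the store, process it in a
    host-local temp dir, upload the result under a new key. Because
    every intermediate lives in the store, the next task in the
    pipeline may run on a different host."""
    with tempfile.TemporaryDirectory() as workdir:
        src = os.path.join(workdir, "input")
        with open(src, "wb") as f:
            f.write(store.get(in_key))
        with open(src, "rb") as f:
            result = transform(f.read())
        store.put(out_key, result)


# Simulated two-task pipeline over store keys (names are illustrative):
store = ObjectStore()
store.put("doc/original", b"report body")
run_task(store, "doc/original", "doc/processed", bytes.upper)
run_task(store, "doc/processed", "doc/final",
         lambda data: data + b" [watermark]")
print(store.get("doc/final"))  # b'REPORT BODY [watermark]'
```

The key design point is that tasks communicate only through store keys, never through the local filesystem, which is what makes host placement of each task irrelevant.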
mirekys changed the title from "Running heavy data-processing background jobs in production environment" to "Heavy data-processing background jobs in production environment" on Jan 20, 2021.
Use cases, e.g.:
Questions
Possible solutions