rework conveyor heartbeat handling #5252
Comments
rcarpa added a commit to rcarpa/rucio that referenced this issue on Mar 1, 2022
rcarpa added a commit to rcarpa/rucio that referenced this issue on Mar 1, 2022
rcarpa added a commit to rcarpa/rucio that referenced this issue on Mar 8, 2022
bari12 added a commit that referenced this issue on Mar 14, 2022
…ndling Transfers: rework heartbeat handling. Closes #5252
bari12 pushed a commit that referenced this issue on Mar 14, 2022
Motivation
Right now heartbeats are updated in the database at each iteration (each activity for multi-activity daemons). This forces us to use a big "older_than" value to avoid the race condition in which a big bulk size combined with slow access to the transfertool makes heartbeats expire.
At the same time, this puts strain on the database with unneeded heartbeat updates when the daemons are lightly loaded.
Modification
An idea for improvement is to perform heartbeat_handler.live() more frequently (at each submission, for example), but modify live() to only perform a database update if enough time has passed since the last update. The frequency of updates and older_than should probably be defined as a function of sleep_time.
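The rate-limiting idea above can be sketched roughly as follows. This is an illustrative sketch, not Rucio's actual implementation: the class name, the `db_update_fn` callable, and the `update_interval` parameter are all hypothetical stand-ins for the real heartbeat machinery.

```python
import time


class RateLimitedHeartbeat:
    """Sketch of a heartbeat handler whose live() can be called very often
    (e.g. once per submission) but only touches the database when enough
    time has passed since the last update. Names are hypothetical."""

    def __init__(self, update_interval, db_update_fn):
        self.update_interval = update_interval  # minimum seconds between real DB updates
        self.db_update_fn = db_update_fn        # hypothetical callable doing the actual DB write
        self.last_update = 0.0                  # monotonic timestamp of last DB update

    def live(self):
        now = time.monotonic()
        if now - self.last_update >= self.update_interval:
            self.db_update_fn()
            self.last_update = now
            return True   # database update performed
        return False      # skipped: too soon since the last update
```

With this shape, `update_interval` (and the corresponding `older_than` expiry) could indeed be derived from the daemon's `sleep_time`, as the issue suggests, so that cheap `live()` calls stay frequent while DB writes stay bounded.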