Introduce latency buckets for S3 #69783
Conversation
This is the first iteration to improve the situation with S3 timeouts. I'm not sure about the naming, tbh. Also, I should probably use column compression for … I split the changes into 3 commits; hopefully it's easier to review this way.
This is an automated comment for commit 21f43dd with a description of existing statuses. It's updated for the latest CI run. ❌ Click here to open a full report in a separate page
Successful checks
Force-pushed from 1ab412a to f4b60d4
I have a question.
Dear @tavplubix, @CheSema, this PR hasn't been updated for a while. You will be unassigned. Will you continue working on it? If so, please feel free to reassign yourself.
Dear @CheSema, this PR hasn't been updated for a while. You will be unassigned. Will you continue working on it? If so, please feel free to reassign yourself.
There's a check in the …
Force-pushed from 66cd32c to bd5e2b7
Just curious. Do you know how to find these logs after a failure?
I do not know.
There's a pytest.log file in the tests/integration directory when you run the test.
CheSema
left a comment
In general, I'm just happy with it.
Use latency buckets to track first-byte read/write and connect times for S3 requests. That way we can later use the gathered data to calculate approximate percentiles and adapt timeouts.
Changelog category (leave one):
Changelog entry (a user-readable short description of the changes that goes to CHANGELOG.md):
Introduce latency buckets and use them to track first-byte read/write and connect times for S3 requests. That way we can later use the gathered data to calculate approximate percentiles and adapt timeouts.
Documentation entry for user-facing changes
CI
CI Settings (Only check the boxes if you know what you are doing):