Add support for adjusting the server WebSocket writer limit #9572
Conversation
Downstream is currently manually patching to increase the limit because the WebSocket writer applies too much back pressure on reconnect, as a significant number of large messages are sent right after the connection is established. These sometimes take a while to be delivered because of latency over the internet, and too much back pressure causes the downstream application to hit timeouts.
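For context, the downstream workaround is roughly a monkey-patch of this shape. This is a sketch, not the actual downstream code: it assumes aiohttp's internal `WebSocketWriter` takes its buffer limit as a keyword-only `limit` argument, which is an implementation detail that can change between releases.

```python
# Sketch of the kind of monkey-patch downstream carries today: force a
# larger buffer limit onto every WebSocketWriter aiohttp creates.
from aiohttp.http_websocket import WebSocketWriter

_original_init = WebSocketWriter.__init__

def _patched_init(self, *args, **kwargs):
    kwargs["limit"] = 4 * 1024 * 1024  # raise the writer buffer limit to 4 MiB
    _original_init(self, *args, **kwargs)

WebSocketWriter.__init__ = _patched_init
```

Exposing the limit in the public API makes a patch like this unnecessary.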
CodSpeed Performance Report: merging #9572 will not alter performance.
Codecov Report: All modified and coverable lines are covered by tests ✅. All tests successful; no failed tests found.

```diff
@@           Coverage Diff           @@
##           master    #9572   +/-  ##
=======================================
  Coverage   98.60%   98.60%
=======================================
  Files         113      113
  Lines       35290    35301   +11
  Branches     4191     4191
=======================================
+ Hits        34797    34808   +11
  Misses        331      331
  Partials      162      162
```
Flags with carried forward coverage won't be shown. ☔ View full report in Codecov by Sentry.
Backport to 3.11: 💔 cherry-picking failed, conflicts found.
❌ Failed to cleanly apply e36c5f0 on top of patchback/backports/3.11/e36c5f0ffeeea0076736c5756d63f5933c91eb8d/pr-9572 (backporting merged PR #9572 into master).
🤖 @patchback
(cherry picked from commit e36c5f0)
What do these changes do?
Downstream is currently manually patching to increase the limit because the WebSocket writer applies too much back pressure on reconnect, as a significant number of large messages are sent right after the connection is established. These sometimes take a while to be delivered because of latency, and too much back pressure causes the downstream application to hit a timeout and disconnect prematurely.
#1367 added the limit, but there has never been a way to adjust it via the public API.
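A minimal sketch of how the new knob could be used on the server side, assuming this PR exposes it as a `writer_limit` keyword on `web.WebSocketResponse` (the exact parameter name and default are in the diff):

```python
from aiohttp import WSMsgType, web

async def handler(request: web.Request) -> web.WebSocketResponse:
    # Allow more data to be buffered before the writer forces a drain, so
    # the burst of large messages sent right after (re)connect does not
    # stall on back pressure and trip downstream timeouts.
    ws = web.WebSocketResponse(writer_limit=1024 * 1024)
    await ws.prepare(request)
    async for msg in ws:
        if msg.type == WSMsgType.TEXT:
            await ws.send_str(msg.data)
    return ws
```

A larger limit trades memory for fewer forced drains: the writer holds more pending data before awaiting the transport, which smooths over transient latency spikes.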
Are there changes in behavior for the user?
No.
Is it a substantial burden for the maintainers to support this?
No.