Add an example that demonstrates p2c rr behavior #39
Conversation
The new _demo_ example sends a million simulated requests through each load balancer configuration and records the observed latency distributions. Furthermore, this fixes a critical bug in Balancer, where we did not properly iterate through not-ready nodes.
FWIW, I'm pretty sure that the bug you mentioned above also causes some test failures in
Note that the panics I mentioned in #39 (comment) both occurred at
I vote we try and fast-track getting this merged --- I'd like to test Conduit against this branch, but changing a
Update: this appears to fix the panic I mentioned, but the same tests now appear to hang:
It's unclear whether or not that's a bug in
EDIT: doing some sleuthing, and it looks like a Conduit bug, but this still remains to be determined.
tower-balance/src/lib.rs
Outdated
// Iterate through the not-ready endpoints from right to left to prevent removals
// from reordering services in a way that could prevent a service from being polled.
for idx in self.not_ready.len()-1..0 {
for offset in 1..n {
This can probably be....
for idx in (1..n).rev()
tower-balance/src/lib.rs
Outdated
for offset in 1..n {
    let idx = n - offset;

for idx in (0..n-1).rev() {
@olix0r I don't think this change is correct --- I stopped seeing the integer overflow in Conduit tests when building with this branch as of commit fb08434, but after pointing back at master as of 777888d, I'm getting the overflow again:
running 3 tests
test outbound_times_out ... ignored
thread 'support proxy' panicked at 'attempt to subtract with overflow', /Users/eliza/.cargo/git/checkouts/tower-b098c32cf5a1bcca/777888d/tower-balance/src/lib.rs:154:24
thread 'support proxy' panicked at 'attempt to subtract with overflow', /Users/eliza/.cargo/git/checkouts/tower-b098c32cf5a1bcca/777888d/tower-balance/src/lib.rs:154:24
note: Run with `RUST_BACKTRACE=1` for a backtrace.
test outbound_reconnects_if_controller_stream_ends ... FAILED
test outbound_asks_controller_api ... FAILED
failures:
---- outbound_reconnects_if_controller_stream_ends stdout ----
thread 'outbound_reconnects_if_controller_stream_ends' panicked at 'called `Result::unwrap()` on an `Err` value: Error { kind: Inner(Error { kind: Io(Error { repr: Kind(BrokenPipe) }) }) }', src/libcore/result.rs:906:4
---- outbound_asks_controller_api stdout ----
thread 'outbound_asks_controller_api' panicked at 'called `Result::unwrap()` on an `Err` value: Error { kind: Inner(Error { kind: Io(Error { repr: Kind(BrokenPipe) }) }) }', src/libcore/result.rs:906:4
failures:
outbound_asks_controller_api
outbound_reconnects_if_controller_stream_ends
test result: FAILED. 0 passed; 2 failed; 1 ignored; 0 measured; 0 filtered out
A usize overflow can occur in `Balance::promote_to_ready` when `self.not_ready` has length 0. See my comment here: #39 (comment) Signed-off-by: Eliza Weisman <eliza@buoyant.io>
The `..` syntax creates a _half-open_ range (see https://doc.rust-lang.org/std/ops/struct.Range.html), so all that messing about with `n-1` in #39 and #40 was never actually necessary. This actually fixes the Conduit test I mentioned in #39 (comment); it no longer hangs. Signed-off-by: Eliza Weisman <eliza@buoyant.io>