
[query] fix backoff code #13713

Merged
danking merged 1 commit into hail-is:main from danking:fix-backoff-code
Sep 27, 2023

Conversation

@danking
Contributor

@danking danking commented Sep 26, 2023

CHANGELOG: Fixes #13704, in which Hail could encounter an IllegalArgumentException if there are too many transient errors.

I need to do the multiplication in 64 bits so that it does not wrap around to a large negative value. Then I can use `math.min` with the `maxDelayMs` to get us back into 32 bits.
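Hail's retry logic lives in Scala, but the same wraparound reproduces in any JVM language. The following is a minimal illustrative sketch, not Hail's actual code; the names `BASE_DELAY_MS`, `MAX_DELAY_MS`, and `attempt` are assumptions for the example.

```java
public class Backoff {
    static final int BASE_DELAY_MS = 100;
    static final int MAX_DELAY_MS = 60_000;

    // Buggy variant: the multiplication happens in 32-bit int arithmetic,
    // so after enough transient errors the product exceeds Integer.MAX_VALUE
    // and wraps to a large negative value. A negative delay passed to
    // something like Thread.sleep throws IllegalArgumentException.
    static int delayBuggy(int attempt) {
        return Math.min(BASE_DELAY_MS * (1 << attempt), MAX_DELAY_MS);
    }

    // Fixed variant: do the multiplication in 64 bits, then clamp with
    // Math.min against the max delay before narrowing back to 32 bits.
    static int delayFixed(int attempt) {
        long delay = (long) BASE_DELAY_MS * (1L << attempt);
        return (int) Math.min(delay, (long) MAX_DELAY_MS);
    }
}
```

For example, at `attempt = 25` the 32-bit product `100 * 2^25` wraps negative, while the 64-bit version clamps cleanly to `MAX_DELAY_MS`.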

I'm just pushing through a bunch of bugs to get Wenhan unblocked today.

@danking
Contributor Author

danking commented Sep 27, 2023

@ehigham bump, this is blocking Wenhan

@danking danking merged commit 481cfc2 into hail-is:main Sep 27, 2023
danking pushed a commit to danking/hail that referenced this pull request Oct 4, 2023
CHANGELOG: Reduce latency on simple pipelines by as much as 50% by reducing decoding time.

Force count essentially tests decoding: it forces decoding but then just increments a counter
by one. Analysis of profile results indicated that the array in-place decoder accounted for perhaps
50% of the time, but exactly which part of decoding was responsible was unclear.

I attempted many different things and eventually settled on loop unrolling as the primary
benefit. After team meeting, I applied @patrick-schultz's advice to use bit twiddling to further
improve the speed.
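To make the unrolling and bit-twiddling ideas concrete, here is a toy sketch of decoding 8 per-element presence flags from one missingness byte. This is illustrative only, not Hail's actual decoder: `decodeBranchy` and `decodeUnrolled` are hypothetical names, and Hail's real code is Scala.

```java
public class BitDecode {
    // Baseline: a loop with a shift per iteration; the JIT must cope with
    // the loop structure and data-dependent bit extraction.
    static void decodeBranchy(byte bits, boolean[] out, int off) {
        for (int i = 0; i < 8; i++) {
            out[off + i] = ((bits >> i) & 1) != 0;
        }
    }

    // Unrolled, branch-free: each flag is computed directly by masking its
    // bit, giving the JIT straight-line code with no loop overhead.
    static void decodeUnrolled(byte bits, boolean[] out, int off) {
        out[off]     = (bits & 0x01) != 0;
        out[off + 1] = (bits & 0x02) != 0;
        out[off + 2] = (bits & 0x04) != 0;
        out[off + 3] = (bits & 0x08) != 0;
        out[off + 4] = (bits & 0x10) != 0;
        out[off + 5] = (bits & 0x20) != 0;
        out[off + 6] = (bits & 0x40) != 0;
        out[off + 7] = (bits & 0x80) != 0;
    }
}
```

Both variants produce the same flags; the unrolled one trades code size for predictable straight-line execution, which is the shape of win the PR describes.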

---

I assessed the latency using `time python3` on this file:

```python
import hail as hl
hl.init(master='local[1]')
hl._set_flags(write_ir_files='1')
hl.read_matrix_table('/Users/dking/projects/hail-data/foo.mt')._force_count_rows()
```

`foo.mt` is a subset of the `variant_data` from a VDS with ~80k samples, ~300k variants, stored in
~1.6GiB.

1. This PR: 34s, 33s
2. no twiddling: 43s, 43s hail-is/hail@main...danking:hail:unroll-64
3. no twiddling & 8 element blocks: 37s, 38s hail-is/hail@main...danking:hail:unroll-8
4. `main` (`481cfc201b [query] fix backoff code (hail-is#13713)`): 68s, 69s

In YourKit, I observe that (1) reads 50-70MB/s with one core whereas (4) reads 15-35MB/s.

I also assessed the 10-core latency and JIT effects:

- (1) starts at ~12s, warms to ~6s (+- 0.5s). Peak bandwidth 490MB/s.
- (4) starts at ~17s and warms up to ~11s (+- 2s). Peak bandwidth ~250MB/s.

I suspect, with this PR, the multi-core speed is fast enough to saturate any of our file
stores (including my laptop, which I think taps out just around ~500MB/s).

Big thanks to everyone who contributed, particularly @patrick-schultz, whose suggestion to use
bit-twiddling squeezed another 10% off the 8-element blocks.


Development

Successfully merging this pull request may close these issues.

[query] automatic retry code in Java is broken

3 participants