The failure modes are thought through, but not empirically validated. We should build a basic testing harness for failure modeling, in the style of a simplified Jepsen. It would also be good to establish a way to verify overall system throughput (MBps) and latency (ingest-to-query).
OK Log's system write throughput should be approximately equal to the aggregate disk write capacity of the ingest nodes, and should scale linearly. That is, if you have 10 ingest nodes that can each sustain 250 MBps of writes, and a large collection of well-balanced producers connected to them, the overall write throughput observed by the system should be 250 MBps × 10 = 2.5 GBps.
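As a back-of-the-envelope check, here's a minimal Go sketch of that arithmetic. The node count and per-node write capacity are hypothetical, not measurements from any real deployment:

```go
package main

import "fmt"

func main() {
	const (
		ingestNodes      = 10    // hypothetical cluster size
		perNodeWriteMBps = 250.0 // assumed sustained disk write capacity per ingest node
	)
	// With well-balanced producers, aggregate throughput scales linearly with node count.
	aggregateMBps := float64(ingestNodes) * perNodeWriteMBps
	fmt.Printf("expected aggregate write throughput: %.0f MBps (%.1f GBps)\n",
		aggregateMBps, aggregateMBps/1000)
	// prints: expected aggregate write throughput: 2500 MBps (2.5 GBps)
}
```

A testing harness could compare the observed ingest rate against this expected figure to confirm the linear-scaling claim.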
Latency is bimodal: in high-throughput environments, it's controlled by network speed; in low-throughput environments, it's approximately equal to ingest segment flush max age plus store segment target age. That is, if ingest flush max age is 3s, and store segment target age is 3s, then ingest-to-query should be approximately 6s (worst case).
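And a similar sketch for the worst-case latency estimate in the low-throughput regime. The 3s values mirror the example above; the variable names are illustrative, not OK Log's actual configuration flags:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Hypothetical settings; real flag names and defaults may differ.
	ingestFlushMaxAge := 3 * time.Second     // ingest segment flush max age
	storeSegmentTargetAge := 3 * time.Second // store segment target age
	// Worst case: a record arrives just after a flush, then waits out a full
	// store compaction window before becoming queryable.
	worstCase := ingestFlushMaxAge + storeSegmentTargetAge
	fmt.Println("worst-case ingest-to-query latency:", worstCase) // prints 6s
}
```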