Performance and cost efficiency on AWS Graviton #4020
trinity-1686a started this conversation in General
-
@trinity-1686a can you share also the index config and the node config?
-
I ran some small ingestion benchmarks on different types of Amazon EC2 instances, including their Graviton (ARM) CPUs. Here are the results, summarized:
I chose C5 as the reference point for comparison, as that's what's used in most posts on quickwit.io. All instances are xlarge (4 cores, 8 GB of RAM).
The results seem to indicate that the best bang for the buck with Quickwit comes from either C6a or C7g instances. C6g is sadly much slower, but as it's the cheapest, it can be useful if your workload fits on a single node. C7a and C7i are the fastest instances available, but they are less cost efficient (you can run 5 C7g for the price of 4 C7i or 3.5 C7a), so they aren't really worth it for something that distributes well, such as Quickwit.
tl;dr: you should probably use either c6a or c7g for indexing your data with Quickwit.
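If you want to redo the cost-efficiency arithmetic for your own region and workload, here is a minimal sketch. The hourly prices and throughput figures below are placeholders, not numbers from this benchmark; fill them in from the AWS pricing page and your own run.

```python
# Back-of-the-envelope "bang for the buck" comparison: cost per GB indexed
# is just hourly price divided by sustained indexing throughput.
# All figures below are PLACEHOLDERS, not measurements from this post.

def cost_per_gb(price_per_hour: float, throughput_mb_s: float) -> float:
    """USD spent per GB ingested at a sustained indexing throughput."""
    gb_per_hour = throughput_mb_s * 3600 / 1024
    return price_per_hour / gb_per_hour

# Placeholder inputs: {instance: (on-demand $/hour, indexing MB/s)}.
instances = {
    "c6a.xlarge": (0.15, 40.0),
    "c6g.xlarge": (0.13, 25.0),
    "c7g.xlarge": (0.14, 40.0),
    "c7i.xlarge": (0.18, 45.0),
    "c7a.xlarge": (0.21, 50.0),
}

# Print instances from most to least cost efficient.
for name, (price, mb_s) in sorted(
    instances.items(), key=lambda kv: cost_per_gb(*kv[1])
):
    print(f"{name}: ${cost_per_gb(price, mb_s):.4f} per GB ingested")
```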
Note on the protocol: I ingested the HDFS dataset using Quickwit v0.6.4, through the ingest API (single-pipeline workload), and without merges. You can't derive a good estimate of $ per TB ingested from that: merges would slow things down, and multiple pipelines would speed them up. But those effects should have the same impact on every instance type, so the relative comparison makes sense.
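For reference, a rough sketch of what a single-pipeline ingest loop against the REST ingest API can look like. The index id, host, dataset path, and batch size are assumptions for illustration, not the actual benchmark setup.

```python
# Minimal sketch of a single-pipeline ingest workload via Quickwit's REST
# ingest API. Assumptions (not from the post): index "hdfs-logs", a local
# NDJSON file "hdfs-logs.json", and Quickwit listening on localhost:7280.
import requests

QUICKWIT_URL = "http://localhost:7280"
INDEX_ID = "hdfs-logs"        # hypothetical index id
DATASET = "hdfs-logs.json"    # newline-delimited JSON documents
BATCH_SIZE = 5_000            # documents per ingest request


def post_batch(batch: list[str]) -> None:
    # The ingest endpoint accepts a newline-delimited JSON payload.
    resp = requests.post(
        f"{QUICKWIT_URL}/api/v1/{INDEX_ID}/ingest",
        data="".join(batch).encode("utf-8"),
    )
    resp.raise_for_status()


def ingest(path: str) -> None:
    batch: list[str] = []
    with open(path, "r", encoding="utf-8") as f:
        for line in f:
            batch.append(line)
            if len(batch) >= BATCH_SIZE:
                post_batch(batch)
                batch.clear()
    if batch:
        post_batch(batch)


if __name__ == "__main__":
    ingest(DATASET)
```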
(edit: there was some misconfiguration during the benchmarks, so I reran the numbers and updated the conclusions)