
benchmark considerations #1

Open · cantsin opened this issue Oct 19, 2014 · 1 comment

@cantsin (Contributor) commented Oct 19, 2014

We are essentially benchmarking the nsq clients here against a specific nsq configuration, so we need to make sure runs happen in a consistent environment. With that in mind, here are some things to be mindful of:

  • Like nsq's bench/bench.py, use fresh EC2 machines or the equivalent.
  • Print out the nsq configuration (e.g., only one nsqlookupd and one nsqd) so we know we are comparing identical nsq configurations (a sketch of this follows the list).
  • nsq "warmup" needs to be defined: is it the first few messages? The first few seconds?
  • Status of ephemeral topics/channels (ephemeral topics are only in the latest nsq).
  • Go version used to build nsq.
  • nsq client versions.
  • Memory, CPU, and bandwidth load (if only to make sure we're not hitting any ceilings).
  • nsqd data (*.dat) files.
  • Hopefully, even after multiple runs, we see identical performance profiles.
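
For the configuration point above, here is a minimal sketch (in Python, since nsq's bench.py is the reference point) of recording the environment alongside each run. It assumes the nsqd and nsqlookupd binaries are on the PATH and print a version string via a `-version` flag; the output filename and the exact set of fields are placeholders to adapt.

```python
#!/usr/bin/env python
"""Sketch: snapshot the benchmark environment before each run."""
import json
import platform
import subprocess
import time


def run(cmd):
    """Run a command and return its trimmed output, or an error marker."""
    try:
        out = subprocess.check_output(cmd, stderr=subprocess.STDOUT)
        return out.decode("utf-8", "replace").strip()
    except (OSError, subprocess.CalledProcessError) as e:
        return "unavailable: %s" % e


def snapshot_environment():
    """Collect the versions and host facts we want attached to results."""
    return {
        "timestamp": time.time(),
        "host": platform.node(),
        "platform": platform.platform(),
        # assumption: binaries are on PATH and print a version string
        "nsqd_version": run(["nsqd", "-version"]),
        "nsqlookupd_version": run(["nsqlookupd", "-version"]),
        "go_version": run(["go", "version"]),
        # client library versions would come from a pinned
        # requirements/lock file committed next to the results
    }


if __name__ == "__main__":
    # "bench-environment.json" is a placeholder name
    with open("bench-environment.json", "w") as f:
        json.dump(snapshot_environment(), f, indent=2, sort_keys=True)
```

Dropping a file like this next to each result set would let us diff environments between runs before comparing any numbers.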
@pharaun (Contributor) commented Oct 19, 2014

👍
