Testing new experiments

Srikanth edited this page Sep 17, 2013 · 3 revisions
  1. Read all of the source code.
  2. Run the package on a router in the lab for at least a week.
  3. Run it in a few friendly homes for at least a week.
  4. Does your package install cron jobs?
  5. Does your package's binary run as a daemon or periodically?
  • If periodic, what's the upper bound on the duration of a single run? Do you guard against overlapping instances?
  • If it runs as a daemon, what's the upper bound on CPU and memory usage? Have you checked for memory leaks?
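For periodic jobs, one common way to guard against overlapping instances is a lock file. A minimal sketch, assuming `flock` and `timeout` are available on the router (both exist as BusyBox applets on recent OpenWrt builds); the lock path and the 300-second cap are placeholders, not project conventions:

```sh
#!/bin/sh
# Wrap a periodic measurement run so that overlapping cron invocations
# are skipped instead of piling up. Lock path is hypothetical.
LOCK=/tmp/myexperiment.lock
(
  # Try to take the lock without blocking; bail out if a run is active.
  flock -n 9 || { echo "previous run still active; skipping"; exit 0; }
  # The real measurement work goes here; enforce a hard runtime cap
  # so a single run has a known upper bound on its duration.
  timeout 300 sleep 1   # placeholder for the actual job, capped at 300 s
  echo "run finished"
) 9>"$LOCK"
```

Invoked from cron every few minutes, the second instance exits immediately while the first still holds the lock.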
  6. Does your package clean up after itself when removed?
  7. Compute how much data the tool will upload. You might find http://sites.gtnoise.net/~sburnett/bismark-status/uploads.html useful.
  • How much data does your tool upload in normal usage?
  • How much data does your tool upload in the worst case? You must have a reasonable upper bound on data usage. What counts as reasonable depends on the household and the number of active experiments, but uploading less than 5 MB per day is generally reasonable. More than 5 MB may still be acceptable, but could limit the set of experiments run on the router; we should talk.
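One cheap way to keep the worst case bounded in practice is to check the staged data against the daily budget before uploading. A sketch, assuming your tool queues files in a staging directory before upload (`/tmp/bismark-uploads` is a hypothetical path, not a BISMark convention):

```sh
#!/bin/sh
# Check staged upload volume against a daily budget before transferring.
DIR=/tmp/bismark-uploads     # hypothetical staging directory
mkdir -p "$DIR"
BUDGET_KB=5120               # 5 MB/day, the suggested upper bound
USED_KB=$(du -sk "$DIR" | cut -f1)
if [ "$USED_KB" -gt "$BUDGET_KB" ]; then
  echo "over budget: ${USED_KB} KB > ${BUDGET_KB} KB; deferring upload"
else
  echo "within budget: ${USED_KB} KB of ${BUDGET_KB} KB"
fi
```

Whether to defer, drop, or downsample when over budget depends on the experiment; the point is that the bound is enforced on the router, not just estimated on paper.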
  8. Compute how much auxiliary traffic the tool will upload:
  • in normal usage.
  • in the worst case scenario. You must have a reasonable upper bound.
  9. Does your tool generate traffic in proportion to the uplink capacity?
  • Consider how your tool will behave in a household with a 768/128 Kbps DSL connection.
  • What if latency to Georgia Tech is 3000 ms?
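A quick back-of-the-envelope check makes the slow-uplink scenario concrete: even the "reasonable" 5 MB/day budget occupies a 128 Kbps DSL uplink for minutes at a stretch.

```sh
#!/bin/sh
# How long does a 5 MB daily upload occupy a 128 Kbps uplink?
BYTES=$((5 * 1024 * 1024))           # 5 MB daily budget
UPLINK_BPS=$((128 * 1000))           # 128 Kbps DSL uplink
SECONDS_NEEDED=$((BYTES * 8 / UPLINK_BPS))
echo "5 MB at 128 Kbps takes ~${SECONDS_NEEDED} s (~$((SECONDS_NEEDED / 60)) min)"
```

Five-plus minutes of a saturated uplink is very noticeable to the household, so uploads should be rate-limited or spread out rather than sent in one burst.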
  10. Does your tool increase memory or CPU usage? You might find dp5:~sburnett/bismark-health/health.sqlite useful. You can also install collectd and configure a collectd server to monitor at finer granularity.
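For the collectd option, a minimal client-side `collectd.conf` sketch is below; the server hostname is a placeholder, and 25826 is collectd's default network-plugin port. This only covers CPU and memory sampling, the two resources the checklist asks about:

```apache
# Minimal client-side collectd.conf sketch; hostname is a placeholder.
Interval 10                 # sample every 10 seconds

LoadPlugin cpu
LoadPlugin memory
LoadPlugin network

<Plugin network>
  # Ship samples to the monitoring server for finer-grained history
  Server "collectd.example.org" "25826"
</Plugin>
```

Compare the CPU and memory series with your experiment enabled and disabled to attribute any increase to your package.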