- Read all of the source code.
- Run the package on a router in the lab for at least a week.
- Run in a few friendly homes for at least a week.
- Do you install cron jobs?
- Does your package's binary run as a daemon or periodically?
- If periodic, what's the upper bound on the duration of one run? Do you prevent overlapping instances?
- If daemon, what's the upper bound on CPU/memory usage? What about memory leaks?
- Does your package clean up after itself when removed?
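For the overlapping-instances question above, a lock file is the usual guard. A minimal sketch, assuming `flock(1)` is available (it is in util-linux and recent BusyBox); the lock path and the wrapped command are placeholders:

```shell
#!/bin/sh
# Wrap a periodic job so a second invocation exits immediately
# instead of piling up behind a slow run.  LOCKFILE is a placeholder.
LOCKFILE="${TMPDIR:-/tmp}/myexperiment.lock"

run_once() {
    (
        # flock -n: fail at once if another instance holds fd 9's lock.
        flock -n 9 || { echo "already running" >&2; exit 1; }
        "$@"
    ) 9>"$LOCKFILE"
}

run_once echo "measurement ran"
```

Invoke the wrapper from cron instead of the job itself, so a run that exceeds its interval simply causes the next invocation to skip rather than stack up.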
- Compute how much data the tool will upload. You might find http://sites.gtnoise.net/~sburnett/bismark-status/uploads.html useful.
    - How much data does your tool upload in normal usage?
    - How much data does your tool upload in the worst case? You must have a reasonable upper bound on data usage. What's considered reasonable depends on the household and the number of active experiments, but generally uploading less than 5 MB per day is reasonable. More than 5 MB may still be acceptable, but could limit the set of experiments run on the router; we should talk.
- Compute how much auxiliary traffic the tool will upload:
    - in normal usage.
    - in the worst case scenario. You must have a reasonable upper bound.
- Does your tool generate traffic in proportion to the uplink capacity?
- Consider how your tool will behave in a household with a 768/128 Kbps DSL connection.
- What if latency to Georgia Tech is 3000 ms?
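These conditions can be rehearsed before deployment: on a Linux test machine, netem and tbf can emulate the high latency and narrow uplink. A sketch (requires root; `eth0` is an assumed interface name, and this shapes egress only, standing in for the 128 Kbps uplink):

```shell
#!/bin/sh
# Emulate a high-latency, slow-uplink path on a test machine.
# Requires root; eth0 is a placeholder interface name.
IFACE=eth0

# Add 3000 ms of delay to all egress traffic ...
tc qdisc add dev "$IFACE" root handle 1: netem delay 3000ms
# ... and cap egress at 128 Kbps, like the DSL uplink above.
tc qdisc add dev "$IFACE" parent 1:1 handle 10: tbf rate 128kbit burst 32kbit latency 400ms

# Restore the interface when finished:
# tc qdisc del dev "$IFACE" root
```

Run your tool behind this qdisc and confirm it still bounds its runtime and upload volume.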
- Does your tool increase memory or CPU usage? You might find dp5:~sburnett/bismark-health/health.sqlite useful. You can also install collectd and configure a collectd server to monitor at finer granularity.
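For a rough trend before (or alongside) collectd, sampling VmRSS from /proc is often enough to spot a leak. A sketch; `mydaemon` is a placeholder process name:

```shell
#!/bin/sh
# Print a process's resident memory (VmRSS, in kB) so that repeated
# samples reveal growth over time.  "mydaemon" is a placeholder.

sample_rss() {
    pid=$(pidof "$1" 2>/dev/null | cut -d' ' -f1)
    if [ -n "$pid" ] && [ -r "/proc/$pid/status" ]; then
        awk '/^VmRSS:/ {print $2}' "/proc/$pid/status"
    else
        echo "gone"
    fi
}

# Log one timestamped sample per minute, e.g. from cron or a loop:
#   while true; do echo "$(date +%s) $(sample_rss mydaemon)"; sleep 60; done
sample_rss mydaemon
```

A flat series of samples over a week of lab running is good evidence for the memory-leak question above.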