Note: "cold start" means a clean launch of Brackets after a reboot, so the timing is unaffected by OS caching. The goal is not so much the case where users literally cold-start Brackets; rather, it's to get a true timing for the first time the user accesses a given set of files, or for files they've already accessed that have since been flushed from the OS cache.
Note: This doesn't include typing speed, since we'll be handling that separately as part of our typing speed performance instrumentation/optimizations.
Question: Do we want to include other kinds of edits, scrolling performance, etc.? I feel those are better handled in more generic performance stories that cover both inline and regular editors, since the instrumentation should be similar in both cases. This proposal focuses only on issues specific to inline editors.
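To make the shared-instrumentation point concrete, here is a minimal sketch of the kind of timing utility such stories would rely on. The names (`markStart`, `addMeasurement`, `"inlineEditor.open"`) are illustrative assumptions, not actual Brackets APIs:

```javascript
// Hypothetical minimal timing utility. A real implementation would also
// handle nested/overlapping measurements and report results somewhere useful.
const perf = {
    _starts: {},
    measurements: {},

    // Record the start time for a named operation.
    markStart(name) {
        this._starts[name] = Date.now();
    },

    // Record elapsed time (ms) since markStart(name) was called.
    addMeasurement(name) {
        if (name in this._starts) {
            this.measurements[name] = Date.now() - this._starts[name];
            delete this._starts[name];
        }
    }
};

// Example: time how long opening an inline editor takes.
perf.markStart("inlineEditor.open");
// ... code that opens the inline editor would run here ...
perf.addMeasurement("inlineEditor.open");
```

Because the same `markStart`/`addMeasurement` pair works for inline editors, full editors, and file loading alike, the instrumentation cost is paid once and shared across all the performance stories.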
These are various axes along which we should vary our test cases:
Also, most projects with nontrivial JS have some kind of framework included, so that needs to be factored in. For now, we'll include the non-minified version of the framework, since our features don't work well with minified versions yet in any case.
Proposed project profiles based on these axes:
Note that the "stress" case is intentionally unrealistic. We don't expect users to have projects of this size/complexity (or expect Brackets to be fast on them), but it serves as a stress test for finding pathological slowdowns that grow faster than they should with project size/complexity.
Question: Should we try to construct artificial cases that match these profiles, or should we instead just look for real-world projects that very roughly span different levels of complexity?
Question: Is the stress case useful?