
CalTRACK Issue: Vote on/agree to uniform testing approach #129

carolinemfrancispge opened this issue May 15, 2019 · 1 comment



commented May 15, 2019


  • Are you opening this issue in the correct repository (caltrack/grid/seat)?
  • Did you perform a search of previous Github issues to check if this issue has been previously considered?

Article reference number in CalTRACK documentation (optional): none; this is in scope, but it is not currently directly addressed in the methods.


I'd like to propose that this group agree to a uniform approach we would use to evaluate changes to modeling approaches and/or new modeling approaches. This is motivated by a desire to have a standard understanding of how versions/updates improve CalTRACK, and to take the burden of determining a testing approach off of group members who want to propose a change.

The testing approach the group settles on should include a process for testing and appropriate metrics (note: I'm talking about modeling here; other sections of the methods would likely need different metrics). The metrics should be comparable across models with data at different time resolutions (e.g., hourly and daily) and usable in out-of-sample testing. We'd also need to discuss what the counterfactual for comparison would be (current CalTRACK? An older version?).
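To make the "comparable across time resolutions, out of sample" requirement concrete, here is a minimal sketch of one candidate metric, CV(RMSE) (coefficient of variation of RMSE), which is unitless because it normalizes RMSE by the mean of the observed values. This is an illustration only, not a proposal endorsed by the group; the data below are synthetic and the function name is hypothetical.

```python
import numpy as np

def cvrmse(actual, predicted):
    """CV(RMSE): RMSE normalized by the mean of the actual values.
    Because the result is unitless, hourly and daily models can be
    scored on a common footing."""
    actual = np.asarray(actual, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    rmse = np.sqrt(np.mean((actual - predicted) ** 2))
    return rmse / np.mean(actual)

# Synthetic out-of-sample illustration: 30 days of hourly "usage"
# plus prediction noise (entirely made-up numbers).
rng = np.random.default_rng(0)
hourly_actual = 10 + rng.normal(0, 1, 24 * 30)
hourly_pred = hourly_actual + rng.normal(0, 0.5, hourly_actual.size)

# Aggregate the same series to daily totals; the metric still applies.
daily_actual = hourly_actual.reshape(30, 24).sum(axis=1)
daily_pred = hourly_pred.reshape(30, 24).sum(axis=1)

print(cvrmse(hourly_actual, hourly_pred))
print(cvrmse(daily_actual, daily_pred))
```

In practice the held-out (out-of-sample) period would come from real meter data rather than noise added to the training series, and the group would still need to agree on acceptance thresholds per resolution.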

Ideally this approach would come with a standard data set (or a few), but that is not necessarily required.

Proposed test methodology

  1. Open discussion for proposals from interested WG members for test methodologies (#117 proposes a testing approach for that issue that could be more broadly applied; #122 overlaps with this issue and also proposes a testing approach; #71 and #76 contain some relevant discussion, though they focus on model selection for particular buildings). Proposals should outline what the test methodology is, how it would apply broadly to CalTRACK, its use in other contexts (and why those are relevant) or its prevalence in the methodological literature, and the advantages and disadvantages it would offer.
  2. The chair may choose to consolidate the list of test methodologies, if several are similar, or narrow it down if some seem infeasible.
  3. Group members would vote on a test methodology (by consensus if possible).

Acceptance Criteria

A supermajority or consensus vote of group members would choose a testing methodology.



commented May 22, 2019

Another way of going about this would be to submit a test methodology with a particular issue and reach consensus on the testing protocol prior to working on the issue. There are likely going to be different testing requirements and different thresholds for different issues and probably different data requirements as well. If we try to set this up beforehand, it's likely that we'll spend all of our time creating exceptions to the rules we've laid down for ourselves.

@jkoliner jkoliner added this to Phase 1: Pre-Draft in CalTRACK May 28, 2019
@jkoliner jkoliner moved this from Phase 1: Pre-Draft to Phase 2: Draft in CalTRACK Jun 7, 2019