4P Modelling on NMECR v1.0.15 #28

Closed

NastaranAl opened this issue Jan 26, 2024 · 1 comment
@NastaranAl

Hi, we have upgraded from NMECR v1.0.10 to v1.0.15, and one thing that raised some concern on our end is that, for utility-bill-based consumption data (i.e., intervals roughly a month long), the model's fit at a given temperature and the prediction NMECR produces for a results-period day with roughly the same weather can differ quite a bit. Here is an anonymized example:

[screenshot: training data vs. model fit (top) and results period vs. predictions (bottom)]

The top panel is training data vs. the model, and the bottom is the results period vs. predictions. In either of the two highlighted rectangles, at essentially the same temperature, the predicted value can be roughly 15% off from the original model fit for similar weather. What we expected was a piecewise function of two simple linear segments, with temperature as the independent variable and energy consumption as the dependent variable, as in the sketch below.
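For concreteness, here is a minimal sketch of the shape we expect from a 4P model, written in base R with synthetic data (none of this is nmecr code): two line segments joined at a changepoint, so predictions depend on temperature alone.

```r
# Minimal sketch of the expected 4P shape, using synthetic data and
# base R only (not nmecr internals): energy is two linear segments
# of OAT joined at a single changepoint.
set.seed(1)
oat   <- runif(36, 30, 90)                                   # hypothetical monthly mean OAT
eload <- 500 + 12 * pmax(oat - 60, 0) + rnorm(36, sd = 25)   # synthetic consumption

fit_4p <- function(cp) {
  # Left slope via pmin(), right slope via pmax(), hinged at cp
  lm(eload ~ pmin(oat - cp, 0) + pmax(oat - cp, 0))
}

# Profile the changepoint over a grid and keep the lowest-RSS fit
cps      <- seq(40, 80, by = 0.5)
rss      <- sapply(cps, function(cp) sum(residuals(fit_4p(cp))^2))
best_fit <- fit_4p(cps[which.min(rss)])

# The model is a function of OAT only: equal temperatures must give
# equal predictions, whether they fall in the baseline or results period
predict(best_fit, newdata = data.frame(oat = c(55, 55)))
```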

Some context about the model above:

  • We’re only using OAT (outdoor air temperature) as the independent variable.
  • We don’t do multivariate regressions.
  • The model is definitely 4P.
  • We noticed two additional coefficients, for interval_start and interval_end variables, that we hadn’t seen in previous versions.

Ultimately, I guess we have three more pointed questions:

  1. Did anything change between v1.0.10 and v1.0.15 that might engender this sort of behaviour change?
  2. Is there maybe something else factoring into the calculations beyond purely the weather parameter provided?
  3. Is it normal that the 4P model follows the training data so strictly instead of forming the usual two line segments joined at a changepoint?

Thanks,

@mhsjacoby (Contributor)

Thanks for pointing this out, Nastaran. There was a bug, introduced in v1.0.15, that caused interval_start and interval_end to be used as independent variables in the monthly changepoint models. This is what led to the overfitted shape you pointed out in the image above: with date columns in the regression, predictions depend on when an interval falls, not just on its weather (see the sketch below). We fixed this, and the resulting model fits are as you would expect. Without these variables, you shouldn't see the variation in predictions that you first noticed. I'm closing this issue and have released v1.0.16 with the fix. Let me know if you have additional questions.
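To illustrate the mechanism, here is a minimal sketch with synthetic data (plain lm() calls, not nmecr's actual code): once interval dates enter the design matrix, two intervals with identical weather no longer get identical predictions.

```r
# Illustrative only: synthetic monthly data, not nmecr internals.
# Leaking interval dates into the regressors breaks temperature-only
# predictions, which is the behaviour reported above.
set.seed(2)
n   <- 36
oat <- runif(n, 30, 90)
interval_start <- as.numeric(seq(as.Date("2021-01-01"), by = "month", length.out = n))
interval_end   <- interval_start + sample(28:31, n, replace = TRUE)
eload <- 500 + 12 * pmax(oat - 60, 0) + rnorm(n, sd = 25)

correct <- lm(eload ~ pmax(oat - 60, 0))                                  # weather only
buggy   <- lm(eload ~ pmax(oat - 60, 0) + interval_start + interval_end)  # dates leak in

# Same OAT, two different billing periods
nd <- data.frame(oat = c(55, 55),
                 interval_start = interval_start[c(1, n)],
                 interval_end   = interval_end[c(1, n)])
predict(correct, nd)  # identical predictions
predict(buggy, nd)    # predictions drift with the date columns
```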
