Hi, We have upgraded NMECR from v1.0.10 to v1.0.15, and one thing that raised some concern on our end is that, for utility bill-based consumption data (so intervals roughly a month in length), the model fit at a given temperature and the prediction NMECR produces for a results-period day with roughly the same weather can differ quite a bit. Here is an anonymized example:
Top is training data vs. model, and bottom is results period vs. predictions. In either of the two rectangular sections, the predicted value at essentially the same temperature can be something like 15% off from the original model fit. What we expected here was a piecewise function of two simple linear pieces, with temperature as the independent variable and energy consumption as the dependent variable.
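For concreteness, here is a minimal sketch of the kind of 4P model we expected: two linear pieces joined at a changepoint temperature, with OAT as the only regressor. This is plain Python (not NMECR's R implementation), and all the numbers below are made up for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

def four_p(T, base, slope_left, slope_right, T_cp):
    """Energy use as two linear pieces joined at a changepoint temperature T_cp."""
    return (base
            + slope_left * np.minimum(T - T_cp, 0.0)    # slope below the changepoint
            + slope_right * np.maximum(T - T_cp, 0.0))  # slope above the changepoint

# Synthetic monthly OAT (deg F) and consumption, for illustration only.
T = np.array([30, 38, 45, 52, 60, 68, 75, 82, 88, 55, 47, 35], dtype=float)
E = four_p(T, 500.0, -8.0, 12.0, 60.0) + np.random.default_rng(0).normal(0.0, 20.0, T.size)

params, _ = curve_fit(four_p, T, E, p0=[E.mean(), -1.0, 1.0, float(np.median(T))])
print("base, slope_left, slope_right, changepoint:", np.round(params, 2))

# With OAT as the only regressor, two periods at the same temperature get the
# same prediction, which is the behaviour we expected from a 4P model.
```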
Some context about the model above:
We are only using OAT (outside air temperature) as the independent variable.
We don’t do multivariate regressions.
The model is definitely 4P.
We noticed two additional coefficients, for the interval_start and interval_end variables, that did not appear in previous versions.
Ultimately, we have three more pointed questions:
Did anything change between v1.0.10 and v1.0.15 that might explain this change in behaviour?
Is there maybe something else factoring into the calculations beyond purely the weather parameter provided?
Is it normal for the 4P model to follow the training data this closely, instead of producing the usual two lines joined at a changepoint?
Thanks,
Thanks for pointing this out, Nastaran. There was a bug, introduced in v1.0.15, that caused interval_start and interval_end to be used as independent variables in the monthly changepoint models. This is what led to the overfit shape you pointed out in the image above. We have fixed this, and the resulting model fits are as you would expect. Without these variables, you shouldn't see the variation in predictions that you first noticed. I'm closing this issue and have released v1.0.16 with the fix. Let me know if you have additional questions.
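In case it helps anyone else who hit this, below is a minimal sketch (plain Python with a fixed changepoint, not the actual nmecr code) of why including interval_start and interval_end as regressors overfits a monthly changepoint model: the date columns absorb month-to-month variation, so the fit tracks the training bills instead of the temperature response. All data here are synthetic.

```python
import numpy as np

rng = np.random.default_rng(1)
T = np.array([30, 38, 45, 52, 60, 68, 75, 82, 88, 55, 47, 35], dtype=float)  # monthly OAT (made up)
E = 500 - 8.0 * np.minimum(T - 60, 0) + 12.0 * np.maximum(T - 60, 0) + rng.normal(0, 20, T.size)

month_len = np.array([31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31], dtype=float)
interval_start = np.concatenate([[0.0], np.cumsum(month_len)[:-1]])  # day number of each bill's start
interval_end = interval_start + month_len

# Weather-only design matrix: intercept plus the two changepoint basis terms (changepoint fixed at 60).
X_weather = np.column_stack([np.ones_like(T), np.minimum(T - 60, 0), np.maximum(T - 60, 0)])
# Buggy design matrix: the same terms plus the interval timestamps as extra regressors.
X_buggy = np.column_stack([X_weather, interval_start, interval_end])

for name, X in [("weather only", X_weather), ("with interval dates", X_buggy)]:
    beta, *_ = np.linalg.lstsq(X, E, rcond=None)
    rss = float(np.sum((E - X @ beta) ** 2))
    print(f"{name:20s} residual sum of squares = {rss:.0f}")

# The date columns soak up month-to-month noise, so the fit hugs the training bills,
# and two reporting-period points at the same OAT can get different predictions
# just because their dates differ.
```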