Inverted prediction with regressor? #1728
I think it would help to apply a bit of common sense here and consider that there is really no magic behind these tools. In the first example, the model is trying to forecast one year ahead using only six months of data. In the second, there seem to be only two data points for one year of prediction - really not enough to generate any meaningful forecast. As a comparison, one-day-ahead hourly forecasts for the electrical grid use models trained on three years' worth of data - a training-period-to-forecast-horizon ratio of roughly 1000 (365 days x 3 years) to 1.
Thanks for your answer @hansukyang! I know that the data is limited in some cases, which is exactly why I tried to introduce the regressor, which is based on at least a few years of data. I'm really not expecting a perfect prediction from only a few data points, but I find the behaviour of the regressor in the case of a negative trend very strange, which is why I opened this issue.
I see, thanks for the clarification. I saw similar behaviour when I had a small number of data points, and I explained it to myself as being a bit like fitting a parabola to two points. For my own use, I find the linear growth trend somewhat misleading, so I was very happy when
@JeremyKeustersML6 I think the issue here is a little subtle, and it isn't one that has come up much before.
With a multiplicative regressor, the model is

y(t) = trend(t) * (1 + beta * x(t))

so the magnitude of the regressor effect is trend(t) * beta * x(t). The key thing to note here is that the sign of the regressor effect will flip when the sign of the trend flips. Let's take beta > 0 for now. Then, with a positive trend, an increase in the regressor will produce an increase in y(t); with a negative trend, the same increase in the regressor produces a decrease in y(t), which is exactly the inversion you are seeing. The core of the issue is really the fact that we are getting a negative trend when we know that is impossible. The clipping approach from #1668 ensures a positive trend. The other thing you could try would be an additive regressor, instead of multiplicative. Then the model is

y(t) = trend(t) + beta * x(t)

and the effect of the regressor is no longer changed by the sign of the trend.
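The sign flip is easy to see with plain arrays. This is a toy illustration, not Prophet code - beta, the trend values, and the regressor values are all made up:

```python
import numpy as np

# Toy illustration (not Prophet code): the multiplicative regressor
# effect is trend(t) * beta * x(t), so its direction depends on the
# sign of the trend; the additive effect beta * x(t) does not.
beta = 0.5
x = np.array([0.0, 1.0, 2.0])        # regressor increasing over time

mult_pos = 10.0 * (1 + beta * x)     # positive trend: y increases with x
mult_neg = -10.0 * (1 + beta * x)    # negative trend: y DECREASES with x
add_neg = -10.0 + beta * x           # additive regressor: y still increases
```

With a trend of -10, the multiplicative model turns a rising regressor into a falling forecast, which matches the inverted behaviour described in this issue.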
Thank you both @hansukyang and @bletham for your answers.
This is an interesting option I didn't know about, thanks.
Thanks for the good explanation and the suggestion. I tried the There are two issues with these results:
On another note, I also changed my code so that the seasonality/forecast of 'Group A' and 'Group B' is forecast with
I also tried this, but in general it performs worse on my data. For Object A, it pushes the prediction into negative numbers with no prospect of turning positive again. For Object B, it actually suddenly generates a positive trend:
Thanks for sharing those results. Yeah, it's a bit tricky here since the trend goes to 0 early on and then ends up being a bit sticky at 0. I think @hansukyang's suggestion of a flat trend would probably be the best chance at improving the fit, since that would force the model to treat the initial decrease as coming from the regressor, and the trend would stay positive.
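The intuition behind the flat-trend suggestion can be sketched with a tiny least-squares fit (made-up toy data, nothing Prophet-specific): if the trend is forced flat, a systematic decrease in y can only be absorbed by the regressor coefficient.

```python
import numpy as np

# Toy sketch of the flat-trend intuition (made-up data, not Prophet):
# with the trend forced flat, the fit can only explain the decrease in
# y through the regressor, so the regressor coefficient carries it all.
t = np.arange(8, dtype=float)
x = -t                     # hypothetical regressor that declines over time
y = 5.0 + 2.0 * x          # data: flat baseline of 5 plus regressor effect

# Flat trend: fit y = m + beta * x, with no slope-in-t term at all.
A = np.column_stack([np.ones_like(t), x])
m_hat, beta_hat = np.linalg.lstsq(A, y, rcond=None)[0]
# m_hat recovers the flat level (~5) and beta_hat the regressor
# coefficient (~2): the decrease is attributed entirely to the
# regressor, never to a negative trend.
```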
Thanks again for your answer @bletham! I'll do some experiments with the flat trend and see if it leads to something! Regarding the
I think in these lines:

```python
while min(trend[indx_future:]) < 0:
    indx_neg = indx_future + np.argmax(trend[indx_future:] < 0)
```

if you replace 0 with something greater than 0, that would do it.
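A simplified, self-contained version of that idea, using a toy trend array and a floor above 0 (the actual patch in #1668 adjusts the slope and offset arrays rather than the trend directly):

```python
import numpy as np

# Simplified sketch of the clipping loop with a floor above 0
# (toy trend array; the real patch adjusts k_t/m_t instead).
floor = 0.1                    # "something greater than 0"
trend = np.array([1.0, 0.6, 0.3, 0.05, -0.2, -0.5])
indx_future = 0

while min(trend[indx_future:]) < floor:
    # first index at or after indx_future where the trend dips below the floor
    indx_neg = indx_future + np.argmax(trend[indx_future:] < floor)
    trend[indx_neg:] = floor   # flatten at the floor from there on
```

Because the flattened tail equals the floor exactly, the loop condition becomes false and it terminates after one pass on this toy example.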
Thanks again for your answer! I indeed already tried something like this in the past, but the program would just get stuck in some kind of infinite loop, so that probably has to do with the
On the other hand, I was pleasantly surprised with the results I got when using the
The results make sense, as I wanted more 'weight' on the regressor. It also didn't actually make sense to get a real trend based on only a few data points, so that is kind of solved here. I will still experiment with adding e.g. the trend of 'Group A' and 'Group B' respectively (instead of only adding their seasonalities) so there is still some kind of 'real trend' in there! Thank you again @bletham and @hansukyang - you've both been really helpful! I might still update this issue in the future with additional findings.
Just a small update: I succeeded in incorporating the
I renamed the existing
I'm then overriding the
The issue however is that the
Do you have any suggestions/pointers on what to fix/where to look, since I'm not familiar with all the different variables? A quick and dirty solution that actually works is to change at the end of the bottom loop (right after Thanks in advance! 😄 |
Ah, yes, I should have noticed that sooner. The

```python
k_t[indx_neg:] -= k_t[indx_neg]
m_t[indx_neg:] -= m_t[indx_neg]
```

is resetting the trend to have 0 slope (k_t) and 0 offset (m_t) at indx_neg, so that the trend is flat at 0. To have the trend be flat at 1/y_scale instead, what we really want is 0 slope and offset 1/y_scale. So

```python
k_t[indx_neg:] -= k_t[indx_neg]
m_t[indx_neg:] -= m_t[indx_neg] - 1 / y_scale
```

should do the job. I would worry a little bit about relying on < for the float comparison with 1/y_scale, so I'd probably add a little tolerance in there, like an extra 1e-6 or so on m_t.
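The effect of that offset correction can be checked on a toy piecewise trend. Here k_t and m_t are simply per-step slope and offset with trend = k_t * t + m_t, and y_scale is a made-up value; this is not Prophet's internal representation, just a sketch of the arithmetic:

```python
import numpy as np

# Toy check of the offset fix (not Prophet internals): k_t/m_t are
# per-step slope and offset, trend = k_t * t + m_t, y_scale is made up.
y_scale = 100.0
t = np.array([0.0, 0.2, 0.4, 0.6, 0.8, 1.0])
k_t = np.full_like(t, -2.0)        # negative slope pushes the trend below 0
m_t = np.full_like(t, 1.0)
trend = k_t * t + m_t              # 1.0, 0.6, 0.2, -0.2, -0.6, -1.0

indx_neg = int(np.argmax(trend < 1 / y_scale))   # first point below the floor
k_t[indx_neg:] -= k_t[indx_neg]                  # slope 0 from indx_neg on
m_t[indx_neg:] -= m_t[indx_neg] - 1 / y_scale    # offset 1/y_scale, not 0
trend = k_t * t + m_t              # flat at 0.01 from indx_neg onward
```

With only `m_t[indx_neg:] -= m_t[indx_neg]`, the tail would sit at 0 instead of 1/y_scale, which is the sticky-at-zero behaviour discussed above.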
Hi all,
First of all, thanks for your work on this awesome repository!
I'm having some issues with an 'exploding' forecast that seems to be kind of inverted. I'm creating a Prophet model for 'Object A' that doesn't have a lot of data. Therefore, there's no seasonality found in the data and thus only a trend available. To compensate for this, I'm extracting the seasonality component from a model that was trained on 'Group A' to which 'Object A' belongs (and which has more data available). I'm then adding the seasonality output from the 'Group A' model as a regressor to the model of 'Object A'.
So to summarise: The seasonality of 'Group A' (to which 'Object A' belongs) is added as a regressor to 'Object A'. You can find a small schematic overview of this below.
You can see the prediction of 'Object A', together with the components, in the screenshot below.
(Left top graph is the trend of 'Object A'; left bottom graph is the regressor and thus the seasonality of 'Group A'; right graph is the prediction of 'Object A')
The issue here is that the prediction is kind of 'exploding' in the wrong direction. You can see that the extra regressor is going up around 2021-03/2021-04, but the actual prediction is going down. Around 2021-06, the regressor is going down again, while the prediction is actually going up.
I don't really understand how this is happening, as this way of working (extracting seasonality from one model and using it in another) works fine for all my other models. I suppose it has something to do with the negative trend, but I still don't feel like this is expected behaviour.
The regressor is added to the model using the following code:
The model itself is simply created with the following code:
Extra remarks
Prophet 0.7.1 together with Python 3.7
Thank you in advance for any input and/or solution!