Hi, can I please ask how you are handling the potential for look-ahead bias in the scaling of the features etc? This seems to be a common problem in timeseries prediction. I did search the docs but couldn't find any such information. Many thanks.
I should probably make this clearer in the docs and tutorial.
I think there are two answers:
Either you do not care too much (perhaps because your time series is long and stationary), and because you will in any case validate on a validation set, which should have no look-ahead bias.
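As a minimal sketch of the validation point above (not pytorch-forecasting's internals): if normalization statistics are fit on the training split only and merely reused on the validation split, the validation period cannot leak into the scaler. The variable names here are illustrative, not from the library.

```python
from statistics import mean, pstdev

# Toy series split chronologically: 80 training points, 20 validation points.
series = [float(i) for i in range(100)]
split = 80
train, val = series[:split], series[split:]

# Statistics come exclusively from the training portion,
# so there is no look-ahead into the validation period.
mu, sigma = mean(train), pstdev(train)

train_scaled = [(x - mu) / sigma for x in train]
val_scaled = [(x - mu) / sigma for x in val]  # reuse train stats only
```

If validation performance holds up under this scheme, any residual look-ahead bias in the training-set scaling did not translate into an optimistic estimate.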
Or you use the EncoderNormalizer, which scales each encoder sequence dynamically as you train.
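The idea behind per-encoder scaling can be sketched as follows. This is a simplified illustration of the concept, not EncoderNormalizer's actual implementation; the function name and the constant-window fallback are assumptions for this sketch.

```python
from statistics import mean, pstdev

def encoder_normalize(window):
    """Scale one sample using statistics from its own encoder window only.

    Because no statistic is computed from time steps beyond the encoder,
    there is no look-ahead bias by construction.
    """
    mu = mean(window)
    sigma = pstdev(window) or 1.0  # assumed fallback for constant windows
    return [(x - mu) / sigma for x in window], (mu, sigma)

# Encoder window of five past observations; predictions for the decoder
# horizon would later be de-scaled with the same (mu, sigma).
encoder = [1.0, 2.0, 3.0, 4.0, 5.0]
scaled, (mu, sigma) = encoder_normalize(encoder)
```

The trade-off is that each sample sees different scaling statistics, which can make training noisier on short or erratic windows.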
Generally, I would prefer the EncoderNormalizer. However, you might accept look-ahead bias if you have trouble finding a reasonably stable normalization, or if you expect normalization to be more stable at inference time. In the latter case, you ensure that you do not learn "weird" jumps that will not be present when running inference, and thus train on a more realistic dataset. I hope that rationale makes sense. Always happy to read a more rigorous analysis; I am sure some statistician has devoted a PhD to the topic.