Currently, add and sub work for both next and previous moves. This is fine even for moving regression, but some calculations simply do not support a previous move, e.g. MovingMedian. For such cases we should fall back to MoveAt() (state recreation), which will perform just like the non-online version.
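A minimal Python sketch of the fallback idea described above (all names here are illustrative, not the library's actual API): when an aggregate has no inverse operation, a forward move recreates the window state from scratch, which is exactly what a MoveAt()-style recreation would do.

```python
from statistics import median

class WindowCursor:
    """Maintains an aggregate over a fixed-width window of `source`.

    If the aggregate supports `sub` (removing an outgoing value from the
    state), moves are O(1) via add/sub; otherwise we fall back to
    recreating the state from the whole window, like the non-online version.
    """
    def __init__(self, source, width, add=None, sub=None, full=None):
        self.source = source
        self.width = width
        self.add = add    # fold an incoming value into the state
        self.sub = sub    # remove an outgoing value; None => unsupported
        self.full = full  # recompute the state from the whole window
        self.pos = width - 1
        self.state = full(source[:width])

    def move_next(self):
        self.pos += 1
        new = self.source[self.pos]
        old = self.source[self.pos - self.width]
        if self.add is not None and self.sub is not None:
            # Online path: O(1) state update.
            self.state = self.add(self.sub(self.state, old), new)
        else:
            # Fallback: full state recreation over the current window.
            window = self.source[self.pos - self.width + 1 : self.pos + 1]
            self.state = self.full(window)
        return self.state

data = [3.0, 1.0, 4.0, 1.0, 5.0, 9.0, 2.0]

# Moving sum has an inverse (sub), so every move is an O(1) update.
s = WindowCursor(data, 3,
                 add=lambda st, v: st + v,
                 sub=lambda st, v: st - v,
                 full=sum)

# Moving median cannot "un-add" a value: each move recomputes the window.
m = WindowCursor(data, 3, full=median)
```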
Also, the name of this cursor is weird; it should be changed.
Call it a WindowState cursor or something like that. We should add an option to cache values in a sorted deque (or a SortedDequeueMap).
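One way the sorted-cache option could look, sketched in Python (a hypothetical container, not the proposed SortedDequeueMap itself): keep the window's values in sorted order so order statistics such as the median are cheap after each move. The list-based version below has O(n) insertion/removal; a balanced tree or indexable skip list would bring that to O(log n).

```python
from bisect import insort, bisect_left

class SortedWindow:
    """Caches the current window's values in sorted order so that
    order statistics (median, quantiles) are cheap after each move."""

    def __init__(self, values):
        self.sorted = sorted(values)

    def add(self, v):
        # Insert the incoming value, keeping the cache sorted.
        insort(self.sorted, v)

    def remove(self, v):
        # Remove the outgoing value by binary search.
        del self.sorted[bisect_left(self.sorted, v)]

    def median(self):
        n = len(self.sorted)
        mid = n // 2
        if n % 2:
            return self.sorted[mid]
        return (self.sorted[mid - 1] + self.sorted[mid]) / 2
```

With such a cache, MovingMedian's next move becomes one add plus one remove instead of a full recomputation, though a previous move would still need the fallback.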
Later we should add an internal property to Series that estimates the complexity of MN/MP/MA moves, plus some very simple rule to chain and aggregate those complexities. In theory, for truly "online algorithms" we must always cache the entire state inside a cursor so that it can be updated completely from new values alone, without lagged cursors. In practice, however, lagged cursors save memory and are faster over containers, but they could evaluate twice for nested calculations.
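A sketch of what such a chaining rule could be, assuming ordinal cost classes per move kind (MN = move next, MP = move previous, MA = move at); the rule and the example costs are illustrative assumptions, not the library's design:

```python
# Illustrative ordinal cost classes, from cheapest to most expensive.
O1, LOG_N, N = 1, 2, 3

def chain_complexity(*cursors):
    """A chained cursor's move cost is dominated by its slowest part,
    so aggregate per-move costs with max()."""
    return {move: max(c[move] for c in cursors) for move in ("MN", "MP", "MA")}

# Hypothetical per-cursor estimates:
moving_sum = {"MN": O1, "MP": O1, "MA": N}
# MovingMedian's MP falls back to MoveAt-style state recreation.
moving_median = {"MN": LOG_N, "MP": N, "MA": N}
```

With estimates like these, a scheduler could pick add/sub moves when every cursor in the chain supports them cheaply and fall back to state recreation otherwise.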