
Implement Rep-Holdout #38

Closed
georgeblck opened this issue Jun 1, 2021 · 11 comments
@georgeblck

Thank you for this repository and the implemented CV methods, especially GapRollForward. I was looking for exactly this package.

I was wondering if you would be interested in implementing another CV method for time series, called Rep-Holdout. It is used in this evaluation paper (https://arxiv.org/abs/1905.11744) and performs well compared to all the other CV methods, some of which you have implemented here.

As I understand it, it is somewhat like sklearn.model_selection.TimeSeriesSplit, but with a randomized selection from all possible folds. Here is the description from the paper as an image:

[Image: Rep-Holdout description from the paper]


The authors provided code in R, but it is written very differently from how it would need to look in Python. I adapted your functions to implement it in Python, but I am not the best coder and it really only serves my purpose of tuning a specific model. Since Rep-Holdout performs well and, to me at least, makes sense for time series cross-validation, maybe you would be interested in adding this function to your package?
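For reference, here is a rough sketch of my understanding of the procedure (the function name, parameters, and seeding are my own guesses at the paper's description, not the authors' R code): draw random cut points inside the series and, for each one, take a fixed-size training block before the cut and a test block after it.

```python
import numpy as np

def rep_holdout(n, nreps, train_size, test_size, gap_size=0, seed=0):
    """Rep-Holdout sketch: random cut points, split around each one.

    Each repetition samples a cut point uniformly at random such that a
    training block of `train_size` fits before it and a gap plus a test
    block of `test_size` fit after it.
    """
    rng = np.random.default_rng(seed)
    indices = np.arange(n)
    for _ in range(nreps):
        # valid cuts leave room for the training block before
        # and the gap + test block after
        cut = rng.integers(train_size, n - gap_size - test_size + 1)
        train = indices[cut - train_size:cut]
        test = indices[cut + gap_size:cut + gap_size + test_size]
        yield train, test

for train, test in rep_holdout(n=30, nreps=5, train_size=4, test_size=1):
    print("train:", train, "test:", test)
```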

@WenjieZ
Owner

WenjieZ commented Jun 1, 2021

The so-called Rep-Holdout seems lame to me. Use the following code instead:

n = LENGTH_OF_DATA
m = NUMBER_OF_FOLDS
window = (a, b)  # the available window: earliest and latest cut points
cv = GapRollForward(min_train_size=a, min_test_size=n-b, roll_size=(b-a)//(m-1))

@WenjieZ
Owner

WenjieZ commented Jun 7, 2021

Oops! The last line should have been:

cv = GapRollForward(min_train_size=a, 
                    min_test_size=n-b, 
                    max_test_size=np.inf, 
                    roll_size=(b-a)//(m-1))

@georgeblck
Author

Thank you for your fast reply. My problem with GapRollForward is that, without adjusting/tuning the various input arguments, it led to overfitting when fine-tuning my time series forecasting task. E.g. I have 2.5 years of data, choose the first 2 years as min_train_size, and then do rolling cross-validation from that point onwards. Although this rolling prediction closely resembles the actual prediction procedure, when I use it as part of hyperparameter tuning for the underlying method, it naturally overfits to the data outside of min_train_size.

That is why I liked Rep-Holdout: used as part of a tuning procedure, it removes the bias induced by min_train_size and results in a more even cross-validation performance.

I'm not sure if I understand your suggestion correctly but I compared a small example here with the Rep-Holdout:

import numpy as np
from tscv import GapRollForward

# Number of Samples
n = 30
# Number of Folds
m = 5
# Windowsize
window = (1, 5)

cv = GapRollForward(min_train_size=window[0], 
                    min_test_size=n-window[1], 
                    max_test_size=np.inf, 
                    roll_size=(window[1]-window[0])//(m-1))
for train, test in cv.split(range(n)):
    print("train:", train, "test:", test)

train: [0] test: [ 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24
25 26 27 28 29]
train: [0 1] test: [ 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25
26 27 28 29]
train: [0 1 2] test: [ 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26
27 28 29]
train: [0 1 2 3] test: [ 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27
28 29]
train: [0 1 2 3 4] test: [ 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28
29]

and here is the output from my basic RepHoldout implementation:

cv = RepHoldout(nreps=5, train_size=4,
                test_size=1, gap_size=0)

for train, test in cv.split(range(30)):
    print("train:", train, "test:", test)

train: [23 24 25 26] test: [27]
train: [3 4 5 6] test: [7]
train: [5 6 7 8] test: [9]
train: [6 7 8 9] test: [10]
train: [14 15 16 17] test: [18]

@WenjieZ
Owner

WenjieZ commented Jun 8, 2021

That is what the max_train_size parameter is for.

# Number of Samples
n = 30
# Number of Folds
m = 5
# Windowsize
window = (5, 25)

cv = GapRollForward(min_train_size=window[0], 
                    min_test_size=n-window[1], 
                    max_train_size=4, 
                    roll_size=(window[1]-window[0])//(m-1))
for train, test in cv.split(range(n)):
    print("train:", train, "test:", test)

train: [1 2 3 4] test: [5]
train: [6 7 8 9] test: [10]
train: [11 12 13 14] test: [15]
train: [16 17 18 19] test: [20]
train: [21 22 23 24] test: [25]

@WenjieZ
Owner

WenjieZ commented Jun 8, 2021

A few comments:

  1. The variable window refers to the "available window" in your Rep-Holdout.
  2. It's a good idea to use balanced training and test sets across all folds of cross-validation. You are doing it right.

@georgeblck
Author

Thank you for your comments. I like your code and see that it is very similar to Rep-Holdout; just not randomized, and instead truly rolling forward.

@WenjieZ
Owner

WenjieZ commented Jun 20, 2021

Yeah, randomization is generally to be avoided when it is not an essential part of an algorithm. In particular, in your example, Rep-Holdout can be seen as simple random sampling with replacement, and my code as systematic sampling, which often results in lower variance and is thus preferred.
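The variance point can be illustrated with a toy simulation (entirely my own construction, not from the paper): estimate the mean of a trended series from m = 5 picks, either by simple random sampling with replacement or by systematic (evenly spaced) sampling.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 30, 5
y = np.arange(n, dtype=float)  # a trended "series" of fold scores

def random_mean():
    # simple random sampling with replacement (Rep-Holdout style)
    return y[rng.integers(0, n, size=m)].mean()

def systematic_mean():
    # systematic sampling: one random offset, then evenly spaced picks
    offset = rng.integers(0, n // m)
    return y[offset + (n // m) * np.arange(m)].mean()

# variance of the estimate over many repetitions
rand_var = np.var([random_mean() for _ in range(2000)])
syst_var = np.var([systematic_mean() for _ in range(2000)])
print(f"random: {rand_var:.2f}  systematic: {syst_var:.2f}")
```

For a trended series, the systematic picks cover the whole range every time, so the estimate varies far less between repetitions.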

@georgeblck
Author

Thank you for the insights!
What are your thoughts on the paper I linked to then and the performance of the different methods?

@fengchi863

Thanks for your link; I also read this paper yesterday. I have some questions. I think Rep-Holdout is not like your demo: it gives an available window to split on, so I think the train and test sets do not have a fixed length.

it may be this:

cv.split(range(10)):
train:[1,2,3,4] test:[5,6,7,8,9,10]
train:[1,2,3,4,5] test:[6,7,8,9,10]
train:[1,2,3] test:[4,5,6,7,8,9,10]
train:[1,2,3,4,5,6,7] test:[8,9,10]

@chasse20

chasse20 commented Apr 2, 2023

thanks for your link, I also read this paper yesterday. I have some questions. I think the rep-holdout is not like your demo. It gives an available window to split, so I think the train or test is not a fixed length.

it may be this:

cv.split(range(10)):
train:[1,2,3,4] test:[5,6,7,8,9,10]
train:[1,2,3,4,5] test:[6,7,8,9,10]
train:[1,2,3] test:[4,5,6,7,8,9,10]
train:[1,2,3,4,5,6,7] test:[8,9,10]

The method described in that paper is very ambiguous, but what you have seems most likely to be what the author meant. The last one WenjieZ posted was just a rolling-block method. I do agree with him, though, on forgoing randomness, and would rather just exhaust every cut as you have shown, from a training set of [1] up to [1,2,3,4,5,6,7,8,9], with the remainder as the test set each time.
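That exhaustive variant can be sketched in a few lines (the function name is mine, not from tscv): every cut point becomes one fold, with everything before it as training and everything after it as testing.

```python
def exhaustive_holdout(n):
    """Yield every possible cut of n samples: train before, test after."""
    indices = list(range(n))
    for cut in range(1, n):
        yield indices[:cut], indices[cut:]

for train, test in exhaustive_holdout(5):
    print("train:", train, "test:", test)
```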

@WenjieZ
Owner

WenjieZ commented Apr 2, 2023

thanks for your link, I also read this paper yesterday. I have some questions. I think the rep-holdout is not like your demo. It gives an available window to split, so I think the train or test is not a fixed length.

it may be this:

cv.split(range(10)):
train:[1,2,3,4] test:[5,6,7,8,9,10]
train:[1,2,3,4,5] test:[6,7,8,9,10]
train:[1,2,3] test:[4,5,6,7,8,9,10]
train:[1,2,3,4,5,6,7] test:[8,9,10]

Increase the max_test_size parameter (which defaults to 1) if this split is what you are looking for.

@WenjieZ WenjieZ closed this as completed Apr 11, 2023