
Different WAPE compared to paper #3

Open
baghishani opened this issue Apr 23, 2024 · 6 comments
@baghishani

Dear HumaticsLAB,

First of all, thanks for sharing your code.
I ran the model with all default parameters, except that I used 6 weeks instead of 12, and I chose the "cross" model. The WAPE I got is 53.46, while the WAPE reported in the GTM paper for the Cross model with Gtrends is 59.0.

I used the WAPE formula from your code (the highlighted row in the picture below).
[image: GTM results table]

For the last four rows of this table, I got the same values by running the models; only Cross with Gtrends is different.
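For anyone comparing numbers: WAPE is commonly computed as the sum of absolute errors divided by the sum of the actuals, times 100. A minimal sketch of that formula (an illustration of the standard definition, not necessarily byte-identical to the repository's implementation) using NumPy:

```python
import numpy as np

def wape(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """Weighted Absolute Percentage Error, in percent:
    100 * sum(|y_true - y_pred|) / sum(|y_true|)."""
    return 100.0 * np.abs(y_true - y_pred).sum() / np.abs(y_true).sum()

# Toy example: errors are 2 + 2 + 3 = 7, actuals sum to 60.
y_true = np.array([10.0, 20.0, 30.0])
y_pred = np.array([12.0, 18.0, 27.0])
print(round(wape(y_true, y_pred), 2))  # 11.67
```

Differences in aggregation (per-item vs. over all items and weeks at once) can shift the final score, so it is worth checking which axis the sums run over.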

Could you please help me? I need to reproduce the same scores as yours, since I use them as baselines in my comparison.

Bests,
Maryam

@joppichristian
Contributor

joppichristian commented Apr 23, 2024

Hi @baghishani,
thanks for your interest in our code.
Did you train the model on 12 weeks or 6? It isn't clear from your message. And the evaluation: 6 or 12?

@joppichristian joppichristian self-assigned this Apr 23, 2024
@baghishani
Author

Hi @joppichristian, thanks for your response. Both training and evaluation are on 6 weeks.

@joppichristian
Contributor

We trained the model on 12 weeks and tested on 6.

@baghishani
Author

Thanks a lot. I will try that :)

@baghishani
Author

Dear @joppichristian, I trained the model on 12 weeks and then tested it on [12,8,6,4] weeks, and here are my results:
Test_Weeks   WAPE    MAE     TS
12-week      58.20   30.06   1.16
8-week       55.90   30.71   0.49
6-week       54.97   30.03   0.42
4-week       55.14   29.22   0.44
I thought maybe other parameters were different. Here are my training parameters:
NUM_EPOCHS = 200
USE_TEACHERFORCING = True
TF_RATE = 0.5
LEARNING_RATE = 0.0001
BATCH_SIZE = 128
NUM_WORKERS = 8
USE_EXOG = True
EXOG_NUM = 3
EXOG_LEN = 52
HIDDEN_SIZE = 300
NORM = False
model_types = ["image", "concat", "residual", "cross"]
MODEL = 3

Do you see any difference between my training setup and yours that would explain the different results?
Thanks a lot in advance.

@joppichristian
Contributor

There might be some differences in the parameters and in the initialization of the network (which depends on the hardware and other variables). I am not sure about the parameters; they should be the defaults reported in the scripts.
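On the initialization point: run-to-run variation can be reduced by fixing the random seeds before building and training the model. A framework-agnostic sketch of the idea, shown with NumPy (in PyTorch the analogous call would be `torch.manual_seed`, plus `torch.cuda.manual_seed_all` for GPU runs; exact determinism across different hardware is still not guaranteed):

```python
import numpy as np

def init_weights(seed: int, shape=(4, 4)) -> np.ndarray:
    # Fixing the RNG seed makes the random weight initialization
    # reproducible across runs on the same machine.
    rng = np.random.default_rng(seed)
    return rng.standard_normal(shape)

a = init_weights(42)
b = init_weights(42)  # same seed -> identical weights
c = init_weights(7)   # different seed -> different weights
print(np.allclose(a, b), np.allclose(a, c))  # True False
```

Even with seeding, scores a point or two apart between independently trained runs are common for this kind of model, which may account for part of the gap.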
