
The performance reported in the paper is not reproduced. #13

Open
KanghoonYoon opened this issue Jun 10, 2022 · 8 comments

Comments

@KanghoonYoon

Hi, I tried to reproduce the Transformer Hawkes Process on StackOverflow fold1. However, the accuracy and RMSE results are as below.

![Screen capture 2022-06-10 153013](https://user-images.githubusercontent.com/56212725/173004512-ba357b4d-244a-4f73-9ca1-9d9535f3f1df.png)

I think I am missing something. Compared to the released code of the Self-Attentive Hawkes Process, I think it is not because of the scaling factor. What makes the difference between the paper and this repository?
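For context on why a scaling factor could matter here: RMSE is linear in the time unit, so if one implementation trains on inter-event times normalized by some scale (e.g. their mean) and another reports raw units, the reported RMSEs differ by exactly that factor. A minimal sketch with synthetic inter-event times (hypothetical data, not from either repository):

```python
import numpy as np

def rmse(pred, target):
    """Root-mean-square error between predicted and true inter-event times."""
    return np.sqrt(np.mean((pred - target) ** 2))

# synthetic inter-event times in raw units, purely for illustration
rng = np.random.default_rng(0)
target = rng.exponential(scale=100.0, size=1000)
pred = target + rng.normal(scale=10.0, size=1000)

scale = target.mean()
raw_rmse = rmse(pred, target)
scaled_rmse = rmse(pred / scale, target / scale)
# scaled_rmse == raw_rmse / scale: RMSE numbers are only comparable
# when both implementations use the same time unit.
```

So a two-orders-of-magnitude gap in RMSE could in principle come from normalization alone, which is why the comment above rules it out explicitly before looking for other causes.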

@TendaG0

TendaG0 commented Oct 9, 2022

I also ran into this question. I hope someone can help me, thanks a lot!

@pritamqu

I encountered the same!

[ Epoch 100 ]
  - (Training)    loglikelihood: -0.67653, accuracy:  0.43488, RMSE:  106.19156, elapse: 0.438 min
  - (Testing)     loglikelihood: -0.65685, accuracy:  0.42897, RMSE:  107.70097, elapse: 0.045 min
  - [Info] Maximum ll: -0.65546, Maximum accuracy:  0.42900, Minimum RMSE:  107.18501

@TendaG0

TendaG0 commented Nov 2, 2022

Can anyone explain this, please?

@waystogetthere

waystogetthere commented Nov 22, 2022

I am still running the code; replying just to show that someone still cares about this work.

Plus: it looks like there are 2 errors in the log-likelihood function. Please refer to these 2 issues:

  1. Ambiguity in calculating log likelihood #14 (comment)

  2. Should event likelihood be computed using current or last hidden state? #10 (comment)

So for now I think the output does not make any sense.
I have fixed them and currently running the code.
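To make the second error concrete: the likelihood of event t must condition only on the history *before* t, so the intensity has to come from the hidden state after event t-1, not the state that already encodes event t. A minimal NumPy sketch of that indexing, assuming a layout where `intensities[t]` is derived from the hidden state after events 0..t (function and variable names here are hypothetical, not from the repository):

```python
import numpy as np

def event_loglik(intensities, event_types):
    """Sum of log-intensities of the observed event types.

    intensities : (T, K) array; intensities[t] holds the per-type
                  intensity derived from the hidden state after
                  observing events 0..t.
    event_types : (T,) int array of observed event type indices.

    Event t is paired with the intensity from state t-1, so the model
    never peeks at the event it is scoring; the first event has no
    history and is dropped here for simplicity.
    """
    lam = intensities[:-1]    # states after events 0..T-2
    types = event_types[1:]   # events 1..T-1
    return float(np.log(lam[np.arange(len(types)), types]).sum())
```

Using the state that already includes event t would inflate the log-likelihood, which is consistent with the nonsensical numbers reported above.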

@waystogetthere

I have revised the 2 errors, and this is the result for the StackOverflow dataset (data_so):
[image: reproduced results on StackOverflow]

And this is the reported result:
[images: results tables reported in the paper]

Please refer to the 'SO' and 'StackOverflow' columns.

Only ACC matches...

@waystogetthere

I will upload my revised code soon. Any discussion is welcome!

@KanghoonYoon
Author

I think it is obvious that the Transformer Hawkes Process is superior to the Neural Hawkes Process and RMP. However, I think this research area does not yet have strict criteria for evaluating models (e.g., only the accuracy measure is reproduced). If there is something I missed, it would be better for the author to revise the code and demonstrate reproducibility, since I am not the only one with concerns about this work.

@Waves-forward

> I will upload my revised code soon. Any discussion is welcome!

Hi, I am also trying to reproduce the results of this paper (especially for the MIMIC dataset). My event accuracy is similar, but my RMSE is a bit higher. Have you been able to get results similar to the original paper?


5 participants