The performance reported in the paper is not reproduced. #13
Comments
I also ran into this problem. I hope someone can help me, thanks a lot!
I encountered the same issue!
Can anyone explain this to me, please?
I am still running the code; I am replying just to show that someone still cares about this work. Also, it looks like there are 2 errors in the log-likelihood function. Please refer to these 2 issues. So for now I do not think the output makes sense.
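For reference, here is a minimal sketch of the standard temporal point process log-likelihood that the discussion is about, with the non-event (compensator) integral estimated by Monte Carlo sampling. This is not the repository's actual code; `intensity`, `event_times`, and `T` are placeholders, and the MC term is where sign or scaling mistakes would typically show up.

```python
# Sketch of  log L = sum_i log lambda(t_i) - integral_0^T lambda(t) dt
# for a generic intensity function on the observation window [0, T].
# NOT the repository's implementation; names and the toy intensity are assumptions.

import torch

def log_likelihood(intensity, event_times, T, num_mc_samples=1000):
    """Return sum_i log lambda(t_i) minus a Monte Carlo estimate of the integral."""
    # Event term: log-intensity evaluated at each observed event time.
    event_term = torch.log(intensity(event_times)).sum()

    # Non-event term: sample times uniformly on [0, T];
    # the integral estimate is T * mean(lambda(u)).
    u = torch.rand(num_mc_samples) * T
    non_event_term = T * intensity(u).mean()

    return event_term - non_event_term

# Toy check with a constant (Poisson) intensity, just to sanity-check the signs:
if __name__ == "__main__":
    rate = 0.5
    events = torch.tensor([0.7, 1.9, 3.2, 4.4])
    ll = log_likelihood(lambda t: torch.full_like(t, rate), events, T=5.0)
    print(ll.item())  # should be close to 4 * log(0.5) - 0.5 * 5
```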
I will upload my revised code soon. Any discussion is welcome!
I think it is clear that the Transformer Hawkes process outperforms the Neural Hawkes process and RMP. However, I think the research area does not yet have a strict criterion for evaluating these models (e.g., only the accuracy measure is reproduced). If there is something I missed, it would still be better for the author to revise the code and demonstrate reproducibility, since I am not the only one with concerns about this work.
Hi, I am also trying to reproduce the results of this paper (especially for the MIMIC dataset). My event accuracy is similar, but my RMSE is a bit higher. Have you been able to get results similar to the original paper?
Hi, I tried to reproduce the Transformer Hawkes process on StackOverflow fold 1. However, the accuracy and RMSE results are as below.
![image](https://user-images.githubusercontent.com/56212725/173004512-ba357b4d-244a-4f73-9ca1-9d9535f3f1df.png)
I think I am missing something. Compared to the released code of the Self-Attentive Hawkes Process, I do not think the gap is caused by the scaling factor. What makes the difference between the paper and this repository?
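In case it helps the comparison, here is a minimal sketch (assuming hypothetical prediction and target arrays, not the repository's evaluation code) of how next-event-type accuracy and next-event-time RMSE are commonly computed. Small differences here, such as whether padded events are masked out or whether a scaling factor is applied to the times, can easily change the reported numbers.

```python
# Sketch of event-type accuracy and inter-event-time RMSE over non-padded
# positions. Array names and shapes are assumptions for illustration only.

import numpy as np

def event_accuracy(pred_types, true_types, mask):
    """Fraction of correctly predicted event types over non-padded positions."""
    correct = (pred_types == true_types) & mask
    return correct.sum() / mask.sum()

def time_rmse(pred_times, true_times, mask):
    """Root mean squared error of predicted times over non-padded positions."""
    sq_err = ((pred_times - true_times) ** 2) * mask
    return np.sqrt(sq_err.sum() / mask.sum())

# Toy usage:
if __name__ == "__main__":
    mask = np.array([True, True, True, False])  # last position is padding
    acc = event_accuracy(np.array([0, 1, 2, 0]), np.array([0, 1, 1, 0]), mask)
    rmse = time_rmse(np.array([0.5, 1.2, 0.9, 0.0]),
                     np.array([0.4, 1.0, 1.1, 0.0]), mask)
    print(acc, rmse)
```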