Released code with default parameters reproduces results lower than the published ones #21
Comments
Yes, I tried it a few times before I released the code. My results were quite close to the published results, with an approximate 0.2% std. There might be some subtle gap between our running environments that led to the difference. The following is a log that you might compare against yours, especially the loss values:
python version : 3.6.4 |Anaconda, Inc.| (default, Jan 16 2018, 18:10:19) [GCC 7.2.0]
------------------------------------------------------- options --------------------------------------------------------
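For reference, here is a minimal sketch of how an environment header like the one in this log could be printed so two runs can be diffed; the option names (`dataset`, `lr`) are hypothetical placeholders, not the released code's actual options:

```python
import sys

def print_env_header(options):
    # Mirror the "python version : ..." line seen in the log above.
    print('python version : %s' % sys.version.replace('\n', ' '))
    print('-' * 55 + ' options ' + '-' * 56)
    for key, value in sorted(options.items()):
        print('%-30s : %s' % (key, value))

print_env_header({'dataset': 'market', 'lr': 0.1})
```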
Here is my log. I print the test results every training epoch to monitor the model.
python version : 3.6.8 |Anaconda, Inc.| (default, Dec 30 2018, 01:22:34) [GCC 7.3.0]
------------------------------------------------------- options --------------------------------------------------------
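A minimal sketch of that per-epoch monitoring loop, assuming `train_one_epoch` and `evaluate` are hypothetical placeholders for the project's own training and test routines, not its actual API:

```python
def monitor(num_epochs, train_one_epoch, evaluate):
    """Print test metrics after every training epoch, tracking the best."""
    best_rank1, best_map = 0.0, 0.0
    for epoch in range(num_epochs):
        train_one_epoch()                 # one pass over the training set
        rank1, mAP = evaluate()           # test on the query/gallery split
        best_rank1 = max(best_rank1, rank1)
        best_map = max(best_map, mAP)
        print('epoch %3d | rank-1 %.1f (best %.1f) | mAP %.1f (best %.1f)'
              % (epoch, rank1, best_rank1, mAP, best_map))
```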
Yeah, probably due to dataset shift. Anyway, a difference of less than 1.5%/1% in Rank-1/mAP may be expected, since statistical variation can lead to a shift in the optimal parameters.
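One way to check whether a gap of this size falls within run-to-run variation is to repeat training over several random seeds and look at the spread; a minimal sketch, where `run_experiment` is a hypothetical placeholder that trains and evaluates the model once for a given seed and returns its Rank-1 score:

```python
import statistics

def summarize_runs(run_experiment, seeds=(0, 1, 2, 3, 4)):
    # Collect one Rank-1 score per seed and report mean +/- sample std.
    rank1_scores = [run_experiment(seed) for seed in seeds]
    mean = statistics.mean(rank1_scores)
    std = statistics.stdev(rank1_scores)
    print('rank-1: %.2f +/- %.2f over %d seeds' % (mean, std, len(seeds)))
    return mean, std
```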
Got it, thanks.
Thanks for your excellent work and kind code release. This work is elegant and inspires my future study.
However, when I run your released code with the default parameters on the Market dataset, the Rank-1 and mAP are slightly lower than the published ones. The Rank-1 is 65.2 (67.7 in the paper) and the mAP is 38.8 (40.0 in the paper) when the model converges.
Any suggestions for this mismatch? Thanks for your kind reply.