Hello, thank you for such a great project. I have a question about Table 1 of the paper:
Are the APE and AVE results in Table 1 averages over multiple evaluation runs? If so, how many experiments did you run?
I ask because I changed the random seed and trained the model several times, but the results were never as good as those reported in the paper, even after averaging over multiple experiments.
The numbers in Table 1 are not an average over multiple evaluation runs; they correspond to a single random generation, and I ran only one experiment. In Table 2, I generate 10 times and report either the average (avg) or the best.
The training of such models is not 100% deterministic, so it is normal for results to differ.
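The Table 2 protocol described above (generate several times, then report the average or the best score) can be sketched as follows. This is a minimal illustration, not the paper's actual evaluation code: `evaluate_fn` is a hypothetical callable standing in for one stochastic generation-plus-evaluation run, and it assumes a metric such as APE where lower is better.

```python
import random

def evaluate_generations(evaluate_fn, n_runs=10, seed=0):
    """Run a stochastic evaluation n_runs times and report avg and best.

    evaluate_fn is a hypothetical callable returning a single metric
    value where lower is better (e.g. APE); use max() instead of min()
    for a higher-is-better metric.
    """
    scores = []
    for i in range(n_runs):
        random.seed(seed + i)  # vary the seed for each generation
        scores.append(evaluate_fn())
    return {"avg": sum(scores) / len(scores), "best": min(scores)}
```

Reporting both avg and best over, say, 10 generations gives a fairer picture of a stochastic model than a single run, which explains why one training run with a different seed may fall short of a paper's reported numbers.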