Hi, thanks for your great work. I have a question about random seeds.
First, I ran with seed=1 and the default hyperparameters, and LADE scored 52.2, which is lower than the 53 reported in the original paper. I'm not sure what accounts for the difference.
Then, I ran with seed=2 and the default hyperparameters, and LADE scored 48.9. Does that mean the method is unstable? Thanks very much for any reply.
All of our experiments fix seed=1. We're not entirely sure why you're seeing a difference, but we suspect it may be due to different machine settings. We ran our experiments on AWS p3.16xlarge instances, so you may want to try that setting too.
Since the ImageNet-LT dataset is quite large and training takes a while, we haven't had time to run multiple seeds to check the stability of our method. We agree that the method can be somewhat unstable in some cases, since it uses a Monte Carlo approximation to compute the loss. Managing that instability could be an intriguing direction for future work.
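To illustrate why a Monte Carlo-approximated loss can give seed-dependent results (a generic toy sketch, not LADE's actual loss or code): each seed draws a different sample set, so the estimate itself varies from run to run even though its expectation is fixed.

```python
import random

def mc_estimate(seed, n_samples=1000):
    """Monte Carlo estimate of E[x^2] for x ~ Uniform(0, 1); true value is 1/3."""
    rng = random.Random(seed)
    return sum(rng.random() ** 2 for _ in range(n_samples)) / n_samples

est1 = mc_estimate(seed=1)
est2 = mc_estimate(seed=2)
# Both estimates land near 1/3, but they differ from each other because
# each seed draws different samples -- the same source of run-to-run
# variance as a seed-dependent Monte Carlo term inside a training loss.
```

Increasing the number of samples shrinks this variance (at O(1/sqrt(n)) rate), which is one generic way such instability is usually managed.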