reproducing your result #13

Open

NoaGarnett opened this issue Jan 11, 2022 · 4 comments
@NoaGarnett

Thanks for the simple and elegant implementation!
I tried running your code as is on the Multi-MNIST data and failed to reproduce the reported results.
I ran main_multi_mnist.py without changing any hyperparameters (learning rate 0.0005, batch size 256, 100 epochs). For comparison, I created a version without PCGrad (see the sketch after this list):
1. comment out line 57: optimizer = PCGrad(optimizer)
2. replace line 72: optimizer.pc_backward(losses) -> torch.sum(torch.stack(losses)).backward()
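For reference, a minimal sketch of the two training steps I compared. The model, data unpacking, and criterion names are placeholders, not the exact contents of main_multi_mnist.py, and I assume PCGrad forwards zero_grad()/step() to the wrapped optimizer as in this repo:

```python
import torch

def step_without_pcgrad(model, optimizer, images, labels_l, labels_r, criterion):
    """Baseline variant: plain summed-loss backward."""
    optimizer.zero_grad()
    out_l, out_r = model(images)                       # two-headed Multi-MNIST model (placeholder)
    losses = [criterion(out_l, labels_l), criterion(out_r, labels_r)]
    torch.sum(torch.stack(losses)).backward()          # replacement for pc_backward
    optimizer.step()

def step_with_pcgrad(model, pc_optimizer, images, labels_l, labels_r, criterion):
    """PCGrad variant: pc_optimizer = PCGrad(optimizer) wraps the base optimizer."""
    pc_optimizer.zero_grad()
    out_l, out_r = model(images)
    losses = [criterion(out_l, labels_l), criterion(out_r, labels_r)]
    pc_optimizer.pc_backward(losses)                   # per-task backward with gradient-conflict projection
    pc_optimizer.step()
```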

I ran each version 7 times. My results (averaging left-digit and right-digit accuracy, then summarizing over runs as sketched below) were:

Without PCGrad: average accuracy 89.5%, max accuracy 89.9%, standard deviation 0.38
With PCGrad: average accuracy 89.5%, max accuracy 89.8%, standard deviation 0.20
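Roughly how the per-run numbers were reduced to that summary; a hypothetical helper, not the actual evaluation script, and the inputs are placeholders rather than my measured values:

```python
import statistics

def summarize(per_run_left_right):
    """per_run_left_right: list of (left_acc, right_acc) pairs, one per run."""
    per_run = [(left + right) / 2 for left, right in per_run_left_right]   # average the two heads per run
    return statistics.mean(per_run), max(per_run), statistics.stdev(per_run)
```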

Can you come up with an explanation?
Many thanks,
Noa Garnett

@Lizhaoqing123


My result is the same as yours. Did you finally solve this problem?

@NoaGarnett

No, I never did. Just moved on...

@yangmin666

Have you solved it? I have the same issue.

@NoaGarnett
Author

NoaGarnett commented Aug 23, 2023 via email
