reproducing your result #13
My result is the same as yours. Did you ever solve this problem?

No, never did. Just moved on...
Have you solved it? I have the same issue. |
No, gave it up long ago.
Thanks for the simple and elegant implementation!
I tried running your code as-is on the Multi-MNIST data and failed to reproduce the results.
I ran main_multi_mnist.py without changing any hyperparameters (learning rate 0.0005, batch size 256, 100 epochs). For comparison, I created a version with no PCGrad:
1. Comment out line 57: `optimizer = PCGrad(optimizer)`
2. Replace line 72: `optimizer.pc_backward(losses)` → `torch.sum(torch.stack(losses)).backward()`
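For context on what the first change removes: the core of PCGrad is a gradient-surgery step that, for each task gradient, subtracts the component that conflicts with (has negative dot product against) another task's gradient. Below is a minimal illustrative NumPy sketch of that projection, not the repo's implementation; the function name `pcgrad_project` is mine.

```python
import numpy as np

def pcgrad_project(grads, seed=0):
    """Illustrative PCGrad projection (Yu et al., 2020).

    For each task gradient g_i, iterate over the other task gradients
    in random order; whenever g_i conflicts with g_j (dot product < 0),
    remove g_i's component along g_j. Returns the sum of the projected
    gradients, i.e. the update direction applied to the shared weights.
    """
    rng = np.random.default_rng(seed)
    projected = []
    for i, g in enumerate(grads):
        g = g.astype(float).copy()
        # Project against the *original* gradients of the other tasks.
        for j in rng.permutation([j for j in range(len(grads)) if j != i]):
            gj = grads[j]
            dot = g @ gj
            if dot < 0:  # conflicting direction: project onto gj's normal plane
                g -= dot / (gj @ gj) * gj
        projected.append(g)
    return np.sum(projected, axis=0)

# Two conflicting task gradients: their dot product is negative,
# so both get projected before being summed.
update = pcgrad_project([np.array([1.0, 0.0]), np.array([-1.0, 1.0])])
```

With conflict-free gradients the projection is a no-op and the result equals plain summed-gradient descent, which is one reason the two training variants can land on similar accuracies when conflicts are rare or mild.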
I ran each version 7 times. My results (averaging left-digit and right-digit accuracy) are:

| | Average accuracy | Max accuracy | Std. dev. |
|---|---|---|---|
| Without PCGrad | 89.5% | 89.9% | 0.38 |
| With PCGrad | 89.5% | 89.8% | 0.20 |
Can you come up with an explanation?
Many thanks,
Noa Garnett