Log gain of all examples instead of unsuccessful examples. #201

Open
wants to merge 1 commit into main
Conversation

@mzweilin (Contributor) commented Jul 14, 2023

What does this PR do?

This PR makes Adversary log the gain of all examples, instead of only the gain of the unsuccessful examples.

With this change, the gain shown on the progress bar should increase when the attack is working.
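
A minimal sketch of the change in what gets logged, assuming a per-example gain tensor and a boolean mask of still-unsuccessful examples (the names below are hypothetical, not MART's actual Adversary API):

```python
import torch


def progress_bar_gain(gain: torch.Tensor, unsuccessful: torch.Tensor) -> torch.Tensor:
    """Scalar gain to show on the progress bar for one batch.

    gain:         per-example gain (the quantity the attack maximizes)
    unsuccessful: True for examples that are not yet adversarial
    """
    # Previous behavior (sketch): average only the still-unsuccessful examples.
    # gain_to_log = gain[unsuccessful].mean()

    # This PR (sketch): average over every example in the batch.
    gain_to_log = gain.mean()
    return gain_to_log
```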

Type of change

Please check all relevant options.

  • Improvement (non-breaking)
  • Bug fix (non-breaking)
  • New feature (non-breaking)
  • Breaking change (fix or feature that would cause existing functionality to not work as expected)
  • This change requires a documentation update

Testing

Please describe the tests that you ran to verify your changes. Consider listing any relevant details of your test configuration.

  • pytest
  • CUDA_VISIBLE_DEVICES=0 python -m mart experiment=CIFAR10_CNN_Adv trainer=gpu trainer.precision=16 reports 70% (21 sec/epoch).
  • CUDA_VISIBLE_DEVICES=0,1 python -m mart experiment=CIFAR10_CNN_Adv trainer=ddp trainer.precision=16 trainer.devices=2 model.optimizer.lr=0.2 trainer.max_steps=2925 datamodule.ims_per_batch=256 datamodule.world_size=2 reports 70% (14 sec/epoch).

Before submitting

  • The title is self-explanatory and the description concisely explains the PR
  • My PR does only one thing, instead of bundling different changes together
  • I list all the breaking changes introduced by this pull request
  • I have commented my code
  • I have added tests that prove my fix is effective or that my feature works
  • New and existing unit tests pass locally with my changes
  • I have run pre-commit hooks with pre-commit run -a command without errors

Did you have fun?

Make sure you had fun coding 🙃

@mzweilin marked this pull request as ready for review on July 14, 2023 18:06
@mzweilin requested a review from @dxoigmn on July 14, 2023 18:32
@dxoigmn (Contributor) left a comment

Why is this necessary? Usually you want to log the actual loss you compute gradients of?

@mzweilin (Contributor, Author)

Why is this necessary? Usually you want to log the actual loss you compute gradients of?

The trend was confusing before this change: although we try to maximize the gain, the number on the progress bar goes down because successful examples are gradually excluded from the average.

@dxoigmn (Contributor) commented Jul 15, 2023

Why is this necessary? Usually you want to log the actual loss you compute gradients of?

The trend was confusing before this change: although we try to maximize the gain, the number on the progress bar goes down because successful examples are gradually excluded from the average.

What is confusing about the trend? That it is possible for the loss to go up? But that should be expected if you understand that the loss is only computed on some examples. Perhaps what you want to do instead is zero out the loss for the examples that are already adversarial, or take the sum? You don't get the (potential?) speed-up benefit that way, though. I would note that the only thing that changes under the zero-out option is the normalization constant (i.e., the number of samples you divide by when averaging the loss is fixed instead of changing).
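
A small sketch of the three options under discussion, using hypothetical tensors rather than MART's code, to show why the unsuccessful-only average can fall even as the attack succeeds and what a fixed normalization constant does:

```python
import torch

# Hypothetical per-example gain for a batch of 4, and a mask marking which
# examples are still unsuccessful (not yet adversarial).
gain = torch.tensor([0.9, 0.7, 0.2, 0.1])
unsuccessful = torch.tensor([False, False, True, True])

# Average over unsuccessful examples only: as more examples become adversarial
# and drop out of the average, this value can shrink even though the attack is
# working, which is the confusing trend described above.
gain_unsuccessful = gain[unsuccessful].mean()                  # (0.2 + 0.1) / 2 = 0.15

# Zero out already-adversarial examples but keep the normalization constant
# fixed at the batch size, as suggested in this comment.
gain_fixed_norm = (gain * unsuccessful).sum() / gain.numel()   # 0.3 / 4 = 0.075

# What this PR logs: the gain averaged over all examples.
gain_all = gain.mean()                                         # 1.9 / 4 = 0.475
```

Under these assumptions, the zero-out option differs from the unsuccessful-only average only in the fixed denominator, which is the normalization-constant point made above.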
