
Change datamodule input to [0,1] scale #203

Open · mzweilin wants to merge 103 commits into main

Conversation

@mzweilin (Contributor) commented Jul 15, 2023

What does this PR do?

This PR makes the datamodule produce input in the [0,1] scale, so that it is compatible with many other existing tools.

  • Remove the [0,1]->[0,255] transforms in Datamodule.
  • Add transform and untransform in Adversary so that epsilon can still be specified as an integer in [0,255] pixel units (see the sketch after this list).
  • Fix or remove the preprocessors in models that transformed input from [0,255].
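
A minimal sketch of the transform/untransform idea, assuming the Adversary converts the batch into [0,255] pixel units before perturbing and back afterwards. The class and method names here are illustrative, not the actual MART API:

```python
import torch

# Illustrative only: the real MART Adversary and its configs differ.
# The datamodule now yields [0, 1] floats, but the attack still reasons
# in 8-bit pixel units, so an integer budget like eps=8 keeps its meaning.
class IntegerEpsilonAdversary:
    def __init__(self, eps: int = 8):
        self.eps = float(eps)  # L-inf budget in [0, 255] pixel units

    def transform(self, x: torch.Tensor) -> torch.Tensor:
        # [0, 1] -> [0, 255] before perturbing.
        return x * 255.0

    def untransform(self, x: torch.Tensor) -> torch.Tensor:
        # [0, 255] -> [0, 1] before handing the batch back to the model.
        return x / 255.0

    def perturb(self, x01: torch.Tensor) -> torch.Tensor:
        x255 = self.transform(x01)
        # Stand-in for the real attack loop: one random L-inf step.
        delta = torch.empty_like(x255).uniform_(-self.eps, self.eps)
        return self.untransform((x255 + delta).clamp_(0.0, 255.0))
```

With a wrapper like this, an existing budget such as eps=8 (out of 255) does not have to be rewritten as 8/255 everywhere once the datamodule switches to the [0,1] scale.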

Type of change

Please check all relevant options.

  • Improvement (non-breaking)
  • Bug fix (non-breaking)
  • New feature (non-breaking)
  • Breaking change (fix or feature that would cause existing functionality to not work as expected)
  • This change requires a documentation update

Testing

Please describe the tests that you ran to verify your changes. Consider listing any relevant details of your test configuration.

  • pytest
  • CUDA_VISIBLE_DEVICES=0 python -m mart experiment=CIFAR10_CNN_Adv trainer=gpu trainer.precision=16 reports 70% (21 sec/epoch).
  • CUDA_VISIBLE_DEVICES=0,1 python -m mart experiment=CIFAR10_CNN_Adv trainer=ddp trainer.precision=16 trainer.devices=2 model.optimizer.lr=0.2 trainer.max_steps=2925 datamodule.ims_per_batch=256 datamodule.world_size=2 reports 70% (14 sec/epoch).

Before submitting

  • The title is self-explanatory and the description concisely explains the PR
  • My PR does only one thing, instead of bundling different changes together
  • I list all the breaking changes introduced by this pull request
  • I have commented my code
  • I have added tests that prove my fix is effective or that my feature works
  • New and existing unit tests pass locally with my changes
  • I have run pre-commit hooks with the pre-commit run -a command without errors

Did you have fun?

Make sure you had fun coding 🙃

@mzweilin mzweilin changed the base branch from generalized_adversary to add_adv_batch_converter August 28, 2023 19:19
Base automatically changed from add_adv_batch_converter to main August 31, 2023 19:17
@mzweilin mzweilin requested a review from dxoigmn August 31, 2023 21:05
Contributor: Probably should add 8bit to filename somewhere.

Author: Renamed to times_255_and_round.yaml in 2b0ef2b

Contributor: Rename to normalize or something like that?

Author: Renamed to divided_by_255.yaml in 2b0ef2b
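
For context, the two filenames describe inverse element-wise operations; a minimal guess at what the underlying transforms compute, inferred from the names rather than read from the configs:

```python
import torch

def times_255_and_round(x: torch.Tensor) -> torch.Tensor:
    # [0, 1] float -> [0, 255] with integer-valued pixels.
    return (x * 255.0).round()

def divided_by_255(x: torch.Tensor) -> torch.Tensor:
    # [0, 255] -> [0, 1] float.
    return x / 255.0
```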

Contributor: Why is this named data_coco and why is this in attack?

Author: Moved to configs/batch_c15n in d09524a

Author: Renamed to input_tuple_float01.yaml in 2b0ef2b

Contributor: Why is this in attack?

Author: Moved to configs/batch_c15n in d09524a

Author: Renamed to input_tensor_float01.yaml in 2b0ef2b
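
Putting the thread together, the renamed files presumably all land under the new config group; an assumed layout, not verified against the branch:

```
configs/
  batch_c15n/
    times_255_and_round.yaml
    divided_by_255.yaml
    input_tuple_float01.yaml
    input_tensor_float01.yaml
```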

@mzweilin mzweilin changed the base branch from main to mzweilin/upgrade_pre-commit_flake8 January 8, 2024 18:56
Base automatically changed from mzweilin/upgrade_pre-commit_flake8 to main January 8, 2024 21:25
Author: @dxoigmn This PR is ready to review again. It should simplify input transforms in our existing datamodules.
