Adapting to the normalization of [-1, 1] #16

Closed
cxmscb opened this issue Aug 14, 2020 · 2 comments


cxmscb commented Aug 14, 2020

Hi, I found that the normalization used for the images is [0, 1]. If the images are instead normalized to [-1, 1], how should I revise the attack code?


fra31 (Owner) commented Aug 14, 2020

Hi,

I think the easiest solution (assuming PyTorch models) is to use images in [0, 1] and create a wrapper for the model like

class NewModel:
    def __init__(self, model):
        self.model = model

    def __call__(self, x):
        # the attack sees inputs in [0, 1]; rescale to
        # [-1, 1] before passing them to the wrapped model
        z = x * 2. - 1.
        return self.model(z)

which takes input in [0, 1] but rescales it to [-1, 1] before inference. Please refer also to #13 for a similar case.
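
For instance, a minimal sketch of how the wrapper might be used (SmallNet and the tensor shapes below are hypothetical stand-ins, not from the original thread):

import torch
import torch.nn as nn

# hypothetical model that expects inputs normalized to [-1, 1]
class SmallNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))

    def forward(self, x):
        return self.net(x)

wrapped = NewModel(SmallNet().eval())  # wrapper defined above

x = torch.rand(8, 3, 32, 32)  # the attack operates on inputs in [0, 1]
logits = wrapped(x)           # the model internally receives inputs in [-1, 1]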

For TensorFlow models, I guess one could do something similar in the model definition, i.e. apply z = x * 2. - 1. to the input to rescale it to [-1, 1].
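
A minimal sketch of that idea with Keras (the input shape here is a placeholder assumption, not something given in this thread):

import tensorflow as tf

def wrap_model(base_model, input_shape=(32, 32, 3)):
    # the attack operates on inputs in [0, 1]; rescale them to [-1, 1]
    # inside the graph, before the original model is applied
    inputs = tf.keras.Input(shape=input_shape)
    z = inputs * 2. - 1.
    outputs = base_model(z)
    return tf.keras.Model(inputs, outputs)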

Let me know if this helps!


cxmscb (Author) commented Aug 14, 2020

Thanks!
