This requires multiple changes to all modules:
Allow clip_values to be None in all classifiers
Allow clip_values to have a different value per feature
Similarly, change attack strength (often eps, eps_step) from just scalar to scalar or vector. If a vector is provided, it should have the same size as the number of features. This is to allow attacks to be applied on features with different ranges and mostly concerns L_inf attacks.
In line with the previous item, adapt the random initialization in the boundary attack to account for differing feature scales in feature vectors.
Only perform clipping in attacks if the targeted model has clip_values
Generalize the shape of the input assumed in Classifier and other modules: moving from four dimensions (first one being the batch) to any number of dimensions (where first one is still the batch size)
Add checks and safeguards on number of dimensions of data for the attacks / defences that can only be applied on images
Add tests for feature vectors classifiers
Add tests with no clipping / clipping per feature for both images and feature vectors
Update examples and notebooks to ensure that they still work
Create notebook with attacks on feature vectors using some malware dataset
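The per-feature clip_values and vector-valued eps items above can be sketched in plain numpy. This is a hypothetical illustration, not the library's actual API: `linf_step` and its parameters are names invented here to show how a scalar-or-vector eps and an optional per-feature clipping range would interact in an L_inf-style update.

```python
import numpy as np

def linf_step(x, grad_sign, eps, clip_values=None):
    """One L_inf perturbation step with per-feature strengths (hypothetical).

    eps may be a scalar or a vector broadcast over the feature axis.
    clip_values, if given, is a (min, max) pair whose entries may also be
    scalars or per-feature vectors; None means no clipping at all.
    """
    x_adv = x + np.asarray(eps) * grad_sign
    if clip_values is not None:
        x_adv = np.clip(x_adv, clip_values[0], clip_values[1])
    return x_adv

# Batch of 2 samples with 3 features on very different scales
x = np.array([[0.5, 100.0, 0.010],
              [0.2, 250.0, 0.030]])
grad_sign = np.ones_like(x)

# Per-feature attack strength and per-feature clip range
eps = np.array([0.1, 10.0, 0.005])
clip_values = (np.array([0.0, 0.0, 0.0]),
               np.array([1.0, 255.0, 0.050]))

x_adv = linf_step(x, grad_sign, eps, clip_values)
# second sample's middle feature is clipped from 260.0 down to 255.0
```

Broadcasting makes the scalar and vector cases uniform: a scalar eps perturbs every feature equally, while a length-n vector applies a different budget per feature, which is what features with heterogeneous ranges need.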
This release contains breaking changes to attacks and defenses with regard to setting attributes, removes restrictions on input shapes (enabling the use of feature vectors), and includes several bug fixes.
# Added
- implemented pickling for `tensorflow` and `pytorch` classifiers (#39)
- added example `data_augmentation.py` demonstrating the use of data generators
# Changed
- renamed and moved tests (#58)
- changed input shape restrictions: classifiers now accept any input shape, for example feature vectors; attacks requiring spatial inputs now raise exceptions (#49)
- clipping of data ranges is now optional in classifiers, which allows attacks to accept unbounded data ranges (#49)
- [Breaking changes] class attributes in attacks can no longer be changed with method `generate`; changing attributes is only possible with methods `__init__` and `set_params`
- [Breaking changes] class attributes in defenses can no longer be changed with method `__call__`; changing attributes is only possible with methods `__init__` and `set_params`
- resolved inconsistency of PGD `random_init` with Madry's version
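The two `#49` changes above (arbitrary input shapes, optional clipping) can be illustrated with a toy sketch. The class and function names here are hypothetical, not the library's real classes; the point is only that an attack clips exactly when the targeted model defines clip_values, and that nothing constrains the input to four image-like dimensions.

```python
import numpy as np

class VectorClassifier:
    """Toy stand-in for a classifier whose input is any (batch, ...) array
    and whose clip_values attribute may be None (unbounded features)."""

    def __init__(self, input_shape, clip_values=None):
        self.input_shape = input_shape  # shape without the batch axis
        self.clip_values = clip_values  # (min, max) pair, or None

def apply_attack_clipping(classifier, x_adv):
    # Attacks only perform clipping if the targeted model has clip_values
    if classifier.clip_values is not None:
        lo, hi = classifier.clip_values
        x_adv = np.clip(x_adv, lo, hi)
    return x_adv

# Feature vectors: a (batch, 2) array, no spatial dimensions required
x_adv = np.array([[1.4, -0.2],
                  [0.7,  2.0]])

bounded = VectorClassifier(input_shape=(2,), clip_values=(0.0, 1.0))
unbounded = VectorClassifier(input_shape=(2,), clip_values=None)

clipped = apply_attack_clipping(bounded, x_adv)      # forced into [0, 1]
untouched = apply_attack_clipping(unbounded, x_adv)  # left as-is
```

Making clip_values optional is what lets the same attack code serve bounded image pixels and unbounded tabular features without special-casing.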
# Removed
- deprecated static adversarial trainer `StaticAdversarialTrainer`
# Fixed
- fixed bug in ZOO attack (#60)
I think this issue is ready to be closed. An example of usage for feature vectors is provided in PR #92. The item on per-feature values for attack strengths (eps) should probably be discussed further and addressed in a separate issue.