Implement Pickle capacities for classifiers #39

Closed · 3 of 4 tasks
ririnicolae opened this issue Mar 7, 2019 · 2 comments
Labels: enhancement (New feature or request)

ririnicolae (Collaborator) commented Mar 7, 2019

Describe the solution you'd like
Provide __setstate__ and __getstate__ implementations for all classifiers in order to support serialization with pickle. It remains to be determined whether all frameworks can support this. The functions can build on the pre-existing save function from the Classifier API (a rough sketch follows the checklist below).

  • Keras
  • MXNet
  • PyTorch
  • TensorFlow
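
For illustration, here is a minimal, self-contained sketch of the idea; the `ToyClassifier` class, its attributes, and the `_load_model` helper are hypothetical stand-ins, not ART's actual implementation. The pattern: `__getstate__` writes the otherwise unpicklable framework model to disk through the existing `save` routine and pickles only a file reference, while `__setstate__` reloads the model from that file.

```python
# Hypothetical sketch only: stand-in names, not the ART implementation.
import json
import os
import tempfile
import time


class ToyClassifier:
    """Stand-in for a framework classifier whose `_model` cannot be pickled."""

    def __init__(self, model):
        self._model = model  # imagine a Keras/MXNet/PyTorch/TensorFlow model here
        self.clip_values = (0.0, 1.0)

    def save(self, filename, path=None):
        # Stand-in for the Classifier API's existing `save`: persist the model to disk.
        with open(os.path.join(path or ".", filename), "w") as f:
            json.dump(self._model, f)

    def _load_model(self, full_path):
        # Stand-in for the framework-specific loading routine.
        with open(full_path) as f:
            return json.load(f)

    def __getstate__(self):
        # Write the model to a temporary file and pickle only a reference to it.
        state = self.__dict__.copy()
        model_name = str(time.time())
        state["model_name"] = model_name
        self.save(filename=model_name, path=tempfile.gettempdir())
        del state["_model"]
        return state

    def __setstate__(self, state):
        # Restore the plain attributes, then reload the model from the saved file.
        self.__dict__.update(state)
        full_path = os.path.join(tempfile.gettempdir(), state["model_name"])
        self._model = self._load_model(full_path)
        del self.__dict__["model_name"]


if __name__ == "__main__":
    import pickle

    clf = ToyClassifier(model={"weights": [0.1, 0.2]})
    restored = pickle.loads(pickle.dumps(clf))
    print(restored._model)  # {'weights': [0.1, 0.2]}
```

In practice, each framework-specific classifier would replace the toy `save`/`_load_model` pair with its own serialization calls (e.g. saving model weights and architecture).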
ririnicolae added the enhancement label Mar 7, 2019
ririnicolae self-assigned this Apr 17, 2019
minhitbk self-assigned this Apr 23, 2019
minhitbk (Collaborator) commented

I will help with the PyTorch and TF versions.

ririnicolae (Collaborator, Author) commented

PyTorch and TF pickling implemented in #68.

beat-buesser pushed a commit that referenced this issue May 20, 2019
This release contains breaking changes to attacks and defenses with regard to setting attributes, removes restrictions on input shapes (enabling the use of feature vectors), and includes several bug fixes.

# Added

- Implement pickle for classifiers `tensorflow` and `pytorch` (#39)
- Added example `data_augmentation.py` demonstrating the use of data generators

# Changed

- Renamed and moved tests (#58)
- Changed input shape restrictions: classifiers now accept any input shape, for example feature vectors; attacks requiring spatial inputs raise exceptions (#49)
- Clipping of data ranges becomes optional in classifiers, which allows attacks to accept unbounded data ranges (#49)
- [Breaking changes] Class attributes in attacks can no longer be changed with method `generate`; changing attributes is only possible with methods `__init__` and `set_params`
- [Breaking changes] Class attributes in defenses can no longer be changed with method `generate`; changing attributes is only possible with methods `__call__` and `set_params`
- Resolved inconsistency in PGD `random_init` with Madry's version

# Removed

- Deprecated static adversarial trainer `StaticAdversarialTrainer`

# Fixed

- Fixed bug in attack ZOO (#60)
imolloy pushed a commit to imolloy/adversarial-robustness-toolbox that referenced this issue Aug 5, 2019