
Add drop_last option to method fit of PyTorchClassifier #1883

Merged
beat-buesser merged 7 commits into dev_1.12.2 from development_issue_1723 on Nov 10, 2022

Conversation

beat-buesser
Collaborator

Description

This pull request adds a drop_last option to the fit method of PyTorchClassifier.

Fixes #1723
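
For context, a minimal usage sketch of the new option (the model, shapes, and data below are illustrative only and not taken from this PR):

import numpy as np
import torch

from art.estimators.classification import PyTorchClassifier

# A small model containing BatchNorm, which is what triggers the original error
# when the trailing batch holds a single sample.
model = torch.nn.Sequential(
    torch.nn.Flatten(),
    torch.nn.Linear(28 * 28, 32),
    torch.nn.BatchNorm1d(32),
    torch.nn.ReLU(),
    torch.nn.Linear(32, 10),
)
classifier = PyTorchClassifier(
    model=model,
    loss=torch.nn.CrossEntropyLoss(),
    optimizer=torch.optim.Adam(model.parameters(), lr=1e-3),
    input_shape=(1, 28, 28),
    nb_classes=10,
)

# 1001 samples with batch_size=100 leaves a final batch of size 1.
x_train = np.random.rand(1001, 1, 28, 28).astype(np.float32)
y_train = np.random.randint(0, 10, size=1001)

# drop_last=True (the option added by this PR) discards the incomplete final
# batch instead of feeding it to BatchNorm in training mode.
classifier.fit(x_train, y_train, batch_size=100, nb_epochs=1, drop_last=True)
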

Type of change

Please check all relevant options.

  • Improvement (non-breaking)
  • Bug fix (non-breaking)
  • New feature (non-breaking)
  • Breaking change (fix or feature that would cause existing functionality to not work as expected)
  • This change requires a documentation update

Beat Buesser added 2 commits October 19, 2022 16:07
Signed-off-by: Beat Buesser <beat.buesser@ie.ibm.com>
Signed-off-by: Beat Buesser <beat.buesser@ie.ibm.com>
@beat-buesser beat-buesser self-assigned this Oct 19, 2022
@beat-buesser beat-buesser added the improvement (Improve implementation) label Oct 19, 2022
@beat-buesser beat-buesser added this to the ART 1.12.2 milestone Oct 19, 2022
@beat-buesser beat-buesser linked an issue Oct 19, 2022 that may be closed by this pull request
@beat-buesser beat-buesser added this to Pull request open in ART 1.12.2 Oct 19, 2022
@codecov-commenter

codecov-commenter commented Oct 19, 2022

Codecov Report

Merging #1883 (fc12c05) into dev_1.12.2 (3e3a438) will decrease coverage by 0.16%.
The diff coverage is 48.83%.

Impacted file tree graph

@@              Coverage Diff               @@
##           dev_1.12.2    #1883      +/-   ##
==============================================
- Coverage       85.90%   85.73%   -0.17%     
==============================================
  Files             248      248              
  Lines           23312    23491     +179     
  Branches         4212     4280      +68     
==============================================
+ Hits            20027    20141     +114     
- Misses           2224     2273      +49     
- Partials         1061     1077      +16     
Impacted Files Coverage Δ
...rs/certification/derandomized_smoothing/pytorch.py 79.31% <40.00%> (-8.69%) ⬇️
art/estimators/classification/pytorch.py 86.79% <50.00%> (-1.56%) ⬇️
...tors/certification/randomized_smoothing/pytorch.py 84.09% <52.94%> (-8.02%) ⬇️
...poison_mitigation/neural_cleanse/neural_cleanse.py 84.21% <0.00%> (-6.15%) ⬇️
art/attacks/evasion/boundary.py 92.77% <0.00%> (-1.21%) ⬇️
art/attacks/poisoning/sleeper_agent_attack.py 41.00% <0.00%> (-0.36%) ⬇️
art/metrics/verification_decisions_trees.py 92.83% <0.00%> (+1.16%) ⬆️

Collaborator

@kieranfraser kieranfraser left a comment


Hi Beat,
This looks good. I've tested locally and the code makes sense.
Just need to fix the style checks: line 377 is over 120 characters.

Would it be worth wrapping the model prediction in a try-except block and warning the user that drop_last=True may be required if this error is caught (since the default is False)?
Something like:

# Perform prediction
try:
    model_outputs = self._model(i_batch)
except ValueError as err:
    # BatchNorm raises this ValueError when a training batch contains a single sample
    if "Expected more than 1 value per channel when training" in str(err):
        logger.exception("Try dropping the last incomplete batch by setting drop_last=True.")
    raise

@beat-buesser beat-buesser moved this from Pull request open to Pull request review in ART 1.12.2 Oct 19, 2022
Signed-off-by: Beat Buesser <beat.buesser@ie.ibm.com>
Collaborator

@kieranfraser kieranfraser left a comment


mypy and pylint are failing in the CI / Style checks. Should the drop_last option also be included in the fit methods of PyTorchDeRandomizedSmoothing and PyTorchRandomizedSmoothing?
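
For illustration only, the general pattern for threading such a flag through a fit method to torch's DataLoader might look like this (a minimal sketch, not ART's actual implementation; the function signature and defaults are assumptions):

import torch
from torch.utils.data import DataLoader, TensorDataset

def fit(model, optimizer, loss_fn, x, y, batch_size=128, nb_epochs=10, drop_last=False):
    # drop_last is forwarded to the DataLoader so that a trailing incomplete
    # batch (which can break BatchNorm in training mode) is discarded.
    dataset = TensorDataset(torch.as_tensor(x), torch.as_tensor(y))
    loader = DataLoader(dataset, batch_size=batch_size, shuffle=True, drop_last=drop_last)
    model.train()
    for _ in range(nb_epochs):
        for i_batch, o_batch in loader:
            optimizer.zero_grad()
            loss = loss_fn(model(i_batch), o_batch)
            loss.backward()
            optimizer.step()
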

@beat-buesser
Collaborator Author

I think the try-except block is an interesting idea. I'm running a few tests to see whether it would slow down model training.

Beat Buesser added 2 commits October 21, 2022 00:27
Signed-off-by: Beat Buesser <beat.buesser@ie.ibm.com>
Signed-off-by: Beat Buesser <beat.buesser@ie.ibm.com>
@lgtm-com

lgtm-com bot commented Oct 21, 2022

This pull request introduces 4 alerts and fixes 1 when merging 6e05685 into 3e3a438 - view on LGTM.com

new alerts:

  • 4 for Unused import

fixed alerts:

  • 1 for Module is imported more than once

Beat Buesser added 2 commits October 25, 2022 23:19
Signed-off-by: Beat Buesser <beat.buesser@ie.ibm.com>
Signed-off-by: Beat Buesser <beat.buesser@ie.ibm.com>
Collaborator

@kieranfraser kieranfraser left a comment


Looks good to me!

@beat-buesser beat-buesser merged commit d303e95 into dev_1.12.2 Nov 10, 2022
ART 1.12.2 automation moved this from Pull request review to Pull request done Nov 10, 2022
@beat-buesser beat-buesser deleted the development_issue_1723 branch November 10, 2022 11:46
Labels
improvement (Improve implementation)
Projects
ART 1.12.2 (Pull request done)
Development

Successfully merging this pull request may close these issues.

batch_norm error in fit() at end of training epoch
3 participants