
Implement Default Training Loop for TensorFlowV2Classifier #2124

Merged
merged 11 commits into from
May 29, 2023

Conversation

@f4str (Collaborator) commented Apr 24, 2023

Description

A new optimizer parameter was added to the TensorFlowV2Classifier and its derivatives TensorFlowV2RandomizedSmoothing and TensorFlowV2DeRandomizedSmoothing. This parameter is now used in the .fit(...) methods to create a default train_step training loop that is optimized with the @tf.function decorator. The existing train_step parameter can still be provided to override the default training loop, which preserves backwards compatibility. The docstrings have been updated accordingly.

All test cases and relevant examples/notebooks have been updated to use the default (optimized) training loop rather than creating one manually. This simplifies the setup for the TensorFlow v2 classifiers and brings them in line with other estimators such as the KerasClassifier and PyTorchClassifier. All test cases behave exactly as before and required no functional modification; this change is purely a cleanup.

Using this optimized default training loop significantly improves training performance for the TensorFlowV2Classifier and its derivatives.
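Conceptually, a default training step of the kind described here can be sketched as follows (function and variable names are illustrative, not ART's exact implementation):

```python
import tensorflow as tf

def make_default_train_step(model, loss_object, optimizer):
    # The step is compiled once with @tf.function so subsequent calls run
    # as a TensorFlow graph, which is the source of the speedup over an
    # uncompiled, eagerly executed Python loop.
    @tf.function
    def train_step(x, y):
        with tf.GradientTape() as tape:
            predictions = model(x, training=True)
            loss = loss_object(y, predictions)
        gradients = tape.gradient(loss, model.trainable_variables)
        optimizer.apply_gradients(zip(gradients, model.trainable_variables))
        return loss
    return train_step
```

A user-supplied train_step with the same call signature can simply replace this default, which is how backwards compatibility is preserved.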

Fixes #2083

Type of change

Please check all relevant options.

  • Improvement (non-breaking)
  • Bug fix (non-breaking)
  • New feature (non-breaking)
  • Breaking change (fix or feature that would cause existing functionality to not work as expected)
  • This change requires a documentation update

Testing

Please describe the tests that you ran to verify your changes. Consider listing any relevant details of your test configuration.

  • Existing test cases changed to use the new default training loop

Test Configuration:

  • OS
  • Python version
  • ART version or commit number
  • TensorFlow / Keras / PyTorch / MXNet version

Checklist

  • My code follows the style guidelines of this project
  • I have performed a self-review of my own code
  • I have commented my code
  • I have made corresponding changes to the documentation
  • My changes generate no new warnings
  • I have added tests that prove my fix is effective or that my feature works
  • New and existing unit tests pass locally with my changes

Signed-off-by: Farhan Ahmed <Farhan.Ahmed@ibm.com>
@f4str f4str changed the base branch from main to dev_1.15.0 April 24, 2023 22:18
@codecov-commenter commented Apr 24, 2023

Codecov Report

Merging #2124 (ede42ff) into dev_1.15.0 (36bc03d) will increase coverage by 1.34%.
The diff coverage is 69.23%.



@@              Coverage Diff               @@
##           dev_1.15.0    #2124      +/-   ##
==============================================
+ Coverage       84.30%   85.64%   +1.34%     
==============================================
  Files             299      299              
  Lines           26648    26657       +9     
  Branches         4878     4878              
==============================================
+ Hits            22465    22831     +366     
+ Misses           2931     2581     -350     
+ Partials         1252     1245       -7     
Impacted Files Coverage Δ
art/attacks/evasion/auto_conjugate_gradient.py 86.52% <ø> (ø)
...attacks/evasion/auto_projected_gradient_descent.py 86.51% <ø> (ø)
art/attacks/evasion/brendel_bethge.py 87.01% <ø> (ø)
...s/certification/randomized_smoothing/tensorflow.py 89.83% <50.00%> (-1.55%) ⬇️
...certification/derandomized_smoothing/tensorflow.py 83.33% <66.66%> (-1.67%) ⬇️
art/estimators/classification/tensorflow.py 83.33% <75.00%> (-0.21%) ⬇️

... and 14 files with indirect coverage changes

@f4str f4str marked this pull request as ready for review April 25, 2023 20:34
@beat-buesser beat-buesser self-requested a review April 27, 2023 22:54
@beat-buesser beat-buesser self-assigned this Apr 27, 2023
@beat-buesser beat-buesser added the improvement Improve implementation label Apr 27, 2023
@beat-buesser beat-buesser added this to the ART 1.15.0 milestone Apr 27, 2023
@beat-buesser (Collaborator) left a comment
Hi @f4str Thank you very much for implementing a default training step for TensorFlow v2 classifiers! I have added a few suggestions, what do you think?

@@ -2055,6 +2055,7 @@ def logits_difference(y_true, y_pred):
nb_classes=estimator.nb_classes,
input_shape=estimator.input_shape,
loss_object=self._loss_object,
optimizer=estimator._optimizer,
Should we use the new property:

Suggested change
optimizer=estimator._optimizer,
optimizer=estimator.optimizer,

@@ -203,6 +203,7 @@ def __call__(self, y_true: tf.Tensor, y_pred: tf.Tensor, *args, **kwargs) -> tf.
nb_classes=estimator.nb_classes,
input_shape=estimator.input_shape,
loss_object=_loss_object_tf,
optimizer=estimator._optimizer,
Should we use the new property:

Suggested change
optimizer=estimator._optimizer,
optimizer=estimator.optimizer,

@@ -2055,6 +2055,7 @@ def logits_difference(y_true, y_pred):
nb_classes=estimator.nb_classes,
input_shape=estimator.input_shape,
loss_object=self._loss_object,
optimizer=estimator._optimizer,
Should we use the new property:

Suggested change
optimizer=estimator._optimizer,
optimizer=estimator.optimizer,

@@ -224,6 +224,7 @@ def __call__(self, y_true: tf.Tensor, y_pred: tf.Tensor, *args, **kwargs) -> tf.
nb_classes=estimator.nb_classes,
input_shape=estimator.input_shape,
loss_object=_loss_object_tf,
optimizer=estimator._optimizer,
Should we use the new property:

Suggested change
optimizer=estimator._optimizer,
optimizer=estimator.optimizer,

@beat-buesser beat-buesser added this to Pull request review in ART 1.15.0 May 10, 2023
@f4str (Collaborator, Author) commented May 10, 2023

Hi @beat-buesser thank you for the comments! For consistency, optimizer currently uses the private attribute rather than the public property, since loss_object and train_step also use the private attribute. All of these should use either the private attribute or the public property to stay consistent.

Since ART is shifting from private attributes to public properties, I think it may be worth changing all of these (optimizer, loss_object, and train_step) to use the public property, but only in the files affected by this PR. Any thoughts on this?
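The private-attribute vs. public-property convention being discussed can be illustrated with a minimal, purely hypothetical Python class (not ART's actual code):

```python
class ClassifierSketch:
    """Illustrative only: exposes a private attribute via a read-only property."""

    def __init__(self, optimizer=None):
        self._optimizer = optimizer  # private attribute, set at construction

    @property
    def optimizer(self):
        # Public, read-only access: callers such as attack wrappers can write
        # `estimator.optimizer` instead of reaching into `estimator._optimizer`.
        return self._optimizer
```

Using the property keeps external code decoupled from the internal attribute name, which is the consistency argument made in the review suggestions above.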

@f4str f4str requested a review from beat-buesser May 13, 2023 00:51
@beat-buesser (Collaborator) commented:

Hi @f4str Thank you very much, the updates look good to me.

@beat-buesser beat-buesser merged commit 76ae22a into Trusted-AI:dev_1.15.0 May 29, 2023
37 checks passed
ART 1.15.0 automation moved this from Pull request review to Pull request done May 29, 2023
@f4str f4str deleted the tf-default-fit branch May 30, 2023 05:50
Labels
improvement Improve implementation
Projects
No open projects
ART 1.15.0
Pull request done
Development

Successfully merging this pull request may close these issues.

Default train_step for TensorFlowV2Classifier and Variations
3 participants