
Remove scale_identity_multiplier to support tensorflow-probability 0.20 #828

Merged
merged 5 commits into from Jul 18, 2023

Conversation

@ascillitoe (Contributor) commented Jul 10, 2023

In tensorflow-probability v0.20 the scale_identity_multiplier kwarg to MultivariateNormalDiag has been removed, meaning the covariance matrix must instead be specified via scale_diag (see tensorflow/probability@ca84850).

This PR modifies our elbo loss function so that scale_diag is constructed from the sim kwarg, rather than sim being passed directly to scale_identity_multiplier. Only one of cov_full, cov_diag or sim should now be given to elbo.

Breaking change: This currently involves a breaking change, since the default value of sim in elbo has been changed from sim=0.05 to sim=None. A possible alternative would be to keep sim=0.05 as the default, and then set sim=None under the hood whenever cov_diag or cov_full is given (i.e. not None).
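For illustration, the kwarg handling described above can be sketched as follows. This is a minimal sketch, not the actual alibi-detect implementation: the helper name `build_scale_diag` and its signature are hypothetical, and it only shows how a scalar sim would be expanded into an explicit scale_diag now that scale_identity_multiplier is gone in tfp>=0.20.

```python
def build_scale_diag(latent_dim, cov_full=None, cov_diag=None, sim=None):
    """Hypothetical sketch of the PR's behaviour: derive the diagonal
    scale for MultivariateNormalDiag from whichever covariance kwarg
    was supplied. Only one of cov_full, cov_diag or sim may be given."""
    n_given = sum(v is not None for v in (cov_full, cov_diag, sim))
    if n_given > 1:
        raise ValueError("Only one of cov_full, cov_diag or sim may be given.")
    if sim is not None:
        # scale_identity_multiplier no longer exists in tfp>=0.20, so the
        # scalar multiplier is expanded into an explicit diagonal of
        # length latent_dim and passed as scale_diag instead.
        return [sim] * latent_dim
    # cov_diag is already a diagonal; cov_full would be handled elsewhere
    # (e.g. via MultivariateNormalTriL), so we return None in that case.
    return cov_diag
```

With sim=0.05 and a 4-dimensional latent space, this yields scale_diag = [0.05, 0.05, 0.05, 0.05], which gives the same covariance as the old scale_identity_multiplier=0.05.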

dependabot[bot] and others added 3 commits May 9, 2023 07:57
Updates the requirements on [tensorflow-probability](https://github.com/tensorflow/probability) to permit the latest version.
- [Release notes](https://github.com/tensorflow/probability/releases)
- [Commits](tensorflow/probability@0.8.0...v0.20.0)

---
updated-dependencies:
- dependency-name: tensorflow-probability
  dependency-type: direct:development
...

Signed-off-by: dependabot[bot] <support@github.com>
@ascillitoe (Contributor, PR author) commented:

Sanity check on od_vae_cifar10.ipynb. VAE training history with alibi-detect==0.11.3 and tensorflow-probability==0.19:

782/782 [=] - 68s 77ms/step - loss_ma: 8641.5905
782/782 [=] - 59s 75ms/step - loss_ma: -2348.6304
782/782 [=] - 57s 72ms/step - loss_ma: -3497.9333
782/782 [=] - 57s 72ms/step - loss_ma: -4035.8458
505/782 [.] - ETA: 20s - loss_ma: -4364.4066

History with this PR:

782/782 [=] - 66s 68ms/step - loss_ma: 9755.7217
782/782 [=] - 52s 67ms/step - loss_ma: -2190.6944
782/782 [=] - 52s 66ms/step - loss_ma: -3428.9409
782/782 [=] - 52s 67ms/step - loss_ma: -3992.8471
478/782 [.] - ETA: 19s - loss_ma: -4276.0701

codecov bot commented Jul 10, 2023

Codecov Report

Merging #828 (5337d4b) into master (9256aac) will increase coverage by 0.68%.
The diff coverage is 100.00%.

❗ Current head 5337d4b differs from pull request most recent head 4379868. Consider uploading reports for the commit 4379868 to get more accurate results


@@            Coverage Diff             @@
##           master     #828      +/-   ##
==========================================
+ Coverage   81.16%   81.85%   +0.68%     
==========================================
  Files         148      159      +11     
  Lines        9834    10321     +487     
==========================================
+ Hits         7982     8448     +466     
- Misses       1852     1873      +21     
Impacted Files                               Coverage Δ
alibi_detect/models/tensorflow/losses.py     82.08% <100.00%> (+1.13%) ⬆️

... and 29 files with indirect coverage changes

@ascillitoe ascillitoe linked an issue Jul 10, 2023 that may be closed by this pull request
@jklaise (Member) commented Jul 10, 2023

I think it's fine for this change to be breaking; it would be more unexpected behaviour if we kept the default at 0.05 but then implicitly disregarded it whenever another parameter is passed. I like the current validation that only one parameter may be set. Also, I think the actual elbo loss is not in our public API, in which case we could make this "not breaking" by just updating our library call-sites to set sim=0.05. Not sure what the trade-off is; I'd suggest checking with the person who first implemented the default args.

If we do go with the change, it would be good to check all call-sites in the library and in the docs that use the elbo loss, to confirm everything works as expected with the changed default.

@ascillitoe (Contributor, PR author) commented:

Thanks @jklaise. Re our library call-sites for elbo, we set its kwargs in the VAE outlier detectors, e.g.

cov_elbo: dict = dict(sim=.05),

But we're already setting sim=0.05 as you describe, so I think we're OK. Since @arnaudvl implemented the loss function, I'm tagging him as a reviewer to confirm.
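The call-site pattern discussed above can be sketched as follows. This is a hedged illustration only: the elbo body here is a stand-in stub that just echoes its kwargs (the real alibi_detect.models.tensorflow.losses.elbo computes the actual loss); it only shows how the cov_elbo dict pins sim=0.05 explicitly, so the library-level change of sim's default to None is invisible at these call-sites.

```python
from functools import partial

def elbo(y_true, y_pred, cov_full=None, cov_diag=None, sim=None):
    # Stand-in stub for the real elbo loss: we only care about which
    # kwargs reach the function, so we simply return them.
    return dict(cov_full=cov_full, cov_diag=cov_diag, sim=sim)

# Detector-style call-site: the default kwarg dict used by the VAE
# outlier detectors already sets sim explicitly, matching the old default.
cov_elbo = dict(sim=.05)
loss_fn = partial(elbo, **cov_elbo)
```

Because sim=0.05 is passed explicitly rather than relied upon as a default, the loss behaves the same before and after the PR's change of the default to sim=None.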

@jklaise jklaise merged commit 5b6cb5a into SeldonIO:master Jul 18, 2023
11 checks passed
@ascillitoe ascillitoe deleted the fix/remove_sim_kwarg branch July 18, 2023 09:52

Successfully merging this pull request may close these issues.

Error related to 'scale_identity_multiplier' in tensorflow