
Upstream Google autoencoder models to CompressAI #188

Closed
mmuckley opened this issue Dec 20, 2022 · 2 comments
Labels
enhancement New feature or request

Comments

@mmuckley
Contributor

At the moment we have several models in our repository that are already implemented in CompressAI (e.g., Scale Hyperprior, Mean-Scale Hyperprior). CompressAI now has pretty good adoption, so we should be able to remove these implementations from our repository and depend on CompressAI upstream instead.

By default, CompressAI doesn't handle reflective image padding for users, so if desired we could include wrappers, like those in PR #185, to handle this for users unfamiliar with how these models work.
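For context, a minimal sketch of the padding logic such a wrapper would need. This uses NumPy for illustration (a real wrapper around a CompressAI model would operate on `torch` tensors, e.g. via `torch.nn.functional.pad`), and the multiple of 64 is an assumption based on the total downsampling factor of the hyperprior architectures:

```python
import numpy as np

def pad_to_multiple(x, multiple=64):
    """Reflect-pad an (H, W, ...) image so H and W are multiples of `multiple`.

    Returns the padded image and the original (H, W) so the output of the
    codec can be cropped back afterwards.
    """
    h, w = x.shape[:2]
    new_h = -(-h // multiple) * multiple  # ceiling division
    new_w = -(-w // multiple) * multiple
    padding = ((0, new_h - h), (0, new_w - w)) + ((0, 0),) * (x.ndim - 2)
    return np.pad(x, padding, mode="reflect"), (h, w)

def crop_to_original(x, size):
    """Undo pad_to_multiple by cropping back to the original (H, W)."""
    h, w = size
    return x[:h, :w]
```

The key point is that the wrapper remembers the original spatial size so the reconstruction can be cropped back transparently, which is exactly the bookkeeping users unfamiliar with these models tend to miss.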

@mmuckley mmuckley added the enhancement New feature or request label Dec 20, 2022
@desi-ivanova
Contributor

Happy to help with that at some point.

Padding and image standardization (to the 0-1 range), both of which are expected to be handled by the user (described in the second-to-last paragraph of the CompressAI documentation here), can be handled by wrappers instead, as you suggest.
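A hypothetical wrapper combining both responsibilities might look like the sketch below. The class name and interface are illustrative, not part of CompressAI; NumPy stands in for `torch` tensors, and the stride of 64 is assumed:

```python
import numpy as np

class Uint8CodecWrapper:
    """Illustrative wrapper: scales uint8 input to [0, 1] floats, reflect-pads
    to a stride multiple, calls the wrapped model, then crops and rescales
    the output back to uint8.
    """

    def __init__(self, model, multiple=64):
        self.model = model  # e.g. a CompressAI model's forward pass
        self.multiple = multiple

    def __call__(self, img_uint8):
        # Standardize to the [0, 1] range expected by the model.
        x = img_uint8.astype(np.float32) / 255.0
        h, w = x.shape[:2]
        ph = (-h) % self.multiple
        pw = (-w) % self.multiple
        x = np.pad(x, ((0, ph), (0, pw), (0, 0)), mode="reflect")
        y = self.model(x)
        # Crop back to the original size and undo the standardization.
        y = y[:h, :w]
        return np.clip(y * 255.0, 0.0, 255.0).astype(np.uint8)
```

With an identity "model", the wrapper round-trips an image unchanged, which is a convenient sanity check for the pad/crop and scale/unscale bookkeeping.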

@mmuckley
Contributor Author

Addressed in PR #196.
