Models refactor #196
Conversation
```python
@pytest.fixture(
    scope="session", params=[(2, 2, 64, 64), (3, 5, 128, 64), (1, 3, 256, 256)]
)
def arange_4d_image(request):
```
I don't get which function this test is supposed to cover.
This is not a test; it's a fixture. Every test in the repository that takes `arange_4d_image` as an input is re-run with each of the listed input image sizes.
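To illustrate the mechanism: a session-scoped, parametrized fixture makes pytest fan every dependent test out across all of the `params` entries. A minimal sketch of how such a fixture might be written (the body here is illustrative; the actual fixture in `tests/conftest.py` may differ):

```python
import math

import pytest
import torch


def make_arange_image(shape):
    # Hypothetical helper: build a deterministic (N, C, H, W) tensor
    # filled with 0, 1, 2, ... so test failures are reproducible.
    numel = math.prod(shape)
    return torch.arange(numel, dtype=torch.float32).reshape(shape)


@pytest.fixture(
    scope="session", params=[(2, 2, 64, 64), (3, 5, 128, 64), (1, 3, 256, 256)]
)
def arange_4d_image(request):
    # pytest re-runs every test that requests this fixture once per
    # entry in `params`; request.param holds the current shape tuple.
    return make_arange_image(request.param)
```

With `scope="session"`, each of the three tensors is built once and shared across the whole test session rather than rebuilt per test.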
tests/conftest.py
```python
@pytest.fixture(
    scope="session", params=[(2, 2, 65, 257), (3, 5, 124, 63), (1, 3, 252, 257)]
)
def arange_4d_image_odd(request):
```
Same for this test
See the comment for `arange_4d_image`.
```python
@pytest.mark.parametrize("affine", [True, False])
def test_channel_norm_2d(affine, arange_4d_image: Tensor):
```
This type of test is not my favorite: it verifies that the content of the module is the same as the content of the module, since the test replicates the module's logic.
I would try a more realistic test: iterate several times on the same input and check that it actually normalizes? Or just drop the test.
Yeah, I didn't like this a ton, but it's more to guard against a future implementation change of the module.
Running multiple times won't change anything (unless you want to verify that the mean is still 0 and the variance is still 1). We can think a bit about it.
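The property being discussed (post-normalization mean of 0 and variance of 1) can be checked without replicating the module's internals. A sketch, using a stand-in channel-norm function rather than the project's actual `ChannelNorm2D` implementation:

```python
import torch


def channel_norm_2d(x: torch.Tensor, eps: float = 1e-5) -> torch.Tensor:
    # Stand-in for ChannelNorm2D with affine=False (assumed behavior):
    # normalize each spatial position across the channel dimension.
    mean = x.mean(dim=1, keepdim=True)
    var = x.var(dim=1, keepdim=True, unbiased=False)
    return (x - mean) / torch.sqrt(var + eps)


# Property-style checks in the spirit of the review discussion: after
# normalization the per-position channel mean is ~0 and variance ~1,
# and re-applying the normalization is (numerically) a no-op.
x = torch.arange(2 * 5 * 4 * 4, dtype=torch.float32).reshape(2, 5, 4, 4)
y = channel_norm_2d(x)
assert torch.allclose(y.mean(dim=1), torch.zeros(2, 4, 4), atol=1e-4)
assert torch.allclose(y.var(dim=1, unbiased=False), torch.ones(2, 4, 4), atol=1e-2)
assert torch.allclose(channel_norm_2d(y), y, atol=1e-3)
```

A test written this way exercises the module's contract (what the output statistics should be) instead of mirroring its arithmetic, so it survives internal refactors.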
Merging after offline discussion with reviewer.
This PR refactors the models of NeuralCompression. At a high level it does two things:
- As we plan to use several variants of the Hyperprior model, we introduce a new abstract base class for hyperprior models that encapsulates the HiFiC autoencoder. This will make it easier to build a common interface for training loops involving these classes of models.
- The test suite has also been updated to test the new HiFiC autoencoder and its functionality, as well as to remove old tests that referenced deleted code.
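The shape such an abstract base class might take can be sketched as follows. All names here are hypothetical illustrations of the pattern described above, not the actual NeuralCompression API:

```python
from abc import ABC, abstractmethod

import torch
from torch import Tensor, nn


class HyperpriorAutoencoderBase(nn.Module, ABC):
    """Hypothetical base class: variants of the Hyperprior model share
    one encode/decode interface so training loops can treat them
    interchangeably."""

    @abstractmethod
    def encode(self, image: Tensor) -> Tensor:
        """Map an (N, C, H, W) image to its latent representation."""

    @abstractmethod
    def decode(self, latent: Tensor) -> Tensor:
        """Reconstruct an image from its latent representation."""

    def forward(self, image: Tensor) -> Tensor:
        # Shared round-trip logic lives in the base class.
        return self.decode(self.encode(image))


class IdentityHyperprior(HyperpriorAutoencoderBase):
    # Trivial concrete subclass, only to demonstrate the contract.
    def encode(self, image: Tensor) -> Tensor:
        return image

    def decode(self, latent: Tensor) -> Tensor:
        return latent
```

With this arrangement, a training loop only needs to depend on the base class, and each concrete variant (e.g. the HiFiC autoencoder) supplies its own `encode`/`decode`.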