Torch original unet #216
Conversation
elif self.normalization == 'batch':
    x = self.batch_normalization(x)

x = self.activation_function(x)
If none of the recognized activation keys is passed, will this become x = None(x)?
No, it will raise an error within __init__.
We might want to add an option to skip the activation layer entirely, just like the normalization layer.
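A minimal, torch-free sketch of what this suggestion could look like (class and key names are hypothetical, not the PR's actual implementation): unknown keys fail fast in __init__, and None means "no activation", mirroring the optional normalization layer.

```python
class ConvBlock:
    """Sketch of fail-fast activation-key dispatch with an opt-out.

    Unknown keys raise in __init__ (so forward() never hits x = None(x)),
    and activation=None falls through to an identity function.
    """

    _ACTIVATIONS = {
        'relu': lambda x: x if x > 0 else 0.0,
        None: lambda x: x,  # identity: skip the activation layer
    }

    def __init__(self, activation='relu'):
        if activation not in self._ACTIVATIONS:
            # Fail here, at construction time, not later inside forward().
            raise ValueError(f"Unknown activation key: {activation!r}")
        self.activation_function = self._ACTIVATIONS[activation]

    def forward(self, x):
        return self.activation_function(x)
```

In real torch code the None branch would map to nn.Identity(), which keeps forward() free of conditionals.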
# assert result.shape == input_image.shape
# assert result.dtype == input_image.dtype


def test_masking_2D():
    input_array = torch.zeros((1, 1, 64, 64))

@pytest.mark.parametrize("nb_unet_levels", [2, 3, 5, 8])
8 layers?!
Might be worth testing, don't you think? :)
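One practical concern with 8 levels: if each U-Net level halves the spatial resolution (the usual 2x2 pooling, an assumption about this implementation), the input side length bounds how deep the network can go. A quick arithmetic check, independent of the PR's UNet API:

```python
def max_unet_levels(side):
    """Largest number of 2x halvings before the side drops below 1 pixel.

    Assumes one 2x2 downsampling per U-Net level; a side of 64 supports
    at most 6 levels, so 8 levels would need an input of at least 256.
    """
    levels = 0
    while side >= 2:
        side //= 2
        levels += 1
    return levels
```

So the 64x64 test input used elsewhere in this suite would be too small for nb_unet_levels=8 under that assumption; the test either needs a larger input or the model needs to reject over-deep configurations.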
Overall it looks good to me!
This PR implements the original U-Net architecture for Noise2Self.
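For context on the masking tests above: Noise2Self trains the network to predict masked pixels from their unmasked neighbours, typically by blinding one pixel per grid cell. A minimal sketch of such a grid mask (NumPy, helper name hypothetical, not the PR's code):

```python
import numpy as np

def grid_mask(shape, grid_size=4, phase_y=0, phase_x=0):
    """Boolean mask selecting one pixel per grid_size x grid_size cell.

    Cycling (phase_y, phase_x) over the grid covers every pixel exactly
    once, which is the J-invariant partition Noise2Self relies on.
    """
    mask = np.zeros(shape, dtype=bool)
    mask[phase_y::grid_size, phase_x::grid_size] = True
    return mask

mask = grid_mask((64, 64), grid_size=4)
```

On a 64x64 image with grid_size=4 this selects 16 * 16 = 256 pixels per phase.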