Blurry results #1

Closed
tcapelle opened this issue Jul 9, 2020 · 0 comments
tcapelle commented Jul 9, 2020

Hello, awesome repo.
I have been playing with various ConvLSTM/GRU implementations, since we don't have an official one in PyTorch.
I am having trouble getting good images as output: I am unable to get images as sharp as the ones you showed.
I modified your model to output 2 classes per image, so it produces binary values and trains with cross-entropy (I just set to 1 all pixels greater than 0.5, and zero the others); a small sketch of this setup is included after the code below.
I am also currently trying this UpsampleBlock from the fastai2 UNet for the decoder, with good results:

# Imports assume fastai2 (the library this block comes from); the star import
# provides Module, ConvLayer, PixelShuffle_ICNR, SelfAttention, apply_init,
# defaults, delegates and nn.
from fastai2.vision.all import *

class UpsampleBlock(Module):
    "A quasi-UNet block, using `PixelShuffle_ICNR` upsampling."
    @delegates(ConvLayer.__init__)
    def __init__(self, in_ch, out_ch, final_div=True, blur=False, act_cls=defaults.activation,
                 self_attention=False, init=nn.init.kaiming_normal_, norm_type=None, **kwargs):
        # Upsample 2x with PixelShuffle_ICNR, halving the channel count.
        self.shuf = PixelShuffle_ICNR(in_ch, in_ch//2, blur=blur, act_cls=act_cls, norm_type=norm_type)
        ni = in_ch//2
        nf = out_ch
        self.conv1 = ConvLayer(ni, nf, act_cls=act_cls, norm_type=norm_type, **kwargs)
        self.conv2 = ConvLayer(nf, nf, act_cls=act_cls, norm_type=norm_type,
                               xtra=SelfAttention(nf) if self_attention else None, **kwargs)
        # `final_div` and `relu` are leftovers from fastai's UnetBlock (which also
        # concatenates a skip connection); they are unused in this variant.
        self.relu = act_cls()
        apply_init(nn.Sequential(self.conv1, self.conv2), init)

    def forward(self, up_in):
        up_out = self.shuf(up_in)
        return self.conv2(self.conv1(up_out))
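
For reference, here is a minimal PyTorch sketch of the 2-class / cross-entropy setup described above. It is not code from this repo: the dummy logits tensor, the shapes and the threshold are only there to illustrate binarizing the targets at 0.5 and training with CrossEntropyLoss.

import torch
import torch.nn as nn

B, H, W = 4, 64, 64

# Stand-in for the model output: 2 class scores ("off"/"on") per pixel.
logits = torch.randn(B, 2, H, W)

# Ground-truth frames in [0, 1]; binarize at 0.5 to get integer class targets.
frames = torch.rand(B, H, W)
targets = (frames > 0.5).long()      # shape (B, H, W), values in {0, 1}

loss = nn.CrossEntropyLoss()(logits, targets)

# At inference, argmax over the 2 channels gives a hard binary frame,
# which avoids the blurry look of a plain regression output.
pred = logits.argmax(dim=1)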

tcapelle closed this as completed Jan 2, 2023