
about results #25

Open
YeHuanjie opened this issue May 21, 2020 · 2 comments

Comments

@YeHuanjie

My test images have artifacts like this. What's the problem and how do I solve it? Thanks anyway!

@751994772

I also have this problem. How did you solve it?

@xycjscs

xycjscs commented Dec 26, 2023

This is a bug widely spread in GAN-like networks: checkerboard artifacts.
When you do upsampling, you should not use deconvolution (transposed convolution) or any similar layer.
Use resize-convolution instead: an upsampling (interpolation) layer followed by a regular convolution.
See this article: https://distill.pub/2016/deconv-checkerboard/

Some code for beginners:

import torch
import torch.nn as nn


class UpBlock(nn.Module):
    def __init__(self, in_channels, out_channels):
        super(UpBlock, self).__init__()
        # Replace the transposed convolution with interpolation followed by a
        # regular convolution to avoid checkerboard artifacts.
        self.upsample = nn.Upsample(scale_factor=2, mode='bilinear', align_corners=False)  # you can also try mode='nearest'
        self.conv = nn.Conv2d(in_channels, in_channels // 2, kernel_size=3, padding=1)
        # Adjust ConvBlock's input for the concatenated channels (upsampled half + skip half).
        self.conv_block = ConvBlock(in_channels // 2 + in_channels // 2, out_channels)

    def forward(self, x, skip):
        x = self.upsample(x)
        x = self.conv(x)
        x = torch.cat([x, skip], dim=1)
        return self.conv_block(x)
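The snippet above assumes a ConvBlock helper that is never defined in this issue. Here is a minimal sketch of such a block, assuming a plain double 3x3 convolution with ReLU (hypothetical, adapt it to your own network), plus a quick shape check:

import torch
import torch.nn as nn


class ConvBlock(nn.Module):
    # Hypothetical double-conv helper assumed by UpBlock above; adapt to your network.
    def __init__(self, in_channels, out_channels):
        super(ConvBlock, self).__init__()
        self.block = nn.Sequential(
            nn.Conv2d(in_channels, out_channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_channels, out_channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.block(x)


# Quick shape check: a 64-channel decoder feature map fused with a 32-channel skip connection.
up = UpBlock(in_channels=64, out_channels=32)
x = torch.randn(1, 64, 16, 16)      # decoder input
skip = torch.randn(1, 32, 32, 32)   # encoder skip connection at the target resolution
print(up(x, skip).shape)            # torch.Size([1, 32, 32, 32])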
