An alternative to reconstruct large image by parts #238
Why we should care
Many architectures use padding during training, so if waifu2x used padding it could initialize training from other models' checkpoints. Such models can also concatenate layers (e.g. DCSCN) or easily add residuals.
Replication Border Padding + Overlapping Splitting
I first pad the whole image border with replicated edge values, then split it into pieces. Each piece has an overlapping border. After rescaling the pieces, I cut off the overlapped parts and merge them back into one image. Finally, I also cut off the padded border of the merged image.
Overlapping by 3 pixels seems to be enough, though a larger value might be better.
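The pad/split/trim/merge bookkeeping can be sanity-checked without a model: pad the image, cut overlapping tiles, run the identity in place of the network, trim the overlaps, and merge; the result should equal the original image exactly. A minimal PyTorch sketch (the values `seg=8` and `pad=3` are illustrative, and the image dimensions are assumed to be multiples of `seg`):

```python
import torch
import torch.nn as nn

def split_merge_identity(img, seg=8, pad=3):
    """Pad an (N, C, H, W) tensor, split it into overlapping tiles,
    'process' each tile with the identity, trim the overlaps, and
    merge back. Returns a tensor equal to the input."""
    img = nn.ReplicationPad2d(pad)(img)
    _, _, height, width = img.size()
    out = torch.zeros_like(img)
    for i in range(pad, height - pad, seg):
        for j in range(pad, width - pad, seg):
            tile = img[:, :, i - pad:min(i + seg + pad, height),
                             j - pad:min(j + seg + pad, width)]
            processed = tile  # the identity stands in for the model
            # cut the pad-pixel overlap on each side of the tile
            trimmed = processed[:, :, pad:pad + seg, pad:pad + seg]
            _, _, h, w = trimmed.size()
            out[:, :, i:i + h, j:j + w] = trimmed
    # finally, cut the replication padding off the merged image
    return out[:, :, pad:-pad, pad:-pad]
```

Since no rescaling happens here, the trim is `pad` pixels per side rather than `2 * pad` as in the 2x case below; the round trip reproducing the input confirms the tile coordinates are consistent.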
Here is a naive and buggy example. The image is sliced from the top left, so if a slice's width is smaller than the padded width, the code raises an error.
```python
import torch
import torch.nn as nn
from PIL import Image
from torchvision.transforms.functional import to_tensor

model = DCSCN(...)  # model construction args elided

# load an image and make a bilinear 2x upscale as the model's second input
img = Image.open("2.png").convert("RGB")
img_up = img.resize((2 * img.size[0], 2 * img.size[1]), Image.BILINEAR)
img = to_tensor(img).unsqueeze(0)
img_up = to_tensor(img_up).unsqueeze(0)

# main
seg = 78          # tile size without the overlap
padded = 3        # overlap on each side of a tile
rem = padded * 2  # overlap measured in 2x output pixels
img = nn.ReplicationPad2d(padded)(img)
batch, channel, height, width = img.size()
final = torch.zeros((1, 3, height * 2, width * 2))
for i in range(padded, height, seg):
    for j in range(padded, width, seg):
        part = img[:, :, (i - padded):min(i + seg + padded, height),
                         (j - padded):min(j + seg + padded, width)]
        part_u = nn.Upsample(scale_factor=2, mode='bilinear', align_corners=False)(part)
        out = model.forward_checkpoint((part, part_u))
        # remove the overlap; may raise an error on small edge tiles
        out = out[:, :, rem:-rem, rem:-rem]
        _, _, p_h, p_w = out.size()
        final[:, :, 2 * i:2 * i + p_h, 2 * j:2 * j + p_w] = out
final_ = final[:, :, rem:-rem, rem:-rem]
```
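One way to remove that failure mode is to clamp the start of the last row and column of tiles so that every tile has the full `seg + 2*pad` extent; the `rem:-rem` trim then never produces an empty slice, and the clamped tiles simply overwrite a region that was already written consistently. A hedged sketch, with the network abstracted into a `model_fn` callable (`upscale_by_parts` and `model_fn` are illustrative names, not waifu2x's API); it assumes the image is at least `seg` pixels in each dimension:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def upscale_by_parts(img, model_fn, scale=2, seg=78, pad=3):
    """Upscale an (N, C, H, W) tensor tile by tile.

    `model_fn` maps a tile to a tile `scale` times larger. The start of
    the last row/column of tiles is clamped so every tile has the full
    seg + 2*pad extent, so trimming the overlap never fails.
    """
    img = nn.ReplicationPad2d(pad)(img)
    n, c, height, width = img.size()
    rem = pad * scale  # overlap measured in output pixels
    out = torch.zeros((n, c, height * scale, width * scale))
    ys = list(range(pad, height - pad, seg))
    xs = list(range(pad, width - pad, seg))
    ys[-1] = min(ys[-1], height - pad - seg)  # clamp the last tile start
    xs[-1] = min(xs[-1], width - pad - seg)
    for i in ys:
        for j in xs:
            tile = img[:, :, i - pad:i + seg + pad, j - pad:j + seg + pad]
            up = model_fn(tile)
            # every tile has full size, so this trim is always valid
            up = up[:, :, rem:rem + seg * scale, rem:rem + seg * scale]
            out[:, :, i * scale:(i + seg) * scale,
                      j * scale:(j + seg) * scale] = up
    # drop the replication padding from the merged result
    return out[:, :, rem:-rem, rem:-rem]
```

With `model_fn = lambda t: F.interpolate(t, scale_factor=2, mode='nearest')` as a stand-in model, the output reproduces a plain nearest-neighbour 2x upscale of the whole image, which makes the tiling easy to verify.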