Is it possible to make the final inpainted image the same size as the input image?
I tried the same and got this error:
Using cpu.
Model model/model_places2.pth loaded.
Inpainting...
Input size: (500, 333)
Traceback (most recent call last):
File "test.py", line 263, in <module>
tester.inpaint(args.output, args.img, args.mask, merge_result=args.merge)
File "test.py", line 221, in inpaint
self.process_batch(batch, output)
File "test.py", line 172, in process_batch
result, alpha, raw = self.model(imgs_miss, masks)
File "/home/sadbhawna/anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 532, in __call__
result = self.forward(*input, **kwargs)
File "/home/sadbhawna/torch/DFNet/model.py", line 260, in forward
out = decode(out, out_en[-i-2])
File "/home/sadbhawna/anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 532, in __call__
result = self.forward(*input, **kwargs)
File "/home/sadbhawna/torch/DFNet/model.py", line 147, in forward
out = torch.cat([out, concat], dim=1)
RuntimeError: invalid argument 0: Sizes of tensors must match except in dimension 1. Got 2 and 3 in dimension 3 at /opt/conda/conda-bld/pytorch_1574150980135/work/aten/src/TH/generic/THTensor.cpp:612
U-Net-like networks do have this limitation: the height and width must be divisible by 2^n (where n is the number of downsampling steps), because the image is downsampled and then upsampled.
You can work around it by:
resizing the input image so that its height and width satisfy the condition above
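To see why 333 fails, you can trace the feature-map sizes through the encoder. The sketch below assumes ceil-style stride-2 downsampling and 2x upsampling; the exact sizes depend on DFNet's convolution parameters, so treat it as an illustration of the mismatch, not the model's actual arithmetic:

```python
import math

def skip_sizes(size, levels):
    """Spatial sizes of encoder features after repeated stride-2 downsampling
    (assuming each step roughly halves the size, rounding up)."""
    sizes = [size]
    for _ in range(levels):
        size = math.ceil(size / 2)
        sizes.append(size)
    return sizes

def mismatches(size, levels):
    """Pairs (skip_size, upsampled_size) where a 2x upsample of the deeper
    feature no longer matches the encoder skip it must be concatenated with."""
    sizes = skip_sizes(size, levels)
    return [(sizes[i], 2 * sizes[i + 1])
            for i in range(levels)
            if sizes[i] != 2 * sizes[i + 1]]
```

With `size=333` the odd intermediate sizes (e.g. 167, 21, 11) cannot be recovered by doubling, so `torch.cat` sees tensors of different widths; with `size=320` (divisible by 2^6) every level matches.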
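Instead of resizing (which distorts the image), you can also reflect-pad the image and mask up to the next valid size, inpaint, and crop the result back so the output matches the original input size. This is a minimal sketch with NumPy arrays; `multiple=64` assumes 6 downsampling levels, so adjust it to the network's actual depth:

```python
import numpy as np

def pad_to_multiple(img, multiple=64):
    """Reflect-pad an HxWxC image so H and W are multiples of `multiple`.
    Returns the padded image and the original (H, W) for cropping back."""
    h, w = img.shape[:2]
    pad_h = (multiple - h % multiple) % multiple
    pad_w = (multiple - w % multiple) % multiple
    padded = np.pad(img, ((0, pad_h), (0, pad_w), (0, 0)), mode="reflect")
    return padded, (h, w)

def crop_back(result, size):
    """Crop an inpainted result back to the original (H, W)."""
    h, w = size
    return result[:h, :w]
```

Pad both the image and the mask the same way before feeding them to the model, then crop the model output with `crop_back` to get a result the same size as the input.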