
ValueError: Required crop size is larger than input image size #61

Closed
PatrickHaopc opened this issue Dec 24, 2020 · 5 comments

@PatrickHaopc

I am training a MobileNet-style teacher model on my own dataset; the input image size is 1084×708. Here is my running command:
python train.py --dataroot database/endoscolor --model cycle_gan --log_dir logs/cycle_gan/endoscolor/mobile --real_stat_A_path real_stat/endoscolor_A.npz --real_stat_B_path real_stat/endoscolor_B.npz --gpu_ids 0 --preprocess scale_width_and_crop --load_size 1084 --crop_size 360
I have a trained model from "junyanz/pytorch-CycleGAN-and-pix2pix", and I used the same options ("--preprocess scale_width_and_crop --load_size 1084 --crop_size 360"), but I got a ValueError at epoch 8:

Traceback (most recent call last):
  File "train.py", line 5, in <module>
    trainer.start()
  File "/home/pengchengh/Codes/gan-compression-master/trainer.py", line 95, in start
    'Saving the latest model (epoch %d, total_iters %d)' % (epoch, total_iter))
  File "/home/pengchengh/Codes/gan-compression-master/trainer.py", line 63, in evaluate
    metrics = self.model.evaluate_model(iter)
  File "/home/pengchengh/Codes/gan-compression-master/models/cycle_gan_model.py", line 282, in evaluate_model
    for i, data_i in enumerate(tqdm(eval_dataloader, desc='Eval %s ' % direction, position=2, leave=False)):
  File "/home/pengchengh/anaconda3/envs/pytorch/lib/python3.6/site-packages/tqdm/std.py", line 1107, in __iter__
    for obj in iterable:
  File "/home/pengchengh/Codes/gan-compression-master/data/__init__.py", line 113, in __iter__
    for i, data in enumerate(self.dataloader):
  File "/home/pengchengh/anaconda3/envs/pytorch/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 435, in __next__
    data = self._next_data()
  File "/home/pengchengh/anaconda3/envs/pytorch/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 1085, in _next_data
    return self._process_data(data)
  File "/home/pengchengh/anaconda3/envs/pytorch/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 1111, in _process_data
    data.reraise()
  File "/home/pengchengh/anaconda3/envs/pytorch/lib/python3.6/site-packages/torch/_utils.py", line 428, in reraise
    raise self.exc_type(msg)
ValueError: Caught ValueError in DataLoader worker process 0.
Original Traceback (most recent call last):
  File "/home/pengchengh/anaconda3/envs/pytorch/lib/python3.6/site-packages/torch/utils/data/_utils/worker.py", line 198, in _worker_loop
    data = fetcher.fetch(index)
  File "/home/pengchengh/anaconda3/envs/pytorch/lib/python3.6/site-packages/torch/utils/data/_utils/fetch.py", line 44, in fetch
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "/home/pengchengh/anaconda3/envs/pytorch/lib/python3.6/site-packages/torch/utils/data/_utils/fetch.py", line 44, in <listcomp>
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "/home/pengchengh/Codes/gan-compression-master/data/single_dataset.py", line 36, in __getitem__
    A = self.transform(A_img)
  File "/home/pengchengh/anaconda3/envs/pytorch/lib/python3.6/site-packages/torchvision/transforms/transforms.py", line 67, in __call__
    img = t(img)
  File "/home/pengchengh/anaconda3/envs/pytorch/lib/python3.6/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/home/pengchengh/anaconda3/envs/pytorch/lib/python3.6/site-packages/torchvision/transforms/transforms.py", line 585, in forward
    i, j, h, w = self.get_params(img, self.size)
  File "/home/pengchengh/anaconda3/envs/pytorch/lib/python3.6/site-packages/torchvision/transforms/transforms.py", line 542, in get_params
    "Required crop size {} is larger then input image size {}".format((th, tw), (h, w))
ValueError: Required crop size (360, 360) is larger then input image size (235, 360)
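As context for the (235, 360) in that last line, here is a hedged sketch of an aspect-preserving scale-to-width resize (the helper name is illustrative, not the repo's actual code). Assuming the evaluation pipeline scales images down to the crop width rather than to `--load_size`, a 1084×708 source scaled to width 360 ends up only about 235 pixels tall, which is shorter than the required 360 crop:

```python
def scale_width(orig_w, orig_h, target_w):
    """Resize so that width == target_w while preserving the aspect ratio."""
    target_h = int(round(orig_h * target_w / orig_w))
    return target_w, target_h

# Training: 1084x708 scaled to width 1084 keeps height 708, so a 360 crop fits.
print(scale_width(1084, 708, 1084))  # (1084, 708)

# Evaluation (assumed): scaled to width 360 -> height 708 * 360 / 1084 ~= 235,
# smaller than the 360 crop, matching "input image size (235, 360)" above.
print(scale_width(1084, 708, 360))   # (360, 235)
```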

@PatrickHaopc
Author

Could I compress my trained full model from "junyanz/pytorch-CycleGAN-and-pix2pix" without training a MobileNet-style teacher model from scratch?

@lmxyy
Collaborator

lmxyy commented Dec 24, 2020

Could I compress my trained full model from "junyanz/pytorch-CycleGAN-and-pix2pix" without training a MobileNet-style teacher model from scratch?

Of course, you can. Please refer to our tutorial of Fast GAN Compression.

@PatrickHaopc
Author

PatrickHaopc commented Dec 25, 2020

Could I compress my trained full model from "junyanz/pytorch-CycleGAN-and-pix2pix" without training a MobileNet-style teacher model from scratch?

Of course, you can. Please refer to our tutorial of Fast GAN Compression.

Another error occurs when I train a full model with this command:
"python train.py --dataroot database/endoscolor --model cycle_gan --netG resnet_9blocks --log_dir logs/cycle_gan/endoscolor/full --real_stat_A_path real_stat/endoscolor_A.npz --real_stat_B_path real_stat/endoscolor_B.npz --gpu_ids 0 --preprocess scale_width_and_crop --load_size 1084",


(epoch: 16, iters: 20000, time: 0.331) D_A: 0.165 G_A: 0.363 G_cycle_A: 0.821 G_idt_A: 0.384 D_B: 0.173 G_B: 0.518 G_cycle_B: 0.957 G_idt_B: 0.398
Epoch:   8%| 15/200 [1:39:27<20:39:52, 402.12s/it]
Traceback (most recent call last):
  File "train.py", line 5, in <module>
trainer.start()
File "/home/pengchengh/Codes/gan-compression-master/trainer.py", line 94, in start
self.evaluate(epoch, total_iter,
File "/home/pengchengh/Codes/gan-compression-master/trainer.py", line 63, in evaluate
metrics = self.model.evaluate_model(iter)
File "/home/pengchengh/Codes/gan-compression-master/models/cycle_gan_model.py", line 282, in evaluate_model
for i, data_i in enumerate(tqdm(eval_dataloader, desc='Eval %s ' % direction, position=2, leave=False)):
File "/home/pengchengh/anaconda3/envs/pytorch1.4/lib/python3.8/site-packages/tqdm/std.py", line 1107, in __iter__
for obj in iterable:
File "/home/pengchengh/Codes/gan-compression-master/data/__init__.py", line 113, in __iter__
for i, data in enumerate(self.dataloader):
File "/home/pengchengh/anaconda3/envs/pytorch1.4/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 345, in __next__
data = self._next_data()
File "/home/pengchengh/anaconda3/envs/pytorch1.4/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 856, in _next_data
return self._process_data(data)
File "/home/pengchengh/anaconda3/envs/pytorch1.4/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 881, in _process_data
data.reraise()
File "/home/pengchengh/anaconda3/envs/pytorch1.4/lib/python3.8/site-packages/torch/_utils.py", line 394, in reraise
raise self.exc_type(msg)
ValueError: Caught ValueError in DataLoader worker process 0.
Original Traceback (most recent call last):
File "/home/pengchengh/anaconda3/envs/pytorch1.4/lib/python3.8/site-packages/torch/utils/data/_utils/worker.py", line 178, in _worker_loop
data = fetcher.fetch(index)
File "/home/pengchengh/anaconda3/envs/pytorch1.4/lib/python3.8/site-packages/torch/utils/data/_utils/fetch.py", line 44, in fetch
data = [self.dataset[idx] for idx in possibly_batched_index]
File "/home/pengchengh/anaconda3/envs/pytorch1.4/lib/python3.8/site-packages/torch/utils/data/_utils/fetch.py", line 44, in <listcomp>
data = [self.dataset[idx] for idx in possibly_batched_index]
File "/home/pengchengh/Codes/gan-compression-master/data/single_dataset.py", line 36, in __getitem__
A = self.transform(A_img)
File "/home/pengchengh/anaconda3/envs/pytorch1.4/lib/python3.8/site-packages/torchvision/transforms/transforms.py", line 70, in __call__
img = t(img)
File "/home/pengchengh/anaconda3/envs/pytorch1.4/lib/python3.8/site-packages/torchvision/transforms/transforms.py", line 483, in __call__
i, j, h, w = self.get_params(img, self.size)
File "/home/pengchengh/anaconda3/envs/pytorch1.4/lib/python3.8/site-packages/torchvision/transforms/transforms.py", line 461, in get_params
i = random.randint(0, h - th)
File "/home/pengchengh/anaconda3/envs/pytorch1.4/lib/python3.8/random.py", line 248, in randint
return self.randrange(a, b+1)
File "/home/pengchengh/anaconda3/envs/pytorch1.4/lib/python3.8/random.py", line 226, in randrange
raise ValueError("empty range for randrange() (%d, %d, %d)" % (istart, istop, width))
ValueError: empty range for randrange() (0, -88, -88)


It also occurred at the same epoch when I trained a "once-for-all" network with my full model from "junyanz/pytorch-CycleGAN-and-pix2pix"; that's why I tried training a full model from scratch.
I googled this problem, and someone explained that it is the random crop going out of bounds: transforms.RandomCrop calls random.randint(a, b), which returns an integer n with a <= n <= b. If a == b then n == a, and if a > b it raises an error.
Should I adjust the crop_size?
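The explanation above can be sketched in a few lines. This mimics the draw inside torchvision's RandomCrop.get_params (the helper name and the 271-pixel height are illustrative, chosen to match the -89 implied by the traceback; the exact ValueError wording varies by Python version):

```python
import random

def random_crop_top(img_h, crop_h):
    """Pick the top edge of the crop uniformly in [0, img_h - crop_h],
    as RandomCrop.get_params does via random.randint."""
    return random.randint(0, img_h - crop_h)

# Fits: a 360-pixel crop inside a 708-pixel-tall image.
top = random_crop_top(708, 360)

# Fails: the image is shorter than the crop, so img_h - crop_h is negative
# and random.randint sees an empty range, raising the ValueError above.
try:
    random_crop_top(271, 360)  # 271 - 360 == -89
except ValueError:
    pass
```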

@PatrickHaopc PatrickHaopc changed the title ValueError: Required crop size is larger then input image size ValueError: Required crop size is larger than input image size Dec 25, 2020
@lmxyy
Collaborator

lmxyy commented Dec 25, 2020

Hi! I think this error is caused by an inconsistency in the 'scale_width_and_crop' preprocessing between our repo and junyanz/pytorch-CycleGAN-and-pix2pix.

In junyanz/pytorch-CycleGAN-and-pix2pix, the aspect ratio of the cropped image is fixed to 1, but in our repo the aspect ratio is the same as the original image's. It is hard to tell which one is better, but you could modify a bit of code to switch to the cropping strategy you want.
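To make the two conventions concrete, here is an illustrative sketch (hypothetical helper names, not code from either repo) of the crop sizes each strategy would request for a 1084×708 source:

```python
def square_crop(crop_size):
    """pytorch-CycleGAN-and-pix2pix style: the crop is always square
    (aspect ratio fixed to 1), so it can exceed a short image."""
    return (crop_size, crop_size)

def aspect_crop(crop_size, img_w, img_h):
    """gan-compression style: the crop keeps the source aspect ratio,
    so the crop height shrinks for wide images."""
    return (crop_size, int(round(crop_size * img_h / img_w)))

print(square_crop(360))            # (360, 360)
print(aspect_crop(360, 1084, 708)) # (360, 235)
```

With a wide 1084×708 source, the square convention asks for a 360×360 patch that a width-scaled image may be too short to provide, while the aspect-preserving convention only asks for 360×235.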

@PatrickHaopc
Author

Hi! I think this error is caused by an inconsistency in the 'scale_width_and_crop' preprocessing between our repo and junyanz/pytorch-CycleGAN-and-pix2pix.

In junyanz/pytorch-CycleGAN-and-pix2pix, the aspect ratio of the cropped image is fixed to 1, but in our repo the aspect ratio is the same as the original image's. It is hard to tell which one is better, but you could modify a bit of code to switch to the cropping strategy you want.

Thanks, I have solved this problem. My fix was simple: I just replaced base_model.py with the version from junyanz/pytorch-CycleGAN-and-pix2pix.

@PatrickHaopc PatrickHaopc reopened this Dec 26, 2020
@lmxyy lmxyy closed this as completed Dec 28, 2020