ValueError: Required crop size is larger than input image size #61
Could I compress my trained full model from junyanz/pytorch-CycleGAN-and-pix2pix without training a MobileNet-style teacher model from scratch?

Of course you can. Please refer to our tutorial on Fast GAN Compression.
There is another error when I train a full model. The last training log line before the error was:

(epoch: 16, iters: 20000, time: 0.331) D_A: 0.165 G_A: 0.363 G_cycle_A: 0.821 G_idt_A: 0.384 D_B: 0.173 G_B: 0.518 G_cycle_B: 0.957 G_idt_B: 0.398

It also occurred at the same epoch when I trained a "once-for-all" network with my full model from junyanz/pytorch-CycleGAN-and-pix2pix, which is why I tried training a full model from scratch.
Hi! I think this error is caused by an inconsistency in the 'scale_width_and_crop' preprocessing between our repo and junyanz/pytorch-CycleGAN-and-pix2pix. In junyanz/pytorch-CycleGAN-and-pix2pix, the aspect ratio of the cropped image is fixed to 1, but in our repo the aspect ratio stays the same as the original image's. It is hard to say which is better, but you could modify a few lines of code to switch to whichever cropping strategy you want.
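In case it helps, the size arithmetic behind that switch can be sketched as follows (a minimal sketch, assuming the junyanz-style `scale_width` clamps the scaled height so a square `crop_size` crop always fits; the helper name here is hypothetical):

```python
def scale_width_size(orig_size, target_width, crop_size):
    """Resized (width, height) under 'scale_width' preprocessing,
    junyanz-style: the height is clamped to at least crop_size so a
    later square RandomCrop(crop_size) can never overflow the image."""
    ow, oh = orig_size
    h = max(int(target_width * oh / ow), crop_size)  # clamp prevents the ValueError
    return target_width, h

# 1084x708 scaled down to width 360: clamped to (360, 360)
print(scale_width_size((1084, 708), 360, 360))
```

Without the `max(..., crop_size)` clamp, the same call would return `(360, 235)`, which is exactly the image size the traceback complains about.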
Thanks, I have solved this problem. My fix was simple: I just replaced base_model.py with the one from junyanz/pytorch-CycleGAN-and-pix2pix.
I am training a MobileNet-style teacher model on my own dataset; the input image size is 1084*708. Here is my running command:

```
python train.py --dataroot database/endoscolor --model cycle_gan --log_dir logs/cycle_gan/endoscolor/mobile --real_stat_A_path real_stat/endoscolor_A.npz --real_stat_B_path real_stat/endoscolor_B.npz --gpu_ids 0 --preprocess scale_width_and_crop --load_size 1084 --crop_size 360
```
I have a trained model from junyanz/pytorch-CycleGAN-and-pix2pix, and I used the same options: `--preprocess scale_width_and_crop --load_size 1084 --crop_size 360`. But I got a ValueError at epoch 8:
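The numbers in the error line up with aspect-ratio-preserving scaling. A quick sanity check (assuming the eval transform scales the width down to `--crop_size`, which is a guess based on the sizes in the traceback):

```python
# 1084x708 image, width scaled to 360 while preserving the aspect ratio
ow, oh = 1084, 708
target_w = 360
h = int(target_w * oh / ow)
print((target_w, h))  # (360, 235): height falls below the 360x360 crop, so RandomCrop fails
```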
```
Traceback (most recent call last):
  File "train.py", line 5, in <module>
    trainer.start()
  File "/home/pengchengh/Codes/gan-compression-master/trainer.py", line 95, in start
    'Saving the latest model (epoch %d, total_iters %d)' % (epoch, total_iter))
  File "/home/pengchengh/Codes/gan-compression-master/trainer.py", line 63, in evaluate
    metrics = self.model.evaluate_model(iter)
  File "/home/pengchengh/Codes/gan-compression-master/models/cycle_gan_model.py", line 282, in evaluate_model
    for i, data_i in enumerate(tqdm(eval_dataloader, desc='Eval %s ' % direction, position=2, leave=False)):
  File "/home/pengchengh/anaconda3/envs/pytorch/lib/python3.6/site-packages/tqdm/std.py", line 1107, in __iter__
    for obj in iterable:
  File "/home/pengchengh/Codes/gan-compression-master/data/__init__.py", line 113, in __iter__
    for i, data in enumerate(self.dataloader):
  File "/home/pengchengh/anaconda3/envs/pytorch/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 435, in __next__
    data = self._next_data()
  File "/home/pengchengh/anaconda3/envs/pytorch/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 1085, in _next_data
    return self._process_data(data)
  File "/home/pengchengh/anaconda3/envs/pytorch/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 1111, in _process_data
    data.reraise()
  File "/home/pengchengh/anaconda3/envs/pytorch/lib/python3.6/site-packages/torch/_utils.py", line 428, in reraise
    raise self.exc_type(msg)
ValueError: Caught ValueError in DataLoader worker process 0.
Original Traceback (most recent call last):
  File "/home/pengchengh/anaconda3/envs/pytorch/lib/python3.6/site-packages/torch/utils/data/_utils/worker.py", line 198, in _worker_loop
    data = fetcher.fetch(index)
  File "/home/pengchengh/anaconda3/envs/pytorch/lib/python3.6/site-packages/torch/utils/data/_utils/fetch.py", line 44, in fetch
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "/home/pengchengh/anaconda3/envs/pytorch/lib/python3.6/site-packages/torch/utils/data/_utils/fetch.py", line 44, in <listcomp>
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "/home/pengchengh/Codes/gan-compression-master/data/single_dataset.py", line 36, in __getitem__
    A = self.transform(A_img)
  File "/home/pengchengh/anaconda3/envs/pytorch/lib/python3.6/site-packages/torchvision/transforms/transforms.py", line 67, in __call__
    img = t(img)
  File "/home/pengchengh/anaconda3/envs/pytorch/lib/python3.6/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/home/pengchengh/anaconda3/envs/pytorch/lib/python3.6/site-packages/torchvision/transforms/transforms.py", line 585, in forward
    i, j, h, w = self.get_params(img, self.size)
  File "/home/pengchengh/anaconda3/envs/pytorch/lib/python3.6/site-packages/torchvision/transforms/transforms.py", line 542, in get_params
    "Required crop size {} is larger then input image size {}".format((th, tw), (h, w))
ValueError: Required crop size (360, 360) is larger then input image size (235, 360)
```
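For reference, the check that raises this error inside torchvision's `RandomCrop.get_params` boils down to a simple size comparison (the exact condition varies slightly across torchvision versions). A standalone guard like the hypothetical helper below can validate image sizes before a long training run:

```python
def check_crop(img_size, crop_size):
    """Raise ValueError if a crop_size x crop_size patch cannot fit
    inside an image of size img_size = (width, height).
    Mirrors the failure mode in torchvision's RandomCrop.get_params."""
    w, h = img_size
    if h < crop_size or w < crop_size:
        raise ValueError(
            "Required crop size {} is larger than input image size {}".format(
                (crop_size, crop_size), (h, w)))

check_crop((1084, 708), 360)      # training-time size: fits
try:
    check_crop((360, 235), 360)   # eval-time size from the traceback
except ValueError as e:
    print(e)                      # same (360, 360) vs (235, 360) mismatch
```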