RuntimeError: invalid argument 0: Sizes of tensors must match except in dimension 0. Got 940 and 1254 in dimension 2 #86
Comments
Resize your videos to the same 256x256 size prior to training, as explained in README.md.
Thanks @AliaksandrSiarohin
No. This script is not for resizing. Use xargs + ffmpeg to resize all the videos.
@AliaksandrSiarohin |
The crop-video.py script is only for faces. Fashion is a full-body dataset, so it is not clear why you are using it there. You should resize all the videos. Write a Python script to do this, or use xargs + ffmpeg.
@AliaksandrSiarohin |
GitHub issues are for reporting project-specific bugs... Use this Python script to resize videos, adjusting d and d_out appropriately:
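The script itself was not captured in this thread. Below is a minimal reconstruction sketch, not the original code: it walks an input directory `d`, rescales each video to 256x256 by shelling out to ffmpeg, and writes the results to `d_out` (the variable names `d` and `d_out` come from the comment above; the directory names are placeholders).

```python
import os
import subprocess

# Placeholder paths -- adjust d and d_out appropriately.
d = "fashion-videos"          # input directory (assumed name)
d_out = "fashion-videos-256"  # output directory (assumed name)


def ffmpeg_resize_command(src, dst, size=256):
    """Build an ffmpeg command that rescales one video to size x size."""
    return ["ffmpeg", "-y", "-i", src,
            "-vf", "scale={0}:{0}".format(size), dst]


def resize_all(d, d_out, size=256):
    """Resize every video found in d, writing results to d_out."""
    os.makedirs(d_out, exist_ok=True)
    for name in sorted(os.listdir(d)):
        src = os.path.join(d, name)
        dst = os.path.join(d_out, name)
        subprocess.run(ffmpeg_resize_command(src, dst, size), check=True)


if __name__ == "__main__":
    resize_all(d, d_out)
```

Note that `scale=256:256` ignores the source aspect ratio; depending on the dataset you may prefer to pad or crop instead of stretching.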
Thanks for your help.
Hi, good job! @AliaksandrSiarohin
My command is "CUDA_VISIBLE_DEVICES=0,1 python run.py --config config/fashion-256.yaml --device_ids 0,1"
My fashion-dataset folder is
(I did not preprocess the data, e.g. no crop operation.)
And I get the following error:
Traceback (most recent call last):
  File "run.py", line 81, in <module>
    train(config, generator, discriminator, kp_detector, opt.checkpoint, log_dir, dataset, opt.device_ids)
  File "/remote-home/my/pycharmprojects/first-order-model/train.py", line 50, in train
    for x in dataloader:
  File "/usr/local/miniconda3/envs/animation1/lib/python3.5/site-packages/torch/utils/data/dataloader.py", line 637, in __next__
    return self._process_next_batch(batch)
  File "/usr/local/miniconda3/envs/animation1/lib/python3.5/site-packages/torch/utils/data/dataloader.py", line 658, in _process_next_batch
    raise batch.exc_type(batch.exc_msg)
RuntimeError: Traceback (most recent call last):
  File "/usr/local/miniconda3/envs/animation1/lib/python3.5/site-packages/torch/utils/data/dataloader.py", line 138, in _worker_loop
    samples = collate_fn([dataset[i] for i in batch_indices])
  File "/usr/local/miniconda3/envs/animation1/lib/python3.5/site-packages/torch/utils/data/dataloader.py", line 229, in default_collate
    return {key: default_collate([d[key] for d in batch]) for key in batch[0]}
  File "/usr/local/miniconda3/envs/animation1/lib/python3.5/site-packages/torch/utils/data/dataloader.py", line 229, in <dictcomp>
    return {key: default_collate([d[key] for d in batch]) for key in batch[0]}
  File "/usr/local/miniconda3/envs/animation1/lib/python3.5/site-packages/torch/utils/data/dataloader.py", line 218, in default_collate
    return torch.stack([torch.from_numpy(b) for b in batch], 0)
RuntimeError: invalid argument 0: Sizes of tensors must match except in dimension 0. Got 940 and 1254 in dimension 2 at /pytorch/aten/src/TH/generic/THTensorMoreMath.cpp:1333
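The traceback comes from `default_collate` calling `torch.stack` on videos of different widths (940 vs. 1254 in dimension 2), which is exactly why unresized videos cannot be batched. A stdlib-only sketch of the shape check that fails (the function name `check_stackable` is illustrative, not part of PyTorch):

```python
def check_stackable(shapes):
    """Mimic the shape check behind torch.stack: every item in the batch
    must have the same shape, otherwise collation raises RuntimeError."""
    first = shapes[0]
    for s in shapes[1:]:
        if s != first:
            # Reproduce the style of the error from the traceback above.
            raise RuntimeError(
                "Sizes of tensors must match except in dimension 0. "
                "Got {} and {} in dimension 2".format(first[2], s[2]))
    return True


# Two 256x256 videos collate fine; a 940-wide and a 1254-wide one do not.
check_stackable([(3, 256, 256), (3, 256, 256)])
```

After resizing every video to 256x256, all items in a batch share one shape and this error disappears.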