Dimension mismatch issue when running on a local PC #28

Closed
ujjawalcse opened this issue Mar 7, 2022 · 4 comments

@ujjawalcse

Hey @YuliangXiu,
I tried to set up the full environment on my Ubuntu 18.04 PC (PyTorch 1.6, CUDA 10.1), installing the dependencies from requirements.txt one by one, and ran into a lot of issues along the way.
After that, loading the model in the rembg module kept failing, so I manually downloaded the model file and modified rembg accordingly. I also corrected the process_image function in lib/pymaf/utils/imutils.py.

In my case this produces img_hps with shape [3, 224, 224], which is then fed to pymaf_net.py (line 282) to extract features with the configured backbone (res50).
But the backbone's first convolution has 4-dimensional weights of shape [64, 3, 7, 7] and therefore expects a batched 4-dimensional input, which is why I'm getting the dimension mismatch runtime error.
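
To confirm that only the batch dimension is missing, here is a quick standalone check I put together (not code from the repo, just a sketch using the same conv configuration as the res50 stem):

import torch
import torch.nn as nn

# Same weight shape [64, 3, 7, 7] as the first conv of the res50 backbone.
conv1 = nn.Conv2d(3, 64, kernel_size=7, stride=2, padding=3)

x = torch.randn(3, 224, 224)        # unbatched, like my img_hps
# conv1(x)                          # on PyTorch 1.6 this raises the same
#                                   # "Expected 4-dimensional input" RuntimeError
out = conv1(x.unsqueeze(0))         # batched input [1, 3, 224, 224] works
print(out.shape)                    # torch.Size([1, 64, 112, 112])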

Note: I also modified image_to_pymaf_tensor in get_transformer() in lib/pymaf/utils/imutils.py to work with my PyTorch version:

from torchvision import transforms
# constants.IMG_NORM_MEAN / constants.IMG_NORM_STD are the normalization
# values already imported in lib/pymaf/utils/imutils.py

image_to_pymaf_tensor = transforms.Compose([
        transforms.ToPILImage(),                   # Added by us
        transforms.Resize(224),
        transforms.ToTensor(),                     # Added by us
        transforms.Normalize(mean=constants.IMG_NORM_MEAN,
                             std=constants.IMG_NORM_STD)
    ])
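
With that transform, a quick check on a dummy 512x512 crop (the same size as img_np in the log below) confirms the output is an unbatched tensor; this is just a sketch reusing the image_to_pymaf_tensor defined above:

import numpy as np

img_np = np.zeros((512, 512, 3), dtype=np.uint8)   # stand-in for the cropped frame
img_hps = image_to_pymaf_tensor(img_np)
print(img_hps.shape)                               # torch.Size([3, 224, 224]) -- no batch dim

For reference, here is the full console output and traceback: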
ICON:
[w/ Global Image Encoder]: True
[Image Features used by MLP]: ['normal_F', 'normal_B']
[Geometry Features used by MLP]: ['sdf', 'norm', 'vis', 'cmap']
[Dim of Image Features (local)]: 6
[Dim of Geometry Features (ICON)]: 7
[Dim of MLP's first layer]: 13

initialize network with xavier
initialize network with xavier
Resume MLP weights from ../data/ckpt/icon-filter.ckpt
Resume normal model from ../data/ckpt/normal.ckpt
Using cache found in /home/ujjawal/.cache/torch/hub/NVIDIA_DeepLearningExamples_torchhub
Using cache found in /home/ujjawal/.cache/torch/hub/NVIDIA_DeepLearningExamples_torchhub
Dataset Size: 2
  0%|                                                                                                                                             | 0/2 [00:00<?, ?it/s]*********************************
img_np shape: (512, 512, 3)
img_hps shape: torch.Size([3, 224, 224])
input shape x in pymaf_net : torch.Size([3, 224, 224])
input shape x in hmr : torch.Size([3, 224, 224])
  0%|                                                                                                                                             | 0/2 [00:01<?, ?it/s]
Traceback (most recent call last):
  File "infer.py", line 97, in <module>
    for data in pbar:
  File "/home/ujjawal/miniconda2/envs/caffe2/lib/python3.7/site-packages/tqdm/std.py", line 1130, in __iter__
    for obj in iterable:
  File "../lib/dataset/TestDataset.py", line 166, in __getitem__
    preds_dict = self.hps(img_hps.to(self.device))
  File "/home/ujjawal/miniconda2/envs/caffe2/lib/python3.7/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
    result = self.forward(*input, **kwargs)
  File "../lib/pymaf/models/pymaf_net.py", line 285, in forward
    s_feat, g_feat = self.feature_extractor(x)
  File "/home/ujjawal/miniconda2/envs/caffe2/lib/python3.7/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
    result = self.forward(*input, **kwargs)
  File "../lib/pymaf/models/hmr.py", line 159, in forward
    x = self.conv1(x)
  File "/home/ujjawal/miniconda2/envs/caffe2/lib/python3.7/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/home/ujjawal/miniconda2/envs/caffe2/lib/python3.7/site-packages/torch/nn/modules/conv.py", line 419, in forward
    return self._conv_forward(input, self.weight)
  File "/home/ujjawal/miniconda2/envs/caffe2/lib/python3.7/site-packages/torch/nn/modules/conv.py", line 416, in _conv_forward
    self.padding, self.dilation, self.groups)
RuntimeError: Expected 4-dimensional input for 4-dimensional weight [64, 3, 7, 7], but got 3-dimensional input of size [3, 224, 224] instead
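
As a possible workaround (just a sketch on my side, I'm not sure it is the intended fix), I'm thinking of adding the missing batch dimension right before the HPS call in ../lib/dataset/TestDataset.py (line 166 in the traceback above), so that pymaf_net receives a [1, 3, 224, 224] tensor:

# ../lib/dataset/TestDataset.py, __getitem__ (sketch)
# img_hps currently has shape [3, 224, 224]; conv1 of the res50 backbone
# needs a batched 4-D input [N, 3, 224, 224].
preds_dict = self.hps(img_hps.unsqueeze(0).to(self.device))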

Please let me know your thoughts on this.

@YuliangXiu (Owner) commented Mar 7, 2022

I just updated the installation scripts and pinned the package versions in requirements.txt; Colab works fine now.

Google Colab

You could do a git pull and try again.

@ujjawalcse (Author)

Thanks @YuliangXiu,
it's working fine on Colab now. I'll try it locally as well.
Could you tell me how to get the animated clothed model mentioned in the README?

@YuliangXiu (Owner)

Please see #17

@ujjawalcse (Author)

Thanks for your suggestion.
Closing the issue.

@YuliangXiu added the rembg label on Jul 14, 2022