Hey @YuliangXiu ,
I tried to set up the complete dependencies on my Ubuntu 18.04 PC (PyTorch 1.6, CUDA 10.1), installing everything in requirements.txt one by one, and ran into quite a few issues along the way.
After that I was getting an error while loading the model in the rembg module, so I manually downloaded the model file and modified rembg accordingly. I also corrected the process_image function in lib/pymaf/utils/imutils.py.
This produces img_hps with shape [3, 224, 224] in my case, which is then fed to pymaf_net.py at line 282 to extract features with the defined backbone (res50).
However, the first conv layer of this backbone has weights of shape [64, 3, 7, 7] and expects a 4-dimensional (batched) input, which is why I'm getting the dimension-mismatch runtime error.
Note: I have modified image_to_pymaf_tensor in get_transformer() from lib/pymaf/utils/imutils.py as follows, to match my PyTorch version:
image_to_pymaf_tensor = transforms.Compose([
    transforms.ToPILImage(),   # added by us
    transforms.Resize(224),
    transforms.ToTensor(),     # added by us
    transforms.Normalize(mean=constants.IMG_NORM_MEAN,
                         std=constants.IMG_NORM_STD)
])
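Looking at the traceback, my guess is that img_hps is simply missing a batch dimension: the [64, 3, 7, 7] shape in the error is the weight of the backbone's first conv layer, which needs a 4-D [N, 3, 224, 224] input. Below is a minimal sketch of the workaround I have in mind (prepare_hps_input is a hypothetical helper of mine, not something from the repo):

import torch

# Hedged sketch (my own workaround, not code from the repo): after img_hps
# comes out of image_to_pymaf_tensor with shape [3, 224, 224], add a batch
# dimension before it reaches the res50 backbone in pymaf_net.py.
def prepare_hps_input(img_hps: torch.Tensor) -> torch.Tensor:
    # conv1 of res50 (weight [64, 3, 7, 7]) expects a 4-D input [N, 3, 224, 224]
    if img_hps.dim() == 3:
        img_hps = img_hps.unsqueeze(0)  # [3, 224, 224] -> [1, 3, 224, 224]
    return img_hps

# Usage inside TestDataset.__getitem__ (paraphrasing the call from the traceback):
#   preds_dict = self.hps(prepare_hps_input(img_hps).to(self.device))

For completeness, the full console output and traceback are below: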
ICON:
[w/ Global Image Encoder]: True
[Image Features used by MLP]: ['normal_F', 'normal_B']
[Geometry Features used by MLP]: ['sdf', 'norm', 'vis', 'cmap']
[Dim of Image Features (local)]: 6
[Dim of Geometry Features (ICON)]: 7
[Dim of MLP's first layer]: 13
initialize network with xavier
initialize network with xavier
Resume MLP weights from ../data/ckpt/icon-filter.ckpt
Resume normal model from ../data/ckpt/normal.ckpt
Using cache found in /home/ujjawal/.cache/torch/hub/NVIDIA_DeepLearningExamples_torchhub
Using cache found in /home/ujjawal/.cache/torch/hub/NVIDIA_DeepLearningExamples_torchhub
Dataset Size: 2
0%| | 0/2 [00:00<?, ?it/s]*********************************
img_np shape: (512, 512, 3)
img_hps shape: torch.Size([3, 224, 224])
input shape x in pymaf_net : torch.Size([3, 224, 224])
input shape x in hmr : torch.Size([3, 224, 224])
0%| | 0/2 [00:01<?, ?it/s]
Traceback (most recent call last):
File "infer.py", line 97, in <module>
for data in pbar:
File "/home/ujjawal/miniconda2/envs/caffe2/lib/python3.7/site-packages/tqdm/std.py", line 1130, in __iter__
for obj in iterable:
File "../lib/dataset/TestDataset.py", line 166, in __getitem__
preds_dict = self.hps(img_hps.to(self.device))
File "/home/ujjawal/miniconda2/envs/caffe2/lib/python3.7/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
result = self.forward(*input, **kwargs)
File "../lib/pymaf/models/pymaf_net.py", line 285, in forward
s_feat, g_feat = self.feature_extractor(x)
File "/home/ujjawal/miniconda2/envs/caffe2/lib/python3.7/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
result = self.forward(*input, **kwargs)
File "../lib/pymaf/models/hmr.py", line 159, in forward
x = self.conv1(x)
File "/home/ujjawal/miniconda2/envs/caffe2/lib/python3.7/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/ujjawal/miniconda2/envs/caffe2/lib/python3.7/site-packages/torch/nn/modules/conv.py", line 419, in forward
return self._conv_forward(input, self.weight)
File "/home/ujjawal/miniconda2/envs/caffe2/lib/python3.7/site-packages/torch/nn/modules/conv.py", line 416, in _conv_forward
self.padding, self.dilation, self.groups)
RuntimeError: Expected 4-dimensional input for 4-dimensional weight [64, 3, 7, 7], but got 3-dimensional input of size [3, 224, 224] instead
Please share your thoughts on this.
Thanks @YuliangXiu
It's now working fine on Colab; I will try it locally as well.
Could you tell me how to get the animated clothed model mentioned in the README?