
not getting 3d shape #11

Closed
asluborski opened this issue Oct 10, 2022 · 10 comments

Comments

@asluborski

Hello, I am running the prediction script with no errors, but I only get the HRNet pose heatmap. The other output images do not have the SMPL model overlaid. Am I missing something?

Thank you.

@akil-ahmed3

I am also getting the same thing with PyTorch3D 0.7.0, and when I try PyTorch3D 0.3.0 it gives me an error:
libcudart.so.10.1: cannot open shared object file: No such file or directory

@asluborski
Author

asluborski commented Oct 10, 2022

Yes, I tried downgrading my PyTorch3D to the recommended version, along with the other versions in requirements.txt, but I get the same error.

@akashsengupta1997
Owner

Hi,

Pytorch3D 0.5.0 had some breaking changes regarding camera conventions. If you want to use Pytorch3D 0.5.0 or later, you will need to modify the camera code in the renderer class here.

You should be able to use Pytorch3D 0.3.0 instead. I am not 100% sure what the above error means, but I would guess that you are missing CUDA toolkit 10.1. Maybe this will be fixed by using the appropriate command to install PyTorch with CUDA toolkit 10.1 in your environment from here, i.e.

    pip install torch==1.6.0+cu101 torchvision==0.7.0+cu101 -f https://download.pytorch.org/whl/torch_stable.html

or

    conda install pytorch==1.6.0 torchvision==0.7.0 cudatoolkit=10.1 -c pytorch
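
For reference (this is just a generic PyTorch sanity check, not part of the repo), you can confirm which CUDA toolkit your installed PyTorch build was compiled against:

    import torch

    # Should report a +cu101 build and CUDA 10.1 if the install command above worked.
    print(torch.__version__)           # e.g. 1.6.0+cu101
    print(torch.version.cuda)          # e.g. 10.1
    print(torch.cuda.is_available())   # True if the GPU driver/toolkit are usable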

@asluborski
Author

asluborski commented Oct 11, 2022

I was able to compile and get a human body mesh with the latest PyTorch and torchvision using CUDA 11.6, PyTorch3D 0.7.0 and Python 3.10.6, after changing the perspective and orthographic camera functions like this:

if projection_type == 'perspective': self.cameras = PerspectiveCameras(R=cam_R, T=cam_t, focal_length=perspective_focal_length, principal_point=((img_wh/2., img_wh/2.),), image_size=((img_wh, img_wh),), device=device, in_ndc=False) elif projection_type == 'orthographic': self.cameras = OrthographicCameras(R=cam_R, T=cam_t, focal_length=orthographic_scale*(img_wh/2.), principal_point=((img_wh / 2., img_wh / 2.),), image_size=((img_wh, img_wh),), device = device, in_ndc=False)
The error is probably due to the CUDA toolkit or needing an older CUDA version for PyTorch.

@akashsengupta1997
Owner

Cool, thanks @asluborski. Are the visualisations as expected? If so, I will point future issues on this topic to this thread.

@akashsengupta1997
Owner

akashsengupta1997 commented Oct 22, 2022

Formatting the above snippet for readability:

        if projection_type == 'perspective':
            self.cameras = PerspectiveCameras(device=device,
                                              R=cam_R,
                                              T=cam_t,
                                              focal_length=perspective_focal_length,
                                              principal_point=((img_wh/2., img_wh/2.),),
                                              image_size=((img_wh, img_wh),),
                                              in_ndc=False)
        elif projection_type == 'orthographic':
            self.cameras = OrthographicCameras(device=device,
                                               R=cam_R,
                                               T=cam_t,
                                               focal_length=orthographic_scale*(img_wh/2.),
                                               principal_point=((img_wh / 2., img_wh / 2.),),
                                               image_size=((img_wh, img_wh),),
                                               in_ndc=False)

@MarkZuruck

Hi @akashsengupta1997,

I tried to predict and it looks promising. However, is there a way to get the SMPL params (shape and pose) and save them after the prediction?

@akashsengupta1997
Owner

Hi,

The mode of the predicted distribution over SMPL shape and pose can be obtained from here.

Specifically, the mode body pose is saved as 23 3x3 rotation matrices in pred_pose_rotmats_mode, the global rotation is given by pred_glob, and the mode shape is given by pred_shape_dist.loc.
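
If it helps, here is a minimal sketch of saving these mode parameters to disk. The variable names are taken from the comment above; the helper itself is hypothetical, and the exact shapes and the place to hook it into the prediction script may differ:

    import numpy as np

    def save_smpl_params(pred_pose_rotmats_mode, pred_glob, pred_shape_dist,
                         out_path='smpl_params.npz'):
        """Hypothetical helper: save the mode SMPL parameters to a .npz file."""
        np.savez(out_path,
                 body_pose_rotmats=pred_pose_rotmats_mode.detach().cpu().numpy(),  # 23 body-joint 3x3 rotations
                 global_rotmat=pred_glob.detach().cpu().numpy(),                    # global (root) rotation
                 shape=pred_shape_dist.loc.detach().cpu().numpy())                  # mode of the shape (betas) distribution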

@MarkZuruck

What do you mean by "saved as 23 3x3 rotation matrices"? Shouldn't it contain 72 params?

@akashsengupta1997
Owner

SMPL pose parameters are the 3D rotations of each joint in the kinematic tree. There are 23 body joints + 1 root joint, so 24 in total. If you represent these 3D rotations as axis-angle vectors, the pose can be given as a 24x3 array, or concatenated to 72x1. We represent the rotations using rotation matrices, hence 23 3x3 matrices for the body and one 3x3 matrix for the global rotation about the root.
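
As a hedged illustration of this correspondence (using pytorch3d.transforms and placeholder identity rotations rather than real predictions):

    import torch
    from pytorch3d.transforms import matrix_to_axis_angle

    # 1 global rotation + 23 body-joint rotations = 24 rotation matrices.
    glob_rotmat = torch.eye(3)[None]                      # (1, 3, 3) placeholder
    body_rotmats = torch.eye(3)[None].repeat(23, 1, 1)    # (23, 3, 3) placeholder
    all_rotmats = torch.cat([glob_rotmat, body_rotmats])  # (24, 3, 3)

    pose_axis_angle = matrix_to_axis_angle(all_rotmats)   # (24, 3) axis-angle vectors
    pose_72 = pose_axis_angle.reshape(-1)                 # (72,) concatenated SMPL pose vector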
