SAM 3D Body is one part of SAM 3D, a pair of models for object and human mesh reconstruction. If you’re looking for SAM 3D Objects, click here.
Xitong Yang*, Devansh Kukreja*, Don Pinkus*, Anushka Sagar, Taosha Fan, Jinhyung Park⚬, Soyong Shin⚬, Jinkun Cao, Jiawei Liu, Nicolas Ugrinovic, Matt Feiszli†, Jitendra Malik†, Piotr Dollar†, Kris Kitani†
Meta Superintelligence Labs
*Core Contributor, ⚬Intern, †Project Lead
SAM 3D Body (3DB) is a promptable model for single-image, full-body 3D human mesh recovery (HMR). It achieves state-of-the-art performance, with strong generalization and consistent accuracy across diverse in-the-wild conditions. 3DB estimates pose for the body, feet, and hands using the Momentum Human Rig (MHR), a new parametric mesh representation that decouples skeletal structure from surface shape for improved accuracy and interpretability.
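The decoupling idea behind MHR can be pictured with a toy linear-blend-skinning sketch. This is illustrative only, not the actual MHR parameterization or API: skeletal pose (per-joint rotations) and surface shape (coefficients over a shape basis) are kept as separate parameter sets, and shape is applied in the rest pose before the skeleton poses the mesh.

```python
import numpy as np

# Illustrative toy body in the spirit of MHR: skeletal structure (joint
# rotations) and surface shape (shape-basis coefficients) are decoupled.
rng = np.random.default_rng(0)

n_joints, n_verts, n_shape = 3, 8, 4
template = rng.normal(size=(n_verts, 3))                 # rest-pose vertices
shape_basis = rng.normal(size=(n_shape, n_verts, 3)) * 0.01
skin_weights = np.abs(rng.normal(size=(n_verts, n_joints)))
skin_weights /= skin_weights.sum(axis=1, keepdims=True)  # rows sum to 1

def toy_body(joint_rots, shape_coeffs):
    """Shape is applied first (identity), then pose (skeleton)."""
    shaped = template + np.einsum("s,svc->vc", shape_coeffs, shape_basis)
    # Linear blend skinning with per-joint rotations (translations omitted).
    return np.einsum("vj,jcd,vd->vc", skin_weights, joint_rots, shaped)

identity = np.broadcast_to(np.eye(3), (n_joints, 3, 3))
verts = toy_body(identity, np.zeros(n_shape))
print(verts.shape)  # (8, 3)
```

With identity rotations and zero shape coefficients the output reduces to the template mesh, which makes the two parameter sets easy to reason about independently.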
3DB employs an encoder-decoder architecture and supports auxiliary prompts, including 2D keypoints and masks, enabling user-guided inference similar to the SAM family of models. Our model is trained on high-quality annotations from a multi-stage pipeline that uses differentiable optimization, multi-view geometry, dense keypoint detection, and a data engine to collect and annotate data covering both common and rare poses across a wide range of viewpoints.
*(Qualitative comparison: input images alongside reconstructions from SAM 3D Body, CameraHMR, NLF, and HMR2.0b.)*
SAM 3D Body demonstrates superior reconstruction quality, with more accurate pose estimation, better shape recovery, and improved handling of occlusions and challenging viewpoints compared to existing approaches.
11/19/2025 -- Checkpoints Launched, Dataset Released, Web Demo and Paper are out!
See INSTALL.md for instructions on Python environment setup and model checkpoint access.
3DB can reconstruct 3D full-body human mesh from a single image, optionally with keypoint/mask prompts and/or hand refinement from the hand decoder.
For a quick start, try the following lines of code with models loaded directly from Hugging Face (please follow INSTALL.md to request access to our checkpoints).
```python
from sam_3d_body import load_sam_3d_body_hf, SAM3DBodyEstimator

# Load model from Hugging Face
model, model_cfg = load_sam_3d_body_hf("facebook/sam-3d-body-dinov3")

# Create estimator
estimator = SAM3DBodyEstimator(
    sam_3d_body_model=model,
    model_cfg=model_cfg,
)

# 3D human mesh recovery
outputs = estimator.process_one_image("path/to/image.jpg")
```

You can also run our demo script for model inference and visualization:
```shell
# Download assets from Hugging Face
hf download facebook/sam-3d-body-dinov3 --local-dir checkpoints/sam-3d-body-dinov3

# Run demo script
python demo.py \
    --image_folder <path_to_images> \
    --output_folder <path_to_output> \
    --checkpoint_path ./checkpoints/sam-3d-body-dinov3/model.ckpt \
    --mhr_path ./checkpoints/sam-3d-body-dinov3/assets/mhr_model.pt
```

For a complete demo with visualization, see notebook/demo_human.ipynb.
The table below shows the performance of SAM 3D Body checkpoints released on 11/19/2025.
| Backbone (size) | 3DPW (MPJPE) | EMDB (MPJPE) | RICH (PVE) | COCO (PCK@0.05) | LSPET (PCK@0.05) | FreiHAND (PA-MPJPE) |
|---|---|---|---|---|---|---|
| DINOv3-H+ (840M) (config, checkpoint) | 54.8 | 61.7 | 60.3 | 86.5 | 68.0 | 5.5 |
| ViT-H (631M) (config, checkpoint) | 54.8 | 62.9 | 61.7 | 86.8 | 68.9 | 5.5 |
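For readers unfamiliar with these benchmarks, the metrics above follow standard HMR definitions: MPJPE is the mean Euclidean distance between predicted and ground-truth 3D joints (mm), PA-MPJPE is the same error after a Procrustes (similarity) alignment that removes global rotation, translation, and scale, and PCK@0.05 is the fraction of 2D keypoints within 5% of the person's bounding-box size. The sketch below implements these standard definitions; it is not this repo's exact evaluation code.

```python
import numpy as np

def mpjpe(pred, gt):
    """Mean per-joint position error (same units as the input joints)."""
    return np.linalg.norm(pred - gt, axis=-1).mean()

def pa_mpjpe(pred, gt):
    """MPJPE after Procrustes alignment: fit the best similarity
    transform (rotation, translation, scale) from pred to gt first."""
    mu_p, mu_g = pred.mean(0), gt.mean(0)
    p, g = pred - mu_p, gt - mu_g
    # Optimal rotation via SVD of the cross-covariance (Kabsch/Umeyama).
    U, S, Vt = np.linalg.svd(p.T @ g)
    R = (U @ Vt).T
    if np.linalg.det(R) < 0:  # guard against reflections
        Vt[-1] *= -1
        S = S.copy(); S[-1] *= -1
        R = (U @ Vt).T
    scale = S.sum() / (p ** 2).sum()
    aligned = scale * p @ R.T + mu_g
    return mpjpe(aligned, gt)

def pck(pred2d, gt2d, bbox_size, thresh=0.05):
    """Fraction of 2D keypoints within thresh * bbox_size of ground truth."""
    dist = np.linalg.norm(pred2d - gt2d, axis=-1)
    return (dist < thresh * bbox_size).mean()
```

A quick sanity check: a prediction that differs from ground truth only by a rigid transform plus uniform scale has a large MPJPE but a PA-MPJPE of (numerically) zero.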
The SAM 3D Body data is released on Hugging Face. Please follow the instructions to download and process the data.
SAM 3D Objects is a foundation model that reconstructs full 3D shape geometry, texture, and layout from a single image.
As a way to combine the strengths of both SAM 3D Objects and SAM 3D Body, we provide an example notebook that demonstrates how to combine the results of both models such that they are aligned in the same frame of reference. Check it out here.
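Conceptually, aligning the two reconstructions amounts to mapping each model's vertices into a shared frame of reference with its estimated rigid transform before merging the scene. The sketch below shows only that idea; the transforms and vertex counts are hypothetical placeholders, not the notebook's actual outputs.

```python
import numpy as np

def to_world(verts, R, t):
    """Map (N, 3) vertices from a model's local frame into a shared
    frame via a rigid transform (R: 3x3 rotation, t: 3-vector)."""
    return verts @ R.T + t

# Hypothetical outputs: body vertices from SAM 3D Body and object
# vertices from SAM 3D Objects, each with its own estimated pose.
body_verts = np.zeros((6890, 3))
obj_verts = np.ones((1000, 3))
R_body, t_body = np.eye(3), np.array([0.0, 0.0, 2.5])
R_obj, t_obj = np.eye(3), np.array([0.5, 0.0, 3.0])

# Once both are in the same frame, they can be rendered as one scene.
scene = np.concatenate([to_world(body_verts, R_body, t_body),
                        to_world(obj_verts, R_obj, t_obj)])
print(scene.shape)  # (7890, 3)
```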
The SAM 3D Body model checkpoints and code are licensed under the SAM License.
See contributing and the code of conduct.
The SAM 3D Body project was made possible with the help of many contributors: Vivian Lee, George Orlin, Nikhila Ravi, Andrew Westbury, Jyun-Ting Song, Zejia Weng, Xizi Zhang, Yuting Ye, Federica Bogo, Ronald Mallet, Ahmed Osman, Rawal Khirodkar, Javier Romero, Carsten Stoll, Juan Carlos Guzman, Sofien Bouaziz, Yuan Dong, Su Zhaoen, Fabian Prada, Alexander Richard, Michael Zollhoefer, Roman Rädle, Sasha Mitts, Michelle Chan, Yael Yungster, Azita Shokrpour, Helen Klein, Mallika Malhotra, Ida Cheng, Eva Galper.
If you use SAM 3D Body or the SAM 3D Body dataset in your research, please use the following BibTeX entry.
@article{yang2025sam3dbody,
title={SAM 3D Body: Robust Full-Body Human Mesh Recovery},
author={Yang, Xitong and Kukreja, Devansh and Pinkus, Don and Sagar, Anushka and Fan, Taosha and Park, Jinhyung and Shin, Soyong and Cao, Jinkun and Liu, Jiawei and Ugrinovic, Nicolas and Feiszli, Matt and Malik, Jitendra and Dollar, Piotr and Kitani, Kris},
journal={arXiv preprint; identifier to be added},
year={2025}
}