Mantis: Mamba-native Tuning is Efficient for 3D Point Cloud Foundation Models

Zihao Guo1, Jihua Zhu1*, Jian Liu2, Ajmal Saeed Mian3

1 Xi’an Jiaotong University, Xi’an, China
2 Singapore University of Technology and Design, Singapore
3 University of Western Australia, Perth, Australia


Mantis is a parameter-efficient fine-tuning framework for point cloud analysis built on Mamba backbones.

📨 News

  • [2026/04/27] Initial repository release.
  • [2026/05/01] The official codebase and checkpoints are now publicly available.

Abstract

Pre-trained 3D point cloud foundation models (PFMs) have demonstrated strong transferability across diverse downstream tasks. However, fully fine-tuning these models is computationally expensive and storage-intensive. Parameter-efficient fine-tuning (PEFT) offers a promising alternative, but existing PEFT approaches are primarily designed for Transformer-based backbones and rely on token-level prompting or feature transformation. Mamba-based backbones introduce a granularity mismatch between token-level adaptation and state-level sequence dynamics. Consequently, straightforward transfer of existing PEFT approaches to frozen Mamba backbones leads to substantial accuracy degradation and unstable optimization. To address this issue, we propose Mantis, the first Mamba-native PEFT framework for 3D PFMs. Specifically, a State-Aware Adapter (SAA) injects lightweight task-conditioned control signals into the selective state-space updates, enabling state-level adaptation while keeping the pre-trained backbone frozen. Moreover, Dual-Serialization Consistency Distillation (DSCD) regularizes the model across different valid point cloud serializations, reducing serialization-induced instability. Extensive experiments across multiple benchmarks demonstrate that Mantis achieves competitive performance with only about 5% trainable parameters.
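To make the two components concrete, here is a minimal toy sketch (hypothetical; not the official SAA/DSCD implementation). It uses a scalar state-space recurrence h_t = a·h_{t-1} + b·x_t, where the "backbone" parameters a, b stay frozen and a trainable adapter injects a task-conditioned correction delta_b into the state update, plus a simple consistency term between features produced under two serializations of the same point cloud:

```python
def ssm_scan(x, a, b, delta_b=0.0):
    """Run a 1-D state-space recurrence over a sequence x.

    a and b play the role of frozen backbone parameters; delta_b is the
    adapter's task-conditioned correction injected directly into the
    state update (state-level adaptation, in the spirit of SAA).
    """
    h, ys = 0.0, []
    for x_t in x:
        h = a * h + (b + delta_b) * x_t  # adapter perturbs the state update
        ys.append(h)
    return ys


def dscd_loss(feats_a, feats_b):
    """Mean squared difference between features produced under two
    serializations of the same point cloud (consistency regularizer,
    in the spirit of DSCD)."""
    n = len(feats_a)
    return sum((a - b) ** 2 for a, b in zip(feats_a, feats_b)) / n


x = [1.0, 0.5, -0.2]
frozen = ssm_scan(x, a=0.9, b=1.0)                 # frozen backbone
adapted = ssm_scan(x, a=0.9, b=1.0, delta_b=0.1)   # with adapter signal
gap = dscd_loss(frozen, adapted)                   # consistency-style penalty
```

The actual SAA operates on the multi-dimensional selective updates inside Mamba blocks, and DSCD distills between full serialization branches; this sketch only illustrates the adapt-the-update-not-the-tokens idea.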

Overview

Mantis overview

Getting Started

In the following, we walk you through how to use this repository step by step. 🤗

Requirements

  • Python 3.10
  • PyTorch 2.0.1
  • CUDA 11.8
  • GCC >= 4.9

Quick Start

conda create -n mantis python=3.10 -y
conda activate mantis

# PyTorch
conda install pytorch==2.0.1 torchvision==0.15.2 pytorch-cuda=11.8 -c pytorch -c nvidia

pip install -r requirements.txt

# PointNet++
pip install "git+https://github.com/erikwijmans/Pointnet2_PyTorch.git#egg=pointnet2_ops&subdirectory=pointnet2_ops_lib"

# GPU kNN
pip install --upgrade https://github.com/unlimblue/KNN_CUDA/releases/download/0.2/KNN_CUDA-0.2-py3-none-any.whl

# Chamfer Distance & EMD
cd ./extensions/chamfer_dist
python setup.py install

cd ../emd
python setup.py install

# Mamba install
cd ../..
pip install causal-conv1d==1.1.1 mamba-ssm==1.1.1

Datasets

Before running the code, please make sure the working directory is organized as follows:

Mantis/
├── data/
│   ├── ModelNet/
│   │   └── modelnet40_normal_resampled/
│   ├── ModelNetFewshot/
│   │   ├── 5way_10shot/
│   │   ├── 5way_20shot/
│   │   ├── 10way_10shot/
│   │   └── 10way_20shot/
│   ├── ScanObjectNN/
│   │   ├── main_split/
│   │   └── main_split_nobg/
│   ├── ShapeNet55-34/
│   │   ├── shapenet_pc/
│   │   └── ShapeNet-55/
│   └── shapenetcore_partanno_segmentation_benchmark_v0_normal/
├── cfgs/
├── datasets/
└── ...
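Before launching any run, it can save time to verify that the data/ directory matches the layout above. The following helper is a hypothetical convenience script (not part of the repo) that lists any expected dataset sub-directory that is absent:

```python
from pathlib import Path

# Expected sub-directories under data/, mirroring the layout above.
EXPECTED = [
    "ModelNet/modelnet40_normal_resampled",
    "ModelNetFewshot/5way_10shot",
    "ModelNetFewshot/5way_20shot",
    "ModelNetFewshot/10way_10shot",
    "ModelNetFewshot/10way_20shot",
    "ScanObjectNN/main_split",
    "ScanObjectNN/main_split_nobg",
    "ShapeNet55-34/shapenet_pc",
    "ShapeNet55-34/ShapeNet-55",
    "shapenetcore_partanno_segmentation_benchmark_v0_normal",
]


def missing_datasets(root="data"):
    """Return the expected dataset sub-directories absent under root."""
    root = Path(root)
    return [d for d in EXPECTED if not (root / d).is_dir()]
```

Usage: `print(missing_datasets())` from the repository root; an empty list means the layout is complete.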

Download links for the required datasets:

Main Results (Mamba3D)

| Task | Dataset | Config | Acc. | Checkpoint |
|---|---|---|---|---|
| Pre-training | ShapeNet | N.A. | N.A. | Point-MAE |
| Classification | ScanObjectNN | finetune_scan_objbg_mantis.yaml | 93.29% | OBJ_BG |
| Classification | ScanObjectNN | finetune_scan_objonly_mantis.yaml | 92.77% | OBJ_ONLY |
| Classification | ScanObjectNN | finetune_scan_hardest_mantis.yaml | 93.48% | PB_T50_RS |
| Classification | ModelNet40 | finetune_modelnet_mantis.yaml | 94.70% | ModelNet40 |
| Part segmentation | ShapeNetPart | partseg_mantis.yaml | 86.10% mIoU | Part_Seg |
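For reference, the part-segmentation number is a mean intersection-over-union over part labels. A minimal sketch of that metric (illustrative only, not the repo's exact evaluation code, which averages per shape and per category):

```python
def mean_part_iou(pred, gt, num_parts):
    """Mean IoU over part labels for one shape.

    pred and gt are per-point part labels; a part absent from both
    prediction and ground truth counts as IoU 1.0, a common convention.
    """
    ious = []
    for p in range(num_parts):
        inter = sum(1 for a, b in zip(pred, gt) if a == p and b == p)
        union = sum(1 for a, b in zip(pred, gt) if a == p or b == p)
        ious.append(1.0 if union == 0 else inter / union)
    return sum(ious) / num_parts
```

For example, `mean_part_iou([0, 0, 1, 1], [0, 1, 1, 1], 2)` averages IoU 0.5 for part 0 and 2/3 for part 1.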

Evaluation with a released checkpoint follows this command format:

CUDA_VISIBLE_DEVICES=<GPU> python main.py --test --config cfgs/finetune_scan_hardest_mantis.yaml --ckpts <path/to/ckpt> --exp_name <name>

Fine-tuning on downstream tasks

ModelNet40

# Fine-tune Mantis on ModelNet40.
CUDA_VISIBLE_DEVICES=<GPU> python main.py --config cfgs/finetune_modelnet_mantis.yaml --finetune_model --ckpts <path/to/pretrained_ckpt> --exp_name <name>

Although voting may further improve performance, we exclude it from the standard evaluation protocol because its additional test-time cost makes comparisons across different compute platforms less fair.

ScanObjectNN

# Fine-tune Mantis on ScanObjectNN PB_T50_RS.
CUDA_VISIBLE_DEVICES=<GPU> python main.py --config cfgs/finetune_scan_hardest_mantis.yaml --finetune_model --ckpts <path/to/pretrained_ckpt> --exp_name <name>

# Fine-tune Mantis on ScanObjectNN OBJ_BG.
CUDA_VISIBLE_DEVICES=<GPU> python main.py --config cfgs/finetune_scan_objbg_mantis.yaml --finetune_model --ckpts <path/to/pretrained_ckpt> --exp_name <name>

# Fine-tune Mantis on ScanObjectNN OBJ_ONLY.
CUDA_VISIBLE_DEVICES=<GPU> python main.py --config cfgs/finetune_scan_objonly_mantis.yaml --finetune_model --ckpts <path/to/pretrained_ckpt> --exp_name <name>

Part Segmentation

# Fine-tune Mantis on ShapeNetPart.
CUDA_VISIBLE_DEVICES=<GPU> python main.py --config cfgs/partseg_mantis.yaml --part_seg_model --ckpts <path/to/pretrained_ckpt> --exp_name <name>

t-SNE visualization

CUDA_VISIBLE_DEVICES=<GPU> python main.py --tsne_model --config cfgs/finetune_scan_hardest_mantis.yaml --ckpts <path/to/ckpt> --test_model point_mae --tsne_fig_path tsne_mantis_scan_hardest.pdf --exp_name <name>

You may also define custom configurations for other visualization settings.

Acknowledgements

This project is based on Mamba (paper, code), Vision Mamba (paper, code), Point-MAE (paper, code), and Mamba3D (paper, code). Thanks for their efforts.

Citation

If you find this repository useful in your research, please consider giving a star ⭐ and a citation.

@misc{guo2026mantismambanativetuningefficient,
      title={Mantis: Mamba-native Tuning is Efficient for 3D Point Cloud Foundation Models}, 
      author={Zihao Guo and Jihua Zhu and Jian Liu and Ajmal Saeed Mian},
      year={2026},
      eprint={2605.03438},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2605.03438}, 
}
