
No Time to Train 🚀: Empowering Non-Parametric Networks for Few-shot 3D Scene Segmentation

💥 News

  • [2024.04] Seg-NN is selected as a 🔥 Highlight Paper 🔥 at CVPR 2024!
  • [2024.02] Seg-NN is accepted by CVPR 2024 🎉!
  • [2023.12] We release the paper, which adapts Point-NN & Point-PN to 3D scene segmentation tasks.

Introduction

We propose an efficient Nonparametric Network for Few-shot 3D Segmentation, Seg-NN, and a further parametric variant, Seg-PN. Seg-NN introduces no learnable parameters and requires no training. Specifically, Seg-NN extracts dense representations with trigonometric positional encodings and achieves performance comparable to some training-based methods. Building upon Seg-NN, Seg-PN only requires training a lightweight query-support transferring module (QUEST), which enhances the interaction between the few-shot query and support data.

(Figure: overall framework of Seg-NN and Seg-PN.)
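For intuition, here is a minimal sketch of how trigonometric (sinusoidal) positional encodings can embed raw 3D coordinates without any learnable parameters. The function name and the dim, alpha, and beta hyperparameters are illustrative assumptions, not the exact Seg-NN implementation:

import torch

def trig_positional_encoding(xyz, dim=120, alpha=1000.0, beta=100.0):
    # xyz: (N, 3) point coordinates -> (N, dim) training-free embeddings.
    # Each coordinate axis is encoded by sin/cos over a bank of frequencies.
    n_freq = dim // 6  # sin + cos per frequency, for each of the 3 axes
    freqs = alpha ** (torch.arange(n_freq, dtype=torch.float32) / n_freq)
    angles = beta * xyz.unsqueeze(-1) * freqs                # (N, 3, n_freq)
    emb = torch.cat([torch.sin(angles), torch.cos(angles)], dim=-1)
    return emb.reshape(xyz.shape[0], -1)                     # (N, dim)

pts = torch.rand(1024, 3)                   # a toy point cloud
print(trig_positional_encoding(pts).shape)  # torch.Size([1024, 120])

Because the encoder is hand-crafted rather than learned, query and support features can be extracted and matched immediately, which is what makes the training-free setting possible.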

Requirements

Installation

Create a conda environment and install dependencies:

cd Seg-NN 

conda create -n SegNN python=3.7
conda activate SegNN

# Install the appropriate versions of torch and torchvision
conda install pytorch torchvision cudatoolkit

pip install -r requirements.txt
pip install pointnet2_ops_lib/.
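Optionally, sanity-check that PyTorch sees your GPU before proceeding:

python -c "import torch; print(torch.__version__, torch.cuda.is_available())"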

Datasets

For installation and data preparation, please follow attMPTI.

Please note: we found a bug in the existing data preparation code. The points pre-processed by that code are ordered, which may cause models to learn the order of points during training. We therefore add a line that shuffles the points, as sketched below.
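A minimal sketch of such a shuffle, assuming each pre-processed block is held as an (N, C) NumPy array of points with aligned (N,) labels (names are illustrative):

import numpy as np

def shuffle_points(points, labels):
    # points: (N, C) xyz + features; labels: (N,) per-point semantic labels.
    # A random permutation removes any ordering introduced by preprocessing.
    perm = np.random.permutation(points.shape[0])
    return points[perm], labels[perm]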

Seg-NN

Seg-NN does not require any training and can conduct few-shot segmentation directly via:

bash scripts/segnn.sh

Seg-PN

We have released the pre-trained models under the log_s3dis_SegPN and log_scannet_SegPN folders. To test our models, directly run:

bash scripts/segpn_eval.sh

Please note that randomness exists during training even though we have set a random seed.

If you want to train our method under the few-shot setting:

bash scripts/segpn.sh

The test procedure is included in the above training command and runs after validation.

Note that the above scripts are configured for the 2-way 1-shot setting on S3DIS (S_0). Please modify the corresponding hyperparameters to run experiments in other settings, as sketched below.
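For example, moving to 2-way 5-shot on the second split would typically amount to editing a few variables in the script. The names below are assumptions in the style of attMPTI-derived codebases; check scripts/segpn.sh for the exact flags:

N_WAY=2     # classes per few-shot episode
K_SHOT=5    # support shots per class
CV_FOLD=1   # evaluate on split S_1 instead of S_0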

Acknowledgement

We thank Point-NN, PAP-FZS3D, and attMPTI for sharing their source code.
