13 changes: 10 additions & 3 deletions vista3d/cvpr_workshop/README.md
@@ -15,20 +15,27 @@ limitations under the License.
This repository is written for the "CVPR 2025: Foundation Models for Interactive 3D Biomedical Image Segmentation" ([link](https://www.codabench.org/competitions/5263/)) challenge. It is based on MONAI 1.4. Many of the functions in the main VISTA3D repository have been moved into MONAI 1.4, and this simplified folder uses those components directly from MONAI.

This folder is a simplified setup for training interactive segmentation models across different modalities; the sophisticated transforms and recipes used for VISTA3D have been removed. The VISTA3D checkpoint finetuned on the challenge subsets is available [here](https://drive.google.com/file/d/1r2KvHP_30nHR3LU7NJEdscVnlZ2hTtcd/view?usp=sharing).
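
As `train_cvpr.py` below shows, the weights live under the checkpoint's `model` key and are saved from a DDP-wrapped model. A minimal single-GPU loading sketch under those assumptions (the `module.` prefix stripping and the `vista3d132` import path are assumptions, not documented here):

```
# Minimal loading sketch (assumptions: weights stored under "model",
# saved from a DDP-wrapped model, so keys may carry a "module." prefix).
import torch
from monai.networks.nets import vista3d132  # MONAI >= 1.4

ckpt = torch.load("CPRR25_vista3D_model_final_10percent_data.pth", map_location="cpu")
state_dict = {
    (k[len("module."):] if k.startswith("module.") else k): v
    for k, v in ckpt["model"].items()
}
model = vista3d132(in_channels=1)
model.load_state_dict(state_dict, strict=True)
model.eval()
```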

# Setup
```
pip install -r requirements.txt
```

# Training
Download the [checkpoint](https://drive.google.com/file/d/1r2KvHP_30nHR3LU7NJEdscVnlZ2hTtcd/view?usp=sharing) finetuned on the challenge subsets, or the original VISTA3D [checkpoint](https://drive.google.com/file/d/1DRYA2-AI-UJ23W1VbjqHsnHENGi0ShUl/view?usp=sharing). Generate a json list that contains your training data and update the json file path in the script (a sketch for generating this list follows the launch command below). Adjust `--nproc_per_node` to match your GPU count.
```
torchrun --nnodes=1 --nproc_per_node=8 train_cvpr.py
```
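
The exact schema of `subset.json` is not documented here; assuming it is a flat JSON list of `.npz` file paths (the actual schema is whatever `NPZDataset` in `train_cvpr.py` expects), a small helper along these lines could generate it — the data directory is also an assumption:

```
# make_subset.py -- hypothetical helper; assumes subset.json is a flat
# JSON list of .npz file paths, which is an assumption about NPZDataset.
import glob
import json

npz_files = sorted(glob.glob("data/train/*.npz"))  # assumed data location
with open("subset.json", "w") as f:
    json.dump(npz_files, f, indent=2)
print(f"wrote {len(npz_files)} paths to subset.json")
```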

# Inference
You can directly download the baseline [Docker image](https://drive.google.com/file/d/1r2KvHP_30nHR3LU7NJEdscVnlZ2hTtcd/view?usp=sharing) for the challenge. We also provide a Dockerfile to satisfy the challenge format; for more details, refer to the [challenge website](https://www.codabench.org/competitions/5263/).
```
docker build -t vista3d:latest .
docker save -o vista3d.tar.gz vista3d:latest
```



9 changes: 5 additions & 4 deletions vista3d/cvpr_workshop/train_cvpr.py
@@ -104,23 +104,24 @@ def __getitem__(self, idx):
        return data

# Training function
def train():
    json_file = "subset.json"  # Update with your JSON file
    epoch_number = 100
    start_epoch = 0
    lr = 2e-5
    checkpoint_dir = "checkpoints"
    start_checkpoint = '/workspace/CPRR25_vista3D_model_final_10percent_data.pth'
    os.makedirs(checkpoint_dir, exist_ok=True)
    # torchrun sets WORLD_SIZE and LOCAL_RANK in the environment.
    dist.init_process_group(backend="nccl")
    world_size = int(os.environ["WORLD_SIZE"])
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)
    device = torch.device(f"cuda:{local_rank}")
    dataset = NPZDataset(json_file)
    # One sample per GPU; the DistributedSampler shards the dataset across ranks.
    sampler = torch.utils.data.distributed.DistributedSampler(dataset, num_replicas=world_size, rank=local_rank)
    dataloader = DataLoader(dataset, batch_size=1, sampler=sampler, num_workers=32)
    model = vista3d132(in_channels=1).to(device)
    # Alternative: original VISTA3D bundle checkpoint.
    # pretrained_ckpt = torch.load('/workspace/VISTA/vista3d/bundles/vista3d/models/model.pt', map_location=device)
    pretrained_ckpt = torch.load(start_checkpoint, map_location=device)
    # Alternative: resume from a locally saved epoch checkpoint.
    # pretrained_ckpt = torch.load(os.path.join(checkpoint_dir, f"model_epoch{start_epoch}.pth"))
    model = DDP(model, device_ids=[local_rank], find_unused_parameters=True)
    # Loading after the DDP wrap assumes the checkpoint was saved from a
    # DDP-wrapped model, i.e. with "module."-prefixed keys.
    model.load_state_dict(pretrained_ckpt['model'], strict=True)
    optimizer = optim.AdamW(model.parameters(), lr=lr, weight_decay=1.0e-05)
    # ... (remainder of the training loop collapsed in this diff view)