
SAM-OCTA_extend

This is an extended version of the previous SAM-OCTA project with some added experiments. The original repository is linked here: SAM-OCTA

This work is planned for submission to a journal. A reviewer previously suggested setting up two separate repositories to avoid disputes, so this new one was created.

1. Pre-trained Weights Configuration

This project uses LoRA to fine-tune SAM and perform segmentation tasks on OCTA images, built with PyTorch.

First, you should place a pre-trained weights file into the sam_weights folder. The download links for the pre-trained weights are as follows:

vit_h (default): https://dl.fbaipublicfiles.com/segment_anything/sam_vit_h_4b8939.pth

vit_l: https://dl.fbaipublicfiles.com/segment_anything/sam_vit_l_0b3195.pth

vit_b: https://dl.fbaipublicfiles.com/segment_anything/sam_vit_b_01ec64.pth

Then modify the corresponding configuration in options.py:

...
parser.add_argument("-model_type", type=str, default="vit_h")
...
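Since each backbone's checkpoint ships with a fixed filename, a quick sanity check can catch a mismatch between `-model_type` and the downloaded weights before training starts. A minimal sketch (the `resolve_checkpoint` helper is hypothetical and not part of the repository; the `sam_weights` folder and filenames come from the links above):

```python
import argparse
import os

# Expected checkpoint filename for each SAM backbone (from the download links above).
SAM_CHECKPOINTS = {
    "vit_h": "sam_vit_h_4b8939.pth",
    "vit_l": "sam_vit_l_0b3195.pth",
    "vit_b": "sam_vit_b_01ec64.pth",
}

def resolve_checkpoint(model_type: str, weights_dir: str = "sam_weights") -> str:
    """Return the expected checkpoint path, warning if the file is missing."""
    path = os.path.join(weights_dir, SAM_CHECKPOINTS[model_type])
    if not os.path.isfile(path):
        print(f"Warning: {path} not found; download it from the links above.")
    return path

parser = argparse.ArgumentParser()
parser.add_argument("-model_type", type=str, default="vit_h")
args = parser.parse_args([])  # empty list so the example runs without CLI arguments
print(resolve_checkpoint(args.model_type))
```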

2. Dataset Storage Format

To keep the structure clear this time, only the first 5 files are kept under this complex path as examples.

See the storage format of data in the datasets folder.

If you need the complete dataset, you should contact the authors of the OCTA_500 dataset.

OCTA-500's related paper: https://arxiv.org/abs/2012.07261

Sample results and segmentation metrics will be recorded in the results folder (which will be created automatically if it does not exist).

If you need to visualize the prediction results, please use the display.py file. Since result folders are named by timestamp, you will need to update this line of code. The generated images are stored in the sample_display folder.

...
    result_dir = r"results\2024-03-18-17-05-26\3M_Vein_50_True_vit_b_Intersection_Cross\0010" # Your result dir
...
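Rather than pasting the timestamped path by hand after every run, one option is to pick the newest run folder automatically. A small sketch, assuming run folders under results are named with sortable timestamps like 2024-03-18-17-05-26 as shown above (the `latest_result_dir` helper is hypothetical):

```python
import os

def latest_result_dir(results_root: str = "results") -> str:
    """Pick the most recent timestamped run folder under results/.

    Names like 2024-03-18-17-05-26 sort chronologically as plain strings,
    so max() over the folder names is enough.
    """
    runs = [d for d in os.listdir(results_root)
            if os.path.isdir(os.path.join(results_root, d))]
    if not runs:
        raise FileNotFoundError(f"no run folders under {results_root}")
    return os.path.join(results_root, max(runs))
```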

Here I've also added an extra feature that allows comparing the results of two experiments; enable it by uncommenting the corresponding statement in display.py.

3. Related Configuration

The configuration is essentially consistent with SAM-OCTA, but for experimental completeness an additional -point_type argument specifies the type of special points, which is explained in more detail in the next section.

4. Training Modes

In conjunction with the paper, this project offers three training modes:

Random Selection:

python train_special_points.py

Randomly selects a number of positive hint points from the segmentation target and a number of negative hint points from the surrounding area.
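The random selection described above can be sketched roughly as follows. This is an illustration of the idea, not the repository's exact sampling code; `sample_prompt_points` is a hypothetical name, and points are returned in SAM's (x, y) convention:

```python
import numpy as np

def sample_prompt_points(mask, n_pos=3, n_neg=3, margin=10, rng=None):
    """Sample positive points inside a binary target mask and negative
    points from the background near the target.

    mask: 2-D array of 0/1; returns (points, labels) with points in (x, y) order.
    """
    rng = np.random.default_rng(rng)
    ys, xs = np.nonzero(mask)
    idx = rng.choice(len(ys), size=n_pos, replace=False)
    pos = np.stack([xs[idx], ys[idx]], axis=1)

    # Negatives: background pixels within `margin` of the target's bounding box.
    y0, y1 = ys.min(), ys.max()
    x0, x1 = xs.min(), xs.max()
    near = np.zeros_like(mask, dtype=bool)
    near[max(y0 - margin, 0):y1 + margin, max(x0 - margin, 0):x1 + margin] = True
    near &= mask == 0
    nys, nxs = np.nonzero(near)
    idx = rng.choice(len(nys), size=n_neg, replace=False)
    neg = np.stack([nxs[idx], nys[idx]], axis=1)

    points = np.concatenate([pos, neg])
    labels = np.array([1] * n_pos + [0] * n_neg)  # SAM convention: 1 = positive, 0 = negative
    return points, labels
```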

Special Annotation:

python train_special_points.py

Selects bifurcation points, endpoints, and artery-vein intersections on the vessels as positive hint points; these are obtained from sparse annotations. If needed, I will consider uploading a copy to cloud storage.
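The sparse annotations themselves are external to this repository, but a classic way to locate endpoints and bifurcations on a one-pixel-wide vessel skeleton is neighbor counting: an endpoint has exactly one 8-neighbor, a bifurcation three or more. A rough sketch of that standard idea (not the annotation pipeline actually used here):

```python
import numpy as np

def skeleton_special_points(skel):
    """Classify skeleton pixels by their number of 8-neighbors.

    skel: 2-D 0/1 array of a one-pixel-wide vessel skeleton.
    Returns (endpoints, bifurcations) as lists of (y, x) coordinates.
    """
    skel = skel.astype(np.uint8)
    padded = np.pad(skel, 1)
    # Sum of the 8 neighbors for every pixel (wrap-around lands only in the
    # padded border, which is sliced off below).
    neighbors = sum(np.roll(np.roll(padded, dy, 0), dx, 1)
                    for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                    if (dy, dx) != (0, 0))[1:-1, 1:-1]
    endpoints = [tuple(p) for p in np.argwhere((skel == 1) & (neighbors == 1))]
    bifurcations = [tuple(p) for p in np.argwhere((skel == 1) & (neighbors >= 3))]
    return endpoints, bifurcations
```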

Cross Suppression:

python train_cross.py

Selects points in non-target vessel areas as negative hint points to verify whether negative hint points consistently suppress non-target areas. The effect seems minimal, presumably because a hint point influences a surrounding region rather than a single pixel, so more precise constraints would be needed.
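In code terms, the cross-suppression idea above amounts to drawing negative points from vessel pixels that do not belong to the target class, e.g. vein pixels when segmenting arteries. A sketch under that assumption (not the repository's implementation; `cross_suppression_negatives` is a hypothetical name):

```python
import numpy as np

def cross_suppression_negatives(target_mask, all_vessel_mask, n_neg=5, rng=None):
    """Pick negative prompt points on non-target vessels.

    target_mask: 0/1 mask of the class being segmented (e.g. arteries).
    all_vessel_mask: 0/1 mask of every vessel (arteries + veins).
    Returns points in SAM's (x, y) order, all with label 0 (negative).
    """
    rng = np.random.default_rng(rng)
    non_target = (all_vessel_mask == 1) & (target_mask == 0)
    ys, xs = np.nonzero(non_target)
    idx = rng.choice(len(ys), size=min(n_neg, len(ys)), replace=False)
    points = np.stack([xs[idx], ys[idx]], axis=1)
    labels = np.zeros(len(idx), dtype=np.int64)
    return points, labels
```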

5. Related Preprint

https://arxiv.org/abs/2310.07183
