GAID: Generative Adversarial Irregularity Detection
This model detects irregular regions in mammography images after GAID has been trained on a dataset of normal images.
Recognizing irregular tissue in mammography images can be framed as discovering regions that do not comply with the normal (healthy) tissue present in the training set. To this end, we propose a method based on adversarial training, composed of two modules. The first module, denoted R (Reconstructor), learns the distribution of healthy tissue by learning to reconstruct it. The second module, M (Representation matching), learns to decide whether its input is healthy or irregular.
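As a rough illustration of how the outputs of the two modules can be fused into a single irregularity score, here is a minimal NumPy sketch. The functions `R`, `M`, and the mixing weight `lam` are placeholder stand-ins invented for this example, not the trained networks or the paper's exact fusion rule.

```python
import numpy as np

# Hypothetical stand-ins for the two trained modules: in the real model,
# R is a convolutional reconstructor and M a learned representation matcher.
def R(patch):
    # Placeholder "reconstruction": pulls the patch toward its mean,
    # so atypical (high-variance) patches reconstruct poorly.
    return 0.5 * patch + 0.5 * patch.mean()

def M(patch):
    # Placeholder matcher: maps a patch to a pseudo irregularity score.
    return float(np.tanh(patch.std()))

def irregularity_score(patch, lam=0.5):
    """Combine the reconstruction error of R with the output of M.

    lam is an assumed mixing weight; the paper's exact fusion may differ.
    """
    recon_error = np.mean((patch - R(patch)) ** 2)
    return lam * recon_error + (1.0 - lam) * M(patch)

rng = np.random.default_rng(0)
healthy = rng.normal(0.5, 0.05, size=(64, 64))  # low-variance "normal" patch
lesion = rng.normal(0.5, 0.4, size=(64, 64))    # high-variance "irregular" patch
print(irregularity_score(healthy) < irregularity_score(lesion))  # expect True
```

The key point the sketch captures is that both a reconstruction residual and a learned discriminative signal contribute to the final score.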
Prerequisites (my environment)
- Python 3.6
- SciPy = 1.0.0
- mritopng (Convert DICOM Files to PNG)
- CPU or NVIDIA GPU + CUDA CuDNN
- MIAS: This dataset contains 322 mammography images in the MLO view at 1024x1024 resolution. The data is categorized into three classes: benign, malignant, and normal. The ground truth for abnormal (benign and malignant tumor) regions is given by the center and diameter of each region.
- INbreast: This dataset contains 410 mammography images in mediolateral oblique (MLO) and cranio-caudal (CC) views at 3000x4000 resolution. We treat all mass cases in this dataset as irregular, versus the normal class present in the dataset.
- CBIS-DDSM: This dataset contains 2,620 scanned film mammography studies in both CC and MLO views. Its labels also include benign, malignant, and normal, with verified pathology information. We use this dataset only for testing, qualitatively evaluating the models pretrained on MIAS and INbreast on this data.
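Since MIAS marks each abnormal region only by its center and diameter, extracting an abnormal patch amounts to a clipped square crop around that center. A minimal sketch, assuming pixel coordinates; the dataset's exact coordinate convention (origin and y-axis direction) is an assumption here, so check the MIAS documentation before reusing this.

```python
import numpy as np

def crop_abnormal_patch(image, cx, cy, diameter, patch_size=64):
    """Crop a square patch around a MIAS ground-truth abnormality.

    The crop side is the larger of patch_size and the annotated diameter,
    clipped so the window stays inside the image.
    """
    half = max(patch_size, int(diameter)) // 2
    top = int(np.clip(cy - half, 0, image.shape[0] - 2 * half))
    left = int(np.clip(cx - half, 0, image.shape[1] - 2 * half))
    return image[top:top + 2 * half, left:left + 2 * half]

image = np.zeros((1024, 1024), dtype=np.float32)
# Illustrative center/diameter values, not a real MIAS annotation.
patch = crop_abnormal_patch(image, cx=535, cy=425, diameter=197)
print(patch.shape)  # (196, 196)
```

Clipping at the borders keeps the crop square even when the annotated center lies near the image edge.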
```
data
└── DATASET_NAME
    ├── test
    │   ├── normal
    │   │   ├── normal-0.png
    │   │   ├── normal-1.png
    │   │   └── ...
    │   ├── abnormal
    │   │   ├── mass-0.png
    │   │   ├── mass-1.png
    │   │   └── ...
    │   └── full image
    │       ├── mask
    │       │   ├── full image 0_mask.png
    │       │   ├── full image 1_mask.png
    │       │   └── ...
    │       ├── full image 0.png
    │       ├── full image 1.png
    │       └── ...
    └── train
        ├── normal-0.png
        ├── normal-1.png
        └── ...
```
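Creating this layout by hand is error-prone (note that "full image" contains a space). A small sketch that builds the empty tree with `pathlib`; the folder names follow the tree above, and the root path is whatever you choose:

```python
import tempfile
from pathlib import Path

def make_layout(root, dataset="MIAS"):
    """Create the empty train/test directory layout expected by the repo."""
    base = Path(root) / "data" / dataset
    for sub in ["test/normal", "test/abnormal", "test/full image/mask", "train"]:
        (base / sub).mkdir(parents=True, exist_ok=True)
    return base

base = make_layout(tempfile.mkdtemp())
print(sorted(p.relative_to(base).as_posix() for p in base.rglob("*") if p.is_dir()))
```

`exist_ok=True` makes the helper safe to re-run on a partially built tree.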
To prepare patches, build the train and test sets, and train the model on the MIAS or INbreast datasets, run:
```
python main.py --dataset=DATASET_NAME --input_height=INPUT_HEIGHT --output_height=OUTPUT_HEIGHT --patch_size=PATCH_SIZE --preparing_data --train
```
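The `--preparing_data` step cuts full mammograms into fixed-size patches. A sliding-window sketch of that idea; the stride and the background-filtering threshold are assumptions for illustration, and the repo's actual preparation rules may differ:

```python
import numpy as np

def extract_patches(image, patch_size=64, stride=32):
    """Slide a window over the image and collect non-background patches."""
    patches = []
    h, w = image.shape
    for top in range(0, h - patch_size + 1, stride):
        for left in range(0, w - patch_size + 1, stride):
            patch = image[top:top + patch_size, left:left + patch_size]
            if patch.mean() > 0.05:  # skip mostly-black (background) patches
                patches.append(patch)
    return np.stack(patches) if patches else np.empty((0, patch_size, patch_size))

image = np.ones((256, 256), dtype=np.float32)  # dummy all-tissue image
print(extract_patches(image).shape)  # (49, 64, 64)
```

An overlapping stride (here half the patch size) gives each tissue location multiple chances to appear centered in a patch.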
To evaluate the model on the MIAS or INbreast test sets prepared during the training step, run:
```
python main.py --dataset=DATASET_NAME --input_height=INPUT_HEIGHT --output_height=OUTPUT_HEIGHT --test
```
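Once every test patch has an irregularity score, the separation between normal and abnormal patches can be summarized with a rank-based AUC. This helper is not part of the repo; it is a self-contained NumPy equivalent of the usual ROC-AUC computation:

```python
import numpy as np

def auc(scores_normal, scores_abnormal):
    """Probability that a random abnormal patch outscores a random normal one,
    counting ties as half (i.e., the rank-based ROC AUC)."""
    sn = np.asarray(scores_normal, dtype=float)[:, None]
    sa = np.asarray(scores_abnormal, dtype=float)[None, :]
    return float((sa > sn).mean() + 0.5 * (sa == sn).mean())

print(auc([0.1, 0.2, 0.3], [0.4, 0.5]))  # perfectly separated -> 1.0
```

An AUC of 1.0 means every abnormal patch scored above every normal patch; 0.5 means the scores carry no information.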
To evaluate the generalizability of GAID, we train it on MIAS and INbreast and test it on the CBIS-DDSM dataset:
```
python main.py --dataset=DATASET_NAME --input_height=INPUT_HEIGHT --output_height=OUTPUT_HEIGHT --test_with_patch=False --test
```
Fig. 1: Examples of patches (denoted by X) and their reconstructed versions using AnoGAN, GANomaly, and GAID.
Fig. 2: Testing results of the proposed irregularity detector on the CBIS-DDSM dataset, trained on the MIAS and INbreast datasets. Brighter areas of the heat map indicate a higher likelihood of irregularity; heat-map1 and heat-map2 correspond to GAID trained on MIAS and INbreast, respectively.
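A full-image heat map like the ones in Fig. 2 can be assembled by scoring overlapping patches and averaging each pixel's accumulated scores. A minimal sketch; `score_fn` is a hypothetical callable standing in for the trained model's per-patch score, and the [0, 1] normalization is an assumption for display:

```python
import numpy as np

def patch_score_heatmap(shape, patch_size, stride, score_fn):
    """Average per-patch irregularity scores into a normalized heat map."""
    heat = np.zeros(shape, dtype=np.float64)
    count = np.zeros(shape, dtype=np.float64)
    for top in range(0, shape[0] - patch_size + 1, stride):
        for left in range(0, shape[1] - patch_size + 1, stride):
            s = score_fn(top, left)  # model's score for this patch location
            heat[top:top + patch_size, left:left + patch_size] += s
            count[top:top + patch_size, left:left + patch_size] += 1
    heat = np.divide(heat, count, out=np.zeros_like(heat), where=count > 0)
    return (heat - heat.min()) / (np.ptp(heat) + 1e-8)  # scale to [0, 1]

# Dummy score function: score grows toward the bottom-right corner.
heat = patch_score_heatmap((128, 128), patch_size=64, stride=32,
                           score_fn=lambda top, left: top + left)
print(heat.shape, heat.min(), heat.max())
```

Averaging over overlapping windows smooths the blocky per-patch scores into a continuous-looking map.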
For questions about our paper or code, please contact Milad Ahmadi.
Thanks to LeeDoYup's implementation of AnoGAN, on which this implementation of GAID is based.