Improving Model Robustness Against Variations in Micrograph Quality with Unsupervised Domain Adaptation
This is an official implementation of the paper "Improving Model Robustness Against Variations in Micrograph Quality with Unsupervised Domain Adaptation".
Prerequisites:

- Linux
- NVIDIA GPU (at least 12 GB of memory) + CUDA/cuDNN (CPU mode may work without any modification, but is untested)
- Python 3.6
- Clone the repository:

```bash
git clone to_be_added
cd SEM-UDASS
```
- Install the required libraries:

```bash
pip3 install -r requirements.txt --find-links https://download.pytorch.org/whl/torch_stable.html
```
To run the model with your own dataset, create two subdirectories inside the dataset folder: one for the source-domain dataset and one for the target-domain dataset (see the layout sketch below). Otherwise, lines 52-56 in the `style_sem.py` file need to be modified accordingly before training the appearance transformation model.
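A minimal sketch of one possible layout (the subdirectory names here are hypothetical placeholders; whatever names you choose must match the paths used by `style_sem.py`):

```bash
# Hypothetical layout; the subdirectory names are placeholders.
mkdir -p dataset/source   # source-domain micrographs
mkdir -p dataset/target   # target-domain micrographs
```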
Follow the parameter specifications below to train or perform inference with the appearance transformation model, a classification model [MISO|MLP-VQVAE], or your own downstream task model. Examples of training/testing specifications using the MISO and MLP-VQVAE classification models can be found in the `scripts` folder. Moreover, `train_multi_mags.txt` and `val_multi_mags.txt` illustrate how we organized training and test samples into the text files used for the experiments described in the paper; a sketch for building similar lists follows.
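If you need similar lists for your own data, something like the following writes one image path per line (purely illustrative; check the shipped `train_multi_mags.txt` for the exact line format the code expects):

```bash
# Illustrative only: assumes one image path per line; verify against the
# shipped train_multi_mags.txt before use.
find dataset/source -name '*.tif' | sort > train_multi_mags.txt
```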
```bash
python style_manipulation.py *args*
```

- `gpu_id` - Specify the GPU # to run the model on
- `dataset` - Specify the dataset. For a customized dataset, a new condition and dataset class need to be added to the `style_manipulation.py` file
- `train_dir` - Root directory of the train (source) dataset
- `test_dir` - Root directory of the test (target) dataset
- `sub_dir` - Sub-directory, if the dataset is further categorized into various subfolders
- `config_file` - Name of the config file that stores the necessary parameters of the appearance transformation model
- `exp_name` - Name of the current experiment
- `stage` - Execution stage [train|test|viz]
- `checkpoint_dir` - Directory for storing checkpoints
- `checkpoint` - Checkpoint to be restored
- `gen_dir` - Directory for storing generated images
- `num_styles` - Number of random styles to be generated for each sample
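A hypothetical training invocation (the `--flag` syntax and all values below are assumptions, not verified commands; see the `scripts` folder for the exact specifications used in the paper):

```bash
# Hypothetical example; flag syntax and values are assumptions.
python style_manipulation.py \
    --gpu_id 0 \
    --dataset sem \
    --train_dir ./dataset/source \
    --test_dir ./dataset/target \
    --config_file style_config.yaml \
    --exp_name appearance_exp1 \
    --stage train \
    --checkpoint_dir ./checkpoints
```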
```bash
python exps.py *args*
```

- `gpu_id` - Specify the GPU # to run the model on
- `exp_name` - Name of the current experiment
- `dataset` - Specify the dataset type. For a customized dataset, a new condition and dataset class need to be added to the `exp.py` file
- `train_dir` - Root directory of the train (source) dataset
- `val_dir` - Root directory of the val (source) dataset
- `target_train_dir` - Root directory of the adapt (target) dataset
- `test_dir` - Root directory of the test (target) dataset
- `config_file` - Name of the config file that stores the necessary parameters for the downstream task model
- `seed` - [Optional] Predefined seed for the random split of train/val or adapt/test files
- `stage` - Execution stage [train|test]
- `adapt_type` - Adaptation method during inference [na|tent|hm|wct|app|hm_tent|wct_tent|app_tent]. Please see the `inference` function in `seg_model.py` for more information
- `saved_model_dir` - Directory of the saved downstream task model
- `style_config_file` - Name of the config file for the appearance transformation model used in the adaptation phase
- `style_checkpoint_file` - Checkpoint of the appearance transformation model to be restored
- `vqvae_config_file` - Name of the config file for the MLP-VQVAE downstream task model
- `vqvae_checkpoint_dir` - Directory of the saved MLP-VQVAE model
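Similarly, a hypothetical inference invocation with test-time adaptation (again, flag syntax and values are assumptions; consult the `scripts` folder for the real commands):

```bash
# Hypothetical example; flag syntax and values are assumptions.
python exps.py \
    --gpu_id 0 \
    --exp_name cls_exp1 \
    --dataset sem \
    --train_dir ./dataset/source/train \
    --val_dir ./dataset/source/val \
    --target_train_dir ./dataset/target/adapt \
    --test_dir ./dataset/target/test \
    --config_file cls_config.yaml \
    --stage test \
    --adapt_type tent \
    --saved_model_dir ./checkpoints/cls_exp1
```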
A few representative examples of the dataset used in this study can be found inside the `sem_datasets` folder.
This implementation benefited greatly from the publicly available code of MUNIT and MISO.
If you find this code useful for your research, please cite our paper:
```
@article{uda-ss,
  title={Improving robustness for model discerning synthesis process of uranium oxide with unsupervised domain adaptation},
  author={Ly, Cuong and Nizinski, Cody and Hagen, Alex and McDonald, Luther W and Tasdizen, Tolga},
  journal={Frontiers in Nuclear Engineering},
  volume={2},
  year={2023},
  DOI={https://doi.org/10.3389/fnuen.2023.1230052}
}
```