The source code for our IEEE VIS 2022 paper "VDL-Surrogate: A View-Dependent Latent-based Model for Parameter Space Exploration of Ensemble Simulations". This branch is for the Nyx dataset.
Given sampled view-dependent data and a selected viewpoint, we train a Ray AutoEncoder (RAE):
cd rae
python main.py --root dataset \
--direction {x|y|z} \
--sn \
--weighted \
--data-size L_0 \
--img-size H(==W) \
--ch 64 \
--load-batch 8 \
--batch-size 2816 \
--check-every 16 \
--log-every 50
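For orientation, below is a minimal sketch of what a ray autoencoder can look like in PyTorch, assuming rays of L_0 = 512 samples and an illustrative latent length of 16; the actual RAE implemented in rae/ differs in its architecture and training options (e.g., --sn, --weighted).

import torch
import torch.nn as nn

class RayAutoEncoder(nn.Module):
    """Illustrative 1D convolutional autoencoder over individual rays.

    Each stride-2 convolution halves the ray length, compressing an
    L_0-sample ray into a short latent code; the decoder mirrors it.
    """
    def __init__(self, ch=64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv1d(1, ch, 4, 2, 1), nn.ReLU(True),    # 512 -> 256
            nn.Conv1d(ch, ch, 4, 2, 1), nn.ReLU(True),   # 256 -> 128
            nn.Conv1d(ch, ch, 4, 2, 1), nn.ReLU(True),   # 128 -> 64
            nn.Conv1d(ch, ch, 4, 2, 1), nn.ReLU(True),   # 64 -> 32
            nn.Conv1d(ch, 1, 4, 2, 1),                   # 32 -> 16 (latent)
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose1d(1, ch, 4, 2, 1), nn.ReLU(True),
            nn.ConvTranspose1d(ch, ch, 4, 2, 1), nn.ReLU(True),
            nn.ConvTranspose1d(ch, ch, 4, 2, 1), nn.ReLU(True),
            nn.ConvTranspose1d(ch, ch, 4, 2, 1), nn.ReLU(True),
            nn.ConvTranspose1d(ch, 1, 4, 2, 1),          # 16 -> 512
        )

    def forward(self, x):              # x: (batch, 1, L_0)
        z = self.encoder(x)            # (batch, 1, 16)
        return self.decoder(z), z

# One reconstruction step with an MSE loss on random stand-in rays
# (the command above trains with --batch-size 2816).
model = RayAutoEncoder(ch=64)
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
rays = torch.rand(256, 1, 512)
recon, _ = model(rays)
loss = nn.functional.mse_loss(recon, rays)
opt.zero_grad(); loss.backward(); opt.step()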
Given the same selected viewpoint, we train a VDL-Predictor, which takes the simulation parameters as input and outputs predicted view-dependent latent representations:
cd vdl_predictor
python main.py --root dataset \
--direction {x|y|z} \
--sn \
--data-size L_0 \
--img-size H(==W) \
--ch 64 \
--batch-size 1 \
--check-every 25 \
--log-every 20
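As a rough illustration of the predictor's role, the toy model below maps the three simulation parameters to a latent image whose per-pixel vectors match the illustrative RAE latent above; all layer sizes are assumptions, and the real VDL-Predictor in vdl_predictor/ is more involved.

import torch
import torch.nn as nn

class VDLPredictor(nn.Module):
    """Illustrative generator: simulation parameters -> latent image.

    A fully connected stem reshapes the parameters into a coarse feature
    grid, and transposed convolutions upsample it to (latent_len, H, W).
    """
    def __init__(self, n_params=3, latent_len=16, img_size=128, ch=64):
        super().__init__()
        self.ch = ch
        self.init_size = img_size // 16
        self.fc = nn.Linear(n_params, ch * 8 * self.init_size ** 2)
        self.up = nn.Sequential(
            nn.ConvTranspose2d(ch * 8, ch * 4, 4, 2, 1), nn.ReLU(True),
            nn.ConvTranspose2d(ch * 4, ch * 2, 4, 2, 1), nn.ReLU(True),
            nn.ConvTranspose2d(ch * 2, ch, 4, 2, 1), nn.ReLU(True),
            nn.ConvTranspose2d(ch, latent_len, 4, 2, 1),
        )

    def forward(self, p):              # p: (batch, n_params)
        x = self.fc(p)
        x = x.view(-1, self.ch * 8, self.init_size, self.init_size)
        return self.up(x)              # (batch, latent_len, H, W)

# The training targets would be latent maps produced by the trained RAE
# encoder; a plain MSE regression on random stand-ins is shown here.
predictor = VDLPredictor()
params = torch.tensor([[0.14, 0.022, 0.70]])   # example (OmM, OmB, h)
pred_latent = predictor(params)                # (1, 16, 128, 128)
target = torch.rand_like(pred_latent)
loss = nn.functional.mse_loss(pred_latent, target)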
Given the same selected viewpoint, we feed a new simulation parameter setting into the corresponding trained VDL-Predictor to obtain a predicted view-dependent latent representation, and then decode it back to data space with the trained RAE decoder for visualization.
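Conceptually, that prediction-and-decoding step combines the two sketches above; everything below (shapes, batch splitting, parameter values) is illustrative, and the actual pipeline lives in vdl_predictor/infer.py.

import torch

H = W = 128
L0, latent_len = 512, 16
rae = RayAutoEncoder(ch=64)                          # sketch from above
predictor = VDLPredictor(latent_len=latent_len, img_size=H, ch=64)

params = torch.tensor([[0.14, 0.022, 0.70]])         # example (OmM, OmB, h)
with torch.no_grad():
    latent = predictor(params)                       # (1, latent_len, H, W)
    # Treat each pixel's latent vector as one ray's code and decode the
    # rays in chunks (cf. --batch-size 2048 in the commands below).
    codes = latent.permute(0, 2, 3, 1).reshape(-1, 1, latent_len)
    rays = torch.cat([rae.decoder(c) for c in codes.split(2048)])
volume = rays.reshape(H, W, L0)                      # view-dependent data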
To evaluate VDL-Surrogate on the testing dataset, run
cd vdl_predictor
python eval.py --root dataset \
--direction {x|y|z} \
--sn \
--data-size L_0 \
--img-size H(==W) \
--ch 64 \
--resume path_to_trained_VDL-Predictor \
--ae-resume path_to_trained_RAE \
--batch-size 2048
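For a stand-alone sanity check of data-level quality, peak signal-to-noise ratio can be computed as follows (a generic reference implementation; the exact metrics and conventions in eval.py may differ).

import numpy as np

def psnr(pred, gt):
    """PSNR in dB between a predicted and a ground-truth array."""
    pred = pred.astype(np.float64)
    gt = gt.astype(np.float64)
    mse = np.mean((pred - gt) ** 2)
    data_range = gt.max() - gt.min()
    return 10.0 * np.log10(data_range ** 2 / mse)

print(psnr(np.random.rand(64, 64, 64), np.random.rand(64, 64, 64)))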
To predict the simulation output for a particular simulation parameter setting (OmM: total matter density, OmB: total baryon density, h: Hubble constant), run
cd vdl_predictor
python infer.py --root dataset \
--direction {x|y|z} \
--sn \
--data-size L_0 \
--img-size H(==W) \
--ch 64 \
--resume path_to_trained_VDL-Predictor \
--ae-resume path_to_trained_RAE \
--batch-size 2048 \
--omm OmM \
--omb OmB \
--h H
Given the predicted view-dependent data, consider using the VolumeRenderer_Nyx repository for visualization.
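For example, a predicted volume could be dumped to a raw float32 file for a volume renderer to load; the file name below is hypothetical, and VolumeRenderer_Nyx's expected input format should be checked first.

import numpy as np

# 'volume' is the (H, W, L0) tensor from the inference sketch above.
volume_np = volume.detach().cpu().numpy().astype(np.float32)
volume_np.tofile("pred_omm0.14_omb0.022_h0.70.raw")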
To evaluate the quality of generated visualization images, run:
cd vdl_predictor
python eval_img.py --root path/to/dataset/root \
--tf transfer_function_id \
--mode sub_directory/to/images
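As a stand-alone alternative, image-level quality can be checked with scikit-image, assuming paired PNG renderings on disk (hypothetical paths; eval_img.py may use different metrics and directory layouts).

from skimage.io import imread
from skimage.metrics import structural_similarity

pred = imread("images/pred/0001.png")   # hypothetical paths
gt = imread("images/gt/0001.png")
score = structural_similarity(pred, gt, channel_axis=-1)
print(f"SSIM: {score:.4f}")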
If you use this code for your research, please cite our paper:
@article{shi2022vdl,
title={VDL-Surrogate: A View-Dependent Latent-based Model for Parameter Space Exploration of Ensemble Simulations},
author={Shi, Neng and Xu, Jiayi and Li, Haoyu and Guo, Hanqi and Woodring, Jonathan and Shen, Han-Wei},
journal={IEEE Transactions on Visualization and Computer Graphics},
year={2022},
publisher={IEEE}
}
Our code is inspired by InSituNet.