A memory-based CNN that solves Particle Image Velocimetry (PIV).
### System Tested On
- Ubuntu 20.04
### Automatic Script
A shell script is provided to install the necessary libraries on your system (a virtual environment is recommended). Simply run

```shell
sh setup_env.sh
```
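If you want to isolate the installation first, a minimal sketch of creating and activating a virtual environment before running the script (assuming a standard `python3`/`venv` installation; `piv-env` is a placeholder name):

```shell
# Create and activate an isolated environment before installing anything
python3 -m venv piv-env
. piv-env/bin/activate

# With the environment active, the setup script installs into it:
# sh setup_env.sh

python --version   # confirm the environment's interpreter is active
```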
### List of Libraries
If the shell script does not work for you, you can also install the following libraries manually.
### Data
The data used for training or testing can be found on the shared Google Drive, or you can create your own dataset using the code and instructions available at PIV_Data_Processing.
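Before training, it can help to verify what an `.h5` file actually contains. A minimal sketch using `h5py` (the key name `images` and the array shape below are made up for illustration; the real datasets' keys and shapes may differ, so inspect your own files the same way):

```python
import h5py
import numpy as np

# Build a tiny synthetic .h5 file with an assumed layout:
# a short sequence of 5 grayscale 256x256 particle images.
path = "example_piv.h5"
with h5py.File(path, "w") as f:
    f.create_dataset("images", data=np.random.rand(5, 256, 256).astype(np.float32))

# Inspect any .h5 file: list every dataset with its shape and dtype
with h5py.File(path, "r") as f:
    for key in f.keys():
        print(key, f[key].shape, f[key].dtype)
```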
### Training
To start training, run

```shell
python main.py
```

with the following command-line options.
Necessary:
- `--mode`: For training purposes, use `train`
- `--network_model`: By default, use `memory-piv-net`. Other variants are `memory-piv-net-lite` (lite version) and `memory-piv-net-ip-tiled` (uses image pairs only)
- `--train_dir`: Path to your `.h5` training data
- `--val_dir`: Path to your `.h5` validation data
- `--num_epoch`: Number of epochs
- `--batch_size`: Batch size. For 24 GB VRAM cards, the recommended batch size is 8
- `--time_span`: Choose between `3`, `5`, `7`, and `9`. The recommended time span is `5`
- `--loss`: Choose between `RMSE`, `MSE`, `MAE`, and `AEE`. The recommended loss is `RMSE`
- `--model_dir`: Directory where the trained model will be saved
- `--output_dir`: Directory where the training loss graph will be saved
- `--save_freq`: Models are saved every `save_freq` epochs to the `model_dir` defined above
Optional:
- `--data_type`: Choose between `multi-frame` (default) and `image-pair`
- `--tile_size`: Size of the input data
- `--long_term_memory`: Flag to keep the memory for the entire sequence (not recommended)
- `--verbose`: Verbosity
To continue training, add
- `--checkpoint_dir`: Path to the checkpoint model to continue training from
An example of a complete command-line input:

```shell
python main.py --mode train --network_model memory-piv-net --train_dir TRAIN_DATA_PATH --val_dir VALIDATION_DATA_PATH --num_epoch 50 --batch_size 8 --time_span 5 --loss RMSE --model_dir model/temp/ --output_dir figs/temp/ --save_freq 5 --verbose
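To resume an interrupted run, the same command can be extended with `--checkpoint_dir` (here `CHECKPOINT_MODEL_PATH` is a placeholder for one of the models saved under `--model_dir` every `save_freq` epochs):

```shell
python main.py --mode train --network_model memory-piv-net --train_dir TRAIN_DATA_PATH --val_dir VALIDATION_DATA_PATH --num_epoch 50 --batch_size 8 --time_span 5 --loss RMSE --model_dir model/temp/ --output_dir figs/temp/ --save_freq 5 --checkpoint_dir CHECKPOINT_MODEL_PATH --verbose
```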