Our implementation of the paper "Scalable Parallel Feature Extraction and Tracking for Large Time-varying 3D Volume Data", built as part of our course CS677: Topics in Large Data Analysis and Visualization.
To build and run the code, the following need to be installed:
- A C++ compiler
- An MPI library: we have used MPICH to run our code
The script run_script.sh automates running the code on the kd lab systems, where the hosts can be configured.
Make a directory named build inside the project folder
mkdir build
cd build

We have used cmake to generate the Makefile, so cmake needs to be installed. Inside the build directory, run
cmake ..

After this, the Makefile will be generated, which can be run directly using make.
make

Once the binary is built, we can directly run run_script.sh from the parent directory. Be sure that the PATH variables point to the correct locations.

Now run the following command:
../run_script.sh
The following Python helper scripts are also included:

- binary_merger.py: merges the binary outputs of each of the processors
- binary_to_vti.py: converts binary files to a .vti file
- vti_to_binary.py: converts .vti files to binary
- tfe_writer.py: writes the .tfe file from the colormap.json file
- normalize_data.py: normalizes the data
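
As an illustration of the kind of conversion binary_to_vti.py performs, below is a minimal sketch that turns a raw binary scalar volume into a .vti file. It is not the actual script; the dimensions, data type, scalar name, and file names are placeholders and must be adjusted to the real data.

```python
import numpy as np
import vtk
from vtk.util import numpy_support


def binary_to_vti(binary_path, vti_path, dims=(128, 128, 128), dtype=np.float32):
    """Sketch: read a raw binary scalar volume and write it as a .vti image.

    The dimensions, dtype, and scalar name here are placeholders, not the
    values used by the project's binary_to_vti.py.
    """
    # Read the flat binary volume into a 1D array.
    data = np.fromfile(binary_path, dtype=dtype)

    # Wrap the NumPy array as a VTK array and attach it to an image grid.
    vtk_array = numpy_support.numpy_to_vtk(data, deep=True)
    vtk_array.SetName("scalars")

    image = vtk.vtkImageData()
    image.SetDimensions(*dims)
    image.GetPointData().SetScalars(vtk_array)

    # Write out the XML .vti file.
    writer = vtk.vtkXMLImageDataWriter()
    writer.SetFileName(vti_path)
    writer.SetInputData(image)
    writer.Write()


if __name__ == "__main__":
    binary_to_vti("volume.bin", "volume.vti")  # placeholder file names
```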