This repository contains code for our multimodal teacher–student distillation framework designed to enhance surrogate modeling performance using privileged multimodal data during training. The final student model operates on unimodal input, making it practical for real-world deployment.
Due to anonymous review constraints, the full code for training is not included at this stage. The complete implementation will be released after the paper is published.
We provide three application examples:
- Flow (Computational Fluid Dynamics)
- Plate (Structural Mechanics)
- FOS (Geotechnical Engineering / Factor of Safety)
The repository is organized as follows:

- `Datasets/`: Contains datasets for each of the three domains: Flow, Plate, and FOS.
- `Trained Models/`: Includes the pretrained models reported in the associated paper.
- `main.ipynb`: Example notebook showing:
  - how to read and preprocess the datasets
  - how to create dataloaders for training and evaluation (after publication)
  - how to load and test the pretrained models
- `model.py`: Contains the architecture definitions for both the teacher and student networks.
- `engine.py` (after publication): Includes the training loop and loss functions for the distillation process.
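Since `model.py` is described but the networks themselves are specific to the paper, the snippet below is only a minimal, hypothetical sketch of the interface pattern the README describes: a teacher that consumes the primary modality plus a privileged modality available only at training time, and a student that consumes the primary modality alone. All class and method names here are illustrative, not the repository's actual API.

```python
class Teacher:
    """Multimodal surrogate: fuses the primary input with a privileged
    modality that is available only during training."""
    def forward(self, primary, privileged):
        # List concatenation stands in for a real multimodal fusion layer.
        fused = primary + privileged
        return sum(fused) / len(fused)

class Student:
    """Unimodal surrogate: the deployable model sees only the primary input."""
    def forward(self, primary):
        return sum(primary) / len(primary)

teacher_out = Teacher().forward([1.0, 2.0], [0.5, 0.5])  # uses both modalities
student_out = Student().forward([1.0, 2.0])              # primary modality only
```

The point of the pattern is that the privileged modality never appears in the student's signature, so the trained student can be deployed where only unimodal data exists.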
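The training code in `engine.py` is withheld until publication, so the objective below is only a generic illustration of feature-matching distillation, not the paper's actual loss: the student is fit to the ground truth while its intermediate features are pulled toward those of the privileged teacher. The weighting `lam` and the feature-matching form are assumptions for illustration.

```python
def mse(a, b):
    """Mean squared error between two equal-length sequences."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def distillation_objective(student_pred, target,
                           student_feat, teacher_feat, lam=0.5):
    """Supervised task loss plus a feature-matching distillation term."""
    task_loss = mse(student_pred, target)          # fit the ground-truth labels
    match_loss = mse(student_feat, teacher_feat)   # imitate the teacher's features
    return task_loss + lam * match_loss

# Perfect agreement with both targets and teacher features gives zero loss.
zero = distillation_objective([1.0], [1.0], [0.2, 0.3], [0.2, 0.3])
```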
```shell
# Create environment
conda create -n multimodal-distill python=3.9
conda activate multimodal-distill

# Install dependencies
pip install -r requirements.txt
```