DINR: Dynamical Implicit Neural Representations

Python 3.12 · PyTorch Lightning · MIT License

This is the official PyTorch implementation of DINR (Dynamical Implicit Neural Representations), a framework for learning continuous representations of complex scientific data.


📋 Table of Contents

  • 🔧 Installation
  • 🚀 Quick Start
  • 📁 Project Structure
  • ⚙️ Configuration
  • 📖 Citation
  • 📄 License
  • 🙏 Acknowledgments
  • 📞 Contact

🔧 Installation

Prerequisites

  • Python 3.12+
  • CUDA 11.8+ (for GPU support)
  • conda (recommended for environment management)

Setup

  1. Clone the repository

    git clone https://github.com/Xihaier/DINR.git
    cd DINR

  2. Create conda environment

    conda env create -f environment.yml
    conda activate DINR
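After activating the environment, a quick generic sanity check (not part of the DINR codebase) confirms that PyTorch is importable and whether CUDA is visible:

```python
# Verify the environment: PyTorch import and GPU visibility.
import torch

print(f"PyTorch version: {torch.__version__}")
print(f"CUDA available: {torch.cuda.is_available()}")
```

If `CUDA available` prints `False` on a GPU machine, check that your driver matches the CUDA 11.8+ requirement above.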

🚀 Quick Start

Basic Training

Train a Fourier Feature Network on turbulence data:

python src/train.py data=turbulence model=FFNet

Train a Dynamical FFNet:

python src/train.py data=turbulence model=DynamicalFFNet

Run All Experiments

Use the provided script to train all model variants:

bash scripts/run.sh

Evaluation

Evaluate a trained model:

python src/eval.py \
  data=turbulence \
  model=FFNet \
  ckpt_path=logs/ntk/FFNet/checkpoints/best.ckpt

📁 Project Structure

DINR/
├── configs/                    # Hydra configuration files
│   ├── callbacks/             # Training callbacks (checkpointing, early stopping)
│   ├── data/                  # Dataset configurations
│   ├── model/                 # Model architecture configs
│   │   ├── FFNet.yaml
│   │   ├── SIREN.yaml
│   │   ├── DynamicalFFNet.yaml
│   │   └── DynamicalSIREN.yaml
│   ├── trainer/               # PyTorch Lightning trainer configs
│   ├── logger/                # Logging configurations (W&B)
│   ├── train.yaml             # Main training configuration
│   └── eval.yaml              # Evaluation configuration
│
├── data/                       # Data directory (*.npy files, gitignored)
│   ├── turbulence_1024.npy
│   ├── ctbl3d.npy
│   └── ...
│
├── src/                        # Source code
│   ├── data/
│   │   └── datamodule.py      # Lightning DataModule with NTK subset support
│   ├── models/
│   │   ├── components/        # Model architectures
│   │   │   ├── FFNet.py              # Fourier Feature Network
│   │   │   ├── SIRENNet.py           # SIREN Network
│   │   │   ├── Dynamical_FFNet.py    # OC-FFNet
│   │   │   └── Dynamical_SIRENNet.py # OC-SIREN
│   │   └── modelmodule.py     # Lightning modules (INRTraining, DINRTraining)
│   ├── utils/
│   │   ├── ntk.py             # Neural Tangent Kernel analysis
│   │   ├── metrics.py         # Loss and error metrics
│   │   ├── viz.py             # Visualization utilities
│   │   └── ...                # Various utilities
│   ├── train.py               # Training entry point
│   └── eval.py                # Evaluation entry point
│
├── scripts/
│   └── run.sh                 # Batch training script
│
├── logs/                       # Training outputs (gitignored)
│   └── ntk/                   # Organized by experiment name
│
├── environment.yml             # Conda environment specification
├── .gitignore
├── .project-root              # Root marker for rootutils
└── README.md

⚙️ Configuration

DINR uses Hydra for configuration management. All configurations are in the configs/ directory.

Key Configuration Files

Model Configuration (configs/model/)

FFNet.yaml - Traditional Fourier Feature Network

net:
  _target_: src.models.components.FFNet.FourierFeatureNetwork
  input_dim: 2
  mapping_size: 256      # Fourier feature dimension
  hidden_dim: 256
  num_layers: 5
  output_dim: 1
  sigma: 10.0           # Fourier feature scale
  dropout_rate: 0.1
  activation: "GELU"
  use_residual: true
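For intuition, the random Fourier feature mapping that this config parameterizes (mapping_size, sigma) can be sketched as follows. This is the standard construction from Tancik et al. (2020); the function and variable names here are illustrative, not DINR's actual API:

```python
# Minimal sketch of a random Fourier feature mapping for 2-D coordinates.
# B is sampled once at init with entries ~ N(0, sigma^2); sigma sets the
# frequency scale, mapping_size the number of random frequencies.
import math
import torch

def fourier_features(coords: torch.Tensor, B: torch.Tensor) -> torch.Tensor:
    """Map coordinates x -> [sin(2*pi*Bx), cos(2*pi*Bx)]."""
    proj = 2 * math.pi * coords @ B.T                      # (N, mapping_size)
    return torch.cat([torch.sin(proj), torch.cos(proj)], dim=-1)

sigma, mapping_size, input_dim = 10.0, 256, 2
B = torch.randn(mapping_size, input_dim) * sigma

coords = torch.rand(1024, input_dim)                       # normalized (x, y) points
feats = fourier_features(coords, B)
print(feats.shape)                                         # torch.Size([1024, 512])
```

The resulting 512-dimensional features (sin and cos of 256 projections) are what the MLP with `hidden_dim: 256` and `num_layers: 5` consumes.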

DynamicalFFNet.yaml - Dynamical FFNet

net:
  _target_: src.models.components.Dynamical_FFNet.DynamicalFourierFeatureNetwork
  input_dim: 2
  mapping_size: 256
  hidden_dim: 256
  num_layers: 3          # ODE function layers
  num_steps: 12          # ODE integration steps
  total_time: 1.0        # Integration time horizon
  ot_lambda: 0.1         # Optimal transport weight
  block_type: "residual"
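The continuous-depth idea behind these settings (`num_steps`, `total_time`) can be sketched as hidden features evolving under a learned velocity field, integrated with forward Euler. This is an illustrative reading of the config, not DINR's actual implementation (see `src/models/components/Dynamical_FFNet.py` for that):

```python
# Hedged sketch: a residual block whose depth is interpreted as time,
# integrating dh/dt = f(h) with forward Euler over num_steps.
import torch
import torch.nn as nn

class DynamicalBlock(nn.Module):
    def __init__(self, hidden_dim=256, num_layers=3, num_steps=12, total_time=1.0):
        super().__init__()
        layers = []
        for _ in range(num_layers):
            layers += [nn.Linear(hidden_dim, hidden_dim), nn.GELU()]
        self.f = nn.Sequential(*layers)       # learned velocity field f(h)
        self.num_steps = num_steps
        self.dt = total_time / num_steps      # Euler step size

    def forward(self, h):
        for _ in range(self.num_steps):       # h <- h + dt * f(h)
            h = h + self.dt * self.f(h)
        return h

h = torch.randn(8, 256)
out = DynamicalBlock()(h)
print(out.shape)                              # torch.Size([8, 256])
```

The `ot_lambda` weight would then penalize the transport cost of this trajectory; that regularizer is omitted from the sketch.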

Data Configuration (configs/data/turbulence.yaml)

_target_: src.data.datamodule.DataModule
data_dir: ${paths.data_dir}turbulence_1024.npy
in_features: 2
normalization: min-max
data_shape: [1024, 1024]
batch_size: [65536, 65536, 65536]  # [train, val, test]
ntk_subset_mode: subgrid           # NTK coordinate sampling
ntk_subgrid_g: 32                  # NTK grid resolution
generalization_test: false
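To make the data config concrete, here is a sketch of how a 2-D field like `turbulence_1024.npy` might be turned into (coordinate, value) training pairs with min-max normalization. This is illustrative only; the real logic lives in `src/data/datamodule.py`:

```python
# Sketch: flatten a 1024x1024 field into INR training pairs.
import numpy as np

field = np.random.rand(1024, 1024).astype(np.float32)  # stand-in for the .npy file

# Min-max normalize field values to [0, 1].
values = (field - field.min()) / (field.max() - field.min())

# Build a normalized (x, y) coordinate grid over [0, 1]^2.
ys, xs = np.meshgrid(np.linspace(0, 1, field.shape[0]),
                     np.linspace(0, 1, field.shape[1]), indexing="ij")
coords = np.stack([xs, ys], axis=-1).reshape(-1, 2)    # (1024*1024, 2) inputs
targets = values.reshape(-1, 1)                        # (1024*1024, 1) targets
print(coords.shape, targets.shape)
```

With `batch_size: [65536, 65536, 65536]`, each training batch would draw 65,536 of these coordinate–value pairs; the `ntk_subgrid_g: 32` option would instead subsample a 32×32 subgrid for NTK analysis.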

📖 Citation

If you use this code in your research, please cite:

@article{park2025dynamical,
  title={Dynamical Implicit Neural Representations},
  author={Park, Yesom and Kan, Kelvin and Flynn, Thomas and Huang, Yi and Yoo, Shinjae and Osher, Stanley and Luo, Xihaier},
  journal={arXiv preprint arXiv:2511.21787},
  year={2025}
}

📄 License

This project is licensed under the MIT License.


🙏 Acknowledgments

  • PyTorch Lightning team for the excellent training framework
  • Hydra team for flexible configuration management
  • Authors of FFNet and SIREN for foundational INR architectures
  • The neural ODE community for continuous-depth architecture inspiration

📞 Contact


Note: This project is under active development. Star ⭐ the repository to stay updated!
