Code for the paper "Learning Where to Learn: Training Distribution Selection for Provable OOD Performance".
This repository contains code to reproduce experiments from the paper. It includes:
- Bilevel kernel-based function approximation experiments
- Neumann-to-Dirichlet (NtD) and Darcy flow examples
The two folders are organized independently, but both explore the choice of optimal training distributions for out-of-distribution (OOD) generalization. The repository is laid out as follows:
./
├── function_approximation/   # Kernel-based function approximation experiments
│   ├── driver.py
│   ├── plot.py
│   ├── Project.yml
│   └── ...
│
├── operator_learning/        # NtD and Darcy flow examples
│   ├── NtDExample.py
│   ├── DarcyFlowGPU.py
│   ├── PlotResults.ipynb
│   ├── requirements.txt
│   └── ...
Setup:
- Function approximation: create and activate the conda environment with `conda env create -f function_approximation/Project.yml`, then `conda activate bilevel`.
- Operator learning: install the dependencies with `pip install -r operator_learning/requirements.txt`.

Note: both environments will use a GPU when one is available.
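To confirm that a GPU is actually visible before launching the longer runs, a quick check such as the following can help. This sketch assumes the scripts use PyTorch as their GPU backend (an assumption on our part; adapt it to whatever framework the scripts import):

```python
# Hedged sanity check: is a CUDA device visible to PyTorch?
# Assumes PyTorch is among the installed dependencies; adjust if the
# scripts use a different GPU backend.
import torch

if torch.cuda.is_available():
    print("GPU found:", torch.cuda.get_device_name(0))
else:
    print("No GPU found; runs will fall back to CPU and be slower.")
```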
Function approximation:
- Run the experiments: `cd function_approximation`, then `python -u driver.py`. This writes `errors.npy` and other intermediate files (a loading sketch follows this list).
- Plot the results: `python plot.py`.
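As a quick sanity check on a finished run, the saved error array can be inspected with NumPy. The shape and meaning of its entries depend on the settings in `driver.py`, so treat this as a minimal sketch rather than a documented interface:

```python
# Minimal sketch: inspect the error array produced by driver.py.
# The array's shape and semantics depend on the experiment configuration.
import numpy as np

errors = np.load("function_approximation/errors.npy")
print("dtype:", errors.dtype, "shape:", errors.shape)
print("last few entries:", errors.reshape(-1)[-5:])
```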
Operator learning:
- Run the NtD example: `cd operator_learning`, then `python NtDExample.py`.
- Run the Darcy flow example: `python DarcyFlowGPU.py`.
- Plot the results: `jupyter notebook PlotResults.ipynb`. Make sure `NtD_results.pkl` and `DarcyFlow_results.pkl` are in the same directory (a loading sketch follows this list).
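If you want to inspect the raw results outside the notebook, the pickle files can be opened directly. The internal structure of the stored objects is not specified here, so this sketch only prints their types and, if they are dictionaries, their keys:

```python
# Minimal sketch: peek inside the result files consumed by PlotResults.ipynb.
# The structure of the stored objects is an assumption; inspect before use.
import pickle

for fname in ("NtD_results.pkl", "DarcyFlow_results.pkl"):
    with open(fname, "rb") as f:
        results = pickle.load(f)
    print(fname, "->", type(results).__name__)
    if isinstance(results, dict):
        print("  keys:", list(results.keys()))
```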
- Hyperparameters (e.g., sample sizes, training iterations) can be modified within each script.
- Ensure all required dependencies are installed before executing code.
- For plotting, a LaTeX distribution may be required (see the Matplotlib note after this list).
- The GitHub repository for the AMINO architecture used in Figure 1 of our paper can be found here.
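The LaTeX requirement usually comes from Matplotlib's `text.usetex` setting. If no LaTeX distribution is installed, one workaround is to disable it and fall back to Matplotlib's built-in mathtext; this assumes the plotting code uses Matplotlib, may change the typesetting slightly, and would need to be applied inside a script if that script sets `text.usetex` itself:

```python
# Sketch: fall back to Matplotlib's built-in mathtext when no system LaTeX is available.
import matplotlib

matplotlib.rcParams["text.usetex"] = False
```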