Add Latin Hypercube Sampling for Monte Carlo Analysis#57

Open
bsergi wants to merge 15 commits into main from bs/lhs

Conversation


@bsergi bsergi commented Apr 23, 2026

Summary

This PR adds the option to use Latin Hypercube sampling (LHS) when running a Monte Carlo analysis. It also fixes the sampling approach for load.

Technical details

LHS is a quasi-random sampling method that partitions each input distribution being sampled into N bins of equal probability and then draws one sample from each bin. The advantage of this method is that it converges to the "true" input distribution with fewer samples. The figure below illustrates this with a comparison of the random and LHS methods for uniform and triangular costs using the 2050 nuclear ATB costs:

[Figure: convergence of random vs. LHS sampling for uniform and triangular 2050 nuclear ATB cost distributions]

More details on the LHS approach can be found in Sheikholeslami, R., & Razavi, S. (2017).
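The one-draw-per-bin idea above can be sketched in a few lines. This is a minimal 1-D illustration of the concept, not the PR's implementation (the function name `lhs_1d` is made up for this example):

```python
import numpy as np

def lhs_1d(n, rng=None):
    """Draw n Latin hypercube samples on [0, 1): one per equal-probability bin."""
    rng = np.random.default_rng(rng)
    edges = np.arange(n) / n              # left edge of each of the n bins
    points = edges + rng.random(n) / n    # one uniform draw inside each bin
    rng.shuffle(points)                   # randomize the ordering of the draws
    return points

samples = lhs_1d(10, rng=0)
counts, _ = np.histogram(samples, bins=10, range=(0.0, 1.0))
# every bin [i/10, (i+1)/10) contains exactly one sample
```

Because each bin is guaranteed exactly one draw, the empirical CDF matches the target distribution more evenly than independent random draws of the same size.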

Implementation notes

The LHS method is implemented in mcs_sampler.py during input processing. When activated, the LHS method samples an n x d matrix where n = number of samples (specified by MCS_runs) and d = dimensions, i.e., the number of variables being sampled (specified by MCS_dist_groups).

This matrix is generated upfront and gives the position on the cumulative distribution function (CDF) for each sample. These positions are subsequently used by a set of lhs functions to derive the realized values from the respective distributions. The original LHS sample matrix is saved to inputs_case/mcs_latin_hypercube_samples. Note that this approach is distinct from the implementation of the pure random approach, which samples weights and then applies them to the relevant files and switches.
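The matrix-then-inverse-CDF flow described above can be sketched with SciPy's quasi-Monte Carlo module. This is a hedged, self-contained illustration of the general technique; the variable names and the triangular-cost parameters are placeholders, not the PR's actual code or data:

```python
import numpy as np
from scipy.stats import qmc, triang

# n samples (cf. MCS_runs) x d distribution groups (cf. MCS_dist_groups)
n, d = 100, 3
sampler = qmc.LatinHypercube(d=d, seed=0)
u = sampler.random(n=n)   # n x d matrix of CDF positions in [0, 1)

# Realize values by pushing each column through the inverse CDF (ppf) of its
# distribution, e.g. a triangular cost on [low, high] with mode `mode`.
# (Illustrative numbers; not the ATB inputs.)
low, high, mode = 4000.0, 8000.0, 6000.0
c = (mode - low) / (high - low)           # SciPy's triangular shape parameter
costs = triang.ppf(u[:, 0], c, loc=low, scale=high - low)
```

Saving `u` before inversion mirrors the note above about persisting the original sample matrix: the same CDF positions can then be replayed against any set of input distributions.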

Additional changes

Switches added/removed/changed

Adds MCS_lhs: 0 to use random sampling, 1 to use LHS (default: 1)
Modifies input_processing_only: adds an option 2 that stops input processing right after Monte Carlo sampling (useful for testing the input distributions before running).

Issues resolved

Partially addresses #41 by fixing load.

Known incompatibilities

Relevant sources or documentation

Validation, testing, and comparison report(s)

Monte Carlo sampling is off by default, so there is no change to the default case. The attached slide deck summarizes input distributions and results from a set of 54-region ReEDS runs using both the random and LHS approaches for different numbers of samples. In general, LHS converges faster to the expected input distributions than the random approach. Both approaches yield reasonably comparable results on aggregate metrics such as the mean and 90% coverage for installed capacity, annual generation, and system costs.

20260430_latin_hypercube_sampling.pdf

Checklist for author

Details to double-check

  • Charge code provided to reviewers
  • Included comparison reports for appropriate test cases
  • Documentation updated if necessary
  • If input data added/modified:
    • Dollar year recorded and converted to 2004$ for GAMS
    • Timeseries are in Central Time
    • Units are specified
    • Preprocessing steps have been documented and committed to ReEDS_Input_Processing
    • New large data files handled with .h5 instead of .csv
    • If spatially resolved inputs are modified, the following visualizations for each file are included in the PR description (time-averaged if the inputs are time-resolved):
      • Map of absolute values before
      • Map of absolute values after
      • Map of differences: (after - before) or (after / before)
    • If entries are added/removed/changed in the EIA-NEMS unit database:
      • Changes have been committed to ReEDS_Input_Processing
      • hourlize/resource.py was rerun to regenerate the existing/prescribed VRE capacity data
  • Code formatting standardized
  • Reusable functions used where possible instead of copy/pasted code

General information to guide review

  • Zero impact on results of default case
  • No large data file(s) added/modified
  • No substantive impact on runtime for full-US reference case
  • No substantive impact on folder size for full-US reference case
  • No change to process flow (runbatch.py, d_solve_iterate.py)
  • No change to code organization
  • No change to package requirements (environment.yml or Project.toml)

Did you use LLM tools (chatbot or copilot) in the preparation of this PR? If so, describe how

I used Claude to generate the function docstrings.

Tag points of contact here if you would like additional review of the relevant parts of the model

@bsergi bsergi marked this pull request as draft April 23, 2026 19:40
@bsergi bsergi self-assigned this Apr 23, 2026
