Lightweight, deterministic Monte Carlo simulation framework with robust statistical analytics, parallel execution, and GPU acceleration.
```bash
pip install mcframework
```

Or install from source:

```bash
git clone https://github.com/milanfusco/mcframework.git
cd mcframework
pip install -e .
```

| Package | Version | Purpose |
|---|---|---|
| Python | ≥ 3.10 | Runtime |
| NumPy | ≥ 1.26 | Arrays, RNG |
| SciPy | ≥ 1.10 | Statistics |
| Matplotlib | ≥ 3.7 | Visualization |
| PyTorch | ≥ 2.9.1 | GPU acceleration (optional) |
| CuPy | ≥ 12.0.0 | cuRAND backend (optional) |
```bash
# All extras
pip install -e ".[dev,test,docs,gui,gpu,cuda]"

# Individual extras
pip install -e ".[dev]"   # Linting (ruff, pylint)
pip install -e ".[test]"  # Testing (pytest, coverage)
pip install -e ".[docs]"  # Documentation (Sphinx, themes)
pip install -e ".[gui]"   # GUI application (PySide6)
pip install -e ".[gpu]"   # PyTorch GPU backends (MPS, CUDA)
pip install -e ".[cuda]"  # PyTorch + CuPy for cuRAND
```

```python
from mcframework import PiEstimationSimulation

sim = PiEstimationSimulation()
sim.set_seed(42)
result = sim.run(10_000, backend="thread")
print(result)
```

```python
from mcframework import MonteCarloSimulation

class DiceSumSimulation(MonteCarloSimulation):
    def __init__(self):
        super().__init__("Dice Sum")

    def single_simulation(self, _rng=None, n_dice: int = 5) -> float:
        rng = self._rng(_rng, self.rng)
        return float(rng.integers(1, 7, size=n_dice).sum())

sim = DiceSumSimulation()
sim.set_seed(42)
result = sim.run(10_000, backend="thread")
print(f"Mean: {result.mean:.2f}")  # ~17.5
```

```python
result = sim.run(
    50_000,
    percentiles=(1, 5, 50, 95, 99),
    confidence=0.99,
    ci_method="auto",
)
print(result.stats["ci_mean"])  # 99% confidence interval
```

`MonteCarloFramework` provides a registry for managing and comparing multiple simulations:
```python
from mcframework import MonteCarloFramework, PiEstimationSimulation

fw = MonteCarloFramework()
fw.register_simulation(PiEstimationSimulation())

result = fw.run_simulation("Pi Estimation", 10_000, n_points=5000, backend="thread")
print(result.result_to_string())
```

- Abstract base class (`MonteCarloSimulation`) — Define simulations by implementing `single_simulation()`
- Deterministic parallelism — Reproducible results via NumPy `SeedSequence` spawning
- Cross-platform execution — Threads on POSIX, processes on Windows
- Structured results — `SimulationResult` dataclass with metadata and formatting
- Descriptive statistics — Mean, std, percentiles, skew, kurtosis
- Parametric CI — z/t critical values with auto-selection
- Bootstrap CI — Percentile and BCa (bias-corrected and accelerated) methods
- Distribution-free bounds — Chebyshev intervals, Markov probability
- CUDA (NVIDIA) — Adaptive batch sizing, CUDA streams, dual RNG (`torch.Generator` + cuRAND), native float64, multi-GPU
- MPS (Apple Silicon) — Metal Performance Shaders on M1/M2/M3/M4, unified memory, best-effort determinism
- Torch CPU — Vectorized batch execution via PyTorch without GPU hardware
- Pluggable architecture — Protocol-based `ExecutionBackend` interface for custom backends
- PyTorch profiler integration — CPU and CUDA profiling with configurable schedules
- Chrome trace export — Visualize execution timelines in `chrome://tracing`
- Memory and FLOPs — Optional memory profiling and floating-point operation estimation
- Pi Estimation — Geometric probability on unit disk
- Portfolio Simulation — GBM (Geometric Brownian Motion) wealth dynamics
- Black-Scholes — European/American option pricing with Greeks
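The percentile bootstrap named above can be illustrated independently of the framework. A minimal sketch in plain NumPy (toy exponential data; this shows the idea of the percentile method, not the internals of `StatsEngine`):

```python
import numpy as np

rng = np.random.default_rng(42)
sample = rng.exponential(scale=2.0, size=500)  # skewed toy data, true mean 2.0

# Percentile bootstrap for a 99% CI on the mean: resample with
# replacement, then take the 0.5% and 99.5% quantiles of the
# bootstrap distribution of the mean.
boot_means = np.array([
    rng.choice(sample, size=sample.size, replace=True).mean()
    for _ in range(2_000)
])
lo, hi = np.percentile(boot_means, [0.5, 99.5])
print(f"99% CI for the mean: [{lo:.3f}, {hi:.3f}]")
```

BCa adds a bias and acceleration correction on top of these raw quantiles, which matters for skewed samples like this one.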
```
mcframework/
├── __init__.py          # Public API exports
├── core.py              # SimulationResult, MonteCarloFramework
├── simulation.py        # MonteCarloSimulation base class
├── stats_engine.py      # StatsEngine, StatsContext, ComputeResult, metrics
├── utils.py             # z_crit, t_crit, autocrit
├── profiling.py         # PyTorch profiler integration
├── backends/
│   ├── __init__.py      # Backend exports (lazy Torch imports)
│   ├── base.py          # ExecutionBackend protocol, utilities
│   ├── sequential.py    # Single-threaded execution
│   ├── parallel.py      # Thread and process backends
│   ├── torch.py         # TorchBackend factory (auto-selects device)
│   ├── torch_base.py    # Shared Torch/cuRAND utilities
│   ├── torch_cpu.py     # PyTorch CPU backend
│   ├── torch_mps.py     # Apple Silicon MPS backend
│   └── torch_cuda.py    # NVIDIA CUDA backend
└── sims/
    ├── __init__.py      # Simulation catalog
    ├── pi.py            # PiEstimationSimulation
    ├── portfolio.py     # PortfolioSimulation
    └── black_scholes.py # BlackScholesSimulation, BlackScholesPathSimulation
```
`MonteCarloSimulation.run()` supports multiple execution strategies via the `backend` parameter:

| Backend | Selection | Description |
|---|---|---|
| `"sequential"` | `backend="sequential"` | Single-threaded execution |
| `"thread"` | `backend="thread"` (POSIX default) | `ThreadPoolExecutor` — NumPy releases the GIL |
| `"process"` | `backend="process"` (Windows default) | `ProcessPoolExecutor` — avoids GIL serialization |
| `"torch"` (CPU) | `backend="torch", torch_device="cpu"` | Vectorized PyTorch batching on CPU |
| `"torch"` (MPS) | `backend="torch", torch_device="mps"` | Apple Silicon GPU via Metal |
| `"torch"` (CUDA) | `backend="torch", torch_device="cuda"` | NVIDIA GPU with adaptive batching |
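The thread backend's determinism rests on NumPy `SeedSequence` spawning: each worker draws from its own child stream, so results don't depend on thread scheduling. A standalone sketch of that pattern in plain NumPy and `concurrent.futures` (illustrative chunk sizes, not the framework's own code):

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def chunk_estimate(seed_seq, n):
    # Each worker builds its own Generator from a spawned child seed,
    # so the result is independent of thread scheduling.
    rng = np.random.default_rng(seed_seq)
    x, y = rng.random(n), rng.random(n)
    return 4.0 * np.mean(x * x + y * y <= 1.0)  # per-chunk pi estimate

# The parent seed fully determines every child stream.
children = np.random.SeedSequence(42).spawn(4)
with ThreadPoolExecutor(max_workers=4) as pool:
    estimates = list(pool.map(chunk_estimate, children, [25_000] * 4))
pi_hat = float(np.mean(estimates))
print(f"pi estimate: {pi_hat:.4f}")
```

Re-spawning from the same parent seed reproduces every chunk exactly, whether the chunks run threaded or sequentially.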
Simulations that implement `torch_batch()` can run on NVIDIA CUDA or Apple Silicon MPS devices with significant speedups over CPU. Install PyTorch via the `gpu` or `cuda` extras (see Optional Dependencies).
```python
from mcframework import PiEstimationSimulation

sim = PiEstimationSimulation()
sim.set_seed(42)

# NVIDIA GPU
result = sim.run(1_000_000, backend="torch", torch_device="cuda")

# Apple Silicon GPU
result = sim.run(1_000_000, backend="torch", torch_device="mps")

# PyTorch CPU (vectorized batching, no GPU required)
result = sim.run(1_000_000, backend="torch", torch_device="cpu")
```

Set `supports_batch = True` as a class attribute and implement `torch_batch()`:
```python
import torch

from mcframework import MonteCarloSimulation

class MySimulation(MonteCarloSimulation):
    supports_batch = True

    def single_simulation(self, _rng=None, **kwargs):
        # Scalar path used by the sequential/thread/process backends.
        rng = self._rng(_rng, self.rng)
        x, y = rng.random(), rng.random()
        return 4.0 if (x*x + y*y) <= 1.0 else 0.0

    def torch_batch(self, n, *, device, generator):
        # Vectorized path used by the Torch backends (CPU, MPS, CUDA).
        x = torch.rand(n, device=device, generator=generator)
        y = torch.rand(n, device=device, generator=generator)
        return 4.0 * ((x*x + y*y) <= 1.0).float()
```

| Feature | CUDA (NVIDIA) | MPS (Apple Silicon) | CPU |
|---|---|---|---|
| float64 support | Native | Emulated (float32 -> float64) | Native |
| Determinism | Bitwise | Best-effort (statistical) | Bitwise |
| Multi-GPU | Yes | Single device | Multi-core |
| CUDA streams | Yes | No | N/A |
| Adaptive batching | Yes | No | Sequential/Parallel |
For detailed usage, configuration, and troubleshooting, see the dedicated backend guides:
- CUDA Backend Guide — adaptive batching, cuRAND, streams, multi-GPU
- MPS Backend Guide — Apple Silicon setup, float32 handling, determinism
The framework includes a PySide6 GUI for Black-Scholes Monte Carlo simulations.
```bash
pip install -e ".[gui]"
python demos/gui/quant_black_scholes.py
```

Features:
- Live stock data from Yahoo Finance
- Monte Carlo path simulations
- Option pricing with Greeks (Δ, Γ, ν, Θ, ρ)
- Interactive what-if analysis
- 3D option price surfaces
- HTML report export
Scenario Presets: High volatility (TSLA), Index ETFs (SPY), Crypto-adjacent (COIN), Dividend stocks (JNJ)
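The pricing behind these demos can be sketched in plain NumPy: a terminal-price GBM Monte Carlo estimate of a European call under the risk-neutral measure. All parameter values below are illustrative, and this is the textbook method rather than `BlackScholesSimulation` itself:

```python
import numpy as np

# Illustrative parameters: spot, strike, risk-free rate, volatility, maturity.
S0, K, r, sigma, T = 100.0, 105.0, 0.05, 0.2, 1.0
rng = np.random.default_rng(42)

# GBM terminal prices: S_T = S0 * exp((r - sigma^2/2) T + sigma sqrt(T) Z)
Z = rng.standard_normal(1_000_000)
ST = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * Z)

# Discounted expected payoff = Monte Carlo call price.
price = np.exp(-r * T) * np.maximum(ST - K, 0.0).mean()
print(f"MC call price: {price:.2f}")  # Black-Scholes closed form gives ~8.02
```

With a million draws the standard error is roughly 0.01, so the estimate lands close to the closed-form value; the GUI layers Greeks and path simulation on top of the same dynamics.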
```bash
# Run tests with coverage
pytest --cov=mcframework -v

# Generate coverage reports
pytest --cov=mcframework --cov-report=xml:coverage.xml  # XML
pytest --cov=mcframework --cov-report=html              # HTML
```

```bash
ruff check src/
pylint src/mcframework/
```

```bash
# Install docs dependencies
pip install -e ".[docs]"

# Build HTML documentation
sphinx-build -b html docs/source docs/_build/html

# Serve locally
python -m http.server 8000 -d docs/_build/html
```

The documentation uses:
- Sphinx with pydata-sphinx-theme
- Mermaid for interactive diagrams
- NumPy-style docstrings with LaTeX math
- Light/dark theme toggle with diagram re-rendering
MIT License. See LICENSE file.
- Milan Fusco — mdfusco@student.ysu.edu