Active Learning for Reliability Analysis - A high-performance, production-ready Python framework for structural reliability analysis using active learning and Gaussian Process (Kriging) surrogate models.
✅ AK-MCS (Active learning reliability method combining Kriging and Monte Carlo Simulation)
- Single-point sequential sampling
- Batch sampling with Determinantal Point Process (DPP)
- Two stopping criteria: U-function and BESC (Balanced Error Stopping Criterion)
- Two learning functions: U-function and EER (Expected Error Reduction)
✅ AK-SYS (Active learning for system reliability)
- Series and parallel systems
- Masking effect weighting
- System-level error estimation
- Component-wise adaptive sampling
✅ Production-Ready Code Quality
- Comprehensive docstrings with mathematical formulas
- Type hints for better IDE support
- Extensive comments explaining algorithms
- Numerical stability handling (float64, log-space computation)
✅ Advanced Implementations
- DPP batch sampling with Schur complement acceleration (O(N·D) complexity)
- ARD (Automatic Relevance Determination) lengthscale-based distance weighting
- Adaptive MCS pool expansion when COV threshold is not met
- Automatic fallback to Adam optimizer when L-BFGS fails
✅ Rich Visualization
- 2D: Limit state surface with contour plots
- High-dimensional: t-SNE dimensionality reduction
- Convergence curves (Pf, COV, stopping metrics)
- Component-wise sample distribution plots
✅ Flexible Configuration
- Support for Normal, Uniform, Log-Normal, and Gumbel distributions
- Automatic boundary calculation
- Probability transformations (Nataf, isoprobabilistic)
- Easy-to-use dataclass-based configuration
Requirements:
- Python 3.8 or higher
- PyTorch 2.0 or higher
- CUDA (optional, for GPU acceleration)

Installation:

```bash
# Clone the repository
git clone https://github.com/yourusername/ActiveLearning4RA.git
cd ActiveLearning4RA

# Install dependencies
pip install -r requirements.txt

# Install the package in development mode
pip install -e .
```

Dependencies (requirements.txt):

```
torch>=2.0.0
numpy>=1.21.0
scipy>=1.7.0
matplotlib>=3.4.0
botorch>=0.9.0
gpytorch>=1.11.0
scikit-learn>=1.0.0
tqdm>=4.60.0
```
Quick Start (AK-MCS):

```python
import torch
from ActiveLearning4RA.core.component import AK_MCS
from ActiveLearning4RA.config.input_config import InputConfig, VariableConfig

# Define performance function: g(x) <= 0 means failure
def performance_function(x):
    """Simple linear limit state"""
    return x[:, 0] + 2*x[:, 1] - 5

# Configure input variables
input_config = InputConfig(variables=[
    VariableConfig(name='X1', type='normal', parameter_1=0.0, parameter_2=1.0),
    VariableConfig(name='X2', type='normal', parameter_1=0.0, parameter_2=1.0)
])

# Generate initial DOE using Latin Hypercube Sampling
init_doe = input_config.generate_doe(num_samples=10)

# Initialize AK-MCS
ak_mcs = AK_MCS(
    func=performance_function,
    input_config=input_config,
    stop_criterion='BESC',  # 'U' or 'BESC'
    learning_func='EER',    # 'U' or 'EER'
    seed=42
)

# Run the algorithm
model = ak_mcs.run(
    init_doe=init_doe,
    max_iter=100,
    samples_no=100000,
    batch_size=3,           # 1 for sequential, >1 for batch DPP
    cov_threshold=0.05
)

# Visualize results
ak_mcs.plot_convergence()   # Pf, COV, stopping metric
ak_mcs.plot_samples()       # Sample distribution
```

Quick Start (AK-SYS):

```python
from ActiveLearning4RA.core.system import AK_SYS

# Define component performance functions
def component_1(x):
    return 3 + 0.1*(x[:, 0] - x[:, 1])**2 - (x[:, 0] + x[:, 1]) / torch.sqrt(torch.tensor(2.0))

def component_2(x):
    return 3 + 0.1*(x[:, 0] - x[:, 1])**2 + (x[:, 0] + x[:, 1]) / torch.sqrt(torch.tensor(2.0))

def component_3(x):
    return (x[:, 0] - x[:, 1]) + 6 / torch.sqrt(torch.tensor(2.0))

def component_4(x):
    return (x[:, 1] - x[:, 0]) + 6 / torch.sqrt(torch.tensor(2.0))

# Configure input
input_config = InputConfig(variables=[
    VariableConfig(name='X1', type='normal', parameter_1=0.0, parameter_2=1.0),
    VariableConfig(name='X2', type='normal', parameter_1=0.0, parameter_2=1.0)
])

# Initialize AK-SYS
ak_sys = AK_SYS(
    funcs=[component_1, component_2, component_3, component_4],
    input_config=input_config,
    system_type='series',   # 'series' or 'parallel'
    stop_criterion='BESC',
    learning_func='EER'
)

# Run the algorithm
models = ak_sys.run(
    init_doe=input_config.generate_doe(num_samples=12),
    max_iter=100,
    samples_no=100000,
    batch_size=3
)

# Visualize
ak_sys.plot_convergence()
ak_sys.plot_samples()       # Plots each component separately
```

Workflow:
- Initialization: Generate initial DOE using LHS
- Surrogate Training: Fit Gaussian Process (Kriging) model to existing samples
- MCS Prediction: Predict performance function on large MCS pool
- Convergence Check:
  - U-function: `min U(x) >= 2` (97.7% confidence)
  - BESC: `ε_Kriging <= ε_MCS` (adaptive threshold)
- Sample Selection:
  - Single-point: Select the point with minimum U or maximum EER
  - Batch (DPP): Maximize the quality-diversity trade-off using a DPP
- Update: Add new sample(s) and retrain model
- Repeat until convergence
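The loop below is a minimal sketch of this workflow under the U criterion; `fit_gp` and `gp_predict` are hypothetical placeholders standing in for the framework's GPyTorch internals, not its actual API.

```python
import torch

def ak_mcs_sketch(g, x_doe, x_pool, max_iter=100):
    """Minimal sketch of the workflow above. fit_gp and gp_predict are
    hypothetical placeholders for the framework's GP training/prediction."""
    y_doe = g(x_doe)
    pf_hat = torch.tensor(0.0)
    for _ in range(max_iter):
        gp = fit_gp(x_doe, y_doe)                 # 2. train the Kriging surrogate
        mu, sigma = gp_predict(gp, x_pool)        # 3. predict on the MCS pool
        pf_hat = (mu <= 0).float().mean()         #    current failure-probability estimate
        u = mu.abs() / (sigma + 1e-9)
        if u.min() >= 2.0:                        # 4. convergence check (U criterion)
            break
        x_new = x_pool[torch.argmin(u)].reshape(1, -1)  # 5. most informative point
        x_doe = torch.cat([x_doe, x_new])         # 6. update DOE and retrain
        y_doe = torch.cat([y_doe, g(x_new)])
    return pf_hat
```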
Learning Functions:
| Function | Formula | Interpretation |
|---|---|---|
| U | `U(x) = \|μ(x)\| / σ(x)` | Number of Kriging standard deviations separating the prediction from the limit state; smaller = higher misclassification risk |
| EER | `L = p_wse + γ√(p_wse(1−p_wse))` (safe domain); `L = p_wse` (failure domain) | Expected error reduction (larger = more informative) |
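Read as code, the two learning functions might look like the sketch below; here `p_wse` is taken to be the wrong-sign (misclassification) probability `Φ(−U(x))` and `γ` a weighting constant, both assumptions about the framework's internals.

```python
import torch
from torch.distributions import Normal

def u_function(mu, sigma):
    # U(x) = |mu(x)| / sigma(x): smaller U = more likely misclassified
    return mu.abs() / (sigma + 1e-9)

def eer_function(mu, sigma, gamma=1.0):
    # Sketch of the EER formula above; p_wse is assumed to be the
    # wrong-sign probability Phi(-U), and gamma an assumed weight.
    p_wse = Normal(0.0, 1.0).cdf(-u_function(mu, sigma))
    safe = mu > 0  # predicted safe domain (g > 0)
    return torch.where(
        safe,
        p_wse + gamma * torch.sqrt(p_wse * (1 - p_wse)),  # safe domain
        p_wse,                                            # failure domain
    )
```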
Stopping Criteria:
| Criterion | Formula | Advantage |
|---|---|---|
| U | `min U(x) >= 2` | Simple, widely used |
| BESC | `ε_K <= ε_MCS` | Adaptive, no manual threshold tuning |
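In code, the two checks could read as follows; the BESC error estimates here (`eps_k` as the aggregated misclassification probability relative to the predicted failure count, `eps_mcs` as the sampling coefficient of variation) are one plausible construction for illustration, not necessarily this framework's exact definitions.

```python
import torch
from torch.distributions import Normal

def stopping_checks(mu, sigma):
    """Illustrative stopping checks on the MCS pool predictions.
    The BESC error estimates below are assumptions for illustration."""
    n = mu.shape[0]
    u = mu.abs() / (sigma + 1e-9)

    # U criterion: every pool point classified with >= Phi(2) ~ 97.7% confidence
    u_converged = bool(u.min() >= 2.0)

    # BESC-style comparison (assumed construction):
    pf_hat = (mu <= 0).float().mean().clamp_min(1e-12)
    p_mis = Normal(0.0, 1.0).cdf(-u)                   # per-point misclassification prob.
    eps_k = p_mis.sum() / (n * pf_hat)                 # relative Kriging error (assumed)
    eps_mcs = torch.sqrt((1 - pf_hat) / (n * pf_hat))  # MCS coefficient of variation
    besc_converged = bool(eps_k <= eps_mcs)
    return u_converged, besc_converged
```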
Key Concepts:
- Series System: The system fails if ANY component fails: `P_f = P(⋃ g_i ≤ 0)`. Critical component: `argmin g_i(x)` (smallest predicted value).
- Parallel System: The system fails if ALL components fail: `P_f = P(⋂ g_i ≤ 0)`. Critical component: `argmax g_i(x)` (largest predicted value).
- Masking Effect Weight:
  - Series: `w_i(x) = ∏_{j≠i} Φ(μ_j/σ_j)` (other components safe)
  - Parallel: `w_i(x) = ∏_{j≠i} Φ(−μ_j/σ_j)` (other components fail)
- Composite Learning Function: `V_i(x) = L_i(x) × w_mask,i(x)`, selecting `(x*, k*) = argmax_{x,k} V_k(x)`, where `L_i(x)` is the component-level learning function (U or EER).
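A short sketch of these concepts in code, assuming per-component Kriging means and standard deviations stacked as `(n_points, n_components)` tensors; `learning_scores` stands in for the component-level U or EER values.

```python
import torch
from torch.distributions import Normal

def masked_selection(mu, sigma, learning_scores, system_type='series'):
    """mu, sigma, learning_scores: (n_points, n_components) tensors.
    Returns (pool index, component index) maximizing V_k(x)."""
    z = mu / (sigma + 1e-9)
    # Per-component probability that the *other* components mask point x:
    # safe for series systems, failing for parallel systems.
    p = Normal(0.0, 1.0).cdf(z if system_type == 'series' else -z)
    w_mask = p.prod(dim=1, keepdim=True) / (p + 1e-12)  # product over j != i
    v = learning_scores * w_mask                        # composite V_i(x)
    flat = int(torch.argmax(v))
    return divmod(flat, v.shape[1])                     # (x*, k*)
```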
The examples/ directory contains comprehensive test cases:
| Case | Name | Dimension | Description |
|---|---|---|---|
| 1 | Dynamic Response | 6D | Stiffness-mass system dynamic response |
| 2 | High-Frequency Oscillation | 6D | Linear combination with sine perturbation |
| 3 | Structural Stiffness | 7D | Complex geometric cross-section stiffness |
| 4 | FEM Response | 7D | Quadratic polynomial response surface |
| 5 | RC Column Deformation | 6D | Reinforced concrete column displacement |
| 6 | Four-Branch Series | 2D | Classic four-branch series system |
| 7 | High-Dimensional Linear | 15D | 15D normal variable summation |
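For orientation only, a hypothetical sketch of how a case like Case 7 might be set up with this API; the threshold and distribution parameters below are illustrative, and the actual definitions live in `examples/`.

```python
from ActiveLearning4RA.config.input_config import InputConfig, VariableConfig

# Hypothetical 15D linear limit state for illustration only;
# see examples/ for the actual case definition and parameters.
def g_linear_15d(x, threshold=20.0):
    return threshold - x.sum(dim=1)

input_config = InputConfig(variables=[
    VariableConfig(name=f'X{i+1}', type='normal', parameter_1=1.0, parameter_2=0.2)
    for i in range(15)
])
```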
Run a single case:

```bash
cd examples/component
python main.py
# Edit main() to select case number and parameters
```

Compare multiple methods:

```python
from examples.component.main import compare_methods

results = compare_methods(
    case=6,             # Four-branch series system
    init_samples=12,
    batch_size=5,
    samples_no=200000,
    max_iter=100
)
```

This will compare:
- AK-MCS (U + single)
- AK-MCS-BESC (BESC + EER + single)
- AK-BESC-BATCH (BESC + EER + batch)
- Direct MCS (reference)
And generate comparison tables showing Pf, COV, function evaluations, and speedup.
The series/parallel system examples follow a similar structure.
Benchmark on Case 6 (four-branch series, Pf ≈ 2.2e-3):
| Method | Estimated Pf | Function Calls | Speedup |
|---|---|---|---|
| Direct MCS | 2.20e-3 | 200,000 | 1.0× |
| AK-MCS (U) | 2.18e-3 | 87 | 2,299× |
| AK-MCS-BESC | 2.21e-3 | 63 | 3,175× |
| AK-BESC-BATCH (b=5) | 2.19e-3 | 52 | 3,846× |
Results may vary with random seed.
Custom Distributions:

```python
# Extend VariableConfig to support new distributions
class CustomDistribution:
    def __init__(self, param1, param2):
        self.param1 = param1
        self.param2 = param2

    def sample(self, n):
        # Your sampling logic
        pass

    def pdf(self, x):
        # Your PDF logic
        pass
```

Custom Learning Functions:

```python
def my_learning_function(mu, stddev):
"""
Define your own learning function
Args:
mu: Kriging mean prediction
stddev: Kriging standard deviation
Returns:
scores: Higher score = more informative
"""
# Example: combine U and variance
u_values = torch.abs(mu) / (stddev + 1e-9)
return (1.0 / (u_values + 1e-9)) * stddev# Automatically uses GPU if available
ak_mcs = AK_MCS(
    func=performance_function,
    input_config=input_config,
    device=torch.device('cuda')  # Force GPU
)

# Or specify CPU
ak_mcs = AK_MCS(..., device=torch.device('cpu'))
```

API Reference:

```python
class AK_MCS:
    def __init__(
        self,
        func: Callable,                # Performance function g(x)
        input_config: InputConfig,     # Input variable configuration
        stop_criterion: str = 'BESC',  # 'U' or 'BESC'
        learning_func: str = 'EER',    # 'U' or 'EER'
        seed: int = None,              # Random seed
        device=None                    # torch.device
    )

    def run(
        self,
        init_doe: torch.Tensor,           # Initial DOE samples
        max_iter: int,                    # Maximum iterations
        samples_no: int,                  # MCS pool size
        batch_size: int = 1,              # Batch size (1=single, >1=batch)
        cov_threshold: float = 0.05,      # COV convergence threshold
        mcs_samples: torch.Tensor = None  # Optional: pre-generated MCS pool
    ) -> ExactGP

    def plot_convergence(self)  # Plot Pf, COV, stopping metric
    def plot_samples(self)      # Plot sample distribution

class AK_SYS:
    def __init__(
        self,
        funcs: List[Callable],        # Component performance functions
        input_config: InputConfig,
        system_type: str = 'series',  # 'series' or 'parallel'
        stop_criterion: str = 'BESC',
        learning_func: str = 'EER',
        device=None
    )

    def run(
        self,
        init_doe: torch.Tensor,
        max_iter: int,
        samples_no: int,
        batch_size: int = 1,
        cov_threshold: float = 0.05
    ) -> List[ExactGP]  # Returns a list of component models

    def plot_convergence(self)
    def plot_samples(self)  # Plots each component separately

@dataclass
class VariableConfig:
    name: str           # Variable name
    type: str           # 'normal', 'uniform', 'log-normal', 'gumbel'
    parameter_1: float  # Mean (normal/log-normal), lower bound (uniform), location (gumbel)
    parameter_2: float  # Std (normal/log-normal), upper bound (uniform), scale (gumbel)

@dataclass
class InputConfig:
    variables: List[VariableConfig]

    def generate_doe(self, num_samples: int) -> torch.Tensor
    def generate_samples(self, num_samples: int) -> torch.Tensor
    def get_boundaries(self) -> Tuple[List[float], List[float]]
    def compute_joint_pdf(self, samples: torch.Tensor) -> torch.Tensor
    def transform_to_standard_normal(self, samples: torch.Tensor) -> torch.Tensor
```

Contributing:

Contributions are welcome! Please follow these guidelines:
- Fork the repository
- Create a feature branch (`git checkout -b feature/amazing-feature`)
- Commit your changes (`git commit -m 'Add amazing feature'`)
- Push to the branch (`git push origin feature/amazing-feature`)
- Open a Pull Request
Code Style:
- Follow PEP 8
- Add docstrings for all public functions
- Include type hints
- Write descriptive comments for complex algorithms
This project is licensed under the MIT License - see the LICENSE file for details.
Related Publications:
- Echard, B., Gayton, N., & Lemaire, M. (2011). AK-MCS: An active learning reliability method combining Kriging and Monte Carlo Simulation. Structural Safety, 33(2), 145-154.
- Echard, B., Gayton, N., Lemaire, M., & Relun, N. (2013). A combined Importance Sampling and Kriging reliability method for small failure probabilities with time-demanding numerical models. Reliability Engineering & System Safety, 111, 232-240.
- Fauriat, W., & Gayton, N. (2014). AK-SYS: An adaptation of the AK-MCS method for system reliability. Reliability Engineering & System Safety, 123, 137-144.
Q: What's the difference between the U and BESC stopping criteria?
A:
- The U criterion (`min U >= 2`) is a fixed threshold requiring every sample point to be classified with high confidence (97.7%). Simple, but can be conservative.
- BESC compares the Kriging error with the MCS statistical error, adapting automatically to the problem. More efficient, but slightly more complex.
Q: When should I use batch sampling?
A: Batch sampling (DPP) is beneficial when:
- Function evaluation is expensive and you can parallelize
- You want to reduce the number of iterations (fewer model retrainings)
- The limit state is complex with multiple critical regions
Q: How do I choose the initial DOE size?
A: Rules of thumb:
- Start with `10 × dimension` for simple problems
- Use `15-20 × dimension` for complex/nonlinear problems
- For AK-SYS, consider using a shared initial DOE across all components
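In code, using the configuration API shown above (assuming the `input_config` from the Quick Start):

```python
# Rule of thumb: scale the initial DOE size with the input dimension
dim = len(input_config.variables)
init_doe = input_config.generate_doe(num_samples=10 * dim)  # 10x for simple problems
```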
Q: Can I use this for time-dependent reliability?
A: The current version focuses on time-independent reliability. For time-dependent problems, you would need to:
- Define performance function over time domain
- Use out-crossing rate methods
- Extend the framework (contributions welcome!)
Related Projects:
- GPyTorch - Gaussian Process library (our GP backend)
- BoTorch - Bayesian Optimization library
- UQpy - Uncertainty Quantification toolkit
- OpenTURNS - Uncertainty treatment library
For questions or collaborations:
- Open an issue on GitHub
- Email: your.email@example.com
Acknowledgments:
- Thanks to the authors of the AK-MCS and AK-SYS methods for their groundbreaking work
- GPyTorch and BoTorch teams for excellent GP implementations
- The structural reliability research community