A Python implementation of constrained optimization using the active set method with simplex-based direction finding. This library solves nonlinear constrained optimization problems of the form:

```
minimize    f(x)
subject to  g(x) ≤ 0
            lower_bounds ≤ x ≤ upper_bounds
```
## Table of Contents

- [Features](#features)
- [Installation](#installation)
- [Quick Start](#quick-start)
- [Algorithm Overview](#algorithm-overview)
- [API Reference](#api-reference)
- [Examples](#examples)
- [Customization Guide](#customization-guide)
- [Performance Tips](#performance-tips)
- [Limitations](#limitations)
- [Troubleshooting](#troubleshooting)
- [References](#references)
- [Contributing](#contributing)
- [License](#license)
## Features

- ✅ **Active Set Method**: Efficiently handles equality and inequality constraints
- ✅ **Simplex-based Direction Finding**: Robust search direction computation
- ✅ **Bound Constraints**: Built-in support for variable bounds
- ✅ **Line Search**: Adaptive step size optimization
- ✅ **Comprehensive Logging**: Detailed iteration history and convergence tracking
- ✅ **Type Hints**: Full type annotations for better IDE support
- ✅ **Example Problems**: Ready-to-run test cases included
## Installation

Install the required dependencies:

```bash
pip install numpy pandas
```

Then:

1. Download the `constrained_optimization.py` file.
2. Import the module in your Python project:

```python
from constrained_optimization import constrained_optimization
```
## Quick Start

Here's a simple example to get you started:
```python
import numpy as np
from constrained_optimization import constrained_optimization

def my_optimization_problem(x):
    """
    Minimize: f(x) = (x1 - 3)² + (x2 - 4)²
    Subject to: x1 + x2 ≤ 5
    """
    objective = (x[0] - 3)**2 + (x[1] - 4)**2
    constraints = np.array([x[0] + x[1] - 5])  # g(x) ≤ 0 format
    return [objective, constraints]

# Run optimization
optimal_point, optimal_value, iterations = constrained_optimization(my_optimization_problem)

print(f"Optimal solution: {optimal_point}")
print(f"Optimal value: {optimal_value}")
```
## Algorithm Overview

The implementation uses the Active Set Method, which is particularly effective for problems with:

- Smooth objective functions
- Well-behaved constraints
- Moderate problem sizes (up to a few hundred variables)
The algorithm proceeds as follows:

1. **Initialize** at the starting point, clipped to the bound constraints
2. **Identify active constraints**: those currently limiting movement
3. **Compute the search direction** using the simplex method on the active-constraint subspace
4. **Perform a line search** to find the optimal step size
5. **Update the point** and check the convergence criteria
6. **Repeat** until convergence
The algorithm stops when any of the following conditions is met:
- Direction norm < tolerance (no improving direction found)
- Objective change < tolerance for 2 consecutive iterations
- Maximum iterations exceeded
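As a concrete illustration of how these three tests combine, here is a minimal sketch of a stopping check. The function and variable names (`should_stop`, `no_progress_count`, etc.) are illustrative, not the library's actual internals:

```python
import numpy as np

def should_stop(direction, f_prev, f_curr, iteration, no_progress_count,
                convergence_tolerance=1e-4, max_iterations=1000):
    """Illustrative stopping test combining the three criteria above."""
    # 1. No improving direction: the search direction is numerically zero.
    if np.linalg.norm(direction) < convergence_tolerance:
        return True, no_progress_count
    # 2. Objective stagnation for two consecutive iterations.
    if abs(f_curr - f_prev) < convergence_tolerance:
        no_progress_count += 1
        if no_progress_count >= 2:
            return True, no_progress_count
    else:
        no_progress_count = 0
    # 3. Iteration budget exhausted.
    if iteration >= max_iterations:
        return True, no_progress_count
    return False, no_progress_count
```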
## API Reference

```python
constrained_optimization(objective_function: Callable) -> Tuple[np.ndarray, float, int]
```

**Parameters:**

- `objective_function`: function returning `[f_value, constraint_values]`

**Returns:**

- `optimal_point`: solution vector (`np.ndarray`)
- `optimal_value`: optimal objective value (`float`)
- `iterations`: number of iterations (`int`)

**Function signature for `objective_function`:**

```python
def objective_function(x: np.ndarray) -> List[Union[float, np.ndarray]]:
    f = ...              # compute objective value
    g = np.array([...])  # compute constraint values (g ≤ 0)
    return [f, g]
```
The following parameters can be modified within the `constrained_optimization` function:

| Parameter | Default | Description |
|---|---|---|
| `initial_point` | `[0.5, 0.5]` | Starting guess for optimization |
| `lower_bounds` | `[0.0, 0.0]` | Lower bounds for variables |
| `upper_bounds` | `[5.0, 5.0]` | Upper bounds for variables |
| `convergence_tolerance` | `1e-4` | Main convergence criterion |
| `max_iterations` | `1000` | Maximum allowed iterations |
| `constraint_tolerance` | `2e-3` | Tolerance for constraint activity |
## Examples

### Example 1: Quadratic problem

```python
import numpy as np
from constrained_optimization import constrained_optimization

def quadratic_problem(x):
    """Minimize (x1-3)² + (x2-4)² subject to x1 + x2 ≤ 5."""
    f = (x[0] - 3)**2 + (x[1] - 4)**2
    g = np.array([x[0] + x[1] - 5])
    return [f, g]

x_opt, f_opt, iters = constrained_optimization(quadratic_problem)
# Expected result: x* ≈ [2.0, 3.0], f* ≈ 2.0
```
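The expected result can be verified by hand via the KKT stationarity condition at x* = (2, 3); a quick standalone check, independent of the library:

```python
import numpy as np

x_star = np.array([2.0, 3.0])
grad_f = np.array([2 * (x_star[0] - 3), 2 * (x_star[1] - 4)])  # ∇f = [-2, -2]
grad_g = np.array([1.0, 1.0])                                  # ∇g of x1 + x2 - 5

# Stationarity: ∇f + λ∇g = 0 with λ ≥ 0 on the active constraint.
lam = -grad_f[0] / grad_g[0]                                   # λ = 2
assert lam >= 0 and np.allclose(grad_f + lam * grad_g, 0.0)
print(f"KKT multiplier λ = {lam}")
```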
### Example 2: Constrained Rosenbrock

```python
def rosenbrock_constrained(x):
    """
    Minimize: 100*(x2 - x1²)² + (1 - x1)²
    Subject to: x1² + x2² ≤ 2, x1 + x2 ≤ 1
    """
    f = 100 * (x[1] - x[0]**2)**2 + (1 - x[0])**2
    g = np.array([
        x[0]**2 + x[1]**2 - 2,  # circle constraint
        x[0] + x[1] - 1         # linear constraint
    ])
    return [f, g]

x_opt, f_opt, iters = constrained_optimization(rosenbrock_constrained)
```
### Example 3: Portfolio optimization

```python
def portfolio_optimization(x):
    """
    Minimize portfolio variance subject to a return constraint.
    x: portfolio weights
    """
    # Covariance matrix (example)
    C = np.array([[0.1, 0.02], [0.02, 0.08]])
    returns = np.array([0.12, 0.10])
    target_return = 0.11

    # Minimize variance: (1/2) xᵀ C x
    f = 0.5 * np.dot(x, np.dot(C, x))

    # Constraints: return ≥ target; weights sum to 1.
    # The budget equality is expressed as two inequalities (≤ 0 and ≥ 0).
    g = np.array([
        target_return - np.dot(returns, x),  # return constraint
        np.sum(x) - 1,                       # budget: sum(x) - 1 ≤ 0
        1 - np.sum(x)                        # budget: 1 - sum(x) ≤ 0
    ])
    return [f, g]
```
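Run it like the other examples; note that `f` returns half the variance, so the variance itself is `2 * f_opt`:

```python
x_opt, f_opt, iters = constrained_optimization(portfolio_optimization)
print(f"Weights: {x_opt}, portfolio variance: {2 * f_opt}")
```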
## Customization Guide

To solve your specific problem, you need to customize:

- **Problem dimensions**: update `initial_point`, `lower_bounds`, `upper_bounds`
- **Gradient functions**: modify the `objective_gradient()` and `constraint_gradient()` functions
- **Convergence criteria**: adjust the tolerance parameters
For best performance, provide analytical gradients:

```python
def objective_gradient(x):
    """Return the gradient of f(x)."""
    return np.array([df_dx1, df_dx2, ...])  # one entry per variable

def constraint_gradient(x):
    """Return the Jacobian of g(x)."""
    return np.array([
        [dg1_dx1, dg1_dx2, ...],  # gradient of constraint 1
        [dg2_dx1, dg2_dx2, ...],  # gradient of constraint 2
        # ...
    ])
```
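For instance, for the Quick Start problem above (f(x) = (x1 − 3)² + (x2 − 4)² with g(x) = x1 + x2 − 5), the analytical gradients are:

```python
import numpy as np

def objective_gradient(x):
    """Gradient of f(x) = (x1 - 3)² + (x2 - 4)²."""
    return np.array([2 * (x[0] - 3), 2 * (x[1] - 4)])

def constraint_gradient(x):
    """Jacobian of g(x) = [x1 + x2 - 5]: one row per constraint."""
    return np.array([[1.0, 1.0]])
```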
| Constraint Type | How to Handle |
|---|---|
| `g(x) ≤ 0` | Return `g(x)` directly |
| `g(x) ≥ 0` | Return `-g(x)` |
| `g(x) = 0` | Add both `g(x) ≤ 0` and `-g(x) ≤ 0` |
| `a ≤ x ≤ b` | Use `lower_bounds` and `upper_bounds` |
## Performance Tips

1. **Scale your problem**: ensure variables and constraints are of similar magnitude (see the sketch after this list)
2. **Provide a good starting point**: choose `initial_point` close to the expected solution
3. **Use analytical gradients**: much faster than numerical differentiation
4. **Adjust tolerances**: tighter tolerances give more accuracy but slower convergence
5. **Monitor iterations**: check the iteration table for convergence patterns
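To illustrate the first tip, a badly scaled problem can be wrapped so the optimizer works on variables of comparable magnitude. A minimal sketch with made-up scale factors (bounds would need rescaling too):

```python
import numpy as np

# Suppose the natural variables are x1 ~ 1000 and x2 ~ 0.001.
# Optimize over z = x / scale so both components of z are O(1).
scale = np.array([1000.0, 0.001])

def scaled_problem(z):
    x = scale * z  # map back to original units
    f = (x[0] - 3000.0)**2 / 1e6 + (x[1] - 0.004)**2 / 1e-6  # terms normalized to O(1)
    g = np.array([x[0] * x[1] - 20.0])                       # constraint in original units
    return [f, g]
```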
| Problem Size | Variables | Constraints | Expected Time |
|---|---|---|---|
| Small | 2-10 | 1-5 | < 1 second |
| Medium | 10-100 | 5-20 | 1-10 seconds |
| Large | 100-500 | 20-50 | 10-60 seconds |
## Limitations

- **Problem size**: best suited for problems with < 500 variables
- **Constraint types**: designed for smooth, continuously differentiable constraints
- **Global optimization**: finds local optima only (not global)
- **Sparse problems**: no specialized handling for sparse constraint matrices
- May struggle with highly nonlinear or poorly scaled problems
- Requires manual gradient-function customization for each problem
- No automatic differentiation support
## Troubleshooting

| Problem | Symptoms | Solution |
|---|---|---|
| Slow convergence | Many iterations, small progress | Improve starting point, check scaling |
| Infeasible solution | Constraints violated at solution | Check constraint formulation (≤ 0 format) |
| Unbounded message | Algorithm reports unbounded | Add missing bound constraints |
| Oscillating objective | Objective bounces up and down | Reduce step size, check gradients |
Debugging tips:

1. **Check constraint formulation**: ensure all constraints use the `g(x) ≤ 0` format
2. **Verify gradients**: test gradients against finite differences (see the sketch below)
3. **Monitor the iteration table**: look for patterns in convergence
4. **Start simple**: test with known analytical solutions first
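A minimal finite-difference check for tip 2, usable with any analytical gradient you write (here, the Quick Start objective and its gradient):

```python
import numpy as np

def check_gradient(f, grad, x, eps=1e-6, tol=1e-4):
    """Compare an analytical gradient against central finite differences."""
    fd = np.zeros_like(x, dtype=float)
    for i in range(len(x)):
        step = np.zeros_like(x, dtype=float)
        step[i] = eps
        fd[i] = (f(x + step) - f(x - step)) / (2 * eps)
    ok = np.allclose(grad(x), fd, atol=tol)
    print("analytical:", grad(x), "finite-diff:", fd, "match:", ok)
    return ok

f = lambda x: (x[0] - 3)**2 + (x[1] - 4)**2
grad = lambda x: np.array([2 * (x[0] - 3), 2 * (x[1] - 4)])
check_gradient(f, grad, np.array([1.0, 2.0]))
```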
## References

This implementation is based on:

- Nocedal, J. & Wright, S. (2006). *Numerical Optimization*. Springer.
- Fletcher, R. (1987). *Practical Methods of Optimization*. Wiley.
- Gill, P. E., Murray, W. & Wright, M. H. (1981). *Practical Optimization*. Academic Press.
## Contributing

Contributions are welcome! Areas for improvement:

- Add automatic differentiation support
- Implement sparse matrix handling
- Add more convergence criteria options
- Create visualization tools for convergence
- Add support for more constraint types
Development setup:

```bash
git clone <repository>
cd constrained-optimization
pip install -r requirements.txt
python -m pytest tests/
```
## License

This project is licensed under the MIT License - see the LICENSE file for details.
```python
# Minimal working example
import numpy as np
from constrained_optimization import constrained_optimization

def my_problem(x):
    f = (x[0] - 1)**2 + (x[1] - 2)**2  # minimize
    g = np.array([x[0] + x[1] - 3])    # subject to x1 + x2 ≤ 3
    return [f, g]

x_opt, f_opt, iters = constrained_optimization(my_problem)
print(f"Solution: {x_opt}, Value: {f_opt}")
```
Happy Optimizing! 🎯