Constrained Optimization using Active Set Method

A Python implementation of constrained optimization using the Zoutendijk algorithm, an active set method with simplex-based direction finding. This library solves nonlinear constrained optimization problems of the form:

minimize    f(x)
subject to  g(x) ≤ 0
            lower_bounds ≤ x ≤ upper_bounds

Table of Contents

  • Features
  • Installation
  • Quick Start
  • Algorithm Overview
  • API Reference
  • Examples
  • Customization Guide
  • Performance Tips
  • Limitations
  • Troubleshooting
  • Algorithm References
  • Contributing
  • License
  • Quick Reference Card

Features

Active Set Method: Efficiently handles equality and inequality constraints
Simplex-based Direction Finding: Robust search direction computation
Bound Constraints: Built-in support for variable bounds
Line Search: Adaptive step size optimization
Comprehensive Logging: Detailed iteration history and convergence tracking
Type Hints: Full type annotations for better IDE support
Example Problems: Ready-to-run test cases included

Installation

Prerequisites

pip install numpy pandas

Setup

  1. Download the constrained_optimization.py file
  2. Import the module in your Python project:
from constrained_optimization import constrained_optimization

Quick Start

Here's a simple example to get you started:

import numpy as np
from constrained_optimization import constrained_optimization

def my_optimization_problem(x):
    """
    Minimize: f(x) = (x1 - 3)² + (x2 - 4)²
    Subject to: x1 + x2 ≤ 5
    """
    objective = (x[0] - 3)**2 + (x[1] - 4)**2
    constraints = np.array([x[0] + x[1] - 5])  # g(x) ≤ 0 format
    return [objective, constraints]

# Run optimization
optimal_point, optimal_value, iterations = constrained_optimization(my_optimization_problem)

print(f"Optimal solution: {optimal_point}")
print(f"Optimal value: {optimal_value}")

Algorithm Overview

The implementation uses the Active Set Method, which is particularly effective for problems with:

  • Smooth objective functions
  • Well-behaved constraints
  • Moderate problem sizes (up to a few hundred variables)

Algorithm Steps

  1. Initialize at starting point with bound constraints
  2. Identify Active Constraints that are currently limiting movement
  3. Compute Search Direction using simplex method on active constraint subspace
  4. Perform Line Search to find optimal step size
  5. Update Point and check convergence criteria
  6. Repeat until convergence (a runnable sketch of this loop follows)
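
The sketch below walks the Quick Start problem through these steps. It is an illustration rather than the library's actual code: for brevity, the simplex direction subproblem is swapped for a direct projection of the negative gradient onto the one active linear constraint, but the flow of the loop is the same.

import numpy as np

grad_f = lambda x: np.array([2 * (x[0] - 3), 2 * (x[1] - 4)])  # gradient of f
a, b = np.array([1.0, 1.0]), 5.0   # the constraint a·x - b ≤ 0

x, tol = np.array([0.5, 0.5]), 1e-8
for k in range(100):
    d = -grad_f(x)                          # unconstrained descent direction
    if a @ x - b > -1e-9 and a @ d > 0:     # step 2: constraint is active
        d -= (a @ d) / (a @ a) * a          # step 3: project onto its boundary
    if np.linalg.norm(d) < tol:             # no improving direction: converged
        break
    alpha = 0.5                             # exact line-search step for this quadratic
    if a @ d > 1e-12:                       # step 4: never cross the boundary
        alpha = min(alpha, (b - a @ x) / (a @ d))
    x = x + alpha * d                       # step 5: update the iterate
print(x)  # converges to approximately [2. 3.]

Two iterations suffice here: the first step lands on the constraint boundary, and the second slides along it to the optimum.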

Convergence Criteria

The algorithm stops when any of the following conditions is met (sketched in code after the list):

  • Direction norm < tolerance (no improving direction found)
  • Objective change < tolerance for 2 consecutive iterations
  • Maximum iterations exceeded
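
Schematically, the three tests combine as below; the variable names are illustrative stand-ins, since the real checks live inside constrained_optimization:

import numpy as np

def converged(direction, f_history, iteration,
              tolerance=1e-4, max_iterations=1000):
    """Illustrative combination of the three stopping tests."""
    no_direction = np.linalg.norm(direction) < tolerance
    stalled = (len(f_history) >= 3 and
               abs(f_history[-1] - f_history[-2]) < tolerance and
               abs(f_history[-2] - f_history[-3]) < tolerance)
    return no_direction or stalled or iteration >= max_iterations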

API Reference

Main Function

constrained_optimization(objective_function: Callable) -> Tuple[np.ndarray, float, int]

Parameters:

  • objective_function: Function returning [f_value, constraint_values]

Returns:

  • optimal_point: Solution vector (np.ndarray)
  • optimal_value: Optimal objective value (float)
  • iterations: Number of iterations (int)

Function Signature for objective_function:

from typing import List, Union
import numpy as np

def objective_function(x: np.ndarray) -> List[Union[float, np.ndarray]]:
    f = ...  # compute the objective value (a float)
    g = np.array([...])  # compute the constraint values, in g(x) ≤ 0 form
    return [f, g]

Configuration Parameters

The following parameters can be modified within the constrained_optimization function (an example edit follows the table):

Parameter               Default      Description
initial_point           [0.5, 0.5]   Starting guess for optimization
lower_bounds            [0.0, 0.0]   Lower bounds for variables
upper_bounds            [5.0, 5.0]   Upper bounds for variables
convergence_tolerance   1e-4         Main convergence criterion
max_iterations          1000         Maximum allowed iterations
constraint_tolerance    2e-3         Tolerance for constraint activity
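
For example, assuming the defaults are plain assignments near the top of constrained_optimization (verify against the actual source before editing), adapting the solver to a three-variable problem might look like this:

import numpy as np

# Hypothetical edit inside constrained_optimization; the names come from
# the table above, but check how they are actually set in the source.
initial_point = np.array([1.0, 1.0, 1.0])    # three variables now
lower_bounds = np.array([0.0, 0.0, 0.0])
upper_bounds = np.array([10.0, 10.0, 10.0])
convergence_tolerance = 1e-6                 # tighter stopping test
max_iterations = 5000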

Examples

Example 1: Quadratic Function with Linear Constraint

import numpy as np
from constrained_optimization import constrained_optimization

def quadratic_problem(x):
    """Minimize (x1-3)² + (x2-4)² subject to x1 + x2 ≤ 5"""
    f = (x[0] - 3)**2 + (x[1] - 4)**2
    g = np.array([x[0] + x[1] - 5])
    return [f, g]

x_opt, f_opt, iters = constrained_optimization(quadratic_problem)
# Expected result: x* ≈ [2.0, 3.0], f* ≈ 2.0

Example 2: Constrained Rosenbrock Function

import numpy as np
from constrained_optimization import constrained_optimization

def rosenbrock_constrained(x):
    """
    Minimize: 100*(x2-x1²)² + (1-x1)²
    Subject to: x1² + x2² ≤ 2, x1 + x2 ≤ 1
    """
    f = 100 * (x[1] - x[0]**2)**2 + (1 - x[0])**2
    g = np.array([
        x[0]**2 + x[1]**2 - 2,  # circle constraint
        x[0] + x[1] - 1         # linear constraint
    ])
    return [f, g]

x_opt, f_opt, iters = constrained_optimization(rosenbrock_constrained)

Example 3: Portfolio Optimization

import numpy as np
from constrained_optimization import constrained_optimization

def portfolio_optimization(x):
    """
    Minimize portfolio variance subject to return constraint
    x: portfolio weights
    """
    # Covariance matrix (example)
    C = np.array([[0.1, 0.02], [0.02, 0.08]])
    returns = np.array([0.12, 0.10])
    target_return = 0.11
    
    # Minimize the portfolio variance 0.5·xᵀCx (the 0.5 does not change the minimizer)
    f = 0.5 * np.dot(x, np.dot(C, x))
    
    # Constraints: return ≥ target, and weights sum to exactly 1
    # (the equality is encoded as a pair of ≤ 0 inequalities)
    g = np.array([
        target_return - np.dot(returns, x),  # return constraint (≥ flipped)
        np.sum(x) - 1,                       # budget: sum(x) ≤ 1
        1 - np.sum(x)                        # budget: sum(x) ≥ 1, flipped
    ])
    return [f, g]
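
As with the other examples, the problem function is passed straight to the solver:

x_opt, f_opt, iters = constrained_optimization(portfolio_optimization)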

Customization Guide

Modifying Problem Parameters

To solve your specific problem, you need to customize:

  1. Problem dimensions: Update initial_point, lower_bounds, upper_bounds
  2. Gradient functions: Modify objective_gradient() and constraint_gradient() functions
  3. Convergence criteria: Adjust tolerance parameters

Custom Gradient Functions

For best performance, provide analytical gradients:

def objective_gradient(x):
    """Return gradient of f(x)"""
    return np.array([df_dx1, df_dx2, ...])

def constraint_gradient(x):
    """Return Jacobian of g(x)"""
    return np.array([
        [dg1_dx1, dg1_dx2, ...],  # gradient of constraint 1
        [dg2_dx1, dg2_dx2, ...],  # gradient of constraint 2
        # ...
    ])

Handling Different Constraint Types

Constraint Type   How to Handle
g(x) ≤ 0          Return g(x) directly
g(x) ≥ 0          Return -g(x)
g(x) = 0          Add both g(x) ≤ 0 and -g(x) ≤ 0
a ≤ x ≤ b         Use lower_bounds and upper_bounds
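
For example, a made-up problem with the constraints x1·x2 ≥ 1 and x1 + x2 = 2 would be encoded as:

import numpy as np

def mixed_constraints(x):
    f = x[0]**2 + x[1]**2
    g = np.array([
        1 - x[0] * x[1],   # x1·x2 ≥ 1, returned as -g(x) ≤ 0
        x[0] + x[1] - 2,   # x1 + x2 = 2, first half: ≤ 0
        2 - x[0] - x[1],   # x1 + x2 = 2, second half, flipped
    ])
    return [f, g]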

Performance Tips

🚀 Optimization Guidelines

  1. Scale your problem: Ensure variables and constraints are of similar magnitude (see the rescaling sketch after this list)
  2. Provide good starting point: Choose initial_point close to expected solution
  3. Use analytical gradients: Much faster than numerical differentiation
  4. Adjust tolerances: Tighter tolerances = more accuracy but slower convergence
  5. Monitor iterations: Check the iteration table for convergence patterns
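
As an illustration of tip 1, suppose x1 is naturally of order 10³ and x2 of order 10⁻²; the magnitudes here are invented, but optimizing in rescaled units keeps everything near 1:

import numpy as np

scale = np.array([1e3, 1e-2])   # assumed characteristic magnitudes

def scaled_problem(z):
    x = scale * z                             # map back to original units
    f = (x[0] - 2000.0)**2 / 1e6 + (x[1] - 0.03)**2 * 1e4  # both terms O(1)
    g = np.array([x[0] * x[1] / 50.0 - 1.0])  # x1·x2 ≤ 50, normalized to O(1)
    return [f, g]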

📊 Typical Performance

Problem Size   Variables   Constraints   Expected Time
Small          2-10        1-5           < 1 second
Medium         10-100      5-20          1-10 seconds
Large          100-500     20-50         10-60 seconds

Limitations

⚠️ Current Limitations:

  • Problem size: Best suited for problems with < 500 variables
  • Constraint types: Designed for smooth, continuously differentiable constraints
  • Global optimization: Finds local optima only (not global)
  • Sparse problems: No specialized handling for sparse constraint matrices

⚠️ Known Issues:

  • May struggle with highly nonlinear or poorly scaled problems
  • Requires manual gradient function customization for each problem
  • No automatic differentiation support

Troubleshooting

Common Issues and Solutions

Problem                 Symptoms                           Solution
Slow convergence        Many iterations, small progress    Improve starting point, check scaling
Infeasible solution     Constraints violated at solution   Check constraint formulation (≤ 0 format)
Unbounded message       Algorithm reports unbounded        Add missing bound constraints
Oscillating objective   Objective bounces up/down          Reduce step size, check gradients

Debugging Tips

  1. Check constraint formulation: Ensure all constraints use g(x) ≤ 0 format
  2. Verify gradients: Test analytical gradients against finite differences (a helper sketch follows this list)
  3. Monitor iteration table: Look for patterns in convergence
  4. Start simple: Test with known analytical solutions first
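
A small helper for tip 2, comparing an analytical gradient against a central finite-difference estimate (shown on the Quick Start objective):

import numpy as np

def check_gradient(func, grad, x, h=1e-6):
    """Return the largest difference between grad(x) and finite differences."""
    fd = np.zeros_like(x, dtype=float)
    for i in range(len(x)):
        e = np.zeros_like(x, dtype=float)
        e[i] = h
        fd[i] = (func(x + e) - func(x - e)) / (2 * h)
    return np.max(np.abs(fd - grad(x)))

f = lambda x: (x[0] - 3)**2 + (x[1] - 4)**2
df = lambda x: np.array([2 * (x[0] - 3), 2 * (x[1] - 4)])
print(check_gradient(f, df, np.array([0.5, 0.5])))  # near zero if df is correct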

Algorithm References

This implementation is based on:

  1. Nocedal, J. & Wright, S. (2006). Numerical Optimization. Springer.
  2. Fletcher, R. (1987). Practical Methods of Optimization. Wiley.
  3. Gill, P.E., Murray, W. & Wright, M.H. (1981). Practical Optimization. Academic Press.

Contributing

Contributions are welcome! Areas for improvement:

  • Add automatic differentiation support
  • Implement sparse matrix handling
  • Add more convergence criteria options
  • Create visualization tools for convergence
  • Add support for more constraint types

Development Setup

git clone <repository>
cd constrained-optimization
pip install -r requirements.txt
python -m pytest tests/

License

This project is licensed under the MIT License - see the LICENSE file for details.


Quick Reference Card

# Minimal working example
import numpy as np
from constrained_optimization import constrained_optimization

def my_problem(x):
    f = (x[0] - 1)**2 + (x[1] - 2)**2  # minimize
    g = np.array([x[0] + x[1] - 3])    # subject to x1+x2 ≤ 3
    return [f, g]

x_opt, f_opt, iters = constrained_optimization(my_problem)
print(f"Solution: {x_opt}, Value: {f_opt}")

Happy Optimizing! 🎯
