Problem in eval() #74
Description
Hello,
Reporting a minor evaluation bug: it looks like Points that are not associated with function evaluations might not be correctly evaluable after solving the PEP. Example below (essentially the exact line search example, where I try to evaluate a worst-case dx after solving the PEP).
--------------------------------------------------------- Main file:

from PEPit import PEP
from PEPit.functions import SmoothStronglyConvexFunction
from PEPit.primitive_steps import exact_linesearch_step
from PEPit.primitive_steps import inexact_gradient_step


def wc_inexact_gradient_exact_line_search(L, mu, epsilon, n, verbose=1):
    # Instantiate PEP
    problem = PEP()

    # Declare a strongly convex smooth function
    func = problem.declare_function(SmoothStronglyConvexFunction, mu=mu, L=L)

    # Start by defining its unique optimal point xs = x_* and corresponding function value fs = f_*
    xs = func.stationary_point()
    fs = func(xs)

    # Then define the starting point x0 of the algorithm
    x0 = problem.set_initial_point()

    # Set the initial constraint, that is, the distance between f0 and f_*
    problem.set_initial_condition(func(x0) - fs <= 1)

    # Run n steps of the inexact gradient method with exact line search (ELS)
    x = x0
    for i in range(n):
        _, dx, _ = inexact_gradient_step(x, func, gamma=0, epsilon=epsilon, notion='relative')
        x, gx, fx = exact_linesearch_step(x, func, [dx])

    # Set the performance metric to the function value accuracy
    problem.set_performance_metric(func(x) - fs)

    # Solve the PEP
    pepit_verbose = max(verbose, 0)
    pepit_tau = problem.solve(verbose=pepit_verbose)

    # Compute theoretical guarantee (for comparison)
    Leps = (1 + epsilon) * L
    meps = (1 - epsilon) * mu
    theoretical_tau = ((Leps - meps) / (Leps + meps)) ** (2 * n)

    # Print conclusion if required
    if verbose != -1:
        print('*** Example file: worst-case performance of inexact gradient descent with exact linesearch ***')
        print('\tPEPit guarantee:\t f(x_n)-f_* <= {:.6} (f(x_0)-f_*)'.format(pepit_tau))
        print('\tTheoretical guarantee:\t f(x_n)-f_* <= {:.6} (f(x_0)-f_*)'.format(theoretical_tau))

    # Return the worst-case guarantee of the evaluated method (and the reference theoretical value)
    return pepit_tau, theoretical_tau, dx.eval()

--------------------------------------------------------- Calling the function:
L = 1
mu = .1
epsilon = .5
n = 1

wc_inexact_gradient_exact_line_search(L, mu, epsilon, n, verbose=1)

--------------------------------------------------------- Output + Error
(PEPit) Setting up the problem: size of the main PSD matrix: 6x6
(PEPit) Setting up the problem: performance measure is minimum of 1 element(s)
(PEPit) Setting up the problem: Adding initial conditions and general constraints ...
(PEPit) Setting up the problem: initial conditions and general constraints (1 constraint(s) added)
(PEPit) Setting up the problem: interpolation conditions for 1 function(s)
function 1 : Adding 9 scalar constraint(s) ...
function 1 : 9 scalar constraint(s) added
(PEPit) Compiling SDP
(PEPit) Calling SDP solver
(PEPit) Solver status: optimal (solver: MOSEK); optimal value: 0.8751298780724822
(PEPit) Postprocessing: solver's output is not entirely feasible (smallest eigenvalue of the Gram matrix is: -1.05e-08 < 0).
Small deviation from 0 may simply be due to numerical error. Big ones should be deeply investigated.
In any case, from now the provided values of parameters are based on the projection of the Gram matrix onto the cone of symmetric semi-definite matrix.
*** Example file: worst-case performance of inexact gradient descent with exact linesearch ***
	PEPit guarantee:	 f(x_n)-f_* <= 0.87513 (f(x_0)-f_*)
	Theoretical guarantee:	 f(x_n)-f_* <= 0.87513 (f(x_0)-f_*)
ValueError                                Traceback (most recent call last)
/tmp/ipykernel_265884/2910429637.py in <module>
4 n = 1
5
----> 6 wc_inexact_gradient_exact_line_search(L, mu, epsilon, n, verbose=1)
/tmp/ipykernel_265884/1085922103.py in wc_inexact_gradient_exact_line_search(L, mu, epsilon, n, verbose)
48
49 # Return the worst-case guarantee of the evaluated method (and the reference theoretical value)
---> 50 return pepit_tau, theoretical_tau, dx.eval()
51
52
~/anaconda3/lib/python3.9/site-packages/PEPit/point.py in eval(self)
268 # If leaf, the PEP would have filled the attribute at the end of the solve.
269 if self._is_leaf:
--> 270 raise ValueError("The PEP must be solved to evaluate Points!")
271 # If linear combination, combine the values of the leaf, and store the result before returning it.
272 else:
ValueError: The PEP must be solved to evaluate Points!
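For context, here is a minimal, self-contained sketch (illustrative only, not PEPit's actual implementation; the class below just mirrors the logic the traceback points at) of why this raises: a leaf Point whose value the solver never filled in (like dx here, created inside inexact_gradient_step) cannot be evaluated, while linear combinations of known leaves evaluate fine.

```python
# Illustrative sketch of the leaf vs. linear-combination eval() logic
# shown in the traceback. NOT PEPit's actual code.

class Point:
    def __init__(self, is_leaf=True, decomposition=None):
        # decomposition: list of (coefficient, Point) pairs for non-leaf points
        self._is_leaf = is_leaf
        self.decomposition = decomposition or []
        self._value = None  # supposed to be filled by the PEP at the end of solve()

    def eval(self):
        if self._value is not None:
            return self._value
        # If leaf, the PEP should have filled the attribute at the end of the solve.
        if self._is_leaf:
            raise ValueError("The PEP must be solved to evaluate Points!")
        # If linear combination, combine the values of the leaves and cache the result.
        self._value = sum(coef * point.eval() for coef, point in self.decomposition)
        return self._value


# A leaf the solver knows about (value filled after solve):
x0 = Point()
x0._value = 2.0

# A linear combination of known leaves evaluates fine:
x1 = Point(is_leaf=False, decomposition=[(0.5, x0)])
print(x1.eval())  # 1.0

# A leaf the solver never filled (like dx in the example) raises on eval():
dx = Point()
try:
    dx.eval()
except ValueError as e:
    print(e)  # The PEP must be solved to evaluate Points!
```

So it looks like the leaf created for dx inside the primitive step is never registered among the points whose values get filled after solve().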