Exponential Cone #29
Changes from 25 commits
New file, `@@ -0,0 +1,37 @@`:

```julia
function _bisection(f, left, right; max_iters=10000, tol=1e-10)
    # STOP CODES:
    # 0: Success (floating point limit or exactly 0)
    # 1: Max iters but within tol
    # 2: Failure

    for _ in 1:max_iters
        f_left, f_right = f(left), f(right)
        sign(f_left) == sign(f_right) && error("Interval became non-bracketing.")

        mid = (left + right) / 2
        if left == mid || right == mid
            return mid, 0
        end

        f_mid = f(mid)
        if f_mid == 0
            return mid, 0
        end
        if sign(f_mid) == sign(f_left)
            left = mid
            continue
        end
        if sign(f_mid) == sign(f_right)
            right = mid
            continue
        end
    end

    mid = (left + right) / 2
    if abs(f(mid)) < tol
        return mid, 1
    end

    return nothing, 2
end
```

Review thread on `_bisection`:

Reviewer: The code in this function will yield many iterations or not converge because of the strict inequality check; one should use:

Author: Originally designed to hit the floating point limit (since the intervals are relatively small), but I agree that this level of precision probably yields too many iterations. Will change to

Reviewer: Yes, but here you ask for strict equality, right?

Author: Yes, so the algorithm stops when the boundary is indistinguishable from the midpoint to floating point precision. E.g.,
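The termination argument discussed in the thread above (stop once the midpoint is bitwise equal to an endpoint) can be sketched in Python. This `bisect` is an illustrative stand-in for the Julia `_bisection`, not the package's code, but it mirrors the same stop codes:

```python
def bisect(f, left, right, max_iters=10000, tol=1e-10):
    """Bisection with the stop codes from the review:
    0 = success (exact zero or floating point limit),
    1 = max_iters reached but |f(mid)| < tol,
    2 = failure."""
    for _ in range(max_iters):
        f_left, f_right = f(left), f(right)
        if (f_left > 0) == (f_right > 0):
            raise ValueError("Interval became non-bracketing.")
        mid = (left + right) / 2
        # Stop when the midpoint is indistinguishable from an endpoint
        # in floating point; the bracket can no longer shrink.
        if mid == left or mid == right:
            return mid, 0
        f_mid = f(mid)
        if f_mid == 0:
            return mid, 0
        if (f_mid > 0) == (f_left > 0):
            left = mid
        else:
            right = mid
    mid = (left + right) / 2
    return (mid, 1) if abs(f(mid)) < tol else (None, 2)

# Find sqrt(2) as the root of x^2 - 2 on the bracket [1, 2].
root, code = bisect(lambda x: x * x - 2.0, 1.0, 2.0)
```

For a bracket like `[1, 2]` in double precision, the `mid == left or mid == right` test fires after a few dozen halvings (one bit of the mantissa per iteration), so the 10000-iteration cap is far from binding in the typical case.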
Test file, `@@ -1,3 +1,4 @@`:

```julia
using JuMP, SCS
const DD = MOD.DefaultDistance()

@testset "Test projections distance on vector sets" begin
```

`@@ -76,3 +77,56 @@`:

```julia
    output_joint = MOD.projection_gradient_on_set(DD, [v1, v2], [c1, c2])
    @test output_joint ≈ BlockDiagonal([output_1, output_2])
end


@testset "Exponential Cone Projections" begin
    function det_case_exp_cone(v; dual=false)
        v = dual ? -v : v
        if MOD.distance_to_set(DD, v, MOI.ExponentialCone()) < 1e-8
            return 1
        elseif MOD.distance_to_set(DD, -v, MOI.DualExponentialCone()) < 1e-8
            return 2
        elseif v[1] <= 0 && v[2] <= 0  # TODO: threshold here??
            return 3
        else
            return 4
        end
    end

    function _test_proj_exp_cone_help(x, tol; dual=false)
        cone = dual ? MOI.DualExponentialCone() : MOI.ExponentialCone()
        model = Model()
        set_optimizer(model, optimizer_with_attributes(
            SCS.Optimizer, "eps" => 1e-10, "max_iters" => 10000, "verbose" => 0))
        @variable(model, z[1:3])
        @variable(model, t)
        @objective(model, Min, t)
        @constraint(model, sum((x - z).^2) <= t)
        @constraint(model, z in cone)
        optimize!(model)
        z_star = value.(z)
        px = MOD.projection_on_set(DD, x, cone)
        if !isapprox(px, z_star, atol=tol)
            # error("Exp cone projection failed:\n x = $x\nMOD: $px\nJuMP: $z_star
            #       \nnorm: $(norm(px - z_star))")
            return false
        end
        return true
    end

    Random.seed!(0)
    n = 3
    atol = 1e-7
    case_p = zeros(4)
    case_d = zeros(4)
    for _ in 1:100
        x = randn(3)

        case_p[det_case_exp_cone(x; dual=false)] += 1
        @test _test_proj_exp_cone_help(x, atol; dual=false)

        case_d[det_case_exp_cone(x; dual=true)] += 1
        @test _test_proj_exp_cone_help(x, atol; dual=true)
    end
    @test all(case_p .> 0) && all(case_d .> 0)
end
```

Review thread on `@constraint(model, sum((x - z).^2) <= t)`:

Reviewer: Random note:

Author: Oh, interesting. Admittedly, I didn't think to check the latter.
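For readers unfamiliar with the four branches of `det_case_exp_cone`, here is a standalone Python sketch of the same classification, written directly from the textbook definitions of the exponential cone and its dual rather than via `distance_to_set`. The function names and the `tol` value are illustrative assumptions, not the package's API:

```python
import math

def in_exp_cone(v, tol=1e-8):
    # K_exp = cl{ (x, y, z) : y > 0, y * exp(x / y) <= z }
    x, y, z = v
    if y > tol:
        return y * math.exp(x / y) <= z + tol
    # Boundary part (y == 0): requires x <= 0 and z >= 0.
    return abs(y) <= tol and x <= tol and z >= -tol

def in_dual_exp_cone(v, tol=1e-8):
    # K_exp^* = cl{ (u, v, w) : u < 0, -u * exp(v / u) <= e * w }
    u, s, w = v
    if u < -tol:
        return -u * math.exp(s / u) <= math.e * w + tol
    # Boundary part (u == 0): requires v >= 0 and w >= 0.
    return abs(u) <= tol and s >= -tol and w >= -tol

def det_case(v):
    """Mirror of det_case_exp_cone: 1 = v in the cone, 2 = -v in the
    dual cone, 3 = special face (x <= 0, y <= 0), 4 = general case."""
    if in_exp_cone(v):
        return 1
    if in_dual_exp_cone([-c for c in v]):
        return 2
    if v[0] <= 0 and v[1] <= 0:
        return 3
    return 4
```

Cases 1 and 2 make the projection trivial (P(v) = v, and P(v) = 0 since v then lies in the polar cone); case 3 admits a closed-form projection onto a boundary face; case 4 is the one that reduces to a scalar root-finding problem, which is what `_bisection` above is for.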
Review thread on the test file's use of `Random`:

Reviewer: Don't we need `Random` here?

Author: I saw that `Random` was already in the package dependencies, but I don't think it's used outside of tests. I can move it to `extras`, unless there's something I'm missing.

Reviewer: I'll do that in a following PR; we can leave it in the deps for now.
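The primal/dual pairing exercised by the `dual=false` / `dual=true` tests above rests on the Moreau decomposition: for any closed convex cone K, every point splits as x = P_K(x) + P_{K°}(x) with K° = -K* the polar cone, and the two parts orthogonal. A minimal self-contained check of this identity, using the nonnegative orthant (self-dual, so both projections are just clipping) instead of the exponential cone:

```python
def proj_cone(x):
    # P_K for K = nonnegative orthant: clip negative entries to zero.
    return [max(c, 0.0) for c in x]

def proj_polar(x):
    # Polar cone K° = -K* is the nonpositive orthant: clip positives.
    return [min(c, 0.0) for c in x]

x = [1.5, -2.0, 0.0, 3.25]
p, q = proj_cone(x), proj_polar(x)
# Moreau decomposition: x should equal p + q exactly ...
residual = [a - (b + c) for a, b, c in zip(x, p, q)]
# ... and the two projections should be orthogonal.
inner = sum(b * c for b, c in zip(p, q))
```

This is the invariant that makes "project onto the dual cone" testable with the same machinery as the primal projection; for the exponential cone the clipping is replaced by the case analysis in `det_case_exp_cone`.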