
Applying layer-bounds in reluplex leads to false :holds #82

Closed
tomerarnon opened this issue Feb 5, 2020 · 0 comments

tomerarnon commented Feb 5, 2020

-- update --
Actually, for this other simple network, tight = false doesn't solve the problem:

l1 = Layer(reshape([-3.0; 1.7], (2, 1)), [1.2, -4.8], ReLU())
l2 = Layer([-0.58 -1.37], [3.94], ReLU())
nnet = Network([l1, l2])

However, this is still definitely a bounds issue since completely removing the layer-bounds from reluplex leads to the correct result.

It isn't clear whether the bound calculation itself is wrong or whether its usage is incorrect; this still needs to be explored.
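For reference, this second network can be written out in plain Julia without the package (a sketch; relu and f2 are helper names introduced here, and since no input set is stated for this example, this only pins down the function being analyzed):

```julia
# Closed-form evaluation of the two-layer network above,
# without NeuralVerification.jl. relu/f2 are hypothetical names.
relu(x) = max(x, 0.0)

function f2(x)
    # l1: weights reshape([-3.0; 1.7], (2, 1)), bias [1.2, -4.8], ReLU
    h1 = relu(-3.0 * x + 1.2)
    h2 = relu(1.7 * x - 4.8)
    # l2: weights [-0.58 -1.37], bias [3.94], ReLU
    relu(-0.58 * h1 - 1.37 * h2 + 3.94)
end
```

For example, f2(1.0) == 3.94 exactly, since both hidden units are inactive at x = 1.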

-- original issue --

The incorrect bounds affect other solvers that use them as constraints, such as Reluplex. This failure case is courtesy of Chris Strong:

nnet = Network([Layer(reshape([1.0; 1.0], (2, 1)), [0.5, 0.1], ReLU()), 
                Layer([-4. -1.], [1.0], Id())])

# on the interval [-1, 1], nnet's output ranges over [-6.1, 1]. We test
# y ≤ 0.05 | x ∈ [-1, 1], which we know is false.
input_set = Hyperrectangle(low = [-1.0], high = [1.0])
output_set = HalfSpace([1.0], 0.05)
problem = Problem(nnet, input_set, output_set)

julia> solve(Reluplex(), problem)
CounterExampleResult(:holds, Float64[])
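As a quick sanity check, this 1-2-1 network can be evaluated in closed form in plain Julia (a sketch; relu and f are names introduced here, no verification package needed), and x = -1 already violates y ≤ 0.05:

```julia
# Closed-form evaluation of the 1-2-1 network above.
relu(x) = max(x, 0.0)
f(x) = -4.0 * relu(x + 0.5) - 1.0 * relu(x + 0.1) + 1.0

f(-1.0)  # 1.0 > 0.05, so the property does not hold
f(1.0)   # ≈ -6.1, the other endpoint of the output range
```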

This is obviously incorrect, and is due to the bounds being overly tight (leading to an infeasible optimization problem). Changing the MaxSens calculation to tight = false yields the correct result (i.e. :violated). See these sections for reference on where this is happening:

The tight version is used in get_bounds:

function get_bounds(nnet::Network, input::Hyperrectangle, act::Bool = true) # NOTE there is another function by the same name in convDual. Should reconsider dispatch
    if act
        solver = MaxSens(0.0, true)

For the return statement in question, see line 60:

function forward_node(solver::MaxSens, node::Node, input::Hyperrectangle)
    output = node.w' * input.center + node.b
    deviation = sum(abs.(node.w) .* input.radius)
    β = node.act(output) # TODO expert suggestion for variable name. beta? β? O? x?
    βmax = node.act(output + deviation)
    βmin = node.act(output - deviation)
    if solver.tight
        return ((βmax + βmin)/2, (βmax - βmin)/2)
    else
        return (β, max(abs(βmax - β), abs(βmin - β)))
    end
end
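To see concretely what the two branches return, here is a standalone sketch of the same computation (a simplified, hypothetical signature with no NeuralVerification.jl types):

```julia
# Standalone sketch of MaxSens node propagation, both modes.
relu(x) = max(x, 0.0)

function forward_node_sketch(w, b, center, radius, act; tight::Bool)
    output    = w' * center + b            # pre-activation center
    deviation = sum(abs.(w) .* radius)     # pre-activation radius
    β    = act(output)
    βmax = act(output + deviation)
    βmin = act(output - deviation)
    if tight
        # re-centered interval [βmin, βmax]
        ((βmax + βmin) / 2, (βmax - βmin) / 2)
    else
        # interval symmetric around β, at least as wide
        (β, max(abs(βmax - β), abs(βmin - β)))
    end
end
```

For a ReLU node with w = [1.0], b = 0.0, and input center [0.0], radius [1.0], the tight branch returns (0.5, 0.5), i.e. the interval [0, 1], which is exact for a monotone activation; the loose branch returns (0.0, 1.0), i.e. [-1, 1], a strictly wider interval centered at β.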

@changliuliu, before I go any deeper, is there anything above that strikes you as incorrect?

tomerarnon changed the title from "The tight version of MaxSens leads to incorrect bounds" to "Applying layer-bounds in reluplex leads to false :holds" on Feb 6, 2020