Merge 967d7fd into 0c1f9e8

tomerarnon committed Mar 6, 2019
2 parents 0c1f9e8 + 967d7fd commit 0cf41fd
Showing 20 changed files with 197 additions and 65 deletions.
4 changes: 2 additions & 2 deletions Manifest.toml
@@ -187,11 +187,11 @@ version = "0.18.5"

[[LazySets]]
deps = ["Compat", "Distributed", "Expokit", "GLPKMathProgInterface", "IntervalArithmetic", "LinearAlgebra", "MathProgBase", "Pkg", "Random", "RecipesBase", "Requires", "SharedArrays", "SparseArrays"]
git-tree-sha1 = "ad0dfae4f073c6ffd7db129d8c449603184e4207"
git-tree-sha1 = "17607a56c94108b9385fbe320f24e74b0191b761"
repo-rev = "master"
repo-url = "https://github.com/JuliaReach/LazySets.jl.git"
uuid = "b4f0291d-fe17-52bc-9479-3d1a343d9043"
version = "1.7.0+"
version = "1.8.0+"

[[LibCURL]]
deps = ["BinaryProvider", "Compat", "Libdl", "Printf"]
2 changes: 1 addition & 1 deletion docs/src/index.md
@@ -26,7 +26,7 @@ using NeuralVerification
solver = MaxSens(resolution = 0.3)
```

-### Creating up a Problem
+### Creating a Problem
A `Problem` consists of a `Network` to be tested, a set representing the domain of the *input test region*, and another set representing the range of the *output region*.
In this example, we use a small neural network with only one hidden layer consisting of 2 neurons.
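
A minimal sketch of what constructing such a problem can look like (the file path and set bounds here are illustrative assumptions, not taken from this commit; `read_nnet` and `solve` are the package's network reader and entry point, and `solver` is the `MaxSens` instance created above):

```julia
using NeuralVerification, LazySets

# Hypothetical file path and bounds, for illustration only.
nnet    = read_nnet("examples/networks/small_nnet.txt")
input   = Hyperrectangle(low = [-1.0], high = [1.0])    # input test region
output  = Hyperrectangle(low = [-1.0], high = [70.0])   # required output region
problem = Problem(nnet, input, output)

# Check whether every point of `input` maps into `output`.
result = solve(solver, problem)
```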

56 changes: 55 additions & 1 deletion docs/src/problem.md
@@ -2,7 +2,7 @@

```@contents
Pages = ["problem.md"]
-Depth = 3
+Depth = 4
```

## Problem
@@ -11,6 +11,60 @@
Problem
```

### Input/Output Sets

Different solvers require problems formulated with particular input and output sets.
The table below lists all of the solvers with their required input/output sets.

- HR = `Hyperrectangle`
- HS = `HalfSpace`
- HP = `HPolytope`
- PC = `PolytopeComplement`

[](TODO: review these. Not sure they're all correct.)

| Solver | Input set | Output set |
|----------------------|:-----------:|:----------------:|
| [`ExactReach`](@ref) | HP | HP (bounded[^1]) |
| [`AI2`](@ref) | HP | HP (bounded[^1]) |
| [`MaxSens`](@ref) | HR | HP (bounded[^1]) |
| [`NSVerify`](@ref) | HR | PC[^2] |
| [`MIPVerify`](@ref) | HR | PC[^2] |
| [`ILP`](@ref) | HR | PC[^2] |
| [`Duality`](@ref) | HR(uniform) | HS |
| [`ConvDual`](@ref) | HR(uniform) | HS |
| [`Certify`](@ref) | HR | HS |
| [`FastLin`](@ref) | HR | HS |
| [`FastLip`](@ref) | HR | HS |
| [`ReluVal`](@ref) | HR | HR |
| [`DLV`](@ref) | HR | HR[^3] |
| [`Sherlock`](@ref) | HR | HR[^3] |
| [`BaB`](@ref) | HR | HR[^3] |
| [`Planet`](@ref) | HR | PC[^2] |
| [`Reluplex`](@ref) | HP | PC[^2] |

[^1]: This restriction is not due to a theoretical limitation, but rather to our implementation, and will eventually be relaxed.

[^2]: See [`PolytopeComplement`](@ref) for a justification of this output set restriction.

[^3]: The output set must be one-dimensional, i.e. the network may have only one output node.

Note that solvers which require `Hyperrectangle`s also work on `HPolytope`s by overapproximating the input set. Likewise, solvers that require `HPolytope`s accept a `Hyperrectangle` input by converting it to its H-representation. Any set that can be turned into the required set is converted, wrapped, or approximated appropriately.
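
As a rough illustration of these conversions (standard `LazySets` operations shown as a sketch, not code from this commit):

```julia
using LazySets

hr = Hyperrectangle(low = [-1.0, -1.0], high = [1.0, 1.0])

# Exact conversion of a box to H-representation, for solvers that require HPolytope:
hp = convert(HPolytope, hr)

# Box overapproximation of a polytope, for solvers that require Hyperrectangle:
box = overapproximate(hp, Hyperrectangle)
```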

### PolytopeComplements

Some optimization-based solvers work on the principle of a complementary output constraint.
Essentially, they test whether a point *is not* in a set by checking whether it is in the complement of that set (or vice versa).
To capture the kinds of sets we are interested in for these solvers, we define the `PolytopeComplement`, which represents the complement of a convex set.
Note that in general, the complement of a convex set is neither convex nor closed. [](would be good to include an image like the one in the paper that goes with AdversarialResult)

Although it is possible to represent the complement of a `HalfSpace` as another `HalfSpace`, we require that it be specified as a `PolytopeComplement` to disambiguate the boundary.
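
A minimal sketch of that boundary behavior (illustrative, not from this commit):

```julia
using NeuralVerification, LazySets

H  = HalfSpace([1.0, 0.0], 0.0)   # closed set {x : x₁ ≤ 0}
PC = complement(H)                # open set {x : x₁ > 0}

[0.0, 0.0] ∈ H    # true:  the boundary x₁ = 0 belongs to the HalfSpace
[0.0, 0.0] ∈ PC   # false: the complement is open, so the boundary is excluded
```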

```@docs
PolytopeComplement
```


## Network

```@autodocs
Expand Down
2 changes: 2 additions & 0 deletions src/NeuralVerification.jl
@@ -32,6 +32,8 @@ export
Solver,
Network,
AbstractActivation,
+PolytopeComplement,
+complement,
# NOTE: not sure if exporting these is a good idea as far as namespace conflicts go:
# ReLU,
# Max,
4 changes: 2 additions & 2 deletions src/adversarial/dlv.jl
@@ -5,8 +5,8 @@ DLV searches layer by layer for counter examples in hidden layers.
# Problem requirement
1. Network: any depth, any activation (currently only support ReLU)
-2. Input: hyperrectangle
-3. Output: abstractpolytope
+2. Input: Hyperrectangle
+3. Output: AbstractPolytope
# Return
`CounterExampleResult`
2 changes: 1 addition & 1 deletion src/adversarial/fastLin.jl
@@ -8,7 +8,7 @@ FastLin combines reachability analysis with binary search to find maximum allowa
# Problem requirement
1. Network: any depth, ReLU activation
2. Input: hypercube
-3. Output: halfspace
+3. Output: AbstractPolytope
# Return
`AdversarialResult`
4 changes: 2 additions & 2 deletions src/adversarial/reluVal.jl
@@ -6,7 +6,7 @@ ReluVal combines symbolic reachability analysis with iterative interval refineme
# Problem requirement
1. Network: any depth, ReLU activation
2. Input: hyperrectangle
-3. Output: hpolytope
+3. Output: AbstractPolytope
# Return
`CounterExampleResult` or `ReachabilityResult`
@@ -93,7 +93,7 @@ end
 function check_inclusion(reach::SymbolicInterval, output::AbstractPolytope, nnet::Network)
     reachable = symbol_to_concrete(reach)
     issubset(reachable, output) && return BasicResult(:holds)
-    is_intersection_empty(reachable, output) && return BasicResult(:violated)
+    # is_intersection_empty(reachable, output) && return BasicResult(:violated)

     # Sample the middle point
     middle_point = (high(reach.interval) + low(reach.interval))./2
2 changes: 1 addition & 1 deletion src/optimization/iLP.jl
@@ -7,7 +7,7 @@ It iteratively adds the linear constraint to the problem.
# Problem requirement
1. Network: any depth, ReLU activation
2. Input: hyperrectangle
-3. Output: halfspace
+3. Output: PolytopeComplement
# Return
`AdversarialResult`
2 changes: 1 addition & 1 deletion src/optimization/mipVerify.jl
@@ -6,7 +6,7 @@ MIPVerify computes maximum allowable disturbance using mixed integer linear prog
# Problem requirement
1. Network: any depth, ReLU activation
2. Input: hyperrectangle
-3. Output: halfspace
+3. Output: PolytopeComplement
# Return
`AdversarialResult`
2 changes: 1 addition & 1 deletion src/optimization/nsVerify.jl
@@ -6,7 +6,7 @@ NSVerify finds counter examples using mixed integer linear programming.
# Problem requirement
1. Network: any depth, ReLU activation
2. Input: hyperrectangle or hpolytope
-3. Output: halfspace
+3. Output: PolytopeComplement
# Return
`CounterExampleResult`
46 changes: 24 additions & 22 deletions src/optimization/utils/constraints.jl
@@ -222,43 +222,45 @@ end
 #=
 Add input/output constraints to model
 =#
-function add_complementary_set_constraint!(model::Model, output::HPolytope, neuron_vars::Vector{Variable})
+function add_complementary_set_constraint!(model::Model, output::HPolytope, z::Vector{Variable})
     out_A, out_b = tosimplehrep(output)
     # Needs to take the complementary of output constraint
-    n = length(out_b)
+    n = length(constraints_list(output))
     if n == 1
-        # Here the output constraint is a half space
-        # So the complementary is just out_A * y .> out_b
-        @constraint(model, -out_A * neuron_vars .<= -out_b)
+        halfspace = first(constraints_list(output))
+        add_complementary_set_constraint!(model, halfspace, z)
     else
-        LC = length(constraints_list(output))
-        @assert LC == 1 "Quadratic constraints are not yet supported. Please make sure that the
-        output constraint is a HalfSpace (an HPolytope with a single constraint). Got $LC constraints."
-        # Here the complementary is a union of different constraints
-        # We use binary variable to encode the union of constraints
-        out_deltas = @variable(model, [1:n], Bin)
-        @constraint(model, sum(out_deltas) == 1)
-        for i in 1:n
-            @constraint(model, -out_A[i, :]' * neuron_vars * out_deltas[i] <= -out_b[i] * out_deltas[i])
-        end
+        error("Non-convex constraints are not supported. Please make sure that the
+        output set is a HalfSpace (or an HPolytope with a single constraint) so that the
+        complement of the output is convex. Got $n constraints.")
     end
     return nothing
 end

-function add_complementary_set_constraint!(model::Model, output::Hyperrectangle, neuron_vars::Vector{Variable})
-    @constraint(model, neuron_vars .>= -high(output))
-    @constraint(model, neuron_vars .<= -low(output))
+function add_complementary_set_constraint!(m::Model, H::HalfSpace, z::Vector{Variable})
+    a, b = tosimplehrep(H)
+    @constraint(m, a * z .>= b)
     return nothing
 end
+function add_complementary_set_constraint!(m::Model, PC::PolytopeComplement, z::Vector{Variable})
+    add_set_constraint!(m, PC.P, z)
+    return nothing
+end

-function add_set_constraint!(model::Model, set::HPolytope, neuron_vars::Vector{Variable})
+function add_set_constraint!(m::Model, set::Union{HPolytope, HalfSpace}, z::Vector{Variable})
     A, b = tosimplehrep(set)
-    @constraint(model, A * neuron_vars .<= b)
+    @constraint(m, A * z .<= b)
     return nothing
 end

+function add_set_constraint!(m::Model, set::Hyperrectangle, z::Vector{Variable})
+    @constraint(m, z .<= high(set))
+    @constraint(m, z .>= low(set))
+    return nothing
+end
+
-function add_set_constraint!(model::Model, set::Hyperrectangle, neuron_vars::Vector{Variable})
-    @constraint(model, neuron_vars .<= high(set))
-    @constraint(model, neuron_vars .>= low(set))
+function add_set_constraint!(m::Model, PC::PolytopeComplement, z::Vector{Variable})
+    add_complementary_set_constraint!(m, PC.P, z)
     return nothing
 end
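
A hedged usage sketch of these helpers (JuMP 0.18-era syntax to match the `Variable` type above; the helpers are internal, hence the module qualification, and the sets are illustrative):

```julia
using JuMP, LazySets, NeuralVerification
const NV = NeuralVerification

m = Model()
z = @variable(m, [1:2])   # one decision variable per output dimension

# Constrain z to lie inside a box:
NV.add_set_constraint!(m, Hyperrectangle(low = [-1.0, -1.0], high = [1.0, 1.0]), z)

# Constrain z to lie in the complement of {x : x₁ ≤ 0}; the strict
# inequality x₁ > 0 is relaxed to x₁ ≥ 0 by the HalfSpace method above.
NV.add_set_constraint!(m, complement(HalfSpace([1.0, 0.0], 0.0)), z)
```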
2 changes: 1 addition & 1 deletion src/reachability/ai2.jl
@@ -6,7 +6,7 @@ Ai2 performs over-approximated reachability analysis to compute the over-approxi
# Problem requirement
1. Network: any depth, ReLU activation (more activations to be supported in the future)
2. Input: HPolytope
-3. Output: HPolytope
+3. Output: AbstractPolytope
# Return
`ReachabilityResult`
2 changes: 1 addition & 1 deletion src/reachability/exactReach.jl
@@ -6,7 +6,7 @@ ExactReach performs exact reachability analysis to compute the output reachable
# Problem requirement
1. Network: any depth, ReLU activation
2. Input: HPolytope
-3. Output: HPolytope
+3. Output: AbstractPolytope
# Return
`ReachabilityResult`
2 changes: 1 addition & 1 deletion src/reachability/maxSens.jl
@@ -6,7 +6,7 @@ MaxSens performs over-approximated reachability analysis to compute the over-app
# Problem requirement
1. Network: any depth, any activation that is monotone
2. Input: `Hyperrectangle` or `HPolytope`
-3. Output: `HPolytope`
+3. Output: `AbstractPolytope`
# Return
`ReachabilityResult`
4 changes: 2 additions & 2 deletions src/reachability/utils/reachability.jl
@@ -14,14 +14,14 @@ end
# Checks whether the reachable set belongs to the output constraint
# It is called by all solvers under reachability
# Note vertices_list is not defined for HPolytope: to be defined
-function check_inclusion(reach::Vector{<:AbstractPolytope}, output::AbstractPolytope)
+function check_inclusion(reach::Vector{<:AbstractPolytope}, output)
     for poly in reach
         issubset(poly, output) || return ReachabilityResult(:violated, reach)
     end
     return ReachabilityResult(:holds, similar(reach, 0))
 end

-function check_inclusion(reach::P, output::AbstractPolytope) where P<:AbstractPolytope
+function check_inclusion(reach::P, output) where P<:AbstractPolytope
     if issubset(reach, output)
         return ReachabilityResult(:holds, P[])
     end
2 changes: 1 addition & 1 deletion src/satisfiability/planet.jl
@@ -6,7 +6,7 @@ Planet integrates a SAT solver (`PicoSAT.jl`) to find an activation pattern that
# Problem requirement
1. Network: any depth, ReLU activation
2. Input: hyperrectangle or hpolytope
-3. Output: halfspace
+3. Output: PolytopeComplement
# Return
`BasicResult`
2 changes: 1 addition & 1 deletion src/satisfiability/reluplex.jl
@@ -6,7 +6,7 @@ Reluplex uses binary tree search to find an activation pattern that maps a feasi
# Problem requirement
1. Network: any depth, ReLU activation
2. Input: hyperrectangle
-3. Output: halfspace
+3. Output: PolytopeComplement
# Return
`CounterExampleResult`
44 changes: 39 additions & 5 deletions src/utils/problem.jl
@@ -1,15 +1,49 @@

"""
PolytopeComplement
The complement of a given set. Note that in general, a `PolytopeComplement` is not necessarily a convex set.
Also note that `PolytopeComplement`s are open by definition.
### Examples
```julia
julia> H = Hyperrectangle([0,0], [1,1])
Hyperrectangle{Int64}([0, 0], [1, 1])
julia> PC = complement(H)
PolytopeComplement of:
Hyperrectangle{Int64}([0, 0], [1, 1])
julia> center(H) ∈ PC
false
julia> high(H).+[1,1] ∈ PC
true
```
"""
struct PolytopeComplement{S<:LazySet}
P::S
end

Base.show(io::IO, PC::PolytopeComplement) = (println(io, "PolytopeComplement of:"), println(io, " ", PC.P))
LazySets.issubset(s, PC::PolytopeComplement) = LazySets.is_intersection_empty(s, PC.P)
LazySets.is_intersection_empty(s, PC::PolytopeComplement) = LazySets.issubset(s, PC.P)
LazySets.tohrep(PC::PolytopeComplement) = PolytopeComplement(convert(HPolytope, PC.P))
Base.in(pt, PC::PolytopeComplement) = pt ∉ PC.P
complement(PC::PolytopeComplement) = PC.P
complement(P::LazySet) = PolytopeComplement(P)
# etc.


"""
-    Problem(network, input, output)
+    Problem{P, Q}(network::Network, input::P, output::Q)

 Problem definition for neural verification.

-- `network` is of type `Network`
-- `input` belongs to `AbstractPolytope` in `LazySets.jl`
-- `output` belongs to `AbstractPolytope` in `LazySets.jl`
+The verification problem consists of: for all points in the input set,
+the corresponding output of the network must belong to the output set.
"""
-struct Problem{P<:AbstractPolytope, Q<:AbstractPolytope}
+struct Problem{P, Q}
network::Network
input::P
output::Q
