Merged
25 commits
c418e52
Update JuMP version and remove ConditionalJuMP, vendoring required
vtjeng Sep 20, 2020
2bb4b28
Rename Variable -> VariableRef, per PR #1248 in jump-dev/JuMP.jl.
vtjeng Sep 20, 2020
9227ef5
Bump Cbc compat.
vtjeng Sep 21, 2020
95b3acb
WIP: fix test errors.
vtjeng Sep 21, 2020
727a8ff
Remove increment! function and replace with its JuMP equivalent.
vtjeng Sep 22, 2020
232ecd9
Change get_log_level function and fix a few more unit tests.
vtjeng Sep 22, 2020
8d646ee
Reimplement LP solve.
vtjeng Sep 22, 2020
277c4ae
Update JuMP version link.
vtjeng Sep 22, 2020
12ec9c9
Make tests quieter.
vtjeng Sep 22, 2020
9e45aca
Don't extract objective bounds or values when situation is infeasible.
vtjeng Sep 22, 2020
ba61519
Convert solve_status to string before serialization so it can be writ…
vtjeng Sep 22, 2020
49f3bab
Fix default solver bug, improve doc formatting.
vtjeng Sep 22, 2020
0ac9427
Remove unnecessary solve_time block.
vtjeng Sep 22, 2020
b5e70e7
Simplify get_model.
vtjeng Sep 22, 2020
5528e70
Use solve context for relaxing models.
vtjeng Sep 22, 2020
5863f89
Add auxiliary objective when necessary.
vtjeng Sep 22, 2020
7783669
Fix use of test_find_adversarial_example in Gurobi-only tests.
vtjeng Sep 22, 2020
7ea1ee4
Fix 'nothing' checks.
vtjeng Sep 22, 2020
5260e53
Add tests to cover new code.
vtjeng Sep 25, 2020
9332379
Inspect type of optimizer returned
vtjeng Sep 26, 2020
94ee907
Fix get_default_tightening_options.
vtjeng Sep 26, 2020
9b10b35
Apply suggestions from code review
vtjeng Sep 26, 2020
20ffd70
Modify signature of tight_bound_helper.
vtjeng Sep 26, 2020
6d9a389
Eliminate solve_context.
vtjeng Sep 26, 2020
a7f00f6
Relax check for optimality.
vtjeng Sep 26, 2020
12 changes: 6 additions & 6 deletions Project.toml
@@ -5,27 +5,27 @@ version = "0.2.3"

[deps]
CSV = "336ed68f-0bac-5ca0-87d4-7b16caf5d00b"
ConditionalJuMP = "ae04f764-fc8b-5ee0-af1c-aa760b5c9409"
vtjeng (Owner, Author) commented:
We vendor the functions used from ConditionalJuMP, removing it as a dependency (it does not support JuMP >= 0.19) and adding IntervalArithmetic instead.
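A rough sketch of the interval-arithmetic bound propagation this enables (illustrative only; the `..` interval constructor is from IntervalArithmetic, whose exact API varies across the 0.14–0.17 releases pinned in this file's compat section):

```julia
using IntervalArithmetic

# Bounds on the input to a nonlinear unit can be obtained by evaluating
# the affine pre-activation expression over intervals.
x1 = -1 .. 1                 # first input known to lie in [-1, 1]
x2 = 0 .. 2                  # second input known to lie in [0, 2]
z = 2 * x1 - 3 * x2 + 0.5    # interval enclosing the pre-activation
# z.lo and z.hi then give lower/upper bounds usable in the MIP formulation.
```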

DataFrames = "a93c6f00-e57d-5684-b7b6-d8193f3e46c0"
Dates = "ade2ca70-3891-5945-98fb-dc099432e06a"
DelimitedFiles = "8bb1440f-4735-579b-a4ab-409b98df4dab"
DocStringExtensions = "ffbed154-4ef7-542d-bbb7-c09d3a79fcae"
IntervalArithmetic = "d1acc4aa-44c8-5952-acd4-ba5d80a2a253"
JuMP = "4076af6c-e467-56ae-b986-b466b2749572"
MAT = "23992714-dd62-5051-b70f-ba57cb901cac"
MathProgBase = "fdba3010-5040-5b88-9595-932c9decdf73"
MathOptInterface = "b8f27783-ece8-5eb3-8dc8-9495eed66fee"
vtjeng (Owner, Author) commented:
MathProgBase was replaced with MathOptInterface by the upgrade to JuMP.
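The user-facing shape of that migration is roughly as follows (a sketch, not runnable as-is, since it assumes Gurobi.jl is installed and licensed):

```julia
using JuMP, Gurobi

# JuMP 0.18 / MathProgBase style (removed by this PR):
#   m = Model(solver = GurobiSolver(OutputFlag = 0))
#   status = solve(m)              # returned a Symbol, e.g. :Optimal
#   t = getsolvetime(m)

# JuMP 0.21 / MathOptInterface style (adopted by this PR):
m = Model(Gurobi.Optimizer)        # pass an optimizer *factory*, not a solver instance
set_optimizer_attribute(m, "OutputFlag", 0)
optimize!(m)
status = termination_status(m)     # an MOI.TerminationStatusCode, e.g. MOI.OPTIMAL
t = solve_time(m)
```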

Memento = "f28f55f0-a522-5efc-85c2-fe41dfb9b2d9"
ProgressMeter = "92933f4c-e287-5a05-a399-4b506db050ca"
Serialization = "9e88b42a-f829-5b0c-bbe9-9e923198166b"

[compat]
CSV = "0.7"
Cbc = "0.4, 0.5, 0.6"
ConditionalJuMP = "0.1"
Cbc = "0.4, 0.5, 0.6, 0.7"
DataFrames = "0.21"
DocStringExtensions = "0.8"
JuMP = "0.18"
JuMP = "0.21.4, 0.21.5"
IntervalArithmetic = "0.14, 0.15, 0.16, 0.17"
MAT = "0.5, 0.6, 0.7, 0.8"
MathProgBase = "0.7"
MathOptInterface = "0.9"
Memento = "0.12, 0.13, 1.0"
ProgressMeter = "1"
TimerOutputs = "0.5"
6 changes: 3 additions & 3 deletions README.md
@@ -25,7 +25,7 @@ Fortunately, we _can_ evaluate robustness to adversarial examples in a principle
Determining the minimum adversarial distortion for some input (or proving that no bounded perturbation of that input causes a misclassification) corresponds to solving an optimization problem. For piecewise-linear neural networks, the optimization problem can be expressed as a mixed-integer linear programming (MILP) problem.

## Features
`MIPVerify.jl` translates your query on the robustness of a neural network for some input into an MILP problem, which can then be solved by any solver supported by [JuMP](https://github.com/JuliaOpt/JuMP.jl). Efficient solves are enabled by tight specification of ReLU and maximum constraints and a progressive bounds tightening approach where time is spent refining bounds only if doing so could provide additional information to improve the problem formulation.
`MIPVerify.jl` translates your query on the robustness of a neural network for some input into an MILP problem, which can then be solved by any optimizer supported by [JuMP](https://github.com/JuliaOpt/JuMP.jl). Efficient solves are enabled by tight specification of ReLU and maximum constraints and a progressive bounds tightening approach where time is spent refining bounds only if doing so could provide additional information to improve the problem formulation.

The package provides
+ High-level abstractions for common types of neural network layers:
@@ -35,12 +35,12 @@ The package provides
+ Perturbations of bounded l-infty norm
+ Perturbations where the image is convolved with an adversarial blurring kernel
+ Utility functions for:
+ Evaluating the robustness of a network on multiple samples in a dataset, with good support for pausing and resuming evaluation or running solvers with different parameters
+ Evaluating the robustness of a network on multiple samples in a dataset, with good support for pausing and resuming evaluation or running optimizers with different parameters
+ MNIST and CIFAR10 datasets for verification
+ Sample neural networks, including the networks verified in our paper.

## Results in Brief
Below is a modified version of Table 1 from our paper, where we report the adversarial error for classifiers to bounded perturbations with l-infinity norm-bound `eps`. For our verifier, a time limit of 120s per sample is imposed. Gaps between our bounds correspond to cases where the solver reached the time limit for some samples. Error is over the full MNIST test set of 10,000 samples.
Below is a modified version of Table 1 from our paper, where we report the adversarial error for classifiers to bounded perturbations with l-infinity norm-bound `eps`. For our verifier, a time limit of 120s per sample is imposed. Gaps between our bounds correspond to cases where the optimizer reached the time limit for some samples. Error is over the full MNIST test set of 10,000 samples.

| Dataset | Training Approach | `eps` | Lower<br>Bound<br>(PGD Error) | Lower<br>Bound<br>(ours) | Upper<br>Bound<br>(SOA)\^ | Upper<br>Bound<br>(ours)| Name in package\* |
|---|---|---|---|---|---|---|---|
8 changes: 4 additions & 4 deletions docs/src/index.md
@@ -6,10 +6,10 @@
To use our package, you require

1. The Julia programming language
2. An optimization solver [supported](http://www.juliaopt.org/JuMP.jl/0.18/installation.html#getting-solvers) by `JuMP`
3. The Julia package for working with that solver
2. An optimizer [supported](https://jump.dev/JuMP.jl/stable/installation/#Getting-Solvers-1) by `JuMP`
3. The Julia package for working with that optimizer

Our choice of solver is [Gurobi](http://www.gurobi.com/), but any supported optimization solver will work.
Our choice of optimizer is [Gurobi](http://www.gurobi.com/), but any supported optimizer will work.

**Platform compatibility:** Julia and Gurobi are available for 32-bit and 64-bit Windows, 64-bit macOS, and 64-bit Linux, but example code in this README is for Linux.

@@ -84,7 +84,7 @@ info : License 000000 written to file /home/ubuntu/gurobi.lic
If you store the license file in a non-default location, you will have to add the environment variable `GRB_LICENSE_FILE` to your startup file: `export GRB_LICENSE_FILE="/your/path/here/gurobi.lic"`

#### Installing `Gurobi.jl`
`Gurobi.jl` is a wrapper of the Gurobi solver accessible in Julia. Once you have installed Gurobi *and* activated the license, install the latest release of `Gurobi.jl`:
`Gurobi.jl` is a wrapper of the Gurobi optimizer accessible in Julia. Once you have installed Gurobi *and* activated the license, install the latest release of `Gurobi.jl`:
```julia
julia> using Pkg; Pkg.add("Gurobi")
```
6 changes: 3 additions & 3 deletions docs/src/tutorials.md
@@ -14,9 +14,9 @@ We suggest getting started with the tutorials.
+ Have more precise control over the activations in the output layer
+ Restrict the family of perturbations (for example to the blurring perturbations discussed in our paper)
+ Select whether you want to minimize the $L_1$, $L_2$ or $L_\infty$ norm of the perturbation.
+ Modify the amount of time dedicated to building the model (by selecting the `tightening_algorithm`, and/or passing in a custom `tightening_solver`).
+ Modify the amount of time dedicated to building the model (by selecting the `tightening_algorithm`, and/or passing in custom `tightening_options`).

For Gurobi, we show how to specify solver settings to:
For Gurobi, we show how to specify optimizer settings to:
+ Mute output
+ Terminate early if:
+ A time limit is reached
@@ -28,4 +28,4 @@ For Gurobi, we show how to specify solver settings to:
[Walks you through](https://nbviewer.jupyter.org/github/vtjeng/MIPVerify.jl/blob/master/examples/03_interpreting_the_output_of_find_adversarial_example.ipynb) the output dictionary produced by a call to `find_adversarial_example`.

## Managing log output
[Explains how](https://nbviewer.jupyter.org/github/vtjeng/MIPVerify.jl/blob/master/examples/04_managing_log_output.ipynb) to get more granular log settings and to write log output to file.
[Explains how](https://nbviewer.jupyter.org/github/vtjeng/MIPVerify.jl/blob/master/examples/04_managing_log_output.ipynb) to get more granular log settings and to write log output to file.
87 changes: 46 additions & 41 deletions src/MIPVerify.jl
@@ -2,7 +2,7 @@ module MIPVerify

using Base.Cartesian
using JuMP
using ConditionalJuMP
using MathOptInterface
using Memento
using DocStringExtensions
using ProgressMeter
@@ -16,6 +16,9 @@ export find_adversarial_example, frac_correct, interval_arithmetic, lp, mip
@enum AdversarialExampleObjective closest = 1 worst = 2
const DEFAULT_TIGHTENING_ALGORITHM = mip

# Import vendor/ConditionalJuMP.jl first, since the remaining files use functions
# defined in it. (We are unsure whether this ordering is strictly necessary.)
include("vendor/ConditionalJuMP.jl")
include("net_components.jl")
include("models.jl")
include("utils.jl")
@@ -25,12 +28,15 @@ function get_max_index(x::Array{<:Real,1})::Integer
return findmax(x)[2]
end

function get_default_tightening_solver(
main_solver::MathProgBase.SolverInterface.AbstractMathProgSolver,
)::MathProgBase.SolverInterface.AbstractMathProgSolver
tightening_solver = typeof(main_solver)()
MathProgBase.setparameters!(tightening_solver, Silent = true, TimeLimit = 20)
return tightening_solver
function get_default_tightening_options(optimizer)::Dict
optimizer_type_name = string(typeof(optimizer()))
if optimizer_type_name == "Gurobi.Optimizer"
return Dict("OutputFlag" => 0, "TimeLimit" => 20)
elseif optimizer_type_name == "Cbc.Optimizer"
return Dict("logLevel" => 0, "seconds" => 20)
else
return Dict()
end
end
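An illustrative call exercising the new keyword (hypothetical `nn`/`input` names; the attribute names shown are Gurobi's):

```julia
# Override the default 20s tightening time limit with a 5s limit.
d = find_adversarial_example(
    nn,                                    # a MIPVerify.NeuralNet
    input,                                 # the sample to perturb
    10,                                    # target label index
    Gurobi.Optimizer,
    Dict("OutputFlag" => 0),               # main_solve_options
    tightening_options = Dict("OutputFlag" => 0, "TimeLimit" => 5),
)
```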

"""
@@ -40,7 +46,7 @@
classifies the perturbed image in one of the categories identified by the
indexes in `target_selection`.

`main_solver` specifies the solver used to solve the MIP problem once it has been built.
`optimizer` specifies the optimizer used to solve the MIP problem once it has been built.

The output dictionary has keys `:Model, :PerturbationFamily, :TargetIndexes, :SolveStatus,
:Perturbation, :PerturbedInput, :Output`.
@@ -49,7 +55,8 @@ on what individual dictionary entries correspond to.

*Formal Definition*: If there are a total of `n` categories, the (perturbed) output vector
`y=d[:Output]=d[:PerturbedInput] |> nn` has length `n`.
We guarantee that `y[j] - y[i] ≥ 0` for some `j ∈ target_selection` and for all `i ∉ target_selection`.
We guarantee that `y[j] - y[i] ≥ 0` for some `j ∈ target_selection` and for all
`i ∉ target_selection`.

# Named Arguments:
+ `invert_target_selection::Bool`: Defaults to `false`. If `true`, sets `target_selection` to
@@ -58,35 +65,38 @@ We guarantee that `y[j] - y[i] ≥ 0` for some `j ∈ target_selection` and for
the family of perturbations over which we are searching for adversarial examples.
+ `norm_order::Real`: Defaults to `1`. Determines the distance norm used to determine the
distance from the perturbed image to the original. Supported options are `1`, `Inf`
and `2` (if the `main_solver` used can solve MIQPs.)
and `2` (if the `optimizer` used can solve MIQPs.)
+ `tightening_algorithm::MIPVerify.TighteningAlgorithm`: Defaults to `mip`. Determines how we
determine the upper and lower bounds on input to each nonlinear unit.
Allowed options are `interval_arithmetic`, `lp`, `mip`.
(1) `interval_arithmetic` looks at the bounds on the output to the previous layer.
(2) `lp` solves an `lp` corresponding to the `mip` formulation, but with any integer constraints relaxed.
(2) `lp` solves an `lp` corresponding to the `mip` formulation, but with any integer constraints
relaxed.
(3) `mip` solves the full `mip` formulation.
+ `tightening_solver`: Solver used to determine upper and lower bounds for input to nonlinear units.
Defaults to the same type of solver as the `main_solver`, with a time limit of 20s per solver
and output suppressed. Used only if the `tightening_algorithm` is `lp` or `mip`.
+ `solve_if_predicted_in_targeted`: Defaults to `true`. The prediction that `nn` makes for the unperturbed
`input` can be determined efficiently. If the predicted index is one of the indexes in `target_selection`,
we can skip the relatively costly process of building the model for the MIP problem since we already have an
"adversarial example" --- namely, the input itself. We continue build the model and solve the (trivial) MIP
problem if and only if `solve_if_predicted_in_targeted` is `true`.
+ `tightening_options`: Solver-specific options passed to optimizer when used to determine upper and
lower bounds for input to nonlinear units. Note that these are only used if the
`tightening_algorithm` is `lp` or `mip` (no solver is used when `interval_arithmetic` is used
to compute the bounds). Defaults for Gurobi and Cbc to a time limit of 20s per solve,
with output suppressed.
+ `solve_if_predicted_in_targeted`: Defaults to `true`. The prediction that `nn` makes for the
unperturbed `input` can be determined efficiently. If the predicted index is one of the indexes
in `target_selection`, we can skip the relatively costly process of building the model for the
MIP problem since we already have an "adversarial example" --- namely, the input itself. We
continue build the model and solve the (trivial) MIP problem if and only if
`solve_if_predicted_in_targeted` is `true`.
"""
function find_adversarial_example(
nn::NeuralNet,
input::Array{<:Real},
target_selection::Union{Integer,Array{<:Integer,1}},
main_solver::MathProgBase.SolverInterface.AbstractMathProgSolver;
optimizer,
main_solve_options::Dict;
invert_target_selection::Bool = false,
pp::PerturbationFamily = UnrestrictedPerturbationFamily(),
norm_order::Real = 1,
adversarial_example_objective::AdversarialExampleObjective = closest,
tightening_algorithm::TighteningAlgorithm = DEFAULT_TIGHTENING_ALGORITHM,
tightening_solver::MathProgBase.SolverInterface.AbstractMathProgSolver = get_default_tightening_solver(
main_solver,
),
tightening_options::Dict = get_default_tightening_options(optimizer),
solve_if_predicted_in_targeted = true,
)::Dict

@@ -111,43 +121,38 @@ function find_adversarial_example(
"Attempting to find adversarial example. Neural net predicted label is $(predicted_index), target labels are $(d[:TargetIndexes])",
)

# Only call solver if predicted index is not found among target indexes.
# Only call optimizer if predicted index is not found among target indexes.
if !(d[:PredictedIndex] in d[:TargetIndexes]) || solve_if_predicted_in_targeted
merge!(d, get_model(nn, input, pp, tightening_solver, tightening_algorithm))
merge!(d, get_model(nn, input, pp, optimizer, tightening_options, tightening_algorithm))
m = d[:Model]

if adversarial_example_objective == closest
set_max_indexes(m, d[:Output], d[:TargetIndexes])

# Set perturbation objective
# NOTE (vtjeng): It is important to set the objective immediately before we carry out
# the solve. Functions like `set_max_indexes` can modify the objective.
# NOTE (vtjeng): It is important to set the objective immediately before we carry
# out the solve. Functions like `set_max_indexes` can modify the objective.
@objective(m, Min, get_norm(norm_order, d[:Perturbation]))
elseif adversarial_example_objective == worst
(maximum_target_var, nontarget_vars) =
get_vars_for_max_index(d[:Output], d[:TargetIndexes])
maximum_nontarget_var = maximum_ge(nontarget_vars)
# Introduce an additional variable since Gurobi ignores constant terms in objective,
# but we explicitly need these if we want to stop early based on the value of the objective
# (not simply whether or not it is maximized).
# See discussion in https://github.com/jump-dev/Gurobi.jl/issues/111 for more details.
# but we explicitly need these if we want to stop early based on the value of the
# objective (not simply whether or not it is maximized).
# See discussion in https://github.com/jump-dev/Gurobi.jl/issues/111 for more
# details.
v_obj = @variable(m)
@constraint(m, v_obj == maximum_target_var - maximum_nontarget_var)
@objective(m, Max, v_obj)
else
error("Unknown adversarial_example_objective $adversarial_example_objective")
end
setsolver(m, main_solver)
solve_time = @elapsed begin
d[:SolveStatus] = solve(m)
end
d[:SolveTime] = try
getsolvetime(m)
catch err
# CBC solver, used for testing, does not implement `getsolvetime`.
isa(err, MethodError) || rethrow(err)
solve_time
end
set_optimizer(m, optimizer)
set_optimizer_attributes(m, main_solve_options...)
optimize!(m)
d[:SolveStatus] = JuMP.termination_status(m)
d[:SolveTime] = JuMP.solve_time(m)
end
end
