Importing my own neural network for verification #13

Closed · UserAlreadyTaken opened this issue Apr 23, 2019 · 24 comments

@UserAlreadyTaken

I'm studying your tool now, and I want to import my own network for verification, but I do not know how to get a .mat file. Your example shows one exported from TensorFlow, but how was it exported? Also, my own network has no entries like bias or Adam_1 in your example's format (fc1/bias); that is, I cannot obtain them from my network.
Could you tell me how to get a .mat file (somewhere to download one, or how to create one myself)?

@vtjeng
Owner

vtjeng commented Apr 23, 2019

Hi @UserAlreadyTaken --- I'm not certain what you're trying to do here.

1. If you'd just like to work with an example neural network, you can do so using the function

MIPVerify.get_example_network_params("MNIST.n1") 
# returns a `Sequential` object representing a neural network

The supported options are documented here:

* `'MNIST.n1'`:
  * Architecture: Two fully connected layers with 40 and 20 units.
  * Training: Trained regularly, with no attempt to increase robustness.
* `'MNIST.WK17a_linf0.1_authors'`:
  * Architecture: Two convolutional layers (stride length 2) with 16 and 32 filters respectively (size 4 × 4 in both layers), followed by a fully connected layer with 100 units.
  * Training: Network trained to be robust to attacks with $l_\infty$ norm at most 0.1 via the method in [Provable defenses against adversarial examples via the convex outer adversarial polytope](https://arxiv.org/abs/1711.00851). This is the MNIST network for which results are reported in that paper.
* `'MNIST.RSL18a_linf0.1_authors'`:
  * Architecture: One fully connected layer with 500 units.
  * Training: Network trained to be robust to attacks with $l_\infty$ norm at most 0.1 via the method in [Certified Defenses against Adversarial Examples](https://arxiv.org/abs/1801.09344). This is the MNIST network for which results are reported in that paper.

2. If you'd like to understand how to extract weights saved in a checkpoint and save them to a .mat file, https://github.com/vtjeng/MIPVerify_data/tree/master/weights/mnist/WK17a hopefully provides a useful example (extracting weights from a .pth file produced by PyTorch).


Feel free to provide a link to an example neural network you're trying to verify and I might be able to provide more specific suggestions. Alternatively, it would also help if you described the structure of your network!

@UserAlreadyTaken
Author

https://nbviewer.jupyter.org/github/vtjeng/MIPVerify.jl/blob/master/examples/01_importing_your_own_neural_net.ipynb --- in your tutorial you say, "We'll download a .mat file containing the parameters of a sample neural net containing three layers (exported from tensorflow)." But I find that there is no TensorFlow API for saving network weights as a .mat file. We train a network using TensorFlow and then save the weights into a .h5 file.
http://veriai.xyz/index.php?share/file&user=107&sid=huGQ4a95 provides our network's .h5 file.
We cannot extract weights from a .h5 file in your format ("conv1/bias"); e.g., we can extract the layer names and the corresponding weights, but no biases.
Or is it because you used PyTorch and we used TensorFlow?

@vtjeng
Owner

vtjeng commented Apr 24, 2019

Ah, sorry for the lack of clarity --- I'll fix that in the next release.

As you pointed out, there is no TensorFlow API for saving network weights as a .mat file. (There is also no direct way to do so from PyTorch --- as you can see from the example code I linked to in my previous comment, we also had to construct the .mat file manually.)

Instead, you can use the h5py library to read in the .h5 file, extracting the parameter tensors into a dictionary of weights [1]. You then need to use scipy.io.savemat to save that dictionary as a .mat file.
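
For concreteness, here's a minimal sketch of that conversion in Python (not an official converter from this project). The file name and the internal key layout are assumptions --- inspect your own .h5 file (e.g. with visititems) to find the actual parameter paths:

import h5py
import numpy as np
import scipy.io

params = {}

def collect(name, obj):
    # Only datasets hold actual parameter tensors; groups are just containers.
    if isinstance(obj, h5py.Dataset):
        # e.g. "conv1/conv1/kernel:0" -> "conv1/weight" (key names assumed)
        layer = name.split("/")[0]
        kind = "weight" if "kernel" in name else "bias"
        params[layer + "/" + kind] = np.array(obj)

with h5py.File("my_network.h5", "r") as f:
    f.visititems(collect)

scipy.io.savemat("my_network.mat", params)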

Sorry this is inconvenient. I'm open to feedback (or a pull request!) for how it can be made better.

[1] This example code could be useful.

@UserAlreadyTaken
Author

I have another problem. Before verifying the network, you must use nn = Sequential([...]), but I do not know how to express the max-pooling layer. What are the parameters of MaxPool()? I just used (0,2,2,1), and then when I use MIPVerify.frac_correct(), it reports ERROR: InexactError()...orz...

julia> MIPVerify.frac_correct(nn, cifar10.test, 10000)
ERROR: InexactError()
Stacktrace:
 [1] trunc(::Type{Int64}, ::Float64) at ./float.jl:672
 [2] map at ./tuple.jl:181 [inlined]
 [3] broadcast at ./broadcast.jl:17 [inlined]
 [4] getoutputsize(::Array{Float64,4}, ::NTuple{4,Int64}) at /home/zyw/.julia/v0.6/MIPVerify/src/net_components/layers/pool.jl:90
 [5] poolmap(::MIPVerify.#maximum, ::Array{Float64,4}, ::NTuple{4,Int64}) at /home/zyw/.julia/v0.6/MIPVerify/src/net_components/layers/pool.jl:101
 [6] (::MIPVerify.Pool{4})(::Array{Float64,4}) at /home/zyw/.julia/v0.6/MIPVerify/src/net_components/layers/pool.jl:118
 [7] chain(::Array{Float64,4}, ::Array{MIPVerify.Layer,1}) at /home/zyw/.julia/v0.6/MIPVerify/src/net_components.jl:26 (repeats 2 times)
 [8] macro expansion at /home/zyw/.julia/v0.6/MIPVerify/src/MIPVerify.jl:186 [inlined]
 [9] macro expansion at /home/zyw/.julia/v0.6/ProgressMeter/src/ProgressMeter.jl:483 [inlined]
 [10] frac_correct(::MIPVerify.Sequential, ::MIPVerify.LabelledImageDataset{Float64,Int32}, ::Int64) at /home/zyw/.julia/v0.6/MIPVerify/src/MIPVerify.jl:183

@vtjeng
Owner

vtjeng commented Apr 28, 2019

If you're looking for a pooling layer with a 2x2 window, the parameters you want are MaxPool((1, 2, 2, 1)).

(just in case, this is the source code for the pool layer)

If possible, and if you're running into timeouts during verification, I would suggest skipping the max-pooling layer, since it adds a large number of binary variables; instead, you could use a convolution with a stride of 2 (as sketched below).
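
For illustration, here is a sketch in PyTorch of the kind of substitution I mean (layer sizes are hypothetical, and you would need to retrain the network with the new architecture):

import torch.nn as nn

# Instead of a stride-1 convolution followed by 2x2 max pooling ...
with_pool = nn.Sequential(
    nn.Conv2d(3, 6, kernel_size=5, stride=1, padding=2),
    nn.ReLU(),
    nn.MaxPool2d(2, 2),
)

# ... a single stride-2 convolution halves the spatial dimensions directly,
# avoiding the binary variables that max pooling adds to the MIP.
without_pool = nn.Sequential(
    nn.Conv2d(3, 6, kernel_size=5, stride=2, padding=2),
    nn.ReLU(),
)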

@UserAlreadyTaken
Author

UserAlreadyTaken commented Apr 30, 2019

I'm sorry that I have to ask for your help again.
It prompts: ERROR: ArgumentError: Linear() layers work only on one-dimensional input. You likely forgot to add a Flatten() layer before your first linear layer.
I then added Flatten(4) before the fully connected layer, but it then prompts: ERROR: DimensionMismatch("arrays could not be broadcast to a common size")
Here is my network structure:
nn = Sequential([
    conv1,
    ReLU(),
    MaxPool((1,2,2,1)),
    conv2,
    ReLU(),
    MaxPool((1,2,2,1)),
    Flatten(4),
    fc1,
    ReLU(),
    fc2,
    ReLU(),
    fc3], "CIFAR10.N1")
sequential net CIFAR10.N1
(1) Conv2d(3, 6, kernel_size=(5, 5), stride=(1, 1), padding=same)
(2) ReLU()
(3) max pooling with a 2x2 filter and a stride of (2, 2)
(4) Conv2d(6, 16, kernel_size=(5, 5), stride=(1, 1), padding=same)
(5) ReLU()
(6) max pooling with a 2x2 filter and a stride of (2, 2)
(7) Flatten(): flattens 4 dimensional input, with dimensions permuted according to the order [4, 3, 2, 1]
(8) Linear(400 -> 120)
(9) ReLU()
(10) Linear(120 -> 84)
(11) ReLU()
(12) Linear(84 -> 10)
I think it is because when I train the network using PyTorch, it has x = x.view(-1, 16 * 5 * 5), and I don't know how to match the output size and input size in MIPVerify.
What should I do to handle the transition between the convolutional layers and the fully connected layers?

@vtjeng
Owner

vtjeng commented May 1, 2019

No worries --- thank you for asking all these questions.

I believe that Flatten([1, 3, 2, 4]) should work, as in the code below:

Flatten([1, 3, 2, 4]),

[1, 3, 2, 4] specifies the permutation order; if you're not getting sensible results, you may need to experiment with that order.

More generally, I would consider following the code here:
https://github.com/vtjeng/MIPVerify_data/blob/master/weights/mnist/WK17a/convert.py
https://github.com/vtjeng/MIPVerify.jl/blob/master/src/utils/import_example_nets.jl#L39-L52

which shows how we take a neural net from a .pth file to a NeuralNet class that we can work with in MIPVerify.

@UserAlreadyTaken
Author

Flatten([1, 3, 2, 4]) does not work, and I experimented with all 24 permutation orders; each one prompts:

ERROR: DimensionMismatch("matrix A has dimensions (120,400), vector B has length 1024")
Stacktrace:
 [1] generic_matvecmul!(::Array{Float64,1}, ::Char, ::Array{Float32,2}, ::Array{Float64,1}) at ./linalg/matmul.jl:407
 [2] matmul(::Array{Float64,1}, ::MIPVerify.Linear{Float32,Float32}) at /home/zyw/.julia/v0.6/MIPVerify/src/net_components/layers/linear.jl:58
 [3] (::MIPVerify.Linear{Float32,Float32})(::Array{Float64,1}) at /home/zyw/.julia/v0.6/MIPVerify/src/net_components/layers/linear.jl:95
 [4] chain(::Array{Float64,1}, ::Array{MIPVerify.Layer,1}) at /home/zyw/.julia/v0.6/MIPVerify/src/net_components.jl:26 (repeats 8 times)
 [5] macro expansion at /home/zyw/.julia/v0.6/MIPVerify/src/MIPVerify.jl:186 [inlined]
 [6] macro expansion at /home/zyw/.julia/v0.6/ProgressMeter/src/ProgressMeter.jl:483 [inlined]
 [7] frac_correct(::MIPVerify.Sequential, ::MIPVerify.LabelledImageDataset{Float64,Int32}, ::Int64) at /home/zyw/.julia/v0.6/MIPVerify/src/MIPVerify.j
Maybe I should switch to a simpler network that does not contain a max-pooling layer?
But I want to verify a sufficiently big or deep network.
Can you give me some advice?

@vtjeng
Owner

vtjeng commented May 2, 2019 via email

@UserAlreadyTaken
Author

I'm sure that I'm not missing any layer; I used the example provided on the official website.
This is the network structure as shown in MIPVerify:

sequential net CIFAR10.n1
(1) Conv2d(3, 6, kernel_size=(5, 5), stride=(1, 1), padding=same)
(2) ReLU()
(3) max pooling with a 2x2 filter and a stride of (2, 2)
(4) Conv2d(6, 16, kernel_size=(5, 5), stride=(1, 1), padding=same)
(5) ReLU()
(6) max pooling with a 2x2 filter and a stride of (2, 2)
(7) Flatten(): flattens 4 dimensional input, with dimensions permuted according to the order [1, 3, 2, 4]
(8) Linear(400 -> 120)
(9) ReLU()
(10) Linear(120 -> 84)
(11) ReLU()
(12) Linear(84 -> 10)

and in PyTorch it looks like this:

import torch.nn as nn
import torch.nn.functional as F

class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = nn.Conv2d(3, 6, 5)
        self.pool = nn.MaxPool2d(2, 2)
        self.conv2 = nn.Conv2d(6, 16, 5)
        self.fc1 = nn.Linear(16 * 5 * 5, 120)
        self.fc2 = nn.Linear(120, 84)
        self.fc3 = nn.Linear(84, 10)

    def forward(self, x):
        x = self.pool(F.relu(self.conv1(x)))
        x = self.pool(F.relu(self.conv2(x)))
        # fully connected
        x = x.view(-1, 16 * 5 * 5)
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = self.fc3(x)

        return x

in convert.py:

class Flatten(nn.Module):
    def forward(self, x):
        return x.view(x.size(0), -1)

I think the problem may come from the difference between x.view(-1, 400), x.view(x.size(0), -1), and Flatten([1,3,2,4]) in MIPVerify.

@vtjeng
Owner

vtjeng commented May 14, 2019

It looks like your neural network is using "valid" padding for its convolutions, rather than the "same" padding that is the default in the MIPVerify code [a]. "valid" padding is currently unimplemented in my package, but I can get it implemented sometime soon. In the meantime, you can consider using "same" padding --- this should be relatively straightforward to do.

With padding="same" (layer numbers match the printed network structure above):

(0) 32x32x3 [input]
(1) 32x32x6 (padding="same")
(3) 16x16x6
(4) 16x16x16 (padding="same")
(6) 8x8x16
(7) 1024

With padding="valid":

(0) 32x32x3 [input]
(1) 28x28x6 (padding="valid")
(3) 14x14x6
(4) 10x10x16 (padding="valid")
(6) 5x5x16
(7) 400

[a] I'm using the notation for convolution padding from https://keras.io/layers/convolutional/
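
To sanity-check these shapes, here is a small Python helper using the Keras padding definitions (the function is ours, not part of any library):

import math

def conv_output_size(n, k, stride, padding):
    # Spatial output size of a convolution, per the Keras padding definitions.
    if padding == "same":
        return math.ceil(n / stride)
    else:  # "valid"
        return (n - k) // stride + 1

# 32x32 input, 5x5 kernels, stride 1, 2x2 pooling after each convolution:
# "same":  32 -> conv 32 -> pool 16 -> conv 16 -> pool 8, so fc1 sees 8*8*16 = 1024
# "valid": 32 -> conv 28 -> pool 14 -> conv 10 -> pool 5, so fc1 sees 5*5*16 = 400
print(conv_output_size(32, 5, 1, "same"))   # 32
print(conv_output_size(32, 5, 1, "valid"))  # 28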

@UserAlreadyTaken
Author

UserAlreadyTaken commented May 15, 2019

I imported my network successfully, but when I checked that I had imported it correctly, the accuracy is just 0.0917... something went wrong... when testing the network, the accuracy is 0.69.

MIPVerify.frac_correct(nn, cifar10.test, 10000)
Computing fraction correct...100%|██████████████████████| Time: 0:06:23
0.0917

julia> nn = Sequential([
    conv1,
    ReLU(),
    MaxPool((1,2,2,1)),
    conv2,
    ReLU(),
    MaxPool((1,2,2,1)),
    Flatten([1,3,2,4]),
    fc1,
    ReLU(),
    fc2,
    ReLU(),
    fc3
], "CIFAR10.n1")
sequential net CIFAR10.n1
(1) Conv2d(3, 6, kernel_size=(5, 5), stride=(1, 1), padding=same)
(2) ReLU()
(3) max pooling with a 2x2 filter and a stride of (2, 2)
(4) Conv2d(6, 16, kernel_size=(5, 5), stride=(1, 1), padding=same)
(5) ReLU()
(6) max pooling with a 2x2 filter and a stride of (2, 2)
(7) Flatten(): flattens 4 dimensional input, with dimensions permuted according to the order [1, 3, 2, 4]
(8) Linear(1024 -> 120)
(9) ReLU()
(10) Linear(120 -> 84)
(11) ReLU()
(12) Linear(84 -> 10)

Another network I tried has the same problem: the accuracy should be 0.72, but in MIPVerify it is 0.09.

@vtjeng
Owner

vtjeng commented May 16, 2019

It looks like you're getting the accuracy of random guessing. Here's one thing to try:

When working with convolution layers from PyTorch, I have found that transposing the weight tensor is necessary, because PyTorch and Julia use different dimension conventions.

# example transposing tensor
parameters_torch["conv1/weight"] = np.transpose(lpnet_torch[0].weight.data.numpy(), [2, 3, 1, 0])

A full example is below:
https://github.com/vtjeng/MIPVerify_data/blob/master/weights/mnist/WK17a/convert.py#L45-L56

I'm planning to fix this (#14), but in the meantime, can you try transposing when you export the weights?
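
Here is a rough sketch of that export in Python (net, the layer attribute names, and the file names are placeholders --- the linked convert.py is the authoritative example):

import numpy as np
import scipy.io
import torch

net = torch.load("my_network.pth")  # assumes the whole module was saved

params = {
    # conv weights: PyTorch stores (out, in, h, w); MIPVerify expects (h, w, in, out)
    "conv1/weight": np.transpose(net.conv1.weight.data.numpy(), [2, 3, 1, 0]),
    "conv1/bias": net.conv1.bias.data.numpy(),
    # fully connected weights: PyTorch stores (out, in); MIPVerify expects (in, out)
    "fc1/weight": np.transpose(net.fc1.weight.data.numpy()),
    "fc1/bias": net.fc1.bias.data.numpy(),
}

scipy.io.savemat("my_network.mat", params)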

@vtjeng vtjeng changed the title from "How to get .mat file" to "Importing my own neural network for verification" on May 16, 2019
@UserAlreadyTaken
Author

Ah, I have imported my network correctly now. Verification has taken me two days with no result. Is it normal that it takes so long? As a next step I want to verify a residual network --- could you tell me how to import one? Thanks a lot!

@vtjeng
Owner

vtjeng commented May 17, 2019

Do you mean for a single sample? That is unexpectedly long. I'd be happy to dig deeper into your issue if you provide me the logs from the solver.

For resnets, we use the skip unit, mirroring the implementation in Wong et al.

Here is an example of verification of one of their networks, with the weights already extracted in "cifar_resnet_8px.mat":

using MIPVerify
using MAT
using Gurobi

param_dict = matread("cifar_resnet_8px.mat")

n = Normalize([0.485, 0.456, 0.406], [0.225, 0.225, 0.225])
conv1 = get_conv_params(param_dict, "conv1", (3, 3, 3, 16), expected_stride = 1)
conv2 = get_conv_params(param_dict, "conv2", (3, 3, 16, 16), expected_stride = 1)

conv3_1 = get_conv_params(param_dict, "conv3_1", (1, 1, 16, 16), expected_stride = 1)
conv3_2 = get_conv_params(param_dict, "conv3_2", (3, 3, 16, 16), expected_stride = 1)
conv4 = get_conv_params(param_dict, "conv4", (3, 3, 16, 16), expected_stride = 1)

conv5_1 = get_conv_params(param_dict, "conv5_1", (1, 1, 16, 16), expected_stride = 1)
conv5_2 = get_conv_params(param_dict, "conv5_2", (3, 3, 16, 16), expected_stride = 1)
conv6 = get_conv_params(param_dict, "conv6", (4, 4, 16, 32), expected_stride = 2)

conv7_1 = get_conv_params(param_dict, "conv7_1", (2, 2, 16, 32), expected_stride = 2)
conv7_2 = get_conv_params(param_dict, "conv7_2", (3, 3, 32, 32), expected_stride = 1)
conv8 = get_conv_params(param_dict, "conv8", (4, 4, 32, 64), expected_stride = 2)

conv9_1 = get_conv_params(param_dict, "conv9_1", (2, 2, 32, 64), expected_stride = 2)
conv9_2 = get_conv_params(param_dict, "conv9_2", (3, 3, 64, 64), expected_stride = 1)

fc1 = get_matrix_params(param_dict, "fc1", (4096, 1000))
logits = get_matrix_params(param_dict, "logits", (1000, 10))

s3 = SkipBlock([conv3_1, Zero(), conv3_2])
s5 = SkipBlock([conv5_1, Zero(), conv5_2])
s7 = SkipBlock([conv7_1, Zero(), conv7_2])
s9 = SkipBlock([conv9_1, Zero(), conv9_2])

cifar10 = read_datasets("cifar10")

nnparams = SkipSequential([
    n,
    conv1, ReLU(interval_arithmetic),
    conv2, ReLU(),
    s3, ReLU(),
    conv4, ReLU(),
    s5, ReLU(),
    conv6, ReLU(),
    s7, ReLU(),
    conv8, ReLU(),
    s9, ReLU(),
    Flatten([1, 3, 2, 4]),
    fc1, ReLU(),
    logits], "cifar-resnet-8px")

num_samples = 50
println("Fraction correct of first $num_samples is $(frac_correct(nnparams, cifar10.test, num_samples))")
# Expected value over 10,000 samples --- 0.2707
# Corresponding error --- 0.7293
eps=8/255

MIPVerify.setloglevel!("info")

MIPVerify.batch_find_untargeted_attack(
    nnparams, 
    cifar10.test, 
    1:10000, 
    GurobiSolver(Gurobi.Env(), BestObjStop=eps, TimeLimit=1200), 
    pp = MIPVerify.LInfNormBoundedPerturbationFamily(eps),
    norm_order=Inf, 
    rebuild=true, 
    solve_rerun_option = MIPVerify.never,
    tightening_algorithm=lp, 
    tightening_solver = GurobiSolver(Gurobi.Env(), OutputFlag=0, TimeLimit=20),
    cache_model = false,
    solve_if_predicted_in_targeted = false
)

@UserAlreadyTaken
Author

Yep, it's a single sample. There are no more logs; the network is the one I have been asking about all along.
It looks like this:

[notice | MIPVerify]: Attempting to find adversarial example. Neural net predicted label is 4, target labels are [4]
[notice | MIPVerify]: Rebuilding model from scratch. This may take some time as we determine upper and lower bounds for the input to each non-linear unit.
Calculating upper bounds: 0%| | ETA: 0:13:48
Academic license - for non-commercial use only
Calculating upper bounds: 100%|███████████████████████| Time: 0:00:14
Calculating lower bounds: 100%|███████████████████████| Time: 0:00:13
Imposing relu constraint: 100%|███████████████████████| Time: 0:00:05
Calculating upper bounds: 100%|███████████████████████| Time: 0:00:42
...

It has been calculating upper and lower bounds... for two days.

@vtjeng
Owner

vtjeng commented May 21, 2019

Hi @UserAlreadyTaken --- these were the log outputs from the solver I was referring to --- thank you for sharing them.

I am surprised that it is taking so long. What size of perturbations are you trying to verify robustness to?

@UserAlreadyTaken
Author

UserAlreadyTaken commented May 21, 2019

I haven't thought about the size of the perturbations yet. The parameters of MIPVerify are the defaults; I just gave it a try... I didn't expect it to take so long.
The script contains the following:

using MIPVerify
using Gurobi
using MAT

param_dict = Base.download("https://github.com/UserAlreadyTaken/zzz/raw/master/cifar_model5-15_weights.mat") |> matread

conv1 = get_conv_params(param_dict, "conv1", (5, 5, 3, 6))
conv2 = get_conv_params(param_dict, "conv2", (5, 5, 6, 16))
fc1 = get_matrix_params(param_dict, "fc1", (1024, 120))
fc2 = get_matrix_params(param_dict, "fc2", (120, 84))
fc3 = get_matrix_params(param_dict, "fc3", (84, 10))

nn = Sequential([
    conv1,
    ReLU(),
    MaxPool((1,2,2,1)),
    conv2,
    ReLU(),
    MaxPool((1,2,2,1)),
    Flatten([1,3,2,4]),
    fc1,
    ReLU(),
    fc2,
    ReLU(),
    fc3
], "CIFAR10.n1")

cifar10 = read_datasets("CIFAR10")
#MIPVerify.frac_correct(nn, cifar10.test, 10000)
sample_image = MIPVerify.get_image(cifar10.test.images, 1)
MIPVerify.find_adversarial_example(nn, sample_image, 4, GurobiSolver())

@UserAlreadyTaken
Author

And there are more log outputs from the solver, as follows:

[notice | MIPVerify]: The model built will be cached and re-used for future solves, unless you explicitly set rebuild=true.
Academic license - for non-commercial use only
Optimize a model with 77113 rows, 45976 columns and 2320524 nonzeros
Variable types: 25292 continuous, 20684 integer (20684 binary)
Coefficient statistics:
Matrix range [5e-07, 5e+03]
Objective range [1e+00, 1e+00]
Bounds range [1e+00, 5e+03]
RHS range [9e-05, 5e+03]

MIP start did not produce a new incumbent solution
MIP start violates constraint R67888 by 0.380392157

Presolve removed 15608 rows and 5163 columns
Presolve time: 3.85s
Presolved: 61505 rows, 40813 columns, 2234511 nonzeros
Variable types: 20129 continuous, 20684 integer (20684 binary)

Deterministic concurrent LP optimizer: primal simplex, dual simplex, and barrier
Showing barrier log only...

Presolve removed 10443 rows and 10443 columns
Presolved: 51062 rows, 30370 columns, 2203673 nonzeros

Root barrier log...

Ordering time: 4.40s

Barrier statistics:
Dense cols : 1512
AA' NZ : 1.022e+07
Factor NZ : 3.356e+07 (roughly 300 MBytes of memory)
Factor Ops : 5.670e+10 (roughly 2 seconds per iteration)
Threads : 1

              Objective                Residual

Iter Primal Dual Primal Dual Compl Time
0 3.51330048e+06 -3.08266682e+05 6.21e+03 1.18e-01 1.29e+03 18s
1 6.52971410e+05 -3.72236482e+05 7.31e+02 2.74e-13 1.78e+02 23s
2 5.43634997e+03 -2.62197171e+05 4.05e+00 3.28e-13 3.51e+00 27s
3 1.42498026e+03 -5.84413616e+04 1.97e-01 7.74e-14 5.74e-01 32s
4 5.20958024e+02 -1.21856897e+03 5.12e-13 7.42e-15 1.61e-02 37s
5 1.67308012e+01 -4.18503982e+01 3.41e-13 7.49e-16 5.41e-04 41s
6 1.85324506e-01 -9.12694679e-02 3.41e-13 7.08e-16 2.55e-06 46s
7 1.90645873e-04 -5.10701500e-04 2.84e-13 4.44e-16 6.48e-09 50s
8 1.90645751e-07 -5.10701648e-07 1.71e-13 4.44e-16 6.48e-12 54s
9 1.90644722e-10 -5.10733011e-10 1.71e-13 4.44e-16 6.48e-15 58s

Barrier solved model in 9 iterations and 58.33 seconds
Optimal objective 1.90644722e-10

Root crossover log...

   0 DPushes remaining with DInf 0.0000000e+00                59s

21288 PPushes remaining with PInf 0.0000000e+00 59s
630 PPushes remaining with PInf 0.0000000e+00 62s
0 PPushes remaining with PInf 0.0000000e+00 64s

Push phase complete: Pinf 0.0000000e+00, Dinf 0.0000000e+00 64s

Root simplex log...

Iteration Objective Primal Inf. Dual Inf. Time
23606 0.0000000e+00 0.000000e+00 0.000000e+00 65s
23606 0.0000000e+00 0.000000e+00 0.000000e+00 65s
Concurrent spin time: 0.01s

Solved with barrier

Root relaxation: objective 0.000000e+00, 23606 iterations, 60.08 seconds
Total elapsed time = 65.14s

Nodes    |    Current Node    |     Objective Bounds      |     Work

Expl Unexpl | Obj Depth IntInf | Incumbent BestBd Gap | It/Node Time

 0     0    0.00000    0 16047          -    0.00000      -     -  218s

Another try with MIP start

It seems like it's going to keep calculating...

@vtjeng
Owner

vtjeng commented May 22, 2019

Hi --- this call to find_adversarial_example attempts to search the whole input space for an adversarial example. This can be very inefficient, since the bounds on the input to each non-linearity are very loose.

To take full advantage of the approach, I would advocate specifying the range of perturbations you're searching over. For example, if you're searching over an l-infinity ball of radius eps=0.01, you would do the following:

MIPVerify.find_adversarial_example(
    nn, 
    sample_image, 
    4, 
    GurobiSolver(),
    pp=MIPVerify.LInfNormBoundedPerturbationFamily(0.01)
)

It also makes a difference whether you're trying to find the minimal adversarial example or just some adversarial example within a given distance. Let me know what you're trying to do and I can give you a bit more advice.

@UserAlreadyTaken
Author

OK, I will try to specify the range of perturbations. What I'm trying to do is learn how to verify the robustness of neural networks; I am a student studying this area. By the way, could I have some other contact information for you, like WeChat or anything else? It would be more convenient for getting advice from you. Thanks!

@vtjeng
Owner

vtjeng commented Aug 18, 2019

Closing as there has not been any activity on this issue for more than 90 days --- feel free to re-open if there are any additional questions.

@vtjeng vtjeng closed this as completed Aug 18, 2019
@ksilken

ksilken commented Mar 8, 2020

I have questions that seem to fit this thread. My neural network is in an .h5 file, and I want to convert it to .mat using MATLAB. First, do you think this is a good way to do this, and if not, what would you do? Second, I know that for your conversion starting from PyTorch, you saved your network's parameters in a dictionary. Assuming I'm using MATLAB, which should I use: structures or containers? Thank you.

@vtjeng
Owner

vtjeng commented Mar 16, 2020

Hi @ksilken, saw that you had a follow-up question in #36, so perhaps you've already addressed these issues.

Anyway --- if I were doing this personally, I would use Python, but only because that's what I'm most familiar with. (I haven't used MATLAB in a long time.)

If you succeeded at using Matlab, would you share whether you used structures or containers for other people coming across this thread?
