Reproduce the refinezono result #77
Hello @jiahaubai, Thank you for your interest in ERAN. First, I want to recommend comparing to our much more recent results using PRIMA, which is not only faster than RefineZono but also much more accurate (see Figure 10 in the paper linked above) and is the current state of the art on the 6x100 network (called 5x100 in that paper due to its number of hidden layers). In any case, your command is indeed missing the neuron-wise bound refinement described in the paper (which also explains your much lower average runtime of just 4.6s instead of 194s). To activate this refinement, please run ERAN as follows:
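(A minimal sketch of such a command, using the flag names discussed below; the MILP layer count and timeout values here are illustrative placeholder assumptions, not the exact settings from the paper:)

# Sketch only: --refine_neurons and --n_milp_refine are the flags explained
# below; the values chosen for --n_milp_refine, --timeout_lp, and
# --timeout_milp are illustrative assumptions, not the paper's settings.
python3 . --netname ../net/mnist_relu_6_100.tf --epsilon 0.02 --domain refinezono --dataset mnist \
    --refine_neurons --n_milp_refine 2 --timeout_lp 10 --timeout_milp 10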
Here "refine_neurons" generally activates the LP refinement of neuron-wise bounds and "n_milp_refine" determines the number of layers for which to use MILP instead of LP. For the above settings and with my machine I get the following result:
Cheers,
Hi @jiahaubai, Your result does indeed suggest that you will have to increase the timeouts a bit to compensate for differences in hardware in order to reproduce the results we reported. I looked into Q2, and that should be resolved if you pull an updated version of ELINA and recompile. Regarding Q1: I would further recommend switching the order of the MaxPool and ReLU layers, as that will decrease the number of error terms, since the ReLU is then applied to fewer neurons. Under certain conditions, this can also increase analysis precision. Cheers,
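(To illustrate the suggested reordering, a minimal PyTorch sketch follows; the layer sizes are arbitrary assumptions, not taken from the model in question. Because ReLU is monotone non-decreasing, relu(max(x)) equals max(relu(x)), so swapping the two layers preserves the network's function while the ReLU, and hence the error-term introduction during zonotope analysis, applies to the smaller post-pooling activation map:)

import torch
import torch.nn as nn

# Shared convolution so both orderings compute over identical pre-activations.
# The sizes (1 -> 16 channels, 3x3 kernel, 2x2 pool) are arbitrary examples.
conv = nn.Conv2d(1, 16, kernel_size=3)

# Original ordering: ReLU is applied to the full pre-pool activation map,
# so the analysis introduces one error term per pre-pool neuron.
relu_then_pool = nn.Sequential(conv, nn.ReLU(), nn.MaxPool2d(2))

# Suggested ordering: pooling first shrinks the activation map by 4x,
# so the ReLU (and its error terms) applies to far fewer neurons.
pool_then_relu = nn.Sequential(conv, nn.MaxPool2d(2), nn.ReLU())

# Since ReLU is monotone, relu(max(x)) == max(relu(x)): both orderings
# compute the same function on concrete inputs.
x = torch.randn(1, 1, 28, 28)
assert torch.equal(relu_then_pool(x), pool_then_relu(x))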
Hi Mark, Thank you for your suggestion on Q2 and your kind instruction on Q1! In addition, I want to try using refinepoly.
I hope refinepoly can be used correctly on my ONNX model. Thank you very much to your team for providing such a great tool to help me verify different models; it is very flexible for the user. I really appreciate it! Thanks,
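(For reference, a sketch of the corresponding invocation; the model path is a hypothetical placeholder, assuming the ONNX network is passed via --netname just like the .tf networks above:)

# Sketch only: "my_model.onnx" is a hypothetical placeholder path.
python3 . --netname ../net/my_model.onnx --epsilon 0.02 --domain refinepoly --dataset mnist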
Hello @jiahaubai, Both Cheers,
Hi,
I want to reproduce the refinezono result from the paper
Boosting Robustness Certification of Neural Networks
I want to reproduce the result in the yellow block,
and I use the command
python3 . --netname ../net/mnist_relu_6_100.tf --epsilon 0.02 --domain refinezono --dataset mnist
After executing, the result is:
But the paper reports 67% on the 6x100 model.
Is there a problem with my command?
Or any ideas on how to fix it? Thanks!