This is a simple implementation of the CROWN and alpha-CROWN algorithms, designed as a simplified version of `auto_LiRPA` and alpha-beta-CROWN. Only fully connected sequential ReLU networks are considered here.
To check the correctness of the bounds computed by this implementation, you need to install `auto_LiRPA` first. The results obtained by this implementation (labeled as `CROWN` and `alpha-CROWN`) should be the same as the results obtained by the library (labeled as `auto_LiRPA: backward (CROWN)` and `auto_LiRPA: CROWN-Optimized`).
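For reference, computing the comparison bounds with `auto_LiRPA` typically looks like the sketch below; `model`, `x`, and `eps` are placeholders for the network, the input batch, and the L_inf radius, and are not names from this repository.

```python
import torch
from auto_LiRPA import BoundedModule, BoundedTensor
from auto_LiRPA.perturbations import PerturbationLpNorm

# Wrap the PyTorch model and the perturbed input.
lirpa_model = BoundedModule(model, torch.empty_like(x))
ptb = PerturbationLpNorm(norm=float("inf"), eps=eps)
bounded_x = BoundedTensor(x, ptb)

# CROWN (backward) bounds and alpha-CROWN (optimized) bounds.
lb, ub = lirpa_model.compute_bounds(x=(bounded_x,), method="backward")
lb_opt, ub_opt = lirpa_model.compute_bounds(x=(bounded_x,), method="CROWN-Optimized")
```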
- A simple implementation of CROWN and alpha-CROWN to compute bounds of fully connected sequential ReLU networks. It also uses the library `auto_LiRPA` to compute the bounds of the same models for comparison.
- The definition of the class `BoundLinear`, including its interval propagation and backward propagation.
- The definition of the class `BoundReLU`, including its interval propagation and backward propagation (with or without optimization).
- The definition of a PyTorch model. All the layers in the model are gathered in an `nn.Sequential` object.
- A pretrained PyTorch model for debugging. Its structure is (2, 50, 100, 2).
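As a minimal illustration (the variable name `model` is a placeholder, not necessarily the one used in this repository), a fully connected ReLU network with structure (2, 50, 100, 2) can be defined as:

```python
import torch.nn as nn

# 2 inputs, hidden layers of 50 and 100 ReLU neurons, 2 outputs.
model = nn.Sequential(
    nn.Linear(2, 50),
    nn.ReLU(),
    nn.Linear(50, 100),
    nn.ReLU(),
    nn.Linear(100, 2),
)
```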
`python crown.py`
- Given an input `x` and an L_inf perturbation, we have an upper bound and a lower bound on the input, denoted as `x_U` and `x_L` respectively. Our goal is to compute a lower (or upper) bound on the output of the network. To do so, we construct a linear function that lower (or upper) bounds the original non-linear network. (See Lemma 2.2 of Wang, S., Zhang, H., Xu, K., Lin, X., Jana, S., Hsieh, C. J., & Kolter, J. Z. (2021). Beta-CROWN: Efficient bound propagation with per-neuron split constraints for neural network robustness verification. Advances in Neural Information Processing Systems, 34, 29909-29921.)
- To compute a bounded relaxation of the original network, we first need to bound each non-linear layer. Here, we use two linear constraints to relax each unstable ReLU neuron: a linear upper bound and a linear lower bound. (See Lemma 2.1; a sketch of these relaxation coefficients is given after this list.)
- With the upper and lower bounds of each ReLU layer, we use backward propagation to compute the upper and lower bounds of the whole network (see the backward-propagation sketch after this list).
- Notice that the ReLU relaxation requires the bounds of its input, so we need to compute bounds for the intermediate layers, too. Here, we apply backward propagation starting from every intermediate linear layer to obtain the intermediate bounds.
- We wrap every linear layer and ReLU layer into `BoundLinear` and `BoundReLU` objects. The backward propagation through a single layer is implemented in these classes as `bound_backward()`. The backward propagation for linear layers is straightforward; the details of the backward propagation for ReLU layers can be seen in the proof of Lemma 2.1.
- The sequential ReLU model is converted to a `BoundSequential` object, where each layer is converted to a corresponding `BoundLinear` or `BoundReLU` object.
- `backward_range()` of `BoundSequential` computes the output bounds of a given linear layer (either the final layer or a layer that is followed by a ReLU layer) with a backward propagation starting from that layer.
- `full_backward_range()` of `BoundSequential` iterates through the whole sequential model layer by layer. For each linear layer, it bounds the output by calling `backward_range()` with that layer assigned as the `start_node`. In the end, it bounds the output of the final layer and thus provides the bounds for the whole model.
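The per-neuron ReLU relaxation mentioned above can be sketched as follows. For an unstable neuron with pre-activation bounds l < 0 < u, the upper bound is the chord u/(u - l) * (x - l) and the lower bound is alpha * x with a slope alpha in [0, 1]. This is a minimal sketch of those standard CROWN formulas; the function and variable names are illustrative and not taken from this repository.

```python
import torch

def relu_relaxation(lb, ub):
    """Per-neuron linear relaxation of ReLU(x) for pre-activation x in [lb, ub].

    Returns (lower_d, upper_d, upper_b) such that, elementwise,
        lower_d * x <= ReLU(x) <= upper_d * x + upper_b.
    """
    lower_d = torch.zeros_like(lb)
    upper_d = torch.zeros_like(lb)
    upper_b = torch.zeros_like(lb)

    # Always-active neurons (lb >= 0): ReLU is the identity.
    active = lb >= 0
    lower_d[active] = 1.0
    upper_d[active] = 1.0

    # Unstable neurons (lb < 0 < ub): the upper bound is the chord through
    # (lb, 0) and (ub, ub); the lower bound is alpha * x with alpha in [0, 1].
    unstable = (lb < 0) & (ub > 0)
    slope = ub[unstable] / (ub[unstable] - lb[unstable])
    upper_d[unstable] = slope
    upper_b[unstable] = -slope * lb[unstable]
    # A common heuristic initialization of alpha (alpha-CROWN optimizes it later).
    lower_d[unstable] = (slope >= 0.5).float()

    # Always-inactive neurons (ub <= 0) keep all-zero coefficients.
    return lower_d, upper_d, upper_b
```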
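And here is a rough sketch of how a single backward propagation (as performed by `backward_range()`) composes these relaxations with the linear layers and then concretizes over the input box [`x_L`, `x_U`]. It is a simplified, unbatched illustration assuming alternating linear and ReLU layers; all names are hypothetical, and the real implementation organizes these steps inside `bound_backward()` of each layer class.

```python
import torch

def backward_bounds(layers, x_L, x_U):
    """One CROWN-style backward pass over a list that mixes nn.Linear modules
    and (lower_d, upper_d, upper_b) ReLU-relaxation tuples, ending with the
    linear layer whose output we want to bound."""
    out_dim = layers[-1].out_features
    # Coefficients of the bounding linear functions, starting from the identity.
    A_l, A_u = torch.eye(out_dim), torch.eye(out_dim)
    b_l, b_u = torch.zeros(out_dim), torch.zeros(out_dim)

    for layer in reversed(layers):
        if isinstance(layer, torch.nn.Linear):
            # Compose with y = W z + b:  A y + c = (A W) z + (A b + c).
            b_l = b_l + A_l @ layer.bias
            b_u = b_u + A_u @ layer.bias
            A_l = A_l @ layer.weight
            A_u = A_u @ layer.weight
        else:
            # ReLU relaxation: lower_d * y <= ReLU(y) <= upper_d * y + upper_b.
            lower_d, upper_d, upper_b = layer
            pos_l, neg_l = A_l.clamp(min=0), A_l.clamp(max=0)
            pos_u, neg_u = A_u.clamp(min=0), A_u.clamp(max=0)
            # Lower bound: positive coefficients pick the lower relaxation,
            # negative coefficients pick the upper relaxation (and its bias).
            b_l = b_l + neg_l @ upper_b
            A_l = pos_l * lower_d + neg_l * upper_d
            # Upper bound: the roles are swapped.
            b_u = b_u + pos_u @ upper_b
            A_u = pos_u * upper_d + neg_u * lower_d

    # Concretize A_l x + b_l and A_u x + b_u over x in [x_L, x_U].
    lb = A_l.clamp(min=0) @ x_L + A_l.clamp(max=0) @ x_U + b_l
    ub = A_u.clamp(min=0) @ x_U + A_u.clamp(max=0) @ x_L + b_u
    return lb, ub
```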
When using a linear relaxation on each ReLU layer, the slope of the lower bound can be chosen independently for every single neuron. Instead of presetting their values, we can treat these slopes as variables `alpha` and optimize them based on the tightness of the final bounds.
- `_get_optimized_bounds()` of `BoundSequential` computes the optimized bounds of the model with alpha-CROWN.
- The first step of the algorithm is to initialize alpha with `init_alpha()`. There, a full CROWN pass is run to obtain intermediate bounds and initial alpha values. Alphas are saved as properties of the `BoundReLU` objects. Since we use independent alphas for computing the bound of each intermediate or final neuron, the length of the `alpha` dictionary in a `BoundReLU` object equals the number of linear layers from which we do backward propagations.
- `_set_alpha()` gathers the `alpha`s from every `BoundReLU` layer and constructs a list `parameters` to be optimized later.
- In each iteration, we use the current values of `alpha` to compute the upper bound (or the lower bound) with CROWN. The loss is the sum of `ub` or the negative of the sum of `lb`. We use the Adam optimizer to optimize alpha (a sketch of this loop is shown below).
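A minimal sketch of such an optimization loop, here for the lower bound only, is shown below. The names are placeholders: `alphas` stands for the slope tensors gathered from the `BoundReLU` layers, and `compute_crown_lb` stands for one CROWN backward pass that uses the current slopes.

```python
import torch

def optimize_alpha(alphas, compute_crown_lb, iters=20, lr=0.1):
    """Tighten the CROWN lower bounds by optimizing the ReLU lower-bound slopes."""
    for a in alphas:
        a.requires_grad_(True)
    opt = torch.optim.Adam(alphas, lr=lr)
    for _ in range(iters):
        lb = compute_crown_lb()      # lower bounds under the current alphas
        loss = -lb.sum()             # maximizing the lower bounds tightens them
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():
            for a in alphas:         # keep each slope a valid ReLU lower bound
                a.clamp_(0.0, 1.0)
    return [a.detach() for a in alphas]
```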