
IndexError: list index out of range #4

Closed
sauravtii opened this issue Sep 4, 2022 · 10 comments
sauravtii commented Sep 4, 2022

I have trained and saved a model and want to convert it to an SNN using SpKeras, but when I run the conversion code I get the following error:

---------------------------------------------------------------------------
IndexError                                Traceback (most recent call last)
Input In [10], in <cell line: 7>()
      2 from spkeras.spkeras.models import cnn_to_snn
      4 #Current normalisation using cnn_to_snn
      5 ##Default: signed_bit=0, amp_factor=100, method=1, epsilon = 0.001
----> 7 snn_model = cnn_to_snn(signed_bit=0)(cnn_model,x_train)

File ~/Desktop/Toy network/spkeras/spkeras/models.py:29, in cnn_to_snn.__call__(self, mdl, x_train)
     27 self.use_bias = use_bias        
     28 self.get_config()
---> 29 self.model = self.convert(mdl,x_train,                    
     30                           thresholding = self.thresholding,
     31                           scaling_factor = self.scaling_factor,
     32                           method = self.method,
     33                           timesteps=self.timesteps)
     35 return self

File ~/Desktop/Toy network/spkeras/spkeras/models.py:101, in cnn_to_snn.convert(self, mdl, x_train, thresholding, scaling_factor, method, timesteps)
     98     _weights[0] = _weights[0].astype(int)   
     99     _weights[0] = _weights[0]/2**bit
--> 101 _bias = kappa*_weights[1]/lmax[num+1]
    102 _bias = _bias/norm
    103 bias.append(_bias.tolist())    

IndexError: list index out of range

My model:

Model: "model"
_________________________________________________________________
 Layer (type)                Output Shape              Param #   
=================================================================
 input_1 (InputLayer)        [(None, 32, 32, 3)]       0         
                                                                 
 conv2d (Conv2D)             (None, 32, 32, 4)         16        
                                                                 
 conv2d_1 (Conv2D)           (None, 16, 16, 64)        2368      
                                                                 
 conv2d_2 (Conv2D)           (None, 16, 16, 72)        41544     
                                                                 
 conv2d_3 (Conv2D)           (None, 8, 8, 256)         166144    
                                                                 
 conv2d_4 (Conv2D)           (None, 8, 8, 256)         65792     
                                                                 
 conv2d_5 (Conv2D)           (None, 8, 8, 64)          16448     
                                                                 
 flatten (Flatten)           (None, 4096)              0         
                                                                 
 dropout (Dropout)           (None, 4096)              0         
                                                                 
 dense (Dense)               (None, 100)               409700    
                                                                 
 dense_1 (Dense)             (None, 10)                1010      
                                                                 
=================================================================
Total params: 703,022
Trainable params: 703,022
Non-trainable params: 0

Any solutions?

I also tried solving it by referring to this, but I wasn't able to.

@Dengyu-Wu
Owner
Try adding an Activation layer after each Conv2D and Dense layer, for example:

inputs = Input((32, 32, 3))
x = Conv2D(4, (1, 1))(inputs)
x = Activation("relu")(x)

The problem is that SpKeras counts Activation layers during conversion.
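To illustrate the difference (a minimal sketch, not code from SpKeras itself): calling `tf.keras.activations.relu` inside the functional API produces a `TFOpLambda` layer, while `Activation("relu")` produces a distinct `Activation` layer that a converter scanning layer types can detect.

```python
import tensorflow as tf
from tensorflow.keras import Input, Model
from tensorflow.keras.layers import Conv2D, Activation

inputs = Input((32, 32, 3))
x = Conv2D(4, (1, 1))(inputs)   # conv block
x = Activation("relu")(x)       # explicit Activation layer
model = Model(inputs, x)

# The Activation layer shows up as its own entry in model.layers,
# so a converter that walks the layer list can pair it with the conv.
names = [type(layer).__name__ for layer in model.layers]
print(names)  # e.g. ['InputLayer', 'Conv2D', 'Activation']
```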

sauravtii commented Sep 6, 2022

I did that, but I'm still facing the same issue.

My code:

import tensorflow as tf
from tensorflow.keras import Input, Model
from tensorflow.keras.layers import Conv2D, Dense, Dropout, Flatten

input_shape = (32, 32, 3)
input_layer = Input(input_shape)

layer = Conv2D(filters=4,
               kernel_size=(1, 1),
               strides=(1, 1),
               padding="same", use_bias=False)(input_layer)

layer = tf.keras.activations.relu(layer, alpha=0.0, max_value=None, threshold=0.0)


layer = Conv2D(filters=64,
               kernel_size=(3, 3),
               strides=(2, 2),
               padding="same", use_bias=False)(layer)


layer = Conv2D(filters=72,
               kernel_size=(3, 3),
               strides=(1, 1),
               padding="same", use_bias=False)(layer)

layer = tf.keras.activations.relu(layer, alpha=0.0, max_value=None, threshold=0.0)


layer = Conv2D(filters=256,
               kernel_size=(3, 3),
               strides=(2, 2),
               padding="same", use_bias=False)(layer)


layer = Conv2D(filters=256,
               kernel_size=(1, 1),
               strides=(1, 1),
               padding="same", use_bias=False)(layer)

layer = tf.keras.activations.relu(layer, alpha=0.0, max_value=None, threshold=0.0)


layer = Conv2D(filters=64,
               kernel_size=(1, 1),
               strides=(1, 1),
               padding="same", use_bias=False)(layer)


layer = Flatten()(layer)

layer = Dropout(0.5)(layer)

layer = Dense(units=100)(layer)

layer = tf.keras.activations.relu(layer, alpha=0.0, max_value=None, threshold=0.0)

layer = Dense(units=10)(layer)

layer = tf.keras.activations.softmax(layer, axis=-1)

model = Model(input_layer, layer)

My model summary:

Model: "model"
_________________________________________________________________
 Layer (type)                Output Shape              Param #   
=================================================================
 input_1 (InputLayer)        [(None, 32, 32, 3)]       0         
                                                                 
 conv2d (Conv2D)             (None, 32, 32, 4)         12        
                                                                 
 tf.nn.relu (TFOpLambda)     (None, 32, 32, 4)         0         
                                                                 
 conv2d_1 (Conv2D)           (None, 16, 16, 64)        2304      
                                                                 
 conv2d_2 (Conv2D)           (None, 16, 16, 72)        41472     
                                                                 
 tf.nn.relu_1 (TFOpLambda)   (None, 16, 16, 72)        0         
                                                                 
 conv2d_3 (Conv2D)           (None, 8, 8, 256)         165888    
                                                                 
 conv2d_4 (Conv2D)           (None, 8, 8, 256)         65536     
                                                                 
 tf.nn.relu_2 (TFOpLambda)   (None, 8, 8, 256)         0         
                                                                 
 conv2d_5 (Conv2D)           (None, 8, 8, 64)          16384     
                                                                 
 flatten (Flatten)           (None, 4096)              0         
                                                                 
 dropout (Dropout)           (None, 4096)              0         
                                                                 
 dense (Dense)               (None, 100)               409700    
                                                                 
 tf.nn.relu_3 (TFOpLambda)   (None, 100)               0         
                                                                 
 dense_1 (Dense)             (None, 10)                1010      
                                                                 
 tf.nn.softmax (TFOpLambda)  (None, 10)                0         
                                                                 
=================================================================
Total params: 702,306
Trainable params: 702,306
Non-trainable params: 0
_________________________________________________________________

I also tried adding activation after each Conv2D layer, but faced the same issue.

My code:

import tensorflow as tf
from tensorflow.keras import Input, Model
from tensorflow.keras.layers import Conv2D, Dense, Dropout, Flatten

input_shape = (32, 32, 3)
input_layer = Input(input_shape)

layer = Conv2D(filters=4,
               kernel_size=(1, 1),
               strides=(1, 1),
               padding="same", use_bias=False)(input_layer)

layer = tf.keras.activations.relu(layer, alpha=0.0, max_value=None, threshold=0.0)


layer = Conv2D(filters=64,
               kernel_size=(3, 3),
               strides=(2, 2),
               padding="same", use_bias=False)(layer)

layer = tf.keras.activations.relu(layer, alpha=0.0, max_value=None, threshold=0.0)


layer = Conv2D(filters=72,
               kernel_size=(3, 3),
               strides=(1, 1),
               padding="same", use_bias=False)(layer)

layer = tf.keras.activations.relu(layer, alpha=0.0, max_value=None, threshold=0.0)


layer = Conv2D(filters=256,
               kernel_size=(3, 3),
               strides=(2, 2),
               padding="same", use_bias=False)(layer)

layer = tf.keras.activations.relu(layer, alpha=0.0, max_value=None, threshold=0.0)

layer = Conv2D(filters=256,
               kernel_size=(1, 1),
               strides=(1, 1),
               padding="same", use_bias=False)(layer)

layer = tf.keras.activations.relu(layer, alpha=0.0, max_value=None, threshold=0.0)


layer = Conv2D(filters=64,
               kernel_size=(1, 1),
               strides=(1, 1),
               padding="same", use_bias=False)(layer)

layer = tf.keras.activations.relu(layer, alpha=0.0, max_value=None, threshold=0.0)

layer = Flatten()(layer)

layer = Dropout(0.5)(layer)

layer = Dense(units=100)(layer)

layer = tf.keras.activations.relu(layer, alpha=0.0, max_value=None, threshold=0.0)

layer = Dense(units=10)(layer)

layer = tf.keras.activations.softmax(layer, axis=-1)

model = Model(input_layer, layer)

@Dengyu-Wu
Owner
Can you change

layer = tf.keras.activations.relu(layer, alpha=0.0, max_value=None, threshold=0.0)

into

layer = Activation('relu')(layer)

and try again?

sauravtii commented Sep 6, 2022

It worked, thanks! But after evaluating the converted model, the accuracy is not good.

Can you help with this?

Changing model timesteps...
New model generated!
{'timesteps': 256, 'thresholding': 0.5, 'amp_factor': 100, 'signed_bit': 0, 'spike_ext': 0, 'epsilon': 0.001, 'use_bias': False, 'scaling_factor': 1, 'noneloss': False, 'method': 1}
313/313 [==============================] - 3s 9ms/step - loss: 163.4400 - accuracy: 0.1000

I am training for 25 epochs and getting around 83% training accuracy.

@Dengyu-Wu
Owner
Have you changed

layer = tf.keras.activations.softmax(layer, axis=-1)

into

layer = Activation('softmax')(layer)

as well?

@sauravtii
Author
Yes, below is my code for the architecture:

from tensorflow.keras import Input, Model
from tensorflow.keras.layers import Activation, Conv2D, Dense, Dropout, Flatten

input_shape = (32, 32, 3)
input_layer = Input(input_shape)

layer = Conv2D(filters=4,
               kernel_size=(1, 1),
               strides=(1, 1),
               padding="same", use_bias=False)(input_layer)

layer = Activation('relu')(layer)

layer = Conv2D(filters=64,
               kernel_size=(3, 3),
               strides=(2, 2),
               padding="same", use_bias=False)(layer)

layer = Activation('relu')(layer)

layer = Conv2D(filters=72,
               kernel_size=(3, 3),
               strides=(1, 1),
               padding="same", use_bias=False)(layer)

layer = Activation('relu')(layer)

layer = Conv2D(filters=256,
               kernel_size=(3, 3),
               strides=(2, 2),
               padding="same", use_bias=False)(layer)

layer = Activation('relu')(layer)

layer = Conv2D(filters=256,
               kernel_size=(1, 1),
               strides=(1, 1),
               padding="same", use_bias=False)(layer)

layer = Activation('relu')(layer)

layer = Conv2D(filters=64,
               kernel_size=(1, 1),
               strides=(1, 1),
               padding="same", use_bias=False)(layer)

layer = Activation('relu')(layer)

layer = Flatten()(layer)

layer = Dropout(0.05)(layer)

layer = Dense(units=100)(layer)

layer = Activation('relu')(layer)

layer = Dense(units=10)(layer)

layer = Activation('softmax')(layer)

model = Model(input_layer, layer)

@Dengyu-Wu
Owner
Did you normalise your input into [0,1]?
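For reference, a typical way to scale image inputs into [0, 1] (a sketch assuming uint8 image data such as CIFAR-10; the random `x_train` below is a stand-in for the real training array):

```python
import numpy as np

# Stand-in for real image data, e.g. loaded via
# tf.keras.datasets.cifar10.load_data() (uint8, values 0-255).
x_train = np.random.randint(0, 256, size=(8, 32, 32, 3), dtype=np.uint8)

# Cast to float and divide by the maximum pixel value.
x_train = x_train.astype("float32") / 255.0

print(x_train.min() >= 0.0, x_train.max() <= 1.0)
```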

@sauravtii
Author
Hi, I had tried to normalize the input into [0, 1], but for some reason it wasn't actually normalized. After normalizing it properly I am getting better accuracy (65.47%) than before, but it is still not great. Do you have any suggestions for improving my SNN accuracy?

Dengyu-Wu commented Sep 7, 2022

You can refer to our paper: adding a BatchNormalization layer after each convolutional and dense layer, together with the clipping methods described there, improves conversion efficiency.
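As a minimal sketch of that suggestion (the helper name `conv_bn_relu` is illustrative, not from SpKeras): each convolution is followed by BatchNormalization and an explicit Activation layer, keeping the pattern the converter expects.

```python
import tensorflow as tf
from tensorflow.keras import Input, Model
from tensorflow.keras.layers import (Conv2D, BatchNormalization,
                                     Activation, Flatten, Dense)

def conv_bn_relu(x, filters, kernel_size, strides=(1, 1)):
    """Illustrative helper: Conv2D -> BatchNormalization -> Activation."""
    x = Conv2D(filters, kernel_size, strides=strides,
               padding="same", use_bias=False)(x)
    x = BatchNormalization()(x)
    x = Activation("relu")(x)
    return x

inputs = Input((32, 32, 3))
x = conv_bn_relu(inputs, 4, (1, 1))
x = conv_bn_relu(x, 64, (3, 3), strides=(2, 2))
x = Flatten()(x)
x = Dense(10)(x)
outputs = Activation("softmax")(x)
model = Model(inputs, outputs)
```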

@sauravtii
Author
Okay, I will read that. Thank you for the suggestion and your help!
