TypeError: 'NoneType' object is not subscriptable #990

Open
zsrabbani opened this issue Apr 5, 2024 · 7 comments

zsrabbani commented Apr 5, 2024

I have a QCNN model (built with the QKeras library). When I set up the configuration below, I get an error.
I have updated to the latest version of hls4ml.

Model:

Layer (type)                                   Output Shape        Param #

rf_input (InputLayer)                          [(None, 1024, 2)]   0
q_conv1d (QConv1D)                             (None, 1024, 64)    640
q_batch_normalization (QBatchNormalization)    (None, 1024, 64)    256
q_activation (QActivation)                     (None, 1024, 64)    0
max_pooling1d (MaxPooling1D)                   (None, 512, 64)     0
q_conv1d_1 (QConv1D)                           (None, 512, 32)     10240
q_batch_normalization_1 (QBatchNormalization)  (None, 512, 32)     128
q_activation_1 (QActivation)                   (None, 512, 32)     0
max_pooling1d_1 (MaxPooling1D)                 (None, 256, 32)     0
q_conv1d_2 (QConv1D)                           (None, 256, 16)     2560
q_batch_normalization_2 (QBatchNormalization)  (None, 256, 16)     64
q_activation_2 (QActivation)                   (None, 256, 16)     0
max_pooling1d_2 (MaxPooling1D)                 (None, 128, 16)     0
flatten (Flatten)                              (None, 2048)        0
q_dense (QDense)                               (None, 128)         262144
dropout (Dropout)                              (None, 128)         0
q_dense_1 (QDense)                             (None, 128)         16384
dropout_1 (Dropout)                            (None, 128)         0
q_dense_2 (QDense)                             (None, 7)           896
activation (Activation)                        (None, 7)           0

=================================================================
Total params: 293312 (1.12 MB)
Trainable params: 293088 (1.12 MB)
Non-trainable params: 224 (896.00 Byte)

Here is my hls4ml setup:
hls_config = hls4ml.utils.config_from_keras_model(model, granularity='name')
hls_config['Model']['ReuseFactor']=16
hls_config['Model']['Strategy']='Resources'

for Layer in hls_config['LayerName'].keys():
    hls_config['LayerName'][Layer]['Strategy'] = 'Resources'
    hls_config['LayerName'][Layer]['ReuseFactor'] = 16

hls_config['LayerName']['softmax']['exp_table_t'] = 'ap_fixed<16,6>'
hls_config['LayerName']['softmax']['inv_table_t'] = 'ap_fixed<16,6>'
hls_config['LayerName']['output_softmax']['Strategy'] = 'Stable'

cfg = hls4ml.converters.create_config(backend='Vivado')
cfg['IOType'] = 'io_stream'
cfg['HLSConfig'] = hls_config
cfg['KerasModel'] = model
cfg['OutputDir'] = 'CNN_16_6'
hls_model = hls4ml.converters.convert_from_keras_model(
    model, hls_config=hls_config, output_dir='CNN_16_6', backend='VivadoAccelerator', board='zcu102')

hls_model.compile()
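
(A minimal sanity-check sketch, not part of the original report: the per-layer overrides above reference names such as 'softmax' and 'output_softmax', so it can help to print the layer names that config_from_keras_model actually generated before overriding them.)

for name, layer_cfg in hls_config['LayerName'].items():
    # Print each generated layer name and its current Strategy, if one is already set
    print(name, '->', layer_cfg.get('Strategy', '<default>'))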

The result is the error in the title: TypeError: 'NoneType' object is not subscriptable.

zsrabbani added the bug label Apr 5, 2024

jmitrevs commented Apr 9, 2024

Is it possible to put the model somewhere, or to paste a script that creates a simple untrained model with the same issue?


zsrabbani commented Apr 10, 2024

# Imports needed to run this snippet
from tensorflow import keras
from tensorflow.keras.layers import Input, MaxPooling1D, Flatten, Dropout, Activation
from qkeras import QConv1D, QBatchNormalization, QActivation, QDense

rf_in = Input(shape=(1024, 2), name='rf_input')

x = QConv1D(64, 5, kernel_quantizer="quantized_bits(16,6)", padding='same', use_bias=False)(rf_in)
x = QBatchNormalization()(x)
x = QActivation("quantized_relu(16,6)")(x)
x = MaxPooling1D(2, strides=2, padding='same')(x)

x = QConv1D(32, 5, kernel_quantizer="quantized_bits(16,6)", padding='same', use_bias=False)(x)
x = QBatchNormalization()(x)
x = QActivation("quantized_relu(16,6)")(x)
x = MaxPooling1D(2, strides=2, padding='same')(x)

x = QConv1D(16, 5, kernel_quantizer="quantized_bits(16,6)", padding='same', use_bias=False)(x)
x = QBatchNormalization()(x)
x = QActivation("quantized_relu(16,6)")(x)
x = MaxPooling1D(2, strides=2, padding='same')(x)

x = Flatten()(x)

dense_1 = QDense(128, activation="quantized_relu(16,6)", use_bias=False)(x)
dropout_1 = Dropout(0.25)(dense_1)
dense_2 = QDense(128, activation="quantized_relu(16,6)", use_bias=False)(dropout_1)
dropout_2 = Dropout(0.5)(dense_2)
softmax = QDense(7, kernel_quantizer="quantized_bits(16,6)", use_bias=False)(dropout_2)
softmax = Activation('softmax')(softmax)

opt = keras.optimizers.Adam(learning_rate=0.0001)
model = keras.Model(rf_in, softmax)
model.compile(loss='categorical_crossentropy', optimizer=opt, metrics=["accuracy"])
model.summary()


jmitrevs commented Apr 12, 2024

I think I understand the problem. The hls4ml software assumes that QDense will always have kernel_quantizer defined, but that is not the case here. I will add a check for it, but in the meantime, here is a workaround. Replace:

dense_1 = QDense(128, activation="quantized_relu(16,6)", use_bias=False)(x)

by

dense_1_noact = Dense(128, use_bias=False)(x)
dense_1 = QActivation(activation="quantized_relu(16,6)")(dense_1_noact)
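
A quick way to see which QKeras layers are affected (a minimal diagnostic sketch, not from the original comment; it uses getattr so it does not rely on a particular QKeras version exposing the attribute):

# Hedged diagnostic: report weighted QKeras layers whose kernel quantizer is unset,
# i.e. the case the converter does not currently guard against.
for layer in model.layers:
    if layer.__class__.__name__.startswith('Q') and hasattr(layer, 'kernel'):
        if getattr(layer, 'kernel_quantizer', None) is None:
            print(f'{layer.name}: kernel_quantizer is not set')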


zsrabbani (Author) replied:

> I think I understand the problem. The hls4ml software assumes that QDense will always have kernel_quantizer defined, but that is not the case here. I will add a check for it, but in the meantime, here is a workaround. Replace:
>
> dense_1 = QDense(128, activation="quantized_relu(16,6)", use_bias=False)(x)
>
> by
>
> dense_1_noact = Dense(128, use_bias=False)(x)
> dense_1 = QActivation(activation="quantized_relu(16,6)")(dense_1_noact)

I added those two lines to my code, but I still get the same error; nothing changed.

dense_1 = Dense(128, use_bias=False)(x)
dense_1 = QActivation("quantized_relu(16,6)")(dense_1)
dropout_1 = Dropout(0.25)(dense_1)
dense_2 = Dense(128, use_bias=False)(dropout_1)
dense_2 = QActivation("quantized_relu(16,6)")(dense_2)
dropout_2 = Dropout(0.5)(dense_2)
softmax = Dense(7, use_bias=False)(dropout_2)
softmax = QActivation("quantized_relu(16,6)")(softmax)
output = Activation('softmax')(softmax)

zsrabbani (Author) commented:

I even reinstalled the hls4ml library, and I still get the same error.

johanjino commented:

+1 Same issue when using QKeras layers (QLSTM)
