
About QBatchNormalization is not support QKeras po2 quantizer #948

Open
MrFaith2001 opened this issue Dec 23, 2023 · 1 comment

MrFaith2001 commented Dec 23, 2023

Prerequisites

Please make sure to check off these prerequisites before submitting a bug report.

  • [Y] Test that the bug appears on the current version of the master branch. Make sure to include the commit hash of the commit you checked out.
  • [Y] Check that the issue hasn't already been reported, by checking the currently open issues.
  • [Y] If there are steps to reproduce the problem, make sure to write them down below.
  • [Y] If relevant, please include the hls4ml project files, which were created directly before and/or after the bug.

Quick summary

When I use hls4ml to convert a QBatchNormalization layer, it reports that the po2 quantizer is not supported.

Details

My QBatchNormalization layer uses po2 quantizers, but hls4ml reports that it cannot support them.
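For context, a power-of-two (po2) quantizer constrains each parameter to ±2^e, so that multiplications reduce to bit shifts in hardware. A minimal sketch of the idea (this is not QKeras's actual implementation; the bit-allocation and clipping details below are assumptions):

```python
import math

def po2_quantize(x, bits=8, keep_negative=True):
    """Quantize x to the nearest signed power of two:
    sign(x) * 2**round(log2(|x|)), with the exponent clipped to
    the range representable in `bits`. Illustrative sketch only."""
    if x == 0:
        return 0.0
    if x < 0 and not keep_negative:
        return 0.0  # relu_po2 variant drops negative values (assumption)
    sign = -1.0 if x < 0 else 1.0
    exp = round(math.log2(abs(x)))
    # assume one sign bit, with the remaining bits encoding the exponent
    exp_bits = bits - 1 if keep_negative else bits
    max_exp = 2 ** (exp_bits - 1) - 1
    min_exp = -(2 ** (exp_bits - 1))
    exp = max(min_exp, min(max_exp, exp))
    return sign * 2.0 ** exp
```

For example, `po2_quantize(0.3)` snaps to 0.25 (2^-2) and `po2_quantize(3.0)` snaps to 4.0 (2^2).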

Steps to Reproduce

  1. My QBatchNormalization layer is defined as follows:

     x = QBatchNormalization(
             gamma_quantizer="quantized_relu_po2(bits=8)",
             beta_quantizer="quantized_po2(bits=8)",
             mean_quantizer="quantized_po2(bits=8)",
             variance_quantizer="quantized_relu_po2(bits=8)",
             axis=bn_axis, epsilon=1.001e-5, name="conv1_0_bn")(x)
  2. I convert only this one layer at a time via hls4ml.

Expected behavior

I expect hls4ml to recognize all the quantizers and convert the layer successfully.

Actual behavior

hls4ml reports:

Layer name: input_5, layer type: InputLayer, input shapes: [[None, 160, 160, 64]], output shape: [None, 160, 160, 64]
Unsupported quantizer: quantized_po2
Unsupported quantizer: quantized_relu_po2
Unsupported quantizer: quantized_po2
Unsupported quantizer: quantized_relu_po2
Layer name: conv2_block1_1_bn, layer type: QBatchNormalization, input shapes: [[None, 160, 160, 64]], output shape: [None, 160, 160, 64]

Possible fix

I browsed the relevant files and believe the problem is at line 11 of hls4ml/converters/keras/qkeras.py, where hls4ml special-cases the QBatchNormalization layer:

if keras_layer['class_name'] == 'QBatchNormalization':

@MrFaith2001 MrFaith2001 changed the title About QBatchNormalization is not support QKeras About QBatchNormalization is not support QKeras po2 quantizer Dec 23, 2023
@jmitrevs
Contributor

I remember the model parsing sometimes giving extra warnings in the past due to some convoluted logic, while still producing working output, so I wanted to rule that out. Did you confirm that the produced code is actually not working properly? That may well be the case, but I wanted to ask.
