Problem during CNN synthesis #872

MagiPrince commented Sep 20, 2023

Prerequisites

Please make sure to check off these prerequisites before submitting a bug report.

  • [x] Test that the bug appears on the current version of the master branch. Make sure to include the commit hash of the commit you checked out.
  • [x] Check that the issue hasn't already been reported, by checking the currently open issues.
  • [x] If there are steps to reproduce the problem, make sure to write them down below.
  • If relevant, please include the hls4ml project files, which were created directly before and/or after the bug.

Quick summary

Hi everyone,

I'm trying to build my quantized CNN model, but when I set the number of filters to 32 for some QConv2D layers, the Concatenate layer triggers an error and the pre-synthesis fails.

I figured out that it was the Concatenate layer that triggers the error, since I read and write the same variable "concatenated_outputs" inside the loop. Is there an easy way to fix that inside the loop, or should I unroll the loop in my code (see the sketch after the model code below)?

Details

My model:

from tensorflow.keras.layers import Input, Flatten, Concatenate, Reshape
from tensorflow.keras.models import Model
from qkeras import QDense, QActivation, quantized_bits, quantized_relu

input_ = Input(shape=(64, 64, 3))

x = Flatten()(input_)

output_1 = QDense(2, kernel_quantizer=quantized_bits(16, 0, alpha=1),
                  bias_quantizer=quantized_bits(16, 0, alpha=1),
                  kernel_initializer='lecun_uniform', use_bias=True)(x)
concatenated_outputs = QActivation(quantized_relu(16, 6))(output_1)
for _ in range(5):
    output_tmp = QDense(2, kernel_quantizer=quantized_bits(16, 0, alpha=1),
                        bias_quantizer=quantized_bits(16, 0, alpha=1),
                        kernel_initializer='lecun_uniform', use_bias=True)(x)
    output = QActivation(quantized_relu(16, 6))(output_tmp)
    concatenated_outputs = Concatenate(axis=-1)([concatenated_outputs, output])

reshaped_outputs = Reshape((5, 2))(concatenated_outputs)

model = Model(inputs=input_, outputs=reshaped_outputs)
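What I mean by unrolling the loop would look roughly like the sketch below. This is illustrative only: the branch helper and the out_* names are mine, it reuses the imports and x from the model above, and it merges all branches in a single Concatenate instead of rebinding concatenated_outputs five times with pairwise Concatenate layers.

# Sketch only: assumes the imports and `x` defined in the model above.
# `branch` and the `out_*` names are illustrative, not part of my original code.
def branch(t):
    y = QDense(2, kernel_quantizer=quantized_bits(16, 0, alpha=1),
               bias_quantizer=quantized_bits(16, 0, alpha=1),
               kernel_initializer='lecun_uniform', use_bias=True)(t)
    return QActivation(quantized_relu(16, 6))(y)

out_1 = branch(x)
out_2 = branch(x)
out_3 = branch(x)
out_4 = branch(x)
out_5 = branch(x)
out_6 = branch(x)

# One Concatenate over all branches, so each intermediate tensor keeps its own name.
concatenated_outputs = Concatenate(axis=-1)([out_1, out_2, out_3, out_4, out_5, out_6])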

My hls4ml script:

import hls4ml

config = hls4ml.utils.config_from_keras_model(model, granularity="name")

config['Model']['Precision'] = 'ap_fixed<16,6>'
config['Model']['ReuseFactor'] = 4000
config['Model']['Strategy'] = 'Resource'

cfg = hls4ml.converters.create_config(backend='Vitis')
cfg['IOType'] = 'io_stream'
cfg['HLSConfig'] = config
cfg['KerasModel'] = model
cfg['OutputDir'] = 'model_1/'
cfg['Part'] = 'xc7z030sbv485-3'

hls_model = hls4ml.converters.keras_to_hls(cfg)

hls_model.compile()

hls_model.build(csim=False, synth=True)

Error returned:

ERROR: [HLS 214-256] in function 'myproject(hls::stream<nnet::array<ap_fixed<16, 6, (ap_q_mode)5, (ap_o_mode)3, 0>, 3u>, 0>&, hls::stream<nnet::array<ap_fixed<16, 6, (ap_q_mode)5, (ap_o_mode)3, 0>, 2u>, 0>&)': Unsupported aggregate pragma/directive on variable 'layer80_cpy1' as the bit-width after aggregation (8192) is larger than 4096 (firmware/myproject.cpp:374:28)
ERROR: [HLS 214-256] in function 'myproject(hls::stream<nnet::array<ap_fixed<16, 6, (ap_q_mode)5, (ap_o_mode)3, 0>, 3u>, 0>&, hls::stream<nnet::array<ap_fixed<16, 6, (ap_q_mode)5, (ap_o_mode)3, 0>, 2u>, 0>&)': Unsupported aggregate pragma/directive on variable 'layer80_cpy2' as the bit-width after aggregation (8192) is larger than 4096 (firmware/myproject.cpp:376:25)
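For context (my reading of the numbers in the message, not something reported elsewhere): with ap_fixed<16,6> elements, the 8192-bit aggregated stream word corresponds to 512 elements, i.e. 512 × 16 bits = 8192 bits, which exceeds the 4096-bit limit that Vitis HLS quotes for the aggregate directive.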

Steps to Reproduce

Add what needs to be done to reproduce the bug. Add commented code examples and make sure to include the original model files / code, and the commit hash you are working on.

  1. Use the latest hls4ml version (master branch) and Vitis HLS 2023.1
  2. Run the hls4ml script above

Expected behavior

The model synthesizes successfully.

Actual behavior

Pre-synthesis fails with the errors shown above.

MagiPrince added the bug label on Sep 20, 2023